PEARL XI : DevOps Originated from Enterprise Systems Management and Agile S/W Methodology

PEARL XI : DEVOPS (portmanteau of development and operations) is a software development lifecycle approach that stresses communication, collaboration and integration between software developers and information technology (IT) operations professionals. Many of the ideas (and people) involved in DevOps originated from the Enterprise Systems Management and Agile software development movements.

DevOps - Continuous Value

IT now powers most businesses. The central role that IT plays translates into huge demands on the IT staff to develop and deploy new applications and services at an accelerated pace. To meet this demand, many software development organizations are applying Lean principles through such approaches as Agile software development. Influenced heavily by Lean methodology, Agile methodology is based on frequent, customer-focused releases and strives to eliminate all steps that don’t add value for the customer. Using Agile methodology, development teams are able to shrink development cycles dramatically and increase application quality.
Unfortunately, the increasing number of software releases, growing complexity, shrinking deployment time frames, and limited budgets are presenting the operations staff with unprecedented challenges. Operations can begin to address these challenges by learning from software developers and adopting Lean methodology. That requires re-evaluating current processes, ferreting out sources of waste, and automating wherever possible.

According to the Lean Enterprise Institute, “The core idea [of Lean] is to maximize customer value while  minimizing waste. Simply, Lean means creating  more value for customers with fewer resources.”

This involves a five-step process for guiding the implementation of Lean techniques:
1. Specify value from the standpoint of the end customer.
2. Identify all the steps in the value stream, eliminating whenever possible those steps that do not create value.
3. Make the value-creating steps occur in tight sequence so the product will flow smoothly toward the customer.
4. As flow is introduced, let customer demand determine the time to market.
5. As value is specified, value streams are identified, wasted steps are removed, and customer-demand centric flow is established, begin the process again and continue it until a state of perfection is reached in which perfect value is created with no waste.
Clearly, Lean is not a one-shot proposition. It’s an iterative process of continuous improvement.
Bridge the DevOps gap
There are obstacles to bringing Lean methodology to operations. One of the primary ones is the cultural difference between development and operations. Developers are usually driven to embrace the latest technologies and methodologies. Agile principles mean that they are aligning more closely with business requirements, and the business has an imperative to move quickly to stay competitive. Consequently, the development team is incentivized to move applications from concept to market as quickly as possible.
The culture of operations is typically cautious and deliberate. They are incentivized to maintain stability and business continuity. They are well aware of the consequences and high visibility of problems, such as performance slowdowns and outages, caused by improperly handled releases.
As a result, there is a natural clash between the business-driven need for speed on the development side and the conservative inertia on the operations side. Each group has different processes and ways of looking at things.
The result is often called the DevOps gap. The DevOps movement has arisen out of the need to address this disconnect. DevOps is an approach that looks to bring the benefits of Agile and Lean methodologies into operations, reducing the barriers to delivering more value for the customer and aligning with the business. It stresses the importance of communication, collaboration, and integration between the two groups, and even combining responsibilities.

Today, operations teams find themselves at a critical decision point. They can adopt the spirit of DevOps and strive to close the gap. That requires working more closely with development. It means getting involved earlier in the development cycle instead of waiting for new applications and services to “come over the fence.” And conversely, developers will need to be more involved in application support. The best way to facilitate this change is by following the development team’s lead in adopting Lean methodology by reducing waste and focusing on customer value.
On the other hand, not closing the gap can have serious repercussions for operations. In frustration, developers may bypass operations entirely and go right to the cloud. This is already occurring in some companies.

Another challenge that operations teams face is in how to take the new intellectual property that the development organizations have built for the business and get it out to customers as quickly as possible, with the least number of errors and at the lowest cost. That requires creating a release process that is fast, efficient, and repeatable. That’s where Lean methodology provides the most value.

DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and information technology (IT) operations professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services.

A DevOps approach applies agile and lean thinking principles to all stakeholders in an organization who develop, operate, or benefit from the business’s software systems, including customers, suppliers, and partners. By extending lean principles across the entire software supply chain, DevOps capabilities will improve productivity through accelerated customer feedback cycles, unified measurements and collaboration across an enterprise, and reduced overhead, duplication, and rework.

Companies with very frequent releases may require a DevOps awareness or orientation program. Flickr developed a DevOps approach to support a business requirement of ten deployments per day; this daily deployment cycle would be much higher at organizations producing multi-focus or multi-function applications. This is referred to as continuous deployment or continuous delivery  and is frequently associated with the lean startup methodology. Working groups, professional associations and blogs have formed on the topic since 2009.

DevOps aids in software application release management for a company by standardizing development environments. Events can be tracked more easily, and documented process-control and granular-reporting issues can be resolved. Companies with release/deployment automation problems usually have existing automation but want to manage and drive this automation more flexibly, without needing to enter everything manually at the command line. Ideally, this automation can be invoked by non-operations resources in specific non-production environments. Developers are given more control over the environments, and infrastructure teams gain a more application-centric understanding.

Simple processes become clearly articulated using a DevOps approach. The goal is to maximize the predictability, efficiency, security and maintainability of operational processes. This objective is very often supported by automation.

DevOps integration targets product delivery, quality testing, feature development, and maintenance releases in order to improve reliability and security and to provide faster development and deployment cycles. Many of the ideas (and people) involved in DevOps came from the Enterprise Systems Management and Agile software development movements.

The focus of Lean is on delivering value to the customer and doing so as quickly and efficiently as possible. It is flow oriented rather than batch oriented. Its purpose is to smooth the flow of the value stream and make it customer centric.

DevOps incorporates lean thinking and agile methodology as follows:

  • Eliminate any activity that is not necessary for learning what  customers want. This emphasizes fast, continuous iterations  and customer insight with a feedback loop.
  • Eliminate wait times and delays caused by manual processes  and reliance on tribal knowledge.
  • Enable knowledge workers, business analysts, developers, testers, and other domain experts to focus on creative activities (not procedural activities) that help sustain innovation, and avoid expensive and dangerous organization and technology “resets.”
  • Optimize risk management by steering with meaningful  delivery analytics that illuminate validated learning by  reducing uncertainty in ways that can be measured.

The first step for operations in adopting Lean methodology is to understand the big picture. That means not only developing an understanding of the end-to-end release process but also understanding the release process within the overall context of the DevOps plan, build, and run cycle. In this cycle, development plans a new application based on the requirements of the business, builds the application, and then releases it to operations. Operations then assumes responsibility for running the application.
In examining processes, therefore, operations should not only look at the release process itself but also at the process before the release to determine where opportunities lie for closer cooperation between the two groups. For example, operations may see a way for development to improve the staging process for operational production of an application.
Release process management (RPM) solutions are available that enable IT to map out and document the entire application lifecycle process, end to end, from planning through release to retirement. These solutions provide a collaboration platform that can bring operations and development closer together and provide that “big picture” visibility so vital to Lean. They also enable operations to consolidate release processes that are fragmented across spreadsheets, hand-written notes, and various other places.
In examining the release process itself, operations should look for areas to tighten the flow and eliminate unnecessary tasks. The operations group in one company, for example, examined the release process and found that it was re-provisioning the same servers three times when it was only necessary to do so once.
Anything that doesn’t directly contribute to customer value (like unnecessary meetings, approvals, and communication) should be considered for elimination.
Automate for consistency and speed
Manual procedures are major contributors to waste. For example, an existing release process may call for a database administrator (DBA) to update a particular database manually. This manual effort is inefficient and susceptible to errors. It’s also unlikely to be done in a consistent fashion: If there are several DBAs, each one may build a database differently.
Automation eliminates waste as well as a major source of errors. Automation ensures that processes are repeatable and consistently applied, while also ensuring frictionless compliance with corporate policies and external regulations. Deployment automation and configuration management tools can help by automating a wide variety of processes based on best practices. For Lean methodology to really work, processes must be predictable and consistent. That means that simple automation is not enough. The delivery of the entire software stack should be automated. This means that all environment builds — whether in pre- or post-production — should be completely automated. Also, the software deployment process must be completely automated, including code, content, configurations, and whatever else is required.
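The full-stack automation described above can be illustrated with a small sketch of idempotent, desired-state deployment in plain Python. All names here (DESIRED_STATE, apply_step, deploy) are invented for illustration; a real pipeline would drive a deployment or configuration-management tool rather than a Python dict.

```python
# Desired state for one environment: code, configurations, and content.
# Every run converges the environment to this state, so repeated
# deployments are consistent instead of depending on who performs them.
DESIRED_STATE = {
    "code": "app-2.4.1",
    "config": {"db_pool_size": 20, "cache_ttl_s": 300},
    "content": ["static-assets-77"],
}

def apply_step(component, desired, current):
    """Apply a change only when the current state differs (idempotence)."""
    if current.get(component) == desired:
        return f"{component}: already converged, no action"
    current[component] = desired
    return f"{component}: updated"

def deploy(current_state):
    # The same automated sequence runs in every environment,
    # pre- and post-production alike.
    return [apply_step(c, d, current_state) for c, d in DESIRED_STATE.items()]

first_run = deploy({})                    # everything is applied
second_run = deploy(dict(DESIRED_STATE))  # already converged, nothing to do
```

Because each step is a no-op when the environment already matches the desired state, the same push-button deployment can be re-run safely in any environment, which is what makes the process predictable and consistent.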

Automate manual and overhead activities (enabling continuous delivery) such as change propagation and orchestration,  traceability, measurement, progress reporting, etc.
By automating the whole software stack, it becomes much easier to ensure compliance with operations and security. This can save vast amounts of time usually wasted waiting on security approval for new application deployments.

It is preferable to automate time-consuming operational policies such as initiating the required change request approvals, configuring performance monitoring, and so on. Mundane manual tasks like these create the most waste.
Before diving into automation, however, it’s essential for operations to map out and fully understand the end-to-end release process. When you use a release process management (RPM) platform to drive the end-to-end process, the team can review the process holistically to uncover sources of waste and determine where to apply automation tools to best streamline the process, eliminate waste, and accelerate delivery.
Measure success and continually improve
Lean is an iterative approach to continuous improvement, and iteration necessitates feedback.


Consequently, operations must establish a means of tracking the impact of adopting Lean methodology. In establishing the feedback metrics, keep in mind that the primary purpose of Lean methodology is not just to smooth and accelerate the release cycle; it’s also to create more value for customers and do it with fewer resources.
Consequently, operations should measure not only the increase in speed of releases but also the impact of the releases on cost and on customer value. For example, did the release result in a spike in the number of service desk incidents? This would not only increase support costs but also would degrade the customer experience. Or did the lack of capacity planning result in over-taxed infrastructure and degrade end-user performance? Here, it’s important to monitor application performance and availability from the customer’s perspective. Customers are not interested in the performance metrics of the individual IT infrastructure components that support a service. They care about the overall user experience. In particular, how quickly did they complete their transactions end to end?
Application Performance Management (APM) solutions can track and report on a wide variety of metrics, including customer experience. These metrics provide valuable feedback to both the operations and development teams in measuring the impact of Lean implementation and identifying areas that require further attention. With these solutions in place, operations can operate in a mode of continuous improvement.
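As a concrete example of the kind of customer-centric metric APM tools report, the sketch below computes an Apdex score from end-to-end transaction times. The sample times and the 0.5-second threshold are illustrative assumptions.

```python
def apdex(response_times_s, threshold_s=0.5):
    """Apdex score = (satisfied + tolerating / 2) / total.

    satisfied:  time <= T           (threshold)
    tolerating: T < time <= 4T
    frustrated: time > 4T
    """
    satisfied = sum(1 for t in response_times_s if t <= threshold_s)
    tolerating = sum(1 for t in response_times_s
                     if threshold_s < t <= 4 * threshold_s)
    return (satisfied + tolerating / 2) / len(response_times_s)

# Hypothetical end-to-end transaction times (seconds) after a release
times = [0.2, 0.3, 0.4, 0.6, 1.1, 2.5]
score = apdex(times)  # 3 satisfied, 2 tolerating, 1 frustrated: (3 + 1) / 6
```

Tracking a score like this before and after each release gives operations a customer-facing feedback signal rather than a component-level one.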

Use meaningful measurement and monitoring of progress (enabling continuous optimization) for improved visibility across the organization, including the software value delivery supply chain.

IBM DevOps Platform

IBM provides an open, standards-based DevOps platform that supports a continuous innovation, feedback and improvement lifecycle, enabling a business to plan, track, manage, and automate all aspects of continuously delivering business ideas. At the same time, the business is able to manage both existing and new workloads in enterprise-class systems and open the door to innovation with cloud and mobile solutions. This capability includes an iterative set of quality checks and verification phases that each product or piece of application code must pass before release to customers. The IBM solution provides a continuous feedback loop for all aspects of the delivery process (e.g., customer experience and sentiments, quality metrics, service level agreements, and environment data) and enables continuous testing of ideas and capabilities with end users in a customer facing environment.
IBM’s DevOps solution consists of an open, standards-based platform, DevOps Foundation services, with end-to-end DevOps lifecycle capabilities. To accommodate varying levels of maturity within an IT team’s delivery processes, the solution is organized into four adoption paths: plan and measure, develop and test, release and deploy, and monitor and optimize.

Plan and measure: This adoption path consists of one major practice:
Continuous business planning: Continuous business planning employs lean principles to start small by identifying the outcomes and resources needed to test the business vision/value; to adapt and adjust continually; to measure actual progress and learn what customers really want; and to shift direction with agility and update the plan.

Develop and test: This adoption path consists of two major practices:
Collaborative development: Collaborative development enables collaboration between business, development, and QA organizations—including contractors and vendors in outsourced projects spread across time zones—to deliver innovative, quality software continuously. This includes support for polyglot programming and multiplatform development, elaboration of ideas, and creation of user stories complete with cross-team change and lifecycle management.
Collaborative development includes the practice of continuous integration, which promotes frequent team integrations and automatic builds. By integrating the system more frequently, integration issues are identified earlier when they are easier to fix, and the overall integration effort is reduced via continuous feedback as the project shows constant and demonstrable progress.
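The continuous integration practice described above can be sketched as a small driver that runs the build and test stages in order and stops at the first failure, so integration issues surface while they are still cheap to fix. The stage commands here are trivial placeholders standing in for real build and test tools.

```python
import subprocess
import sys

# Hypothetical pipeline stages; real ones would invoke build/test tools.
STAGES = [
    ("compile", [sys.executable, "-c", "print('build ok')"]),
    ("unit tests", [sys.executable, "-c", "import sys; sys.exit(0)"]),
]

def run_pipeline(stages):
    """Run each stage command in order; stop at the first failure."""
    results = []
    for name, cmd in stages:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        passed = proc.returncode == 0
        results.append((name, passed))
        if not passed:  # fail fast: later stages are not run
            break
    return results

results = run_pipeline(STAGES)
```

Triggering this on every team integration (rather than nightly or weekly) is what keeps the integration effort small and progress demonstrable.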
Continuous testing: Continuous testing reduces the cost of testing while helping development teams balance quality and speed. It eliminates testing bottlenecks through virtualized dependent services, and it simplifies the creation of virtualized test environments that can be easily deployed, shared, and updated as systems change. These capabilities reduce the cost of provisioning and maintaining test environments and shorten test cycle times by allowing integration testing earlier in the lifecycle.
Release and deploy: This adoption path consists of one major practice:
Continuous release and deployment: Continuous release and deployment provides a continuous delivery pipeline that automates deployments to test and production environments. It reduces the amount of manual labor, resource wait-time, and rework by means of push-button deployments that allow higher frequency of releases, reduced errors, and end-to-end transparency for compliance.
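A push-button pipeline of this kind can be sketched as a promotion loop that moves one build through successive environments, gates each promotion on an automated check, and records an audit trail for compliance. The environment names and the smoke-test check are illustrative assumptions.

```python
ENVIRONMENTS = ["integration", "staging", "production"]

def promote(build_id, smoke_test):
    """Deploy build_id through each environment; halt on a failed check."""
    audit_trail = []  # end-to-end transparency for compliance
    for env in ENVIRONMENTS:
        audit_trail.append(f"deployed {build_id} to {env}")
        if not smoke_test(env):  # automated gate, no manual step
            audit_trail.append(f"smoke tests failed in {env}; halted")
            break
    return audit_trail

# A passing build reaches production; a failing one is stopped early.
trail = promote("build-1042", lambda env: True)
```

Because the same routine deploys every environment, a release is one button-press rather than a sequence of hand-offs, and the audit trail documents exactly what happened.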
Monitor and optimize: This adoption path consists of two major practices:
Continuous monitoring: Continuous monitoring offers enterprise-class, easy-to-use reporting that helps developers and testers understand the performance and availability of their application, even before it is deployed to production. The early feedback provided by continuous monitoring is vital for lowering the cost of errors and change, and for steering projects toward successful completion.

Continuous customer feedback and optimization:
Continuous customer feedback provides the visual evidence and full context for analyzing customer behavior and pinpointing customer pain points. Feedback can be applied during both pre- and post-production phases to maximize the value of every customer visit and ensure that more transactions are completed successfully. This allows immediate visibility into the sources of customer struggles that affect their behavior and impact business.

Benefits of the IBM DevOps solution

By adopting this solution to address needs, organizations can  unlock new business opportunities:

  • Deliver a differentiated and engaging customer experience  that builds customer loyalty and increases market share by  continuously obtaining and responding to customer feedback
  • Obtain fast-mover advantage to capture markets with quicker time to value based on software-based innovation, with  improved predictability and success
  • Increase capacity to innovate by reducing waste and rework  in order to shift resources to higher value activities

Keep up with the future
By adopting Lean methodology, operations teams can catch up with and even get ahead of the large and rapidly increasing amount of new and updated services flowing from Agile-accelerated development teams. And they can do so without increasing costs or jeopardizing stability and business continuity.
In so doing, operations can help increase customer value, which has a direct effect on revenue, competitiveness, and the brand. Moreover, the operations team will have the metrics to demonstrate its contribution to the business. That enables the team to transform its image in the organization from software-release speed barrier to high-velocity enabler.

Traditional approaches to software development and delivery are no longer sufficient. Manual processes are error-prone, they break down, and they create waste and delayed responses.
Businesses can’t afford to focus on cost while neglecting speed of delivery, or choose speed over managing risk. A DevOps  approach offers a powerful solution to these challenges.
DevOps reduces time to customer feedback, increases quality, reduces risk and cost, and unifies process, culture, and tools across the end-to-end lifecycle—which includes adoption paths to plan and measure, develop and test, release and deploy, and monitor and optimize.


PEARL XIX : Effective Steps to reduce technical debt: An agile approach


In every codebase, there are dark corners and alleys that developers fear: code that’s impossibly brittle; code that bites back with regression bugs; code that, when you attempt to follow it, will drive you beyond chaos.

Ward Cunningham created a beautiful metaphor for the hard-to-change, error-prone parts of code when he likened it to financial debt. Technical debt prevents you from moving forward, from profiting, from staying “in the black.” As in the real world, there’s cheap debt, debt with an interest lower than you can make in a low-risk financial instrument. Then there’s the expensive stuff, the high-interest credit card fees that pile on even more debt.

The impact of accumulated technical debt can be decreased efficiency, increased cost, and extended delays in the maintenance of existing systems. This can directly jeopardize operations, undermining the stability and reliability of the business over time. It can also stymie the ability to innovate and grow.

DB Systel, a subsidiary of Deutsche Bahn, is one of Germany’s leading information technology and communications providers, running approximately 500 high-availability business systems for its customers. In order to keep this complex environment—a mix of packaged and in-house–developed systems that range from mainframe to mobile—running efficiently while continuing to address the needs of its customers, DB Systel decided to embed processes and tools within its development and maintenance activities to actively address its technical debt.

DB Systel’s software developers have employed new tools during development so they can detect and correct errors more efficiently. Using a software analysis and measurement platform from CAST, DB Systel has been able to uncover architectural hot spots and transactions in its core systems that carry significant structural risk. DB Systel is now better able to track the nonfunctional quality characteristics of its systems and precisely measure changes in architecture- and code-level technical debt within these applications to prioritize the areas with highest impact.

By implementing this strategy at the architecture level, DB Systel has seen a reduction in time spent on error detection and an increased focus on leading-practice development techniques. The company also noticed a rise in employees’ intrinsic motivation as a result of using CAST. With an effective technical debt management process in place, DB Systel is mitigating the possibility of software deterioration while also enriching application quality.

Technical debt is a drag. It can kill productivity, making maintenance annoying, difficult, or, in some cases, impossible. Beyond the obvious economic downside, there’s a real psychological cost to technical debt. No developer enjoys sitting down to his computer in the morning knowing he’s about to face impossibly brittle, complicated source code. The frustration and helplessness thus engendered is often a root cause of more systemic problems, such as developer turnover— just one of the real economic costs of technical debt.

However, the consequences of failing to identify and measure technical debt can be significant. An application with a lot of technical debt may not be able to fulfill its business purpose and may never reach production. Or technical debt may require weeks or months of remedial refactoring before the application emerges into production. At best, it could reach production, but be limited in its ability to meet users’ needs.

Every codebase contains some measure of technical debt. One class of debt is fairly harmless: byzantine dependencies among bizarrely named types in stable, rarely modified recesses of the system. Another is sloppy code that is easily fixed on the spot but often ignored in the rush to address higher-priority problems. There are many more examples.

This section outlines a general workflow and several tactics for dealing with high-interest debt.

In order to fix technical debt, the team needs to cultivate buy-in from stakeholders and teammates alike. To do this, they need to start thinking systemically. Systems thinking is long-range thinking. It is investment thinking. It’s the idea that effort you put in today will let you progress at a predictable and sustained pace in the future.

Technical debt (also known as design debt or code debt) is a neologism metaphor referring to the eventual consequences of poor software architecture and software development within a code-base. The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy.

As a change is started on a codebase, there is often a need to make other coordinated changes at the same time in other parts of the codebase or documentation. The other required but uncompleted changes are considered debt that must be paid at some point in the future. Just like financial debt, these uncompleted changes incur interest on top of interest, making it cumbersome to build a project. Although the term is used primarily in software development, it can also be applied to other professions.

Common causes of technical debt include (a combination of):

  • Business pressures, where the business considers getting something released sooner before all of the necessary changes are complete, builds up technical debt comprising those uncompleted changes
  • Lack of process or understanding, where businesses are blind to the concept of technical debt, and make decisions without considering the implications
  • Lack of loosely coupled components, where functions are hard-coded; when business needs change, the software is inflexible.
  • Lack of test suite, which encourages quick and risky band-aids to fix bugs.
  • Lack of documentation, where code is created without necessary supporting documentation. That work to create the supporting documentation represents a debt that must be paid.
  • Lack of collaboration, where knowledge isn’t shared around the organization and business efficiency suffers, or junior developers are not properly mentored
  • Parallel development at the same time on two or more branches can cause the build up of technical debt because of the work that will eventually be required to merge the changes into a single source base. The more changes that are done in isolation, the more debt that is piled up.
  • Delayed refactoring – As the requirements for a project evolve, it may become clear that parts of the code have become unwieldy and must be refactored in order to support future requirements. The longer that refactoring is delayed, and the more code is written to use the current form, the more debt that piles up that must be paid at the time the refactoring is finally done.
  • Lack of knowledge, when the developer simply doesn’t know how to write elegant code.

“Interest payments” come both from the necessary local maintenance and from the absence of maintenance by other users of the project. Ongoing development in the upstream project can increase the cost of “paying off the debt” in the future. One pays off the debt by simply completing the uncompleted work.

The build up of technical debt is a major cause for projects to miss deadlines. It is difficult to estimate exactly how much work is necessary to pay off the debt. For each change that is initiated, an uncertain amount of uncompleted work is committed to the project. The deadline is missed when the project realizes that there is more uncompleted work (debt) than there is time to complete it in. To have predictable release schedules, a development team should limit the amount of work in progress in order to keep the amount of uncompleted work (or debt) small at all times.

“As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.”
— Meir Manny Lehman, 1980
While Manny Lehman’s Law already indicated that evolving programs continually add to their complexity and deteriorating structure unless work is done to maintain it, Ward Cunningham first drew the comparison between technical complexity and debt in a 1992 experience report:

“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.”
— Ward Cunningham, 1992
In his 2004 text, Refactoring to Patterns, Joshua Kerievsky presents a comparable argument concerning the costs associated with architectural negligence, which he describes as “design debt”.

“…doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.”

–Martin Fowler

“Technical Debt” refers to delayed technical work that is incurred when technical shortcuts are taken, usually in pursuit of calendar-driven software schedules. Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. The ability to take on debt safely, track debt, manage debt, and pay down debt varies among different organizations. Explicit decision making before taking on debt and more explicit tracking of debt are advised.

–Steve McConnell

Activities that might be postponed include documentation, writing tests, attending to TODO comments and tackling compiler and static code analysis warnings. Other instances of technical debt include knowledge that isn’t shared around the organization and code that is too confusing to be modified easily.

In open source software, postponing sending local changes to the upstream project is a form of technical debt.

The basic workflow for tackling technical debt—indeed any kind of improvement—is repeatable. Essentially, there are four steps:

  1. Identify the debt. How much is each debt item affecting the company’s bottom line and the team’s productivity?
  2. Build a business case and forge a consensus on priority with those affected by the debt, both team and stakeholders.
  3. Fix the chosen debt item head on, with proven tactics.
  4. Repeat. Go back to step 1 to identify additional debt, and hold the line on the improvements made.
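The identify-and-prioritize loop above can be sketched as a simple data structure. The field names and the interest-to-principal ranking below are illustrative assumptions, not a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    principal: int  # estimated one-time effort (e.g. hours) to fix
    interest: int   # recurring effort lost per iteration while unfixed

def prioritize(items):
    """Rank debt items by interest-to-principal ratio: cheap fixes
    that remove large recurring costs come first."""
    return sorted(items, key=lambda i: i.interest / i.principal, reverse=True)

backlog = [
    DebtItem("untested payment module", principal=40, interest=10),
    DebtItem("duplicated config parsing", principal=8, interest=6),
    DebtItem("stale build scripts", principal=20, interest=2),
]
for item in prioritize(backlog):
    print(item.name)
```

The ratio is only one possible business case; a real team would weigh risk and upcoming roadmap work as well.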

Agile Approach to Technical Debt

Involve the Product Owner and “promote” them to be the sponsor of technical debt reduction.

Sometimes it’s hard to find debt, especially if a team is new to a codebase. In cases where there’s no collective memory or oral tradition to draw on, the team can use a static analysis tool such as NDepend (ndepend.com) to probe the code for the most troublesome spots.

Determining test coverage can be another valuable tool for discovering hidden debt.

Use the log feature of the version control system to generate a report of changes over the last month or two. Find the parts of the system that receive the most activity, changes or additions, and scrutinize them for technical debt. This helps find the bottlenecks that are challenging today; there is very little value in fixing debt in parts of the system that rarely change.
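As a sketch of this churn analysis, the following counts file-change frequency from the output of `git log --name-only --pretty=format:` (the exact git invocation and the sample log are assumptions for illustration):

```python
from collections import Counter

def churn_from_log(log_text):
    """Count how often each file appears in `git log --name-only` output;
    in this format, non-blank lines are changed file paths."""
    counts = Counter()
    for line in log_text.splitlines():
        line = line.strip()
        if line:
            counts[line] += 1
    return counts

sample = """
src/billing.py
src/billing.py
src/util.py

src/billing.py
"""
hotspots = churn_from_log(sample).most_common(2)
print(hotspots)  # the most frequently changed files are where debt hurts most
```

In practice the log text would come from `subprocess.run(["git", "log", ...])` against the real repository.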

Inventory and structure known technical debt

Having convinced the product owner, it is time to collect and inventory known technical problems and map them onto a structure that visualizes the system and project landscape.

It is not about completely understanding all topics. It is about finding a proper structure, identifying the most important issues and mapping them onto that structure. It is about extracting knowledge about the systems from people’s heads to develop a common picture of existing technical problems.

To do this, write the names / identifiers of all applications and modules the team owns on cards. Pin these cards to a whiteboard. In the next step, extract to-dos (to solve existing problems) from all documentation media in use (wiki, Jira, Confluence, code documentation, paper), write them on post-its and stick them next to the application they belong to. This board should be accessible to all team members over a period of several days. Every team member is responsible for completing, restructuring and correcting the board during this period, so that the team ends up with a well-rounded portfolio of the existing debt in its systems.

Having collected and understood the work needed to reduce the technical debt in the systems, the team now needs a baseline for defining a good strategy, a repayment plan. Therefore, costs and benefits should be estimated.

Obtaining consensus is key. We want the majority of team members to support the selected improvement initiative. Luke Hohmann’s “Buy a Feature” approach from his book Innovation Games (innovationgames.com) can help build that consensus.

  1. Generate a short list (5-9 items) of things you want to improve. Ideally these items are in your short-term path.
  2. Qualify the items in terms of difficulty. You can use the abstract notion of a T-shirt size: small, medium, large or extra-large.
  3. Give your features a price based on their size. For example, small items may cost $50, medium items $100, and so on.
  4. Give everyone a certain amount of money. The key here is to introduce scarcity into the game. You want people to have to pool their money to buy the features they’re interested in. You want to price, say, medium features at a cost where no one individual can buy them. It’s valuable to find where more than a single individual sees the priority since you’re trying to build consensus.
  5. Run a short game, perhaps 20 or 30 minutes in length, where people can discuss, collude, and pitch their case. This can be quite chaotic and also quite fun, and you’ll see where the seats of influence are in your team.
  6. Review the items that were bought and by what margins they were bought. You can choose to rank your list by the purchased features or, better yet, use the results of the Buy a Feature game in combination with other techniques, such as an awareness of the next release plan.
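Tallying the results of such a game is simple arithmetic. The prices, players and bids below are invented for illustration:

```python
def tally(prices, bids):
    """Pool each player's bids per feature and report which features
    were fully funded (pooled bids >= price)."""
    pooled = {feature: 0 for feature in prices}
    for player_bids in bids.values():
        for feature, amount in player_bids.items():
            pooled[feature] += amount
    return {f: (pooled[f], pooled[f] >= prices[f]) for f in prices}

prices = {"refactor billing": 100, "add build automation": 50}
bids = {
    "alice": {"refactor billing": 60},
    "bob": {"refactor billing": 40, "add build automation": 20},
}
print(tally(prices, bids))
# "refactor billing" is funded only because Alice and Bob pooled their money,
# which is exactly the consensus signal the game is designed to surface
```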

Taking on some judicious technical debt can be an appropriate decision to meet schedules or to prototype a new feature set, as long as the decision was made with a clear understanding of the costs involved later in the project, such as code refactoring.

As Martin Fowler says, “The useful distinction isn’t between debt or non-debt, but between prudent and reckless debt.”

Technical debt actually begets more technical debt over time, as depicted in the state diagram (techdebt-state.png).

Load Testing as a Practice to identify Technical Debt

Load testing exposes weaknesses in an application that cannot be found through traditional functional testing. Those weaknesses are generally reflected in the application’s inability to scale appropriately. Testers are also typically already planning to perform load testing at some point prior to the production release of the application.

Load testing involves enabling virtual users to execute predetermined actions simultaneously against the application. The  scripts exercise features either singly or in sequences expected  to be common among production users.

Load testing looks at the characteristics of an application under a simulated load, similar to the way it might operate in a production environment. At the highest level, it determines if an application will support the number of simultaneous users specified in the project requirements.

However, it does more than that. By looking at system characteristics as you increase the number of simultaneous users, you  can make some useful statements regarding what resources  are being stressed, and where in the application they are being stressed. With this information, the team can identify weaknesses in the application that are generally the result of incurring  technical debt, therefore providing the basis for identifying the  debt.

Some automation and measurement tools are required to successfully identify and assess technical debt with load testing.
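A toy sketch of the core idea: run concurrent “virtual users” against a handler and observe throughput and elapsed time as load grows. The handler below is a stand-in for a real endpoint, and the user counts are arbitrary; a real load test would use a dedicated tool against the deployed application:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handler(request_id):
    """Stand-in for an application endpoint with a fixed service time."""
    time.sleep(0.01)  # simulated work
    return "ok"

def run_load(num_users, requests_per_user=5):
    """Simulate num_users concurrent virtual users, each issuing several requests."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        results = list(pool.map(handler, range(num_users * requests_per_user)))
    elapsed = time.perf_counter() - start
    return len(results), elapsed

for users in (1, 5, 25):
    completed, elapsed = run_load(users)
    print(f"{users:>2} virtual users: {completed} requests in {elapsed:.2f}s")
```

Watching how elapsed time and resource usage change as the user count rises is what points to the stressed components, and hence to likely debt.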

Coding / Testing Practices

Management has to make the time through proactive investment, but so does the team. Each team member needs to invest in their own knowledge and education: how to write clean code, their business domain, and how to do their jobs optimally. While teams learn during the project through retrospectives, design reviews and pair programming, teams should also learn agile engineering practices for design, development and testing. Whether through courses, conferences, user groups, podcasts, web sites or books, there are many options for learning better coding practices to reduce technical debt.

Design Principles and Techniques

Additionally, architects need to learn about evolutionary design principles and refactoring techniques for fixing poor designs today and building better designs tomorrow. Lastly, a governance group should meet periodically to review performance and plan future system changes to further reduce technical debt.

Definition of Done

Establish a common “definition of done” for each requirement, user story or use case, and ensure it is validated with the business before development begins. A simple format such as “this story is done when: <list of criteria>” works well. The Product Owner presents “done” to the developers, user interface designers, testers and analysts, and together they collaboratively work out the finer implementation details. Set expectations with developers that only stories meeting “done” (as validated by the testers) will be accepted and contribute toward velocity. Similarly, set expectations with management and analysts that only stories that are “ready” are scheduled for development, to ensure poor requirements don’t cause further technical debt.

In all popular languages and platforms today, open source and commercial tools are available to automate builds, the continuous integration of code changes, unit testing, acceptance testing, deployments, database setup, performance testing and many other common manual activities. In addition to reducing manual effort, automation reduces the risk of mistakes and over-reliance on one individual for performing critical activities. First set up automated builds (Ant, NAnt or Rake), followed by continuous integration (Hudson). Next set up automated unit testing (JUnit, NUnit or RSpec) and acceptance testing (FitNesse and Selenium). Finally, set up automated deployments (Capistrano or custom shell scripts). It’s amazing what a few focused team members can accomplish in a relatively short period of time if given time to focus on automating common activities to reduce technical debt.

Consider rating and rewarding developers on the quality of their code. In some cases, a few skilled developers may be better than volumes of mediocre resources whose work may require downstream reversal of debt. Regularly run code complexity reviews and technical debt assessments, sharing the results across the team. Not only can specific examples help the team improve, but trends can signal that a project is headed in the wrong direction or encountering unexpected complexity.

PEARL XIII: Effective Visual Management with Information Radiators


Information Radiators

Information Radiators, also known as Big Visible Charts, are useful quite simply because they provide an effective way to communicate project status, issues, or metrics without a great deal of effort from the team. The premise is that these displays make critical, changing information about a project accessible to anyone with enough ambition to walk over to the team area and take a look.

“An Information radiator is a display posted in a place where people can see it as they work or walk by. It shows readers information they care about without having to ask anyone a question. This means more communication with fewer interruptions.”

An Information Radiator:

  • Is large and easily visible to the casual, interested observer
  • Is understood at a glance
  • Changes periodically, so that it is worth visiting
  • Is easily kept up to date

“Information Radiator” is a popular term invented by Alistair Cockburn that is used to describe any artifact that conveys project information and is publicly displayed in the workspace or surroundings. Information radiators are very popular in the Agile world, and they are an essential component of visual management. Most Agile teams recognize the value of information radiators and implement them to some degree in their processes.

Information radiators take on many different shapes and sizes. Traditional implementations range from hand-drawn posters of burndown charts to coloured sticky notes on a whiteboard.

Many development teams create elaborate information radiators using electronic devices like traffic lights or glowing orbs to indicate build status. Most teams, including many at Atlassian, roll their own electronic wallboards to pull real-time data from development tools for display on large monitors over the team workspace.

Visual control is a business management technique employed in many places where information is communicated by using visual signals instead of text or other written instructions. The design is deliberate in allowing quick recognition of the information being communicated, in order to increase efficiency and clarity. These signals can take many forms, from different coloured clothing for different teams, to focusing measures on the size of the problem rather than the size of the activity, to kanban, obeya and heijunka boxes, and many other diverse examples. In The Toyota Way, it is also known as mieruka.

The three most popular information radiators are

  • Task Boards,
  • Big Visible Charts (which includes burn downs and family) and
  • Continuous Integration build health indicators (including lava lamps and stolen street lights).

Task Boards

MockedTaskBoard


The most important information radiator in visual management is the Task Board. (In Scrum, Agilists sometimes refer to task boards as Scrum boards.) The task board has the mission of visually representing the work that is being done by the team. It is the most complex and versatile artifact: a physical task board is a “living” entity that has to be manually maintained. Task boards are undervalued by most agile teams today. This might be because there has not been a lot of focus on their potential, or perhaps there are simply not many examples around of what makes a great task board. In any case, it’s time to take task boards to the next level.

In its most basic form, a task board can be drawn on a whiteboard or even a section of wall. Using electrical tape or a dry erase pen, the board is divided into three columns labeled “To Do”, “In Progress” and “Done”. Sticky notes or index cards, one for each task the team is working on, are placed in the columns reflecting the current status of the tasks.

Many variants exist. Different layouts can be used, for instance rows instead of columns (although the latter is much more common). The number and headings of the columns can vary; additional columns are often used, for instance, to represent an activity such as “In Test”.

The task board is updated frequently, most commonly during the daily meeting, based on the team’s progress since the last update. The board is commonly “reset” at the beginning of each iteration to reflect the iteration plan.

Expected Benefits
  • The task board is an “information radiator” – it ensures efficient diffusion of information relevant to the whole team
  • The task board serves as a focal point for the daily meeting, keeping it focused on progress and obstacles
  • The simplicity and flexibility of the task board and its elementary materials (sticky notes, sticky dots etc.) allow the team to represent any relevant information: colors can be used to distinguish features from bug fixes, sticky orientation can be used to convey special cases such as blocked tasks, sticky dots can be used to record the number of days a task spends “In Progress”…
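In software terms, a task board is little more than named columns and a move operation that mirrors re-sticking a card. The column names and tasks below are illustrative:

```python
class TaskBoard:
    """Minimal three-column task board: tasks move between named columns."""

    def __init__(self, columns=("To Do", "In Progress", "Done")):
        self.columns = {name: [] for name in columns}

    def add(self, task, column="To Do"):
        self.columns[column].append(task)

    def move(self, task, src, dst):
        """Mirror the physical act of moving a sticky note to another column."""
        self.columns[src].remove(task)
        self.columns[dst].append(task)

board = TaskBoard()
board.add("write login tests")
board.move("write login tests", "To Do", "In Progress")
print(board.columns)
```

Electronic wallboards are essentially this structure rendered on a large monitor and fed by real-time data from development tools.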
Common Pitfalls
  • Many teams new to Agile rush to adopt an electronic simulation (“virtual task board”) without first getting significant experience with a physical task board, even though virtual boards are much less flexible and poorer in affordances
  • Even geographically distributed teams for whom a virtual task board is a necessity can benefit from using physical task boards locally and replicating the information in an electronic tool
Origins

Sticky notes or index cards had been used for visual management of project scheduling well before Scrum and Extreme Programming brought these “low tech” approaches and their benefits back into the spotlight. However, the precise format of the task board described here did not become a de facto standard until the mid-2000’s.

  • 2003: the five-column task board format is described by Mike Cohn on his Web site; at the time, as this photo gallery collected by Bill Wake shows, very diverse variants still abounded
  • 2007: the simplified three-column task board format (“To Do”, “In Progress”, “Done”) becomes, around that time, more popular and more standard than the original five-column version

What makes a great Task Board
A good task board should be carefully designed with readability and usability in mind, and the project methodology should actively rely on it. This implies that the use of the task board should be standardized and form part of the process. If task boards (and other information radiators) are not an integral part of the project methodology, maintaining them might be perceived as overhead or duplication of work. This results in boards not being updated and becoming out of sync with the work actually being done. An incomplete or stale task board is worthless.

A task board is a living entity and should be kept healthy.

You have a great task board if…

  • Team members never complain about having to use it.
  • The daily standup happens against it.
  • Random people that pass by stop to look at it, expressing interest and curiosity.
  • Your boss has proudly shown it to his boss.
  • You see team members updating it regularly during the day.
  • It passes the hallway usability test: a person who has never seen it before can understand it quickly and without explanations.
  • You catch a senior manager walking the floor and looking at it.
  • It just looks great!

Information Radiators are also good ways to remind the team of critical items, such as issues that need to be addressed, items on which the team is currently working, key models for the system on which they are working, and the status of testing.
Depending on the type of information tracked on the Information Radiators, these displays can also help the team to identify problems early. This is especially true if the team is tracking key metrics about their performance where trends in the information will indicate something is out of whack for the team. This type of information includes passing and failing tests, completed functionality, and task progress.

Using Information Radiators

As a team, determine what information would be very helpful to see plastered on a wall in plain sight. The need for an Information Radiator may be identified at the very beginning of a project, or as a result of feedback generated during a retrospective. Ideally, it will communicate information that needs to go to a broad audience, changes on a regular basis, and is relevant for the team.
Decide not only what you want to show, but the best way to convey it. There are a variety of methods to choose from, including a whiteboard and markers, sticky notes, pins, dots, or a combination of all of the above. Anything goes, as long as it is not dependent on a computer and some fancy graphics software. Unless of course you are working with a distributed team; see suggestions for that situation below.
Grab the necessary tools and get to work, but don’t forget to have a little fun with the creation process. Remember to make the Information Radiator easy to read, understand, and update. You want this to be a useful, living display of information, so don’t paint yourself into a corner at the beginning.
Remember to update the information radiator when the information changes. If you are using it to track tasks, you may change it several times a day. If you are using it to track delivery of features, it may be updated once a week or every two weeks.
Check in with the team regularly to find out if the Information Radiator is up to date and still useful. Find out if people outside the team are using it to gather information about the team’s progress without causing an interruption. Find out if there are possible improvements, or if the information radiator is no longer needed. Whatever feedback you receive, act on it.

Traffic lights, lava lamps


Information radiators can take a variety of forms, from wallboards to lava lamps and traffic lights.

A wallboard is a type of information radiator that displays vital data about the progress of the development team. Similar to a scoreboard at a sporting event, wallboards are large, highly visible and easy to understand for anyone walking by.

Traditional wallboards are made of paper or use sticky notes on a wall. Electronic wallboards are very effective since they update automatically with real-time data ensuring that people check back regularly.

Common information radiators used in a Scrum environment include:

  • Product Vision
  • Product Backlog/release plan
  • Iteration Backlog
  • Burn Down and Burn Up charts
  • Impediment List

In a Lean environment, information radiators are of a specialized type called a Visual Control. Visual controls are used to make it easier to control an activity or process through a variety of visual signals or cues.

The visual control:

  • Conveys its information visually 
  • Mirrors (at least some part of) the process being used by the team
  • Describes the state of the work‐in‐process
  • Is used to control the work‐in‐process
  • Can be viewed by anyone

Visual controls should be present at all levels: business, management, and team. That is, they should help the business see how value is being created as well as assist management and the team in building software as effectively and efficiently as possible.
The visual controls used in Lean‐Agile include

  • Product Vision: Every Agile team should have a product vision. This provides the big picture for the product: what is motivating the development effort, what the current objectives are, and the key features. We have seen teams who were flailing about, trying to figure out what they were supposed to be doing, suddenly gain focus when the Product Champion produced a product vision statement.
  • Product Backlog / Release Plan / Lean Portfolio
  • Iteration Backlog – simple team, multiple teams
  • Story Point Burn‐Up Chart
  • Iteration Burn‐Down Chart
  • Business Value Delivered Chart
  • Impediment List

A backlog is the accumulation of work that has to be done over time. In Lean‐Agile, the product backlog describes that part of the product that is still to be developed. Before the first iteration, it shows every piece of information that is known about the product at that time, represented in terms of features and stories. As each iteration begins, some of these stories move from the product backlog to the iteration backlog. At the end of each iteration, the completed features move off of both backlogs.
During “Iteration 0,” the features in the product backlog are organized to reflect the priorities of the business: higher‐priority features on the left and lower‐priority features on the right.

 

The entire enterprise (business, management, and development teams) also need line of sight to velocity (points/time), which, after the first few stories are done, should begin to represent the velocity of business solutions delivered. This is a clear representation of the value stream (from flow of ideas to completed work) and is mapped to the number of points that teams can sustainably deliver in a release.

We can use two visual controls working as a pair to give management a dashboard-type view of work. The release burn-up chart tracks cumulative points delivered with each iteration. This can be broken out or rolled up based on program, stakeholder, and so on. The feature burn-up chart is created by using the initial high-level estimate to calculate the percent complete after each iteration and plotting this with all features currently identified in the release plan.
This view gives the enterprise a clear visual control of current priorities, along with what work is actually in process. A Lean enterprise continuously watches out for too many features being worked on at any time—this indicates process problems and potential thrashing.
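The arithmetic behind this pair of charts is straightforward; the point values and release estimate below are invented for illustration:

```python
from itertools import accumulate

# Points delivered in each iteration of the release
delivered = [8, 13, 10, 12]

# Release burn-up: cumulative points after each iteration
burn_up = list(accumulate(delivered))
print(burn_up)  # [8, 21, 31, 43]

# Feature burn-up: percent complete against the initial high-level estimate
release_estimate = 86
percent_complete = [round(100 * c / release_estimate) for c in burn_up]
print(percent_complete)
```

Plotting `burn_up` per iteration gives the release chart; plotting `percent_complete` gives the feature view against the original estimate.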

The Impediment List: A foundation of both Scrum and Lean-Agile is that continuous improvement includes continually removing impediments. One of the purposes of the daily meeting is to expose impediments, to make them explicit. The Scrum Master or Agile project manager must maintain a list of current impediments so that progress on resolving them can be visible to all. Entries on this list should include:

  • Date entered on list
  • Description of impediment
  • Severity of impediment (what its impact is)
  • Who it affects
  • Actions being taken to remove it

The impediment list should be maintained on an ongoing basis as impediments are removed, added, or modified in some way.

Kanban Board

In the context of Agile teams where the “Kanban method” of continuous improvement (or some of its concepts) has been followed, the following adaptations are often seen:

  • Such teams deemphasize the use of iterations, effort estimates and velocity as a primary measure of progress;
  • They rely on measures of lead time or cycle time instead of velocity; and in the most visible alteration, they replace the task board with a “kanban board”:
  • Unlike a task board, the kanban board is not “reset” at the beginning of each iteration; its columns represent the different processing states of a “unit of value”, which is generally (but not necessarily) equated with a user story.
  • In addition, each column may have associated with it a “WIP limit” (for “work in process” or “work in progress”): if a given state, for instance “in manual testing”, has a WIP limit of, say, 2, then the team “may not” start testing a third user story if two are already being worked on.
  • Whenever such a situation arises, the priority is to clear current work-in-process, and team members will “swarm” to help those working on the activity that’s blocking flow
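The WIP-limit rule described above can be sketched as a column that refuses new work when it is full. The column name and limit are the example values from the text:

```python
class KanbanColumn:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, story):
        """Enforce the WIP limit: a full column rejects new work, signalling
        the team to swarm on finishing rather than starting."""
        if len(self.items) >= self.wip_limit:
            raise RuntimeError(f"WIP limit reached in '{self.name}'")
        self.items.append(story)

testing = KanbanColumn("in manual testing", wip_limit=2)
testing.pull("story A")
testing.pull("story B")
try:
    testing.pull("story C")  # a third story may not be started
except RuntimeError as e:
    print(e)
```

In an electronic board the rejection would be a visual cue rather than an exception, but the policy is the same.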
Also Known As

The term “kanban” is Japanese  with the sense of a sign, poster or billboard, and derived from roots which literally translate as “visual board”.

Its meaning within the Agile context is borrowed from the Toyota Production System, where it designates a system to control the inventory levels of various parts. It is analogous to (and in fact inspired by) cards placed behind products on supermarket shelves to signal “out of stock” items and trigger a resupply “just in time”.

The Toyota system affords a precise accounting of inventory or “work in process”, and strives for a reduction of inventory levels, considered wasteful and harmful to performance.

The phrase “Kanban method” also refers to an approach to continuous improvement which relies on visualizing the current system of work scheduling, managing “flow” as the primary measure of performance, and whole-system optimization – as a process improvement approach, it does not prescribe any particular practices.

Common Pitfalls

Kanban boards are generally more sophisticated than “mere” task boards. This is not a mistake in and of itself; however, it is not advisable that the kanban board should serve as a pretext to reintroduce a “waterfall”-like, linear sequence of activities structuring the process of software development. This may lead to the creation of information silos or over-specialization among team members.

In particular, teams should be wary of kanban boards that lack WIP limits, not only defined but also enforced with respect to demands from managers, customers or other stakeholders. It is from these limits that the kanban approach derives its effectiveness.

Expected Benefits

In some contexts, measuring lead time rather than velocity, and dispensing with the regular rhythm of iterations, may be the more appropriate choice: for instance, when there is little concern with achieving a specific release date, or when the team’s work is by nature continuous and ongoing, such as enhancement or maintenance, in particular of more than one product.

At the risk of oversimplifying, a “kanban board” setup can be considered first for efforts involving maintenance or ongoing evolution, whereas a “task board” setup may be a more natural first choice in efforts described as “projects”.

Origins
  • 2001: Mary Poppendieck’s article, “Lean Programming”, draws attention to the structural parallels between Agile and the ideas known as Lean or the “Toyota Production System”
  • 2003: expanding on their earlier work on Lean Programming, Mary and Tom Poppendieck’s book “Lean Software Development” describes the Agile task board as a “software kanban system”
  • 2007: the first few experience reports from teams using the specific set of alterations known as “kanban” (no iterations, no estimates, continuous task boards with WIP limits) are published, including reports from Corbis (David Anderson) and BueTech (Arlo Belshee)
  • 2007: the “kanbandev” mailing list is formed to provide a venue for discussion of kanban-inspired Agile planning practices
  • 2009: two entities dedicated to exploring the kanban approach are formed, one addressing business concerns, the LSSC and a more informal one aimed at giving the community more visibility: the Limited WIP Society
Flow of Work

After visualizing the work, you will be able to watch how it moves across the board. Kanban teams call this observing the Flow of Work. When there are bottlenecks in the process, or blocking issues that prevent work from being completed, you start to see it play out on the board.

Stop Starting and Start Finishing!

Kanban’s primary mechanism for improving the flow of work is the Work-in-Process Limit. You basically set a policy for the team, saying that we’ll limit how much work is started, but not finished, at any one time. When the board starts to fill up with too much unfinished work, team members re-direct their attention and collaborate to help get some of the work finally finished, before starting any more new work.

On a Kanban board, you visualize the flow of work. You can visualize how the work is flowing during the sprint through the very fast iterations of design, develop, and test that happen within each sprint.  Or, you can let the flow of work be governed by the kanban board and replace the sprint container with a regular deployment cadence instead. It’s a minor difference on the surface, but it can have a major impact.

With the sprint approach, you spend time at the beginning of every sprint trying to estimate and plan a batch of work for the entire sprint. Then you work on it and push hard to have the entire batch completed, tested and deployed by the end of the sprint.

When you’re focusing on flow and using Kanban, you could still deploy completed code every two weeks. But instead of going through the whole cycle of planning, estimating and moving a batch of work through to completion, you can move work through the system without batching two weeks of work at a time. Planning and estimating, designing, building and testing can happen for each individual item as it reaches the top of the priority list in the backlog. When there’s capacity on the Kanban board for more work (WIP limits are being honored, and people are available to do more work), they pull the next item to work on. When the two-week cadence comes around, you simply deliver whatever is ready to deliver.  This encourages members of the team to take each item all the way through to completion individually, instead of focusing on having a two-week batch of work completed at one time. You ensure that each item is Done. With a capital D. “Done-Done”, some people call it. And you get it to “Done-Done” before pulling the next item available for you to work on.

The Agile practice of iterative delivery is a huge improvement over old waterfall or stage-gate project management methods. Focusing on Flow using Kanban can sometimes be even more efficient, resulting in shorter lead times and higher productivity. It can lessen the feeling of always “starting and stopping” without abandoning the value of having a regular cadence to work toward.

Parking Lot Chart

A “Parking Lot Chart” is used to provide a top-level, digested summary of project status (not to be confused with a “Parking Lot List,” a tool facilitators use to capture unresolved issues). It was first described in Feature Driven Development (FDD) [Palmer02], and is widely used in agile projects today. It is sometimes also called a “Project Dashboard”.

Niko-niko Calendar

The team installs a calendar on one of the room’s walls. The format of the calendar allows each team member to record, at the end of every workday, a graphic evaluation of their mood during that day. This can be either a hand-drawn “emoticon” or a colored sticker, following a simple color code, for instance: blue for a bad day, red for neutral, yellow for a good day.

Over time, the niko-niko calendar reveals patterns of change in the moods of the team, or of individual members.

Also Known As

The Japanese word “niko” means “smile”; following a common pattern of word doubling in Japanese, “niko-niko” has a meaning closer to “smiley”.

The term “mood board” is also seen. It is an information radiator.

Expected Benefits

The value of this practice lies in making somewhat objective an important element of team performance – motivation or well-being – which is generally seen as entirely subjective and thus impossible to measure and track.

This may be seen as an illustration of the Gilb Measurability Principle: “anything you need to quantify can be measured in some way that is superior to not measuring it at all.”

In other words, a measurement does not have to be perfect or even very precise, as long as your intent is to get a quantitative handle on something that was previously purely qualitative; the important thing is to take that first step toward quantifying.

Common Pitfalls

As with other activities, such as retrospectives, where team members are asked to report subjective feelings, self-censorship is always a risk. This could be the case, for instance, if team members who report poor days are blamed for “whining”, by management or by team mates.

Origins
  • 2001: among the visualizations described in Norm Kerth’s “Project Retrospectives”, the “Energy Seismograph” can perhaps be seen as a forerunner of the niko-niko calendar
  • 2006: niko-niko calendars are first described by Akinori Sakata in a Web article

PEARL XXII : Threat Modelling for Agile Applications

PEARL XXII : The key to effectively incorporating threat modeling is to decide on the scope of the threat modeling that will be performed during the various stages of an agile development project, by adopting the Security Development Lifecycle (SDL) for Agile Development, which includes threat modeling.

Many software development organizations, including product organizations like Microsoft, use Agile software development and management methods to build their applications.
Historically, security has not been given the attention it needs when developing software with Agile  methods. Since Agile methods focus on rapidly creating features that satisfy customers’ direct needs, and  security is a customer need, it’s important that it not be overlooked. In today’s highly interconnected  world, where there are strong regulatory and privacy requirements to protect private data, security must  be treated as a high priority.
There is a perception today that Agile methods do not create secure code, and, on further analysis, the  perception is reality. There is very little “secure Agile” expertise available in the market today. This needs  to change. But the only way the perception and reality can change is by actively taking steps to integrate  security requirements into Agile development methods.

With Agile release cycles taking as little as one week, there simply isn’t enough time for teams to  complete SDL requirements for every release. On the other hand, there are serious security  issues that the SDL is designed to address, and these issues simply can’t be ignored for any release—no  matter how small.

A set of software development process improvements called the Security Development Lifecycle (SDL) has been developed. The SDL has been shown to reduce the number of vulnerabilities in shipping software by more than 50 percent. However, from an Agile viewpoint, the SDL is heavyweight because it was designed primarily to help secure very large products. For Agile practitioners to adopt the SDL, two changes must be made. First, SDL additions to Agile processes must be lean. This means that for each feature, the team does just enough SDL work for that feature before working on the next one. Second, the development phases (design, implementation, verification, and release) associated with the classic waterfall-style SDL do not apply to Agile and must be reorganized into a more Agile-friendly format. A streamlined approach that melds agile methods and security, the Security Development Lifecycle for Agile Development (SDL-Agile), has to be put into practice.

Let’s look at some basic examples of what threat modeling is and what it isn’t:

Definitions of what is and isn't involved in threat modeling

How to perform threat modeling
Threat modeling is largely a team activity. Does that mean an individual can’t threat model alone on a small project? Of course not, but you will get the greatest benefit from the activity by including as many project members as possible. This is because each member brings a unique perspective to the exercise, and that input is essential when trying to identify the various ways that an attacker might attempt to break your application or service.

The recommended approach is to use a whiteboard to sketch out each threat model, thereby facilitating team discussion around the diagram. The SDL is specific in recommending that threat models use the data flow diagram (DFD) as the foundational diagramming technique. Having a project member who is already familiar with data flow diagramming can be a big help, and that individual would be best suited to facilitate the first threat modeling session. So, the first step in threat modeling is to draw a DFD on the whiteboard that represents your envisioned application. Next, let’s look at the various threat modeling diagram types.

The highest-level diagram you can create is the context diagram. The context diagram is typically a simple diagram that captures the most basic interactions of your application with external actors. Figure 1 is an example of a context diagram. Following the context diagram, there are a variety of more detailed diagrams that can show more specific interactions of your application’s internal components and actors. The Diagram Levels table below lists the various levels of diagrams, along with descriptions of what each level typically contains.

Figure 1. Context diagram

Diagram levels

Once you have created a few of these high-level diagrams with your team, the next step is to identify the trust boundaries that we discussed earlier. Trust boundaries are represented by drawing lines that separate the various zones of trust in your diagram. It is the action of applying the trust boundary that changes the DFD into a threat model. It’s critical that you get the trust boundaries correct because that is where your team will focus its attention when discussing threats to the application. Figure 2 shows a Level 0 threat model with trust boundaries included.

  • Identify Trust Boundaries, Data Flow, Entry Point
  • Privileged Code
  • Modify Security Profile based on above

Figure 2. Level 0 diagram with trust boundaries
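A Level 0 diagram like the one in Figure 2 can be represented as simple data, which is enough to mechanically flag the flows that cross a trust boundary. The element names and zone labels below are hypothetical, not taken from the SDL tool:

```python
# Illustrative sketch: representing a Level 0 DFD and finding the data
# flows that cross a trust boundary (where threat discussion focuses).
# Element names and trust-zone labels are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    zone: str  # the trust zone the element lives in, e.g. "internet"

@dataclass(frozen=True)
class DataFlow:
    source: Element
    target: Element
    label: str

    def crosses_boundary(self):
        # A flow crosses a trust boundary when its endpoints sit in
        # different trust zones.
        return self.source.zone != self.target.zone

browser = Element("Browser", zone="internet")
web_app = Element("Web App", zone="dmz")
database = Element("Database", zone="corp")

flows = [
    DataFlow(browser, web_app, "HTTP request"),
    DataFlow(web_app, database, "SQL query"),
    DataFlow(web_app, web_app, "session cache"),  # stays inside one zone
]

# Flows that cross a boundary get the team's attention first.
risky = [f.label for f in flows if f.crosses_boundary()]
```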

Once you’ve created the threat model, your team will be ready to start discussing the threats to the application as envisioned in deployment. In order to do this efficiently, the SDL has categorized potential threats along with the most common mitigations to those threats in order to simplify the process. For a more complete discussion on threat models, refer to the book Threat Modeling by Frank Swiderski and Window Snyder.

Let’s next examine the process you can follow to identify, classify, and mitigate threats to your application. We call this process STRIDE.

STRIDE

STRIDE is a framework for classifying threats during threat modeling. STRIDE stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. The table below provides basic descriptions of the elements of STRIDE.

STRIDE definitions

Notice how each of the definitions is described in terms of what an “attacker” can do to your application. This is an important point: when threat modeling, it’s crucial to convey to your project team that they should be more focused on what an attacker can do than on what a legitimate user can do. What your users can do with your application should be captured in your user stories, not in the threat model.

Now that you’ve created the threat model and discussed the threats to the various components and interfaces, it’s time to determine how to help protect your application from those threats. We call these mitigations. The following table lists the mitigation categories with respect to each of the STRIDE elements.

STRIDE and mitigation categories
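The canonical pairing of each STRIDE threat category with its mitigation category can be encoded as a simple lookup a team might consult while walking the threat model:

```python
# The standard STRIDE-to-mitigation-category mapping as a lookup table.

STRIDE_MITIGATIONS = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def mitigation_for(threat_category):
    """Return the mitigation category for a STRIDE threat category."""
    return STRIDE_MITIGATIONS[threat_category]
```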

The basics of threat modeling have been depicted above; the following section delves into how to incorporate threat modeling with agile methods. Additionally, it’s important to note that once threat models have been created on the whiteboard, it is essential to capture them as a drawing. Microsoft provides an excellent tool for this purpose: the Microsoft SDL Threat Modeling Tool. This tool allows you to easily create threat model diagrams, and provides a systematic approach to identifying threats based on trust boundaries and evaluating common mitigations to those threats.

At some point, the major SDL artifact—the threat model—must be used as a baseline for the product.  Whether this is a new product or a product already under development, a threat model must be built as  part of the sprint design work. Like many good Agile practices, the threat model process should be time-boxed and limited to only the parts of the product that now exist or are in development.
Once a threat model baseline is in place, any extra work updating the threat model will usually be small,  incremental changes.
A threat model is a critical part of securing a product because a good threat model helps to:

  • Determine potential security design issues.
  • Drive attack surface analysis and most “at-risk” components.
  • Drive the fuzz-testing process.

During each sprint, the threat model should be updated to represent any new features or functionality  added during that sprint. The threat model should also be updated to represent any significant design  changes, even if the functionality stays the same.

A threat model needs to be built for the current product, but it is imperative that the team remains lean. A minimal, but useful, threat model can be built by analyzing high-risk entry points and data in the system. At a minimum, the following should be identified and threat models built around the entry points and data:

  • Anonymous and remote network endpoints
  • Anonymous or authenticated local endpoints into high-privileged processes
  • Sensitive, confidential, or personally identifiable data held in data stores used in the application

Integrated threat modeling

Integrated threat modeling ensures that the dedicated security architect liaises with the QA lead and development lead, and periodically identifies, documents, rates, and reviews the threats to the system. The key point to be noted is that threat modeling is not a one-time activity; rather, it is an iterative process that needs to be carried out in every sprint.

  • Modify the architecture diagram based on changes in the sprint (if any)
  • Periodically review and identify assets
  • Update the threat document with vulnerabilities
  • Identify threat attributes such as countermeasures and attack techniques
  • Use techniques such as DREAD to rate the threats
  • Fix threats with a high rating immediately in subsequent sprints
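A DREAD rating reduces to a small computation: each factor (Damage, Reproducibility, Exploitability, Affected users, Discoverability) is scored and the total drives the rating. The 1–3 scale and the High/Medium/Low bands below are common simplifications, not a fixed standard:

```python
# Hedged sketch of DREAD threat rating. Each factor is scored 1-3 (a
# common simplification); the score bands are illustrative assumptions.

DREAD_FACTORS = ("damage", "reproducibility", "exploitability",
                 "affected_users", "discoverability")

def dread_rating(scores):
    """scores: dict mapping each DREAD factor to an integer 1..3."""
    total = sum(scores[f] for f in DREAD_FACTORS)  # range 5..15
    if total >= 12:
        return "High"    # fix immediately in a subsequent sprint
    if total >= 8:
        return "Medium"
    return "Low"

# Hypothetical scoring of a SQL injection threat: total 13 -> "High".
sql_injection = {"damage": 3, "reproducibility": 3, "exploitability": 2,
                 "affected_users": 3, "discoverability": 2}
rating = dread_rating(sql_injection)
```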

Training Development Team

Focusing on improving processes and giving staff better awareness training could reap huge rewards – cutting the time taken to spot breaches and even preventing many from happening in the first place. Therefore it is suggested that the development team be given sufficient training in secure code development at least covering the following

  •  Cross-site scripting vulnerabilities
  •  SQL injection vulnerabilities
  •  Buffer/integer overflows
  •  Input validation
  • Language Specific issues

Also, it would help if developers periodically review industry standards, such as the OWASP Top 10 vulnerabilities, and understand how to develop code devoid of such vulnerabilities.

Continuing Threat Modeling
Threat modeling is one of the every-sprint SDL requirements for SDL-Agile. Unlike most of the other every-sprint requirements, threat modeling is not easily automated and can require significant team effort. However, in keeping with the spirit of agile development, only new features or changes being implemented in the current sprint need to be threat modeled in the current sprint. This helps to minimize the amount of developer time required while still providing all the benefits of threat modeling.
Fuzz Testing
Fuzz testing is a brutally effective security testing technique, especially if the team has never used fuzz testing on the product. The threat model should determine what portions of the application to fuzz test. If no threat model exists, the initial list should include high-risk items, such as the high-risk code described below.
After this list is complete, the relative exposure of each entry point should be determined, and this drives the order in which entry points are fuzzed. For example, remotely accessible or unauthenticated endpoints are higher risk than local-only or authenticated endpoints.
The beauty of fuzz testing is that once a computer or group of computers is configured to fuzz the application, it can be left running, and only crashes need to be analyzed. If there are no crashes from the outset of fuzz testing, the fuzz test is probably inadequate, and a new task should be created to analyze why the fuzz tests are failing and make the necessary adjustments.
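The mechanics of such a harness are simple: mutate valid input, feed it to the target, record only the crashes. The toy `parse_record` target below (with a deliberate bug) and the harness around it are illustrative, not a real fuzzing framework:

```python
# Minimal mutation-fuzzing sketch: randomly corrupt a valid input and
# feed it to a parser, recording crashes for later analysis.

import random

def parse_record(data: bytes):
    # Toy parser with a deliberate bug: it trusts the declared length byte.
    length = data[0]
    body = data[1:1 + length]
    if len(body) != length:
        raise ValueError("truncated record")
    return body

def mutate(seed: bytes, rng):
    # Flip one randomly chosen byte to a random value.
    data = bytearray(seed)
    pos = rng.randrange(len(data))
    data[pos] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed, iterations=1000, rng=None):
    rng = rng or random.Random(42)  # fixed seed keeps runs reproducible
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, repr(exc)))
    return crashes

seed = bytes([4]) + b"abcd"   # a valid record: length 4, body "abcd"
crashes = fuzz(parse_record, seed)
# If no crashes appear at all, the harness itself should be investigated.
```

In practice the loop runs unattended on dedicated machines and only the recorded crashes need analysis, exactly as described above.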
Using a Spike to Analyze and Measure Unsecure Code in Bug-Dense and “At-Risk” Code

A critical indicator of potential security bug density is the age of the code. Based on the experience of developers and testers at organizations like Microsoft, the older the code, the higher the number of security bugs found in it. If the project has a large amount of legacy or risky code, the team should locate as many vulnerabilities in this code as possible. This is achieved through a spike. A spike is a time-boxed “side project” with a well-defined goal (in this case, to find security bugs). This spike can be thought of as a mini security push. The goal of the security push in organizations like Microsoft is to bring risky code up to date in a short amount of time relative to the project duration.
Note that the security push doesn’t propose fixing the bugs yet but rather analyzing them to determine how bad they are. If a lot of security bugs are found in code with network connections or in code that handles sensitive data, these bugs should not only be fixed soon, but also another spike should be set up to comb the code more thoroughly for more security bugs.

The following defines the highest-risk code that should receive greater scrutiny if the code is legacy code, and that should be written with the greatest care if the code is new code.

  • Windows services and *nix daemons listening on network connections
  • Windows services running as SYSTEM or *nix daemons running as root
  • Code listening on unauthenticated network connections
  • ActiveX controls
  • Browser protocol handlers (for example, about: or mms:)
  • setuid root applications on *nix
  • Code that parses data from untrusted (non-admin or remote) files
  • File parsers or MIME handlers

Examples of analysis performed during a spike include:

  • All code. Search for input validation failures leading to buffer overruns and integer overruns. Also, search for insecure passwords and key handling, along with weak cryptographic algorithms.
  • Web code. Search for vulnerabilities caused through improper validation of user input, such as cross-site scripting (XSS).
  • Database code. Search for SQL injection vulnerabilities.
  • Safe for scripting ActiveX controls. Review for C/C++ errors, information leakage, and dangerous operations.

All appropriate analysis tools available to the team should be run during the spike, and all bugs triaged and logged. Critical security bugs, such as a buffer overrun in a networked component or a SQL injection vulnerability, should be treated as high-priority unplanned items.
Exceptions
The SDL requirement exception workflow is somewhat different in SDL-Agile than in the classic SDL. Exceptions in SDL-Classic are granted for the life of the release, but this won’t work for Agile projects. A “release” of an Agile project may only last for a few days until the next sprint is complete, and it would be a waste of time for project managers to keep renewing exceptions every week. To address this issue, project teams following SDL-Agile can apply for an exception either for the duration of the sprint (which works well for longer sprints) or for a specific amount of time, not to exceed six months (which works well for shorter sprints). When reviewing the requirement exception, the security advisor can choose to increase or decrease the severity of the exception by one level (and thus increase or decrease the seniority of the manager required to approve the exception) based on the requested exception duration.

For example, say a team requests an exception for a requirement normally classified as severity 3, which requires manager approval. If they request the exception only for a very short period of time, say two weeks, the security advisor may drop the severity to a 4, which requires only approval from the team’s security champion. On the other hand, if the team requests the full six months, the security advisor may increase the severity to a 2 and require signoff from senior management due to the increased risk.

In addition to applying for exceptions for specific requirements, teams can also request an exception for an entire bucket. Normally teams must complete at least one requirement from each of the bucket categories during each sprint, but if a team cannot complete even one requirement from a bucket, the team requests an exception to cover that entire bucket. The team can request the exception for the duration of the sprint or for a specific time period, not to exceed six months, just as for single exceptions. However, due to the broad nature of the exception (basically stating that the team is going to skip an entire category of requirements), bucket exceptions are classified as severity 2 and require the approval of at least a senior manager.
Final Security Review
A Final Security Review (FSR) similar to the FSR performed in the classic waterfall SDL is required at the end of every agile sprint. However, the SDL-Agile FSR is limited in scope; the security advisor only needs to review the following:

  • All every-sprint requirements have been completed, or exceptions for those requirements have been granted.
  • At least one requirement from each bucket requirement category has been completed (or an exception has been granted for that bucket).
  • No bucket requirement has gone more than six months without being completed (or an exception  has been granted).
  • No one-time requirements have exceeded their grace period deadline (or exceptions have been granted).
  • No security bugs are open that fall above the designated severity threshold (that is, the security bug bar).

Some of these tasks may require manual effort from the security advisor to ensure that they have been  completed satisfactorily (for example, threat models should be reviewed), but in general, the SDL-Agile  FSR is considerably more lightweight than the SDL-Classic FSR.
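The checklist above lends itself to partial automation. A minimal sketch follows; the sprint record layout, field names, and the severity convention (lower number = more severe) are assumptions for illustration:

```python
# Sketch of automating the SDL-Agile FSR checks. The sprint record
# structure and severity convention are illustrative assumptions.

def fsr_passes(sprint):
    """Return (passed, findings) for a lightweight SDL-Agile FSR."""
    findings = []
    # Every-sprint requirements must be done or covered by an exception.
    for req, done in sprint["every_sprint"].items():
        if not done and req not in sprint["exceptions"]:
            findings.append(f"every-sprint requirement open: {req}")
    # No bucket may go more than six months without a completed requirement.
    for bucket, months in sprint["bucket_age_months"].items():
        if months > 6 and bucket not in sprint["exceptions"]:
            findings.append(f"bucket stale (> 6 months): {bucket}")
    # No open security bugs above the bug bar (severity 1 is worst).
    for bug in sprint["security_bugs"]:
        if bug["severity"] <= sprint["bug_bar"]:
            findings.append(f"bug above the bar: {bug['id']}")
    return (not findings, findings)

sprint = {
    "every_sprint": {"threat model updated": True, "static analysis": False},
    "exceptions": {"static analysis"},
    "bucket_age_months": {"verification": 2, "design review": 7},
    "security_bugs": [{"id": "SEC-12", "severity": 3}],
    "bug_bar": 2,  # only severity 1-2 bugs block the sprint
}
passed, findings = fsr_passes(sprint)  # fails: "design review" bucket stale
```

Manual items such as reviewing the threat models themselves would still sit outside a script like this.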

PEARL VII : Agile metrics measurement for Software Quality

PEARL VII : The metrics for measuring software quality in an agile environment are drastically different from those used in traditional IT landscapes. How to effectively measure agile software development quality in an agile enterprise using agile metrics is dealt with here.

Agile software development grew in part from the intersection of erstwhile alternative practices that predated it, and even more derivatives have sprung up throughout the years. This has led to the evolution of Agile in its current state, wherein all the defined practices become a catalog from which development teams can choose and adapt what is appropriate for their case. This evolution means that development teams will not easily have uniform processes; forcing uniformity will most likely negate the benefits of Agile. That said, most metrics are intimately tied to actual practices, but measurements should be able to cope with this variability.

The traditional metrics are also in conflict with the principles of agile and lean. For example, a focus on adherence to estimates is incompatible with agile’s principle of embracing change; it leads to chasing obstacles instead of seizing opportunities.

In Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results, David Anderson combines TOC with Agile software development, with the objective of creating a process that “scales in scope and discipline to be acceptable in the boardrooms of the Fortune 1000”. Anderson compares traditional “waterfall”, FDD, XP and RAD approaches, and proposes a rigorous metrics approach.

Anderson provides a convincing argument for traditional metrics’ inability to measure agile software development, by demonstrating how they violate Reinertsen’s criteria for a good metric. Traditional metrics do not meet the criterion of being relevant because of their heavy cost focus; cost should not be the main concern. Moreover, they elude the requirements of being simple and easy to collect. For example, the once-popular traditional metric of counting lines of code has no simple correlation with the actual effort: software complexity results in a nonlinear relation between effort and lines of code. It also motivates developers to squeeze in extra lines of code that add no value.

It is impossible to create a metric set that would suit all agile projects. Every project has different goals and needs, and the incremental and emergent nature of agile methods [Mnkandla and Dwolatzky, 2007] implies that metrics – as part of the framework of development technologies – should also be allowed to emerge. There are, however, methods for choosing suitable metrics.

When measuring the production side of development, it is important to select metrics that support and reflect their financial counterparts. The most commonly recommended agile production metrics are described in the following sections.

Total project duration: Agile projects get done quicker than traditional projects. By starting development sooner and cutting out bloatware — unnecessary requirements — agile project teams can deliver products quicker. Measure total project duration to help demonstrate efficiency.

Time to market: Time to market is the amount of time an agile project takes to provide value, either through internal use or by generating income, by releasing working products and features to users.

Time to market is especially important for companies with revenue-generating products, because it aids in budgeting throughout the year. It’s also very important if you have a self-funding project— a project being paid for by the income from the product.

Total project cost: Cost on agile projects is directly related to duration. Because agile projects are faster than traditional projects, they can also cost less. Organizations can use project cost metrics to plan budgets, determine return on investment, and know when to exercise capital redeployment.

Return on investment: Return on investment (ROI) is income generated by the product, less project costs: money in versus money out. On agile projects, ROI is fundamentally different than it is on traditional projects. Agile projects have the potential to generate income with the very first release and can increase revenue with each new release.

New requests within ROI budgets: Agile projects’ ability to quickly generate high ROI provides organizations with a unique way to fund additional product development. New product features may translate to higher product income. If a project is already generating income, it can make sense for an organization to roll that income back into new development and see higher revenue.

Capital redeployment: On an agile project, when the cost of future development is higher than the value of that future development, it’s time for the project to end. The organization may then use the remaining budget from the old project to start a new, more valuable project.

Team member turnover: Agile projects tend to have higher morale. One way of quantifying morale is by measuring turnover through a couple of metrics:

  • Scrum team turnover: Low scrum team turnover can be one sign of a healthy team environment. High scrum team turnover can indicate problems with the project, the organization, the work, individual scrum team members, burnout, ineffective product owners forcing development team commitments, personality incompatibility, a scrum master who fails to remove impediments, or overall team dynamics.

  • Company turnover: High company turnover, even if it doesn’t include the scrum team, can affect morale and effectiveness. High company turnover can be a sign of problems within the organization. As a company adopts agile practices, it may see turnover decrease.

Requirements and design quantification

Tom Gilb strongly believes that quantification of requirements is an essential concept missing from the agile paradigm, or even from software engineering in general. He claims that this lack is a risk for project failure, as software engineers and project managers cannot properly manage project results, control risks and costs, or prioritize tasks [Gilb and Cockburn, 2008]. Especially important are quality characteristics, because “functions and use cases are far less interesting” [Gilb and Cockburn, 2008] and “that is where most people have problems with quantification” [Gilb and Brodie, 2007]. Gilb assures that only numerically expressed quality goals are clear enough and therefore quantification is a step needed on the way from high level requirements to design ideas.
Design ideas should also be quantified [Gilb and Brodie, 2007]: their value (to the customer – business value, not technical) and cost (effort) are estimated. Then it is possible to identify the best designs – the ones with the highest value-to-cost ratio – and reprioritize the following development steps. A similar idea is described by Kile and Inampudi [2007]: the value and cost of implementing a requirement are estimated (without considering different design ideas) and the requirements are prioritized according to the result. This was a solution to a problem with the requirements prioritization ability of the customer and the development team: with some requirements having high value and high cost, and others having moderate value but very low cost, it was not easy to compare their desirability without quantification.
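The value-to-cost prioritization described above reduces to a small computation. The requirement names and figures below are hypothetical:

```python
# Value-to-cost prioritization sketch: estimate value and cost per
# requirement, then rank by the ratio. Names and figures are made up.

requirements = [
    {"name": "single sign-on", "value": 90, "cost": 30},
    {"name": "audit log",      "value": 40, "cost": 40},
    {"name": "dark mode",      "value": 25, "cost": 5},
]

for r in requirements:
    r["ratio"] = r["value"] / r["cost"]

# Highest value-to-cost ratio first: these are the "best designs".
prioritized = sorted(requirements, key=lambda r: r["ratio"], reverse=True)
order = [r["name"] for r in prioritized]
# order == ["dark mode", "single sign-on", "audit log"]
```

Note how the low-value but very cheap item jumps ahead of the high-value, expensive one, which is exactly the comparison the teams above struggled to make without quantification.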

Metrics can also aid the application of agile practices. For example, in refactoring they can give information on the appropriate time for and the significance of a refactoring step [Kunz et al., 2008].

Heuristics for wise agile measurement [Hartmann and Dymond, 2006]

A good metric or diagnostic:

  1. Affirms and reinforces Lean and Agile principles.
  2. Measures outcome, not output.
  3. Follows trends, not numbers.
  4. Belongs to a small set of metrics and diagnostics.
  5. Is easy to collect.
  6. Reveals, rather than conceals, its context and significant variables.
  7. Provides fuel for meaningful conversation.
  8. Provides feedback on a frequent and regular basis.
  9. May measure Value (Product) or Process.
  10. Encourages “good-enough” quality.

Leffingwell argues that measurement should happen during iteration retrospectives and release retrospectives.

He proposes two categories of metrics: quantitative and qualitative. Quantitative metrics for an iteration consist of process metrics, such as the number of user stories accepted/rejected/rescheduled/added, and product metrics related to quality (defect count) and testing (number of test cases, percentage of automated test cases, unit test coverage).

Quantitative metrics for a release measure the release progress with value delivery (number of features delivered to the customer and their total value expressed in feature points, feature debt – existing customer commitments), conformance to release date, and technical debt (number of refactoring targets and number of refactorings completed, also called architectural debt or design debt). The qualitative assessment for both iteration and release requires listing what went well and what did not, revealing what should be continued and what needs to be improved.

Kunz et al. [2008] propose to combine software measurement with refactoring in order to indicate when a refactoring step is necessary, how important it is, how it affects quality, and what side effects it has. Metrics would be used here as triggers for needed refactoring steps.

Categories of Metrics

Quality

  • Defect Count
  • Technical Debt
  • Faults-Slip-Through
  • Sprint Goal success rate

Predictability

  • Velocity
  • Running Automated Tests

Value

  • Customer Satisfaction Survey
  • Business Value Delivered

Lean

  • Lead Time
  • Work In Progress
  • Queues

Cost

  • Average Cost Per Function

Commonly recommended agile production metrics.

Lean Metrics

The selection of production metrics must carefully consider what has been advised in the previous sections. Inventory-based metrics possess all these characteristics and give the advantage of addressing the importance of flow. The most significant inventory-based metrics are summarized below.

Lead time – Relates to the financial metric Throughput. The lead time should be as short and stable as possible. It reduces the risk that the requirements become outdated and provides predictability. The metric is supported by Poppendieck, who states that the most important thing to measure is the “concept-to-cash” time together with financial metrics.

Queues – In software development, queue time is a large part of the lead time. In contrast to lead time, queue metrics are leading indicators: large queues indicate that the future lead time will be long, which enables preventive action. By calculating the cost of delay of the items in the queues, precedence can be given to the most urgent ones.

Work in Progress – Constraining the WIP in different phases is one of the best ways to prevent large queues. Used in combination with queue metrics, WIP constraints prevent dysfunctional behavior such as simply relabeling the objects in queues as work in progress. The metric is also an indicator of how well the team collaborates: a low WIP shows that the team works together on the same tasks. In addition, the Kanban method, which is built around the idea of constraining WIP, promises that doing so results in overall better software development.

These metrics can be visualized in a cumulative flow diagram. By tracking the investment’s way along the value chain towards becoming throughput, the inventory-based metrics correlate well with the financial metric Investment.
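Both lead time and WIP can be derived from nothing more than per-item start and finish timestamps, the same data that feeds a cumulative flow diagram. The dates below are made-up sample data:

```python
# Deriving the inventory-based metrics (lead time, WIP) from work-item
# start/finish timestamps. The dates are illustrative sample data.

from datetime import date

items = [
    {"id": "A", "started": date(2024, 1, 1), "finished": date(2024, 1, 8)},
    {"id": "B", "started": date(2024, 1, 3), "finished": date(2024, 1, 6)},
    {"id": "C", "started": date(2024, 1, 5), "finished": None},  # still WIP
]

def lead_time_days(item):
    # Lead time: elapsed days from start to finish of one item.
    return (item["finished"] - item["started"]).days

finished = [i for i in items if i["finished"]]
avg_lead_time = sum(lead_time_days(i) for i in finished) / len(finished)

def wip_on(day):
    # WIP on a given day: items started but not yet finished.
    return sum(1 for i in items
               if i["started"] <= day
               and (i["finished"] is None or i["finished"] > day))
```

Sampling `wip_on` across a date range per workflow phase yields exactly the bands of a cumulative flow diagram.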

Cost Metrics

Anderson argues that the only cost metric needed is Average Cost Per Function (ACPF), which should only be used to estimate future operating expenses.

Business Value Metrics

Agile software development puts the focus on the delivery of business value. Methods such as Scrum prioritize the work by value, making it sensible to measure the business value. It has also been observed that the trend in the industry is to measure value.

Hartmann notes that agile methods encourage the development team to be responsible for delivering value rapidly and that the core metric should oversee this accountability. The quick delivery of value means that the investment is converted into value-producing software as soon as possible. Leading metrics of business value involve estimation and are not an exact science.

Mike Cohn offers a possible solution to measure business value, which involves dividing the business case’s value between the tasks. The delivery of value can be displayed in a Business Value Burnup Chart. One way to verify the delivery of business value is to ask the customer whether the features are actually used. It has proved useful to survey the customer over the time of a release, which is much in line with the agile principle of customer cooperation.
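Cohn's approach can be sketched as follows; the story names, values, and sprint assignments are hypothetical, and a real Business Value Burnup Chart would simply plot the cumulative series:

```python
from itertools import accumulate

# Hypothetical split of the business case's value across stories (Cohn's idea)
# and the sprint in which each story was delivered.
story_values = {"login": 30_000, "search": 50_000, "reports": 20_000}
delivered_in = {"login": 1, "search": 2, "reports": 3}

sprints = [1, 2, 3]
value_per_sprint = [
    sum(v for s, v in story_values.items() if delivered_in[s] == sp)
    for sp in sprints
]
burnup = list(accumulate(value_per_sprint))  # cumulative value delivered

print(value_per_sprint)  # [30000, 50000, 20000]
print(burnup)            # [30000, 80000, 100000]
```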

 Quality Metrics

Lean metrics can indicate the products’ quality and provide predictability. For example, large queues in the implementation phase indicate poor quality and a stable lead time contributes to predictability. However, it might be necessary to supplement and balance them with more specific metrics.

A quality metric recommended by the agile community is Technical Debt .

Technical debt is a metaphor referring to the consequences of taking shortcuts in the software development. For example, code written in haste that is in need of refactoring. The debt can be represented in financial figures, which makes the metric suitable to communicate to upper management .

The technical debt metric measures the team’s overall technical indebtedness: known problems and issues being delivered at the end of the sprint. This is usually counted using bugs but could also include deliverables such as training material, user documentation, and delivery media.

The counting of defects can be used as a quality metric. The defect count may occur in various stages of the development.

Counting defects in individual iterations can have a fairly large variation and may paint a misleading picture. Another aspect of defects is where they have been introduced. The fault-slip-through metric measures test efficiency by comparing where the defects should have been found with where they actually were. It monitors how well the test process works and addresses the cost savings of finding defects early. In case studies on the implementation of lean metrics, faults-slip-through has been recommended as the quality metric of choice.

The primary purpose of measuring Faults Slip Through is to make sure that the test process finds the right faults in the right phase, i.e. commonly earlier. Fault Slip Through represents the number of faults not detected in a certain activity; these faults have instead been detected in a later activity.
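A minimal sketch of the faults-slip-through calculation, assuming a fixed ordering of test phases and invented fault records:

```python
# Test phases in order; a fault "slips through" when it is found in a
# later phase than the one where it should have been found.
phases = ["unit test", "integration test", "system test", "operation"]
order = {p: i for i, p in enumerate(phases)}

# Hypothetical fault records: (phase where the fault should have been
# found, phase where it actually was found).
faults = [
    ("unit test", "unit test"),
    ("unit test", "system test"),         # slipped two phases
    ("integration test", "system test"),  # slipped one phase
    ("system test", "system test"),
]

slipped = [f for f in faults if order[f[1]] > order[f[0]]]
fst_ratio = len(slipped) / len(faults)

print(len(slipped))  # 2
print(fst_ratio)     # 0.5
```

Tracking `fst_ratio` per release shows whether the test process is catching faults in the intended, earlier phases.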

Sprint goal success rates: A successful sprint should have a working product feature that fulfills the sprint goals and meets the scrum team’s definition of done: developed, tested, integrated, and documented.

Throughout the project, the scrum team can track how frequently it succeeds in reaching the sprint goals and use success rates to see whether the team is maturing or needs to correct its course.

EVM Metrics

AgileEVMTerms

AgileEVMMetrics

 Predictability Metrics

What many organizations hope to gain from measurement is predictability. In several of the agile methods the velocity of delivered requirements is used to achieve predictability and estimate delivery capacity. The average velocity can serve as a good predictability metric, but can easily be gamed if used for other purposes; for example, velocity used to measure productivity can degrade quality.

Velocity is a capacity planning tool sometimes used in Agile software development. Velocity tracking is the act of measuring said velocity. The velocity is calculated by counting the number of units of work completed in a certain interval, the length of which is determined at the start of the project.

The main idea behind velocity is to help teams estimate how much work they can complete in a given time period based on how quickly similar work was previously completed.

The following terminology is used in velocity tracking.

Unit of work
The unit chosen by the team to measure velocity. This can either be a real unit like hours or days or an abstract unit like story points or ideal days. Each task in the software development process should then be valued in terms of the chosen unit.
Interval
The interval is the duration of each iteration in the software development process for which the velocity is measured. The length of an interval is determined by the team. Most often, the interval is a week, but it can be as long as a month.

To calculate velocity, a team first has to determine how many units of work each task is worth and the length of each interval. During development, the team has to keep track of completed tasks and, at the end of the interval, count the number of units of work completed during the interval. The team then writes down the calculated velocity in a chart or on a graph.

The first week provides little value, but is essential to provide a basis for comparison. Each week after that, the velocity tracking will provide better information as the team provides better estimates and becomes more used to the methodology.
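The calculation described above can be sketched in a few lines; the completed-points history and backlog size are invented:

```python
import math

# Hypothetical record of story points completed per weekly interval.
completed_per_interval = [8, 13, 11, 12]

# The running average velocity is what teams use for capacity planning.
average_velocity = sum(completed_per_interval) / len(completed_per_interval)

# Forecast: how many intervals to burn through a 60-point backlog?
backlog_points = 60
intervals_needed = math.ceil(backlog_points / average_velocity)

print(average_velocity)  # 11.0
print(intervals_needed)  # 6
```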

Velocity chart

The Velocity Chart shows the amount of value delivered in each sprint, enabling you to predict the amount of work the team can get done in future sprints. It is useful during your sprint planning meetings, to help you decide how much work you can feasibly commit to.

You can estimate your team’s velocity based on the total Estimate (for all completed stories) for each recent sprint. This isn’t an exact science — looking at several sprints will help you to get a feel for the trend. For each sprint, the Velocity Chart shows the sum of the Estimates for complete and incomplete stories. Estimates can be based on story points, business value, hours, issue count, or any numeric field of your choice . Please note that the values for each issue are recorded at the time the sprint is started.

 Running Automated Tests measures productivity by the size of the product. It counts test points, defined as each step in every running automated test. The belief is that the number of tests written is a better proxy for a requirement’s size than the traditional lines-of-code metric. The metric addresses the risk of neglected testing, which is usually associated with productivity metrics. It motivates the team to write tests and to design smaller, more adaptive tests. Moreover, it has proven to be a good indicator of complexity and, to some extent, of quality. For measuring release predictability, Dean Leffingwell proposes to measure the projected value of each feature relative to the actual value delivered. However, the goal should not be to achieve total adherence. Instead, the objective should be to stay within a range of compliance to plan, which allows for both predictability and the capturing of opportunities.

Actual Stories Completed vs. Committed Stories

The measure is taken by comparing the number of stories committed to in sprint planning and the number of stories identified in the sprint review as completed.
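A sketch of this measure, using an invented sprint history:

```python
# Hypothetical sprint history: (stories committed in planning,
#                               stories accepted as done in the review).
sprints = [(10, 8), (9, 9), (12, 10)]

# Per-sprint success rate and the overall rate across all sprints.
rates = [done / committed for committed, done in sprints]
overall = sum(d for _, d in sprints) / sum(c for c, _ in sprints)

print([round(r, 2) for r in rates])  # [0.8, 1.0, 0.83]
print(round(overall, 2))             # 0.87
```

A rising trend in `rates` suggests the team's commitments are becoming more reliable.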

Visualization

To get the full value of agile measurement, the metrics need to be acted upon. Visualization of the metrics helps ensure that actions are taken and achieves transparency in the organization. The company’s strategies become communicated and coordination increases.

Spider Chart

In Kanban the visualization of the workflow is an important activity and facilitates self-organizing behavior. For example, when a bottleneck is shown, employees tend to work together to elevate it. Both Kanban and Scrum use card walls to visualize the workflow, where each card represents a task and its current location in the value chain. The inventory-based metrics can then be collected using the card walls. A very effective way to visualize the inventory-based metrics is cumulative flow diagrams.

The cumulative flow diagram is an area graph, which shows the workflow on a daily basis.

Cumulative Flow Diagram

Are you a manager or business stakeholder working on an Agile Scrum project and facing the following issues?

You have a strong feeling that there are bottlenecks in the process but are facing a lot of difficulty in mapping it to the process.

You are not satisfied with the burn-up and burn-down charts produced by the team and are interested in getting more insight into the process.

There is a panacea to solve all of the above issues. The name of the magic potion is “cumulative flow diagram”. Cumulative flow diagrams (CFDs) applied on top of the basic principles of Kan (visual) + Ban (card) can give you an insight into the project and keep everyone updated. A CFD can be an extremely powerful tool when applied to a Scrum model.

A Cumulative Flow Diagram (CFD) is an area chart that shows the various statuses of work items for a product, version, or sprint. The horizontal x-axis in a CFD indicates time, and the vertical y-axis indicates cards (issues). Each coloured area of the chart equates to a workflow status (i.e. a column on your board).

A CFD can be useful for identifying bottlenecks. If your chart contains an area that is widening vertically over time, the column that equates to the widening area will generally be a bottleneck.

Multi-color CFDs look complicated but pretty. The pain of understanding them is worth the gain you get from them. These diagrams can help you make critical business decisions. They will give you better visibility into time-to-market dates for features. Applying a CFD on top of a Scrum project will help you see an accurate picture of the progress of your project.
Giving a CFD to the team will help them follow the “Inspect and Adapt” Scrum principle. They can further zoom into the work in progress to see the various flow states. A CFD will help you analyze the actual progress and bottlenecks in any project. A CFD can be drawn using area charts in MS Excel.

Workflow for CFD

Cumulative Flow Diagram

There are many other uses of CFDs besides finding bottlenecks. CFDs are multi-utility graphs that continuously report the true status of an Agile project. CFDs can help in determining lead time, cycle time, size of backlog, WIP, and bottlenecks at any point in time.
Lead time is the time from when the feature entered the backlog to its completion. This is of utmost interest to business stakeholders. It can help business people decide about the time to market for features. They can plan marketing campaigns based on lead times.
Cycle time is the time from when the team starts work on an item to its completion. This helps project leads make important decisions when selecting items to work on.
WIP is the work currently lying in the different stages of the software lifecycle. Cycle time is directly proportional to the number of work-in-progress items. Keeping a limit on WIP is a key to success in any Agile project. In Scrum we try to limit the WIP within a sprint, but limiting it across the various flow states will help further in gaining better control and visibility in a sprint.
Controlling WIP is the mantra for victory in any project. With CFDs, WIP is no longer a black box and anyone can see the work distribution at any point in time. Thus CFDs provide better insight and the power for better governance in any Agile methodology. A single diagram can contain information about lead time, WIP, queues and bottlenecks.
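To make the mechanics concrete, here is a minimal sketch that derives cumulative flow data from a hypothetical event log (item names, states and day numbers are all invented). Each day's counts are the bands of the CFD; a band that keeps widening over time marks a bottleneck:

```python
# States are ordered left-to-right as columns on the board.
states = ["backlog", "in progress", "done"]

# Hypothetical event log: for each item, the day it entered each state.
entered = {
    "A": {"backlog": 1, "in progress": 2, "done": 4},
    "B": {"backlog": 1, "in progress": 3, "done": 6},
    "C": {"backlog": 2, "in progress": 4},   # still in progress
    "D": {"backlog": 3},                     # still queued
}

def state_on(day, item):
    """Latest state the item had entered by the given day, or None."""
    current = None
    for s in states:
        if item.get(s, float("inf")) <= day:
            current = s
    return current

# Cumulative flow data: count of items in each state, day by day.
for day in range(1, 7):
    counts = {s: 0 for s in states}
    for item in entered.values():
        s = state_on(day, item)
        if s:
            counts[s] += 1
    print(day, counts)
```

Plotting these daily counts as stacked areas gives the CFD; the vertical gap between bands is the WIP of that state, and the horizontal distance between the backlog and done curves approximates lead time.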

In Scrum, the Burndown Chart is a standard artifact. It allows teams to monitor progress and trends. The Burndown Chart tracks completed stories and the estimated remaining work. There are also variations of the Burndown Chart. For example, the Burnup Chart contains information about scope changes. For even better predictability, story points may be used: stories are assigned points according to the estimated effort to implement them.

A Burndown Chart shows the actual and estimated amount of work to be done in a sprint. The horizontal x-axis in a Burndown Chart indicates time, and the vertical y-axis indicates cards (issues).

Iteration Burn Down chart

Use a Burndown Chart to track the total work remaining and to project the likelihood of achieving the sprint goal. By tracking the remaining work throughout the iteration, a team can manage its progress and respond accordingly.
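A simple sketch of the underlying arithmetic, with invented numbers; comparing the actual burn rate against the ideal line shows whether the sprint goal is at risk:

```python
# Hypothetical sprint: 40 points committed over 10 working days.
total_points = 40
days = 10
ideal = [total_points - total_points * d / days for d in range(days + 1)]

# Actual points remaining, recorded at each daily scrum (first 6 days).
actual = [40, 38, 35, 35, 30, 26, 22]

# Trend: average points burned per day so far, projected forward.
burn_rate = (actual[0] - actual[-1]) / (len(actual) - 1)
projected_days = actual[-1] / burn_rate  # days still needed at this pace

print(ideal[:3])                 # [40.0, 36.0, 32.0]
print(burn_rate)                 # 3.0
print(round(projected_days, 1))  # 7.3
```

Here, six days in, 22 points remain but only 4 working days are left; at the current pace of 3 points per day the goal is at risk and the team can respond accordingly.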

To communicate the KPIs, many organizations use Balanced Scorecards or Dashboards. Dashboards are used to effectively monitor, analyze and manage the organization’s performance. The level of detail of the dashboards varies, ranging from graphical high-level KPIs to low-level data for root cause analysis. In order to communicate the metrics and ensure they are acted upon, the measurement practice should visualize the metrics to achieve transparency, while being careful not to create dysfunctional behavior with the visualization.

Continuous Improvement

Kaizen is the Japanese word for continuous improvement and is a part of lean software development. It is also found in agile software development. For example, Scrum has retrospectives after each sprint where improvements are identified. The retrospectives have similarities to Deming’s Plan-Do-Check-Act (PDCA) cycle.

The PDCA is a cycle of four phases, which should drive continuous improvement. What is notable is that PDCA prescribes measurement to verify that improvements are achieved. Petri Heiramo observes that retrospectives often lack measurements and argues that this can lead to undesirable results. Without any metrics, it will be difficult to determine whether any targets have been met. This in turn can be demoralizing for the commitment to the improvement efforts.

Heiramo suggests that three questions should be added to the retrospective: What benefit or outcome do we expect from this improvement or change? How do we measure it? Who is responsible for measuring it? Diagnostics can be used to obtain these measurements. In order for the diagnostics to achieve process improvement, the measurement practice should be an integrated part of a process improvement framework.

Agile Metrics at  MAMDAS – a software development unit in the Israeli Air Force 

MAMDAS, a software development unit in the Israeli Air Force, develops large-scale, enterprise-critical applications intended to be used by a large and varied user population. The project is developed by a team of 60 skilled developers and testers, organized in a hierarchical structure of small groups.
During December 2004, the first XP team was established at MAMDAS. It was encouraging to observe that after the first two-week iteration “managers were very surprised to see something running” and everyone agreed that “the pressure to deliver every two weeks leads to amazing results”. Still, accurate metrics are required in order to make professional decisions, to analyze long-term effects, and to increase the confidence of all management levels with respect to the process that XP inspires.

They described four metrics and the kinds of data that are gathered to calculate them. These four metrics present information about the amount and quality of work that is performed, about the pace at which the work progresses, and about the status of the remaining work versus the remaining human resources.

Product size, initially just called ‘Product’, is the first metric. It aims at presenting the amount of completed work. The data selected to reflect the amount of work is the number of test points. One test point is defined as one test step in an automatic acceptance testing scenario or as one line of unit tests. The number of test points is calculated for all kinds of written tests and is gathered per iteration per component. Additional information is gathered with respect to the number of test points for tests that pass, for tests that fail, and for tests that do not run at all.

Pulse is the second metric, which aims to measure how continuous the integration is. The data is automatically gathered from the development environment by counting how many check-in operations occur per day. The data is gathered for code check-ins, automatic-test check-ins, and detailed-specification check-ins. When referring to code, this means code plus its unit tests.

Burn-down is the third metric. It presents the project’s remaining work versus the remaining human resources. This metric is supported by the main planning table, which is updated for each task according to the kind of activity (code, tests, or detailed specifications), dates of opening and closing, estimated and actual development time, and the component it belongs to. In addition, this metric is supported by a human resources table that is updated when new information regarding teammates’ absence arrives. This table also contains the product component assigned to each teammate and the percentage of his or her position in the project. By using the data of these tables, this metric can present the remaining work in days versus the remaining human resources in days. This information can be presented per week or for any group of weeks up to a complete release, both for the entire team and for any specific component.

The burn-down graph answers a very basic managerial question: are we going to meet the goals of this release, and if not, what can we do about it? Release goals were set before each release – each goal is a high-level feature. Goals are defined by the user, and are verified by matching a rough estimate of the effort required to complete each goal (given by the development team) to the total available resources.
Once goals are defined and estimated, both remaining work and remaining resources are based on this initial estimation, which is refined as the release progresses.
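The burn-down comparison described above reduces to summing remaining work and remaining capacity per component. The sketch below uses invented planning-table data; the component names, day counts and position percentages are hypothetical:

```python
# Hypothetical planning data, in the spirit of the MAMDAS tables:
# remaining work (person-days) per component.
remaining_work = {"radar": 30.0, "comms": 18.0}

# Teammates: (component, working days left in release, % of position).
teammates = [
    ("radar", 20, 1.00),
    ("radar", 20, 0.50),
    ("comms", 20, 1.00),
]

# Remaining capacity per component, weighted by position percentage.
capacity = {}
for comp, days_left, share in teammates:
    capacity[comp] = capacity.get(comp, 0.0) + days_left * share

# A negative gap means the release goal is at risk for that component.
for comp in remaining_work:
    gap = capacity[comp] - remaining_work[comp]
    print(comp, capacity[comp], remaining_work[comp], gap)
```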

Faults is the fourth metric, which counts faults per iteration. During the release, all faults that were discovered in a specific iteration were fixed at the beginning of the next iteration. The faults metric is required to continuously monitor the product’s quality. Note that the product size metric does not do this: although it counts test points, it does not correlate the number of failed or un-run test steps with the number of actual bugs. The feedback on the size metric was that it motivates writing tests and that it can also be regarded as a complexity metric.

The use of the presented metrics mechanism increases the confidence of the team members, as well as of the unit’s management, with respect to using agile methods. Further, these metrics enable an accurate and professional decision-making process for both short- and long-term purposes.

Technical Debt in Petrobras , Brazil

As an oil and gas company, Petrobras (http://www.petrobras.com.br) develops software in areas which demand increasingly innovative solutions in short time intervals. The company started officially with Scrum in March 2009, using its lightweight framework to create collaborative self-organizing teams that could effectively deliver products. After the first team had adopted Scrum with relative success, the manager noticed that the framework could be used in other teams, and thus he invested in training and coaching so that the teams could also have the opportunity to try the methodology. At that time, only the software development department for E&P, whose software helps to exploit and produce oil and gas, had management endorsement in adopting Scrum and agile practices that would let teams deliver better products faster. About a year and a half later, all teams in the department were using Scrum as their software development process. The developers and the stakeholders in general noticed significant gains with the adoption of Scrum.
The results varied with the skill of the team leadership in agile methodologies, customer participation, the level of collaboration between team members, and technical expertise, among other factors.
The architecture team of the software development department for E&P was composed of four employees, whose responsibility was to help teams and offer support for resolution of problems related to agile methods and architecture. At that time, it had to work with 25 teams which had autonomy regarding its technical decisions. In fact, autonomy was one of the main managerial concerns when adopting agile methods.
After Scrum adoption, there was active debate, training and architectural meetings about whether Agile engineering practices should also be adopted in parallel with the managerial practices; in hindsight, adopting them would have accelerated the benefits. But the constraints of time and budget, decisions made by non-technical staff, and the bureaucracy in areas such as infrastructure and database led to the postponement of those efforts. Moreover, the infrastructure area had only build and continuous integration (CI) tools available, and unfortunately these tools were not taken seriously by the teams. Automated deployment was relatively new and was postponed because of the fear of implementing it at an immature stage. Other tools and monitoring mechanisms were not used by the teams even though the architecture team was aware of their possible benefits.
Despite all the initiatives in training and support in agile practices such as configuration management, automated tests and code analysis, the teams, represented by 25 focal points in architectural meetings, did not show much interest in adopting many agile practices – particularly technical practices. Delivering the product on the date agreed with the customer and maintaining the legacy code were the most urgent issues. Analyzing retrospectively, it seems that the main cause for this situation was that debt was being accrued unconsciously. Serving the client was a much more visible and imperative goal. This can be one explanation for the ineffectiveness of prior attempts at introducing technical practices, be it by means of specialized training or through the support of the architecture team.
With these not-so-effective attempts to promote continuous improvement with the teams, the architecture team sought a way to motivate them to experiment with agile practices without a top-down “forced adoption”. The technical debt metaphor was the basis for the approach.

Given the context aforementioned, especially the role of the architecture team serving various teams in parallel and the fact that the teams had autonomy in their technical decisions, there was a need for technical debt estimation and visualization at a “macro level”, i.e., not only associated with source code aspects but with technical practices involving the product in general. This would give the opportunity to see the actual state of the department and indicate the “roadmap” for future interaction with the development teams.
Two kinds of actions were involved: introducing this kind of visualization, and the management activities based on that visualization. The architecture team modeled a board where the lines correspond to teams and the columns are the categories and subcategories of technical debt, based on the work of Chris Sterling. In each cell, formed by the pair team x technical debt category, the maturity of the team was evaluated according to predefined criteria. They used the colors red, yellow and green to show the compliance level of each criterion.

After the design of the technical debt board, each team was invited to a rapid meeting in front of the board, where all team members talked about the status of each criterion, translating it to the respective color. During these meetings the teams could also conclude that some categories were not relevant or applicable to their systems. This meeting was to happen every month, so that the progress of each category could be updated. At the end of the meeting, the team members agreed which of the categories would be the aim for the next meeting or, to put it another way, where they would invest their efforts in reducing the technical debt.

To measure the technical debt at source code level, the architecture team made use of the tool Sonar (http://www.sonarsource.org/). Sonar has a plugin that allows estimating how much effort would be required to fix each debt of the project. Sonar considers as debts: cohesion and complexity metrics, duplications, lack of comments, coding rule violations, potential bugs, and unit tests that are missing or useless. The important aspect is that an estimate is calculated, and Sonar shows the result financially and as the effort in man-days necessary to take the debt to zero (the daily rate of the developer in the context of the project must be provided).
It is important to mention that Sonar, in fact, uses many other tools internally to analyze the source code – each one for a different aspect of the analysis. It works as an aggregator to display the results of tools such as PMD, FindBugs, Cobertura and Checkstyle, among others.
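The shape of such an estimate can be sketched as follows; the issue types, per-issue remediation efforts, issue counts and daily rate below are invented for illustration, not Sonar's actual model:

```python
# Assumed remediation effort (hours) per issue type -- hypothetical values.
effort_hours = {
    "duplicated_block": 0.5,
    "rule_violation": 0.25,
    "uncovered_complexity": 1.0,
}
# Hypothetical issue counts reported by static analysis.
issue_counts = {
    "duplicated_block": 40,
    "rule_violation": 200,
    "uncovered_complexity": 30,
}

total_hours = sum(effort_hours[k] * issue_counts[k] for k in issue_counts)
man_days = total_hours / 8    # assuming an 8-hour working day
daily_rate = 400.0            # developer daily rate (must be provided)
debt_cost = man_days * daily_rate

print(total_hours)  # 100.0
print(man_days)     # 12.5
print(debt_cost)    # 5000.0
```

Expressing the debt as a cost figure, as Sonar does, makes the metric easy to communicate to upper management.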

The teams could make the debt rise during a whole month without even knowing about it.
To address this situation, the architecture team created a virtual tiled board, where each tile had information about the build state of each team in the department. The major information was the actual state of the build and the project name. If everything was ok (compilation and automated tests), the tile was green; if the compilation was broken, the tile turned red; and if there were failed tests, the tile turned yellow. Besides the build information, there was other information: total number of tests, number of failed tests, test coverage, number of lines and technical debt (calculated in Sonar).
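The tile colour rule reduces to a small function; this is a sketch of the rule as described, not the actual implementation:

```python
def tile_color(compiles: bool, failed_tests: int) -> str:
    """Colour rule for a tile on the virtual tiled board (sketch)."""
    if not compiles:
        return "red"      # broken compilation
    if failed_tests > 0:
        return "yellow"   # build compiles but some tests fail
    return "green"        # compilation and all automated tests ok

print(tile_color(True, 0))   # green
print(tile_color(False, 3))  # red
print(tile_color(True, 2))   # yellow
```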

The virtual tiled board was placed on a big screen where everybody in the room could see it from their workplaces. The main objective was that when team members saw their failed build, the instant feedback would lead them to take corrective actions so the build could go green again.
As the mechanisms of feedback were implemented, the teams had instant information
about what should be done to lower the levels of technical debt. With this information, they could prioritize which categories they would try to improve in the next month. If the team had some difficulties addressing any of the categories, they could call upon the architecture team support.

PEARL II : Agile Project Management for a value-driven approach

PEARL II : Agile Project Management ensures a value-driven approach that allows teams to deliver high-priority, high-quality work. Agile Project Management establishes the project’s context. It enables the manager to manage the team’s environment, encourage team decision making and promote autonomy whenever possible. Agile Project Management expects the best out of people, elevates the individual and gives them respect. It helps foster a team culture that values people and encourages healthy relationships.

Introduction

Agile_leadership

One of the common misconceptions about agile processes is that there is no need for agile project management, and that agile projects are self-reliant. It is easy to understand how an agile process’ use of self-organizing teams, its rapid pace, and the decreased emphasis on detailed plans lead to this perception. In a recent e-group discussion, a project manager at a company that was implementing agile had been moved to another area because “…agile doesn’t require project management capability.”

However, the truth is that agile processes still require project management.

Agile methodology does not clearly define the role of manager but instead defines similar roles, such as coach or facilitator, that perform the role of the Agile project manager.

An Agile project manager understands how the Agile delivery engine works – that the concept is based on self-organization and undisturbed activity. In addition, he or she has the ability to manage business needs and goals, requirements, organizational models, contracts, and overarching as well as ‘rolling’ planning methods.

Agile Project Manager

In Agile transitions, there is a need for an Agile project manager when the project and delivery engine is exposed to complexity factors – as, for example, when several teams collaborate on a release from a number of international locations. Requirements themselves may also lead to complexity, such as when teams are faced with regulatory requirements or the need for extremely rapid alteration cycles. Other complexity factors are outsourcing and procurement – that is, the purchase of various services from multiple providers – or when a company starts too many projects at the same time, resulting in staff having so much to do that nothing gets done.

Several of these complexity factors exist in many companies, and they have a cumulative effect. Agile project managers can achieve great things in such complex conditions by designing a comprehensive project environment providing oversight and structure. Agile contracts may be a valuable component of the project environment.

The Agile fixed price is a contractual model agreed upon by suppliers and customers of IT projects that develop software using Agile methods. The model introduces an initial test phase after which budget, due date, and the way of steering the scope within the framework is agreed upon.

This differs from traditional fixed-price contracts in that fixed-price contracts usually require a detailed and exact description of the subject matter of the contract in advance. Fixed price contracts aim at minimizing the potential risk caused by unpredictable, later changes. In contrast, Agile fixed price contracts simply require a broad description of the entire project instead of a detailed one.

In Agile contracts, the supplier and the customer together define their common assumptions in terms of the business value, implementation risks, expenses (effort) and costs. On the basis of these assumptions, an indicative fixed-price scope is agreed upon, which is not yet contractually binding. This is followed by the test phase (checkpoint phase), during which the actual implementation begins. At the end of this phase, both parties compare the empirical findings with their initial assumptions. Together, they then decide on the implementation of the entire project and fix the conditions under which changes are allowed to happen.

Further aspects of an Agile contract are risk share (both parties divide the additional expenses for unexpected changes equally among themselves) or the option of either party leaving the contract at any stage (exit points).

Jim Highsmith, one of the originators of the Agile Manifesto and a recognized expert in  agile approaches, has defined agility in project management by the following statements:
“Agility is the ability to both create and respond to change in order to profit in a turbulent  business environment,” and “Agility is the ability to balance flexibility and stability” .
In contrast with traditional project methods, agile methods emphasize the incremental delivery of working products or prototypes for client evaluation and optimization. While so-called “predictive” project management methods assume that the entire set of requirements and activities can be forecast at the beginning of the project, agile methods combine all the elements of product development (requirements, analysis, design, development and testing) into brief, regular iterations. Each iteration delivers a working product or prototype, and the response to that product or prototype serves as crucial input into the succeeding iterations.
Agile theory assumes that changes, improvements and additional features will be incorporated throughout the product development life cycle, and that change, rather than perceived as a failing of the process, is seen as an opportunity to improve the product and make it more fit for its use and business purpose.

The Need for Agile Project Management

Project management is critical to the success of most projects, even projects following agile processes. Without management, project teams may pursue the wrong project, may not include the right mix of personalities or skills, may be impeded by organizational dysfunction, or may not deliver as much value as possible. There are initiatives to formalize these management responsibilities. 

When it comes to agile project management roles, it’s worth noting that most agile processes – Scrum in particular – do not include a project manager. Without a specific person assigned, agile “project manager” roles and responsibilities are distributed among others on the project, namely the team, the ScrumMaster and the product owner.

Waterfall-vs-Agile


Agile Project Management and Shared Vision

For a team to succeed with agile development it is essential that a shared vision be established. The vision must be shared not just among developers on the development team but also with others within the company. Most plan-driven processes also advocate the need for a shared vision; however, if that vision isn’t communicated or is imprecise or changing, the project can always fall back on its detailed (but not necessarily accurate) lists of tasks and procedures. This is not the case on an agile project; agile project participants use the shared vision to guide their day-to-day work much more actively.

The formation of the project vision is not the responsibility of the agile project manager; usually the vision comes directly from a customer or customer proxy, such as a product manager. The project manager, however, is usually involved in distilling the customer’s grand vision into a meaningful plan for everyone involved in the project. Rather than a detailed command-and-control plan based on Gantt charts, the agile plan’s purpose is to lay out an investment vision against which management can assess and frequently adjust its investments, to establish a common set of understandings from which emergence, adaptation and collaboration can occur, and to set expectations against which progress will be measured. The agile project manager nurtures project team members to implement the vision, understands the effects of the mutual interactions among the project’s parts, and steers the project towards continuous learning and adaptation on the edge.

Scope of Agile Project Management:

Agile Process


In an agile project, the entire team is responsible for managing the team; it is not just the project manager’s responsibility. When it comes to processes and procedures, common sense is favored over written policies.

This ensures that there is no delay in management decision-making, so things can progress faster.

In addition to being a manager, the agile project management function should also demonstrate leadership and skill in motivating others. This helps retain the spirit among the team members and encourages the team to maintain discipline.

The agile project manager is not the ‘head’ of the software development team. Rather, this function facilitates and coordinates the activities and resources required for quality, speedy software development.

Agile Project Management and Obstacles

Most agile processes prescribe a highly focused effort on creating a small set of features during an “iteration” or “sprint” after which the team quickly regroups and decides on the set of features for the next iteration or sprint. While an iteration is ongoing the team members are expected to focus exclusively on the current iteration.

While this sharp focus leads to greater productivity during the current iteration it can cause a bit of a billiard-ball effect as the conclusion of one iteration can bounce out the start of the next. A project manager who spends a small amount of time looking forward at the next iteration is an excellent buffer against this effect. For example, many organizations have travel restrictions that require plane tickets to be purchased two weeks in advance. If a team could benefit from having a remotely located employee on site during the coming iteration the time to plan for that is during the current iteration.

Another type of obstacle may be a team member.

While agile processes such as Extreme Programming and Scrum rely on self-organizing teams, an agile project manager cannot simply turn a team loose on a project. The agile manager must still monitor that corporate policies or project rules are followed. Participation on an agile team does not turn all developers into model employees. In most cases the team itself will employ some form of sanctioning on an employee who is not working hard enough or is exhibiting other performance or behavior problems. However, in the most severe cases the collective team usually cannot be the one to terminate or officially reprimand an employee. Performance feedback can always be expressed in terms of the team’s views of the individual’s contribution. But if a counseling or coaching session is necessary, it is usually best held between just the project manager and the team member.

Businesses must evolve to use flexible practices such as agile project management, or risk significantly slowing down their productivity and endangering their profits.

Round Table Event at hosting and colocation firm UKFast’s Manchester office in Jan 2014

UKFast is one of the UK’s leading managed hosting and colocation providers, supplying dedicated server hosting, critical application hosting, and cloud hosting solutions. It fully owns, manages and operates its ISO-accredited data centre complex, which offers over 30,000 sq ft of enterprise-grade facilities for colocating IT equipment.

All of its hosting solutions are designed to help businesses grow, with 24/7/365 UK-based support and dedicated account management as standard. It is exceptionally proud of the standard of service it gives to clients and believes this is what really sets it apart from other providers.

Old fashioned firms must encourage the ownership and innovation needed to create a happier, more modern workplace or face the consequences of being left behind by companies that do.

That’s the view of six project management experts who gathered at a roundtable event to discuss whether old management techniques have become nothing more than a hindrance to businesses and whether traditional practices still have a role in the workplace.

Lawrence Jones, CEO of hosting and colocation firm UKFast, believes a fun work environment, bolstered by flexible and collaborative project management techniques, results in happier people within the company.

Jones said: “I believe an enjoyable workplace is often a productive workplace. A fun and friendly work ethic is the route to economic recovery and adopting agile project management techniques comes hand in hand with this.

“Having a culture in place that encourages collaboration not only motivates the team, it also means that clients receive a better, faster service that isn’t weighed down by traditional box-ticking procedures.”

Agile project management focuses on the continuous improvement, team input and delivery of essential quality products. By breaking up a project into “sprints” – worked on by different team members simultaneously – agile encourages collaboration and integration unlike other, traditional methods that are often rigidly sequential.

Ninety per cent of respondents in the 7th Annual State of Agile Development Survey cited that agile improved their ability to manage changing priorities compared to waterfall, while the top two other benefits listed were increased productivity (85 per cent) and improved project visibility (84 per cent).

Ian Carroll, Principal Consultant at ThoughtWorks, agreed: “It is about doing a good job. Nobody comes into work to do a bad job. Agile project management requires cross-functional teams, which result in happier people coming together to make for a much happier workplace.”

Clare Walsh, Digital Delivery Director at Redweb, believes that the teamwork created by agile projects is vital for a company’s success.

Walsh said: “It’s about creating that sense of involvement. When somebody really feels involved, they can own it. That’s the whole point of agile, that the team feel some kind of ownership about what they are creating. It’s about empowerment and people feeling like they have the ability to make decisions.”

Beccy Weeks, IS Manager at Saint Gobain Building Distribution, has seen agile bring big benefits to her company, including improvements to the integration of new staff members.

She said: “Agile prevents isolation. We’ve found that it’s easier to bring new people into the team as they get on straight away. They are immediately communicating; they feel part of the team and can join in with the banter. They are integrated much more quickly using the agile approach than with waterfall management.”

James Cannings, Chief Technical Officer at MMT Digital believes the culture within his workplace has positively transformed due to the change to agile management.

He said: “Agile has completely transformed the culture of my agency over the last two years. We were proud of the culture we had, but I now feel bad in the sense of how we treated the developers who just worked from one project to the next. It was quite a sort of hierarchical structure. Agile now creates a fun environment, which brings the whole team together.”

Mark Kelly, Digital and Social Media Marketing Consultant regrets the management approach he previously undertook.

He said: “The developers and designers were treated like mushrooms in the waterfall approach, not knowing what the guys next to them were doing. The agile approach therefore most certainly provides visibility for everyone on the team.”

The experts gathered at hosting and colocation firm UKFast’s Manchester office to debate the topic of project management and how businesses can implement new techniques to the best effect. Here are their top tips:

  1. Don’t forget, agile is a means to an end rather than an end point itself
  2. How you visualise the world shapes how you perceive it, so having a card wall or something similar to illustrate tasks can really help teams
  3. Agile isn’t the best for building a skyscraper but is great for fast-paced software development – it could just be about design and development rather than the whole project
  4. It’s about creating a culture of fun as well as ensuring a fast time to market.

Agile Project Management and Organizational Dysfunction

Many companies have at least one dysfunctional area. This may be the “furniture police” who won’t let programmers rearrange furniture to facilitate pair programming. Or it may be a purchasing group that takes six weeks to process a standard software order. In any event these types of insanity get in the way of successful projects. One way to view the project manager is as the bulldozer responsible for quickly removing these problems.

The Scrum process includes a daily meeting during which all team members are asked three questions. One of these questions is, “What is in the way of you doing your work?” The agile project manager takes it upon himself or herself to eliminate these impediments. Ideally, he or she becomes so adept at this that impediments are always removed within 24 hours (that is, before the next daily meeting).

Participate in enough agile projects and you begin to hear the same impediments brought up time after time. For example:

    • My ____ broke and I need a new one today.
    • The software I ordered still hasn’t arrived.
    • I can’t get the ____ group to give me any time and I need to meet with them.
    • The department VP has asked me to work on something else “for a day or two.”

The project manager is responsible for optimizing team productivity; this means it’s his responsibility to do whatever possible to minimize obstacles. Most organizations will not want every developer calling the ordering department to follow up on delivery dates for software. Similarly, a project manager who knows when to push the IT manager for quick setup of a new PC and when not to push (perhaps saving a favor for later) will be more effective than every programmer calling that same IT manager.
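
The 24-hour impediment-removal target mentioned earlier can be made concrete with a small tracking sketch. The following is a hypothetical log, not part of any Scrum tooling; the class and method names are invented for this example.

```python
# Hypothetical impediment log for an agile project manager: any item
# still open 24 hours after being raised (i.e. past the next daily
# meeting) is flagged as overdue. All names here are illustrative.
from datetime import datetime, timedelta

class ImpedimentLog:
    def __init__(self):
        self.open_items = []  # list of (description, raised_at) pairs

    def raise_impediment(self, description, raised_at):
        """Record an impediment brought up at the daily meeting."""
        self.open_items.append((description, raised_at))

    def resolve(self, description):
        """Remove an impediment the project manager has cleared."""
        self.open_items = [i for i in self.open_items if i[0] != description]

    def overdue(self, now):
        """Items still open past the 24-hour removal target."""
        return [d for d, t in self.open_items if now - t > timedelta(hours=24)]

log = ImpedimentLog()
t0 = datetime(2014, 1, 6, 9, 0)
log.raise_impediment("software order still hasn't arrived", t0)
log.raise_impediment("need a new laptop today", t0)
log.resolve("need a new laptop today")  # fixed before the next meeting

# 30 hours later, one impediment has outlived the 24-hour target.
print(log.overdue(t0 + timedelta(hours=30)))
```

Running the sketch prints the single unresolved impediment, the kind of item a project manager would escalate before the next daily meeting.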

The agile project manager may also need to work out resource movements with a realistic transition plan that has minimal impact on the business.

Agile Project Management and Politics

Politics are at play in almost every organization. Most organizations have only limited funds that may be applied across a spectrum of competing projects and new project ideas. Projects compete for budget dollars (team size, tools, etc.), personnel (everyone wants the best programmer), resources (time or access to the large database server), and attention from higher level managers. Too many projects fail for political reasons. A project manager uses the various agile mechanisms to minimize politics and keep everything visible and obvious.

For instance, the project manager works with the customer to ensure that the product backlog (Scrum) or stories (XP) is visible and everyone understands that it directs the team to the most profitable and valuable work possible. The project manager uses product increments and demonstrations of working functionality to keep everyone aware of real progress against goals, commitments, and visions, thereby minimizing opportunities for rumors, misinformation, and other weapons of political maneuvering. Working with the customer, the project manager helps the customer and organization to value results instead of reports.

Agile Project Leadership in Agile Project Management

Agile project leadership is a key parameter in ensuring project success within the constraints and boundaries of the organization.
There are many leadership models discussed in popular leadership literature, from Jim Collins to John C. Maxwell. Most of these models address leadership at the organizational level. Agile project leadership has a narrower canvas: ensuring successful agile projects and successful agile adoptions. The leadership levels applicable to agile project management are collaborative leadership, servant leadership and transformative leadership, at Levels 2, 3 and 4 respectively. Level 1, positional leadership, is not applicable to the agile project management canvas.

Collaborative Leadership

Here the leader gets things done through collaboration. Relationship is the key characteristic of this level. A collaborative leader builds a strong relationship with the people. Leaders at this level care for their people. They support their people and motivate them constantly. The true collaborative nature of leadership stitches the bond between the leader and the team. Many agile teams experience collaborative leadership. An agile coach builds a strong relationship among the scrum team through his coaching and facilitation skills.

Servant Leadership

Here the leader serves first to lead consequently. Serving is the key idea in this level.
The term “Servant Leadership” was coined by Robert K. Greenleaf in “The Servant as Leader”, an essay that he first published in 1970. In his essay, he said, “The servant-leader is servant first… It begins with the natural feeling that one wants to serve, to serve first. Then conscious choice brings one to aspire to lead.” The idea of servant leadership can be traced to the 4th century B.C.: Chanakya, in his book Arthashastra, wrote that a king [leader] is a paid servant and enjoys the resources of the state together with the people.

A servant leader serves the team unequivocally. Leaders at this level gain respect by serving the team. They listen to the team; they take cues from observing the team and empower it in decision-making. Serving is a leadership attitude and a mindset. Agile coaches are expected to have this attitude.

Robert Greenleaf introduced the concept of the “servant-leader.” Perhaps this is the most appropriate way of thinking of the agile project manager. On an agile project the project manager does not so much manage the project as both serve and lead the team. Perhaps this is one reason why, anecdotally, it seems much more common to see an agile project manager also function as a contributor to the project team (whether writing or running tests, writing code or documentation, etc.).

Plan-driven software methodologies use a command-and-control approach to project management. A project plan is created that lists all known tasks, and the project manager’s job then becomes one of enforcing the plan. Changes to the plan are typically handled through “change control boards” that either reject most changes or institute enough bureaucracy that the rate of change is slowed to the speed the plan-driven methodology can accommodate. There can be no servant-leadership in this model. Project managers manage: they direct, administer and supervise.

Agile project management, on the other hand, is much more about leadership than about management. Rather than creating a highly detailed plan showing the sequence of all activities, the agile project manager works with the customer to lay out a common set of understandings from which emergence, adaptation and collaboration can occur. The agile project manager lays out a vision and then nurtures the project team to do its best to achieve it. Inasmuch as the manager represents the project to those outside it, he or she is the project leader. However, the project manager serves an equally important role within the project while acting as a servant to the team, removing impediments, reinforcing the project vision through words and actions, battling organizational dysfunction, and doing everything possible to ensure the success of the team. The agile project manager is a true coach and friend to the project team.

With “light touch” control, agile project managers realize that increased control doesn’t cause increased order;  they approach management with courage by accepting that they can’t know everything in advance, and relinquish some control to achieve greater order.

Throughout the project, the project manager identifies practices that aren’t followed, seeks to understand why, and removes obstacles to their implementation. Used thus, for example, the agile practices provide simple generative rules without restricting autonomy and creativity.

Transformative Leadership

Here the leader transforms others into leaders; the key characteristic of this level is transformation. Transformative leaders lead by example; they work through organizational constraints well; they glide over organizational politics; they stretch the organizational boundaries and lead the team into new areas. Transformative leaders develop others to reach Levels 3 and 4. They keep the big picture in mind and always think globally.

Dysfunctions of Teams

Teams go through many levels of maturity. Bruce Tuckman describes the four stages of Forming, Storming, Norming and Performing. Patrick Lencioni takes us through the Five Dysfunctions of a Team. Teams don’t start on day one as a productive, self-organizing team; they go through a process, with bumps and challenges. The agile project manager is responsible for getting the team through these phases with as little pain as possible. Even after a team reaches a mature stage, the slightest change can revert it to a former, less mature stage.

Agile software development is very fast paced, and in order to accommodate change effectively it is very disciplined, requiring constant attention to the process, the results and the team in order to stay on track. Agile methodologies describe many practices that guide us through the mechanics of building software in an agile fashion. But we must also address the changes required in leadership style in order to see the benefits we strive for by adopting agile methodologies.
 
Some new agile PMs take such a strong ‘hands-off’ approach that the team struggles to get through the first couple of phases of team maturity and agile appears to fail.
The agile PM must find ways to get team members to know each other; then they can start to trust each other, learn how to communicate, solve problems, have good debates and make decisions. Facilitate and encourage this process throughout the project.
 
Some techniques for getting folks through the early phases of team maturity are:
 
  • Inception or planning activities
  • Informal social events
  • Team building events
  • Remember the Future activity
  • Retrospectives
  • Team Health checks

People are a key part of the success of any project. No matter what organizational model we invent, if people are not engaged it won’t work. Motivation, engagement, ownership and self-organization are the core values of an agile mindset. Agile project managers’ first concern should be team morale. If team members are good professionals, they’ll know how to sort out any situation while in the right mood. If they’re not good professionals, we have a bigger problem, and it is not going to be solved by agile techniques or any other solution.

Tossing more process at an already dysfunctional team is not going to help; it will only add to the mess. The key to success is doing more with less: less process, fewer people, fewer rules, less time to get things done. Let people do what they’re there for – design and build – and free them from as much paperwork and red tape as possible.

Any critical view needs an alternative proposal, so here is a recipe to avoid the liturgical trap and encourage actual continuous improvement:

  • Always be mindful; question the necessity of the process: A good way to test how valuable a ceremony or practice is for a team, especially if you detect a smell, is to make it voluntary. If team members find it valuable they’ll perpetuate the practice and all will feel its usefulness. If they abandon the practice, it’s time to look for alternatives. That is self-organization.
  • Be flexible and try new things, in their fullness: Some of the agile practices need time and training to actually work. It’s difficult at the beginning and people resist change. A team can be disciplined: when it decides to try out a practice, it should adopt the practice in its completeness, with full understanding, for a period of time. The team can then incorporate the practice into its process based on proven effectiveness.
  • Check on team morale often: Using retrospectives, one-on-one conversations, anonymous polls or any other means, the agile PM asks people what they need to get things done. Find out their aspirations and expectations and take steps to increase motivation.
  • Find your balance: When motivation fails, discipline and good practices help. But we cannot rely only on discipline to achieve success, as discipline has a price we pay in terms of team morale. Motivation has the same effect as discipline without paying this price.

Responsibilities of an Agile Project Manager:

The following are the responsibilities of the agile project management function. From one project to another, these responsibilities may vary slightly and be interpreted differently.

  • Responsible for maintaining the agile values and practices in the project team.
  • Removes impediments; this is the core function of the role.
  • Helps the project team members turn the requirements backlog into working software functionality.
  • Facilitates and encourages effective and open communication within the team.
  • Responsible for holding agile meetings that discuss short-term plans and plans to overcome obstacles.
  • Enhances the tools and practices used in the development process.
  • Serves as the chief motivator of the team and plays a mentor role for the team members as well.

Fowler (2002) and Blotner (2003) suggest that a manager can and should help the team be more productive by offering some insight into the ways in which things can be done in an agile environment.
Grenning (2001) observed that having senior people monitor the team’s progress at a monthly design-as-built review meeting accelerated the development process while lowering the number of bugs.

Adaptive Teams

Another major premise is the ability to adapt – hence the agility. Delivering features early may not be enough when the goalpost you are shooting for moves, but shorter feedback and correction cycles mean that you can change the course of action early and respond to emerging market opportunities.

Obviously, there is no magic bullet; the journey still has to be made, with the team doing all that is necessary for the application to take shape. What is different is that the team will not be in the dark about how it will get to the end, customers will not have to wait until the end to touch and feel the product, and there won’t be grand change-control processes if the goalpost keeps moving. Instead, the business will be able to change its mind, and the team will be able to adapt, learn and produce valuable pieces of software along the way.

To ensure delivery of a “non-obsolete” solution, it is important that any changes to project requirements are handled proactively by making necessary adjustments to the development process (process dimension) and the development team (people dimension).

Agile Project Management at Salesforce.com

Salesforce.com recently completed an agile transformation of a two-hundred-person team within a three-month window, one of the largest and fastest “big-bang” agile rollouts. It focused on creating self-organizing teams, debt-free iterative development, transparency and automation.

Salesforce.com is a market and technology leader in on-demand services. It routinely processes over 85 million transactions a day and has over 646,000 subscribers. Salesforce.com builds a CRM solution and an on-demand application platform. The services technology group is responsible for all product development inside Salesforce.com and has grown 50% per year since its inception eight years ago, delivering an average of four major releases each year. Before the agile rollout, this had slowed to one major release a year. The agile rollout was designed to address problems with the previous methodology:

  • Inaccurate early estimates resulting in missed feature complete dates and compressed testing schedules.

  • Lack of visibility at all stages in the release.

  • Late feedback on features at the end of the release cycle.

  • Long and unpredictable release schedules.

  • Gradual productivity decline as the team grew.

Before the agile rollout the R&D group leveraged a loose, waterfall-based process with an entrepreneurial culture. The R&D teams are functionally organized into program management, user experience, product management, development, quality engineering, and documentation. Although different projects and teams varied in their specific approaches, overall development followed a phase-based functional waterfall. Product management produced feature functional specifications. User experience produced feature prototypes and interfaces. Development wrote technical specifications and code. The quality team tested and verified the feature functionality. The documentation team documented the functionality. The system test team tested the product at scale. Program management oversaw projects and coordinated feature delivery across the various functions.

The waterfall-based process was quite successful in growing the company in its early years while the team was small. However, the company grew quickly and became a challenge to manage as the team scaled beyond the capacity of a few key people. Although they were successfully delivering patch releases, the time between major releases was growing longer (from 3 months to over 12). Due to fast company growth and lengthening release cycles, many people in R&D had not participated in a major release of the main product. Releases are learning opportunities for the organization; a reduction in releases meant fewer opportunities to learn. This had a detrimental effect on morale and on the ability to deliver quality features to market.

Transition Approach

An original company founder and the head of the R&D technology group launched an organizational change program. He created a cross-functional team to address slowing velocity, decreased predictability and product stability. This cross-functional team redesigned and rebuilt the development process from the ground up using key values from the company’s founding: KISS (Keep it Simple Stupid), iterate quickly, and listen to  customers. These values are a natural match for agile methodologies.

It was very important to position the change as a return to core values as a technology organization rather than a wholesale modification of how they deliver software. There were three key areas that were already in place that helped the transition: 1) the on-demand software model is a natural fit for agile methods; 2) an extensive automated test system was already in place to provide the backbone of the new methodology; and 3) a majority of the R&D organization was collocated.

One team member wrote a document describing the new process, its benefits and why they were transitioning from the old process. They led 45 one-hour meetings with key people from all levels in the organization. Feedback from these meetings was incorporated into the document after each meeting, molding the design of the new process and creating broad organizational buy-in for change. This open communication feedback loop allowed everyone to participate in the design of the new process and engage as an active voice in the solution. Two key additions to the initial paper were a plan for integrating usability design and clarification on how much time was needed for release closure sprints.

At this point, most literature recommended an incremental approach using pilot projects and a slow rollout. They also considered changing every team at the same time. There were people in both camps and it was a difficult decision. The key factors driving them toward a big-bang rollout were the desire to avoid organizational dissonance and a desire for decisive action: everyone would be doing the same thing at the same time. One of the key arguments against the big-bang rollout was that they would make the same mistakes with several teams rather than learning with a few starter teams, and that they would not have enough coaches to assist teams every day. One team in the organization had already successfully run a high-visibility project using Scrum, so at least one team had been successful with an agile process before the rollout to all the other teams. They made a key decision to move to a “big-bang” rollout, moving all teams to the new process rather than just a few.

Some of the key wins since the rollout have been:

  • Focus on team throughput rather than individual productivity

  • Cross-functional teams that now meet daily

  • Simple, agile process with common vocabulary

  • Prioritized work for every team

  • A single R&D heartbeat with planned iterations.

  • User stories & new estimation methods

  • Defined organizational roles – ScrumMaster, Product Owner, Team Member

  • Continuous daily focus on automated tests across the entire organization

  • Automation team focused on build speed & flexibility

  • Daily metric drumbeats with visibility into the health of products and releases

  • Product line Scrum of Scrums provides weekly visibility to all teams

  • R&D-wide sprint reviews and team retrospectives held every 30 days

  • Product Owner & ScrumMaster weekly special interest groups (SIGs)

  • A time-boxed release on the heels of their biggest release ever

  • Reduction of bug debt by 1,500+ bugs

  • Potentially releasable product every 30 days

Although they are still learning and growing as an organization, these benefits have surpassed their initial goals for the rollout. Some areas they are still focusing on are teamwork, release planning, bug debt reduction, user stories and effective tooling.

PEARL XXV : Scaled Agile Framework® pronounced SAFe™

PEARL XXV : Scaled Agile Framework® (pronounced SAFe™) – All individuals and enterprises can benefit from the application of these innovative and empowering scaled agile methods.

SAFe Core Values


Our modern world runs on software. To keep pace, we build increasingly complex and sophisticated software systems. Doing so requires larger teams and a continuous rethinking of the methods and practices – part art, science, engineering, mathematics and social science – that we use to organize and manage these activities. The Scaled Agile Framework®, or SAFe, is an interactive knowledge base for implementing agile practices at enterprise scale, and it represents one such set of advances: a recipe for adopting Agile at enterprise scale, illustrated in the big picture. As Scrum is to the Agile team, SAFe is to the Agile enterprise. SAFe tackles the tough issues – architecture, integration, funding, governance and roles at scale. It is field-tested and enterprise-friendly. SAFe is the brainchild of Dean Leffingwell: as Ken Schwaber and Jeff Sutherland are to Scrum, Dean Leffingwell is to SAFe. SAFe is based on Lean and Agile principles. There are three levels in SAFe:

  • Team
  • Program
  • Portfolio

Scaled Agile Framework big picture

At the Team Level: Scrum with XP engineering practices is used. Design/Build/Test (DBT) teams deliver working, fully tested software every two weeks. Each team has five to nine members.

The Scrum team is renamed the DBT team (from Design / Build / Test), and the sprint review is described as the sprint demo.

One positive aspect of SAFe is its alignment between team and business objectives during PSI (Potentially Shippable Increment) planning.

It makes it easier to see the connection between the company roadmap/vision and day-to-day work. During planning, it is helpful to have a high-level view of the business and architectural needs behind a company investment, its connection to a particular epic at the program level, and the stories implemented at the team level.

Similarly, HIP sprints (from Hardening / Innovation / Planning) are scheduled at the end of each PSI.

Spikes

A spike is a story or task aimed at answering a question or gathering information, rather than at producing shippable product.

In practice, the spikes teams take on are often proof-of-concept types of activities. The definition above says that the work is not focused on the finished product. It may even be designed to be thrown away at the end. This gives your product owner the proper expectation that you will most likely not directly implement the spike solution. During the course of the sprint, you may discover that what you learned in the spike cannot be implemented for any practical purpose. Or you may discover that the work can be used for great benefit on future stories. Either way, the intention of the spike is not to implement the completed work “as is.”

There are two other characteristics that spikes should have:

  1. Have clear objectives and outcomes for the spike. Be clear on the knowledge you are trying to gain and the problem(s) you are trying to address. It’s easy for a team to stray off into something interesting and related, but not relevant.
  2. Be timeboxed. Spikes should be timeboxed so you do just enough work that’s just good enough to get the value required.

At the Program Level:

Features are services provided by the system that fulfill stakeholders' needs. They are maintained in the program backlog and are sized to fit in a PSI/Release so that each PSI/Release delivers conceptual integrity. Features bridge the gap between user stories and epics.

SAFe defines an Agile Release Train (ART). As an iteration is to a team, a train is to a program. The ART (or train) is the primary vehicle for value delivery at the program level; it delivers a value stream for the organization. SAFe is three-letter-acronym (TLA) heaven – DBT, ART, RTE, PSI, NFR, RMT and I&A! Between 5 and 10 teams work together on a train, synchronizing their release boundaries and their iteration boundaries. Every 10 weeks (5 iterations) a train delivers a Potentially Shippable Increment (PSI); a demo and inspect-and-adapt sessions are held, and planning begins for the next PSI. PSIs provide a steady cadence for the development cycle. They are separate from the concept of market releases, which can happen more or less frequently and on a different schedule. New program-level roles are defined:

  • System Team
  • Product Manager
  • System Architect
  • Release Train Engineer (RTE)
  • UX and Shared Resources (e.g., security, DBA)
  • Release Management Team

In IT/PMI environments the Program Manager or Senior Project Manager might fill one of two roles. If they have deep domain expertise, they are likely to fill the Product Manager role. If they have strong people-management skills and understand the logistics of release, they often become the Release Train Engineer. SAFe makes a distinction between content (what the system does) and design (how the system does it); there is separate “authority” for content and design. The Product Manager (Program Manager) has content authority at the program level. He or she defines and prioritizes the program backlog.

SAFe defines an artifact hierarchy of Epics – Features – User Stories. The program backlog is a prioritized list of features. Features can originate at the Program level, or they can derive from Epics defined at the Portfolio level. Features decompose to User Stories, which flow to team-level backlogs. Features are prioritized based on Don Reinertsen's Weighted Shortest Job First (WSJF) economic decision framework. The System Architect has design authority at the program level. He collaborates day to day with the teams, ensuring that non-functional requirements (NFRs) are met, and works with the enterprise architect at the portfolio level to ensure that there is sufficient architectural runway to support upcoming user and business needs. The UX Designer(s) provide UI design, UX guidelines and design elements for the teams. In a similar manner, shared specialists provide services such as security, performance and database administration across the teams. The Release Train Engineer (RTE) is the uber-ScrumMaster. The Release Management Team is a cross-functional team – with representation from marketing, development, quality, operations and deployment – that approves frequent releases of quality solutions to customers.
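The WSJF calculation itself is simple arithmetic: divide the cost of delay by the job size, and implement the highest score first. A minimal sketch follows; the feature names and scores are invented for illustration and are not from the source.

```python
# Weighted Shortest Job First (WSJF): cost of delay divided by job size.
# Cost of delay is commonly decomposed into user/business value, time
# criticality, and risk reduction / opportunity enablement, each scored
# on a relative scale (e.g. 1-10). All numbers below are illustrative.

def wsjf(business_value, time_criticality, risk_reduction, job_size):
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

# Hypothetical program-backlog features: (name, WSJF score).
features = [
    ("Single sign-on", wsjf(8, 3, 5, 8)),   # 16 / 8 = 2.0
    ("Audit logging",  wsjf(3, 8, 2, 2)),   # 13 / 2 = 6.5
    ("Dark-mode UI",   wsjf(5, 1, 1, 3)),   # 7 / 3 ~= 2.33
]

# The feature with the highest WSJF score is implemented first.
for name, score in sorted(features, key=lambda f: f[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Because job size sits in the denominator, short jobs with a high cost of delay float to the top of the backlog, which is the economic intuition behind "weighted shortest job first."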

For agility at scale, a small amount of modeling has been introduced to support the vision, upcoming features, and the ongoing extension of the Architectural Runway for each Agile Release Train.

The Agile Release Train is a long-lived team of agile teams, typically consisting of 50 to 125 individuals, that serves program-level value delivery in SAFe. Using a common sprint cadence, each train has dedicated resources to continuously define, build, test and deliver value to one of the enterprise value streams. Teams are aligned to a common mission via a single program backlog and include the program management, architecture, UX guidance and Release Train Engineer roles. Each train produces a valuable, evaluable, system-level potentially shippable increment at least every 8 to 12 weeks, in accordance with the PSI objectives established by the teams during each release planning event, although teams can release at any time according to market needs.

Cadence is what gives a team a feeling of demarcation, progression, resolution or flow – a pattern that allows the team to know what they are doing and when it will be done. For very small or mature teams, this cadence could be complex, arrhythmic or syncopated. Even so, it is enough to allow a team to make reliable commitments, because recognizing their cadence allows them to understand their capability or capacity.

The program backlog is the single, definitive repository for all the work anticipated by the program. The backlog is created from the breakdown of business and architectural epics into features that will address user needs and deliver business benefits. The purpose of the program roadmap is to establish alignment across all teams while also providing predictability for the deliverables over an established time horizon.

Program epics affect a single release train.

SAFe provides a cadence-based approach to the delivery of value via PSIs: schedule, manage and govern your synchronized PSIs.

Shared iteration schedules allow multiple teams to stay on the same cadence and facilitate roll-up reporting. Release capacity planning allows you to scale agile initiatives across multiple teams and deploy more predictable releases. Cross-team dependencies are quickly identified and made visible to the entire program.

At the Portfolio Level:

The Portfolio Vision defines how the enterprise’s business strategy will be achieved.

In the Scaled Agile Framework, the Portfolio Level is the highest and most strategic layer, where programs are aligned to the company's business strategy and investment approach.

PPM has a central role in strategy, investment funding, program management and governance. Investment themes drive budget allocations. Themes are set as part of the budgeting process, with a lifespan of 6–12 months.

Epics are enterprise initiatives sufficiently substantial in scope that they warrant analysis and an understanding of potential ROI. Epics require a lightweight business case that elaborates the business and technology impact and the implementation strategy. Epics are generally cross-cutting, impacting multiple organizations, budgets and release trains, and occurring over multiple PSIs.

Portfolio epics affect multiple release trains. Epics cut across all three business dimensions: Time (multiple PSIs, years), Scope (release trains, applications, solutions and business platforms) and Organization (departments, business units, partners, the end-to-end business value chain).

The portfolio philosophy is centralized strategy with local execution. Epics define large development initiatives that encapsulate the new development necessary to realize the benefits of investment themes. Program Portfolio Management represents the individuals responsible for strategy, investment funding, program management and governance. They are the stewards of the portfolio vision: they define the relevant value streams, control the budget through investment themes, define and prioritize cross-cutting portfolio backlog epics, guide agile release trains, and report to the business on investment spend and program progress. SAFe provides seven transformation patterns to lead the organization to program portfolio management.

  • Decentralized decision-making
  • Demand management, continuous value flow
  • Lightweight, epic-only business cases
  • Decentralized rolling-wave planning
  • Agile estimating and planning
  • Self-organizing, self-managing agile release trains
  • Objective, fact-based measures and milestones

Rolling Wave Planning is the process of planning a project in waves as the project proceeds and later details become clearer. Work to be done in the near term is planned in detail, while farther-out work is based on high-level assumptions; high-level milestones are also set. As the project progresses, the risks, assumptions, and milestones originally identified become more defined and reliable. One would use Rolling Wave Planning when there is an extremely tight schedule or timeline to adhere to, where more thorough up-front planning would have placed the schedule into an unacceptable negative schedule variance.

This is an approach that iteratively plans for a project as it unfolds, similar to the techniques used in Scrum (development) and other forms of Agile software development.

Progressive Elaboration is what occurs in this rolling wave planning process: over time, work packages are elaborated in greater detail. As the weeks and months pass, the missing, more elaborated detail is planned and provided for the work packages as they appear on the horizon.

Investment themes represent the set of initiatives that drive the enterprise's investment in systems, products, applications, and services. Epics can be grouped by investment theme, and relative capacity allocations can then be visualized to determine whether the planned epics are in alignment with the overall business strategy. Epics are large-scale development initiatives that realize the value of investment themes.
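Visualizing relative capacity allocation is a simple aggregation: sum the planned work per theme and compare each theme's share against its budgeted allocation. A small sketch follows; the theme names and point estimates are hypothetical, not from the source.

```python
# Group planned epics by investment theme and report each theme's share of
# total planned capacity, for comparison against the budgeted allocations.
# Themes and point estimates below are illustrative only.
epics = [
    ("Mobile apps", 40), ("Mobile apps", 20),    # (theme, estimated points)
    ("Platform modernization", 60),
    ("Compliance", 30), ("Compliance", 10),
]

totals = {}
for theme, points in epics:
    totals[theme] = totals.get(theme, 0) + points

grand_total = sum(totals.values())
for theme, points in totals.items():
    print(f"{theme}: {100 * points / grand_total:.1f}% of planned capacity")
```

If a theme's actual share drifts far from its budgeted allocation, the planned epics are out of alignment with the business strategy and the backlog should be rebalanced.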

SAFe-Levels


There are business epics (customer-facing) and architectural epics (technology solutions); the two are managed in parallel kanban systems. Objective metrics support IT governance and continuous improvement. Enterprise architecture is a first-class citizen: the concept of Intentional Architecture provides a set of planned initiatives to enhance solution design, performance, security and usability. SAFe patterns provide a transformation roadmap.

Architectural runway exists when the enterprise platforms have sufficient existing technological infrastructure (code) to support the implementation of the highest-priority features without excessive, delay-inducing redesign. To achieve some degree of runway, the enterprise must continuously invest in refactoring and extending existing platforms.

SAFe suggests the development and implementation of kanban systems for business and architecture portfolio epics.

The architectural epic kanban system brings visibility, Work-in-Process (WIP) limits and continuous flow to portfolio-level architectural epics. This kanban system has four states: funnel, backlog, analysis and implementation. The architectural epic kanban is typically under the auspices of the CTO/Technology office, which includes the enterprise and system architects.

The business epic kanban system brings visibility, Work-in-Process (WIP) limits and continuous flow to portfolio-level business epics. It has the same four states: funnel, backlog, analysis and implementation. The business epic kanban is typically under the auspices of program portfolio management, comprised of those executives and business owners who have responsibility for implementing business strategy.
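The mechanics of such a kanban system – four ordered states plus WIP limits that gate pulling work into the next state – can be sketched as a toy model. This is purely illustrative (not SAFe tooling), and the epic names are invented.

```python
# Toy portfolio-epic kanban with the four states from the text and a
# WIP limit that blocks pulling work into an over-full state.
class EpicKanban:
    STATES = ["funnel", "backlog", "analysis", "implementation"]

    def __init__(self, wip_limits=None):
        self.wip_limits = wip_limits or {}     # e.g. {"analysis": 1}
        self.board = {s: [] for s in self.STATES}

    def add(self, epic):
        self.board["funnel"].append(epic)      # every epic enters via the funnel

    def advance(self, epic):
        state = next(s for s in self.STATES if epic in self.board[s])
        nxt = self.STATES[self.STATES.index(state) + 1]
        limit = self.wip_limits.get(nxt)
        if limit is not None and len(self.board[nxt]) >= limit:
            return False                       # WIP limit reached; pull later
        self.board[state].remove(epic)
        self.board[nxt].append(epic)
        return True

kanban = EpicKanban(wip_limits={"analysis": 1})
kanban.add("Self-service portal")
kanban.add("Unified billing")
kanban.advance("Self-service portal")          # funnel -> backlog
kanban.advance("Self-service portal")          # backlog -> analysis
kanban.advance("Unified billing")              # funnel -> backlog
print(kanban.advance("Unified billing"))       # blocked: analysis is at its WIP limit
```

The WIP limit forces new epics to wait in the backlog until analysis capacity frees up, which is what produces the continuous, limited flow the text describes.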

Value streams


Lean Approach

The Scaled Agile Framework is based on a number of trends in modern software engineering:

  • Lean Thinking
  • Product Development flow
  • Agile Development
Dean Leffingwell and Lean Thinking


Agile provides the tools needed to empower and engage development teams to achieve unprecedented levels of productivity, quality and engagement. The SAFe House of Lean provides the following constructs:

  • The Goal: Value – the sustainably shortest lead time, with the best quality and value to people
  • Respect for people
  • Kaizen (continuous improvement)
  • Principles of product development flow
  • Foundation: Management – lean-thinking manager-teachers

Investment themes reflect how a portfolio allocates its budget to the various initiatives that make up the portfolio business strategy. Investment themes are portfolio-level capacity allocations, in that each theme receives the resources implied by its budget.