PEARL XI : DevOps Originated from Enterprise Systems Management and Agile S/W Methodology

PEARL XI : DEVOPS (a portmanteau of development and operations) is a software development lifecycle approach that stresses communication, collaboration, and integration between software developers and information technology (IT) operations professionals. Many of the ideas (and people) involved in DevOps originated from the Enterprise Systems Management and Agile software development movements.

DevOps - Continuous Value

IT now powers most businesses. The central role  that IT plays translates into huge demands on the IT  staff to develop and deploy new applications and  services at an accelerated pace. To meet this demand, many software development organizations  are applying Lean principles through such approaches as Agile software development. Influenced heavily by Lean methodology, Agile methodology  is based on frequent, customer-focused releases  and strives to eliminate all steps that don’t add value  for the customer. Using Agile methodology, development teams are able to shrink development cycles dramatically and increase application quality.
Unfortunately, the increasing number of software  releases, growing complexity, shrinking deployment time frames, and limited budgets are presenting the  operations staff with unprecedented challenges.  Operations can begin to address these challenges  by learning from software developers and adopting  Lean methodology. That requires re-evaluating current processes, ferreting out sources of waste, and  automating wherever possible.

According to the Lean Enterprise Institute, “The core idea [of Lean] is to maximize customer value while  minimizing waste. Simply, Lean means creating  more value for customers with fewer resources.”

This involves a five-step process for guiding the implementation of Lean techniques:
1. Specify value from the standpoint of the end customer.
2. Identify all the steps in the value stream, eliminating whenever possible those steps that do not create value.
3. Make the value-creating steps occur in tight sequence so the product will flow smoothly toward the customer.
4. As flow is introduced, let customer demand determine the time to market.
5. Once value is specified, value streams are identified, wasted steps are removed, and customer-demand-centric flow is established, begin the process again and continue until a state of perfection is reached in which perfect value is created with no waste.
Clearly, Lean is not a one-shot proposition. It’s an iterative process of continuous improvement.
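The five-step cycle above can be sketched as a simple loop. This is a toy model only: each pass strips value-stream steps that add no customer value, and the cycle repeats until no waste remains. The step names and the adds_value flag are illustrative inventions, not part of any real Lean tooling.

```python
# Toy model of the Lean cycle: repeatedly remove value-stream steps that do
# not create customer value, and repeat until no waste remains (step 5).
# The step names and "adds_value" flags below are purely illustrative.

def lean_cycle(value_stream):
    """Iterate the waste-elimination cycle; return (final stream, passes)."""
    passes = 0
    while any(not step["adds_value"] for step in value_stream):
        # Step 2: identify and eliminate steps that do not create value.
        value_stream = [s for s in value_stream if s["adds_value"]]
        passes += 1
    return value_stream, passes

stream = [
    {"name": "build",           "adds_value": True},
    {"name": "manual sign-off", "adds_value": False},
    {"name": "deploy",          "adds_value": True},
]

final, passes = lean_cycle(stream)
print([s["name"] for s in final])   # only the value-creating steps remain
```

The loop never terminates until every remaining step creates value, which mirrors the “continue it until a state of perfection is reached” wording of step 5.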
Bridge the DevOps gap
There are obstacles to bringing Lean methodology to operations. One of the primary ones is the cultural difference between development and operations. Developers are usually driven to embrace the latest technologies and methodologies. Agile principles mean that they are aligning more closely with business requirements, and the business has an imperative to move quickly to stay competitive. Consequently, the development team is incentivized to move applications from concept to market as quickly as possible.
The culture of operations is typically cautious and deliberate. Operations staff are incentivized to maintain stability and business continuity. They are well aware of the consequences and high visibility of problems, such as performance slowdowns and outages, caused by improperly handled releases.
As a result, there is a natural clash between the business-driven need for speed on the development side and the conservative inertia on the operations side. Each group has different processes and ways of looking at things.
The result is often called the DevOps gap. The DevOps movement has arisen out of the need to address this disconnect. DevOps is an approach that looks to bring the benefits of Agile and Lean methodologies into operations, reducing the barriers to delivering more value for the customer and aligning with the business. It stresses the importance of communication, collaboration, and integration between the two groups, and even combining responsibilities. Today, operations teams find themselves at a critical decision point. They can adopt the spirit of DevOps and strive to close the gap. That requires working more closely with development. It means getting involved earlier in the development cycle instead of waiting for new applications and services to “come over the fence.” And conversely, developers will need to be more involved in application support. The best way to facilitate this change is by following the development team’s lead in adopting Lean methodology by reducing waste and focusing on customer value.
On the other hand, not closing the gap can have serious repercussions for operations. In frustration, developers may bypass operations entirely and go right to the cloud. This is already occurring in some companies.

Another challenge that operations teams face is in how to take the new intellectual property that the development organizations have built for the business and get it out to customers as quickly as possible, with the least number of errors and at the lowest cost. That requires creating a release process that is fast, efficient, and repeatable. That’s where Lean methodology provides the most value.

DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration, and integration between software developers and information technology (IT) operations professionals. DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services.

A DevOps approach applies agile and lean thinking principles to all stakeholders in an organization who develop, operate, or benefit from the business’s software systems, including customers, suppliers, and partners. By extending lean principles across the entire software supply chain, DevOps capabilities will improve productivity through accelerated customer feedback cycles, unified measurements and collaboration across an enterprise, and reduced overhead, duplication, and rework.

Companies with very frequent releases may require a DevOps awareness or orientation program. Flickr developed a DevOps approach to support a business requirement of ten deployments per day; this daily deployment cycle would be much higher at organizations producing multi-focus or multi-function applications. This is referred to as continuous deployment or continuous delivery  and is frequently associated with the lean startup methodology. Working groups, professional associations and blogs have formed on the topic since 2009.

DevOps aids in software application release management for a company by standardizing development environments. Events can be tracked more easily, and documented process control and granular reporting issues can be resolved. Companies with release/deployment automation problems usually have existing automation but want to manage and drive this automation more flexibly, without needing to enter everything manually at the command line. Ideally, this automation can be invoked by non-operations resources in specific non-production environments. Developers are given more control of the environment, giving infrastructure a more application-centric understanding.

Simple processes become clearly articulated using a DevOps approach. The goal is to maximize the predictability, efficiency, security and maintainability of operational processes. This objective is very often supported by automation.

DevOps integration targets product delivery, quality testing, feature development, and maintenance releases in order to improve reliability and security and to provide faster development and deployment cycles. Many of the ideas (and people) involved in DevOps came from the Enterprise Systems Management and Agile software development movements.

The focus of Lean is on delivering value to the customer and doing so as quickly and efficiently as possible. It is flow oriented rather than batch oriented. Its purpose is to smooth the flow of the value stream and make it customer centric.

DevOps incorporates lean thinking and agile methodology as follows:

  • Eliminate any activity that is not necessary for learning what  customers want. This emphasizes fast, continuous iterations  and customer insight with a feedback loop.
  • Eliminate wait times and delays caused by manual processes  and reliance on tribal knowledge.
  • Enable knowledge workers, business analysts, developers, testers, and other domain experts to focus on creative activities (not procedural activities) that help sustain innovation, and avoid expensive and dangerous organization and technology “resets.”
  • Optimize risk management by steering with meaningful  delivery analytics that illuminate validated learning by  reducing uncertainty in ways that can be measured.

The first step for operations in adopting Lean methodology is to understand the big picture. That means not only developing an understanding of the end-to-end release process but also understanding the release process within the overall context of the DevOps plan, build, and run cycle. In this cycle, development plans a new application based on the requirements of the business, builds the application, and then releases it to operations. Operations then assumes responsibility for running the application.
In examining processes, therefore, operations should not only look at the release process itself but also at the process before the release to determine where opportunities lie for closer cooperation between the two groups. For example, operations may see a way for development to improve the staging process for operational production of an application.
Release process management (RPM) solutions are available that enable IT to map out and document the entire application lifecycle process, end to end, from planning through release to retirement. These solutions provide a collaboration platform that can bring operations and development closer together and provide that “big picture” visibility so vital to Lean. They also enable operations to consolidate release processes that are fragmented across spreadsheets, hand-written notes, and various other places.
In examining the release process itself, operations should look for areas to tighten the flow and eliminate unnecessary tasks. The operations group in one company, for example, examined the release process and found that it was re-provisioning the same servers three times when it was only necessary to do so once.
Anything that doesn’t directly contribute to customer value (like unnecessary meetings, approvals, and communication) should be considered for elimination.
Automate for consistency and speed
Manual procedures are major contributors to waste. For example, an existing release process may call for a database administrator (DBA) to update a particular database manually. This manual effort is inefficient and susceptible to errors. It’s also unlikely to be done in a consistent fashion: If there are several DBAs, each one may build a database differently.
Automation eliminates waste as well as a major source of errors. Automation ensures that processes are repeatable and consistently applied, while also ensuring frictionless compliance with corporate policies and external regulations. Deployment automation and configuration management tools can help by automating a wide variety of processes based on best practices. For Lean methodology to really work, processes must be predictable and consistent. That means that simple automation is not enough. The delivery of the entire software stack should be automated. This means that all environment builds — whether in pre- or post-production — should be completely automated. Also, the software deployment process must be completely automated, including code, content, configurations, and whatever else is required.
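As a minimal sketch of what “completely automated” delivery of the entire stack can mean, the fragment below models a release as one ordered pipeline of steps (code, content, configuration) that runs identically in every environment. All step functions and environment names are hypothetical placeholders, not a real deployment API.

```python
# Sketch of a repeatable deployment pipeline: the same ordered steps run in
# every environment, pre- or post-production, so releases stay consistent.
# All step functions and environment names here are hypothetical.

from typing import Callable, List

def deploy_code(env: str) -> str:
    return f"code deployed to {env}"

def deploy_content(env: str) -> str:
    return f"content deployed to {env}"

def apply_configuration(env: str) -> str:
    return f"configuration applied to {env}"

# The whole software stack is delivered by one pipeline, not ad hoc steps.
PIPELINE: List[Callable[[str], str]] = [
    deploy_code,
    deploy_content,
    apply_configuration,
]

def release(env: str) -> List[str]:
    """Run every pipeline step, in order, against one environment."""
    return [step(env) for step in PIPELINE]

for environment in ("staging", "production"):
    for result in release(environment):
        print(result)
```

Because one PIPELINE list drives every environment, staging and production cannot drift apart through manual variation, which is the predictability and consistency Lean methodology requires.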

Automate manual and overhead activities (enabling continuous delivery) such as change propagation and orchestration,  traceability, measurement, progress reporting, etc.
By automating the whole software stack, it becomes much easier to ensure compliance with operations and security. This can save vast amounts of time usually wasted waiting on security approval for new application deployments.

It is preferable to automate the time-consuming operational policies like initiating the required change request approvals, configuring performance monitoring, and so on. The mundane manual tasks, like these policies, create the most waste.
Before diving into automation, however, it’s essential for operations to map out and fully understand the end-to-end release process. When you use a release process management (RPM) platform to drive the end-to-end process, the team can review the process holistically to uncover sources of waste and determine where to apply automation tools to best streamline the process, eliminate waste, and accelerate delivery.
Measure success and continually improve
Lean is an iterative approach to continuous improvement, and iteration necessitates feedback.


Consequently, operations must establish a means of tracking the impact of adopting Lean methodology. In establishing the feedback metrics, keep in mind that the primary purpose of Lean methodology is not just to smooth and accelerate the release cycle; it’s also to create more value for customers and do it with fewer resources.
Consequently, operations should measure not only the increase in speed of releases but also the impact of the releases on cost and on customer value. For example, did the release result in a spike in the number of service desk incidents? This would not only increase support costs but also would degrade the customer experience. Or did the lack of capacity planning result in over-taxed infrastructure and degrade end-user performance? Here, it’s important to monitor application performance and availability from the customer’s perspective. Customers are not interested in the performance metrics of the individual IT infrastructure components that support a service.
They care about the overall user experience. In particular, how quickly did they complete their transactions end to end?
Application Performance Management (APM) solutions can track and report on a wide variety of metrics, including customer experience. These metrics provide valuable feedback to both the operations and development teams in measuring the impact of Lean implementation and identifying areas that require further attention. With these solutions in place, operations can operate in a mode of continuous improvement.

Use meaningful measurement and monitoring of progress (enabling continuous optimization) for improved visibility across the organization, including the software value delivery supply chain.

IBM DevOps Platform

IBM provides an open, standards-based DevOps platform that supports a continuous innovation, feedback and improvement lifecycle, enabling a business to plan, track, manage, and automate all aspects of continuously delivering business ideas. At the same time, the business is able to manage both existing and new workloads in enterprise-class systems and open the door to innovation with cloud and mobile solutions. This capability includes an iterative set of quality checks and verification phases that each product or piece of application code must pass before release to customers. The IBM solution provides a continuous feedback loop for all aspects of the delivery process (e.g., customer experience and sentiments, quality metrics, service level agreements, and environment data) and enables continuous testing of ideas and capabilities with end users in a customer facing environment.
IBM’s DevOps solution consists of an open, standards-based platform, DevOps Foundation services, with end-to-end DevOps lifecycle capabilities. To accommodate varying levels of maturity within an IT team’s delivery processes, the solution is organized into adoption paths: plan and measure, develop and test, release and deploy, and monitor and optimize.

Plan and measure: This adoption path consists of one major practice:
Continuous business planning: Continuous business planning employs lean principles to start small: identify the outcomes and resources needed to test the business vision/value, adapt and adjust continually, measure actual progress, learn what customers really want, and shift direction with agility as the plan is updated.

Develop and test: This adoption path consists of two major practices:
Collaborative development: Collaborative development enables collaboration between business, development, and QA organizations—including contractors and vendors in outsourced projects spread across time zones—to deliver innovative, quality software continuously. This includes support for polyglot programming and multiplatform development, elaboration of ideas, and creation of user stories complete with cross-team change and lifecycle management.
Collaborative development includes the practice of continuous integration, which promotes frequent team integrations and automatic builds. By integrating the system more frequently, integration issues are identified earlier when they are easier to fix, and the overall integration effort is reduced via continuous feedback as the project shows constant and demonstrable progress.
Continuous testing: Continuous testing reduces the cost of testing while helping development teams balance quality and speed. It eliminates testing bottlenecks through virtualized dependent services, and it simplifies the creation of virtualized test environments that can be easily deployed, shared, and updated as systems change. These capabilities reduce the cost of provisioning and maintaining test environments and shorten test cycle times by allowing integration testing earlier in the lifecycle.
Release and deploy: This adoption path consists of one major practice:
Continuous release and deployment: Continuous release and deployment provides a continuous delivery pipeline that automates deployments to test and production environments.
It reduces the amount of manual labor, resource wait-time, and rework by means of push-button deployments that allow higher frequency of releases, reduced errors, and end-to-end transparency for compliance.
Monitor and optimize: This adoption path consists of two major practices:
Continuous monitoring: Continuous monitoring offers enterprise-class, easy-to-use reporting that helps developers and testers understand the performance and availability of their application, even before it is deployed to production.
The early feedback provided by continuous monitoring is vital for lowering the cost of errors and change, and for steering projects toward successful completion.

Continuous customer feedback and optimization:
Continuous customer feedback provides the visual evidence and full context for analyzing customer behavior and pinpointing customer pain points. Feedback can be applied during both pre- and post-production phases to maximize the value of every customer visit and ensure that more transactions are completed successfully. This allows immediate visibility into the sources of customer struggles that affect their behavior and impact the business.

Benefits of the IBM DevOps solution

By adopting this solution to address these needs, organizations can unlock new business opportunities:

  • Deliver a differentiated and engaging customer experience  that builds customer loyalty and increases market share by  continuously obtaining and responding to customer feedback
  • Obtain fast-mover advantage to capture markets with quicker time to value based on software-based innovation, with  improved predictability and success
  • Increase capacity to innovate by reducing waste and rework  in order to shift resources to higher value activities

Keep up with the future
By adopting Lean methodology, operations teams can catch up with and even get ahead of the large and rapidly increasing amount of new and updated services flowing from Agile-accelerated development teams. And they can do so without increasing costs or jeopardizing stability and business continuity.
In so doing, operations can help increase customer value, which has a direct effect on revenue, competitiveness, and the brand. Moreover, the operations team will have the metrics to demonstrate its contribution to the business. That enables the team to transform its image in the organization from software-release speed barrier to high-velocity enabler.

Traditional approaches to software development and delivery are no longer sufficient. Manual processes are error prone, break down, and create waste and delayed responses.
Businesses can’t afford to focus on cost while neglecting speed of delivery, or choose speed over managing risk. A DevOps  approach offers a powerful solution to these challenges.
DevOps reduces time to customer feedback, increases quality, reduces risk and cost, and unifies process, culture, and tools across the end-to-end lifecycle—which includes adoption paths to plan and measure, develop and test, release and deploy, and monitor and optimize.

PEARL XIX : Effective Steps to reduce technical debt: An agile approach

In every codebase, there are dark corners and alleys that developers fear: code that’s impossibly brittle, code that bites back with regression bugs, code that, when you attempt to follow it, will drive you beyond chaos.

Ward Cunningham created a beautiful metaphor for the hard-to-change, error-prone parts of code when he likened it to financial debt. Technical debt prevents you from moving forward, from profiting, from staying “in the black.” As in the real world, there’s cheap debt, debt with an interest lower than you can make in a low-risk financial instrument. Then there’s the expensive stuff, the high-interest credit card fees that pile on even more debt.

The impact of accumulated technical debt can be decreased efficiency, increased cost, and extended delays in the maintenance of existing systems. This can directly jeopardize operations, undermining the stability and reliability of the business over time. It can also stymie the ability to innovate and grow.

DB Systel, a subsidiary of Deutsche Bahn, is one of Germany’s leading information technology and communications providers, running approximately 500 high-availability business systems for its customers. In order to keep this complex environment—a mix of packaged and in-house–developed systems that range from mainframe to mobile—running efficiently while continuing to address the needs of its customers, DB Systel decided to embed processes and tools within its development and maintenance activities to actively address its technical debt.

DB Systel’s software developers have employed new tools during development so they can detect and correct errors more efficiently. Using a software analysis and measurement platform from CAST, DB Systel has been able to uncover architectural hot spots and transactions in its core systems that carry significant structural risk. DB Systel is now better able to track the nonfunctional quality characteristics of its systems and precisely measure changes in architecture- and code-level technical debt within these applications to prioritize the areas with highest impact.

By implementing this strategy at the architecture level, DB Systel has seen a reduction in time spent on error detection and an increased focus on leading-practice development techniques. The company also noticed a rise in employees’ intrinsic motivation as a result of using CAST. With an effective technical debt management process in place, DB Systel is mitigating the possibility of software deterioration while also enriching application quality.

Technical debt is a drag. It can kill productivity, making maintenance annoying, difficult, or, in some cases, impossible. Beyond the obvious economic downside, there’s a real psychological cost to technical debt. No developer enjoys sitting down to his computer in the morning knowing he’s about to face impossibly brittle, complicated source code. The frustration and helplessness thus engendered is often a root cause of more systemic problems, such as developer turnover— just one of the real economic costs of technical debt.

However, the consequences of failing to identify and measure technical debt can be significant. An application with a lot of technical debt may not be able to fulfill its business purpose and may never reach production. Or technical debt may require weeks or months of remedial refactoring before the application emerges into production. At best, it could reach production, but be limited in its ability to meet users’ needs.

Every codebase contains some measure of technical debt. One class of debt is fairly harmless: byzantine dependencies among bizarrely named types in stable, rarely modified recesses of the system. Another is sloppy code that is easily fixed on the spot, but often ignored in the rush to address higher-priority problems. There are many more examples.

This section outlines a general workflow and several tactics for dealing with the high-interest debt.

In order to fix technical debt, the team needs to cultivate buy-in from stakeholders and teammates alike. To do this, they need to start thinking systemically. Systems thinking is long-range thinking. It is investment thinking. It’s the idea that effort you put in today will let you progress at a predictable and sustained pace in the future.

Technical debt (also known as design debt or code debt) is a neologism metaphor referring to the eventual consequences of poor software architecture and software development within a code-base. The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy.

As a change is started on a codebase, there is often the need to make other coordinated changes at the same time in other parts of the codebase or documentation. The other required but uncompleted changes are considered debt that must be paid at some point in the future. Just like financial debt, these uncompleted changes incur interest on top of interest, making it cumbersome to build a project. Although the term is used primarily in software development, it can also be applied to other professions.
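The “interest on top of interest” idea can be made concrete with a toy compounding model. The 10% per-release rate and the hour figures below are invented purely for illustration, not an empirical claim about real projects.

```python
# Toy compounding model of technical debt: each release that defers cleanup
# multiplies the eventual rework effort. Rate and hours are illustrative only.

def remediation_cost(principal_hours: float, rate: float, releases: int) -> float:
    """Estimated rework after deferring cleanup across `releases` releases."""
    return principal_hours * (1 + rate) ** releases

initial = 40.0  # hours to clean up the change now
deferred = remediation_cost(initial, 0.10, 12)  # after a year of monthly releases
print(f"pay now: {initial:.0f}h, pay after 12 releases: {deferred:.0f}h")
```

Even a modest per-release rate roughly triples the bill after a dozen releases, which is why unaddressed debt eventually dominates a team’s capacity.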

Common causes of technical debt include (a combination of):

  • Business pressures, where the business pushes to release something sooner, before all of the necessary changes are complete, building up technical debt comprising those uncompleted changes
  • Lack of process or understanding, where businesses are blind to the concept of technical debt, and make decisions without considering the implications
  • Lack of building loosely coupled components, where functions are hard-coded; when business needs change, the software is inflexible.
  • Lack of test suite, which encourages quick and risky band-aids to fix bugs.
  • Lack of documentation, where code is created without necessary supporting documentation. The work to create the supporting documentation represents a debt that must be paid.
  • Lack of collaboration, where knowledge isn’t shared around the organization and business efficiency suffers, or junior developers are not properly mentored
  • Parallel development at the same time on two or more branches can cause the build up of technical debt because of the work that will eventually be required to merge the changes into a single source base. The more changes that are done in isolation, the more debt that is piled up.
  • Delayed refactoring – As the requirements for a project evolve, it may become clear that parts of the code have become unwieldy and must be refactored in order to support future requirements. The longer that refactoring is delayed, and the more code is written to use the current form, the more debt that piles up that must be paid at the time the refactoring is finally done.
  • Lack of knowledge, when the developer simply doesn’t know how to write elegant code.

“Interest payments” are both in the necessary local maintenance and the absence of maintenance by other users of the project. Ongoing development in the upstream project can increase the cost of “paying off the debt” in the future. One pays off the debt by simply completing the uncompleted work.

The build up of technical debt is a major cause for projects to miss deadlines. It is difficult to estimate exactly how much work is necessary to pay off the debt. For each change that is initiated, an uncertain amount of uncompleted work is committed to the project. The deadline is missed when the project realizes that there is more uncompleted work (debt) than there is time to complete it in. To have predictable release schedules, a development team should limit the amount of work in progress in order to keep the amount of uncompleted work (or debt) small at all times.

“As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.”
— Meir Manny Lehman, 1980
While Manny Lehman’s Law already indicated that evolving programs continually add to their complexity and deteriorating structure unless work is done to maintain it, Ward Cunningham first drew the comparison between technical complexity and debt in a 1992 experience report:

“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.”
— Ward Cunningham, 1992
In his 2004 text, Refactoring to Patterns, Joshua Kerievsky presents a comparable argument concerning the costs associated with architectural negligence, which he describes as “design debt”.

“…doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.”

–Martin Fowler

“Technical debt” refers to delayed technical work that is incurred when technical shortcuts are taken, usually in pursuit of calendar-driven software schedules. Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. The ability to take on debt safely, track debt, manage debt, and pay down debt varies among different organizations. Explicit decision making before taking on debt and more explicit tracking of debt are advised.

–Steve McConnell

Activities that might be postponed include documentation, writing tests, attending to TODO comments and tackling compiler and static code analysis warnings. Other instances of technical debt include knowledge that isn’t shared around the organization and code that is too confusing to be modified easily.

In open source software, postponing sending local changes to the upstream project is a technical debt.

The basic workflow for tackling technical debt—indeed any kind of improvement—is repeatable. Essentially, there are four steps:

  1. Identify where are the debt. How much is each debt item affecting  company’s bottom line and team’s productivity?
  2. Build a business case and forge a consensus on priority with those affected by the debt, both team and stakeholders.
  3. Fix the debt  on the  chosen item, head on with proven tactics.
  4. Repeat. Go back to step 1 to identify additional debt and hold the line on the improvements  made.
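
As a sketch of steps 1 and 2, the debt inventory can be held as a small, prioritized list. The item names and cost figures below are purely illustrative; the payback heuristic (principal divided by interest) simply mirrors Fowler's debt metaphor:

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    """One debt item from step 1; all names and figures are illustrative."""
    description: str
    interest: int    # recurring cost per iteration (hours lost to the debt)
    principal: int   # one-off cost to pay the debt down (hours to fix it)

    @property
    def payback_iterations(self) -> float:
        # Iterations until the fix has paid for itself.
        return self.principal / self.interest

def prioritize(items):
    """Input to step 2: rank debt by how quickly fixing it pays back."""
    return sorted(items, key=lambda item: item.payback_iterations)

backlog = [
    DebtItem("No automated deploys", interest=6, principal=24),
    DebtItem("Flaky test suite", interest=10, principal=20),
    DebtItem("Confusing billing module", interest=2, principal=40),
]

for item in prioritize(backlog):
    print(f"{item.description}: pays back in {item.payback_iterations:.0f} iterations")
```

Items with short payback periods are natural first candidates for the business case in step 2.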

Agile Approach to Technical Debt

Involve the Product Owner and “promote” them to be the sponsor of technical debt reduction.

Sometimes it’s hard to find debt, especially if a team is new to a codebase. In cases where there’s no collective memory or oral tradition to draw on, the team can use a static analysis tool such as NDepend (ndepend.com) to probe the code for the most troublesome spots.

Measuring test coverage can be another valuable tool for discovering hidden debt.

Use the log feature of the version control system to generate a report of changes over the last month or two. Find the parts of the system that receive the most activity, changes or additions, and scrutinize them for technical debt. This will help to find the bottlenecks that are challenging today; there’s very little value in fixing debt in parts of the system that rarely change.
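
This churn analysis can be sketched against a Git repository (the tool choice here is an assumption; the same idea works with the log output of any version control system):

```python
import subprocess
from collections import Counter

def churn_from_log(log_text):
    """Count how often each file appears in `git log --name-only` output."""
    files = [line.strip() for line in log_text.splitlines() if line.strip()]
    return Counter(files).most_common()

def churn(since="2 months ago", repo="."):
    """Hottest files over the recent period (assumes git is installed)."""
    log = subprocess.run(
        ["git", "-C", repo, "log", f"--since={since}",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return churn_from_log(log)
```

Files at the top of the resulting list are the first candidates for a debt review; files that never appear change rarely and are rarely worth fixing.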

Inventory and structure known technical debt

Having convinced the product owner, it is time to collect and inventory known technical problems and map them onto a structure that visualizes the system and project landscape.

It is not about completely understanding every topic. It is about finding a proper structure, identifying the most important issues, and mapping them onto that structure. The aim is to extract knowledge about the systems from people’s heads and develop a shared picture of the existing technical problems.

To do this, write the names/identifiers of all applications and modules the team owns on cards and pin the cards to a whiteboard. In the next step, extract to-dos (to solve existing problems) from all documentation media in use (wiki, Jira, Confluence, code documentation, paper), write them on sticky notes, and stick each one next to the application it belongs to. The board should remain accessible to all team members for a period of several days. During this period, every team member is responsible for completing, restructuring, and correcting the board, so that the team ends up with a well-rounded portfolio of the existing debt in its systems.

Having collected and understood the work needed to reduce the technical debt in the systems, the team now needs a baseline for defining a good strategy: a repayment plan. To that end, costs and benefits should be estimated.

Obtaining consensus is key. We want the majority of team members to support the improvement initiative selected. Luke Hohmann’s “Buy a Feature” approach from his book Innovation Games (innovationgames.com) can help build that consensus.

  1. Generate a short list (5-9 items) of things you want to improve. Ideally these items are in your short-term path.
  2. Qualify the items in terms of difficulty. You can use the abstract notion of a T-shirt size: small, medium, large or extra-large.
  3. Give your features a price based on their size. For example, small items may cost $50, medium items $100, and so on.
  4. Give everyone a certain amount of money. The key here is to introduce scarcity into the game. You want people to have to pool their money to buy the features they’re interested in. You want to price, say, medium features at a cost where no one individual can buy them. It’s valuable to find where more than a single individual sees the priority since you’re trying to build consensus.
  5. Run a short game, perhaps 20 or 30 minutes in length, where people can discuss, collude, and pitch their case. This can be quite chaotic and also quite fun, and you’ll see where the seats of influence are in your team.
  6. Review the items that were bought and by what margins they were bought. You can choose to rank your list by the purchased features or, better yet, use the results of the Buy a Feature game in combination with other techniques, such as an awareness of the next release plan.
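
The accounting behind the game can be sketched as follows; the item names, prices, and bids are all illustrative:

```python
# Prices by T-shirt size (step 3); the figures are illustrative.
PRICES = {"S": 50, "M": 100, "L": 200, "XL": 400}

def purchased(items, bids):
    """Return the improvements whose pooled bids meet their price.

    items: {name: size}; bids: [(name, contribution), ...].
    Scarcity (step 4) means most items need more than one buyer.
    """
    pooled = {}
    for name, amount in bids:
        pooled[name] = pooled.get(name, 0) + amount
    return {name: pooled.get(name, 0)
            for name, size in items.items()
            if pooled.get(name, 0) >= PRICES[size]}

items = {"Automate deploys": "M", "Refactor billing": "L", "Fix flaky tests": "S"}
# Three players with $75 each: no single player can buy a medium item alone.
bids = [("Automate deploys", 60), ("Automate deploys", 50), ("Fix flaky tests", 50)]
print(purchased(items, bids))  # consensus lands on the items players pooled for
```

The review in step 6 then looks at which items were bought and by what margin, which is exactly what the returned pooled totals show.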

Taking on some judicious technical debt can be an appropriate decision to meet schedules or to prototype a new feature set, as long as the decision was made with a clear understanding of the costs involved later in the project, such as code refactoring.

As Martin Fowler says, “The useful distinction isn’t between debt or non-debt, but between prudent and reckless debt.”

[Figure: state diagram of technical debt (techdebt-state.png)]

Technical debt begets more technical debt over time; its state diagram is depicted in the figure above.

Load Testing as a Practice to Identify Technical Debt

Load testing exposes weaknesses in an application that cannot be found through traditional functional testing. Those weaknesses are generally reflected in the application’s inability to scale appropriately. Testers are also typically  already planning to perform load testing at some point prior to  the production release of the application.

Load testing involves enabling virtual users to execute predetermined actions simultaneously against the application. The  scripts exercise features either singly or in sequences expected  to be common among production users.

Load testing looks at the characteristics of an application under a simulated load, similar to the way it might operate in a production environment. At the highest level, it determines if an application will support the number of simultaneous users specified in the project requirements.

However, it does more than that. By looking at system characteristics as you increase the number of simultaneous users, you  can make some useful statements regarding what resources  are being stressed, and where in the application they are being stressed. With this information, the team can identify weaknesses in the application that are generally the result of incurring  technical debt, therefore providing the basis for identifying the  debt.
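
Dedicated tools (JMeter, LoadRunner and the like) are the usual choice, but the shape of a load test can be sketched with threads standing in for virtual users; `user_action` below is a placeholder to be replaced with a real request against the application:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def user_action():
    """One virtual user's scripted action; replace with a real request."""
    time.sleep(0.01)   # simulated service time

def load_test(virtual_users, actions_per_user=5):
    """Run predetermined actions simultaneously; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(user_action)
                   for _ in range(virtual_users * actions_per_user)]
        for future in futures:
            future.result()   # surface any failures
    return time.perf_counter() - start

# Ramp the load: if elapsed time grows faster than linearly with users,
# some resource is being stressed -- a likely site of technical debt.
for users in (1, 5, 10):
    print(users, "users:", round(load_test(users), 3), "s")
```

Against a real application, the interesting output is the trend, not any single number: the point at which response time stops scaling is where the hidden debt lives.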

Some automation and measurement tools are required to successfully identify and assess technical debt with load testing.

Coding / Testing Practices

Management has to make the time through proactive investment, but so does the team. Each team member needs to invest in their own knowledge and education: how to write clean code, their business domain, and how to do their jobs optimally. While teams learn during the project through retrospectives, design reviews and pair programming, teams should also learn agile engineering practices for design, development and testing. Whether through courses, conferences, user groups, podcasts, web sites or books, there are many options for learning better coding practices to reduce technical debt.

Design Principles and Techniques

Additionally, architects need to learn about evolutionary design principles and refactoring techniques for fixing poor designs today and building better designs tomorrow. Lastly, a governance group should meet periodically to review performance and plan future system
changes to further reduce technical debt.

Definition of Done

Establish a common “definition of done” for each requirement, user story or use case, and ensure it is validated with the business before development begins. A simple format such as “this story is done when: <list of criteria>” works well. The Product Owner presents “done” to the developers, user interface designers, testers and analysts, and together they collaboratively work out the finer implementation details. Set expectations with developers that only stories meeting “done” (as validated by the testers) will be accepted and contribute towards velocity. Similarly, set expectations with management and analysts that only stories that are “ready” are scheduled for development, to ensure poor requirements don’t cause further technical debt.

In all popular languages and platforms today, open source and commercial tools are available to automate builds, the continuous integration of code changes, unit testing, acceptance testing, deployments, database setup, performance testing and many other common manual activities. In addition to reducing manual effort, automation reduces the risk of mistakes and over-reliance on one individual for performing critical activities. First set up automated builds (Ant, NAnt or Rake), followed by continuous integration (Hudson). Next, set up automated unit testing (JUnit, NUnit or RSpec) and acceptance testing (FitNesse and Selenium). Finally, set up automated deployments (Capistrano or custom shell scripts). It’s amazing what a few focused team members can accomplish in a relatively short period of time if given time to focus on automating common activities to reduce technical debt.
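
The roll-out order described above can be sketched as a minimal pipeline runner; the `echo` commands are stand-ins for the real build, test and deployment tools:

```python
import subprocess

# Stages mirror the roll-out order in the text: build, CI tests, deployment.
# The echo commands are placeholders for Ant/Hudson/JUnit/Capistrano invocations.
PIPELINE = [
    ("build",            ["echo", "compiling"]),
    ("unit tests",       ["echo", "running unit tests"]),
    ("acceptance tests", ["echo", "running acceptance tests"]),
    ("deploy",           ["echo", "deploying"]),
]

def run_pipeline(stages=PIPELINE):
    """Run stages in order; stop at the first failure, as a CI server would."""
    completed = []
    for name, command in stages:
        result = subprocess.run(command, capture_output=True, text=True)
        if result.returncode != 0:
            return completed, name   # (finished stages, failing stage)
        completed.append(name)
    return completed, None
```

Stopping at the first failing stage is the same fail-fast behavior a continuous integration server provides: a broken build never reaches the deployment step.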

Consider rating and rewarding developers on the quality of their code. In some cases, fewer skilled developers may be better than volumes of mediocre resources whose work may require downstream reversal of debt. Regularly run code complexity reviews and technical debt assessments, sharing the results across the team. Not only can specific examples help the team improve, but trends can signal that a project is headed in the wrong direction or encountering unexpected complexity.

PEARL XIII: Effective Visual Management with Information Radiators


Information Radiators

Information Radiators, also known as Big Visible Charts, are useful quite simply because they provide an effective way to communicate project status, issues, or metrics without a great deal of effort from the team. The premise is that these displays make critical, changing information about a project accessible to anyone with enough ambition to walk over to the team area and take a look.

“An Information radiator is a display posted in a place where people can see it as they work or walk by. It shows readers information they care about without having to ask anyone a question. This means more communication with fewer interruptions.”

An Information Radiator:

  • Is large and easily visible to the casual, interested observer
  • Is understood at a glance
  • Changes periodically, so that it is worth visiting
  • Is easily kept up to date

“Information Radiator” is a popular term invented by Alistair Cockburn that is used to describe any artifact that conveys project information and is publicly displayed in the workspace or surroundings. Information radiators are very popular in the Agile world, and they are an essential component of visual management. Most Agile teams recognize the value of information radiators and implement them to some degree in their processes.

Information radiators take on many different shapes and sizes. Traditional implementations range from hand-drawn posters of burndown charts to coloured sticky notes on a whiteboard.

Many development teams create elaborate information radiators using electronic devices like traffic lights or glowing orbs to indicate build status. Most teams, including many at Atlassian, roll their own electronic wallboards to pull real-time data from development tools for display on large monitors over the team workspace.

Visual control is a business management technique employed in many places where information is communicated by using visual signals instead of texts or other written instructions. The design is deliberate in allowing quick recognition of the information being communicated, in order to increase efficiency and clarity. These signals can be of many forms, from different coloured clothing for different teams, to focusing measures upon the size of the problem and not the size of the activity, to kanban, obeya and heijunka boxes and many other diverse examples. In The Toyota Way, it is also known as mieruka.

The three most popular information radiators are

  • Task Boards,
  • Big Visible Charts (which includes burn downs and family) and
  • Continuous Integration build health indicators (including lava lamps and stolen street lights).

Task Boards

[Figure: mocked-up task board]

The most important information radiator in visual management is the Task Board. (In Scrum, Agilists sometimes refer to task boards as Scrum boards.) The task board has the mission of visually representing the work being done by the team. Task boards are the most complex and versatile artifact: a physical task board is a “living” entity that has to be manually maintained. Task boards are undervalued by most agile teams today. This might be because there has not been much focus on their potential, or perhaps there are simply not many examples around of what makes a great task board. In any case, it is time to take task boards to the next level.

In its most basic form, a task board can be drawn on a whiteboard or even a section of wall. Using electrical tape or a dry erase pen, the board is divided into three columns labeled “To Do”, “In Progress” and “Done”. Sticky notes or index cards, one for each task the team is working on, are placed in the columns reflecting the current status of the tasks.

Many variants exist. Different layouts can be used, for instance by rows instead of columns (although the latter is much more common). The number and headings of the columns can vary, further columns are often used for instance to represent an activity, such as “In Test”.

The task board is updated frequently, most commonly during the daily meeting, based on the team’s progress since the last update. The board is commonly “reset” at the beginning of each iteration to reflect the iteration plan.
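
The three-column board and its card movements can be sketched in a few lines; the task names are illustrative:

```python
class TaskBoard:
    """Minimal three-column task board, as described in the text."""
    COLUMNS = ("To Do", "In Progress", "Done")

    def __init__(self, tasks):
        self.columns = {name: [] for name in self.COLUMNS}
        self.columns["To Do"] = list(tasks)

    def move(self, task, to):
        """Move a card to another column, as done during the daily meeting."""
        for cards in self.columns.values():
            if task in cards:
                cards.remove(task)
                self.columns[to].append(task)
                return
        raise ValueError(f"no such task on the board: {task}")

    def reset(self, tasks):
        """'Reset' the board at the start of an iteration to the new plan."""
        self.__init__(tasks)

board = TaskBoard(["write login page", "fix search bug"])
board.move("write login page", "In Progress")
```

Variants with extra columns (such as “In Test”) only change the `COLUMNS` tuple; the movement and reset mechanics stay the same.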

Expected Benefits
  • The task board is an “information radiator”: it ensures efficient diffusion of the information relevant to the whole team
  • The task board serves as a focal point for the daily meeting, keeping it focused on progress and obstacles
  • The simplicity and flexibility of the task board and its elementary materials (sticky notes, sticky dots etc.) allow the team to represent any relevant information: colors can be used to distinguish features from bug fixes, sticky orientation can be used to convey special cases such as blocked tasks, sticky dots can be used to record the number of days a task spends “In Progress”…
Common Pitfalls
  • Many teams new to Agile rush to adopt an electronic simulation (“virtual task board”) without first getting significant experience with a physical task board, even though virtual boards are much less flexible and poorer in affordances
  • Even geographically distributed teams for whom a virtual task board is a necessity can benefit from using physical task boards locally and replicating the information in an electronic tool
Origins

Sticky notes or index cards had been used for visual management of project scheduling well before Scrum and Extreme Programming brought these “low tech” approaches and their benefits back into the spotlight. However, the precise format of the task board described here did not become a de facto standard until the mid-2000s.

  • 2003: the five-column task board format is described by Mike Cohn on his Web site; at the time, as this photo gallery collected by Bill Wake shows, very diverse variants still abound
  • 2007: the simplified three-column task board format (“To Do”, “In Progress”, “Done”) becomes, around that time, more popular and more standard than the original five-column version

What makes a great Task Board
A good task board should be carefully designed with readability and usability in mind, and the project methodology should actively rely on it. This implies that the use of the task board should be standardized and form part of the process. If task boards (and other information radiators) are not an integral part of the project methodology, maintaining them might be perceived as overhead or duplication of work. This results in boards not being updated and becoming out of sync with the work actually being done. An incomplete or stale task board is worthless.

A task board is a living entity and should be kept healthy.

You have a great task board if…

  • Team members never complain about having to use it.
  • The daily standup happens against it.
  • Random people that pass by stop to look at it, expressing interest and curiosity.
  • Your boss has proudly shown it to his boss.
  • You see team members updating it regularly during the day.
  • It passes the hallway usability test: a person who has never seen it before can understand it quickly and without explanations.
  • You catch a senior manager walking the floor and looking at it.
  • It just looks great!

Information Radiators are also good ways to remind the team of critical items, such as issues that need to be addressed, items on which the team is currently working, key models for the system on which they are working, and the status of testing.

Depending on the type of information tracked on the Information Radiators, these displays can also help the team to identify problems early. This is especially true if the team is tracking key metrics about their performance where trends in the information will indicate something is out of whack for the team. This type of information includes passing and failing tests, completed functionality, and task progress.

Using Information Radiators

As a team, determine what information would be very helpful to see plastered on a wall in plain sight. The need for an Information Radiator may be identified at the very beginning of a project, or as a result of feedback generated during a retrospective. Ideally, it will communicate information that needs to go to a broad audience, changes on a regular basis, and is relevant for the team.

Decide not only what you want to show, but the best way to convey it. There are a variety of methods to choose from, including a whiteboard and markers, sticky notes, pins, dots, or a combination of all of the above. Anything goes, as long as it is not dependent on a computer and some fancy graphics software. Unless of course you are working with a distributed team; see suggestions for that situation below.

Grab the necessary tools and get to work, but don’t forget to have a little fun with the creation process. Remember to make the Information Radiator easy to read, understand, and update. You want this to be a useful, living display of information, so don’t paint yourself into a corner at the beginning.

Remember to update the Information Radiator when the information changes. If you are using it to track tasks, you may change it several times a day. If you are using it to track delivery of features, it may be updated once a week or every two weeks.

Check in with the team regularly to find out if the Information Radiator is up to date and still useful. Find out if people outside the team are using it to gather information about the team’s progress without causing an interruption. Find out if there are possible improvements, or if the Information Radiator is no longer needed. Whatever feedback you receive, act on it.

Traffic lights, lava lamps

Information radiators can take a variety of forms, from wallboards to lava lamps and traffic lights.

A wallboard is a type of information radiator that displays vital data about the progress of the development team. Similar to a scoreboard at a sporting event, wallboards are large, highly visible and easy to understand for anyone walking by.

Traditional wallboards are made of paper or use sticky notes on a wall. Electronic wallboards are very effective since they update automatically with real-time data ensuring that people check back regularly.

Common information radiators used in a Scrum environment include:

  • Product Vision
  • Product Backlog/release plan
  • Iteration Backlog
  • Burn Down and Burn Up charts
  • Impediment List

In a Lean environment, information radiators take a specialized form called a visual control. Visual controls are used to make it easier to control an activity or process through a variety of visual signals or cues.

The visual control:

  • Conveys its information visually 
  • Mirrors (at least some part of) the process being used by the team
  • Describes the state of the work‐in‐process
  • Is used to control the work‐in‐process
  • Can be viewed by anyone

Visual controls should be present at all levels: business, management, and team. That is, they should help the business see how value is being created as well as assist management and the team in building software as effectively and efficiently as possible.
The visual controls used in Lean‐Agile include

  • Product Vision: Every Agile team should have a product vision. This provides the big picture for the product: what is motivating the development effort, what the current objectives are, and the key features. We have seen teams who were flailing about, trying to figure out what they were supposed to be doing, suddenly gain focus when the Product Champion produced a product vision statement
  • Product Backlog / Release Plan / Lean Portfolio
  • Iteration Backlog – simple team, multiple teams
  • Story Point Burn‐Up Chart
  • Iteration Burn‐Down Chart
  • Business Value Delivered Chart
  • Impediment List

A backlog is the accumulation of work that has to be done over time. In Lean‐Agile, the product backlog describes that part of the product that is still to be developed. Before the first iteration, it shows every piece of information that is known about the product at that time, represented in terms of features and stories. As each iteration begins, some of these stories move from the product backlog to the iteration backlog. At the end of each iteration, the completed features move off of both backlogs.
During “Iteration 0,” the features in the product backlog are organized to reflect the priorities of the business: higher‐priority features on the left and lower‐priority features on the right.

 

The entire enterprise (business, management, and development teams) also need line of sight to velocity (points/time), which, after the first few stories are done, should begin to represent the velocity of business solutions delivered. This is a clear representation of the value stream (from flow of ideas to completed work) and is mapped to the number of points that teams can sustainably deliver in a release.

We can use two visual controls working as a pair to give management a dashboard‐type view of work . The release burn‐up chart tracks cumulative points delivered with each iteration. This can be broken out or rolled up based on program, stakeholder, and so on. The feature burn‐up chart is created by using the initial high‐level estimate to calculate the percent complete after each iteration and plotting this with all features currently identified in the release plan.
This view gives the enterprise a clear visual control of current priorities, along with what work is actually in process. A Lean enterprise continuously watches out for too many features being worked on at any time—this indicates process problems and potential thrashing.
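
The two series behind this dashboard pair are simple to compute; the point values below are illustrative:

```python
def release_burn_up(points_per_iteration):
    """Release burn-up: cumulative story points delivered after each iteration."""
    total, series = 0, []
    for points in points_per_iteration:
        total += points
        series.append(total)
    return series

def feature_percent_complete(done_points, estimated_points):
    """Feature burn-up data point: percent complete against the
    initial high-level estimate, capped at 100%."""
    return min(100.0, 100.0 * done_points / estimated_points)

print(release_burn_up([8, 13, 10]))      # cumulative points per iteration
print(feature_percent_complete(30, 40))  # one feature's percent complete
```

Plotting the first series per release, and the second per feature, yields exactly the pair of visual controls described above.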

The Impediment List: A foundation of both Scrum and Lean‐Agile is that continuous improvement includes continually removing impediments. One of the purposes of the daily meeting is to expose impediments, to make them explicit. The Scrum Master or Agile project manager must maintain a list of current impediments so that progress on resolving them can be visible to all. Entries on this list should include:

  • Date entered on list
  • Description of impediment
  • Severity of impediment (what its impact is)
  • Who it affects
  • Actions being taken to remove it

The impediment list should be maintained on an ongoing basis as impediments are removed, added, or modified in some way.
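
The entry fields listed above map naturally onto a small record type; a sketch, with illustrative field values:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Impediment:
    """One entry on the impediment list; fields follow the list above."""
    entered: date                                 # date entered on the list
    description: str
    severity: str                                 # what its impact is
    affects: list                                 # who it affects
    actions: list = field(default_factory=list)   # actions being taken

def open_impediments(entries, resolved_descriptions):
    """Maintain the list on an ongoing basis: drop removed impediments."""
    return [e for e in entries if e.description not in resolved_descriptions]
```

Keeping the list as structured records makes it trivial to radiate it, sorted by severity or age, on the team's board.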

Kanban Board

In the context of Agile teams where the “Kanban method” of continuous improvement (or some of its concepts) has been followed, the following adaptations are often seen:

  • Such teams deemphasize the use of iterations, effort estimates and velocity as a primary measure of progress;
  • They rely on measures of lead time or cycle time instead of velocity; and in the most visible alteration, they replace the task board with a “kanban board”:
  • Unlike a task board, the kanban board is not “reset” at the beginning of each iteration; its columns represent the different processing states of a “unit of value”, which is generally (but not necessarily) equated with a user story.
  • In addition, each column may have associated with it a “WIP limit” (for “work in process” or “work in progress”): if a given state, for instance “in manual testing”, has a WIP limit of, say, 2, then the team “may not” start testing a third user story if two are already being worked on.
  • Whenever such a situation arises, the priority is to clear current work-in-process, and team members will “swarm” to help those working on the activity that’s blocking flow
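
The WIP-limit rule described above can be sketched directly: pulling a story into a full column is refused until work there is finished. Column names and limits are illustrative:

```python
class KanbanBoard:
    """Kanban board whose columns carry WIP limits (None means no limit)."""
    def __init__(self, limits):
        # limits: ordered {column_name: wip_limit or None}
        self.limits = dict(limits)
        self.columns = {name: [] for name in limits}

    def pull(self, story, into):
        """Pull a story into a column only if its WIP limit allows it."""
        limit = self.limits[into]
        if limit is not None and len(self.columns[into]) >= limit:
            raise RuntimeError(f"WIP limit reached in {into!r}: "
                               "finish work there before starting more")
        for cards in self.columns.values():
            if story in cards:
                cards.remove(story)
        self.columns[into].append(story)

board = KanbanBoard({"backlog": None, "in development": 3,
                     "in manual testing": 2, "done": None})
```

A refused pull is exactly the signal for the team to “swarm”: clear the blocked column first, then the pull succeeds.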
Also Known As

The term “kanban” is Japanese, in the sense of a sign, poster or billboard, and derives from roots that literally translate as “visual board”.

Its meaning within the Agile context is borrowed from the Toyota Production System, where it designates a system to control the inventory levels of various parts. It is analogous to (and in fact inspired by) cards placed behind products on supermarket shelves to signal “out of stock” items and trigger a resupply “just in time”.

The Toyota system affords a precise accounting of inventory or “work in process”, and strives for a reduction of inventory levels, considered wasteful and harmful to performance.

The phrase “Kanban method” also refers to an approach to continuous improvement which relies on visualizing the current system of work scheduling, managing “flow” as the primary measure of performance, and whole-system optimization – as a process improvement approach, it does not prescribe any particular practices.

Common Pitfalls

Kanban boards are generally more sophisticated than “mere” task boards. This is not a mistake in and of itself; however, the kanban board should not serve as a pretext for reintroducing a “waterfall”-like, linear sequence of activities into the process of software development. That can lead to the creation of information silos or over-specialization among team members.

In particular, teams should be wary of kanban boards whose WIP limits are not only defined but also enforced with respect to demands from managers, customers or other stakeholders. It is from these limits that the kanban approach derives its effectiveness.

Expected Benefits

In some contexts, measuring lead time rather than velocity, and dispensing with the regular rhythm of iterations, may be the more appropriate choice: for instance, when there is little concern with achieving a specific release date, or when the team’s work is by nature continuous and ongoing, such as enhancement or maintenance, in particular of more than one product.

At the risk of oversimplifying, a “kanban board” setup can be considered first for efforts involving maintenance or ongoing evolution, whereas a “task board” setup may be a more natural first choice in efforts described as “projects”.

Origins
  • 2001: Mary Poppendieck’s article, “Lean Programming”, draws attention to the structural parallels between Agile and the ideas known as Lean or the “Toyota Production System”
  • 2003: expanding on their earlier work on Lean Programming, Mary and Tom Poppendieck’s book “Lean Software Development” describes the Agile task board as a “software kanban system”
  • 2007: the first few experience reports from teams using the specific set of alterations known as “kanban” (no iterations, no estimates, continuous task boards with WIP limits) are published, including reports from Corbis (David Anderson) and BueTech (Arlo Belshee)
  • 2007: the “kanbandev” mailing list is formed to provide a venue for discussion of kanban-inspired Agile planning practices
  • 2009: two entities dedicated to exploring the kanban approach are formed, one addressing business concerns, the LSSC and a more informal one aimed at giving the community more visibility: the Limited WIP Society
Flow of Work

After visualizing the work, you will be able to watch how it moves across the board. Kanban teams call this observing the Flow of Work. When there are bottlenecks in the process, or blocking issues that prevent work from being completed, you start to see it play out on the board.

Stop Starting and Start Finishing!

Kanban’s primary mechanism for improving the flow of work is the Work-in-Process Limit. You basically set a policy for the team, saying that we’ll limit how much work is started, but not finished, at any one time. When the board starts to fill up with too much unfinished work, team members re-direct their attention and collaborate to help get some of the work finally finished, before starting any more new work.

On a Kanban board, you visualize the flow of work. You can visualize how the work is flowing during the sprint through the very fast iterations of design, develop, and test that happen within each sprint.  Or, you can let the flow of work be governed by the kanban board and replace the sprint container with a regular deployment cadence instead. It’s a minor difference on the surface, but it can have a major impact.

With the sprint approach, you spend time at the beginning of every sprint trying to estimate and plan a batch of work for the entire sprint. Then you work on it and push hard to have the entire batch completed, tested and deployed by the end of the sprint.

When you’re focusing on flow and using Kanban, you could still deploy completed code every two weeks. But instead of going through the whole cycle of planning, estimating and moving a batch of work through to completion, you can move work through the system without batching two weeks of work at a time. Planning and estimating, designing, building and testing can happen for each individual item as it reaches the top of the priority list in the backlog. When there’s capacity on the Kanban board for more work (WIP limits are being honored, and people are available to do more work), they pull the next item to work on. When the two-week cadence comes around, you simply deliver whatever is ready to deliver.  This encourages members of the team to take each item all the way through to completion individually, instead of focusing on having a two-week batch of work completed at one time. You ensure that each item is Done. With a capital D. “Done-Done”, some people call it. And you get it to “Done-Done” before pulling the next item available for you to work on.

The Agile practice of iterative delivery is a huge improvement over old waterfall or stage-gate project management methods. Focusing on Flow using Kanban can sometimes be even more efficient, resulting in shorter lead times and higher productivity. It can lessen the feeling of always “starting and stopping” without abandoning the value of having a regular cadence to work toward.
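
Lead time, the flow measure these teams rely on instead of velocity, is straightforward to compute; the dates below are illustrative:

```python
from datetime import date

def lead_time(started, delivered):
    """Calendar days from starting a unit of value to delivering it."""
    return (delivered - started).days

def average_lead_time(intervals):
    """Mean lead time over recent items: [(started, delivered), ...]."""
    times = [lead_time(s, d) for s, d in intervals]
    return sum(times) / len(times)

recent = [(date(2024, 3, 1), date(2024, 3, 8)),
          (date(2024, 3, 2), date(2024, 3, 5))]
print(average_lead_time(recent))  # days; shorter is better flow
```

Tracking this average over time shows whether WIP limits and swarming are actually shortening the path from “started” to “Done-Done”.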

Parking Lot Chart

“Parking Lot Chart” is used to provide a top-level digested summary of project status (not to be confused with a “Parking Lot List,” a tool facilitators use to capture unresolved issues). It is first described in Feature Driven Development (FDD) [Palmer02], and is widely used in agile projects today. It is sometimes also called a “Project Dashboard”.

Niko-niko Calendar

The team installs a calendar on one of the room’s walls. The format of the calendar allows each team member to record, at the end of every workday, a graphic evaluation of their mood during that day. This can be either a hand-drawn “emoticon”, or a colored sticker, following a simple color code, for instance: blue for a bad day, red for neutral, yellow for a good day.

Over time, the niko-niko calendar reveals patterns of change in the moods of the team, or of individual members.
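As a rough illustration of how those patterns might be surfaced, here is a sketch that scores and aggregates calendar entries per day; the data layout and the numeric scoring are assumptions made for the example:

```python
# Illustrative sketch: aggregating niko-niko entries to reveal mood trends.
from collections import defaultdict

MOOD_SCORE = {"bad": -1, "neutral": 0, "good": 1}

# (day, team member, recorded mood) -- invented sample data
entries = [
    ("2024-03-04", "alice", "good"), ("2024-03-04", "bob", "bad"),
    ("2024-03-05", "alice", "good"), ("2024-03-05", "bob", "neutral"),
]

def daily_average(entries):
    """Average mood score per day; a rising series suggests improving morale."""
    totals = defaultdict(list)
    for day, _member, mood in entries:
        totals[day].append(MOOD_SCORE[mood])
    return {day: sum(scores) / len(scores) for day, scores in totals.items()}

print(daily_average(entries))  # {'2024-03-04': 0.0, '2024-03-05': 0.5}
```

Even this crude averaging illustrates the point made below: an imperfect quantification of a subjective signal is better than no measurement at all.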

Also Known As

The Japanese word “niko” means “smile”; following a common pattern of word doubling in Japanese, “niko-niko” has a meaning closer to “smiley”.

The term “mood board” is also seen. It is an information radiator.

Expected Benefits

The value of this practice lies in making somewhat objective an important element of team performance – motivation or well-being – which is generally seen as entirely subjective and thus impossible to measure and track.

This may be seen as an illustration of the Gilb Measurability Principle: “anything you need to quantify can be measured in some way that is superior to not measuring it at all.”

In other words, a measurement does not have to be perfect or even very precise, as long as your intent is to get a quantitative handle on something that was previously purely qualitative; the important thing is to take that first step toward quantifying.

Common Pitfalls

As with other activities, such as retrospectives, where team members are asked to report subjective feelings, self-censorship is always a risk. This could be the case, for instance, if team members who report poor days are blamed for “whining”, by management or by team mates.

Origins
  • 2001: among the visualizations described in Norm Kerth’s “Project Retrospectives”, the “Energy Seismograph” can perhaps be seen as a forerunner of the niko-niko calendar
  • 2006: niko-niko calendars are first described by Akinori Sakata in this Web article

PEARL XXII : Threat Modelling for Agile Applications

PEARL XXII : The key to effectively incorporating threat modeling is to decide on the scope of the threat modeling that will be performed during the various stages of an agile development project, by adopting the Security Development Lifecycle for Agile Development projects, which includes threat modelling.

Many software development organizations, including product organizations like Microsoft, use Agile software development and management methods to build their applications.
Historically, security has not been given the attention it needs when developing software with Agile  methods. Since Agile methods focus on rapidly creating features that satisfy customers’ direct needs, and  security is a customer need, it’s important that it not be overlooked. In today’s highly interconnected  world, where there are strong regulatory and privacy requirements to protect private data, security must  be treated as a high priority.
There is a perception today that Agile methods do not create secure code, and, on further analysis, the  perception is reality. There is very little “secure Agile” expertise available in the market today. This needs  to change. But the only way the perception and reality can change is by actively taking steps to integrate  security requirements into Agile development methods.

With Agile release cycles taking as little as one week, there simply isn’t enough time for teams to  complete SDL requirements for every release. On the other hand, there are serious security  issues that the SDL is designed to address, and these issues simply can’t be ignored for any release—no  matter how small.

A set of software development process improvements called the Security Development Lifecycle (SDL) has been developed. The SDL has been shown to reduce the number of vulnerabilities in shipping software by more than 50 percent. However, from an Agile viewpoint, the SDL is heavyweight because it was designed primarily to help secure very large products. For Agile practitioners to adopt the SDL, two changes must be made. First, SDL additions to Agile processes must be lean. This means that for each feature, the team does just enough SDL work for that feature before working on the next one. Second, the development phases (design, implementation, verification, and release) associated with the classic waterfall-style SDL do not apply to Agile and must be reorganized into a more Agile-friendly format. A streamlined approach that melds agile methods and security, the Security Development Lifecycle for Agile Development (SDL-Agile), has to be put into practice.

Let’s look at some basic examples of what threat modeling is and what it isn’t:

Definitions of what is and isn't involved in threat modeling

How to perform threat modeling
Threat modeling is largely a team activity. Does that mean an individual can’t threat model alone on a small project? Of course not, but a team will get the greatest benefit from the activity by including as many project members as possible. This is because each member brings a unique perspective to the exercise, and that input is essential when trying to identify the various ways that an attacker might attempt to break your application or service.

The recommended approach is to use a whiteboard to sketch out each threat model, thereby facilitating team discussion around the diagram. The SDL is specific in recommending that threat models use the data flow diagram (DFD) as the foundational diagramming technique. Having a project member who is already familiar with data flow diagramming can be a big help, and that individual would be best suited to facilitate the first threat modeling session. So, the first step in threat modeling is to draw a DFD on the whiteboard that represents your envisioned application. Next, let’s look at the various threat modeling diagram types.

The highest-level diagram you can create is the context diagram. The context diagram is typically a simple diagram that captures the most basic interactions of your application with external actors. Figure 1 is an example of a context diagram. Following the context diagram, there are a variety of more detailed diagrams that can show more specific interactions of your application’s internal components and actors. The Diagram Levels table below lists the various levels of diagrams, along with descriptions of what each level typically contains.

Threat model diagram
Figure 1. Context diagram

Diagram levels

Once you have created a few of these high-level diagrams with your team, the next step is to identify the trust boundaries that we discussed earlier. Trust boundaries are represented by drawing lines that separate the various zones of trust in your diagram. It is the action of applying the trust boundary that changes the DFD into a threat model. It’s critical that you get the trust boundaries correct because that is where your team will focus its attention when discussing threats to the application. Figure 2 shows a Level 0 threat model with trust boundaries included.

  • Identify Trust Boundaries, Data Flow, Entry Point
  • Privileged Code
  • Modify Security Profile based on above

Figure 2. Level 0 diagram with trust boundaries
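To make the trust-boundary idea concrete, here is a minimal sketch that flags the data flows in a DFD that cross between trust zones; the flow structure and zone names are invented for illustration:

```python
# Sketch (hypothetical structure): flag DFD data flows that cross a trust
# boundary, since those crossings are where threat discussion should focus.

flows = [
    # (source, destination, source zone, destination zone)
    ("Browser", "Web App", "internet", "dmz"),
    ("Web App", "Database", "dmz", "internal"),
    ("Web App", "Cache", "dmz", "dmz"),
]

def boundary_crossings(flows):
    """Only flows between different trust zones cross a boundary."""
    return [(src, dst) for src, dst, z1, z2 in flows if z1 != z2]

print(boundary_crossings(flows))
# [('Browser', 'Web App'), ('Web App', 'Database')]
```

The Web App to Cache flow stays inside one zone and so draws less attention; the two boundary-crossing flows are where the team spends its discussion time.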

Once you’ve created the threat model, your team will be ready to start discussing the threats to the application as envisioned in deployment. In order to do this efficiently, the SDL has categorized potential threats along with the most common mitigations to those threats in order to simplify the process. For a more complete discussion on threat models, refer to the book Threat Modeling by Frank Swiderski and Window Snyder.

Let’s next examine the process you can follow to identify, classify, and mitigate threats to your application, using a framework called STRIDE.

STRIDE

STRIDE is a framework for classifying threats during threat modeling. STRIDE stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. The table below provides basic descriptions of the elements of STRIDE.

STRIDE definitions

Notice how each of the definitions is described in terms of what an “attacker” can do to your application. This is an important point: when threat modeling, it’s crucial to convey to your project team that they should be more focused on what an attacker can do than on what a legitimate user can do. What your users can do with your application should be captured in your user stories, not in the threat model.

Now that you’ve created the threat model and discussed the threats to the various components and interfaces, it’s time to determine how to help protect your application from those threats. We call these mitigations. The following table lists the mitigation categories with respect to each of the STRIDE elements.

STRIDE and mitigation categories
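The classic STRIDE-to-mitigation-category pairing (authentication counters spoofing, integrity counters tampering, and so on) can be captured as a simple lookup that a review checklist or script might use:

```python
# The standard STRIDE-to-mitigation-category mapping, encoded as a lookup.

STRIDE_MITIGATIONS = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def mitigation_for(threat_type):
    """Suggest the mitigation category for a classified threat."""
    return STRIDE_MITIGATIONS[threat_type]

print(mitigation_for("Tampering"))  # Integrity
```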

The basics of threat modeling have been depicted above; the following section delves into how to incorporate threat modeling with agile methods. Additionally, it’s important to note that once threat models have been created on the whiteboard, it is essential to capture them as a drawing. Microsoft provides an excellent tool for this purpose: the Microsoft SDL Threat Modeling Tool. This tool allows you to easily create threat model diagrams, and provides a systematic approach to identifying threats based on trust boundaries and evaluating common mitigations to those threats.

At some point, the major SDL artifact—the threat model—must be used as a baseline for the product.  Whether this is a new product or a product already under development, a threat model must be built as  part of the sprint design work. Like many good Agile practices, the threat model process should be time-boxed and limited to only the parts of the product that now exist or are in development.
Once a threat model baseline is in place, any extra work updating the threat model will usually be small,  incremental changes.
A threat model is a critical part of securing a product because a good threat model helps to:

  • Determine potential security design issues.
  • Drive attack surface analysis and most “at-risk” components.
  • Drive the fuzz-testing process.

During each sprint, the threat model should be updated to represent any new features or functionality  added during that sprint. The threat model should also be updated to represent any significant design  changes, even if the functionality stays the same.

A threat model needs to be built for the current product, but it is imperative that the team remains lean. A minimal, but useful, threat model can be built by analyzing high-risk entry points and data in the system. At a minimum, the following should be identified and threat models built around the entry
points and data:

  • Anonymous and remote network endpoints
  • Anonymous or authenticated local endpoints into high-privileged processes
  • Sensitive, confidential, or personally identifiable data held in data stores used in the application

Integrated threat modeling

Integrated threat modeling ensures that the dedicated security architect liaises with the QA lead and development lead, and periodically identifies, documents, rates, and reviews the threats to the system. The key point to be noted is that threat modeling is not a one-time activity; rather, it is an iterative process that needs to be carried out in every sprint.

  • Modify the architecture diagram based on changes in the sprint (if any)
  • Periodically review and identify assets
  • Update the threat document with vulnerabilities
  • Identify threat attributes such as countermeasures, attack techniques, etc.
  • Use techniques such as DREAD to rate the threats
  • Threats with a high rating must be fixed immediately in subsequent sprints
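As an illustration of the DREAD technique mentioned above, here is a sketch that sums the five DREAD factors into a High/Medium/Low rating; the 1-3 scale per factor and the thresholds are illustrative assumptions, not a standard:

```python
# Sketch of DREAD scoring: Damage, Reproducibility, Exploitability,
# Affected users, Discoverability, each rated 1-3 here (illustrative scale).

def dread_rating(damage, reproducibility, exploitability, affected, discoverability):
    total = damage + reproducibility + exploitability + affected + discoverability
    if total >= 12:
        return "High"    # per the practice above: fix in subsequent sprints
    if total >= 8:
        return "Medium"
    return "Low"

print(dread_rating(3, 3, 2, 3, 2))  # High
print(dread_rating(1, 1, 2, 1, 1))  # Low
```

The point is not the exact thresholds but that a shared, repeatable scoring rule lets the team rank threats consistently from sprint to sprint.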

Training Development Team

Focusing on improving processes and giving staff better awareness training could reap huge rewards, cutting the time taken to spot breaches and even preventing many from happening in the first place. Therefore it is suggested that the development team be given sufficient training in secure code development, at least covering the following:

  • Cross-site scripting vulnerabilities
  • SQL injection vulnerabilities
  • Buffer/integer overflows
  • Input validation
  • Language-specific issues

Also, it would help if developers periodically reviewed industry standards, such as the OWASP Top 10 vulnerabilities, and understood how to develop code devoid of such vulnerabilities.

Continuing Threat Modeling
Threat modeling is one of the every-sprint SDL requirements for SDL-Agile. Unlike most of the other every-sprint requirements, threat modeling is not easily automated and can require significant team effort. However, in keeping with the spirit of agile development, only new features or changes being implemented in the current sprint need to be threat modeled in the current sprint. This helps to minimize the amount of developer time required while still providing all the benefits of threat modeling.
Fuzz Testing
Fuzz testing is a brutally effective security testing technique, especially if the team has never used fuzz testing on the product. The threat model should determine what portions of the application to fuzz test. If no threat model exists, the initial list should include the high-risk items identified below under high-risk code.
After this list is complete, the relative exposure of each entry point should be determined, and this drives the order in which entry points are fuzzed. For example, remotely accessible or unauthenticated endpoints are higher risk than local-only or authenticated endpoints.
The beauty of fuzz testing is that once a computer or group of computers is configured to fuzz the application, it can be left running, and only crashes need to be analyzed. If there are no crashes from the outset of fuzz testing, the fuzz test is probably inadequate, and a new task should be created to analyze why the fuzz tests are failing to find crashes and to make the necessary adjustments.
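A minimal mutation fuzzer along these lines can be sketched as follows; the `parse` function stands in for the real entry point under test, and its crash condition is contrived for the example:

```python
# Minimal mutation-fuzzing sketch: mutate valid input bytes at random and
# record any input that crashes the (stand-in) parser under test.
import random

def parse(data: bytes):
    # Stand-in for the real parser; raises on one contrived malformed shape.
    if len(data) > 2 and data[0] >= 0x80:
        raise ValueError("unhandled header")
    return len(data)

def fuzz(seed_input: bytes, iterations=1000, rng=None):
    rng = rng or random.Random(42)  # fixed seed keeps crashes reproducible
    crashes = []
    for _ in range(iterations):
        data = bytearray(seed_input)
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)   # single-byte mutation
        try:
            parse(bytes(data))
        except Exception:
            crashes.append(bytes(data))  # only crashes need analysis
    return crashes

crashes = fuzz(b"\x00\x01\x02\x03")
print(f"{len(crashes)} crashing inputs found")
```

Real fuzzing campaigns use far smarter mutation and coverage feedback, but even this loop shows the key property: it runs unattended and surfaces only the inputs worth analyzing.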
Using a Spike to Analyze and Measure Unsecure Code in Bug-Dense and “At-Risk” Code
A critical indicator of potential security bug density is the age of the code. Based on the experience of developers and testers at organizations like Microsoft, the older the code, the higher the number of security bugs found in it. If the project has a large amount of legacy or risky code, the team should locate as many vulnerabilities in this code as possible. This is achieved through a spike. A spike is a time-boxed “side project” with a well-defined goal (in this case, to find security bugs), and can be thought of as a mini security push. The goal of the security push in organizations like Microsoft is to bring risky code up to date in a short amount of time relative to the project duration.
Note that the security push doesn’t propose fixing the bugs yet, but rather analyzing them to determine how bad they are. If a lot of security bugs are found in code with network connections or in code that handles sensitive data, these bugs should not only be fixed soon, but another spike should also be set up to comb the code more thoroughly for more security bugs.

The following defines the highest-risk code: code that should receive greater scrutiny if it is legacy code, and that should be written with the greatest care if it is new code.

  • Windows services and *nix daemons listening on network connections
  • Windows services running as SYSTEM or *nix daemons running as root
  • Code listening for unauthenticated network connections
  • ActiveX controls
  • Browser protocol handlers (for example, about: or mms:)
  • setuid root applications on *nix
  • Code that parses data from untrusted (non-admin or remote) files
  • File parsers or MIME handlers

Examples of analysis performed during a spike include:

  • All code. Search for input validation failures leading to buffer overruns and integer overruns. Also, search for insecure passwords and key handling, along with weak cryptographic algorithms.
  • Web code. Search for vulnerabilities caused through improper validation of user input, such as cross-site scripting (XSS).
  • Database code. Search for SQL injection vulnerabilities.
  • Safe for scripting ActiveX controls. Review for C/C++ errors, information leakage, and dangerous operations.

All appropriate analysis tools available to the team should be run during the spike, and all bugs triaged and logged. Critical security bugs, such as a buffer overrun in a networked component or a SQL injection vulnerability, should be treated as high-priority unplanned items.
Exceptions
The SDL requirement exception workflow is somewhat different in SDL-Agile than in the classic SDL. Exceptions in SDL-Classic are granted for the life of the release, but this won’t work for Agile projects. A “release” of an Agile project may only last for a few days until the next sprint is complete, and it would be a waste of time for project managers to keep renewing exceptions every week. To address this issue, project teams following SDL-Agile can choose to apply for an exception either for the duration of the sprint (which works well for longer sprints) or for a specific amount of time, not to exceed six months (which works well for shorter sprints). When reviewing the requirement exception, the security advisor can choose to increase or decrease the severity of the exception by one level (and thus increase or decrease the seniority of the manager required to approve the exception) based on the requested exception duration.
For example, say a team requests an exception for a requirement normally classified as severity 3, which requires manager approval. If they request the exception only for a very short period of time, say two weeks, the security advisor may drop the severity to a 4, which requires only approval from the team’s security champion. On the other hand, if the team requests the full six months, the security advisor may increase the severity to a 2 and require signoff from senior management due to the increased risk.
In addition to applying for exceptions for specific requirements, teams can also request an exception for an entire bucket. Normally teams must complete at least one requirement from each of the bucket categories during each sprint, but if a team cannot complete even one requirement from a bucket, the team requests an exception to cover that entire bucket. The team can request an exception for the duration of the sprint or for a specific time period, not to exceed six months, just as for single exceptions. However, due to the broad nature of the exception (basically stating that the team is going to skip an entire category of requirements), bucket exceptions are classified as severity 2 and require the approval of at least a senior manager.
Final Security Review
A Final Security Review (FSR) similar to the FSR performed in the classic waterfall SDL is required at the end of every agile sprint. However, the SDL-Agile FSR is limited in scope: the security advisor only needs to review the following:

  • All every-sprint requirements have been completed, or exceptions for those requirements have been granted.
  • At least one requirement from each bucket requirement category has been completed (or an exception has been granted for that bucket).
  • No bucket requirement has gone more than six months without being completed (or an exception  has been granted).
  • No one-time requirements have exceeded their grace period deadline (or exceptions have been granted).
  • No security bugs are open that fall above the designated severity threshold (that is, the security bug bar).

Some of these tasks may require manual effort from the security advisor to ensure that they have been  completed satisfactorily (for example, threat models should be reviewed), but in general, the SDL-Agile  FSR is considerably more lightweight than the SDL-Classic FSR.

PEARL VII : Agile metrics measurement for Software Quality

PEARL VII :

The metrics for measuring software quality in an agile environment are drastically different from those used in traditional IT landscapes. How to effectively measure agile software development quality in an agile enterprise using agile metrics is dealt with here.

 

Agile software development grew in part from the intersection of erstwhile alternative practices that predated it, and even more derivatives have sprung up throughout the years. This has led to the evolution of Agile in its current state, wherein all the defined practices become a catalog from which development teams can choose and adapt what is appropriate for their case. This evolution means that development teams will not easily have uniform processes; forcing uniformity will most likely negate the benefits of Agile. That said, most metrics are intimately tied to actual practices, but measurements should be able to cope with this perceived variability.

Traditional metrics are also in conflict with agile and lean principles. For example, a focus on adherence to estimates is incompatible with agile’s principle of embracing change; it leads to chasing obstacles instead of seizing opportunities.

In Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results, David Anderson combines the Theory of Constraints with Agile software development, with the objective of creating a process that “scales in scope and discipline to be acceptable in the boardrooms of the Fortune 1000”. Anderson compares traditional “waterfall”, FDD, XP and RAD approaches, and proposes a rigorous metrics approach.

Anderson provides a convincing argument for traditional metrics’ inability to measure agile software development by demonstrating how they violate Reinertsen’s criteria for a good metric. Traditional metrics do not meet the criterion of being relevant because of their heavy cost focus; cost should not be the main concern. Moreover, they fail the requirements of being simple and easy to collect. For example, the once-popular traditional metric of counting lines of code has no simple correlation with actual effort: software complexity results in a nonlinear relationship between effort and lines of code. It also motivates developers to squeeze in unnecessary lines of code.

It is impossible to create a metric set that would suit all agile projects. Every project has different goals and needs, and the incremental and emergent nature of agile methods [Mnkandla and Dwolatzky, 2007] implies that metrics, as part of the framework of development technologies, should also be allowed to emerge. There are, however, methods for choosing suitable metrics.

When measuring the production side of development it is important to select metrics that support and reflect their financial counterparts. The most commonly recommended agile production metrics are described in the following sections.

Total project duration: Agile projects get done more quickly than traditional projects. By starting development sooner and cutting out bloatware (unnecessary requirements), agile project teams can deliver products faster. Measure total project duration to help demonstrate efficiency.

Time to market: Time to market is the amount of time an agile project takes to provide value, either through internal use or by generating income, by releasing working products and features to users.

Time to market is especially important for companies with revenue-generating products, because it aids in budgeting throughout the year. It’s also very important if you have a self-funding project— a project being paid for by the income from the product.

Total project cost: Cost on agile projects is directly related to duration. Because agile projects are faster than traditional projects, they can also cost less. Organizations can use project cost metrics to plan budgets, determine return on investment, and know when to exercise capital redeployment.

Return on investment: Return on investment (ROI) is income generated by the product, less project costs: money in versus money out. On agile projects, ROI is fundamentally different than it is on traditional projects. Agile projects have the potential to generate income with the very first release and can increase revenue with each new release.
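Using the difference form described above (money in versus money out), per-release ROI can be tallied as follows; the figures are purely illustrative:

```python
# ROI as described above: income generated by the product, less project cost,
# tracked cumulatively so early releases can begin recouping the investment.

def roi(income, cost):
    return income - cost

# (income, cost) per release -- invented sample figures
releases = [(0, 120_000), (40_000, 60_000), (90_000, 45_000)]

cumulative = 0
for income, cost in releases:
    cumulative += roi(income, cost)
print(cumulative)  # -95000: still recouping, but earning from release 2 on
```

This is the structural difference from traditional projects: income can start arriving at the first release, so the cumulative line begins climbing long before the project ends.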

New requests within ROI budgets: Agile projects’ ability to quickly generate high ROI provides organizations with a unique way to fund additional product development. New product features may translate to higher product income. If a project is already generating income, it can make sense for an organization to roll that income back into new development and see higher revenue.

Capital redeployment: On an agile project, when the cost of future development is higher than the value of that future development, it’s time for the project to end. The organization may then use the remaining budget from the old project to start a new, more valuable project.

Team member turnover: Agile projects tend to have higher morale. One way of quantifying morale is by measuring turnover through a couple of metrics:

  • Scrum team turnover: Low scrum team turnover can be one sign of a healthy team environment. High scrum team turnover can indicate problems with the project, the organization, the work, individual scrum team members, burnout, ineffective product owners forcing development team commitments, personality incompatibility, a scrum master who fails to remove impediments, or overall team dynamics.

  • Company turnover: High company turnover, even if it doesn’t include the scrum team, can affect morale and effectiveness. High company turnover can be a sign of problems within the organization. As a company adopts agile practices, it may see turnover decrease.

Requirements and design quantification

Tom Gilb strongly believes that quantification of requirements is an essential concept missing from the agile paradigm, or even from software engineering in general. He claims that this lack is a risk for project failure, as software engineers and project managers cannot properly manage project results, control risks and costs, or prioritize tasks [Gilb and Cockburn, 2008]. Especially important are quality characteristics, because “functions and use cases are far less interesting” [Gilb and Cockburn, 2008] and “that is where most people have problems with quantification” [Gilb and Brodie, 2007]. Gilb assures that only numerically expressed quality goals are clear enough and therefore quantification is a step needed on the way from high level requirements to design ideas.
Design ideas also should be quantified [Gilb and Brodie, 2007]. It is done by the estimation of their value (to the customer – business value, not technical) and cost (effort). Then it is possible to identify the best designs – the ones with the highest value-to-cost ratio – and reprioritize the following development steps. A similar idea is described by Kile and Inampudi [2007]. The value and cost for implementing a requirement is estimated (without considering different design ideas) and the requirements are prioritized according to the result. This was a solution to a problem with the requirements prioritization ability of the customer and the development team. With some requirements having high value and high cost, and others having moderate value but very low cost, it was not easy to compare their desirability without quantification.

Metrics can also aid the application of agile practices. For example, in refactoring they can give information on the appropriate time for and the significance of a refactoring step [Kunz et al., 2008].

Heuristics for wise agile measurement [Hartmann and Dymond, 2006]

A good metric or diagnostic:

  1. Affirms and reinforces Lean and Agile principles.
  2. Measures outcome, not output.
  3. Follows trends, not numbers.
  4. Belongs to a small set of metrics and diagnostics.
  5. Is easy to collect.
  6. Reveals, rather than conceals, its context and significant variables.
  7. Provides fuel for meaningful conversation.
  8. Provides feedback on a frequent and regular basis.
  9. May measure Value (Product) or Process.
  10. Encourages “good-enough” quality.

Leffingwell argues that measurement should happen during iteration retrospectives and release retrospectives.

He proposes two categories of metrics: quantitative and qualitative. Quantitative metrics for an iteration consist of process metrics, such as the number of user stories accepted/rejected/rescheduled/added, and product metrics related to quality (defect count) and testing (number of test cases, percentage of automated test cases, unit test coverage).

Quantitative metrics for a release measure the release progress with value delivery (number of features delivered to the customer and their total value expressed in feature points, feature debt – existing customer commitments), conformance to release date, and technical debt (number of refactoring targets and number of refactorings completed, also called architectural debt or design debt). The qualitative assessment for both iteration and release requires listing what went well and what did not, revealing what should be continued and what needs to be improved.

Kunz et al. [2008] propose to combine software measurement with refactoring in order to indicate when a refactoring step is necessary, how important it is, how it affects quality, and what side effects it has. Metrics would be used here as triggers for needed refactoring steps.

Categories of Metrics

Quality

  • Defect Count
  • Technical Debt
  • Faults-Slip-Through
  • Sprint Goal success rate

Predictability

  • Velocity
  • Running Automated Tests

Value

  • Customer Satisfaction Survey
  • Business Value Delivered

Lean

  • Lead Time
  • Work In Progress
  • Queues

Cost

  • Average Cost Per Function

Commonly recommended agile production metrics.

Lean Metrics

The selection of production metrics must carefully consider what has been advised in the previous sections. Inventory-based metrics possess all these characteristics and give the advantage of addressing the importance of flow. The most significant inventory-based metrics are summarized below.

Lead time – Relates to the financial metric Throughput. The lead time should be as short and stable as possible; it reduces the risk that the requirements become outdated and provides predictability. The metric is supported by Poppendieck, who states that the most important thing to measure is the “concept-to-cash” time, together with financial metrics.

Queues – In software development, queue time is a large part of the lead time. In contrast to lead time, queue metrics are leading indicators: large queues indicate that the future lead time will be long, which enables preventive actions. By calculating the cost of delay of the items in the queues, precedence can be given to the most urgent ones.

Work in Progress – Constraining the WIP in different phases is one of the best ways to prevent large queues. If used in combination with queue metrics, WIP constraints prevent dysfunctional behavior such as simply renaming the objects in queues to work in progress. The metric is also an indicator of how well the team collaborates: a low WIP shows that the team works together on the same tasks. In addition, the Kanban method, which is built around the idea of constraining the WIP, promises an overall better software development process.

These metrics can be visualized in a cumulative flow diagram. By tracking the investment’s way along the value chain toward becoming throughput, the inventory-based metrics correlate well with the financial metric Investment.
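Given per-item start and finish dates, lead time and WIP can be derived directly; the record layout below is an assumption made for the example:

```python
# Sketch: deriving the inventory-based metrics from per-item timestamps.
from datetime import date

items = [
    {"id": 1, "started": date(2024, 3, 1), "finished": date(2024, 3, 8)},
    {"id": 2, "started": date(2024, 3, 4), "finished": date(2024, 3, 6)},
    {"id": 3, "started": date(2024, 3, 5), "finished": None},  # still WIP
]

def average_lead_time_days(items):
    """Mean calendar days from start to finish over completed items."""
    done = [(i["finished"] - i["started"]).days for i in items if i["finished"]]
    return sum(done) / len(done)

def wip_on(items, day):
    """Count of items started but not yet finished on the given day."""
    return sum(1 for i in items
               if i["started"] <= day
               and (i["finished"] is None or i["finished"] > day))

print(average_lead_time_days(items))   # 4.5
print(wip_on(items, date(2024, 3, 5)))  # 3
```

Sampling `wip_on` for every day and every workflow phase yields exactly the bands of a cumulative flow diagram.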

Cost Metrics

Anderson argues that the only cost metric needed is Average Cost Per Function (ACPF), and that it should only be used to estimate future operating expenses.
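A sketch of how ACPF might be used for such an estimate, with illustrative figures:

```python
# Average Cost Per Function (ACPF): total period cost divided by the number
# of functions delivered, used here only to project a future period's expense.

def acpf(total_cost, functions_delivered):
    return total_cost / functions_delivered

# Last quarter: $300,000 spent, 120 functions delivered.
# Next quarter's plan calls for 150 functions.
estimated_next_quarter = acpf(300_000, 120) * 150
print(estimated_next_quarter)  # 375000.0
```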

Business Value Metrics

Agile software development puts the focus on the delivery of business value. Methods such as Scrum prioritize work by value, making it sensible to measure business value. It has also been observed that the trend in the industry is to measure value.

Hartmann notes that agile methods encourage the development team to be responsible for delivering value rapidly, and that the core metric should oversee this accountability. The quick delivery of value means that the investment is converted into value-producing software as soon as possible. Leading metrics of business value involve estimation and are not an exact science.

Mike Cohn offers a possible solution to measuring business value, which involves dividing the business case's value among the tasks. The delivery of value can be displayed in a Business Value Burnup Chart. One way to verify the delivery of business value is to ask the customer whether the features are actually used. It has proved useful to survey the customer over the course of a release, which is much in line with the agile principle of customer cooperation.
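
A minimal sketch of Cohn's idea, with hypothetical values: the business case's value is divided among stories, and delivered value accumulates as stories are completed (these are the numbers a Business Value Burnup Chart would plot):

```python
# Hypothetical split of a 100-unit business case across three stories
story_values = {"S1": 30, "S2": 20, "S3": 50}
done = {"S1", "S3"}  # stories completed so far

delivered = sum(v for s, v in story_values.items() if s in done)
total = sum(story_values.values())
print(delivered, delivered / total)  # 80 0.8
```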

 Quality Metrics

Lean metrics can indicate the product's quality and provide predictability. For example, large queues in the implementation phase indicate poor quality, and a stable lead time contributes to predictability. However, it might be necessary to supplement and balance them with more specific metrics.

A quality metric recommended by the agile community is Technical Debt.

Technical debt is a metaphor referring to the consequences of taking shortcuts in software development; an example is code written in haste that needs refactoring. The debt can be expressed in financial figures, which makes the metric suitable for communicating to upper management.

The technical debt metric measures the team's overall technical indebtedness: known problems and issues being delivered at the end of the sprint. This is usually counted using bugs but could also cover deliverables such as training material, user documentation, delivery media, and other artifacts.

The counting of defects can be used as a quality metric. Defect counts may be taken at various stages of development.

Counting defects in individual iterations can show fairly large variation and may paint a misleading picture. Another aspect of defects is where they were introduced. The fault-slip-through metric measures test efficiency by comparing where defects should have been found with where they actually were. It monitors how well the test process works and addresses the cost savings of finding defects early. In case studies on the implementation of lean metrics, fault-slip-through has been recommended as the quality metric of choice.

The primary purpose of measuring Fault Slip Through is to make sure that the test process finds the right faults in the right phase, i.e., as early as possible. Fault Slip Through represents the number of faults not detected in a certain activity but instead detected in a later activity.
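
As a sketch (phases and fault data are hypothetical), fault-slip-through can be computed as the share of faults found later than the phase in which they should have been caught:

```python
def fault_slip_through(faults):
    """faults: list of (phase_should, phase_found) pairs;
    a smaller index means an earlier phase."""
    if not faults:
        return 0.0
    # A fault "slips through" when it is found later than it should have been
    slipped = sum(1 for should, found in faults if found > should)
    return slipped / len(faults)

# Phases: 0 = unit test, 1 = integration, 2 = system test, 3 = production
faults = [(0, 0), (0, 1), (1, 1), (0, 2), (1, 3)]
print(fault_slip_through(faults))  # 0.6, i.e. three of five faults slipped
```

A falling ratio over successive releases indicates that the test process is catching faults in the right phase.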

Sprint goal success rates: A successful sprint should have a working product feature that fulfills the sprint goals and meets the scrum team’s definition of done: developed, tested, integrated, and documented.

Throughout the project, the scrum team can track how frequently it succeeds in reaching the sprint goals and use success rates to see whether the team is maturing or needs to correct its course.

EVM Metrics

AgileEVMTerms

AgileEVMMetrics

 Predictability Metrics

What many organizations hope to gain from measurement is predictability. In several of the agile methods, the velocity of delivered requirements is used to achieve predictability and estimate delivery capacity. The average velocity can serve as a good predictability metric, but can easily be gamed if used for other purposes; for example, velocity used to measure productivity can degrade quality.

Velocity is a capacity planning tool sometimes used in Agile software development. Velocity tracking is the act of measuring this velocity. The velocity is calculated by counting the number of units of work completed in a certain interval, the length of which is determined at the start of the project.

The main idea behind velocity is to help teams estimate how much work they can complete in a given time period based on how quickly similar work was previously completed.

The following terminology is used in velocity tracking.

Unit of work
The unit chosen by the team to measure velocity. This can either be a real unit like hours or days or an abstract unit like story points or ideal days. Each task in the software development process should then be valued in terms of the chosen unit.
Interval
The interval is the duration of each iteration in the software development process for which the velocity is measured. The length of an interval is determined by the team. Most often, the interval is a week, but it can be as long as a month.

To calculate velocity, a team first has to determine how many units of work each task is worth and the length of each interval. During development, the team has to keep track of completed tasks and, at the end of the interval, count the number of units of work completed during the interval. The team then writes down the calculated velocity in a chart or on a graph.

The first week provides little value, but is essential to provide a basis for comparison. Each week after that, velocity tracking will provide better information as the team provides better estimates and becomes more used to the methodology.
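
The calculation itself is simple; a sketch with hypothetical sprint data:

```python
# Units of work (here, story points) completed per interval - hypothetical history
completed_per_sprint = [18, 22, 20, 24]

def average_velocity(history, window=3):
    """Average velocity over the most recent `window` intervals."""
    recent = history[-window:]
    return sum(recent) / len(recent)

print(average_velocity(completed_per_sprint))  # 22.0
```

The average over a few recent intervals is then used to estimate how much work the team can take on in the next one.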

Velocity chart

The Velocity Chart shows the amount of value delivered in each sprint, enabling you to predict the amount of work the team can get done in future sprints. It is useful during your sprint planning meetings, to help you decide how much work you can feasibly commit to.

You can estimate your team’s velocity based on the total estimate (for all completed stories) for each recent sprint. This isn’t an exact science: looking at several sprints will help you get a feel for the trend. For each sprint, the Velocity Chart shows the sum of the estimates for complete and incomplete stories. Estimates can be based on story points, business value, hours, issue count, or any numeric field of your choice. Note that the values for each issue are recorded at the time the sprint is started.

 Running Automated Tests measures productivity by the size of the product. It counts test points, defined as each step in every running automated test. The belief is that the number of tests written is a better proxy for a requirement’s size than the traditional lines-of-code metric. The metric addresses the risk of neglected testing, which is usually associated with productivity metrics: it motivates writing tests and designing smaller, more adaptive tests. Moreover, it has proven to be a good indicator of complexity and, to some extent, of quality. For measuring release predictability, Dean Leffingwell proposes measuring the projected value of each feature relative to the actual. However, the goal should not be total adherence; instead, the objective should be to stay within a range of compliance to plan, which allows for both predictability and the capturing of opportunities.

Actual Stories Completed vs. Committed Stories

The measure is taken by comparing the number of stories committed to in sprint planning with the number of stories identified as completed in the sprint review.
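
A one-line sketch with hypothetical numbers:

```python
committed = 12   # stories committed to in sprint planning (hypothetical)
completed = 10   # stories identified as done in the sprint review

commitment_ratio = completed / committed
print(f"{commitment_ratio:.0%}")  # 83%
```

Tracked over several sprints, the ratio shows whether the team's commitments are becoming more reliable.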

Visualization

To get the full value of agile measurement, the metrics need to be acted upon. Visualization of the metrics helps ensure that actions are taken and achieves transparency in the organization. The company’s strategies are communicated and coordination increases.

Spider Chart

In Kanban, the visualization of the workflow is an important activity and facilitates self-organizing behavior. For example, when a bottleneck is shown, employees tend to work together to elevate the bottleneck. Both Kanban and Scrum use card walls to visualize the workflow, where each card represents a task and its current location in the value chain. The inventory-based metrics can then be collected using the card walls. A very effective way to visualize the inventory-based metrics is the cumulative flow diagram.

The cumulative flow diagram is an area graph, which shows the workflow on a daily basis.

Cumulative Flow Diagram

Are you a manager or business stakeholder working on an Agile Scrum project and facing the following issues?

You have a strong feeling that there are bottlenecks in the process but are facing a lot of difficulty in mapping it to the process.

You are not satisfied with the burn-up and burn-down charts produced by the team and are interested in getting more insight into the process.

There is a panacea to solve all of the above issues. The name of the magic potion is “cumulative flow diagram”. Cumulative flow diagrams (CFDs), applied on top of the basic principles of Kanban (kan = visual, ban = card), can give you insight into the project and keep everyone updated. A CFD can be an extremely powerful tool when applied to a Scrum model.

A Cumulative Flow Diagram (CFD) is an area chart that shows the various statuses of work items for a product, version, or sprint. The horizontal x-axis in a CFD indicates time, and the vertical y-axis indicates cards (issues). Each coloured area of the chart equates to a workflow status (i.e. a column on your board).

A CFD can be useful for identifying bottlenecks. If your chart contains an area that is widening vertically over time, the column that equates to the widening area will generally be a bottleneck.

Multi-color CFDs look complicated but pretty; the pain of understanding them is worth the gain. These diagrams can help you make critical business decisions and give you better visibility into time-to-market dates for features. Applying a CFD on top of a Scrum project will help you see an accurate picture of its progress.
Giving a CFD to a team helps it follow the “Inspect and Adapt” Scrum principle. The team can further zoom into the work in progress to see the various flow states. A CFD will help you analyze the actual progress and bottlenecks in any project, and can be drawn using area charts in MS Excel.
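
The bottleneck rule described above (a band widening vertically over time) can be sketched with hypothetical daily column counts; the terminal "Done" column is excluded since it always grows in a cumulative diagram:

```python
# Hypothetical daily item counts per workflow column (one list entry per day)
cfd = {
    "To Do":       [10, 9, 8, 7, 6],
    "In Progress": [2, 4, 6, 8, 10],  # widening band -> bottleneck candidate
    "Done":        [0, 1, 2, 3, 4],
}

def widening_columns(cfd, terminal="Done"):
    # Flag non-terminal columns whose band grew over the period
    return [col for col, counts in cfd.items()
            if col != terminal and counts[-1] > counts[0]]

print(widening_columns(cfd))  # ['In Progress']
```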

Workflow for CFD

Cumulative Flow Diagram

There are many other uses of CFDs besides finding bottlenecks. CFDs are multi-utility graphs that continuously report the true status of an Agile project. They can help in determining lead time, cycle time, size of backlog, WIP, and bottlenecks at any point in time.

Lead time is the time from when a feature entered the backlog to its completion. This is of utmost interest to business stakeholders: it can help them decide about the time to market for features and plan marketing campaigns based on lead times.

Cycle time is the time taken by the team from when it started work on an item to its completion. This helps project leads make important decisions in selecting items to work on.

WIP is the work currently lying in different stages of the software lifecycle. Cycle time is directly proportional to the number of work-in-progress items, so keeping a limit on WIP is a key to success in any Agile project. In Scrum we try to limit the WIP within a sprint, but limiting it across the various flow states helps further in gaining better control and visibility in a sprint.

Controlling WIP is the mantra for victory in any project. With CFDs, WIP is no longer a black box, and anyone can see the work distribution at any point in time. Thus CFDs provide better insight and the power for better governance in any Agile methodology. A single diagram can contain information about lead time, WIP, queues, and bottlenecks.
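
The proportionality between cycle time and WIP mentioned above is Little's Law; a sketch with hypothetical figures:

```python
# Little's Law: average cycle time = average WIP / average throughput
avg_wip = 12.0     # items currently in progress (hypothetical)
throughput = 3.0   # items completed per day (hypothetical)

avg_cycle_time = avg_wip / throughput
print(avg_cycle_time)  # 4.0 days
```

Halving the WIP limit at the same throughput halves the expected cycle time, which is one reason constraining WIP is so effective.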

In Scrum, the Burndown Chart is a standard artifact. It allows teams to monitor their progress and trends. The Burndown Chart tracks completed stories and the estimated remaining work. There are also variations of the Burndown Chart; for example, the Burnup Chart contains information about scope changes. For even better predictability, story points may be used: stories are assigned points based on the estimated effort to implement them.

A Burndown Chart shows the actual and estimated amount of work to be done in a sprint. The horizontal x-axis in a Burndown Chart indicates time, and the vertical y-axis indicates cards (issues).

Iteration Burn Down chart

Use a Burndown Chart to track the total work remaining and to project the likelihood of achieving the sprint goal. By tracking the remaining work throughout the iteration, a team can manage its progress and respond accordingly.
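
A sketch of that projection, with hypothetical figures: the remaining work observed so far is extrapolated at the average burn rate to the end of the sprint.

```python
sprint_days = 10
remaining = [80, 76, 70, 66, 58, 50]  # remaining hours observed, day 0..5 (hypothetical)

days_elapsed = len(remaining) - 1
burn_rate = (remaining[0] - remaining[-1]) / days_elapsed
projected_left = remaining[-1] - burn_rate * (sprint_days - days_elapsed)
print(burn_rate, projected_left)  # 6.0 20.0 -> roughly 20 hours likely left at sprint end
```

A positive projection is an early warning that the team should respond, for example by renegotiating the sprint scope.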

To communicate the KPIs, many organizations use Balanced Scorecards or Dashboards. Dashboards are used to effectively monitor, analyze, and manage the organization’s performance. The level of detail of the dashboards varies, ranging from graphical high-level KPIs to low-level data for root cause analysis. In order to communicate metrics and ensure they are acted upon, the measurement practice should visualize the metrics to achieve transparency, while being careful not to create dysfunctional behavior with the visualization.

Continuous Improvement

Kaizen is the Japanese word for continuous improvement and is a part of lean software development. It is also found in agile software development; for example, Scrum has retrospectives after each sprint where improvements are identified. The retrospectives have similarities to Deming’s Plan-Do-Check-Act (PDCA) cycle.

The PDCA is a cycle of four phases that should drive continuous improvement. Notably, PDCA prescribes measurement to verify that improvements are achieved. Petri Heiramo observes that retrospectives often lack measurements and argues that this can lead to undesirable results. Without any metrics, it will be difficult to determine whether any targets have been met, which in turn can demoralize the commitment to the improvement efforts.

Heiramo suggests that three questions should be added to the retrospective: What benefit or outcome do we expect out of this improvement/change? How do we measure it? Who is responsible for measuring it? Diagnostics can be used to obtain these measurements. In order for the diagnostics to achieve process improvement, the measurement practice should be an integrated part of a process improvement framework.

Agile Metrics at MAMDAS – a software development unit in the Israeli Air Force

MAMDAS, a software development unit in the Israeli Air Force, develops large-scale, enterprise-critical applications. The project is developed by a team of 60 skilled developers and testers, organized in a hierarchical structure of small groups.
The software is intended to be used by a large and varied user population. In December 2004, the first XP team was established at MAMDAS. It was encouraging to observe that after the first two-week iteration “managers were very surprised to see something running” and everyone agreed that “the pressure to deliver every two weeks leads to amazing results”. Still, accurate metrics are required in order to make professional decisions, to analyze long-term effects, and to increase the confidence of all management levels with respect to the process that XP inspires.

They described four metrics and the kinds of data gathered to calculate them. These four metrics present information about the amount and quality of work performed, about the pace at which the work progresses, and about the status of the remaining work versus the remaining human resources.

Product size, initially just called ‘Product’, is the first metric. It aims at presenting the amount of completed work. The data selected to reflect the amount of work is the number of test points. One test point is defined as one test step in an automated acceptance testing scenario, or as one line of unit tests. The number of test points is calculated for all kinds of written tests and is gathered per iteration per component. Additional information is gathered on the number of test points for tests that pass, for tests that fail, and for tests that do not run at all.
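
A sketch of the test-point count with hypothetical components: one point per step of each automated acceptance scenario, plus one per line of unit tests.

```python
acceptance_steps = {"login": 7, "checkout": 12}   # steps per scenario (hypothetical)
unit_test_lines = {"billing": 140, "auth": 95}    # lines of unit tests (hypothetical)

# One test point per acceptance-test step and per unit-test line
test_points = sum(acceptance_steps.values()) + sum(unit_test_lines.values())
print(test_points)  # 254
```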

Pulse is the second metric; it aims to measure how continuous the integration is. The data is automatically gathered from the development environment by counting how many check-in operations occur per day. The data is gathered for code check-ins, automated-test check-ins, and detailed-specification check-ins. ‘Code’ here means code plus its unit tests.

Burn-down is the third metric. It presents the project’s remaining work versus the remaining human resources. This metric is supported by the main planning table, which is updated for each task with the kind of activity (code, tests, or detailed specifications), dates of opening and closing, estimated and actual development time, and the component it belongs to. In addition, this metric is supported by the human resources table, which is updated when new information regarding teammates’ absences arrives. This table also contains the product component assigned to each teammate and the percentage of her/his position in the project. By using the data in these tables, this metric can present the remaining work in days versus the remaining human resources in days. This information can be presented per week or for any group of weeks up to a complete release, both for the entire team and for any specific component.
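
A sketch of the core comparison, with hypothetical table data: remaining resource days are the sum, over teammates, of position fraction times working days left, set against the remaining work.

```python
remaining_work_days = 120.0  # from the planning table (hypothetical)
# (position fraction in the project, working days left) per teammate
teammates = [(1.0, 18), (0.5, 20), (1.0, 15)]

remaining_resource_days = sum(frac * days for frac, days in teammates)
print(remaining_resource_days, remaining_resource_days >= remaining_work_days)
# 43.0 False -> the release goals are at risk with current resources
```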

The burn-down graph answers a very basic managerial question: are we going to meet the goals of this release, and if not, what can we do about it? Release goals were set before each release; each goal is a high-level feature. Goals are defined by the user and are verified by matching a rough estimate of the effort required to complete each goal (given by the development team) to the total available resources. Once goals are defined and estimated, both remaining work and remaining resources are based on this initial estimation, which is refined as the release progresses.

Faults is the fourth metric; it counts faults per iteration. During the release, all faults discovered in a specific iteration were fixed at the beginning of the next iteration. The faults metric is required to continuously measure the product’s quality. Note that the product size metric does not do this: although it counts test points, it does not correlate the number of failed or un-run test steps with the number of actual bugs. The feedback on the size metric was that it motivates writing tests and that it can also be regarded as a complexity metric.

The use of the presented metrics mechanism increases the confidence of the team members, as well as of the unit’s management, with respect to using agile methods. Further, these metrics enable an accurate and professional decision-making process for both short- and long-term purposes.

Technical Debt at Petrobras, Brazil

As an oil and gas company, Petrobras (http://www.petrobras.com.br) develops software in areas which demand increasingly innovative solutions in short time intervals. The company officially started with Scrum in March 2009, using its lightweight framework to create collaborative, self-organizing teams that could effectively deliver products. After the first team had adopted Scrum with relative success, the manager noticed that the framework could be used in other teams, and thus invested in training and coaching so that other teams could also try the methodology. At that time, only the software development department for E&P, whose software helps to exploit and produce oil and gas, had management endorsement in adopting Scrum and agile practices that would let teams deliver better products faster. About a year and a half later, all teams in the department were using Scrum as their software development process. The developers and the stakeholders in general noticed expressive gains with the adoption of Scrum.
The results varied with the skill of the team leadership in agile methodologies, customer participation, the level of collaboration between team members, technical expertise, and other factors.
The architecture team of the software development department for E&P was composed of four employees, whose responsibility was to help teams and offer support for the resolution of problems related to agile methods and architecture. At that time, it had to work with 25 teams, each of which had autonomy regarding its technical decisions. In fact, autonomy was one of the main managerial concerns when adopting agile methods.
After Scrum adoption, there was active debate, training, and architectural meetings about whether Agile engineering practices should also be adopted in parallel with the managerial practices; in hindsight, adopting them would have accelerated the benefits. But the constraints of time and budget, decisions made by non-technical staff, and the bureaucracy in areas such as infrastructure and database led to the initial postponement of those efforts. Moreover, the infrastructure area had only build and continuous integration (CI) tools available, and unfortunately these tools were not taken seriously by the teams. Automated deployment was relatively new and was postponed for fear of implementing it at an immature stage. Other tools and monitoring mechanisms were not used by the teams, even though the architecture team was aware of their possible benefits.
Despite all initiatives in training and support for agile practices such as configuration management, automated tests, and code analysis, the teams, represented by 25 focal points in architectural meetings, did not show much interest in adopting many agile practices, particularly technical practices. Delivering the product on the date agreed with the customer and maintaining the legacy code were the most urgent issues. Analyzing retrospectively, it seems that the main cause of this situation was that debt was being accrued unconsciously. Serving the client was a much more visible and imperative goal. This may explain the ineffectiveness of prior attempts to introduce technical practices, whether by means of specialized training or through the support of the architecture team.
After these less-than-effective attempts to promote continuous improvement, the architecture team sought a way to motivate teams to experiment with agile practices without a top-down “forced adoption”. The technical debt metaphor was the basis for the approach.

Given the context aforementioned, especially the role of the architecture team serving various teams in parallel and the fact that the teams had autonomy in their technical decisions, there was a need for technical debt estimation and visualization at a “macro level”, i.e., associated not only with source code aspects but with technical practices involving the product in general. This would give the opportunity to see the actual state of the department and indicate the “roadmap” for future interaction with the development teams.
The actions involved introducing this kind of visualization and the management activities based on it. The architecture team modeled a board where the rows correspond to teams and the columns are the categories and subcategories of technical debt, based on the work of Chris Sterling. In each cell, formed by the pair team × technical-debt category, the maturity of the team was evaluated according to predefined criteria. They used the colors red, yellow, and green to show the compliance level of each criterion.

After the design of the technical debt board, each team was invited to a quick meeting in front of the board, where all team members talked about the status of each criterion, translating it to the respective color. During these meetings the teams could also conclude that some categories were not relevant or applicable to their systems. This meeting was to happen every month, so that the progress of each category could be updated. At the end of the meeting, the team members agreed which of the categories would be the aim for the next meeting or, to put it another way, where they would invest their efforts in reducing the technical debt.

To measure technical debt at the source code level, the architecture team made use of the tool Sonar (http://www.sonarsource.org/). Sonar has a plugin that allows estimating how much effort would be required to fix each debt of the project. Sonar considers as debt: cohesion and complexity metrics, duplications, lack of comments, coding rule violations, potential bugs, and missing or useless unit tests. The important aspect is that an estimate is calculated, and Sonar shows the results financially, along with the effort in man-days necessary to take the debt to zero (the daily rate of the developer in the context of the project must be provided).
It is important to mention that Sonar in fact uses many other tools internally to analyze the source code, each for a different aspect of the analysis. It works as an aggregator to display the results of tools such as PMD, FindBugs, Cobertura, and Checkstyle, among others.
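
The financial conversion described above can be sketched as follows (the effort figures and daily rate are hypothetical, not Sonar's actual output):

```python
# Estimated remediation effort per debt category, in developer-days (hypothetical)
remediation_days = {"duplications": 3.5, "missing tests": 8.0, "rule violations": 2.5}
daily_rate = 600.0  # currency units per developer-day, supplied per project

total_days = sum(remediation_days.values())
debt_cost = total_days * daily_rate  # cost to take the debt to zero
print(total_days, debt_cost)  # 14.0 8400.0
```

Expressing the debt in money is what makes the metric communicable to upper management, as noted earlier.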

The teams could let the debt rise for a whole month without even knowing about it.
To address this situation, the architecture team created a virtual tiled board, where each tile had information about the build state of each team in the department. The main information was the actual state of the build and the project name. If everything was OK (compilation and automated tests), the tile was green; if the compilation was broken, the tile turned red; and if there were failed tests, the tile turned yellow. Besides the build information, there was other information: total number of tests, number of failed tests, test coverage, number of lines, and technical debt (calculated in Sonar).

The virtual tiled board was placed on a big screen where everybody in the room could see it from their workplaces. The main objective was that when team members saw their failed build, that instant feedback would lead them to take corrective action so the build could go green again.
As the feedback mechanisms were implemented, the teams had instant information about what should be done to lower the levels of technical debt. With this information, they could prioritize which categories they would try to improve in the next month. If a team had difficulty addressing any of the categories, they could call upon the architecture team for support.

PEARL II : Agile Project Management for a value-driven approach

PEARL II : Agile Project Management ensures a value-driven approach that allows teams to deliver high-priority, high-quality work. Agile Project Management establishes the project’s context. It enables managing the team’s environment, encourages team decision making, and promotes autonomy whenever possible. Agile Project Management expects the best out of people, elevates the individual, and gives them respect. It helps foster a team culture that values people and encourages healthy relationships.

Introduction

Agile leadership

One of the common misconceptions about agile processes is that there is no need for agile project management and that agile projects are self-reliant. It is easy to understand how an agile process’ use of self-organizing teams, its rapid pace, and the decreased emphasis on detailed plans lead to this perception. In a recent e-group discussion, a project manager at a company that was implementing agile had been moved to another area because “…agile doesn’t require project management capability.”

The truth, however, is that agile processes still require project management.

Agile methodology does not clearly define the role of manager, but instead defines similar roles, such as coach/facilitator, who performs the role of Agile project manager.

An Agile project manager understands how the Agile delivery engine works: the concept is based on self-organization and undisturbed activity. In addition, he or she has the ability to manage business needs and goals, requirements, organizational models, contracts, and overarching as well as ‘rolling’ planning methods.

Agile Project Manager

In Agile transitions, there is a need for an Agile project manager when the project and delivery engine are exposed to complexity factors, such as when several teams collaborate on a release from a number of international locations. Requirements themselves may also lead to complexity, as when teams face regulatory requirements or the need for extremely rapid alteration cycles. Other complexity factors are outsourcing and procurement – that is, the purchase of various services from multiple providers – or when a company starts too many projects at the same time, resulting in staff having so much to do that nothing gets done.

Several of these complexity factors exist in many companies, and they have a cumulative effect. Agile project managers can achieve great things in such complex conditions by designing a comprehensive project environment that provides oversight and structure. Agile contracts may be a valuable component of the project environment.

The Agile fixed price is a contractual model agreed upon by suppliers and customers of IT projects that develop software using Agile methods. The model introduces an initial test phase, after which the budget, due date, and the way of steering the scope within the framework are agreed upon.

This differs from traditional fixed-price contracts in that fixed-price contracts usually require a detailed and exact description of the subject matter of the contract in advance. Fixed price contracts aim at minimizing the potential risk caused by unpredictable, later changes. In contrast, Agile fixed price contracts simply require a broad description of the entire project instead of a detailed one.

In Agile contracts, the supplier and the customer together define their common assumptions in terms of business value, implementation risks, expenses (effort), and costs. On the basis of these assumptions, an indicative fixed-price scope is agreed upon, which is not yet contractually binding. This is followed by the test phase (checkpoint phase), during which the actual implementation begins. At the end of this phase, both parties compare the empirical findings with their initial assumptions. Together, they then decide on the implementation of the entire project and fix the conditions under which changes are allowed to happen.

Further aspects of an Agile contract are risk share (both parties divide the additional expenses for unexpected changes equally among themselves) and the option of either party leaving the contract at any stage (exit points).
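
The risk-share clause can be sketched numerically (all figures hypothetical): unexpected additional expenses are split equally between the parties.

```python
indicative_fixed_price = 100_000.0  # agreed scope price (hypothetical)
unexpected_expenses = 12_000.0      # cost of unforeseen changes (hypothetical)

# Risk share: each party bears half of the unexpected expenses
supplier_share = customer_share = unexpected_expenses / 2
customer_total = indicative_fixed_price + customer_share
print(customer_share, customer_total)  # 6000.0 106000.0
```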

Jim Highsmith, one of the originators of the Agile Manifesto and a recognized expert in agile approaches, has defined agility in project management by the following statements: “Agility is the ability to both create and respond to change in order to profit in a turbulent business environment,” and “Agility is the ability to balance flexibility and stability.”

In contrast with traditional project methods, agile methods emphasize the incremental delivery of working products or prototypes for client evaluation and optimization. While so-called “predictive” project management methods assume that the entire set of requirements and activities can be forecast at the beginning of the project, agile methods combine all the elements of product development (requirements, analysis, design, development, and testing) into brief, regular iterations. Each iteration delivers a working product or prototype, and the response to that product or prototype serves as crucial input into the succeeding iterations.

Agile theory assumes that changes, improvements, and additional features will be incorporated throughout the product development life cycle, and that change, rather than being perceived as a failing of the process, is seen as an opportunity to improve the product and make it more fit for its use and business purpose.

The Need for Agile Project Management

Project management is critical to the success of most projects, even projects following agile processes. Without management, project teams may pursue the wrong project, may not include the right mix of personalities or skills, may be impeded by organizational dysfunction, or may not deliver as much value as possible. There are initiatives to formalize these management responsibilities. 

When it comes to agile project management roles, it’s worth noting that most agile processes – Scrum in particular – do not include a project manager. Without a specific person assigned, agile “project manager” roles and responsibilities are distributed among others on the project, namely the team, the ScrumMaster and the product owner.

Waterfall-vs-Agile

Agile Project Management and Shared Vision

For a team to succeed with agile development it is essential that a shared vision be established. The vision must be shared not just among developers on the development team but also with others within the company. Most plan-driven processes also advocate the need for a shared vision; however, if that vision isn’t communicated or is imprecise or changing, the project can always fall back on its detailed (but not necessarily accurate) lists of tasks and procedures. This is not the case on an agile project and agile project participants use the shared vision to guide their day-to-day work much more actively.

The formation of the project vision is not the responsibility of the agile project manager; usually the vision comes directly from a customer or customer proxy, such as a product manager. The project manager, however, is usually involved in distilling the customer’s grand vision into a meaningful plan for everyone involved in the project. Rather than a detailed command-and-control plan based on Gantt charts, the agile plan’s purpose is to lay out an investment vision against which management can assess and frequently adjust its investments, establish a common set of understandings from which emergence, adaptation and collaboration can occur, and set expectations against which progress will be measured. The agile project nurtures team members to implement the vision. The agile project manager understands the effects of the mutual interactions among the project’s parts and steers the project towards continuous learning and adaptation on the edge.

Scope of Agile Project Management:

Agile Process

In an agile project, the entire team is responsible for managing itself; it is not just the project manager’s responsibility. When it comes to processes and procedures, common sense takes precedence over written policies.

This ensures there is no delay in management decision-making, so things can progress faster.

In addition to being a manager, the agile project management function should also demonstrate leadership and skill in motivating others. This helps retain team spirit and keeps the team disciplined.

The agile project manager is not the ‘head’ of the software development team. Rather, this function facilitates and coordinates the activities and resources required for quality, speedy software development.

Agile Project Management and Obstacles

Most agile processes prescribe a highly focused effort on creating a small set of features during an “iteration” or “sprint” after which the team quickly regroups and decides on the set of features for the next iteration or sprint. While an iteration is ongoing the team members are expected to focus exclusively on the current iteration.

While this sharp focus leads to greater productivity during the current iteration it can cause a bit of a billiard-ball effect as the conclusion of one iteration can bounce out the start of the next. A project manager who spends a small amount of time looking forward at the next iteration is an excellent buffer against this effect. For example, many organizations have travel restrictions that require plane tickets to be purchased two weeks in advance. If a team could benefit from having a remotely located employee on site during the coming iteration the time to plan for that is during the current iteration.

Another type of obstacle may be a team member.

While agile processes such as Extreme Programming and Scrum rely on self-organizing teams, an agile project manager cannot simply turn a team loose on a project. The agile manager must still monitor that corporate policies and project rules are followed. Participation on an agile team does not turn all developers into model employees. In most cases the team itself will apply some form of sanction to an employee who is not working hard enough or is exhibiting other performance or behavior problems. However, in the most severe cases the collective team usually cannot be the one to terminate or officially reprimand an employee. Performance feedback can always be expressed in terms of the team’s views of the individual’s contribution, but if a counseling or coaching session is necessary, it is usually best held between just the project manager and the team member.

Businesses must evolve to use flexible practices such as agile project management, or risk significantly slowing their productivity and hurting their profits.

Round Table Event at hosting and colocation firm UKFast’s Manchester office in Jan 2014

UKFast is one of the UK’s leading managed hosting and colocation providers, supplying dedicated server hosting, critical application hosting, and cloud hosting solutions. It fully owns, manages and operates its ISO-accredited data centre complex, which offers over 30,000 sq ft of enterprise-grade facilities for colocating IT equipment.

All of its hosting solutions are designed to help businesses grow, with 24/7/365 UK-based support and dedicated account management as standard. It is exceptionally proud of the standard of service it gives to clients and believes this is what really sets it apart from other providers.

Old fashioned firms must encourage the ownership and innovation needed to create a happier, more modern workplace or face the consequences of being left behind by companies that do.

That’s the view of six project management experts who gathered at a roundtable event to discuss whether old management techniques have become nothing more than a hindrance to businesses and whether traditional practices still have a role in the workplace.

Lawrence Jones, CEO of hosting and colocation firm UKFast, believes a fun work environment, bolstered by flexible and collaborative project management techniques, results in happier people within the company.

Jones said: “I believe an enjoyable workplace is often a productive workplace. A fun and friendly work ethic is the route to economic recovery and adopting agile project management techniques comes hand in hand with this.

“Having a culture in place that encourages collaboration not only motivates the team, it also means that clients receive a better, faster service that isn’t weighed down by traditional box-ticking procedures.”

Agile project management focuses on the continuous improvement, team input and delivery of essential quality products. By breaking up a project into “sprints” – worked on by different team members simultaneously – agile encourages collaboration and integration unlike other, traditional methods that are often rigidly sequential.

Ninety per cent of respondents in the 7th Annual State of Agile Development Survey said that agile improved their ability to manage changing priorities compared with waterfall, while the top two other benefits listed were increased productivity (85 per cent) and improved project visibility (84 per cent).

Ian Carroll, Principal Consultant at ThoughtWorks, agreed: “It is about doing a good job. Nobody comes into work to do a bad job. Agile project management requires cross-functional teams, which result in happier people coming together to make for a much happier work place.”

Clare Walsh, Digital Delivery Director at Redweb, believes that the teamwork created by agile project management is vital for a company’s success.

Walsh said: “It’s about creating that sense of involvement. When somebody really feels involved, they can own it. That’s the whole point of agile, that the team feel some kind of ownership about what they are creating. It’s about empowerment and people feeling like they have the ability to make decisions.”

Beccy Weeks, IS Manager at Saint Gobain Building Distribution, has seen agile bring big benefits to her company, including improvements to the integration of new staff members.

She said: “Agile prevents isolation. We’ve found that it’s easier to bring new people into the team as they get on straight away. They are immediately communicating; they feel part of the team and can join in with the banter. They are integrated much quicker using the agile approach, rather than the waterfall management.”

James Cannings, Chief Technical Officer at MMT Digital believes the culture within his workplace has positively transformed due to the change to agile management.

He said: “Agile has completely transformed the culture of my agency over the last two years. We were proud of the culture we had, but I now feel bad in the sense of how we treated the developers who just worked from one project to the next. It was quite a hierarchical structure. Agile now creates a fun environment, which brings the whole team together.”

Mark Kelly, Digital and Social Media Marketing Consultant, regrets the management approach he previously took.

He said: “The developers and designers were treated like mushrooms in the waterfall approach, not knowing what the guys next to them were doing. The agile approach therefore most certainly provides visibility for everyone on the team.”

The experts gathered at hosting and colocation firm UKFast’s Manchester office to debate the topic of project management and how businesses can implement new techniques to the best effect. Here are their top tips:

  1. Don’t forget, agile is a means to an end rather than an end point itself
  2. How you visualise the world dominates how you perceive it, so having a card wall or something to illustrate tasks can really help teams
  3. Agile isn’t the best for building a skyscraper but is great for fast-paced software development – it could just be about design and development rather than the whole project
  4. It’s about creating a culture of fun as well as ensuring a fast time to market.

Agile Project Management and Organizational Dysfunction

Many companies have at least one dysfunctional area. This may be the “furniture police” who won’t let programmers rearrange furniture to facilitate pair programming. Or it may be a purchasing group that takes six weeks to process a standard software order. In any event these types of insanity get in the way of successful projects. One way to view the project manager is as the bulldozer responsible for quickly removing these problems.

The Scrum process includes a daily meeting during which all team members are asked three questions. One of these questions is, “What is in the way of you doing your work?” The agile project manager takes it on himself to eliminate these impediments. Ideally, he or she becomes so adept at this that impediments are always removed within 24 hours (that is, before the next daily meeting).

Participate in enough agile projects and you begin to hear the same impediments brought up time after time. For example:

    • My ____ broke and I need a new one today.
    • The software I ordered still hasn’t arrived.
    • I can’t get the ____ group to give me any time and I need to meet with them.
    • The department VP has asked me to work on something else “for a day or two”.

The project manager is responsible for optimizing team productivity; this means it is his or her responsibility to do whatever is possible to minimize obstacles. Most organizations will not want every developer calling the ordering department to follow up on delivery dates for software. Similarly, a project manager who knows when to push the IT manager for quick setup of a new PC and when not to push (perhaps saving a favor for later) will be more effective than every programmer calling that same IT manager.
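A minimal sketch of how such impediment tracking might look in code, assuming the 24-hour removal target mentioned earlier; the class and field names here are hypothetical and not taken from any real tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Impediment:
    description: str
    raised_at: datetime
    resolved_at: Optional[datetime] = None

    def is_overdue(self, now: datetime, sla: timedelta = timedelta(hours=24)) -> bool:
        # Only open impediments can be overdue; the target is removal within 24 hours,
        # i.e. before the next daily meeting.
        return self.resolved_at is None and (now - self.raised_at) > sla

@dataclass
class ImpedimentLog:
    items: List[Impediment] = field(default_factory=list)

    def raise_impediment(self, description: str, now: datetime) -> Impediment:
        # Impediments are captured as they surface in the daily meeting.
        imp = Impediment(description, raised_at=now)
        self.items.append(imp)
        return imp

    def overdue(self, now: datetime) -> List[Impediment]:
        # Items the project manager should escalate immediately.
        return [i for i in self.items if i.is_overdue(now)]
```

Anything surfacing in `overdue()` signals that the “removed within 24 hours” ideal has slipped and the buffer role described above has failed for that item.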

The agile project manager may also need to work out resource movement through a realistic transition plan with minimal impact on the business.

Agile Project Management and Politics

Politics are at play in almost every organization. Most organizations have only limited funds that may be applied across a spectrum of competing projects and new project ideas. Projects compete for budget dollars (team size, tools, etc.), personnel (everyone wants the best programmer), resources (time or access to the large database server), and attention from higher level managers. Too many projects fail for political reasons. A project manager uses the various agile mechanisms to minimize politics and keep everything visible and obvious.

For instance, the project manager works with the customer to ensure that the product backlog (Scrum) or stories (XP) are visible and that everyone understands they direct the team to the most profitable and valuable work possible. The project manager uses product increments and demonstrations of working functionality to keep everyone aware of real progress against goals, commitments, and visions, thereby minimizing opportunities for rumors, misinformation, and other weapons of political maneuvering. Working with the customer, the project manager helps the customer and organization to value results instead of reports.

Agile Project Leadership in Agile Project Management

Agile project leadership is a key parameter in ensuring project success within the constraints and boundaries of the organization.
There are many leadership models discussed in the popular leadership literature, from Jim Collins to John C. Maxwell. Most of these models address leadership at the organizational level. Agile project leadership has a narrower canvas: ensuring successful agile projects and successful agile adoptions. The leadership levels applicable to agile project management are collaborative leadership, servant leadership and transformative leadership, at Levels 2, 3 and 4 respectively. Level 1, positional leadership, is not applicable to the agile project management canvas.

Collaborative Leadership

Here the leader gets things done through collaboration. Relationship is the key characteristic of this level. A collaborative leader builds strong relationships with people. Leaders at this level care for their people; they support them and motivate them constantly. The truly collaborative nature of this leadership stitches the bond between the leader and the team. Many agile teams experience collaborative leadership. An agile coach builds strong relationships within the Scrum team through coaching and facilitation skills.

Servant Leadership

Here the leader serves first and leads as a consequence. Serving is the key idea at this level.
The term “servant leadership” was coined by Robert K. Greenleaf in “The Servant as Leader”, an essay he first published in 1970. In it he said, “The servant-leader is servant first… It begins with the natural feeling that one wants to serve, to serve first. Then conscious choice brings one to aspire to lead.” The idea of servant leadership can be traced to the 4th century B.C.: Chanakya wrote in his book Arthashastra that a king [leader] is a paid servant and enjoys the resources of the state together with the people.

A servant leader serves the team unequivocally. Leaders at this level gain respect through serving the team. They listen to the team, take cues from observing it, and empower it in decision-making. Serving is a leadership attitude and a mindset. Agile coaches are expected to have this attitude.

Robert Greenleaf introduced the concept of the “servant-leader.” Perhaps this is the most appropriate way of thinking of the agile project manager. On an agile project the project manager does not so much manage the project as both serve and lead the team. Perhaps this is one reason why, anecdotally, it seems much more common to see an agile project manager also function as a contributor to the project team (whether writing or running tests, writing code or documentation, etc.).

Plan-driven software methodologies use a command-and-control approach to project management. A project plan is created that lists all known tasks, and the project manager’s job then becomes one of enforcing the plan. Changes to the plan are typically handled through “change control boards” that either reject most changes or institute enough bureaucracy that the rate of change is slowed to the speed the plan-driven methodology can accommodate. There can be no servant-leadership in this model. Project managers manage: they direct, administer and supervise.

Agile project management, on the other hand, is much more about leadership than about management. Rather than creating a highly detailed plan showing the sequence of all activities, the agile project manager works with the customer to lay out a common set of understandings from which emergence, adaptation and collaboration can occur. The agile project manager lays out a vision and then nurtures the project team to do its best to achieve it. Inasmuch as the manager represents the project to those outside it, he or she is the project leader. However, the project manager serves an equally important role within the project while acting as a servant to the team: removing impediments, reinforcing the project vision through words and actions, battling organizational dysfunction, and doing everything possible to ensure the success of the team. The agile project manager is a true coach and friend to the project team.

With “light touch” control, agile project managers realize that increased control doesn’t cause increased order; they approach management with courage, accepting that they can’t know everything in advance and relinquishing some control to achieve greater order.

Throughout the project, the project manager identifies practices that aren’t followed, seeks to understand why, and removes obstacles to their implementation. Used thus, the agile practices provide simple generative rules without restricting autonomy and creativity.

Transformative Leadership

Here the leader transforms others into leaders; transformation is the key characteristic of this level. Transformative leaders lead by example; they work well within organizational constraints; they glide over organizational politics; they stretch organizational boundaries and lead the team into new areas. Transformative leaders develop others to reach Levels 3 and 4. They keep the big picture in mind and always think globally.

Dysfunctions of Teams

Teams go through many levels of maturity. Bruce Tuckman describes the four stages of Forming, Storming, Norming and Performing. Patrick Lencioni takes us through the Five Dysfunctions of a Team. Teams don’t just start on day one as a productive, self-organizing team; they go through a process, with bumps and challenges along the way. The agile project manager is responsible for getting the team through these phases with as little pain as possible. Even after a team reaches a mature stage, the slightest change can revert it to a former, less mature stage.

Agile software development is very fast paced, and in order to accommodate change effectively it is very disciplined, requiring constant attention to the process, the results and the team in order to stay on track. Agile methodologies describe many practices that guide us through the mechanics of building software in an agile fashion. But we must also address the changes required in leadership style in order to see the benefits we strive for by adopting agile methodologies.
 
Some new agile PMs take such a strong ‘hands off’ approach that the team struggles to get through the first couple of phases of team maturity and agile appears to fail.
The agile PM must find ways to get team members to know each other; then they can start to trust each other, learn how to communicate, solve problems, have good debates and make decisions. Facilitate and encourage this process throughout the project.
 
Some techniques for getting folks through the early phases of team maturity are:
 
  • Inception or planning activities
  • Informal social events
  • Team building events
  • Remember the Future activity
  • Retrospectives
  • Team Health checks

People are a key part of the success of any project. No matter what organizational model we invent, if people are not engaged it won’t work. Motivation, engagement, ownership and self-organization are the core values of an agile mindset. An agile project manager’s first concern should be team morale. If team members are good professionals, they’ll know how to sort out any situation while in the right mood. If they’re not good professionals, we have a bigger problem, and it is not going to be solved by agile techniques or any other solution.

Tossing more process at an already dysfunctional team is not going to help; it will only add to the mess. The key to success is doing more with less: less process, fewer people, fewer rules, less time to get things done. Let people do what they’re there for – design and build – and free them from as much paperwork and red tape as possible.

Any critical view needs an alternative proposal, so here is a recipe to avoid the liturgical trap and encourage actual continuous improvement:

  • Always be mindful and question the necessity of the process: a good way to test how valuable a ceremony or practice is for a team, especially if a smell is detected, is to make it voluntary. If team members find it valuable they’ll perpetuate the practice and all will feel its usefulness. If they abandon the practice, it’s time to look for alternatives. That is self-organization.
  • Be flexible and try new things, in their fullness: some agile practices need time and training to actually work. It’s difficult at the beginning and people resist change. As a team, be disciplined: when the team decides to try out a practice, adopt it in its completeness, with full understanding, for a period of time. It can then be incorporated into the process based on its proven effectiveness.
  • Check on team morale often: using retrospectives, one-on-one conversations, anonymous polls or any other means, the agile PM should ask people what they need to get things done, find out their aspirations and expectations, and take steps to increase motivation.
  • Find your balance: when motivation fails, discipline and good practices help. But we cannot rely on discipline alone to achieve success, as discipline has a price we pay in terms of team morale. Motivation has the same effect as discipline without that price.

Responsibilities of an Agile Project Manager:

Following are the responsibilities of the agile project management function. From one project to another, these responsibilities may change slightly and be interpreted differently.

  • Responsible for maintaining the agile values and practices in the project team.
  • Removes impediments as the core function of the role.
  • Helps the project team members turn the requirements backlog into working software functionality.
  • Facilitates and encourages effective and open communication within the team.
  • Responsible for holding agile meetings that discuss short-term plans and ways to overcome obstacles.
  • Enhances the tools and practices used in the development process.
  • Acts as the chief motivator of the team and plays the mentor role for team members as well.

Fowler (2002) and Blotner (2003) suggest that a manager can and should help the team be more productive by offering some insight into the ways in which things can be done in an agile environment.
Grenning (2001) observed that having senior people monitor the team’s progress at a monthly design-as-built review meeting accelerated the development process while lowering the number of bugs.

Adaptive Teams

Another major premise is the ability to adapt – hence the agility. Delivering features early may not be enough when the goalpost you are shooting for moves, but shorter feedback and corrective cycles mean that you can correct the course of action early and respond to emerging market opportunities.

Obviously, there is no magic bullet; the journey still has to be made, with the team doing all that is necessary for the application to take shape. What is different, though, is that the team will not be in the dark about how they are going to get to the end, customers will not have to wait until the end to touch and feel the product, and there won’t be any grand change-control processes if the goalpost keeps moving. Instead, the business will be able to change its mind, and the team will be able to adapt, learn and produce valuable pieces of software along the way.

To ensure delivery of a “non-obsolete” solution, it is important that any changes to project requirements are handled proactively by making necessary adjustments to the development process (process dimension) and the development team (people dimension).

Agile Project Management at Salesforce.com

Salesforce.com recently completed an agile transformation of a two-hundred-person team within a three-month window – one of the largest and fastest “big-bang” agile rollouts. It focused on creating self-organizing teams, debt-free iterative development, transparency and automation.

Salesforce.com is a market and technology leader in on-demand services. It routinely processes over 85 million transactions a day and has over 646,000 subscribers. Salesforce.com builds a CRM solution and an on-demand application platform. The services technology group is responsible for all product development inside Salesforce.com and has grown 50% per year since its inception eight years ago, delivering an average of four major releases each year. Before the agile rollout, this had slowed to one major release a year. The agile rollout was designed to address problems with the previous methodology:

  • Inaccurate early estimates resulting in missed feature complete dates and compressed testing schedules.

  • Lack of visibility at all stages in the release.

  • Late feedback on features at the end of the release cycle.

  • Long and unpredictable release schedules.

  • Gradual productivity decline as the team grew.

Before the agile rollout the R&D group leveraged a loose, waterfall-based process with an entrepreneurial culture. The R&D teams are functionally organized into program management, user experience, product management, development, quality engineering, and documentation. Although different projects and teams varied in their specific approaches, overall development followed a phase-based functional waterfall. Product management produced feature functional specifications. User experience produced feature prototypes and interfaces. Development wrote technical specifications and code. The quality team tested and verified the feature functionality. The documentation team documented the functionality. The system test team tested the product at scale. Program management oversaw projects and coordinated feature delivery across the various functions.

Its waterfall-based process was quite successful in growing the company in its early years, while the team was small. However, the company grew quickly, and the process became a challenge to manage as the team scaled beyond the capacity of a few key people. Although they were successfully delivering patch releases, the time between major releases was growing longer (from 3 months to over 12). Due to fast company growth and lengthening release cycles, many people in R&D had not participated in a major release of the main product. Releases are learning opportunities for the organization, and a reduction in releases meant fewer opportunities to learn. This had a detrimental effect on morale and on the ability to deliver quality features to market.

Transition Approach

An original company founder and the head of the R&D technology group launched an organizational change program. He created a cross-functional team to address slowing velocity, decreased predictability and declining product stability. This cross-functional team redesigned and rebuilt the development process from the ground up using key values from the company’s founding: KISS (Keep It Simple, Stupid), iterate quickly, and listen to customers. These values are a natural match for agile methodologies.

It was very important to position the change as a return to core values as a technology organization rather than a wholesale modification of how they deliver software. There were three key areas that were already in place that helped the transition: 1) the on-demand software model is a natural fit for agile methods; 2) an extensive automated test system was already in place to provide the backbone of the new methodology; and 3) a majority of the R&D organization was collocated.

One team member wrote a document describing the new process, its benefits and why they were transitioning from the old process. They led 45 one-hour meetings with key people from all levels in the organization. Feedback from these meetings was incorporated into the document after each meeting, molding the design of the new process and creating broad organizational buy-in for change. This open communication feedback loop allowed everyone to participate in the design of the new process and engage as an active voice in the solution. Two key additions to the initial paper were a plan for integrating usability design and clarification of how much time was needed for release closure sprints.

At this point, most literature recommended an incremental approach using pilot projects and a slow rollout. They also considered changing every team at the same time. There were people in both camps, and it was a difficult decision. The key factor driving them toward a big-bang rollout was the desire to avoid organizational dissonance and to act decisively: everyone would be doing the same thing at the same time. One of the key arguments against the big-bang rollout was that they would make the same mistakes with several teams rather than learning with a few starter teams, and that they would not have enough coaches to assist teams every day. However, one team in the organization had already successfully run a high-visibility project using Scrum, which meant at least one team had been successful with an agile process before the rollout to all the other teams. They made the key decision to move to a “big-bang” rollout, moving all teams to the new process rather than just a few.

Some of the key wins since the rollout have been:

  • Focus on team throughput rather than individual productivity

  • Cross-functional teams that now meet daily

  • Simple, agile process with common vocabulary

  • Prioritized work for every team

  • A single R&D heartbeat with planned iterations.

  • User stories & new estimation methods

  • Defined organizational roles – ScrumMaster, Product Owner, Team Member

  • Continuous daily focus on automated tests across the entire organization

  • Automation team focused on build speed & flexibility

  • Daily metric drumbeats with visibility into the health of products and releases

  • A product-line Scrum of Scrums that provides weekly visibility to all teams

  • R&D-wide sprint reviews and team retrospectives held every 30 days

  • Product Owner & ScrumMaster weekly special interest groups (SIGs)

  • A time-boxed release on the heels of their biggest release ever

  • A reduction of 1,500+ bugs of debt

  • A potentially releasable product every 30 days

Although they are still learning and growing as an organization, these benefits have surpassed their initial goals for the rollout. Some areas they are still focusing on are teamwork, release planning, bug debt reduction, user stories, and effective tooling.

PEARL XXV : Scaled Agile Framework® pronounced SAFe™

PEARL XXV : Scaled Agile Framework®, pronounced SAFe™ – All individuals and enterprises can benefit from the application of these innovative and empowering scaled agile methods.

SAFe Core Values


Our modern world runs on software. In order to keep pace, we build increasingly complex and sophisticated software systems. Doing so requires larger teams and a continuous rethinking of the methods and practices – part art, science, engineering, mathematics, and social science – that we use to organize and manage these important activities. The Scaled Agile Framework represents one such set of advances: an interactive knowledge base for implementing agile practices at enterprise scale. The Scaled Agile Framework®, or SAFe, provides a recipe for adopting Agile at enterprise scale, illustrated in the big picture below. As Scrum is to the Agile team, SAFe is to the Agile enterprise. SAFe tackles the tough issues – architecture, integration, funding, governance, and roles at scale – and is field-tested and enterprise-friendly. SAFe is the brainchild of Dean Leffingwell; as Ken Schwaber and Jeff Sutherland are to Scrum, Dean Leffingwell is to SAFe. SAFe is based on Lean and Agile principles. There are three levels in SAFe:

  • Team
  • Program
  • Portfolio
Scaled Agile Framework big picture


At the Team Level: Scrum, with XP engineering practices, is used. Design/Build/Test (DBT) teams deliver working, fully tested software every two weeks. There are five to nine members on each team.

The Scrum team is renamed the DBT team (from Design/Build/Test), and the sprint review is described as the sprint demo.

One positive aspect of SAFe is its alignment of team and business objectives during PSI (Potentially Shippable Increment) planning.

This makes it easier to see the connection between the company roadmap/vision and day-to-day work. A high-level view of the business and architectural needs behind a company investment, and of how it connects to a particular epic at the program level and then to the stories implemented at the team level, is helpful during planning.

Similarly, HIP sprints (from Hardening / Innovation / Planning) are scheduled at the end of each PSI.

Spikes

A spike is a story or task aimed at answering a question or gathering information, rather than at producing shippable product.

In practice, the spikes teams take on are often proof-of-concept types of activities. The definition above says that the work is not focused on the finished product. It may even be designed to be thrown away at the end. This gives your product owner the proper expectation that you will most likely not directly implement the spike solution. During the course of the sprint, you may discover that what you learned in the spike cannot be implemented for any practical purpose. Or you may discover that the work can be used for great benefit on future stories. Either way, the intention of the spike is not to implement the completed work “as is.”

There are two other characteristics that spikes should have:

  1. Have clear objectives and outcomes for the spike. Be clear on the knowledge you are trying to gain and the problem(s) you are trying to address. It’s easy for a team to stray off into something interesting and related, but not relevant.
  2. Be timeboxed. Spikes should be timeboxed so you do just enough work that’s just good enough to get the value required.

At the Program Level:

Features are services provided by the system that fulfill stakeholders’ needs. They are maintained in the program backlog and are sized to fit within a PSI/release so that each PSI/release delivers conceptual integrity. Features bridge the gap between user stories and epics.

SAFe defines an Agile Release Train (ART). As an iteration is to a team, a train is to a program. The ART (or train) is the primary vehicle for value delivery at the program level; it delivers a value stream for the organization. SAFe is three-letter-acronym (TLA) heaven – DBT, ART, RTE, PSI, NFR, RMT, and I&A! Between 5 and 10 teams work together on a train. They synchronize their release boundaries and their iteration boundaries. Every 10 weeks (5 iterations) a train delivers a Potentially Shippable Increment (PSI), a demo and inspect-and-adapt sessions are held, and planning begins for the next PSI. PSIs provide a steady cadence for the development cycle; they are separate from the concept of market releases, which can happen more or less frequently and on a different schedule. New program-level roles are defined:
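
The train cadence described above can be sketched with a few lines of date arithmetic. The start date and the helper function are invented for illustration; the 2-week iteration and 5-iterations-per-PSI figures come from the text.

```python
from datetime import date, timedelta

# Illustrative sketch, not SAFe tooling: generate the synchronized
# iteration and PSI boundaries for one Agile Release Train, assuming
# 2-week iterations and 5 iterations (10 weeks) per PSI.

ITERATION_WEEKS = 2
ITERATIONS_PER_PSI = 5

def psi_calendar(start, num_psis):
    """Yield (psi, iteration, start_date, end_date) for each iteration."""
    cursor = start
    for psi in range(1, num_psis + 1):
        for it in range(1, ITERATIONS_PER_PSI + 1):
            end = cursor + timedelta(weeks=ITERATION_WEEKS) - timedelta(days=1)
            yield psi, it, cursor, end
            cursor = end + timedelta(days=1)

schedule = list(psi_calendar(date(2014, 1, 6), num_psis=2))
for psi, it, start, end in schedule:
    print(f"PSI {psi}, iteration {it}: {start} .. {end}")
```

Because every train on the portfolio shares the same arithmetic, all teams hit the same iteration and PSI boundaries, which is what makes the synchronized demos and inspect-and-adapt sessions possible.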

  • System Team
  • Product Manager
  • System Architect
  • Release Train Engineer (RTE)
  • UX and Shared Resources (e.g., security, DBA)
  • Release Management Team

In IT/PMI environments the Program Manager or Senior Project Manager might fill one of two roles. If they have deep domain expertise, they are likely to fill the Product Manager role; if they have strong people-management skills and understand the logistics of release, they often become the Release Train Engineer. SAFe makes a distinction between content (what the system does) and design (how the system does it), with separate “authority” for each. The Product Manager (Program Manager) has content authority at the program level; he or she defines and prioritizes the program backlog.

SAFe defines an artifact hierarchy of Epics – Features – User Stories. The program backlog is a prioritized list of features. Features can originate at the program level, or they can derive from epics defined at the portfolio level. Features decompose to user stories, which flow to team-level backlogs. Features are prioritized based on Don Reinertsen’s Weighted Shortest Job First (WSJF) economic decision framework. The System Architect has design authority at the program level. He collaborates day to day with the teams, ensuring that non-functional requirements (NFRs) are met, and works with the enterprise architect at the portfolio level to ensure that there is sufficient architectural runway to support upcoming user and business needs. The UX Designer(s) provide UI design, UX guidelines, and design elements for the teams. In a similar manner, shared specialists provide services such as security, performance, and database administration across the teams. The Release Train Engineer (RTE) is the Uber-ScrumMaster. The Release Management Team is a cross-functional team – with representation from marketing, development, quality, operations, and deployment – that approves frequent releases of quality solutions to customers.
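
The WSJF prioritization mentioned above divides the Cost of Delay by job size; in Reinertsen's decomposition as SAFe applies it, Cost of Delay is the sum of relative user/business value, time criticality, and risk reduction / opportunity enablement. A minimal sketch, with invented feature names and relative scores:

```python
# Illustrative WSJF scoring (not SAFe's official tooling). Each component
# and the job size are relative estimates, e.g. on a Fibonacci-like scale.

def wsjf(value, time_criticality, risk_opportunity, job_size):
    """Weighted Shortest Job First = Cost of Delay / Job Size."""
    cost_of_delay = value + time_criticality + risk_opportunity
    return cost_of_delay / job_size

# (name, value, time criticality, risk/opportunity, job size) -- made up
features = [
    ("Single sign-on",  8, 13, 5, 8),
    ("Audit logging",   5,  3, 8, 3),
    ("New dashboard",  13,  5, 2, 13),
]

# Highest WSJF score is implemented first.
ranked = sorted(features, key=lambda f: wsjf(*f[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: WSJF = {wsjf(*scores):.2f}")
```

Note how the small "Audit logging" job outranks the higher-value but larger "Single sign-on" feature: shrinking job size raises priority, which is the "shortest job first" half of the heuristic.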

For agility at scale, a modest amount of modeling has been introduced to support the vision, upcoming features, and the ongoing extension of the Architectural Runway for each Agile Release Train.

The Agile Release Train is a long-lived team of agile teams, typically 50 to 125 individuals, that serves program-level value delivery in SAFe. Using a common sprint cadence, each train has the dedicated resources to continuously define, build, test, and deliver value to one of the enterprise value streams. Teams are aligned to a common mission via a single program backlog and include the program management, architecture, UX guidance, and Release Train Engineer roles. Each train produces a valuable and evaluable system-level Potentially Shippable Increment at least every 8 to 12 weeks, in accordance with the PSI objectives established by the teams during each release planning event, though teams can release at any time according to market needs.

Cadence is what gives a team a feeling of demarcation, progression, resolution, or flow – a pattern that allows the team to know what they are doing and when it will be done. For very small or mature teams, this cadence could be complex, arrhythmic, or syncopated. However, it is enough to allow a team to make reliable commitments, because recognizing their cadence allows them to understand their capability or capacity.

The program backlog is the single, definitive repository for all the work anticipated by the program. The backlog is created from the breakdown of business and architectural epics into features that will address user needs and deliver business benefits. The purpose of the program roadmap is to establish alignment across all teams, while also providing predictability for the deliverables over an established time horizon.

Program epics affect a single release train.

SAFe provides a cadence-based approach to the delivery of value via PSIs: schedule, manage, and govern your synchronized PSIs.

Shared Iteration schedules allow multiple teams to stay on the same cadence and facilitate roll up reporting. Release Capacity Planning allows you to scale agile initiatives across multiple teams and deploy more predictable releases. Cross team dependencies are quickly identified and made visible to the entire program.

At the Portfolio Level:

The Portfolio Vision defines how the enterprise’s business strategy will be achieved.

In the Scaled Agile Framework, the Portfolio Level is the highest and most strategic layer, where programs are aligned to the company’s business strategy and investment approach.

PPM has a central role in strategy, investment funding, program management, and governance. Investment themes drive budget allocations. Themes are set as part of the budgeting process, with a lifespan of 6–12 months.

Epics are enterprise initiatives that are sufficiently substantial in scope that they warrant analysis and an understanding of potential ROI. Epics require a lightweight business case that elaborates the business and technology impact and the implementation strategy. Epics are generally cross-cutting: they impact multiple organizations, budgets, and release trains, and occur over multiple PSIs.

Portfolio epics affect multiple release trains. Epics cut across all three business dimensions: time (multiple PSIs, years), scope (release trains, applications, solutions, and business platforms), and organization (departments, business units, partners, the end-to-end business value chain).

The portfolio philosophy is centralized strategy with local execution. Epics define large development initiatives that encapsulate the new development necessary to realize the benefits of investment themes. Program Portfolio Management represents the individuals responsible for strategy, investment funding, program management, and governance. They are the stewards of the portfolio vision: they define the relevant value streams, control the budget through investment themes, define and prioritize cross-cutting portfolio backlog epics, guide the agile release trains, and report to the business on investment spend and program progress. SAFe provides seven transformation patterns to lead the organization to program portfolio management:

  • Decentralized decision-making
  • Demand management, continuous value flow
  • Lightweight, epic-only business cases
  • Decentralized rolling wave planning
  • Agile estimating and planning
  • Self-organizing, self-managing agile release trains
  • Objective, fact-based measures and milestones

Rolling wave planning is the process of planning a project in waves as the project proceeds and later details become clearer. Work to be done in the near term is planned in detail, while later work rests on high-level assumptions and high-level milestones. As the project progresses, the risks, assumptions, and milestones originally identified become more defined and reliable. Rolling wave planning is useful when there is an extremely tight schedule or timeline to adhere to, where waiting to plan everything thoroughly up front would push the schedule into an unacceptable negative variance.

This is an approach that iteratively plans for a project as it unfolds, similar to the techniques used in Scrum (development) and other forms of Agile software development.

Progressive elaboration is what occurs in this rolling wave planning process: over time, the work packages are elaborated in greater detail. As the weeks and months pass, the plan provides the missing, more elaborated detail for the work packages as they appear on the horizon.
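
A minimal sketch of this idea, assuming a hypothetical six-week planning window and invented work packages: work starting inside the window is elaborated into detailed tasks now, while later work stays a high-level milestone until a future wave.

```python
from datetime import date, timedelta

# Illustrative rolling wave planner: the six-week window and the work
# packages are assumptions, not from any real project plan.
PLANNING_WINDOW = timedelta(weeks=6)

def plan(work_packages, today):
    """Split work into packages to elaborate now vs. high-level milestones."""
    detailed, milestones = [], []
    for name, start in work_packages:
        if start - today <= PLANNING_WINDOW:
            detailed.append(name)      # near-term: break into detailed tasks
        else:
            milestones.append(name)    # far-term: defer detail to a later wave
    return detailed, milestones

packages = [("Login rework", date(2014, 2, 1)),
            ("Reporting module", date(2014, 5, 1))]
detailed, milestones = plan(packages, today=date(2014, 1, 15))
```

Re-running `plan` at each wave with a later `today` progressively moves milestones into the detailed list, which is exactly the progressive elaboration described above.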

Investment themes represent the set of initiatives that drive the enterprise’s investment in systems, products, applications, and services. Epics can be grouped by investment theme, and the relative capacity allocations can then be visualized to determine whether the planned epics are in alignment with the overall business strategy. Epics are large-scale development initiatives that realize the value of investment themes.

SAFe-Levels


There are business epics (customer-facing) and architectural epics (technology solutions). Business and architectural epics are managed in parallel Kanban systems. Objective metrics support IT governance and continuous improvement. Enterprise architecture is a first class citizen.  The concept of Intentional Architecture provides a set of planned initiatives to enhance solution design, performance, security and usability. SAFe patterns provide a transformation roadmap.

Architectural runway exists when the enterprise’s platforms have sufficient existing technological infrastructure (code) to support the implementation of the highest-priority features without excessive, delay-inducing redesign. In order to achieve some degree of runway, the enterprise must continuously invest in refactoring and extending existing platforms.

SAFe suggests the development and implementation of kanban systems for business and architectural portfolio epics.

The architectural epic kanban system brings visibility, Work In Process limits, and continuous flow to portfolio-level architectural epics. This kanban system has four states: funnel, backlog, analysis, and implementation. The architectural epic kanban is typically under the auspices of the CTO/technology office, which includes the enterprise and system architects.

The business epic kanban system brings visibility, Work In Process limits, and continuous flow to portfolio-level business epics. This kanban system has the same four states: funnel, backlog, analysis, and implementation. The business epic kanban is typically under the auspices of program portfolio management, comprised of those executives and business owners who have responsibility for implementing business strategy.
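
The four-state portfolio kanban described above can be sketched as follows. The WIP limits are invented for illustration; SAFe leaves the specific limits to the organization.

```python
# Minimal portfolio-epic kanban sketch: epics flow through
# funnel -> backlog -> analysis -> implementation, and a pull into a
# state is refused once that state's Work In Process limit is reached.

STATES = ["funnel", "backlog", "analysis", "implementation"]
WIP_LIMITS = {"analysis": 3, "implementation": 2}   # assumed limits

class EpicKanban:
    def __init__(self):
        self.board = {state: [] for state in STATES}

    def add(self, epic):
        """New epics always enter through the funnel."""
        self.board["funnel"].append(epic)

    def pull(self, epic, target):
        """Pull an epic from the previous state into `target`, honoring WIP."""
        limit = WIP_LIMITS.get(target)
        if limit is not None and len(self.board[target]) >= limit:
            return False                      # WIP limit reached: refuse pull
        source = STATES[STATES.index(target) - 1]
        self.board[source].remove(epic)
        self.board[target].append(epic)
        return True

board = EpicKanban()
for epic in ["Epic A", "Epic B", "Epic C"]:
    board.add(epic)
```

The refusal in `pull` is the point of the system: limiting work in analysis and implementation forces the portfolio to finish epics before starting new ones, which is what produces continuous flow.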

Value streams


Lean Approach

The Scaled Agile Framework is based on a number of trends in modern software engineering:

  • Lean Thinking
  • Product Development flow
  • Agile Development
Dean Leffingwell and Lean Thinking


Agile provides the tools needed to empower and engage development teams to achieve unprecedented levels of productivity, quality, and engagement. The SAFe House of Lean provides the following constructs:

  • The Goal: Value – sustainably the shortest lead time, with the best quality and value to people
  • Respect for people
  • Kaizen (continuous improvement)
  • Principles of product development flow
  • Foundation: Management – Lean-thinking manager-teachers

Investment themes reflect how a portfolio allocates its budget across the initiatives that implement the portfolio’s business strategy. Investment themes are portfolio-level capacity allocations: each theme receives the resources implied by its budget.

PEARL XXIV: Incorporate Cloud computing to enable Agile S/w Development

PEARL XXIV : Organizations are changing IT architecture to incorporate cloud computing resources that enable Agile and empower software development teams with self-service.

Agile software development is winning the hearts and minds of developers and testers in leading enterprise organizations. In the “State of Agile Development” 2011 study, VersionOne highlights that 80% of the respondents from 6,042 companies surveyed have adopted Agile development practices within their organization. Nearly 50% of respondents had between two and five Agile projects underway, and one third said their organization is running 11 or more. There is a business reason for this momentum. The Agile development model enables software teams to produce higher quality software that is more in-sync with customer needs and delivers release cycles faster and more cost effectively than ever before.
Unfortunately, most software teams that adopt Agile development struggle to achieve its full potential due to legacy IT challenges, especially in the enterprise. While the Agile model accelerates the software development process, many teams find that their IT environments are not optimized to support the full potential of their Agile development release cycles. These legacy environments are often too slow, inflexible, and inadequate for Agile development processes.

Consider this: Typical provisioning time for an enterprise-grade development environment can take, at a minimum, from several weeks to several months. Most Agile release cycles span four to six weeks at most. Not only does a four week provisioning delay result in sub-optimal outcomes, but once created, these static environments are also difficult to change and cannot support the rapid iteration required for Agile development.
Essentially, Agile development methodologies require Agile infrastructure for optimal efficiency.

Leading companies are changing the way their IT teams equip and support software development teams by integrating fast, dynamic, flexible, and easily shareable cloud-based environments that are available on-demand. They are changing IT architecture to incorporate cloud computing resources that enable Agile and empower software development teams with self-service. By integrating cloud-based services into the overall IT architecture strategy, software development teams are better enabled to create, change, and scale complex computing environments as often as needed. And at the same time, IT is able to retain the full visibility and control required for security and operational governance over these environments.

Enterprises that implement Agile IT architecture to support Agile development have seen provisioning of IT resources reduced from weeks to minutes, accelerating software release cycles dramatically. For example, Cushman & Wakefield, the world’s largest real estate services firm, moved its application development and test environments to the cloud. In fewer than four months, Cushman & Wakefield development teams saw application release cycles shorten and become more efficient—and they doubled the number of projects they could complete in a given period of time.

When developers adopt the Agile model, software development will undergo rapid iterations before being deployed to production. Agile development teams must have the flexibility to develop and test each iteration of code as they converge on the final version of product. Often times, developers must also test their code across multiple operating systems, network configurations, and browsers for each release. Most enterprise applications require a complex multi-tier, multinetwork architecture that includes web servers, application servers, databases, firewalls, and load balancers. The ability to rapidly create, change, and test against these complex computing configurations is critical for Agile development speed and success. Unfortunately, most Agile teams do not have access to nimble, self-service environments that can support the high rate of change demanded by rapid iteration. Instead, progress is impeded by a variety of roadblocks inherent to legacy IT environments that are too rigid, costly, or static.

Specific roadblocks may include:
Slow to provision/static development and testing environments – Developers and test engineers often must rely on IT teams for the ordering, set-up, and access to development and test environments. Once provisioned, these environments tend to be static.
Underpowered infrastructure – The provisioning of environments typically involves older infrastructure that performs poorly and does not scale well. Oftentimes the development and test lab is overloaded with too many overlapping projects.
Lack of test coverage – In order to reproduce production scenarios or customer-specific issues, development and test teams need to be able to change operating systems, browsers, and databases, or infrastructure parameters such as CPU, memory, disk size, and network configuration. Existing data center architectures built for predictable workloads lack the agility and configurability to support the dynamic requirements of covering the full test matrix with each pass.

Inability to capture and reproduce complex software bugs – When developers and test engineers encounter complex bugs, they need to be able to save the memory, disk, and network settings across multiple machines and networks, so that the development team can properly diagnose the issue, while the QA team works in parallel on additional testing. Most development and test labs do not have the capacity or the flexibility to parallelize the capture of bugs with full environmental data and continue to execute on the full test matrix.

Limited sharing and collaboration – Most software development is team-oriented, and a lack of collaboration creates silos across teams and slows the release process. The challenge of collaboration and sharing is further compounded when the development, testing, and operations teams are geographically distributed with separate labs. Even more challenging is the case where constituents outside the company require access—examples include customers for user acceptance testing or contractors who perform certain test functions. IT teams using only in-house infrastructure often rely on scheduling specific shifts and resources for specific projects, and are typically unable to maintain the level of access or the pace of change required to support the multiple teams.
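
The multiplicative growth behind the full-test-matrix problem described above is easy to quantify. The OS, browser, and database values below are invented for illustration:

```python
from itertools import product

# Sketch of why full test-matrix coverage strains a static lab: the
# number of environment combinations grows multiplicatively with every
# configuration axis that must be covered.

operating_systems = ["Windows Server 2012", "RHEL 6", "Ubuntu 12.04"]
browsers = ["IE 10", "Firefox", "Chrome"]
databases = ["Oracle 11g", "SQL Server", "MySQL"]

matrix = list(product(operating_systems, browsers, databases))
print(len(matrix))  # 3 * 3 * 3 = 27 distinct environments per test pass
```

Adding a fourth axis (say, four network configurations) multiplies the count again, to 108 environments per pass, which is why on-demand, parallel environments matter more than raw lab capacity.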

In short, legacy IT environments are simply not designed to handle the demands of today’s Agile software development cycles and complex application requirements.
Development teams are being pushed to create quality applications faster and at lower cost than ever before. Many are rightfully choosing the Agile model so they can focus on customer satisfaction, initiate rapid development, and build a culture of collaboration. But how can these teams succeed if they don’t have the IT environment required to support their goals?

New IT Requirements for Agile Development Teams
As discussed earlier, the goal of Agile development is to shift from a long and inflexible development process to a much shorter, more collaborative process directed toward shipping software more frequently.
To be most successful, Agile development teams will require the following from their development and test lab infrastructure:
Self-service provisioning – Software development teams need to be able to create, change, deploy, copy, re-create, delete, and change development and test lab environments on demand, without IT assistance.
On-demand scalability – Software development teams need to scale environments up and down easily.
A library of virtual data center (VDC) templates – Software development teams need to create consistent versions of the current release stack, prior release stacks, and customer-specific variations as templates. Within a few minutes after a hot-fix issue is reported, teams should be able to create a new environment that matches the appropriate release scenario to reproduce the issue, fix it, test it, and deploy the new code.
Complex bug capture and reproduction – QA and support teams need to easily create, snapshot, and clone complex, multi-tier environments when a complicated bug is identified. They should be able to capture and quickly recreate complex environments, including all memory and state information, so that the relevant development team resources can work to identify root cause and develop a fix, while the QA team moves forward with additional testing.
Collaboration – Developers need to share copies of their lab environments with test engineers, other development engineers, and users. Teams should be able to organize their work in projects, invite specific project members to participate, and assign specific roles or access points to each participant based on roles (user, manager, database engineer, etc.) or status (employee, contractor, etc.).
Together, these requirements enable Agile software development teams to optimize the software development model for faster, more collaborative release cycles. When the development environment they use is as Agile as their development process, they can focus on specific customer problems, quickly develop solutions, iterate with customers, and ship higher quality software faster.
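
The VDC template library requirement above can be sketched as a toy, in-memory stand-in for what would really be a cloud provider's provisioning API; every name here is hypothetical.

```python
import copy

# Hypothetical sketch: register one template per release stack, then
# clone a fresh, independent environment from it on demand. A real
# implementation would call a cloud provider's provisioning API instead.

templates = {}

def register_template(name, spec):
    """Save a reusable environment specification under a template name."""
    templates[name] = spec

def provision(template_name, env_name):
    """Clone a new environment from a saved template (deep copy, so the
    new environment can be mutated without touching the template)."""
    spec = copy.deepcopy(templates[template_name])
    return {"name": env_name, **spec}

register_template("release-4.2", {
    "machines": ["web", "app", "db"],
    "network": "isolated",
    "os": "RHEL 6",
})

# A hot fix is reported against release 4.2: spin up a matching environment.
hotfix_env = provision("release-4.2", "hotfix-1234")
```

The deep copy is the essential property: each provisioned environment is independent, so a tester can snapshot or mutate it freely while the template remains the canonical release stack.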

Create Agile IT Using the Cloud
A typical procurement cycle requires 6 to 8 weeks to specify a development and test environment; procure server, networking, and storage hardware; and rack, configure, and test everything. That is a time-consuming and costly process that does not match the demands of the Agile development cycle. Leveraging cloud computing resources can provide more convenient, affordable, and on-demand computing environments tailored to the needs of software development teams.
By leveraging cloud computing resources as part of an overall IT strategy, enterprises can effectively extend their existing data centers and manage the cloud as an extension of their existing environment.

By utilizing cloud computing resources for development and test environments (rather than owning and maintaining hardware), IT can provide a higher degree of self-service for end users (developers and testers), and more configurability and scalability to support Agile requirements with much lower operational costs. Cloud-enabled solutions for development and test workloads will integrate the best characteristics of virtualization, cloud automation, software as a service (SaaS), and infrastructure as a service (IaaS) to provide a complete application lifecycle solution.
This solution-centric approach enables:

Developer and tester self-service – Developers and testers can create, replicate, change, or delete entire software development and test stacks and deploy application builds with just a few mouse clicks. A cloud-based solution can better enable teams to implement continuous integration, so they can quickly execute unit and functional tests and understand how their new code interacts with the rest of the application stack.
Scalability and configurability – Developers and testers can create new release stacks on demand, in a repeatable and dependable fashion, as they go through the peaks and troughs of release cycles.
Broader test coverage – The cloud gives developers and test engineers virtually unlimited resources compared to in-house physical lab infrastructure, which typically struggles to keep up as test matrices grow and release cycles compress. In the cloud, development teams can easily run multiple test passes in parallel to test various OS/DB/browser combinations, as the cloud enables VDC templates to scale up or down as needed. A rich library of VDC templates makes it easy to select from a broad range of environments that are ready to be provisioned with a mouse click.
Rapid bug capture and reproduction – In the cloud, test teams can snapshot complex bug environments (creating VDC templates), rapidly create clones of these templates, and continue to run tests unblocked by the saved snapshot in the development team’s queue. Using templates in a cloud-based environment enables testers to capture the whole environment without having to write down reproduction steps. Developers can then re-use those same environments to review the bug and quickly develop patches.
Collaboration and parallel work streams – More than ever, application development teams are geographically dispersed, providing significant challenges to cross-team collaboration. With cloud-based solutions, developers can easily collaborate and share access to environments with other developers, testers, and offshore or contract resources by creating release-specific projects and providing role-based access as appropriate.

IT Requirements for Agile Cloud
Just as development and test teams demand that cloud infrastructure support their Agile development needs, IT professionals also require that cloud-based labs meet their requirements for managing budget and user access. Specifically, IT’s need for full visibility and control of cloud environments requires that they can:

  • Set up development and test environment templates that are IT policy compliant.
  • Create users, roles, access control lists, and set permissions.
  • Establish a hybrid cloud architecture to securely connect the cloud environment to existing data center infrastructure.
  • Assign group, project, and individual-level quotas for machines, storage, and networks.
  • Track usage by month, by user, by project, and implement chargebacks if needed.
  • Audit and ensure compliance policies are followed.

IT teams that adopt a cloud-enabled Agile IT solution can:

  • Increase business agility for development teams.
  • Reduce time to market for new applications.
  • Ensure development ships better software faster.
  • Boost productivity across development/test and IT teams.
  • Lower fixed and variable costs.

Agile IT empowers development teams to achieve the full potential of the Agile model. In addition, IT will be able to retain full visibility and control over cloud environments and reduce operating costs.

Five Best Practices for Creating Cloud-Enabled Software Development
Modern enterprise organizations are leveraging cloud computing to enable IT to be Agile and to power Agile software development teams. Companies that have successfully implemented cloud-enabled software development have subscribed to the following five best practices when creating Agile infrastructure:
1. Don’t change the Agile process to fit legacy infrastructure – change your IT strategy to be more Agile. The very essence of the Agile model is trust and delegation, and yet some IT organizations still struggle to operate with these principles in mind. They claim to support the Agile development model, but require developers to change their model to fit the existing IT processes. Successful IT organizations are flexible and work with software development teams to create an IT cloud strategy that is self-service oriented, while still providing visibility and control for IT.
2. Empower end users with self-service environments – Enable your developers and testers by creating VDC templates specific to each development project, and by providing IT services that developers can consume easily and without intervention.
3. Expect rapid changes and fast iteration to be the new normal – Be ready to architect your cloud infrastructure to be more configurable, scalable, and flexible. An Agile development model is inherently fast paced, so be willing to accommodate change and quickly adapt to what works and what does not. Expect rapid iteration and design your cloud implementation accordingly.
4. Make collaboration the heart of Agile development – Customers, line-of-business users, QA engineers, contractors, and support professionals should be able to collaborate during multiple phases of the development cycle. All of these stakeholders are expected to operate on the application based on the specific roles they play on the team. If the environment cannot be easily replicated and shared across teams, then the developers and testers will struggle to get the full benefits of Agile.
5. Maintain full visibility and control over IT operations – Implementing Agile cloud infrastructure does not eliminate security and governance needs. IT organizations need to set security policies and enforce them through granular access control. They need full visibility into quota usage, resource management, and compliance.

Agile Cloud computing Infrastructure strategy at Health and Human Services (US Gov)

Extending existing IT architecture to encompass a cloud-enabled development strategy will ultimately serve your Agile development team better. The process does not have to be difficult or challenging, and in most cases, complex computing environments can be created in the cloud in just a few minutes or hours versus days or weeks. Look for enterprise-grade cloud computing service providers that can deliver on the following five key capabilities to enable application development teams:

1 Intuitive self-service
2 Fast productivity
3 Flexible, complex computing environments
4 Collaborative platforms for teams
5 Full visibility and control for IT

Although it is possible to continue using on-premises infrastructure, the Agile development process will be much more effective when the IT infrastructure and service delivery model are Agile. Given the potential consequences of operating with dated hardware, poor collaboration, and slow provisioning of IT resources, organizations can increase business agility by embracing cloud computing for software development and testing.

Developers need feedback. In Agile, near-immediate feedback is essential, and it’s generated through daily commits, continuous integration, testing, and through stakeholder input.

Agile development values rapid, continuous feedback—through analytics, and through direction from stakeholders and product owners. First, through continuous introspection, on a daily basis the technical team members can see if their code merges properly with other developers’ code, or find duplications or efficiencies in code. Next, with continuous integration and regular deployments of code to development or testing servers (daily, every few days, or weekly), product owners and stakeholders can give regular feedback on the development by reviewing iterations and identifying necessary modifications—before it’s too late. This ensures that the developers are building what the project owners want, versus finding out months later that it’s off track.
The key to Agile is to reduce this overall feedback cycle—to shorten it. The best way to reduce the cycle is to automate each piece of feedback functionality. Automation of code scanning and integration, testing, deployment, and all other continuous introspection activities will not only shorten cycles and speed up development, but will also make the Agile development process scalable. All automated introspection functions are easily and readily available across the enterprise.

Agile Methodology using cloud computing services

Agile Automation using Cloud Computing
The cloud can automate feedback in all stages of a project. Here are some of the ways cloud automation can support fast feedback and provide other benefits in Development, Building, Testing, and Deployment.
Automated Development in the Cloud: Continuous integration of code must begin with proper source control management to establish version control. For successful Agile development, organizations can rely on the cloud to provide distributed and easily accessible source code management to any number of developers—in less than five minutes. The cloud enables secure, 99.9% up-time availability to source code. And organizations can quickly scale up—moving from a few developers accessing the tool up to 500 or even 1000 developers without infrastructure concerns.
Automated Build in the Cloud: Organizations building with Agile in the cloud can expedite their work, and reduce their build costs. First, developers can take their existing build images residing on multiple platforms and use virtualization to have those images pre-built, and then accessed through the cloud. This expedites provisioning of existing build images. Second, the cloud enables utility pricing, so that fees are only charged for the specific services and actual build time used, versus the cost of maintaining a dedicated server.
Automated Testing in the Cloud: Testing in the cloud provides significant advances in speed and agility. Organizations can quickly run multi-platform testing using virtual images. In addition, they can run unit tests in parallel through cloud machines; so rather than running consecutive tests on one machine, multiple tests can be simultaneously run on multiple cloud machines. This also results in cost savings by paying only for actual test time used, and not the expense of maintaining a dedicated server or testing infrastructure.
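As a rough sketch of the parallel-testing idea, the snippet below simulates cloud machines with a local thread pool; the test names and sleep durations are purely illustrative stand-ins for real test suites.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Three illustrative unit tests; each sleeps to stand in for real work.
def test_login():
    time.sleep(0.1)
    return "test_login: OK"

def test_search():
    time.sleep(0.1)
    return "test_search: OK"

def test_checkout():
    time.sleep(0.1)
    return "test_checkout: OK"

tests = [test_login, test_search, test_checkout]

# Consecutive runs on one machine: total time is roughly the sum.
start = time.time()
sequential_results = [t() for t in tests]
sequential = time.time() - start

# Simultaneous runs, one "machine" (here, a thread) per test: total
# time is roughly the duration of the slowest single test.
start = time.time()
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    parallel_results = list(pool.map(lambda t: t(), tests))
parallel = time.time() - start

print(parallel_results)
print(f"sequential {sequential:.2f}s, parallel {parallel:.2f}s")
```

In a real cloud setup the workers would be separate virtual machines rather than threads, but the scaling behavior is the same: total wall-clock time approaches the duration of the slowest individual test.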

Automated Production Deployment in the Cloud: The cloud provides access to production environments in minutes, and, in some cases, push-button control to automate deployment: Leading Agile enterprise-ready cloud solutions empower organizations to export code to production in one click. Consider how important that is to supporting Agile development.
In Agile, to reduce the feedback cycle, automating production deployment as much as possible is a top priority. Whether deployment is to a test machine or to a demo machine staging for production, fast and frequent deployment is how the technical team can receive critical feedback from the business owners and keep the project moving quickly in the right direction. Therefore, push-button deployment should be made available to all developers and the Scrum Master, as opposed to IT or an operations team. Remember, fast feedback is king in Agile, and IT requests can lengthen the feedback cycle. To counteract security management concerns regarding widespread system access, sophisticated cloud solutions for Agile incorporate pre-configured, password-ready access to the deployment system.

PEARL X : Behavior Driven Development

PEARL X : Behavior Driven Development provides stakeholder value through collaboration throughout the entire project

Behavior-driven development was developed by Dan North as a response to the issues encountered teaching test-driven development:

  • Where to start in the process
  • What to test and what not to test
  • How much to test in one go
  • What to call the tests
  • How to understand why a test fails
At the heart of BDD is a rethinking of the approach to unit testing and acceptance testing that North came up with while dealing with these issues. For example, he proposes that unit test names be whole sentences starting with the word “should” and be written in order of business value.

At its core, behavior-driven development is a specialized version of test-driven development which focuses on behavioral specification of software units.

Test-driven development is a software development methodology which essentially states that for each unit of software, a software developer must:

  • define a test set for the unit first;
  • then implement the unit;
  • finally verify that the implementation of the unit makes the tests succeed.
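The three steps above can be sketched in miniature; the `add` function and its tests are hypothetical stand-ins for a real unit, not from any particular project.

```python
# Step 1: define a test set for the unit first (the unit does not exist yet).
def test_add():
    assert add(2, 3) == 5     # typical case
    assert add(-1, 1) == 0    # crossing zero
    assert add(0, 0) == 0     # identity element

# Step 2: implement the unit.
def add(a, b):
    return a + b

# Step 3: verify that the implementation makes the tests succeed.
test_add()
print("all tests passed")
```

Note that Python resolves the name `add` only when `test_add` is called, so the file can be written in the same test-first order that the methodology prescribes.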

This definition is rather non-specific in that it allows tests in terms of high-level software requirements, low-level technical details or anything in between. The original developer of BDD (Dan North) came up with the notion of BDD because he was dissatisfied with the lack of any specification within TDD of what should be tested and how. One way of looking at BDD therefore, is that it is a continued development of TDD which makes more specific choices than TDD.

Behavior Driven Development

Behavior-driven development specifies that tests of any unit of software should be specified in terms of the desired behavior of the unit. Borrowing from agile software development the “desired behavior” in this case consists of the requirements set by the business — that is, the desired behavior that has business value for whatever entity commissioned the software unit under construction.  Within BDD practice, this is referred to as BDD being an “outside-in” activity.

BDD practices

The practices of BDD include:

  • Establishing the goals of different stakeholders required for a vision to be implemented
  • Drawing out features which will achieve those goals using feature injection
  • Involving stakeholders in the implementation process through outside–in software development
  • Using examples to describe the behavior of the application, or of units of code
  • Automating those examples to provide quick feedback and regression testing
  • Using ‘should’ when describing the behavior of software to help clarify responsibility and allow the software’s functionality to be questioned
  • Using ‘ensure’ when describing responsibilities of software to differentiate outcomes in the scope of the code in question from side-effects of other elements of code.
  • Using mocks to stand in for collaborating modules of code which have not yet been written

Domain-Driven Design (DDD) is a collection of principles and patterns that help developers craft elegant object systems. Properly applied it can lead to software abstractions called domain models. These models encapsulate complex business logic, closing the gap between business reality and code.

Outside–in

BDD is driven by business value; that is, the benefit to the business which accrues once the application is in production. The only way in which this benefit can be realized is through the user interface(s) to the application, usually (but not always) a GUI.

In the same way, each piece of code, starting with the UI, can be considered a stakeholder of the other modules of code which it uses. Each element of code provides some aspect of behavior which, in collaboration with the other elements, provides the application behavior.

The first piece of production code that BDD developers implement is the UI. Developers can then benefit from quick feedback as to whether the UI looks and behaves appropriately. Through code, and using principles of good design and refactoring, developers discover collaborators of the UI, and of every unit of code thereafter. This helps them adhere to the principle of YAGNI, since each piece of production code is required either by the business, or by another piece of code already written.

YAGNI: You Aren’t Gonna Need It

Behavior-Driven Development (BDD) is an agile process designed to keep the focus on stakeholder value throughout the whole project. The premise of BDD is that the requirement has to be written in a way that everyone understands it – business representative, analyst, developer, tester, manager, etc. The key is to have a unique set of artifacts that are understood and used by everyone.

User stories are the central axis around which a software project rotates. Developers use user stories to capture requirements and to express customer expectations. User stories provide the unit of effort that project management uses to plan and to track progress. Estimations are made against user stories, and user stories are where software design begins. User stories help to shape a system’s usability and user experience.

User stories express requirements in terms of The Role, The Goal, and The Motivation.

A BDD story is written by the whole team and used as both requirements and executable test cases. It is a way to perform test-driven development (TDD) with a clarity that cannot be accomplished with unit testing. It is a way to describe and test functionality in (almost) natural language.

BDD Story Format
Even though there are different variations of the BDD story template, they all have two common elements: narrative and scenario. Each narrative is followed by one or more scenarios.

The BDD story format looks like this:

Narrative:
In order to [benefit]
As a [role]
I want to [feature]
Scenario: [description]
Given [context or precondition]
When [event or action]
Then [outcome validation]

“User stories are a promise for a conversation” (Ron Jeffries)
A BDD story consists of a narrative and one or more scenarios. A narrative is a short, simple description of a feature told from the perspective of a person or role that requires the new functionality. The intention of the narrative is NOT to provide a complete description of what is to be developed but to provide a basis for communication between all interested parties (business, analysts, developers, testers, etc.) The narrative shifts the focus from writing features to discussing them.
Even though it is usually very short, it tries to answer three basic questions that are often overlooked in traditional requirements.
What is the benefit or value that should be produced (In order to)?
Who needs it (As a)? And what is a feature or goal (I want to)?
With those questions answered, the team can start defining the best solution in collaboration with the stakeholders.

The narrative is further defined through scenarios that provide a definition of done, and acceptance criteria that confirm that the narrative, as developed, fulfills expectations.

It is important to remember that the written part of a BDD story is incomplete until discussions about that narrative occur and scenarios are written. Only the whole story (narrative and one or more scenarios) represents a full description of the functionality and definition of done.

If more information is needed, narratives can point to a diagram, workflow, spreadsheet, or any other external document.

Since narratives have some characteristics of traditional requirements, it is important to describe the distinctions. The two most important differences are precision and planning.

Precision
Narratives favor verbal communication. Written language is very imprecise, and team members and stakeholders might interpret the requirement in a different way.

Verbal communication wins over written.
As another example, consider the following requirement statement relating to a registration screen: “The system shall allow the user to register using a 16 character username and 8 character password”.

It was unclear whether the username must be exactly 16 characters, any length up to 16 characters, or any length with a minimum of 16 characters. In this particular case, the business analyst removed any doubt as soon as clarification was requested.

However, there are many other cases when developers take requirements as a final product and simply implement them in the way they understand them. In those cases they might not understand the reasons behind those requirements but just “follow specifications”. They might have a better solution in mind that never gets discussed.

Planning
IEEE 830 style requirements (“The system shall…”) often consist of hundreds or even thousands of statements. Planning such a large number of statements is extremely difficult. There are too many of them to be prioritized and estimated, and it is hard to understand which functionalities should be developed. That is especially evident when those statements are separated into different sections that represent different parts of the system or products. Without adequate prioritization, estimation, and description of the functionality itself, it is very hard to accomplish an iterative and incremental development process. Even if there is some kind of iteration plan, it can take a long time for a completed functionality to be delivered, since the development of isolated parts of the system is done in a different order and at a different speed.

Narratives are not requirement statements
The Computer Society of the Institute of Electrical and Electronics Engineers (IEEE) has published a set of guidelines on how to write software requirements specifications. This document is known as IEEE Standard 830, and it was last updated in 1998. One of the characteristics of an IEEE 830 statement is the use of the phrase “The system shall…”. Examples would be:

The system shall allow the user to login using a username and password.

The system shall have a login confirmation screen.

The system shall allow 3 unsuccessful login attempts.

Writing requirements in this way has many disadvantages: it is error prone and time-consuming, to name but two. Two other important disadvantages are that it is boring and too long to read.
This might seem irrelevant until you realize the implications. If reviewers and, if there is such a process, those who need to sign off requirements do NOT read it thoroughly and skip sections out of boredom, or because it does NOT affect them, many things will be missed. Moreover, having a big document written at that level often prevents readers from understanding the big picture and the real goal of the project.

A Waterfall model combined with IEEE 830 requirements tends to plan everything in advance, define all details, and hope that the project execution will be flawless. In reality, there are almost no successful software projects that manage to accomplish these goals. Requirements change over time resulting in “change requests”. Changes are unavoidable and only through constant communication and short iterations can the team reduce the impact of these changes. IEEE 830 statements are a big document in the form of a checklist. Written, done, forgotten, the overall understanding is lost. The need for constant reevaluation is nonexistent.
Consider the following requirements:

  • The product shall have 4 wheels.
  • The product shall have a steering wheel.
  • The product shall be powered by electricity.
  • The product shall be produced in different colors.

Each of those statements can be developed and tested independently and assembled at the end of the process. The first image in someone’s head might be an electrically-powered car.
That image is incorrect. It is a car, it has four wheels, it is powered by electricity (rechargeable batteries), and it can be purchased in different colors. It is a toy car.

That is probably not what an individual would think from reading those statements. A better description would be:

Narrative:
In order to provide entertainment for children
As a parent
I want a small-sized car
By looking at this narrative, it is clear what the purpose is (entertainment for children), who needs it (parents), and what it is (a small-sized car). It does not provide all the details, since the main purpose is to establish the communication that will result in more information and understanding of someone’s needs.

That process might end with one narrative being split into many. Further on, scenarios produced from that narrative act as acceptance criteria, tests, and definition of done.

Who can write narratives?
Anyone can write narratives. Teams that are switching to Agile tend to have business analysts as writers and owners of narratives or even whole BDD stories (a narrative with one or more scenarios).
In more mature agile teams, the product owner has a responsibility to make sure that there is a product backlog with BDD stories. That does not mean that he writes them. Each member of the team can write BDD stories or parts of them (narrative or scenario).
Whether all the narratives are written by one person (customer, business analyst, or product owner) or anyone can write them (developers, testers, etc.) usually depends on the type of organization and customers. Organizations that are used to “traditional” requirements, and to procedures that require those requirements to be “signed off” before the project starts, often struggle during their transition to Agile and iterative development. In cases like this, having one person (usually a business analyst) as the owner and writer of narratives might make for a smoother transition towards team ownership and lower the impact on the organization.

A good BDD narrative uses the “INVEST” model:

  •  Independent. Reduced dependencies = easier to plan.
  •  Negotiable. Details added via collaboration.
  •  Valuable. Provides value to the customer.
  •  Estimable. Too big or too vague = not estimable.
  •  Small. Can be done in less than a week by the team.
  •  Testable. Good acceptance criteria defined as scenarios.

While IEEE 830 requirements are focused on system operations, BDD narratives focus on customer value. They encourage looseness of information in order to foster a higher level of collaboration between stakeholders and the team. The actual work being done is accomplished through collaboration revolving around the narrative, which becomes more detailed through scenarios as the development progresses. Narratives are at a higher level than IEEE 830 requirements. Narratives are followed by collaboratively developed scenarios which define when the BDD story meets expectations.

Scenarios
Even though narratives can be written by anyone, they are often the result of conversations between the product owner or business analyst and the business stakeholder.
Scenarios describe interactions between user roles and the system. They are written in plain language with minimal technical details so that all stakeholders (customer, developers, testers, designers, marketing managers, etc.) can have a common base for use in discussions, development, and testing.
Scenarios are the acceptance criteria of the narrative. They represent the definition of done. Once all scenarios have been implemented, the story is considered finished. Scenarios can be written by anyone, with testers leading the effort.

The whole process should be iterative within the sprint; as the development of the BDD story progresses, new scenarios can be written to cover cases not thought of before. The initial set of scenarios should cover the “happy path”. Alternative paths should be added progressively during the duration of the sprint.

Format
Scenarios consist of a description and given, when, and then steps.
The scenario description is a short explanation of what the scenario does. It should be possible to understand the scenario from its description. It should not contain details and should not be longer than ten words.
Steps are a sequence of preconditions, events, and outcomes of a scenario. Each step must start with the word Given, When, or Then.
The Given step describes the context or precondition that needs to be fulfilled.

Given visitor is on the home screen

The When step describes an action or some event.

When user logs in

The Then step describes an outcome.

Then welcome message is displayed

Any number of given, when and then steps can be combined, but at least one of each must be present. BDD steps increase the quality of conversations by forcing participants to think in terms of pre-conditions that allow users to perform actions that result in some outcomes. By using those three types of steps, the quality of the interactions between team members and stakeholders increases.
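As a sketch of how the three step types map onto code, the fragment below hand-implements the login scenario used in the step examples above; the `App` class and the step function names are hypothetical, not from any BDD framework.

```python
# Hypothetical system under test: a minimal session model.
class App:
    def __init__(self):
        self.screen = "home"
        self.message = ""

    def log_in(self):
        self.message = "Welcome!"

# Given: the context or precondition that needs to be fulfilled.
def given_visitor_is_on_the_home_screen(app):
    assert app.screen == "home"

# When: the action or event.
def when_user_logs_in(app):
    app.log_in()

# Then: the outcome validation.
def then_welcome_message_is_displayed(app):
    assert app.message == "Welcome!"

# Running the scenario is simply running its steps in order.
app = App()
given_visitor_is_on_the_home_screen(app)
when_user_logs_in(app)
then_welcome_message_is_displayed(app)
print("scenario passed")
```

BDD tooling automates exactly this wiring: it maps each plain-language step to a function like the ones above and executes them in sequence.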

Process
The following process should be followed.
1. Write and discuss narrative.
2. Write and discuss short descriptions of scenarios.
3. Write steps for each scenario.
4. Repeat steps 2 and 3 during the development of the
narrative.

By starting only with scenario descriptions, we are creating a basis that will be further developed through steps. It allows us to discuss different aspects of the narrative without going into the details of all the steps required for each of the scenarios. Do not spend too much time writing descriptions of all possible scenarios. New ones will be written later.
Once each scenario has been fully written (description and steps) new possibilities and combinations will be discovered, resulting in more scenarios.

Each action or set of actions (when steps) is followed by one or more outcomes (then steps). Even though this scenario provides a solid base, several steps are still missing. This situation is fairly common because many steps are not obvious from the start.
Additional preconditions, actions, and outcomes become apparent only after the first version of the scenario has been written.

This scenario covers one of many different combinations. It describes the “happy path” where all actions have been performed successfully. To specify alternative paths, we can copy this scenario and modify it a bit.

This scenario was not written and fully perfected at the first attempt but through several iterations. With each version of the scenario, new questions were asked and new possibilities were explored.
The process of writing one scenario can take several days or even weeks. It can be done in parallel with code development. As soon as the first version of the scenario has been completed, development can start. As development progresses, unexpected situations will arise and will need to be reflected in scenarios.

Behavior-driven development borrows the concept of the ubiquitous language from domain driven design. A ubiquitous language is a (semi-)formal language that is shared by all members of a software development team — both software developers and non-technical personnel. The language in question is both used and developed by all team members as a common means of discussing the domain of the software in question.  In this way BDD becomes a vehicle for communication between all the different roles in a software project.

BDD uses the specification of desired behavior as a ubiquitous language for the project team members. This is the reason that BDD insists on a semi-formal language for behavioral specification: some formality is a requirement for being a ubiquitous language. In addition, having such a ubiquitous language creates a domain model of specifications, so that specifications may be reasoned about formally. This model is also the basis for the different BDD-supporting software tools that are available.

Much like test-driven development practice, behavior-driven development assumes the use of specialized support tooling in a project. Inasmuch as BDD is, in many respects, a more specific version of TDD, the tooling for BDD is similar to that for TDD, but makes more demands on the developer than basic TDD tooling.

Tooling principles

In principle a BDD support tool is a testing framework for software, much like the tools that support TDD. However, where TDD tools tend to be quite free-format in what is allowed for specifying tests, BDD tools are linked to the definition of the ubiquitous language discussed earlier.

As discussed, the ubiquitous language allows business analysts to write down behavioral requirements in a way that will also be understood by developers. The principle of BDD support tooling is to make these same requirements documents directly executable as a collection of tests. The exact implementation of this varies per tool, but agile practice has come up with the following general process:

  • The tooling reads a specification document.
  • The tooling directly understands the completely formal parts of the ubiquitous language. Based on this, the tool breaks each scenario up into meaningful clauses.
  • Each individual clause in a scenario is transformed into some sort of parameter for a test for the user story. This part requires project-specific work by the software developers.
  • The framework then executes the test for each scenario, with the parameters from that scenario.
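A toy version of this four-step process might look as follows; the scenario text and the binding functions are hypothetical, and real tools such as JBehave use annotations and parameterized templates rather than exact-string lookup.

```python
# A scenario in plain text, as the tooling would read it from a document.
SPEC = """\
Given visitor is on the home screen
When user logs in
Then welcome message is displayed
"""

KEYWORDS = ("Given", "When", "Then", "And")

# Steps 1-2: read the document and break each line into (keyword, clause).
def parse(spec):
    steps = []
    for line in spec.strip().splitlines():
        keyword, _, clause = line.partition(" ")
        if keyword not in KEYWORDS:
            raise ValueError(f"not a step line: {line!r}")
        steps.append((keyword, clause))
    return steps

# Step 3: project-specific work -- bind each clause to test code.
state = {}

def on_home_screen():
    state["screen"] = "home"

def log_in():
    assert state.get("screen") == "home"
    state["message"] = "Welcome!"

def welcome_displayed():
    assert state.get("message") == "Welcome!"

bindings = {
    "visitor is on the home screen": on_home_screen,
    "user logs in": log_in,
    "welcome message is displayed": welcome_displayed,
}

# Step 4: execute the test for the scenario, clause by clause.
for keyword, clause in parse(SPEC):
    bindings[clause]()
print("scenario executed: all clauses passed")
```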

Dan North has developed a number of frameworks that support BDD (including JBehave and RBehave), whose operation is based on the template that he suggested for recording user stories. These tools use a textual description for use cases, and several other tools (such as CBehave) have followed suit. However, this format is not required, so there are other tools that use other formats as well. For example, FitNesse (which is built around decision tables) has also been used to roll out BDD.

Tooling examples

There are several different examples of BDD software tools in use in projects today, for different platforms and programming languages.

Possibly the most well-known is JBehave, which was developed by Dan North. The following is an example taken from that project:

Consider an implementation of the Game of Life. A domain expert (or business analyst) might want to specify what should happen when someone is setting up a starting configuration of the game grid. To do this, he might want to give an example of a number of steps taken by a person who is toggling cells. Skipping over the narrative part, he might do this by writing up the following scenario into a plain text document (which is the type of input document that JBehave reads):

Given a 5 by 5 game
When I toggle the cell at (3, 2)
Then the grid should look like
.....
.....
.....
..X..
.....
When I toggle the cell at (3, 1)
Then the grid should look like
.....
.....
.....
..X..
..X..
When I toggle the cell at (3, 2)
Then the grid should look like
.....
.....
.....
.....
..X..

The bold print is not actually part of the input; it is included here to show which words are recognized as formal language. JBehave recognizes the terms Given (as a precondition which defines the start of a scenario), When (as an event trigger) and Then (as a postcondition which must be verified as the outcome of the action that follows the trigger). Based on this, JBehave is capable of reading the text file containing the scenario and parsing it into clauses (a set-up clause and then three event triggers with verifiable conditions). JBehave then takes these clauses and passes them on to code that is capable of setting a test, responding to the event triggers and verifying the outcome. This code must be written by the developers in the project team (in Java, because that is the platform JBehave is based on). In this case, the code might look like this:

private Game game;
private StringRenderer renderer;
 
@Given("a $width by $height game")
public void theGameIsRunning(int width, int height) {
    game = new Game(width, height);
    renderer = new StringRenderer();
    game.setObserver(renderer);
}
 
@When("I toggle the cell at ($column, $row)")
public void iToggleTheCellAt(int column, int row) {
    game.toggleCellAt(column, row);
}
 
@Then("the grid should look like $grid")
public void theGridShouldLookLike(String grid) {
    assertThat(renderer.asString(), equalTo(grid));
}

The code has a method for every type of clause in a scenario. JBehave will identify which method goes with which clause through the use of annotations and will call each method in order while running through the scenario. The text in each clause in the scenario is expected to match the template text given in the code for that clause (for example, a Given in a scenario is expected to be followed by a clause of the form “a X by Y game”). JBehave supports the matching of actual clauses to templates and has built-in support for picking terms out of the template and passing them to methods in the test code as parameters. The test code provides an implementation for each clause type in a scenario which interacts with the code that is being tested and performs an actual test based on the scenario. In this case:

  • The theGameIsRunning method reacts to a Given clause by setting up the initial game grid.
  • The iToggleTheCellAt method reacts to a When clause by firing off the toggle event described in the clause.
  • The theGridShouldLookLike method reacts to a Then clause by comparing the actual state of the game grid to the expected state from the scenario.

The primary function of this code is to be a bridge between a text file with a story and the actual code being tested. Note that the test code has access to the code being tested (in this case an instance of Game) and is very simple in nature (has to be, otherwise a developer would end up having to write tests for his tests).
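The template-matching mechanism can be approximated in a few lines; this is a hypothetical re-implementation sketched in Python, not JBehave’s actual parser.

```python
import re

# Turn a JBehave-style template such as "a $width by $height game" into a
# regular expression in which each $parameter becomes a named capture group.
def template_to_regex(template):
    escaped = re.escape(template)                        # "$" -> "\$"
    pattern = re.sub(r"\\\$(\w+)", r"(?P<\1>.+?)", escaped)
    return re.compile(pattern + r"$")

# Match an actual clause from a scenario against a template and pull out
# the parameter values, much as JBehave passes them to annotated methods.
def match_clause(template, clause):
    m = template_to_regex(template).match(clause)
    return m.groupdict() if m else None

print(match_clause("a $width by $height game", "a 5 by 5 game"))
# -> {'width': '5', 'height': '5'}
print(match_clause("I toggle the cell at ($column, $row)",
                   "I toggle the cell at (3, 2)"))
# -> {'column': '3', 'row': '2'}
```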

Finally, in order to run the tests, JBehave requires some plumbing code that identifies the text files which contain scenarios and which inject dependencies (like instances of Game) into the test code. This plumbing code is not illustrated here, since it is a technical requirement of JBehave and does not relate directly to the principle of BDD-style testing.

Story versus specification

A separate subcategory of behavior-driven development is formed by tools that use specifications, rather than user stories, as their input language. An example of this style is the RSpec tool, also developed by Dan North. Specification tools do not use user stories as an input format for test scenarios, but rather use functional specifications for the units being tested. These specifications often have a more technical nature than user stories and are usually less convenient than user stories for communication with business personnel. An example of a specification for a stack might look like this:

Specification: Stack

When a new stack is created
Then it is empty

When an element is added to the stack
Then that element is at the top of the stack

When a stack has N elements 
And element E is on top of the stack
Then a pop operation returns E
And the new size of the stack is N-1

Such a specification may exactly specify the behavior of the component being tested, but is less meaningful to a business user. As a result, specification-based testing is seen in BDD practice as a complement to story-based testing and operates at a lower level. Specification testing is often seen as a replacement for free-format unit testing.
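To illustrate how such a specification maps to executable checks, the stack specification above can be sketched in plain Java (here using java.util.ArrayDeque as the stack under test, an assumption made for this example); specification tools simply wrap the same checks behind readable clause names:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackSpecification {
    public static void main(String[] args) {
        // When a new stack is created, Then it is empty
        Deque<String> stack = new ArrayDeque<>();
        if (!stack.isEmpty()) throw new AssertionError("new stack not empty");

        // When an element is added to the stack,
        // Then that element is at the top of the stack
        stack.push("E");
        if (!"E".equals(stack.peek())) throw new AssertionError("E not on top");

        // When a stack has N elements And element E is on top of the stack,
        // Then a pop operation returns E And the new size of the stack is N-1
        int n = stack.size();
        String popped = stack.pop();
        if (!"E".equals(popped) || stack.size() != n - 1)
            throw new AssertionError("pop broke the specification");
    }
}
```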

Specification testing tools like RSpec and JDave are somewhat different in nature from tools like JBehave. Since they are seen as alternatives to basic unit testing tools like JUnit, these tools tend to favor forgoing the separation of story and testing code and prefer embedding the specification directly in the test code instead. For example, an RSpec test for a hashtable might look like this:

describe Hash do
  before(:each) do
    @hash = { :hello => 'world' }
  end
 
  it "should return a blank instance" do
    Hash.new.should eql({})
  end
 
  it "should hash the correct information in a key" do
    @hash[:hello].should eql('world')
  end
end

This example shows a specification in readable language embedded in executable code. In this case the tool formalizes the specification language into the language of the test code through methods named it and should. There is also the concept of a specification precondition: the before block establishes the preconditions on which the specification is based.

Cucumber lets software development teams describe how software should behave in plain text. The text is written in a business-readable domain-specific language and serves as documentation, automated tests and development-aid – all rolled into one format.

Cucumber works with Ruby, Java, .NET, Flex or web applications written in any language. It has been translated to over 40 spoken languages.

Cucumber also supports more succinct tests in tables, similar to what FIT does.

Gherkin gives us a lightweight structure for documenting examples of the behavior our stakeholders want, in a way that it can be easily understood both by the stakeholders and by Cucumber. Although we can call Gherkin a programming language, its primary design goal is human readability, meaning you can write automated tests that read like documentation.
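As a sketch of what this looks like in practice, a hypothetical Gherkin feature file (the feature, amounts and steps here are invented for illustration) might read:

```gherkin
# Hypothetical example; not taken from a real project.
Feature: Account withdrawal
  Scenario: Successful withdrawal from an account in credit
    Given my account has a balance of $100
    When I withdraw $20
    Then the remaining balance should be $80
```

Each Given/When/Then line is matched to a step definition in the test code, exactly as JBehave matches clauses to annotated methods, so the same text serves stakeholders as documentation and Cucumber as an executable test.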

Using mocks

BDD proponents claim that the use of “should” and “ensureThat” in BDD examples encourages developers to question whether the responsibilities they are assigning to their classes are appropriate, or whether they could be delegated or moved to another class entirely. Practitioners use an object that is simpler than the collaborating code, provides the same interface, and has more predictable behavior. This object is injected into the code that needs it, and examples of that code’s behavior are written using this object instead of the production version.

These objects can either be written by hand or generated using a mocking framework.

Questioning responsibilities in this way, and using mocks to fulfill the required roles of collaborating classes, encourages the use of Role-based Interfaces. It also helps to keep the classes small and loosely coupled.
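A minimal sketch of a hand-written mock, assuming a hypothetical OrderNotifier class that delegates e-mail delivery to a MailService collaborator (all names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// The role the collaborating class plays, expressed as an interface.
interface MailService {
    void send(String to, String body);
}

// Production code under test: its responsibility is deciding *what* to
// send; actually sending mail is delegated to the collaborator.
class OrderNotifier {
    private final MailService mail;
    OrderNotifier(MailService mail) { this.mail = mail; }
    void orderShipped(String customerEmail, String orderId) {
        mail.send(customerEmail, "Order " + orderId + " has shipped");
    }
}

// Hand-written mock: same interface, predictable behavior, records calls.
class MockMailService implements MailService {
    final List<String> sent = new ArrayList<>();
    public void send(String to, String body) { sent.add(to + ": " + body); }
}

public class MockExample {
    public static void main(String[] args) {
        MockMailService mock = new MockMailService();
        new OrderNotifier(mock).orderShipped("bob@example.com", "42");
        // Behavior is verified through the mock; no real mail is sent.
        if (!mock.sent.get(0).equals("bob@example.com: Order 42 has shipped"))
            throw new AssertionError("unexpected message: " + mock.sent);
    }
}
```

Injecting the mock keeps the example fast and predictable, and the test verifies only that OrderNotifier gave its collaborator the right instruction, which is exactly the role-based, loosely coupled design the text describes.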

PEARL XVII : Multidimensional Testing Coverage Matrix

PEARL XVII:

In Agile software development methodology, the multidimensional testing coverage matrix of what needs to be tested should be defined early. The importance of this has grown significantly in the past few years, as the test coverage matrix has become increasingly complex.

Rapid prototyping and development techniques combined with Agile development methodologies are pushing the envelope on the best practice of testing early and testing often. Keeping pace with quick development turnarounds and shorter time to market, while remaining adaptive to late changes in requirements, requires effective management of the quality process. The use of a multidimensional test coverage matrix, which maps test artifacts (test cases, test defects, test fixtures) to requirements (needs, features, use cases and supplementary requirements) as a QA scheduling and planning tool defined early in the project, though claimed to have been practiced, has been largely overlooked by the software industry.

View of Agile Delivery Life Cycle

User needs in an agile process are defined by a story (sometimes captured as use cases and features) planned to be implemented iteratively. The work breakdown for development of these use cases and features across iterations is defined in terms of tasks.

Test coverage can help in monitoring the quality of testing, and assist in directing the test generators to create test cases that cover areas that have not been tested before.

Get a Better Structure of the Product: Understanding the end-to-end data flow of an application helps in analyzing the root cause of a defect. Splitting the complete product into small segments helps the team get a clear structure of the product, as well as the flows and dependencies of each and every module.

Depending on the platform (Windows, Mac, Linux, Unix), different browser-based tools are important for agile testers in troubleshooting defects: for example, HTTP(S) tracking tools such as Paros Proxy, HttpWatch, WebScarab, HTTP Debugger Pro and Achilles, and log tracing tools such as WinSCP, PuTTY and XMLSpy. These dependencies should be clearly depicted in the multidimensional test coverage matrix.

The agile team needs to develop good test scenarios for future automation, end-to-end testing and product lifecycle analysis, to help in organizational process growth.

Acceptance TDD (ATDD). A team can apply TDD at the requirements level by writing a customer test, the equivalent of a function test or acceptance test in the traditional world. Acceptance TDD is often called behavior-driven development (BDD) or story test-driven development: you first automate a failing story test, then drive the design via TDD until the story test passes (a story test is a customer acceptance test).

Requirement Types

There are several categories of requirements: Stakeholder, High-Level Business, Application, Data, Security, Test (Quality), Supplementary, and Change Management. Some of these categories have a lower level of granularity associated with them.
Stakeholder Requirements
These usually come from the CEO of the company or from the Board of Directors. They are very high-level and usually focus on a major “problem” that the stakeholders want solved. They can also include high-level expectations for Performance (“instantaneous”), Availability (“24x7x365”) and Security (“totally secure”).

High-Level Business Requirements
High-level requirements will fulfill Stakeholder Requirements. They are traced directly from them.  High-Level business requirements trace to Application Requirements.
Application Requirements
Application requirements drive the development of the application. They fall into the following categories: Use Case Requirements, Business Requirements/Rules, Functional Requirements, Technical Requirements, User Interface Requirements, “Look and Feel” Requirements and Navigation Requirements. There are some very fine lines between these types, and the ultimate strategy may be to combine categories as needed.
Use Case Requirements
These start in the form of use-case documents but can be broken down into individual requirements. The use case describes the basic flow of the system and the alternate paths through it. Business Rules, Navigation and User Interface requirements are all derived from the use case and are traced from it.
Business Requirements/Rules
A Business Requirement or Rule is something specific to the business. These are sometimes listed as “non-functional” requirements, as they do not have any function to them; the rules exist whether or not a system exists. Examples, for a transportation firm, would be “Trucks can only travel 55 mph” and “We accept no bid less than $100,000.” This category also typically includes the list of data required to be captured, with the data items needed, such as name, address, etc. If there is a business need for data to be in a certain format, or for a minimum or maximum length for a text field, that too is captured here, as is any edit, validation or default value that serves a business purpose. This list of business rules is often captured in a business catalog. All of these should be traced from the use case.
Functional Requirements
These are driven and traced from the Business Requirements and Rules. They describe how a business rule is carried out, or how data is captured. These are usually in the format of “system shall do this or do that.”
Technical Requirements
These are derived during the low-level design process, and sometimes during development. They are driven from and traced from the Functional Requirements. They are not “user” features of the system, but describe the low-level required actions of the system that only a designer or developer would know. For example, “When a transaction is being processed and an error of type XYZ occurs, a rollback of the database transaction is required.” They can also include details of error messages and error-handling requirements.
User Interface Requirements
These are driven from Functional and Use Case Requirements, and are traced from them both, depending on where they were derived from. They include items such as screen layout, tab flow, mouse and keyboard use, what controls to use for what functions (e.g. radio button, pull-down list), and other “ease of use” issues.
“Look and Feel” Requirements
These are driven from Functional and Business Requirements, and are traced from them both, depending on where they were derived from. These can be categorized as the “non-functional” User Interface type requirements. They deal with how fields look, regarding font, font size, colors, graphics, etc.
Navigation Requirements
These are driven and traced from the Use Case: the Use Case lists the flow of the system, and the Navigation Requirements depict how that flow takes place. They are usually presented in a storyboard format and should show the screen flow of each use case and every alternate flow. Additionally, they should state what happens to the data or transaction at each step; for example, what happens to the data entered on a previous screen if the user decides to “go back”? They include the various ways to get to all screens, and an application screen map should be one of the artifacts derived in this category of requirements.
Based on the above Application Requirements, the Data Requirements are derived, and are traced from the Application Requirements.
Data Requirements
Based on the data needed in the Business Rules, and taking other requirements types into consideration, the database platform, database tables, columns, data types, size and other database attributes can be created.
Security Requirements
These are traced from wherever they need to be, as they encompass all requirements that deal directly with the application. They will mainly come from Business Rules.
Test (Quality) Requirements/Test Cases
The Test Cases are traced from all types of requirements and can be perceived as the all-encompassing box that all requirements sit inside. This shows that all requirements shall have test cases traced to them. If a test case fails during testing, the team should be able to see which higher-level requirement would not be fulfilled.
Supplementary Requirements
Traced from Data and Application requirements, these refer to the technology infrastructure and operations types of requirements, such as hardware and network requirements. There are many Operations Requirements that must also be captured, in the categories of Administration, Maintenance, Disaster/Recovery, Skill Set of Personnel, Change Management, User Documentation, Training, and Demo/Presentation Requirements. Other non-functional requirements that do not fall into any other category fall into this “catch-all” category.

Carefully Define the Testing Coverage Matrix: With the help of the multidimensional matrix, define the coverage requirements as a vital part of the specification process at an early stage. The size of the coverage matrix has an important influence on staffing needs and QA budgets. Defining the testing coverage matrix early helps management distribute resources and gives better control and analysis of what can be done internally versus what should be outsourced.

Test coverage in the test plan states which requirements will be verified during which stages of the product life. Test coverage is derived from design specifications and other requirements, such as safety standards or regulatory codes, where each requirement or specification of the design ideally has one or more corresponding means of verification. Test coverage for different product life stages may overlap, but will not necessarily be exactly the same for all stages. For example, some requirements may be verified during the Design Verification test but not repeated during the Acceptance test. Test coverage also feeds back into the design process, since the product may have to be designed to allow test access.

In order to reproduce production scenarios or customer-specific issues, development and test teams need to be able to change operating systems, browsers, and databases, or infrastructure parameters such as CPU, memory, disk size, and network configuration. These scenarios must be addressed in the multidimensional test coverage matrix.

The use-case descriptions define the main success scenarios of the system. However, not every use-case scenario ends in success for the user. While elaborating the use cases, using descriptive text to capture these alternate paths brings new aspects of the system to light when exceptions are encountered (the non-happy-path behavior of the system is captured).

Spence and Probasco refer to this as overloading the term “requirements,” a common point of confusion in Requirements Management. These paths may not be clear from the user needs and system features captured, but they are a vital and essential aspect of the system's behavior. To ensure that the system meets these requirements, and for coverage to be effective, they have to be elicited clearly and traced completely. Alternate paths may also be captured using a usability (scenario) matrix. Just as the use cases are mapped against the features (or cards) planned for the iteration, so can the use-case scenarios that stem from them, and so on, cascading down to the test cases and test artifacts.

Note that the usage of the application flow, even though captured, could end up varying based on the data. Supplementary requirements corresponding to the architectural requirements of the system cannot be mapped unless captured separately; these remain outside the functional requirements modeled by the use cases.

A sample list of business rules that have to be followed could be summarized.

When the mapping of test-case flows across functionality is carried out, it becomes evident that the granularity of detail falls short when the coverage of the test flows is mapped against the business rules in a tabular representation.

For example, based on the feature set as laid out, it is possible that any one of the flows used to ensure coverage of business requirement 1 could serve for business requirement 2 as well. On closer scrutiny, however, the test-case flow that tests the non-happy-path scenario of business rule 2 requires further elaboration of the test flows against the feature set.

Such gaps and inadequacies come to light in a granular multidimensional test coverage matrix; where the matrix is not granular enough, the test coverage falls short. Tracing every non-functional, business and non-business requirement to test cases and scenarios increases confidence in, and coverage of, the testing and QA activities that can be performed. The usability flows and concrete test cases that cover the requirements and needs can be formulated, and with each iteration targeted test cases can be identified to run against the specific build.

The test coverage matrix is truly multidimensional; to be an effective QA artifact it has to transcend the various phases of the development process (inception, elaboration, construction and transition). Further, it has to be a “living” artifact, one that is updated with each iteration.

Multi Dimensional Test coverage matrix example

Within iterations, a set of acceptance and regression tests has to be scheduled and performed to meet the exit criteria. Features and stories (in the form of cards) are planned into iterations in an agile methodology. With a multidimensional test coverage matrix mapping the test cases to features, use cases and defects, optimum test planning that assures software quality within each build/release becomes convenient. An effective multidimensional test coverage matrix helps to answer questions such as the following:

1. What test cases should be selected to run for the current build (verifying fixed defects and a regression suite for the current fixes, apart from the base-code smoke and build acceptance tests)?

2. What impact does change in a specific set of non-functional and functional requirements have on the QA testing process in arriving at test estimates?

3. How can the defects identified be mapped to the requirements that the iteration was scoped to achieve? And what surrounding testing and re-testing has to be carried out before the defects can be closed out, or new but related ones identified?

4. What change requests were brought about by the most recent build or iteration and what does this new change entail?

The testing coverage matrix also establishes tracking back to the exact requirements being implemented in the iteration, improving coverage and confidence in the quality process. This is of greater significance in agile projects, where requirements documentation isn’t complete because requirements continue to evolve with each build or iteration. Agility ensures the process (and the product) is adaptive to changing requirements, and using testing coverage for QA activities ensures that verification keeps up with these changes and their impact on quality.

Multi Dimensional Attributes of Test Coverage Matrix

The multidimensional test coverage matrix defines a method for testing a complex software system whose system programs are interrelated.

The method comprises the steps of:

  • determining linkages between the interrelated system programs using a multidimensional test coverage matrix;
  • identifying a change in one or more of the interrelated system programs;
  • applying the multidimensional test coverage matrix such that it depicts information pertaining to a selected test scenario and the test cases pertaining to the identified change;
  • depicting other test scenarios and test cases linked through two or more levels of dependencies and tributaries;
  • adapting the test coverage matrix to depict linkages across three or more levels of dependency.

The multidimensional test coverage matrix makes it possible to keep track of the complex relationships between the various components of a complex software system, and assists in the regression testing needed during integration and subsequent modification of the system. It is used to store the relationship information between test cases (predetermined scenarios involving a series of specified transactions that test the behavior of one or more software components) and/or the individual components of the complex software system.
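As a minimal sketch of how this relationship information could be stored (all component and test-case names here are hypothetical), two maps suffice: one recording which components depend on which, and one mapping components to the test cases that exercise them. Regression selection then walks the dependency links transitively, covering two, three or more levels:

```java
import java.util.*;

public class CoverageMatrix {
    // component -> components that depend on it (one level of linkage)
    private final Map<String, Set<String>> dependents = new HashMap<>();
    // component -> test cases that exercise it
    private final Map<String, Set<String>> tests = new HashMap<>();

    void addDependency(String component, String dependent) {
        dependents.computeIfAbsent(component, k -> new HashSet<>()).add(dependent);
    }

    void addTest(String component, String testCase) {
        tests.computeIfAbsent(component, k -> new HashSet<>()).add(testCase);
    }

    // Walk the dependency links transitively and collect every test case
    // touching the changed component or anything that depends on it.
    Set<String> testsFor(String changedComponent) {
        Set<String> impacted = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(changedComponent));
        while (!queue.isEmpty()) {
            String c = queue.pop();
            if (impacted.add(c))
                queue.addAll(dependents.getOrDefault(c, Set.of()));
        }
        Set<String> selected = new TreeSet<>();
        for (String c : impacted)
            selected.addAll(tests.getOrDefault(c, Set.of()));
        return selected;
    }

    public static void main(String[] args) {
        CoverageMatrix m = new CoverageMatrix();
        m.addDependency("billing", "invoicing");   // invoicing depends on billing
        m.addDependency("invoicing", "reporting"); // second level of dependency
        m.addTest("billing", "TC-01");
        m.addTest("invoicing", "TC-07");
        m.addTest("reporting", "TC-12");
        // A change in billing selects tests across all dependency levels.
        System.out.println(m.testsFor("billing")); // prints [TC-01, TC-07, TC-12]
    }
}
```

This is only one possible representation; the point is that once the linkages are stored explicitly, answering “which tests must run for this change?” becomes a mechanical traversal rather than a manual judgment call.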

Dependency Scenario