PEARL III: Principles of Lean Software Development for Agile Methodology

The market for software is fast paced, with frequently changing customer needs. To stay competitive, companies must be able to react rapidly to those changing needs. Failing to do so often results in a higher risk of market lock-out, a reduced probability of market dominance, and a product that is less likely to conform to the needs of the market. Consequently, software companies need to take action to remain responsive whenever there is a shift in customers’ needs, meeting the current requirements of the market, whether those requirements are functional or quality related. Two development paradigms emerged in the last decade to address this challenge: agile and lean software development.

The term Lean Software Development was first coined as the title for a conference organized by the ESPRIT initiative of the European Union, in Stuttgart, Germany, in October 1992. Independently, Robert Charette suggested the concept of “Lean Software Development” the following year, in 1993, as part of his work exploring better ways of managing risk in software projects. The term “Lean” dates to 1991, suggested by James Womack, Daniel Jones, and Daniel Roos in their book The Machine That Changed the World: The Story of Lean Production as the English-language term to describe the management approach used at Toyota. The idea that Lean might be applicable in software development was thus established very early, only one to two years after the term was first used in association with trends in manufacturing processes and industrial engineering.

In their second book, published in 1996, Womack and Jones defined five core pillars of Lean Thinking. These were:

  • Value
  • Value Stream
  • Flow
  • Pull
  • Perfection

This became the default working definition for Lean over most of the next decade. The pursuit of perfection, it was suggested, was achieved by eliminating waste. While there were five pillars, it was the fifth, the pursuit of perfection through the systematic identification and elimination of wasteful activities, that really resonated with a wide audience. Lean became almost exclusively associated with the practice of eliminating waste through the late 1990s and the early part of the 21st Century.

The Womack and Jones definition for Lean is not shared universally. The principles of management at Toyota are far more subtle. The single word “waste” in English is described more richly with three Japanese terms:

  • Muda – literally meaning “waste” but implying non-value-added activity
  • Mura – meaning “unevenness” and interpreted as “variability in flow”
  • Muri – meaning “overburdening” or “unreasonableness”

Perfection is pursued through the reduction of non-value-added activity but also through the smoothing of flow and the elimination of overburdening. In addition, the Toyota approach was based in a foundational respect for people and heavily influenced by the teachings of 20th century quality assurance and statistical process control experts such as W. Edwards Deming.

Unfortunately, there are almost as many definitions for Lean as there are authors on the subject.

Lean and Agile

Bob Charette was invited but unable to attend the 2001 meeting at Snowbird, Utah, where the Manifesto for Agile Software Development was authored. Despite missing this historic meeting, Lean Software Development was considered one of several Agile approaches to software development. Jim Highsmith dedicated a chapter of his 2002 book to an interview with Bob about the topic. Later, Mary and Tom Poppendieck went on to author a series of three books. During the first few years of the 21st Century, Lean principles were used to explain why Agile methods were better. Lean explained that Agile methods contained little “waste” and hence produced a better economic outcome. Lean principles were used as a “permission giver” to adopt Agile methods.
Early lean software ideas were developed by the Poppendiecks and by Middleton and Sutton. These books explored how lean thinking could be transferred from manufacturing to the more intangible world and different culture of software engineers. Specific techniques for applying the concept of kanban to software were also developed. Note that the use of these methods is partly a metaphor rather than direct copying. For example, kanban in factories is literally a binary signal to replenish an inventory buffer, based on what the customer has taken away. In software it performs a similar function, but more broadly displays information on the status of the process and potential problems.
Moving upstream and applying lean thinking to influence project selection and definition also creates great benefits.
The proceedings of the first Lean & Kanban Software conference and the work of Shalloway et al. show that adoption is spreading.

Defining Lean Software Development

Defining Lean Software Development is challenging because there is no specific Lean Software Development method or process. Lean is not an equivalent of Personal Software Process, V-Model, Spiral Model, EVO, Feature-Driven Development, Extreme Programming, Scrum, or Test-Driven Development. A software development lifecycle process or a project management process could be said to be “lean” if it was observed to be aligned with the values of the Lean Software Development movement and the principles of Lean Software Development. So those anticipating a simple recipe that can be followed and named Lean Software Development will be disappointed. Individuals must fashion or tailor their own software development process by understanding Lean principles and adopting the core values of Lean.

There are several schools of thought within Lean Software Development. The largest, and arguably leading, school is the Lean Systems Society, which includes Donald Reinertsen, Jim Sutton, Alan Shalloway, Bob Charette, Mary Poppendieck, and David J. Anderson. Mary and Tom Poppendieck’s work developed prior to the formation of the Society and stands separately from its credo, as does the work of Craig Larman, Bas Vodde, and, most recently, Jim Coplien. This section seeks to be broadly representative of the Lean Systems Society viewpoint as expressed in its credo and to provide a synthesis and summary of their ideas.


The Lean Systems Society published its credo at the 2012 Lean Software & Systems Conference. This was based on a set of values published a year earlier. Those values include:
  • Accept the human condition
  • Accept that complexity & uncertainty are natural to knowledge work
  • Work towards a better Economic Outcome
  • While enabling a better Sociological Outcome
  • Seek, embrace & question ideas from a wide range of disciplines
  • A values-based community enhances the speed & depth of positive change

Accept the Human Condition

Knowledge work such as software development is undertaken by human beings. We humans are inherently complex and, while logical thinkers, we are also led by our emotions and some inherent animalistic traits that can’t reasonably be overcome. Our psychology and neuro-psychology must be taken into account when designing systems or processes within which we work. Our social behavior must also be accommodated. Humans are inherently emotional, social, and tribal, and our behavior changes with fatigue and stress. Successful processes will be those that embrace and accommodate the human condition rather than those that try to deny it and assume logical, machine-like behavior.

Accept that Complexity & Uncertainty are Natural to Knowledge Work

The behavior of customers and markets are unpredictable. The flow of work through a process and a collection of workers is unpredictable. Defects and required rework are unpredictable. There is inherent chance or seemingly random behavior at many levels within software development. The purpose, goals, and scope of projects tend to change while they are being delivered. Some of this uncertainty and variability, though initially unknown, is knowable in the sense that it can be studied and quantified and its risks managed, but some variability is unknowable in advance and cannot be adequately anticipated. As a result, systems of Lean Software Development must be able to react to unfolding events, and the system must be able to adapt to changing circumstances. Hence any Lean Software Development process must exist within a framework that permits adaptation (of the process) to unfolding events.

Work towards a better Economic Outcome

Human activities such as Lean Software Development should be focused on producing a better economic outcome. Capitalism is acceptable when it contributes both to the value of the business and the benefit of the customer. Investors and owners of businesses deserve a return on investment. Employees and workers deserve a fair rate of pay for a fair effort in performing the work. Customers deserve a good product or service that delivers on its promised benefits in exchange for a fair price paid. Better economic outcomes will involve delivery of more value to the customer, at lower cost, while managing the capital deployed by the investors or owners in the most effective way possible.

Enable a better Sociological Outcome

Better economic outcomes should not be delivered at the expense of those performing the work. Creating a workplace that respects people by accepting the human condition and provides systems of work that respect the psychological and sociological nature of people is essential. Creating a great place to do great work is a core value of the Lean Software Development community.

Principles of Lean Software Development for Scaling Agile

The Lean Software & Systems community seems to agree on a few principles that underpin Lean Software Development processes. These are the principles of Lean software development for Agile Methodology.
  • Follow a Systems Thinking & Design Approach
  • Emergent Outcomes can be Influenced by Architecting the Context of a Complex Adaptive System
  • Respect People (as part of the system)
  • Use the Scientific Method (to drive improvements)
  • Encourage Leadership
  • Generate Visibility (into work, workflow, and system operation)
  • Reduce Flow Time
  • Reduce Waste to Improve Efficiency

Follow a Systems Thinking & Design Approach

This is often referred to in Lean literature as “optimize the whole,” which implies that it is the output from the entire system (or process) that we desire to optimize, and we shouldn’t mistakenly optimize parts in the hope that it will magically optimize the whole. Most practitioners believe the corollary to be true, that optimizing parts (local optimization) will lead to a suboptimal outcome.

A Lean Systems Thinking and Design Approach requires that we consider the demands on the system made by external stakeholders, such as customers, and the desired outcome required by those stakeholders. We must study the nature of demand and compare it with the capability of our system to deliver. Demand will include so-called “value demand,” for which customers are willing to pay, and “failure demand,” which is typically rework or additional demand caused by a failure in the supply of value demand. Failure demand often takes two forms: rework on previously delivered value demand and additional services or support due to a failure in supplying value demand. In software development, failure demand is typically requests for bug fixes and requests to a customer care or help desk function.

A systems design approach requires that we also follow the Plan-Do-Study-Act (PDSA) approach to process design and improvement. W. Edwards Deming used the words “study” and “capability” to imply that we study the natural philosophy of our system’s behavior. This system consists of our software development process and all the people operating it. It will have an observable behavior in terms of lead time, quality, quantity of features or functions delivered (referred to in Agile literature as “velocity”), and so forth.

Velocity: At the end of each iteration, the team adds up the effort estimates associated with user stories that were completed during that iteration. This total is called velocity.
Knowing velocity, the team can compute (or revise) an estimate of how long the project will take to complete, based on the estimates associated with remaining user stories and assuming that velocity over the remaining iterations will remain approximately the same. This is generally an accurate prediction, even though rarely a precise one.
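The arithmetic behind a velocity-based completion estimate can be sketched in a few lines. The story-point figures below are invented for illustration:

```python
# Sketch: computing velocity and a completion estimate from it.
# All story-point figures here are hypothetical.
import math

completed_points = [21, 18, 23, 20]   # points finished in each past iteration
velocity = sum(completed_points) / len(completed_points)   # average per iteration

remaining_points = 160                # sum of estimates for unfinished stories
iterations_left = math.ceil(remaining_points / velocity)

print(f"velocity ~ {velocity:.1f} points/iteration")
print(f"estimated iterations remaining: {iterations_left}")
```

The ceiling reflects that a partially used iteration is still an iteration; the estimate is accurate only to the extent that future velocity resembles past velocity, which is exactly the assumption stated above.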

“Lead time” is a term borrowed from the manufacturing method known as Lean or Toyota Production System, where it is defined as the time elapsed between a customer placing an order and receiving the product ordered.
Translated to the software domain, lead time can be described more abstractly as the time elapsed between the identification of a requirement and its fulfillment. Defining a more concrete measurement depends on the situation being examined: for instance, when focusing on the software development process, “lead time” is the time elapsed between the formulation of a user story and that story being used “in production”, that is, by actual users under normal conditions.
Teams opting for the kanban approach favor this measure over the better-known velocity. Instead of aiming to increase velocity, improvement initiatives aim to reduce lead time.

These metrics will exhibit variability and, by studying the mean and spread of that variation, individuals can develop an understanding of their capability. If this is mismatched with demand and customer expectations, then the system will need to be redesigned to close the gap.

Deming also taught that capability is 95% influenced by system design, and only 5% is contributed by the performance of individuals. In other words, we can respect people by not blaming them for a gap in capability compared to demand and by redesigning the system to enable them to be successful.

To understand system design, we must have a scientific understanding of the dynamics of system capability and how it might be affected. Models are developed to predict the dynamics of the system. While there are many possible models, several are in common usage: the understanding of economic costs, that is, the so-called transaction and coordination costs that relate to the production of customer-valued products or services; the Theory of Constraints, the understanding of bottlenecks; and the System of Profound Knowledge, the study and recognition of variability as either common to the system design or special and external to the system design.

Emergent Outcomes can be Influenced by Architecting the Context of a Complex Adaptive System

Complex systems have starting conditions and simple rules that, when run iteratively, produce an emergent outcome. Emergent outcomes are difficult or impossible to predict given the starting conditions. The computer science experiment “The Game of Life” is an example of a complex system. A complex adaptive system has within it some self-awareness and an internal method of reflection that enables it to consider how well its current set of rules is enabling it to achieve a desired outcome. The complex adaptive system may then choose to adapt itself, changing its simple rules, to close the gap between the current outcome and the desired outcome. The Game of Life, adapted so that the rules could be rewritten during play, would be a complex adaptive system.

In software development processes, the “simple rules” of complex adaptive systems are the policies that make up the process definition. The core principle here is based in the belief that developing software products and services is not a deterministic activity, and hence a defined process that cannot adapt itself will not be an adequate response to unforeseeable events. Hence, the process designed as part of our system thinking and design approach must be adaptable. It adapts through the modification of the policies of which it is made.

The Kanban approach to Lean Software Development utilizes this concept by treating the policies of the kanban pull system as the “simple rules,” and the starting conditions are that work and workflow is visualized, that flow is managed using an understanding of system dynamics, and that the organization uses a scientific approach to understanding, proposing, and implementing process improvements.

The term “kanban” carries the sense of a sign, poster, or billboard, and derives from roots that literally translate as “visual board”.
Its meaning within the Agile context is borrowed from the Toyota Production System, where it designates a system to control the inventory levels of various parts. It is analogous to (and in fact inspired by) cards placed behind products on supermarket shelves to signal “out of stock” items and trigger a resupply “just in time”.
The Toyota system affords a precise accounting of inventory or “work in process”, and strives for a reduction of inventory levels, considered wasteful and harmful to performance.
The phrase “Kanban method” also refers to an approach to continuous improvement which relies on visualizing the current system of work scheduling, managing “flow” as the primary measure of performance, and whole-system optimization – as a process improvement approach, it does not prescribe any particular practices.
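The “simple rules” of a kanban pull system can be illustrated with a minimal sketch. The column names and work-in-progress (WIP) limits below are assumptions for illustration, not prescribed practice:

```python
# Minimal sketch of a kanban pull system: columns with explicit WIP limits.
# Column names and limits are illustrative assumptions.

class KanbanBoard:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                      # explicit policies: the "simple rules"
        self.columns = {name: [] for name in wip_limits}  # visualized work

    def pull(self, item, column):
        """Pull an item into a column only if its WIP limit allows it."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False   # limit reached: a signal to swarm on existing work, not push more
        self.columns[column].append(item)
        return True

board = KanbanBoard({"analysis": 2, "development": 3, "test": 2})
assert board.pull("story-1", "development")
```

The point of the sketch is that the limits are explicit, visible policy: when a pull is refused, the board itself is signaling a flow problem rather than relying on a manager to notice overburdening.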

Respect People

The Lean community adopts Peter Drucker’s definition of knowledge work that states that workers are knowledge workers if they are more knowledgeable about the work they perform than their bosses. This creates the implication that workers are best placed to make decisions about how to perform work and how to modify processes to improve how work is performed. So the voice of the worker should be respected. Workers should be empowered to self-organize to complete work and achieve desired outcomes. They should also be empowered to suggest and implement process improvement opportunities or “kaizen events” as they are referred to in Lean literature. Making process policies explicit so that workers are aware of the rules that constrain them is another way of respecting them. Clearly defined rules encourage self-organization by removing fear and the need for courage. Respecting people by empowering them and giving them a set of explicitly declared policies holds true with the core value of respecting the human condition.
SAP has been using Scrum and other Agile methodologies for several years at the team level. Herbert Illgner, COO Business Solutions and Technology at SAP, who has been involved with the effort, says that team empowerment and faster feedback cycles with customers are two significant benefits. Illgner added that SAP is expanding the application of Agile methods to the entire product creation process using a Lean framework that includes empowered cross-functional teams, a continuous improvement process, and managers acting as supporters and teachers.

Use the Scientific Method

Seek to use models to understand the dynamics of how work is done and how the system of Lean Software Development is operating. Observe and study the system and its capability, and then develop and apply models for predicting its behavior. Collect quantitative data in the applicable studies, and use that data to understand how the system is performing and to predict how it might change when the process is changed.

The Lean Software & Systems community uses statistical methods such as statistical process control charts and spectral analysis histograms of raw data for lead time and velocity to understand system capability. They also use models such as: the Theory of Constraints to understand bottlenecks; The System of Profound Knowledge to understand variation that is internal to the system design versus that which is externally influenced; and an analysis of economic costs in the form of tasks performed to merely coordinate, set up, deliver, or clean up after customer-valued product or services are created. Some other models are coming into use, such as Real Option Theory, which seeks to apply financial option theory from financial risk management to real-world decision making.
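The statistical analysis described above can be sketched with simple process-behaviour (control) limits on lead-time data. The sample values below are invented, and mean ± 3 sigma is used as a simplified stand-in for a full control chart:

```python
# Sketch: control limits for lead-time data, as a simplified capability study.
# The lead times below are invented sample data; a tracker would supply real ones.
import statistics

lead_times = [4, 6, 5, 9, 7, 5, 8, 6, 5, 7]   # days per work item

mean = statistics.mean(lead_times)
sigma = statistics.pstdev(lead_times)          # population standard deviation

upper = mean + 3 * sigma                       # upper control limit
lower = max(0.0, mean - 3 * sigma)             # lead time cannot be negative

# Points outside the limits suggest special-cause variation worth investigating;
# points inside reflect common-cause variation inherent in the system design.
outliers = [t for t in lead_times if t > upper or t < lower]
```

In Deming's terms, reacting to individual in-limit points is tampering; only redesigning the system shifts the mean and spread of common-cause variation.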

The scientific method suggests: we study; we postulate an outcome based on a model; we perturb the system based on that prediction; and we observe again to see if the perturbation produced the results the model predicted. If it does not, then we check our data and reconsider whether our model is accurate. Using models to drive process improvements makes improvement a scientific activity and elevates it above a superstitious activity based on intuition.

Encourage Leadership

Leadership and management are not the same. Management is the activity of designing processes, creating, modifying, and deleting policy, making strategic and operational decisions, gathering resources, providing finance and facilities, and communicating information about context such as strategy, goals, and desired outcomes. Leadership is about vision, strategy, tactics, courage, innovation, judgment, advocacy, and many more attributes. Leadership can and should come from anyone within an organization. Small acts of leadership from workers will create a cascade of improvements that will deliver the changes needed to create a Lean Software Development process.

Generate Visibility

Knowledge work is invisible. If you can’t see something, it is (almost) impossible to manage it. It is necessary to generate visibility into the work being undertaken and the flow of that work through a network of individuals, skills, and departments until it is complete. It is necessary to create visibility into the process design by finding ways of visualizing the flow of the process and by making the policies of the process explicit for everyone to see and consider. When all of these are visible, then the use of the scientific method is possible, and conversations about potential improvements can be collaborative and objective. Collaborative process improvement is almost impossible if work and workflow are invisible and if process policies are not explicit.

Reduce Flow Time

The software development profession and the academics who study software engineering have traditionally focused on measuring time spent working on an activity. The Lean Software Development community has discovered that it might be more useful to measure the actual elapsed calendar time something takes to be processed. This is typically referred to as Cycle Time and is usually qualified by the boundaries of the activities performed. For example, Cycle Time through Analysis to Ready for Deployment would measure the total elapsed time for a work item, such as a user story, to be analyzed, designed, developed, tested in several ways, and queued ready for deployment to a production environment. (In consultation with the customer or product owner, the team divides the work to be done into functional increments called “user stories”.)

Lead time clock starts when the request is made and ends at delivery. Cycle time clock starts when work begins on the request and ends when the item is ready for delivery. Cycle time is a more mechanical measure of process capability. Lead time is what the customer sees.

Lead time depends on cycle time, but also depends on your willingness to keep a backlog, the customer’s patience, and the customer’s readiness for delivery.
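The two clocks can be made concrete with a small sketch. The dates below are invented; in practice a work tracker would supply the timestamps:

```python
# Sketch: lead time vs cycle time computed from per-item timestamps.
# Dates are invented for illustration.
from datetime import date

requested    = date(2024, 3, 1)   # customer request made: lead-time clock starts
work_started = date(2024, 3, 8)   # team begins work: cycle-time clock starts
delivered    = date(2024, 3, 15)  # item ready for delivery: both clocks stop

lead_time  = (delivered - requested).days     # what the customer experiences
cycle_time = (delivered - work_started).days  # what the process achieves

assert lead_time >= cycle_time   # time waiting in the backlog only adds lead time
```

The gap between the two numbers is exactly the backlog wait described above: it is invisible to cycle time but fully visible to the customer.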

Focusing on the time work takes to flow through the process is important in several ways. Longer cycle times have been shown to correlate with a non-linear growth in bug rates. Hence shorter cycle times lead to higher quality. This is counter-intuitive as it seems ridiculous that bugs could be inserted in code while it is queuing and no human is actually touching it. Traditionally, the software engineering profession and academics who study it have ignored this idle time. However, empirical evidence suggests that cycle time is important to initial quality.

Alan Shalloway has also talked about the concept of “induced work.” His observation is that a lag in performing a task can lead to that task taking a lot more effort than it may have done. For example, a bug found and fixed immediately may only take 20 minutes to fix, but if that bug is triaged, is queued and then waits for several days or weeks to be fixed, it may involve several or many hours to make the fix. Hence, the cycle time delay has “induced” additional work. As this work is avoidable, in Lean terms, it must be seen as “waste.”

The third reason for focusing on cycle time is a business related reason. Every feature, function, or user story has a value. That value may be uncertain but, nevertheless, there is a value. The value may vary over time. The concept of value varying over time can be expressed economically as a market payoff function. When the market payoff function for a work item is understood, even if the function exhibits a spread of values to model uncertainty, it is possible to evaluate a “cost of delay.” The cost of delay allows us to put a value on reducing cycle time.
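A cost-of-delay calculation can be sketched as follows, assuming a hypothetical market payoff function that decays linearly over time (both the function and its figures are invented):

```python
# Sketch: cost of delay from an assumed market payoff function.
# The linearly decaying payoff and the dollar figures are illustrative assumptions.

def weekly_payoff(week):
    """Hypothetical payoff: $10,000 in the launch week, decaying $500 per week."""
    return max(0, 10_000 - 500 * week)

def cost_of_delay(weeks_late, horizon=20):
    """Value lost when delivery slips weeks_late weeks within the payoff horizon."""
    on_time = sum(weekly_payoff(w) for w in range(horizon))
    delayed = sum(weekly_payoff(w) for w in range(weeks_late, horizon))
    return on_time - delayed
```

Under these assumptions, shipping four weeks late forgoes the four richest weeks of the payoff curve, which is what puts a concrete value on reducing cycle time.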

With some work items, the market payoff function does not start until a known date in the future. For example, a feature designed to be used during the 4th of July holiday in the United States has no value prior to that date. Shortening cycle time and being capable of predicting cycle time with some certainty is still useful in such an example. Ideally, we want to start the work so that the feature is delivered “just in time” when it is needed and not significantly prior to the desired date, nor late, as late delivery incurs a cost of delay. Just-in-time delivery ensures that optimal use was made of available resources. Early delivery implies that we might have worked on something else and have, by implication, incurred an opportunity cost of delay.

As a result of these three reasons, Lean Software Development seeks to minimize flow time and to record data that enables predictions about flow time. The objective is to minimize failure demand from bugs, waste from over-burdening due to delay in fixing bugs, and to maximize value delivered by avoiding both cost of delay and opportunity cost of delay.

Reduce Waste to Improve Efficiency

A value stream mapping technique is used to identify waste. The second step is to point out the sources of waste and to eliminate them. Waste removal should take place iteratively, until even seemingly essential processes and procedures are eliminated.

For every value-added activity, there are setup, cleanup, and delivery activities that are necessary but do not add value in their own right. For example, a project iteration that develops an increment of working software requires planning (a setup activity), an environment and perhaps a code branch in version control (collectively known as configuration management and also a setup activity), a release plan and performing the actual release (a delivery activity), a demonstration to the customer (a delivery activity), and perhaps an environment teardown or reconfiguration (a cleanup activity). In economic terms, the setup, cleanup, and delivery activities are transaction costs on performing the value-added work. These costs (or overheads) are considered waste in Lean.

Any form of communication overhead can be considered waste. Meetings to determine project status and to schedule or assign work to team members would be considered a coordination cost in economic language. All coordination costs are waste in Lean thinking. Lean software development methods seek to eliminate or reduce coordination costs through the use of colocation of team members, short face-to-face meetings such as standups, and visual controls such as card walls.

The third common form of waste in Lean Software Development is failure demand. Failure demand is a burden on the system of software development. Failure demand is typically rework or new forms of work generated as a side-effect of poor quality. The most typical forms of failure demand in software development are bugs, production defects, and customer support activities driven out of a failure to use the software as intended. The percentage of work-in-progress that is failure demand is often referred to as Failure Load. The percentage of value-adding work against failure demand is a measure of the efficiency of the system.

The percentage of value-added work against the total work, including all the non-value adding transaction and coordination costs, determines the level of efficiency. A system with no transaction and coordination costs and no failure load would be considered 100% efficient.
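That efficiency measure is simple arithmetic. A sketch with invented effort figures:

```python
# Sketch: efficiency as value-added work over total work.
# All effort figures (hours) are invented for illustration.

value_added  = 120   # hours spent producing customer-valued features
transaction  = 20    # setup, delivery, and cleanup overhead
coordination = 25    # meetings, status reporting, work assignment
failure_load = 35    # rework: bug fixes and support driven by poor quality

total = value_added + transaction + coordination + failure_load
efficiency = value_added / total   # 1.0 only with zero overhead and zero rework

print(f"efficiency: {efficiency:.0%}")
```

Lean improvement attacks the three non-value terms rather than inflating the batch size to amortize them, for the reasons given below.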

Traditionally, Western management science has taught that efficiency can be improved by increasing the batch size of work. Typically, transaction and coordination costs are fixed or rise only slightly with an increase in batch size. As a result, large batches of work are more efficient. This concept is known as “economy of scale.” However, in knowledge work problems, coordination costs tend to rise non-linearly with batch size, while transaction costs can often exhibit a linear growth. As a result, the traditional 20th Century approach to efficiency is not appropriate for knowledge work problems like software development.

It is better to focus on reducing the overheads while keeping batch sizes small in order to improve efficiency. Hence, the Lean way to be efficient is to reduce waste. Lean software development methods focus on fast, cheap, and quick planning methods; low communication overhead; and effective low overhead coordination mechanisms, such as visual controls in kanban systems. They also encourage automated testing and automated deployment to reduce the transaction costs of delivery. Modern tools for minimizing the costs of environment setup and teardown, such as modern version control systems and use of virtualization, also help to improve efficiency of small batches of software development.

Lean Software Development Practices for Agile 

Lean software development is viewed as a set of thinking tools that can easily blend with any agile approach. So, as you can see, lean and agile are deeply intertwined in the software world.
In practice, Agile seems to be changing for the better by adopting Lean thinking in a large way. Rally Development says that its customers get to market 50% faster and are 25% more productive when they employ a hybrid of Lean and Agile development methods.

Lean Software Development does not prescribe practices. It is more important to demonstrate that actual process definitions are aligned with the principles and values. However, a number of practices are being commonly adopted. This section provides a brief overview of some of these.

Continuous learning

Software development is a continuous learning process, with the additional challenges posed by the size of development teams and of the end product. The best approach for improving a software development environment is to amplify learning. The accumulation of defects should be prevented by running tests as soon as the code is written. Instead of adding more documentation or detailed planning, different ideas can be tried by writing code and building. The process of gathering user requirements can be simplified by presenting screens to the end users and getting their input.

The learning process is sped up by the use of short iteration cycles, each coupled with refactoring and integration testing. Refactoring consists of improving the internal structure of an existing program’s source code while preserving its external behavior. The noun “refactoring” also refers to one particular behaviour-preserving transformation, such as “Extract Method” or “Introduce Parameter”. Refactoring does not mean rewriting code, fixing bugs, or improving observable aspects of software such as its interface.

Refactoring in the absence of safeguards against introducing defects (i.e. violating the “behaviour preserving” condition) is risky. Safeguards include aids to regression testing including automated unit tests or automated acceptance tests, and aids to formal reasoning such as type systems.
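As a minimal sketch of the idea, here is an “Extract Method” refactoring guarded by the kind of regression test described above. All function names and figures are hypothetical, invented for this illustration:

```python
def invoice_total_before(items):
    # Original version: totaling and discount logic tangled together.
    total = 0.0
    for price, qty in items:
        total += price * qty
    if total > 100:
        total *= 0.9  # 10% volume discount
    return total

def apply_discount(total):
    # Extracted method: the same discount rule, now named and reusable.
    return total * 0.9 if total > 100 else total

def invoice_total_after(items):
    # Refactored version: clearer structure, identical external behavior.
    subtotal = sum(price * qty for price, qty in items)
    return apply_discount(subtotal)

# Regression test as the safeguard: behavior must be preserved.
items = [(30.0, 2), (25.0, 3)]
assert invoice_total_before(items) == invoice_total_after(items)
```

The assertion encodes the “behaviour preserving” condition: if the extracted version ever diverges from the original, the test fails and the refactoring is rejected.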

Increasing feedback via short feedback sessions with customers helps when determining the current phase of development and adjusting efforts for future improvements. During those short sessions both customer representatives and the development team learn more about the domain problem and figure out possible solutions for further development. Thus the customers better understand their needs, based on the existing result of development efforts, and the developers learn how to better satisfy those needs. Another idea in the communication and learning process with a customer is set-based development – this concentrates on communicating the constraints of the future solution and not the possible solutions, thus promoting the birth of the solution via dialogue with the customer.

Decide as late as possible

As software development is always associated with some uncertainty, better results should be achieved with an options-based approach, delaying decisions as much as possible until they can be made based on facts and not on uncertain assumptions and predictions. The more complex a system is, the more capacity for change should be built into it, thus enabling the delay of important and crucial commitments. The iterative approach promotes this principle – the ability to adapt to changes and correct mistakes, which might be very costly if discovered after the release of the system.

An agile software development approach can move the building of options earlier for customers, thus delaying certain crucial decisions until customers have realized their needs better. This also allows later adaptation to changes and the prevention of costly earlier technology-bounded decisions. This does not mean that no planning should be involved – on the contrary, planning activities should be concentrated on the different options and adapting to the current situation, as well as clarifying confusing situations by establishing patterns for rapid action. Evaluating different options is effective as soon as it is realized that they are not free, but provide the needed flexibility for late decision making.

Deliver as fast as possible

In the era of rapid technology evolution, it is not the biggest that survives, but the fastest. The sooner the end product is delivered without major defects, the sooner feedback can be received, and incorporated into the next iteration. The shorter the iterations, the better the learning and communication within the team. With speed, decisions can be delayed. Speed assures the fulfilling of the customer’s present needs and not what they required yesterday. This gives them the opportunity to delay making up their minds about what they really require until they gain better knowledge. Customers value rapid delivery of a quality product.

The just-in-time production ideology could be applied to software development, recognizing its specific requirements and environment. This is achieved by presenting the needed result and letting the team organize itself and divide the tasks for accomplishing that result for a specific iteration. At the beginning, the customer provides the needed input. This could be simply presented in small cards or stories – the developers estimate the time needed for the implementation of each card. Thus the work organization changes into a self-pulling system – each morning during a stand-up meeting, each member of the team reviews what was done yesterday, what is to be done today and tomorrow, and asks for any inputs needed from colleagues or the customer. This requires transparency of the process, which is also beneficial for team communication.

Another key idea in Toyota’s Product Development System is set-based design. If a new brake system is needed for a car, for example, three teams may design solutions to the same problem. Each team learns about the problem space and designs a potential solution. As a solution is deemed unreasonable, it is cut. At the end of a period, the surviving designs are compared and one is chosen, perhaps with some modifications based on learning from the others – a great example of deferring commitment until the last possible moment. Software decisions could also benefit from this practice to minimize the risk brought on by big up-front design.

Empower the team

There has been a traditional belief in most businesses about decision-making in the organization – the managers tell the workers how to do their own job. In the “Work-Out” technique, the roles are reversed – the managers are taught how to listen to the developers, so they can explain better what actions might be taken, as well as provide suggestions for improvements. The lean approach favors the aphorism “find good people and let them do their own job,” encouraging progress, catching errors, and removing impediments, but not micro-managing.

Another mistaken belief has been the consideration of people as resources. People might be resources from the point of view of a statistical data sheet, but in software development, as in any organizational business, people need something more than just a list of tasks and the assurance that they will not be disturbed while completing them. People need motivation and a higher purpose to work for – a purpose within reachable reality, with the assurance that the team may choose its own commitments. The developers should be given access to the customer; the team leader should provide support and help in difficult situations, as well as ensure that skepticism does not ruin the team’s spirit.

Build integrity in

The customer needs to have an overall experience of the system – this is the so-called perceived integrity: how it is advertised, delivered, deployed, and accessed, how intuitive it is to use, its price, and how well it solves the customer’s problems.

Conceptual integrity means that the system’s separate components work well together as a whole, with balance between flexibility, maintainability, efficiency, and responsiveness. This could be achieved by understanding the problem domain and solving it at the same time, not sequentially. The needed information is received in small batches – not in one vast chunk – preferably through face-to-face communication rather than written documentation. The information flow should be constant in both directions – from customer to developers and back – thus avoiding the large, stressful dump of information that follows long development in isolation.

One of the healthy ways towards integral architecture is refactoring. As more features are added to the original code base, the harder it becomes to add further improvements. Refactoring is about keeping simplicity, clarity, and a minimum number of features in the code. Repetition in the code is a sign of bad design and should be avoided. The complete and automated build process should be accompanied by a complete and automated suite of developer and customer tests, having the same versioning, synchronization, and semantics as the current state of the system. At the end, the integrity should be verified with thorough testing, thus ensuring the system does what the customer expects it to. Automated tests are also considered part of the production process, and therefore if they do not add value they should be considered waste. Automated testing should not be a goal, but rather a means to an end, specifically the reduction of defects.

See the whole

Software systems nowadays are not simply the sum of their parts, but also the product of their interactions. Defects in software tend to accumulate during the development process – by decomposing the big tasks into smaller tasks, and by standardizing different stages of development, the root causes of defects should be found and eliminated. The larger the system, the more organizations that are involved in its development and the more parts are developed by different teams, the greater the importance of having well defined relationships between different vendors, in order to produce a system with smoothly interacting components. During a longer period of development, a stronger subcontractor network is far more beneficial than short-term profit optimizing, which does not enable win-win relationships.

Lean thinking has to be understood well by all members of a project, before implementing in a concrete, real-life situation. “Think big, act small, fail fast; learn rapidly” – these slogans summarize the importance of understanding the field and the suitability of implementing lean principles along the whole software development process. Only when all of the lean principles are implemented together, combined with strong “common sense” with respect to the working environment, is there a basis for success in software development.

Model Storming: 
Agile Modeling’s practices of lightweight initial requirements envisioning, followed by iteration modeling and just-in-time (JIT) model storming, work because they defer commitment about what needs to be built until it is actually needed. They also help eliminate waste because you model only what needs to be built.
Agility by Self-Organization:
It is possible to deliver high-quality systems quickly. By limiting the work of a team to its capacity, which is reflected by the team’s velocity (this is the number of “points” of functionality which a team delivers each iteration), you can establish a reliable and repeatable flow of work. An effective organization doesn’t demand teams do more than they are capable of, but instead asks them to self-organize and determine what they can accomplish. Constraining these teams to delivering potentially shippable solutions on a regular basis motivates them to stay focused on continuously adding value.
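The capacity-limiting idea above can be sketched in a few lines. The velocity history, story sizes, and function names below are invented for illustration, not taken from any particular tool:

```python
def capacity(velocities, window=3):
    """Estimate capacity as the mean velocity of the last `window` iterations."""
    recent = velocities[-window:]
    return sum(recent) / len(recent)

def plan_iteration(backlog_points, velocities):
    """Pull stories, in priority order, until the capacity estimate is reached."""
    limit = capacity(velocities)
    committed, total = [], 0
    for points in backlog_points:
        if total + points > limit:
            break  # the team is at capacity; stop committing
        committed.append(points)
        total += points
    return committed

history = [21, 18, 24]          # points delivered in the last three iterations
backlog = [8, 5, 5, 3, 8, 2]    # estimated story sizes, in priority order
print(plan_iteration(backlog, history))  # → [8, 5, 5, 3]
```

The point of the sketch is the constraint, not the arithmetic: the team commits only to what its demonstrated velocity says it can deliver, rather than what is demanded of it.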

Cumulative Flow Diagrams

Cumulative Flow Diagrams have been a standard part of reporting in Team Foundation Server since 2005. Cumulative flow diagrams plot an area graph of cumulative work items in each state of a workflow. They are rich in information and can be used to derive the mean cycle time between steps in a process as well as the throughput rate (or “velocity”). Different software development lifecycle processes produce different visual signatures on cumulative flow diagrams. Practitioners can learn to recognize patterns of dysfunction in the process displayed in the area graph. A truly Lean process will show evenly distributed areas of color, smoothly rising at a steady pace. The picture will appear smooth without jagged steps or visible blocks of color.

In their most basic form, cumulative flow diagrams are used to visualize the quantity of work-in-progress at any given step in the work item lifecycle. This can be used to detect bottlenecks and observe the effects of “mura” (variability in flow).
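One quantity derivable from such a diagram is the mean cycle time, via Little’s Law (average WIP = throughput × cycle time). The sketch below uses invented daily cumulative counts for a single “in progress” step:

```python
# Cumulative items that have entered and left the "in progress" state,
# sampled once per day (invented figures for illustration).
arrived  = [3, 6, 9, 12, 15, 18, 21]
departed = [0, 2, 5,  8, 11, 14, 17]

# WIP on each day is the vertical gap between the two cumulative lines.
wip_per_day = [a - d for a, d in zip(arrived, departed)]
avg_wip = sum(wip_per_day) / len(wip_per_day)

# Throughput: items completed per day over the observed period.
throughput = departed[-1] / len(departed)

# Little's Law rearranged: cycle time = average WIP / throughput (in days).
cycle_time = avg_wip / throughput
print(round(avg_wip, 2), round(throughput, 2), round(cycle_time, 2))
```

On a real cumulative flow diagram the same reading is done visually: the vertical gap between bands is WIP, the horizontal gap is cycle time, and the slope of the bottom band is throughput.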

Visual Controls

In addition to the use of cumulative flow diagrams, Lean Software Development teams use physical boards, or projections of electronic visualization systems, to visualize work and observe its flow. Such visualizations help team members observe work-in-progress accumulating and enable them to see bottlenecks and the effects of “mura.” Visual controls also enable team members to self-organize to pick work and collaborate together without planning or specific management direction or intervention. These visual controls are often referred to as “card walls” or sometimes (incorrectly) as “kanban boards.”

Virtual Kanban Systems

A kanban system is a practice adopted from Lean manufacturing. It uses a system of physical cards to limit the quantity of work-in-progress at any given stage in the workflow. Such work-in-progress limited systems create a “pull” where new work is started only when there are free kanban indicating that new work can be “pulled” into a particular state and work can progress on it.

In Lean Software Development, the kanban are virtual and often tracked by setting a maximum number for a given step in the workflow of a work item type. In some implementations, electronic systems keep track of the virtual kanban and provide a signal when new work can be started. The signal can be visual or in the form of an alert such as an email.
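A minimal sketch of such a WIP-limited pull system follows; the step names, limits, and class name are hypothetical, not taken from any particular electronic kanban tool:

```python
class KanbanBoard:
    def __init__(self, limits):
        # e.g. {"dev": 2, "test": 1} - maximum items allowed per workflow step
        self.limits = limits
        self.columns = {step: [] for step in limits}

    def can_pull(self, step):
        """A free kanban exists when the column is under its WIP limit."""
        return len(self.columns[step]) < self.limits[step]

    def pull(self, step, item):
        """Pull work into a step only when a free kanban signals capacity."""
        if not self.can_pull(step):
            return False  # no free kanban: the work must wait upstream
        self.columns[step].append(item)
        return True

board = KanbanBoard({"dev": 2, "test": 1})
print(board.pull("dev", "story-1"))   # True
print(board.pull("dev", "story-2"))   # True
print(board.pull("dev", "story-3"))   # False: WIP limit reached
```

The refusal returned by `pull` is the virtual equivalent of there being no free physical card: new work starts only when the system is ready to process it.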

Virtual kanban systems are often combined with visual controls to provide a visual virtual kanban system representing the workflow of one or several work item types. Such systems are often referred to as “kanban boards” or “electronic kanban systems.” A visual virtual kanban system is available as a plug-in for Team Foundation Server, called Visual WIP[20]. This project was developed as open source by Hakan Forss in Sweden.

Small Batch Sizes / Single-piece Flow

Lean Software Development requires that work is either undertaken in small batches, often referred to as “iterations” or “increments,” or that work items flow independently, referred to as “single-piece flow.” Single-piece flow requires a sophisticated configuration management strategy to enable completed work to be delivered while incomplete work is not released accidentally. This is typically achieved using branching strategies in the version control system. A small batch of work would typically be one that can be undertaken by a team of eight people or fewer in under two weeks.

Small batches and single-piece flow require frequent interaction with business owners to replenish the backlog or queue of work. They also require a capability to release frequently. To enable frequent interaction with business people and frequent delivery, it is necessary to shrink the transaction and coordination costs of both activities. A common way to achieve this is the use of automation.


Lean Software Development expects a high level of automation to economically enable single-piece flow and to encourage high quality and the reduction of failure demand. The use of automated testing, automated deployment, and software factories to automate the deployment of design patterns and creation of repetitive low variability sections of source code will all be commonplace in Lean Software Development processes.

Kaizen Events

In Lean literature, the term kaizen means “continuous improvement” and a kaizen event is the act of making a change to a process or tool that hopefully results in an improvement.

The Lean concept of kaizen also has a strong influence on the way Agile is being practiced, filling a gap relating to continuous improvement.

Lean Software Development processes use several different activities to generate kaizen events; the main ones are described below. Each of these activities is designed to stimulate a conversation about problems that adversely affect capability and, consequently, the ability to deliver against demand. The essence of kaizen in knowledge work is that we must provoke conversations about problems across groups of people from different teams and with different skills.

The evolution of Agile is primarily focused on evolving the product toward a better fit with requirements. In Agile, both the product and the requirements are refined as more is known through experience. Kaizen, a continuous improvement method used in Lean, focuses on the development process itself. When Kaizen is practiced in an Agile project, the participants not only suggest ways to improve the fit between the product and the requirements but also offer ways to improve the process being used, something usually not emphasized in Agile methods. Eckfeldt described the use of Kaizen snakes and project thermometers to capture process improvement feedback.

Daily standup meetings

Teams of software developers, often up to 50, typically meet in front of a visual control system such as a whiteboard displaying a visualization of their work-in-progress. They discuss the dynamics of flow and factors affecting the flow of work. Particular focus is given to externally blocked work and work delayed due to bugs. Problems with the process often become evident over a series of standup meetings. The result is that a smaller group may remain after the meeting to discuss the problem and propose a solution or process change. A kaizen event will follow. These spontaneous meetings were often referred to as quality circles in older literature. Such spontaneous meetings are at the heart of a truly kaizen culture. Managers will encourage the emergence of kaizen events after daily standup meetings in order to drive adoption of Lean within their organization.


Retrospectives

Project teams may schedule regular meetings, known as retrospectives, to reflect on recent performance. These are often held after specific project deliverables are complete or after time-boxed increments of development, known as iterations or sprints in Agile software development.

Retrospectives typically use an anecdotal approach to reflection by asking questions like “what went well?”, “what would we do differently?”, and “what should we stop doing?”

Retrospectives typically produce a backlog of suggestions for kaizen events. The team may then prioritize some of these for implementation.

A retrospective is intended to reveal facts or feelings which have measurable effects on the team’s performance, and to construct ideas for improvement based on these observations. It will not be useful if it devolves into a verbal joust, or a whining session.

On the other hand, an effective retrospective requires that each participant feel comfortable speaking up. The facilitator is responsible for creating the conditions of mutual trust; this may require taking into account such factors as hierarchical relationships – the presence of a manager, for instance, may inhibit discussion of performance issues.
Being an all-hands meeting, a retrospective comes at a significant cost in person-hours. Poor execution, either from the usual causes of bad meetings (lack of preparation, tardiness, inattention) or from causes specific to this format (lack of trust and safety, taboo topics), will result in the practice being discredited, even though a vast majority of the Agile community views it as valuable.
An effective retrospective will normally result in decisions, leading to action items; it’s a mistake to have too few (there is always room for improvement) or too many (it would be impractical to address “all” issues in the next iteration). One or two improvement ideas per iteration retrospective may well be enough.
Identical issues coming up at each retrospective, without measurable improvement over time, may signal that the retrospective has become an empty ritual.

Operations Reviews

An operations review is typically larger than a retrospective and includes representatives from a whole value stream. It is common for as many as 12 departments to present objective, quantitative data that show the demand they received and reflect their capability to deliver against the demand. Operations reviews are typically held monthly. The key differences between an operations review and a retrospective are that operations reviews span a wider set of functions, typically span a portfolio of projects and other initiatives, and use objective, quantitative data. Retrospectives, in comparison, tend to be scoped to a single project; involve just a few teams such as analysis, development, and test; and are generally anecdotal in nature.

An operations review will provoke discussions about the dynamics affecting performance between teams. Perhaps one team generates failure demand that is processed by another team? Perhaps that failure demand is disruptive and causes the second team to miss their commitments and fail to deliver against expectations? An operations review provides an opportunity to discuss such issues and propose changes. Operations reviews typically produce a small backlog of potential kaizen events that can be prioritized and scheduled for future implementation.

There is no such thing as a single Lean Software Development process. A process can be said to be Lean if it is clearly aligned with the values and principles of Lean Software Development. Lean Software Development does not prescribe any practices, but some activities have become common. Lean organizations seek to encourage kaizen through visualization of workflow and work-in-progress and through an understanding of the dynamics of flow and the factors (such as bottlenecks, non-instant availability, variability, and waste) that affect it. Process improvements are suggested and justified as ways to reduce sources of variability, eliminate waste, improve flow, or improve value delivery or risk management. As such, Lean Software Development processes are always evolving and uniquely tailored to the organization within which they evolve. It is not realistic to simply copy a process definition from one organization to another and expect it to work in a different context. It is also unlikely that someone returning to an organization after a few weeks or months would find the process in use unchanged. It will always be evolving.

The organization using a Lean software development process could be said to be Lean if it exhibited only small amounts of waste in all three forms (“mura,” “muri,” and “muda”) and could be shown to be optimizing the delivery of value through effective management of risk. The pursuit of perfection in Lean is always a journey. There is no destination. True Lean organizations are always seeking further improvement.

Lean Software Development is still an emerging field, and we can expect it to continue to evolve over the next decade.

Lean software development at BBC WorldWide

The lean ideas behind the Toyota production system can be applied to software project management. This section examines the performance of a nine-person software development team employed by BBC Worldwide, based in London. The data, collected in 2009, involved direct observations of the development team, the kanban boards, the daily stand-up meetings, semi-structured interviews with a wide variety of staff, and statistical analysis. The evidence shows that over the 12-month period, lead time to deliver software improved by 37%, consistency of delivery rose by 47%, and defects reported by customers fell by 24%. The significance of this work is in showing that lean methods, including visual management, team-based problem solving, smaller batch sizes, and statistical process control, can improve software development. It also highlights key differences between agile and lean approaches to software development.
The conclusion is that the performance of the software development team improved by adopting a lean approach. Faster delivery with a focus on creating the highest value for the customer also reduced both technical and market risks.

Lean software development at IMVU Inc.

IMVU Inc. is a virtual company where users meet as personalized avatars in 3D digital rooms. Founded in 2004, IMVU has 25 million registered users, 100,000 registered developers, and has reached $1 million in monthly revenue. Over 90 percent of IMVU’s revenue comes from the direct sale of virtual credits (a form of currency) to users who purchase digital products from its 1.8-million-item digital catalog. IMVU won the 2008 Virtual Worlds Innovation Award and was also named a Rising Star in the 2008 Silicon Valley Technology Fast 50 program. IMVU receives funding from top venture investors Menlo Ventures, Allegis Capital, and Bridgescale Partners. Its offices are located in Palo Alto, CA.

Software Development at IMVU
IMVU’s founders had previously founded a virtual world startup that took three years to build, burned through a great deal of money, and was an abysmal failure after launch. From an engineering perspective, however, it was an amazing success: they built it ahead of schedule, maintained tight quality standards, and solved multiple difficult technical problems. Still, it wasn’t a commercial success, and large amounts of time and money were wasted. As a result, IMVU’s founding team decided to build the minimum viable product and then test it with users, even if the product seemed only half-built (an engineer’s nightmare).
As a result, IMVU was one of the startups that pioneered the “build just a little and get customer feedback” model. This model was only possible because of the application of several lean principles at the technical level in the development process.
Lean Principle #1: Specify Value in the Eyes of the Customer
From the beginning, IMVU’s founders decided they wanted to build a culture of “ship, ship, ship.” From a business perspective, this makes a lot of sense, but from an engineering perspective, it’s like pulling fingernails with a pair of rusty pliers: bugs were all over the place, the product was extremely ugly, and it offered only the most rudimentary features.
In essence, releasing a sub-par product allowed IMVU to avoid over-production waste by putting man-hours only into features their customers liked.
Lean Principle #2: Identify the Value Stream and Eliminate Waste
The IMVU team worked hard to cultivate the “ship, ship, ship” mentality. For example, on their very first day, most developers were expected to write some code and push it into production. Even though it was generally just a small bug fix or a minuscule feature, this “release code on the first day of work” idea seemed revolutionary to most new hires.
Continuous deployment reduced the wastes of over-production, waiting, and processing. In a traditional development process, multiple engineers are busy building multiple features based on the last bit of stable code. When they try to deploy their features after two weeks of work, they find that someone else deployed a different feature the previous day and the two features don’t play well together. Continuous deployment allows engineers to upload their work instantaneously, ensuring that engineers are always working from the same base code. This avoids spending extra weeks making the feature code compatible.
Lean Principle #3: Make Value Flow at the Pull of the Customer
IMVU projects have an eight-week return on investment (ROI) target. Whenever someone suggests a small project, they are asked to provide a general roadmap showing that the project could repay the time investment within eight weeks. Projects are continuously tested on small numbers of IMVU users, who often have no idea they are part of a bucket test. If a project shows success, the team keeps working on it.
After a few weeks, if the numbers show the project has no chance of a positive ROI, it is shut down immediately. Over time, as IMVU matures, this project ROI target is extended.
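The gating rule could be sketched as follows. The weekly cost, revenue ramp, and function name are invented for illustration, not IMVU’s actual figures:

```python
def passes_roi_gate(weekly_cost, projected_weekly_revenue, target_weeks=8):
    """True if projected revenue covers the time investment within the window."""
    cost = weekly_cost * target_weeks
    revenue = sum(projected_weekly_revenue[:target_weeks])
    return revenue >= cost

# Hypothetical project: $6k/week of engineering time, revenue ramping weekly.
ramp = [0, 1000, 2000, 4000, 8000, 10000, 12000, 14000]
print(passes_roi_gate(6000, ramp))  # → True (51,000 >= 48,000)
print(passes_roi_gate(7000, ramp))  # → False (51,000 < 56,000)
```

The value of the rule is less the arithmetic than the forcing function: every small project must state a roadmap that makes this check possible before work begins.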
Lean Principle #4: Involve and Empower Employees
IMVU implemented the 5 Whys process, also known as root cause analysis, to involve and empower its employees during troubleshooting. The 5 Whys process is the technique of asking “why” five times to get to the root cause of a problem when it occurs.
As described in blog posts by Ries, each IMVU engineer has a personal sandbox that mimics production as closely as possible. IMVU has a comprehensive set of unit, acceptance, functional, and performance tests, and practices Test-Driven Development across the whole team. Engineers build a series of test tags and quickly run a subset of tests in their sandboxes. Revisions are required if a test fails. To keep developers on the same code before it passes the various tests, IMVU created the equivalent of a kanban system plus an Andon cord (an automated testing and immediate rollback system). Developers are assigned a single task and are not allowed to move on to the next task until their code not only passes the automated tests but has also successfully deployed. Only then can they pull the next task from the queue. This means that developers have a little idle time while the tests run. It also means that code is fully completed before a developer moves on. As a result, engineering is optimized for productivity rather than activity.

Lean Principle #5: Continuously Improve in Pursuit of Perfection
The problem with all this emphasis on “ship, ship, ship” was that different bugs in the code kept taking the site down. Sometimes it was simply a scaling issue: new upgrades worked fine on an engineer’s computer but crashed when hundreds of thousands of users tried them. Other times, it was a new employee releasing a feature without understanding how the previous code base worked. From a business perspective, it didn’t matter what the problem was; if the site was down, IMVU was losing money.
From a technical perspective, each new problem required a different solution. Solving scaling issues is very different from solving a single infinite-loop problem. The only practical fix was either to cease continuous deployment or to institute automated tests that checked the code, plus allow for immediate code rollbacks if any server started to crash.

Eventually, IMVU architected a series of automated tests that looked at every new code check-in, tested it, and then pushed it onto the live servers. If at any point the code crashed, either during testing or once it started running in the wild, the automated tests instituted a rollback to the last verified good version and sent a polite e-mail back to the engineer saying, in effect: “We’re sorry, but it looks like your code ABC caused a problem at XYZ. Afraid we can’t let your code go live until this is fixed.” As a result, the automated testing caught an amazing number of errors, and IMVU management started pursuing massively high quality expectations.
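The shape of such a check-in gate can be sketched as below. The revision names, the failing-test stub, and the function name are invented; in a real pipeline, `deploy` and `rollback` would call the actual deployment tooling rather than append to a log:

```python
def deploy_gate(revision, tests, deploy, rollback, last_good):
    """Gate a revision on its tests; return whichever revision is live after."""
    if all(test(revision) for test in tests):
        deploy(revision)          # every test passed: push it to the servers
        return revision
    rollback(last_good)           # automated "Andon cord": restore last good
    return last_good

log = []                                  # stand-in for real deploy actions
tests = [lambda rev: rev != "r102"]       # pretend revision r102 fails a test

live = deploy_gate("r101", tests, log.append, log.append, "r100")
print(live)                               # → r101 (deployed)
live = deploy_gate("r102", tests, log.append, log.append, live)
print(live)                               # → r101 (r102 rejected, rolled back)
```

The essential property is that the gate never leaves a failing revision live: the system either advances to a verified revision or returns to the last known good one.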

IMVU successfully implemented lean principles at the technical level in the software development process. The company encountered many common challenges that software companies face: choosing the right product features, long development cycles, and endless testing and debugging. IMVU found solutions by sticking with the basic lean principles, and was able to identify and reduce common wastes in the software development process – specifically, over-production, waiting, processing, and defects.

IMVU clearly demonstrated the importance of lean implementation in the software development process.
The implementation of lean principles cannot turn software development into a production-line environment with scientific methods for each step of the way. However, it can help turn a chaotic, constantly changing process into a much more predictable, fast-moving, and streamlined one. Lean implementation, coupled with brilliant design and a fully engaged, intellectually committed team, can help deliver great software products.
It would seem that the rapid release cycles called for by lean principles can only be effective if there is a comprehensive and rigorous testing environment. An interesting question is whether IMVU’s practices (such as daily online releases) would be applicable to software companies that focus on packaged rather than online products. In that case, the “customer” is a combination of the other developers and the ultimate consumer. IMVU’s experience challenges the conventional wisdom in software development. Can it benefit all software companies striving to deliver the right product, at the right time, and at the right price? Middleton and Sutton believe that the benefits carry across different types of software, yet they also recognize that lean software development is still early in its evolution.

Lean Beyond Agile

In recent years, Lean Software Development has really emerged as its own discipline related to, but not specifically a subset of, the Agile movement. This evolution started with the synthesis of ideas from Lean Product Development and the work of Donald G. Reinertsen, and ideas emerging from the non-Agile world of large-scale system engineering and the writing of James Sutton and Peter Middleton. David J. Anderson also synthesized the work of Eli Goldratt and W. Edwards Deming and developed a focus on flow rather than waste reduction. At the behest of Reinertsen, around 2005, David J. Anderson introduced the use of kanban systems that limit work-in-progress and “pull” new work only when the system is ready to process it. Alan Shalloway added his thoughts on Lean software development in his 2009 book on the topic. Since 2007, the emergence of Lean as a new force in the progress of the software development profession has been focused on improving flow, managing risk, and improving (management) decision making. Kanban has become a major enabler for Lean initiatives in IT-related work. It appears that a focus on flow, rather than a focus on waste elimination, is proving a better catalyst for continuous improvement within knowledge work activities such as software development.

PEARL XV: Beyond Scrum: A Scalable Agile Framework with Continuous Integration using Test Automation and Build for Large Distributed Agile Projects


Scrum is the most popular Agile technique, but it doesn’t scale well. And while Scrum improves the effectiveness of individual teams, productivity gains fall off sharply on large projects with many teams.

Yet Agile methods have been used in very large commercial and open source projects to increase productivity, quality and release frequency. Here are a couple of examples:

  • Facebook incorporates code from 600 developers while delivering two releases per day.
  • The Google Android project utilizes thousands of contributors spread across the world.
  • Flickr reached a level of 10 deployments per day.

Can we learn from these companies and projects? What types of techniques and tools did they use to achieve those results?

This section will cover the problems with Scrum, changes in approach to help scalability and methods for supporting distributed teams and continuous delivery.

Problems With Scrum
Scrum techniques have been very successful in improving the effectiveness of individual development teams, employing concepts like self-directed co-located teams, time-boxed sprints, and regular customer feedback from working software. Yet many organizations have run into obstacles when trying to apply Scrum techniques to large projects. For example, no effective techniques have evolved to coordinate the work of multiple Scrum teams and manage dependencies among them. The “Scrum of Scrums” approach of holding meetings with representatives of every team becomes increasingly time-consuming and unwieldy as the number of teams multiplies.

In part, this is because some of the assumptions underlying Scrum are too restrictive for large organizations, or clash with business requirements. Many groups refuse to be limited to co-located teams because they are already distributed, they want to take advantage of the global market for development talent, or simply because many of their employees work from home several days a week.

Many groups need to share key personnel such as architects, UI designers, and database specialists across many projects, and cannot assign them to a single team. Other companies need to fix bugs and release new functionality more frequently than the 2-8 week cycles typical of Scrum teams. This is particularly true of those providing web- and cloud-based applications, where customers expect a constant flow of enhancements.

This is not to say that Scrum practices have no place in a large development environment. But it is now clear that many organizations need to go “Beyond Scrum” and find new practices to manage distributed contributors and large, complex projects.

Changes in Approach That Help Scalability
The large commercial and open source projects that have successfully scaled Agile typically depart from conventional Scrum practices in several areas.

No Scrum meetings: Sprint planning meetings, retrospectives and Scrum-of-Scrum meetings are time-consuming, usually require that everyone be in one room, and usually don’t do a very good job of coordinating across teams. That’s why large-scale projects find ways to use online collaboration and planning tools to coordinate work within and across teams, with fewer meetings and conference calls.

“Pull,” “continuous flow,” and “publish what’s ready”: Although Scrum practices are far more agile than the waterfall methods they replaced, they still impose a degree of inflexibility. Once a sprint plan is complete, the features to be delivered are fixed, and so is the time frame to deliver them (usually 2-8 weeks).

Scalable projects typically use pull and continuous flow techniques (especially Kanban), so developers are always working on the highest priority tasks, and new functionality can be released as soon as it is ready.

Code review workflows (long used by open source projects) can be used to select contributions from hundreds of contributors and publish what’s ready. By helping organizations scale to more contributors and release more frequently, code review workflows can become a key building block of Scalable Agile.

Diverse contributors: Classic Scrum practices are designed for co-located teams of 8-10 members. But, in reality, large projects need to incorporate work from individual contributors, shared resources (e.g., architects and DBAs), outsourcing companies, and business partners as well as teams. Collaboration tools and code review workflows are central to meshing the work of these diverse contributors.

A Scalable Agile Process Framework
The question, then, is how can we apply the new approaches as a coherent whole?

Instead of each Scrum team having its own backlog, product management (or product owners) maintain a single project-wide backlog, with tasks sorted in priority order.

At any time, contributors can pull tasks from the top of the backlog into their own “Current Work” list. Ideally they will pull the highest-priority task, but if they do not have the necessary skills or resources they can move down the stack to find another high-priority assignment.

This process ensures that the highest-priority tasks are addressed first. If an urgent bug fix or a key customer feature request is placed at the top of the backlog it will receive immediate attention, instead of waiting for the next sprint.

Contributors can be individuals, teams, departments, or even entire organizations like an outsourcing firm or a business partner. There is no expectation or requirement that the tasks be done by Scrum teams of 8-10 members. This allows organizations to call on the talents of all kinds of individuals and companies, and in fact conforms to the reality of most large projects today.

Tasks are then managed using Kanban or lean principles. Typically this means that each person is working on one task at a time (i.e., the team has a work-in-process limit of one task for each person on the team).

Kanban principles ensure that once tasks are started they are completed as quickly as possible, which means that they can be released sooner, and also that other tasks which depend on the first task can be started sooner.
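The single prioritized backlog and the one-task-per-person WIP limit described above can be sketched in a few lines of Python. This is a hypothetical model (the `Backlog` and `Contributor` names are illustrative, not any tool's API): the backlog is a priority queue, and a contributor may only pull new work when its current task is finished.

```python
import heapq

class Backlog:
    """A single project-wide backlog, kept in priority order."""
    def __init__(self):
        self._heap = []  # (priority, name); lower number = higher priority

    def add(self, priority, name):
        heapq.heappush(self._heap, (priority, name))

    def pull(self):
        """Pull the highest-priority task, or None if the backlog is empty."""
        return heapq.heappop(self._heap)[1] if self._heap else None

class Contributor:
    """Any contributor (a person, a team, or a partner organization)
    with a work-in-process limit of one task."""
    def __init__(self, name):
        self.name = name
        self.current = None

    def pull_work(self, backlog):
        if self.current is None:      # enforce the WIP limit
            self.current = backlog.pull()
        return self.current

    def finish(self):
        done, self.current = self.current, None
        return done

# An urgent bug fix placed at the top of the backlog is pulled
# ahead of a lower-priority feature, with no sprint re-planning.
backlog = Backlog()
backlog.add(5, "new reporting feature")
backlog.add(0, "urgent customer bug fix")
team = Contributor("Team A")
team.pull_work(backlog)   # pulls "urgent customer bug fix" first
```

The design choice worth noting is that nothing in the model assumes the contributor is a Scrum team; an outsourcing firm or a single specialist pulls work the same way.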

When tasks are completed, the contributor pulls in on-demand resources to build and test a release with the new code. This provides immediate feedback to the contributors, and allows them to catch and fix bugs right away. It also makes features available faster, because there is no wait for centralized build and test systems.

Finally, once new code submissions have been tested successfully, they can be pulled through a merge process into a staging area or into a final build. This means that a new version of the software can be assembled and released at any time, with whatever bug fixes and enhancements are available at that moment.
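The "assemble a release at any time" idea can be stated as a one-line rule: a release is whatever has already been tested and submitted at the moment of assembly. The sketch below is a hypothetical model in which the staging area holds `(name, tested)` pairs.

```python
def assemble_release(staging_area):
    """'Publish what's ready': a release is assembled at any time from
    whichever contributions have been tested and submitted so far.
    staging_area is a list of (contribution_name, tested) pairs."""
    return [name for name, tested in staging_area if tested]

# A release cut right now simply omits the untested contribution.
release = assemble_release([
    ("fix-login-bug", True),
    ("new-dashboard", False),   # still in test; ships in a later release
    ("faster-search", True),
])
```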

What Does This Accomplish?
How exactly does this Scalable Agile process framework address the shortcomings of Scrum and provide more scalable, responsive development efforts? Here are a few of the advantages:

  • There can be many types of contributors, including (but not limited to) conventional Scrum teams.
  • There is no need to spend time estimating tasks precisely, doing detailed sprint planning, or having long meetings to coordinate assignments across teams. As long as the backlog is maintained in priority order the highest-priority tasks will be addressed first.
  • Once tasks are started they are completed in the least possible time, meaning they can be released faster and dependent tasks can be started sooner.
  • Software quality is better, because test feedback is available as soon as a task is complete. Bugs can be fixed when it is clear what changes caused the problem, and when the code is fresh in the mind of the developer. Also, quality assurance does not become a bottleneck, a situation which often leads organizations to cut corners on testing (leading to yet more quality problems).
  • New versions of the application can be assembled and released at any time according to business demand. With sufficient automation this can be daily or even several times a day.


The earlier section dealt with methods for applying Agile techniques to distributed teams and large projects. The following section addresses tools and techniques for managing distributed teams.

Some of the processes and tools needed to manage the Scalable Agile process framework are addressed in the following paragraphs.

The first of these “building blocks” is support for distributed teams. Large development organizations are almost always distributed because they have (1) business units and business partners in multiple locations, (2) “outsourcing” groups in different countries, (3) decided to take advantage of the global market for development talent, (4) remote employees who work from home.

So how can organizations support distributed teams well enough to reduce the need for face-to-face meetings?

Online Agile Planning
Online tools can replace paper-and-pencil planning exercises and physical whiteboards. This allows team members worldwide to create and maintain an overall project backlog, pull tasks to individual teams and contributors, move tasks through the steps in a Kanban process, and view tasks ready to be pulled into a release.

An online Agile planning tool can be used to manage a central backlog and pull tasks into “current” task buckets for individual teams, and an online card wall can replace the physical variety.

Online planning tools can replace paper plans and physical whiteboards.

Online Collaboration

Development team members can collaborate most easily when they are in the same room, but online tools can provide a close approximation. Such tools include online standup reports, wikis, chat and IM products, and video and teleconferencing systems.

Another type of online tool, an activity stream gives developers real-time visibility into the activities of other team members—activities like code commits, new tickets, comments added to tickets, code reviews and posts on wikis.

An activity stream shows commits, comments, and other events.

Global Code Management

Global collaboration can be undermined if developers need to share large repositories and large files over long distances and performance is slow.

Some of the technologies that tool vendor Perforce uses, like proxies and replication, ensure that files are available immediately in remote locations. These solutions ensure that data is available where needed, without artificial boundaries that impede sharing and collaboration.

Perforce technologies ensure that distributed team members don’t have to wait to get large repositories and files.

Decentralized Code Management
Developers often want highly decentralized code management so they can create their own local test branches and work independent of centralized corporate resources.

Development managers, however, want to maintain control over and visibility into activities at remote locations.

Git Fusion from Perforce answers both needs. Developers can quickly clone their own repositories and work in private Git repositories on their local systems, with easy code sharing between teams and products.

Release managers can make selected directories visible to Git users.

Release managers can model an entire product development effort with Perforce streams and branches, apply access controls, and control how much history and which files are cloned into new Git repositories. As changes are accepted, the enterprise release model guides changes to the right places: older releases, customizations, and parallel development efforts.

When developers commit code to the Perforce repository, the Perforce shared versioning service makes the changes visible to everyone and maintains a strong system of record about the source and nature of all changes.

The earlier paragraphs described the challenges of scaling large and distributed Agile teams, and investigated the tools and strategies that resolve them. Once the problem of scaling Agile development has been addressed, however, pressure shifts to the people and processes tasked with delivering or deploying the resulting product. An Agile workflow is only successful once efficiency is attained in all of its stages. The following paragraphs cover the second building block of Scalable Agile: Continuous Delivery.


ScrumBan is a relatively easy first step for Scrum teams that want to move in the direction of Continuous Delivery.

Teams using ScrumBan work within a time-boxed sprint. But unlike conventional Scrum practices, a work-in-process limit is adopted, so team members are focused on finishing one task at a time. At a certain point in the sprint a “triage” process identifies which tasks can be completed within the time box, and drops the others from the sprint plan. At that point there is a “feature freeze,” and the remainder of the sprint is devoted to completing the tasks specified by the triage process.


ScrumBan represents a step toward Continuous Delivery because it emphasizes completing a small number of tasks as quickly as possible. Development teams avoid the pitfalls involved in pulling out all the stops to deliver the entire sprint plan, regardless of the cost in terms of quality and delays.
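The mid-sprint triage step described above can be sketched as a simple function. This is a hypothetical model, assuming tasks are already in priority order and estimates and capacity are in the same unit (e.g. person-days): tasks that fit in the remaining capacity are kept, and the rest are dropped back to the backlog at the feature freeze.

```python
def triage(tasks, remaining_capacity):
    """ScrumBan-style triage at the feature freeze: walk the sprint plan
    in priority order, keep tasks that still fit in the remaining
    capacity, and drop the rest back to the backlog.

    tasks: list of (name, estimate) pairs in priority order.
    Returns (kept, dropped)."""
    kept, dropped, used = [], [], 0
    for name, estimate in tasks:
        if used + estimate <= remaining_capacity:
            kept.append(name)
            used += estimate
        else:
            dropped.append(name)
    return kept, dropped

# With 6 person-days left, the 5-day task is dropped from the sprint,
# but a smaller lower-priority task that still fits is kept.
kept, dropped = triage(
    [("fix login bug", 3), ("new dashboard", 5), ("update docs", 2)], 6)
```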

On-Demand Merge and Test by Contributor
The conventional software release process creates a huge bottleneck at the test phase. All teams send their contributions to a central QA team, which creates and tests a “release candidate.”

In traditional release processes, the QA lab becomes a serious bottleneck.

In theory this workflow makes very efficient use of the QA team and test systems, however:

  • It takes a long time to run all of the tests.
  • It is hard to debug and troubleshoot many code changes at once, especially if they may be interacting with each other.
  • Errors uncovered during the integration phase may require costly rework by several contributors.
  • The test lab becomes a huge bottleneck near the end of each sprint, causing stress and leading to sloppy testing practices.
  • Releases are delayed until the entire release candidate has been completely tested and debugged.

But what if each team can build and test based on just its own contributions?

There can be a different approach to testing. In this scenario each team and contributor has access to test resources. QA team members act as advisors and facilitators rather than being charged with managing all of the testing themselves. When a development team finishes a set of changes, the team then pulls a copy of the production version onto the test system. Changes can then be merged into this private production version, built and tested locally.
Each team pulls a product version, merges its changes, and performs its own tests.
If testing uncovers problems, these can be solved right away by development, without worrying about interactions with changes from other teams. Multiple teams and contributors can now test and debug contributions independently, and submit them to the central staging area when ready.
Teams submit tested contributions when they are ready; releases can be assembled at any time.
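The per-contributor merge-and-test workflow can be modeled in a few lines. This is a deliberately simplified sketch (not a real version-control API): a "version" is a dict mapping file names to contents, and the team's changes are merged into a private copy of production and tested there before anything is submitted to staging.

```python
def merge_and_test(production, changes, test_suite):
    """Per-contributor workflow: merge this team's changes into a
    private copy of the production version, run the tests locally,
    and return the candidate only if every test passes.

    production, changes: dicts of file name -> content.
    test_suite: callables taking a candidate version, returning bool."""
    candidate = dict(production)      # private copy; production untouched
    candidate.update(changes)         # merge this team's changes only
    if all(test(candidate) for test in test_suite):
        return candidate              # ready to submit to the staging area
    return None                      # fix locally; no other team is blocked

# A team validates its own change against a copy of production.
production = {"app.py": "v1", "lib.py": "v1"}
result = merge_and_test(
    production,
    {"app.py": "v2"},
    [lambda v: v["app.py"] == "v2", lambda v: "lib.py" in v],
)
```

Because each team tests only its own changes against a known-good baseline, a failure points directly at that team's work rather than at an interaction among many simultaneous contributions.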

The advantages of this approach include:

  • QA is no longer a bottleneck—teams test independently, when they are ready.
  • It is easier to debug and troubleshoot problems, because each group is observing only its own changes to a previously tested production version.
  • Releases can be assembled at any time, constructed from whatever contributions have been tested and submitted.

These capabilities are not easy or cheap to implement. They require a considerable investment in automated build and test environments, which for distributed teams must be provided in the cloud. They also require that code merge and management be a fast and easy part of any developer’s daily work. A simple and easy-to-automate merge framework like Perforce Streams  provides merge notifications, merge pathway guidance, and intuitive tools.

In a complicated project consisting of several components, Perforce’s visibility over any part of the project also helps development teams share and reuse code. These teams can quickly adapt to a changing project structure, even if they are working in distributed repositories via Git Fusion. A merge in this environment would never require a complicated action that spans several independent repositories.

These capabilities are an indispensable aspect of Scalable Agile, because they allow very large numbers of teams to contribute to a project without overwhelming build and test resources.

Code Review Workflow
Another major challenge for Continuous Delivery is how to merge a growing number of contributions into production releases. How can you organize the flow of contributions from many sources? How can you decide when to assemble the next release? How do you avoid creating a bottleneck at the point where the contributions come together?

One very useful method is a code review workflow similar to those used in open source projects. In these projects hundreds of contributors might submit code and thousands might test it. Typically a core group of “maintainers” reviews submissions and selects the ones that will be included in the next release.

A code review workflow can be utilized in commercial environments as well. For example, the Assembla ticketing tool includes a merge request feature that allows contributors to submit code changes for review. Designated reviewers can review the submissions, hold online discussions about them, vote for or against accepting them, and make immediate decisions to accept or reject them.

This code review workflow lets organizations manage the code review process and delegate the decision making for accepting contributions and assembling releases, which prevents these activities from becoming bottlenecks.
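The review-and-vote mechanics can be sketched as a small state machine. This is a hypothetical model inspired by merge-request features like the one described above, not the API of Assembla or any other tool: only designated reviewers may vote, any rejection blocks the change, and a quorum of approvals accepts it.

```python
class MergeRequest:
    """Hypothetical model of a code review workflow: designated
    reviewers vote on a submission; one rejection blocks it, and a
    quorum of approvals accepts it for the next release."""
    def __init__(self, change, reviewers, quorum=2):
        self.change = change
        self.reviewers = set(reviewers)
        self.quorum = quorum
        self.approvals, self.rejections = set(), set()

    def vote(self, reviewer, approve):
        if reviewer not in self.reviewers:
            raise ValueError(f"{reviewer} is not a designated reviewer")
        (self.approvals if approve else self.rejections).add(reviewer)

    def status(self):
        if self.rejections:
            return "rejected"
        if len(self.approvals) >= self.quorum:
            return "accepted"
        return "open"

# Two approvals from designated reviewers accept the contribution.
mr = MergeRequest("fix-123", ["ann", "bob", "carol"])
mr.vote("ann", True)
mr.vote("bob", True)
```

Delegating the accept/reject decision to a small reviewer group is exactly what keeps the merge point from becoming a bottleneck as the number of contributors grows.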

A code review workflow allows designated reviewers to vote on which contributions to include in the next release.

Streams for Managing Multiple Versions
Another common challenge among large projects is maintaining multiple releases and managing custom versions for individual customers.

Software vendors, for example, usually need to support several releases of an application at once. Bug fixes might need to be applied to many (but not all) of the supported releases. Enhancements to the current release might be retrofitted to the previous release and added to the upcoming release under development. Similarly, a service provider or enterprise IT department might be maintaining customized versions of an application for different customers or different business units within the enterprise.


It is much easier to navigate these complex scenarios with a tool like Perforce Streams. Perforce Streams not only helps development managers visualize the relationships between releases and versions, but also guides release managers on where and when to apply bug fixes and feature enhancements when they are ready to be merged.

Perforce Streams provide adaptable workflow for teams and promote efficiencies such as code re-use, automated merging, fast context switching, efficient workspace updates, and inherited workspace and branch views. An innovative addition to the Perforce branching and merging toolset, streams eliminate overhead, simplify common processes, and increase agility and scalability. In projects with a large volume of data, the time and performance savings are considerable.

Perforce Streams helps deploy bug fixes and enhancement across multiple releases and custom versions.

The typical perception of Agile development methodologies is that their benefits and promise are reserved for small, co-located teams. However, in the above sections we have seen how many, if not all, of the traditional Agile practices can be improved to the benefit of not only large teams, but large distributed teams as well. Ironically, this scalability has been achieved by employing the very processes and tools that the Agile Manifesto preaches against. However, while the tools enable scalability, they never require the sacrifice of developers’ freedoms or their ability to interact.

In these final paragraphs, tools will again be the focus for providing the solution for scaling one of the essential requirements of any Agile workflow: Continuous Integration with build and test automation. All the ideas addressed above will be reviewed, along with detail on how they fit together to make Agile scalable.

All of the examples provided can be implemented using software from Perforce, Assembla, Git and Jenkins.

Methods for Providing On-Demand Infrastructure

In a large project, the trickiest and costliest problems are found only when individuals put together all the pieces. It pays to find and fix these integration problems as early and often as possible.
Continuous Integration is a set of best practices in software development that supports project integration on a rapid, repeated basis. They are:

  •  Maintain a code repository
  •  Automate the build
  •  Make the build self-testing
  •  Everyone commits to the mainline every day
  •  Every commit (to the mainline) should be built
  •  Keep the build fast
  •  Test in a clone of the production environment
  •  Make it easy to get the latest deliverables
  •  Everyone can see the results of the latest build
  •  Automate deployment

The goal of Continuous Integration is to perform a project-wide integration as often as possible. Striving to achieve this shapes your infrastructure, development practices, and attitudes.
Attitude is important. The entire team must commit to achieving successful integration at all stages of the project. For instance, a development task is not considered to be “done” until the feature appears in the integration build and is proven to work there. That shared commitment should make developers uneasy when they risk divergence by working in isolation for any lengthy period, e.g. when they are using an old build as a stable development base, or when they commit their changes only infrequently. We cannot emphasize enough the importance of frequent integration. It really does reduce project risk.
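The control flow behind the practices above ("every commit is built, the build is self-testing, everyone sees the result") is small enough to sketch. This is a hypothetical model of a CI step, not the API of Jenkins or any other server; the `build`, `test_suite`, and `notify` callables stand in for the real infrastructure.

```python
def ci_on_commit(commit, build, test_suite, notify):
    """Minimal continuous-integration step: every commit to the
    mainline is built, the build is self-testing, and the result is
    published so everyone can see it."""
    artifact = build(commit)                          # automate the build
    passed = all(test(artifact) for test in test_suite)  # self-testing build
    notify(commit, "PASS" if passed else "FAIL")      # visible to everyone
    return passed

# A toy pipeline: "building" a commit and checking the artifact.
results = []
ci_on_commit(
    "abc123",
    build=lambda commit: commit.upper(),
    test_suite=[lambda artifact: artifact == "ABC123"],
    notify=lambda commit, status: results.append((commit, status)),
)
```

In a real setup the same loop is triggered by the version control system on every mainline commit; keeping `build` and `test_suite` fast is what makes running it on every commit practical.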

Continuous Integration Tools in the Cloud
Organizations clearly need to invest in automated build and test processes if they want to scale up and deliver features faster and release more frequently. This investment can be expensive, but manual methods are obviously not scalable. Also, automated build and test processes tend to produce much higher software quality.

And if teams and contributors are highly distributed? Then the build and test tools must be accessible online, in the cloud.

Automated test tools like Jenkins can be integrated into the code review and merge workflows described in the earlier paragraphs. Whenever a contribution is accepted, a new version can be built and a series of automated tests can be run against it. Tools like Jenkins will then provide developers and the QA staff with detailed information on test results. Results from the test tool can even be used to vote to accept or reject contributions, as part of the code review workflow.

Automated test tools can provide detailed information on test results, and even vote to accept or reject contributions.


Managing the impact of on-demand continuous integration is a logistical challenge for the version management service. Perforce addresses this challenge by providing flexible configurations of proxies and replicas to meet a variety of build demands.


The earlier paragraphs covered some of the shortcomings of Scrum, such as the lack of techniques to coordinate teams, assumptions about co-located teams and fixed release cycles that are unrealistic for many organizations, and a tendency to spend too much time in planning and coordination meetings.

Continuous, iterative development is supported regardless of the workflow methodology employed. With Perforce, teams can:

  • Build and confirm their work from a private workspace before submitting code
  • Execute automated builds and tests on specific branches upon check-in
  • Improve software quality and time to market

Popular continuous integration tools, like Electric Commander from Electric Cloud, Parabuild from Viewtier, and Anthill Pro from UrbanCode, all support their own integrations with Perforce.

A Scalable Agile process framework was also outlined, featuring:

  • A single prioritized backlog for all teams, so high-priority tasks always receive immediate attention.
  • Kanban processes with WIP limits, so teams don’t have to spend a lot of time in release planning, and to ensure that individual tasks are completed as quickly as possible.
  • On-demand resources, so each team can build and test its own contributions quickly and avoid making the QA lab a bottleneck.
  • A code review process that allows designated reviewers to accept or reject a large number of code submissions.
  • A “take what’s ready” approach to releases, so organizations can provide new functionality as frequently as required by customer needs and expectations.

The processes and tools that can facilitate Scalable Agile were also discussed. These include online Agile planning tools, online collaboration, global code management, decentralized code management, ScrumBan processes, tools for on-demand merging and testing by contributors, code review workflows, stream-based tools for managing multiple releases and custom versions, and continuous integration tools provided in the cloud.

While the journey to Scalable Agile may be a long one, each of the steps down the path provides immediate benefits. Growing development groups should consider:

  • Implementing ScrumBan, to start moving toward lean methods.
  • Deploying online planning and collaboration tools, to improve the effectiveness of distributed teams and contributors.
  • Deploying advanced code management platforms, to support distributed development and manage multiple releases and versions.
  • Investing in Continuous Integration and on-demand build and test systems.
  • Adjusting the dial on continuous delivery gradually, to allow time for all teams to adjust.