PEARL XXIII : Guidelines for Successful and Effective Retrospectives

PEARL XXIII : Retrospectives are widely regarded as the most indispensable of people-focused agile techniques. Inspection and adaptation lie at the very heart of agility, and retrospectives focus on inspecting and adapting the most valuable asset in a software organization: the team itself. Without pursuing improvement as retrospectives require, true agility is simply not achievable. This section deals with guidelines for successful and effective retrospectives.

Performance can be neither improved nor maintained without exercise. Simply conducting a meeting isn’t enough to be successful, however. Attention must be paid to ensuring teams plan improvements. If a plan to improve is not part of the outcome, it wasn’t actually a Sprint Retrospective.

When done well, retrospectives are often the most beneficial ceremony a team practices. When done poorly, retrospectives can be wasteful and grueling to attend.

Without deliberately maintaining and improving performance, systems trend toward entropy and degrade over time. This is as true of software development teams as it is of professional athletes and expensive sports cars. That’s why Scrum prescribes the Sprint Retrospective, a regularly occurring event focused on the health and performance of the Scrum Team itself. Sprint Retrospectives are meetings in which Scrum Teams reflect on themselves and their work, producing an actionable plan for improving. Sprint Retrospectives are the final event in each Sprint, marking the end of each Sprint cycle. The Sprint Retrospective is an opportunity for the Scrum Team to inspect itself and create a plan for improvements to be enacted during the next Sprint.
The purpose of the Sprint Retrospective is to:

  • Inspect how the last Sprint went with regards to people, relationships, process, and tools;
  • Identify and order the major items that went well and potential improvements; and,
  • Create a plan for implementing improvements to the way the Scrum Team does its work.

Sprint Retrospectives are used by teams to deliberately improve. Effective Sprint Retrospectives are an important ingredient in helping good teams become great and great teams sustain themselves.


Anatomy of a Healthy Sprint Retrospective

Scrum says little about the internal structure of Sprint Retrospectives. Rather than prescribing how the Sprint Retrospective is conducted, Scrum specifies the output of the Sprint Retrospective: improvements the Scrum Team will enact for the next Sprint.

This flexibility has birthed a wide array of tools and techniques specifically designed to conduct retrospectives. Several popular practices are described later in this article, but regardless of the specific technique used, good Sprint Retrospectives have these characteristics:

  • The entire team is engaged
  • Discussion focuses on the team rather than individuals
  • The team’s Definition of Done is visited and hopefully expanded
  • A list of actionable commitments is created
  • The results of the previous Sprint Retrospective are visited
  • The discussion is relevant for all attendees

The entire Scrum Team attends each Sprint Retrospective. Usually, this means the Product Owner and Development Team attend as participants while the Scrum Master facilitates the meeting. In some cases, Scrum Teams invite other participants to the meeting. This can be especially helpful when working closely with customers or other stakeholders. Regardless of who attends, the environment for Sprint Retrospectives must be safe for all participants. This means attendees must be honest and transparent while treating others with respect. Passions can ignite in retrospectives as issues of performance and improvement are discussed; skilled facilitators ensure discussions stay positive and professional, focusing on improvement of the team as a whole. This is not an opportunity for personal criticism or attack.

Increasing the Definition of Done

Development Teams in Scrum use a Definition of Done to note what must be true about their work before it is considered complete. For example, a Development Team may decide that each feature it implements must have at least one passing automated acceptance test. Or the team’s Definition of Done may state that all code must be peer reviewed.

A Development Team’s Definition of Done is meant to expand over time. A newly formed team will invariably have a less stringent and smaller Definition of Done than a more mature team with a shared history of improving. Expanding a team’s Definition of Done lies at the very core of Kaizen, a Japanese term meaning a mindful and constant focus on improvement. While a team may initially require only that code build before being checked in, over time they should evolve more exacting standards like the need for unit tests to accompany new code. With each Sprint, Development Teams hopefully learn something that informs the expansion of the Definition of Done. The Sprint Retrospective is the perfect forum for discussing what was observed and learned during the Sprint and what changes might be made to the Definition of Done as a result. Because not every Product Owner has interest or involvement in internal Development Team practices, some Scrum Teams divide the Sprint Retrospective into two different phases:

  1. Focus on the entire Scrum Team
  2. Focus on the Development Team
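As a concrete illustration of an expanding Definition of Done, here is a minimal Python sketch (not from the article; the field names and checks are illustrative assumptions) that treats the DoD as an explicit, growable checklist:

```python
# Illustrative sketch only: a Definition of Done as an explicit,
# expandable checklist. Field names and checks are assumptions.

def is_done(work_item, definition_of_done):
    """A work item is Done only when every check in the DoD passes."""
    return all(check(work_item) for check in definition_of_done)

# A newly formed team might start with a small Definition of Done...
dod = [lambda item: item.get("builds", False)]

# ...and expand it after a retrospective (Kaizen):
dod.append(lambda item: item.get("unit_tests_pass", False))
dod.append(lambda item: item.get("peer_reviewed", False))

story = {"builds": True, "unit_tests_pass": True, "peer_reviewed": False}
print(is_done(story, dod))  # False: the unreviewed story is not Done
```

The point of the sketch is that expanding the Definition of Done is an additive, visible act the team can perform in the retrospective itself.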

Making Actionable Commitments

Although discussion may diverge and converge during the meeting, no Sprint Retrospective is successful if it doesn’t result in commitments by the team. It is not enough to simply reflect on what happened during the Sprint. The Scrum Team makes actionable commitments for what it will:

  1. Keep doing
  2. Start doing
  3. Stop doing

The word “actionable” is significant. Actionable commitments have clear steps to completion and acceptance criteria, just like a good requirement. An actionable commitment is clearly articulated and understood by the team. When teams first start performing retrospectives, they often find it easier to identify problems than plan what to do about them. Accordingly, the commitments published by the team may look like these:

  • Work in smaller batches
  • Make requirements easier to read
  • Write more unit tests
  • Be more accurate when estimating

These are not commitments; they are either goals or perhaps thinly veiled complaints. These are certainly issues that teams may wish to discuss during the Sprint Retrospective, but a list of actionable commitments looks more like this:

  • Check in code at least twice per day: before lunch and before going home
  • Express new Product Backlog items as User Stories and include acceptance criteria
  • Create a failing automated test that proves a defect exists before fixing it
  • Use Planning Poker during Product Backlog grooming sessions

Commitments made in the previous Sprint Retrospective are visited in each new Sprint Retrospective. This is necessary for retrospectives to retain their meaning and value. Few things are as frustrating as being on a team that continually commits to improving itself without making tangible progress toward doing so. For the Sprint Retrospective to be valuable, team members must be more than present; they must be invested. Collaborating to create actionable commitments engages attendees and invests them in the success of the team.

Keeping it Relevant

Sprint Retrospectives are fundamentally a technique used to reveal the practices and behaviors of the Scrum Team to itself. When a self-organizing system becomes self-aware, it self-corrects and deliberately improves when given the tools to do so.

For retrospectives to be useful, they must be meaningful to the participants. If the focus isn’t on something valued by the participants, benefits will simply not be realized. The team must be allowed to consider and improve in areas it believes are important. Further, if a facilitator or dominant personality is driving the retrospective to a specific conclusion, the team avoids taking responsibility for itself and its work. Topics visited should be relevant for all levels of expertise. For example, there is little value in visiting the fine points of an advanced Test-Driven Development (TDD) scenario if some team members aren’t even familiar with unit tests. The real value may be in deciding to increase the number of tests the team is writing, in getting some training, or in having a team member confident in TDD coach others.
Keep the focus on the Scrum Team, not the individual, and not the broader organization. Focusing holistically allows the team to genuinely see itself as a self-organizing unit, rather than as a loose confederation of individuals. Addressing issues of individual performance is not appropriate during a team retrospective. Not only is personal feedback most appropriately given in private, individual behaviors are not something the team can change together.
Having the team focus on one individual during a Sprint Retrospective is a recipe for disaster and may result in irreparable harm to team members’ trust in each other. For retrospectives to be meaningful, they should focus on issues the team can control. Criticizing a company-wide vacation policy may be gratifying for the complainer looking for a sympathetic ear, but does little to help the team improve. Attention must be paid to those issues the team can affect itself, like the reaction it may choose to a particular policy.

Varying the Techniques

There are numerous techniques for conducting retrospectives. Trying different constructions of the Sprint Retrospective meeting keeps things fresh and interesting. As the primary facilitators for the Scrum Teams, Scrum Masters should at least be familiar with some of the more popular techniques.

There are books about retrospectives and blog articles aplenty to help people get the most from their practice. Some of the most popular are briefly described here.

In the most basic of Sprint Retrospectives, a facilitator simply asks basic questions of the team and facilitates discussion. The facilitator or Scrum Master may use various brainstorming techniques to get the team to answer:

  1. What went well in this Sprint?
  2. What happened in this Sprint that could use improvement?
  3. What will we commit to doing in the next Sprint?

One simple technique to derive these answers has each team member write 2-3 answers to these questions on sticky notes during a 3-5 minute period of silence. Once created, the suggestions are grouped on a wall for all to see before being voted upon. A list of actionable commitments can thereby be derived from the collective wisdom of the team. Most other Sprint Retrospective techniques are variations on this theme and may focus on only one question or stage of this process. In any case, the outcomes are most important and any good technique supports this basic model.
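The grouping-and-voting step can be sketched in a few lines of Python (the themes and vote counts below are illustrative assumptions):

```python
from collections import Counter

# Illustrative sticky-note themes after grouping on the wall; each
# entry represents one dot vote from a team member.
votes = [
    "smaller batches", "smaller batches", "smaller batches",
    "more unit tests", "more unit tests",
    "clearer requirements",
]

# Tally the votes and surface the top themes for commitment-making.
tally = Counter(votes)
for theme, count in tally.most_common(2):
    print(f"{theme}: {count} votes")
```

The top-voted themes then become the raw material for the team’s actionable commitments.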

Reviewing Previous Commitments

In addition to looking ahead to the next Sprint, each Sprint Retrospective should include a review of commitments made in the previous Sprint and a discussion about the team’s success in meeting those commitments. If this discussion isn’t part of each Sprint Retrospective, attendees soon learn their commitments don’t matter, and they’ll stop meeting them.

Further, the right place to review Sprint Retrospective commitments is throughout the Sprint, not just at the end. Once commitments for improvement are made, posting them publicly can help ensure they are considered on a daily basis. Some teams value posting commitments made during Sprint Retrospectives on the wall in a public area as a reminder to everyone what they should be focusing on improving each day.

There are many other techniques for conducting parts or the whole of the Sprint Retrospective. The names of many techniques are listed below and each is worthy of detailed discussion. All of the following are well documented online and in various publications.

Techniques for Sprint Retrospectives

Fishbowl

A fishbowl conversation is a form of dialog that can be used when discussing topics within large groups. Fishbowl conversations are usually used in participatory events like Open Space Technology and Unconferences. The advantage of the fishbowl is that it allows the entire group to participate in a conversation, with several people joining the discussion in turn.

Four to five chairs are arranged in an inner circle. This is the fishbowl. The remaining chairs are arranged in concentric circles outside the fishbowl. A few participants are selected to fill the fishbowl, while the rest of the group sit on the chairs outside the fishbowl. In an open fishbowl, one chair is left empty. In a closed fishbowl, all chairs are filled. The moderator introduces the topic and the participants start discussing it. The audience outside the fishbowl listens in on the discussion.

In an open fishbowl, any member of the audience can, at any time, occupy the empty chair and join the fishbowl. When this happens, an existing member of the fishbowl must voluntarily leave and free a chair. The discussion continues with participants frequently entering and leaving the fishbowl.

Depending on how large the audience is, many audience members can spend some time in the fishbowl and take part in the discussion. When time runs out, the fishbowl is closed and the moderator summarizes the discussion.

An immediate variation of this is to have only two chairs in the central group. When someone in the audience wants to join the two-way conversation, they come forward and tap the shoulder of the person they want to replace, at some point when they are not talking. The tapped speaker must then return to the outer circles, being replaced by the new speaker, who carries on the conversation in their place.

In a closed fishbowl, the initial participants speak for some time. When time runs out, they leave the fishbowl and a new group from the audience enters the fishbowl. This continues until many audience members have spent some time in the fishbowl. Once the final group has concluded, the moderator closes the fishbowl and summarizes the discussion.
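The open-fishbowl protocol can be modeled as a toy Python sketch (the names and circle size are illustrative): the inner circle has a fixed number of chairs, and a newcomer taking the empty chair obliges a current speaker to leave.

```python
from collections import deque

# Toy model of an open fishbowl: three speakers plus one empty chair.
fishbowl = deque(["Ann", "Ben", "Cho"])

def join(person, leaver):
    """An audience member takes the empty chair; a current speaker
    voluntarily leaves, keeping one chair open."""
    fishbowl.append(person)
    fishbowl.remove(leaver)

join("Dia", leaver="Ann")
print(list(fishbowl))  # ['Ben', 'Cho', 'Dia']: Ann is back in the audience
```

The invariant, one chair always free, is what keeps the conversation open to the whole room.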

Mad Sad Glad

  1. Divide the board into three areas labelled:
    • Mad – frustrations, things that have annoyed the team and/or have wasted a lot of time
    • Sad – disappointments, things that have not worked out as well as was hoped
    • Glad – pleasures, things that have made the team happy
  2. Explain the meanings of the headings to the team and encourage them to place stickies with their ideas for each of them under each heading
  3. Wait until everyone has posted all of their ideas
  4. Have the team group similar ideas together
  5. Discuss each grouping as a team identifying any corrective actions


Starfish

  1. Draw a large circle on a whiteboard and divide it into five equal segments
  2. Label each segment ‘Start’, ‘Stop’, ‘Keep Doing’, ‘More Of’, ‘Less Of’
  3. For each segment pose the following questions to the team:
    • What can we start doing that will speed the team’s progress?
    • What can we stop doing that hinders the team’s progress?
    • What can we keep doing that is currently helping the team’s progress?
    • What is currently aiding the team’s progress and we can do more of?
    • What is currently impeding the team’s progress and we can do less of?
  4. Encourage the team to place stickies with ideas in each segment until everyone has posted all of their ideas
  5. Erase the wheel and have the team group similar ideas together. Note that the same idea may have been expressed in opposite segments but these should still be grouped together
  6. Discuss each grouping as a team including any corrective actions

Problem Tree
A problem-solving tree is a great technique for working through problems identified in the retrospective. What you need is some post-it notes, markers and a large wall or whiteboard.

  1. Start with a problem you need to solve that you’ve identified in the retrospective.
  2. Write this on a sticky note, and stick it at the top of the tree.
  3. Now ask participants what can be done to solve the problem.
  4. For each different idea put a sticky note below the first, at the same level.
  5. For each of these nodes do the same and build up a tree structure similar to an organisation chart.
  6. For each idea you put up, ask if it can be done in a single sprint, and if everyone understands what they need to do. If the answer is no, break it down smaller and make another level in the problem solving tree.
  7. Once you have some lower levels that are well understood and easy to implement in a single sprint, dot vote to see which to tackle in the next sprint. Try to only pick one and get it done, rather than lots that go nowhere.
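The decomposition in steps 5 and 6 can be sketched as a simple tree in Python (the problems and ideas below are illustrative); only the leaves, the ideas small enough to act on, are candidates for the dot vote:

```python
# Illustrative problem-solving tree: the root problem at the top,
# ideas below it, decomposed until each leaf fits in a single Sprint.
tree = {
    "Builds are slow": {
        "Parallelize the test suite": {},
        "Improve CI hardware": {
            "Benchmark current build agents": {},
            "Request budget for faster agents": {},
        },
    },
}

def leaves(node):
    """Collect the leaf ideas: the well-understood, sprint-sized actions."""
    result = []
    for name, children in node.items():
        result.extend(leaves(children) if children else [name])
    return result

print(leaves(tree))  # the dot-vote candidates
```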

Sailboat Retrospective


  1. Draw a boat on a white board. Include the following details:
    • Sails or engines  – these represent the things that are pushing the team forward towards their goals
    • Anchors – these represent the things that are impeding the team from reaching their goals
  2. Explain the metaphors to the team and encourage them to place stickies with their ideas for each of them on appropriate area of the drawing
  3. Wait until everyone has posted all of their ideas
  4. Have the team group similar ideas together
  5. Discuss each grouping as a team including any corrective actions going forward

Top 5

Objective:
Expose the most pressing issues in an initially anonymous manner and determine the most effective actions to resolve them.
Length of time:
Approximately 45 minutes depending on the size of the team.
Short Description:
The facilitator asks participants to bring along their top five issues, which are then grouped; in pairs the participants create actions to resolve them before voting on the top actions, which are taken away.
Materials:
Whiteboard or flipchart paper & pens.

  1. Before the retrospective provide participants with a simple Word document template and ask them to identify their top 5 issues (one per template) and for each issue suggest as many solutions as possible. The template is to ensure participants can be as anonymous as possible.
  2. Collect all the print-outs, spread them on the table and ask the team to group relevant issues.
  3. Ask for a title for each group, create a column for each one on a whiteboard (or flip chart sheets stuck to the wall) and place the associated print outs on the floor below.
  4. Get participants to form pairs (preferably with someone they don’t normally work too closely with) and give them three minutes with each column to come up with as many actions as they can and to write them in the column. Pairs are able to refer to the print outs and previous pairs’ actions for inspiration.
  5. After three minutes pairs move on to another column until all are exhausted.
  6. Go through all the actions so all participants are aware of them all.
  7. Give each participant three votes and ask them to choose their favourite actions (can use votes however they wish e.g. 3 on one action).
  8. Identify the most popular actions and ask for volunteers to own them. Make it clear it will be their responsibility to ensure they get completed before the next retrospective (tip: don’t choose too many actions and definitely no more than one action per participant).
  9. As with all retrospective output, the best way to ensure actions get completed is to stick them up on the wall somewhere everyone can see them.

Other Techniques

  • Journey Lines
  • 6 Thinking Hats
  • Appreciative Retrospective
  • Plan of action
  • Race Car
  • The Abyss
  • The Perfection Game
  • The Improvement Game
  • Force Field Analysis
  • Four L’s
  • World Café
  • Emotional Seismograph


Sprint Retrospectives aren’t the Scrum Master’s playground. Newly minted Scrum Masters are sometimes tempted to vary the techniques wildly from Sprint to Sprint. While variety in retrospectives prevents teams falling into a rut, tempering this with some consistency will yield the best results. Teams focusing on actionable outcomes will see the most value from their retrospectives.

Why Retrospectives Don’t Work

Worse than being ineffective or a waste of time, badly run Sprint Retrospectives can be destructive and harmful to the team. For this reason, having a skilled facilitator conduct the meeting is highly recommended, especially when teams are new to the practice. Facilitation is typically the job of the Scrum Master, but for Scrum Masters new to the role, this may not be an area of expertise. It requires more than a working knowledge of Scrum for Sprint Retrospectives to have positive outcomes; it requires facilitation skills and the ability to lead a group away from negative discussion toward positive outcomes.

Common Smells

A common example of a bad retrospective is one that deteriorates into a gripe session. It is much easier to remember what went poorly than to identify things that went well, and a trickle of “improvement suggestions” can easily turn into a torrent of complaints when the facilitator doesn’t redirect the conversation.
Other smells that a Sprint Retrospective isn’t working well include:

  • Considering the retrospective a “post-mortem” or “after-action” report rather than an opportunity to plan for improvement
  • Unengaged attendees
  • Critiquing a single person’s performance
  • No resulting actionable commitments
  • Having no “what we did well” answers; teams need to understand and appreciate their positive as well as negative behaviors and practices

In all of the above situations, it is often easy to trace the root cause of the negativity to a lack of trust and commitment on the part of one or more team members. While there is no silver bullet to address this, Scrum specifically charges the Scrum Master with working toward addressing situations like these.

Although Sprint Retrospectives are powerful and valuable events, they are a commonly discarded element of Scrum. Scrum Teams with recent and regular success tend to rationalize away the need to conduct Sprint Retrospectives. This is rather like a fit person deciding to stop exercising.

The meta-conversation may sound a bit like the following:

Six Months after Introducing Scrum
Developer Dave: Quality is up, bugs are down. Morale is high, manual regression cost is low. Since we are doing so well, we don’t need the Sprint Retrospectives to help us improve anymore.
Boss Bob: That sounds reasonable. Cancelling that meeting will save us time that can be spent on adding more features.
Six Months Later
Boss Bob: Quality has dropped and bugs are increasing. Team members are dissatisfied and much of the regression work is being performed manually.
Developer Dave: It’s because of Scrum. We told you that it wasn’t a silver bullet and it obviously doesn’t work.
Boss Bob: True. I’ll find a methodology consultant to implement a new process.

Obviously, it wasn’t Scrum that failed here. The organization’s decision to omit a key ingredient of Scrum’s success was the catalyst for failure. Unfortunately, this scenario is all too common.

Scrum Teams reaching that most tenuous state of high performance are rare, beautiful, and fragile. Meaningful retrospectives are a significant ingredient in keeping those teams functioning at such high levels. Reflecting upon itself allows the team to self-adjust and achieve even higher levels of performance and product quality. This is the very essence of Kaizen, and core to any real program of improvement.

When retrospectives work, the results are palpable. There is an excitement in the team to try new things. When retrospectives work, these things will inevitably be true:

  • The team achieves measurably higher and higher levels of quality over time
  • Individuals understand their role within the context of the team
  • Actionable commitments are known by all team members

Finally, when Sprint Retrospectives work well, the team grows more focused, productive, and valuable to the organization. Excellent software development teams do not simply appear. They emerge over time and then only by deliberate attention to improvement. Sprint Retrospectives are a key ingredient in that emergence.

Common Pitfalls

  • A retrospective is intended to reveal facts or feelings which have measurable effects on the team’s performance, and to construct ideas for improvement based on these observations. It will not be useful if it devolves into a verbal joust, or a whining session.
  • On the other hand, an effective retrospective requires that each participant feel comfortable speaking up. The facilitator is responsible for creating the conditions of mutual trust; this may require taking into account such factors as hierarchical relationships. The presence of a manager, for instance, may inhibit discussion of performance issues.
  • Being an all-hands meeting, a retrospective comes at a significant cost in person-hours. Poor execution, either from the usual causes of bad meetings (lack of preparation, tardiness, inattention) or from causes specific to this format (lack of trust and safety, taboo topics), will result in the practice being discredited, even though a vast majority of the Agile community views it as valuable.
  • An effective retrospective will normally result in decisions, leading to action items; it’s a mistake to have too few (there is always room for improvement) or too many (it would be impractical to address “all” issues in the next iteration). One or two improvement ideas per iteration retrospective may well be enough.
  • Identical issues coming up at each retrospective, without measurable improvement over time, may signal that the retrospective has become an empty ritual.

Milestone Retrospective

Once a project has been underway for some time, or at the end of the project (in that case, especially when the team is likely to work together again), all of the team’s permanent members (not just the developers) invest from one to three days in a detailed analysis of the project’s significant events.

PEARL IX : Refactoring performed to Sustain Application Development Success in Agile Environments


The term “refactoring” was popularized by Martin Fowler and Kent Beck. It refers to “a change made to the internal structure of software to make it easier to understand and cheaper to modify without altering its actual observable behavior”. In other words, it is a disciplined way to clean up code that minimizes the chances of introducing bugs, enables the code to be evolved slowly over time, and facilitates an iterative and incremental approach to programming and design. Importantly, the underlying objective of refactoring is to give thoughtful consideration to, and improve, some of the essential non-functional attributes of the software. To achieve this, the technique is broadly classified into the following major categories:

1. Code Refactoring (clean-up): It is intended to remove unused code, methods, variables, etc. which are misleading.
2. Code Standard Refactoring: It is done to achieve quality code.
3. Database Refactoring: Just like code refactoring, it is intended to clean up or remove unnecessary and redundant data without changing the architecture.
4. Database Schema and Design Refactoring: This includes enhancing the database schema while retaining only the fields actually required by the application.
5. User-Interface Refactoring: It is intended to change the UI without affecting the underlying functionality.
6. Architecture Refactoring: It is done to achieve modularization at the application level.

Refactoring is actually a simple technique: you make structural changes to the code in small, independent and safe steps, and test the code after each of these steps to ensure that you have not changed the behavior – the code still works the same, but just looks different. Refactoring is intended to fill in short-cuts, eliminate duplication and dead code, and help ensure the design and logic are clear. Further, it is important to understand that, although refactoring shares some common attributes with debugging and optimization, it is actually different because:

  •  Refactoring is not all about fixing any bugs.
  •  Again, optimization is not refactoring at all.
  •  Likewise, revisiting and/or tightening up error handling code is not refactoring.
  •  Adding any defensive code is also not considered to be refactoring.
  •  Importantly, tweaking the code to make it more testable is also not refactoring.
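A minimal sketch of what refactoring is, as opposed to the activities listed above: a small, behavior-preserving structural change, verified by checking that the old and new code agree. The invoice example below is illustrative, not from the article.

```python
# Illustrative behavior-preserving refactoring: name a duplicated
# computation and a magic number, then verify behavior is unchanged.

def invoice_total_before(items):
    total = 0.0
    for price, qty in items:
        total += price * qty
    total += total * 0.2  # magic number: what is 0.2?
    return total

TAX_RATE = 0.2

def subtotal(items):
    return sum(price * qty for price, qty in items)

def invoice_total_after(items):
    s = subtotal(items)
    return s + s * TAX_RATE

# The structure changed; the observable behavior did not.
items = [(10.0, 2), (5.0, 1)]
assert invoice_total_before(items) == invoice_total_after(items) == 30.0
```

Note that the final assertion is exactly the “test after each step” discipline described above: if it fails, the step changed behavior and must be rolled back.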

Re-factoring Activities – Conceptualized
The refactoring process generally consists of a number of distinct activities which are dealt with in chronological order:

  • Firstly, identify where the software should be refactored, i.e. figure out the code smell areas in the software which might increase the risk of failures or bugs.
  • Next, determine what refactoring should be applied to the identified places based on the list identified.
  • Guarantee that the applied refactoring preserves the behavior of the software. This is the crucial step in which, based on the type of software such as real-time, embedded and safety-critical, measures have to be taken to preserve their behavior prior to subjecting them to refactoring.
  • Apply the appropriate refactoring technique.
  • Assess the effect of the refactoring on the quality characteristics of the software, e.g. complexity, understandability and maintainability, and of the process, e.g. productivity, cost and effort.
  • Ensure the requisite consistency is maintained between the refactored program code and other software artifacts.

Refactoring Steps – Application/System Perspective
The points below clearly summarize the important steps to be adhered to when refactoring an application:
1. Firstly, formulate the unit test cases for the application/ system – the unit test cases should be developed in such a way that they test the application behavior and ensure that this behavior remains intact even after every cycle of refactoring.
2. Identify the approach to the task for refactoring – this includes two essential steps:
– Finding the problem – this is about identifying whether there is any code smell situation with the current piece of code and, if yes, identifying what the problem is all about.
– Assess/Decompose the problem – after identifying the potential problem assess it against the risks involved.
3. Design a suitable solution – work out what the resultant state will be after subjecting the code to refactoring. Accordingly, formulate a solution that will be helpful in transitioning the code from the current state to the resultant state.
4. Alter the code – now proceed with refactoring the code without changing the external behavior of the code.
5. Test the refactored code – to ensure that the results and/or behavior are consistent. If the test fails, roll back the changes and repeat the refactoring in a different way.
6. Continue the cycle with the aforementioned steps (1) to (5) until the problematic/current code moves to the resultant state.

Given this intent, refactoring can be adopted as a practice and implemented safely with relative ease, because most modern IDEs (integrated development environments) come equipped with built-in refactoring tools and patterns that can readily be used to refactor application, business-logic, or middle-tier code. However, the situation is different when it comes to refactoring a database: database refactoring is conceptually more difficult than code refactoring, since with code refactoring you only need to maintain the behavioral semantics, whereas with database refactoring you must also maintain information semantics.

Refactoring is the process of clarifying and simplifying the design of existing code, without changing its behavior. Agile teams are maintaining and extending their code a lot from iteration to iteration, and without continuous refactoring, this is hard to do. This is because un-refactored code tends to rot. Rot takes several forms: unhealthy dependencies between classes or packages, bad allocation of class responsibilities, way too many responsibilities per method or class, duplicate code, and many other varieties of confusion and clutter.

Every time we change code without refactoring it, rot worsens and spreads. Code rot frustrates us, costs us time, and unduly shortens the lifespan of useful systems. In an agile context, it can mean the difference between meeting or not meeting an iteration deadline.

Refactoring code ruthlessly prevents rot, keeping the code easy to maintain and extend. This extensibility is the reason to refactor and the measure of its success. But note that it is only “safe” to refactor the code this extensively if we have extensive unit test suites of the kind we get if we work Test-First. Without being able to run those tests after each little step in a refactoring, we run the risk of introducing bugs. If you are doing true Test-Driven Development (TDD), in which the design evolves continuously, then you have no choice about regular refactoring, since that’s how you evolve the design.

Code Hygiene

A popular metaphor for refactoring is cleaning the kitchen as you cook. In any kitchen in which several complex meals are prepared per day for more than a handful of people, you will typically find that cleaning and reorganizing occur continuously. Someone is responsible for keeping the dishes, the pots, the kitchen itself, the food, the refrigerator all clean and organized from moment to moment. Without this, continuous cooking would soon collapse. In your own household, you can see non-trivial effects from postponing even small amounts of dish refactoring: did you ever try to scrape the muck formed by dried Cocoa Crispies out of a bowl? A missed opportunity for 2 seconds worth of rinsing can become 10 minutes of aggressive scraping.

Specific “Refactorings”

Refactorings are the opposite of fiddling endlessly with code; they are precise and finite. Martin Fowler’s definitive book on the subject describes 72 specific “refactorings” by name (e.g., “Extract Method,” which extracts a block of code from one method, and creates a new method for it). Each refactoring converts a section of code (a block, a method, a class) from one of 22 well-understood “smelly” states to a more optimal state. It takes a while to learn to recognize refactoring opportunities, and to implement refactorings properly.
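As a small illustration of one such refactoring, here is “Extract Method” in Python (the invoice functions are invented for the example; Fowler’s own examples are in Java):

```python
# Before: one method mixes formatting with the arithmetic it needs.
def invoice_text(name, items):
    total = 0
    for price in items:
        total = total + price
    return f"Invoice for {name}\nTotal: {total}"

# After "Extract Method": the summation block becomes its own named method,
# and the original method simply calls it.
def calculate_total(items):
    total = 0
    for price in items:
        total = total + price
    return total

def invoice_text_refactored(name, items):
    return f"Invoice for {name}\nTotal: {calculate_total(items)}"
```

Behavior is unchanged; only the allocation of responsibilities improves.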

Refactoring to Patterns

Refactoring does not only occur at low code levels. In his recent book, Refactoring to Patterns, Joshua Kerievsky skillfully makes the case that refactoring is the technique we should use to introduce Gang of Four design patterns into our code. He argues that patterns are often over-used, and often introduced too early into systems. He follows Fowler’s original format of showing and naming specific “refactorings,” recipes for getting your code from point A to point B. Kerievsky’s refactorings are generally higher level than Fowler’s, and often use Fowler’s refactorings as building blocks. Kerievsky also introduces the concept of refactoring “toward” a pattern, describing how many design patterns have several different implementations, or depths of implementation. Sometimes you need more of a pattern than you do at other times, and this book shows you exactly how to get part of the way there, or all of the way there.

The Flow of Refactoring

In a Test-First context, refactoring has the same flow as any other code change. You have your automated tests. You begin the refactoring by making the smallest discrete change you can that will compile, run, and function. Wherever possible, you make such changes by adding to the existing code, in parallel with it. You run the tests. You then make the next small discrete change, and run the tests again. When the refactoring is in place and the tests all run clean, you go back and remove the old smelly parallel code. Once the tests run clean after that, you are done.
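A minimal sketch of that flow, using an invented rename-style refactoring done as a parallel change:

```python
# The old implementation stays in place while a cleaner parallel
# implementation is added alongside it.
def legacy_full_name(person):
    return person["first"] + " " + person["last"]

def full_name(person):  # the replacement, introduced in parallel
    return " ".join([person["first"], person["last"]])

# The automated tests run against both versions after each small change.
def test_full_name():
    p = {"first": "Ada", "last": "Lovelace"}
    assert legacy_full_name(p) == full_name(p) == "Ada Lovelace"

test_full_name()
# Once the tests run clean, callers are switched over and the old
# parallel code (legacy_full_name) is removed.
```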

Refactoring Automation in IDEs

Refactoring is much, much easier to do automatically than it is to do by hand. Fortunately, more and more Integrated Development Environments (IDEs) are building in automated refactoring support. For example, one popular IDE for Java is Eclipse, which includes more auto-refactorings all the time. Another favorite is IntelliJ IDEA, which has historically included even more refactorings. In the .NET world, there are at least two refactoring tool plugins for Visual Studio 2003, and we are told that future versions of Visual Studio will have built-in refactoring support.

To refactor code in Eclipse or IDEA, you select the code you want to refactor, pull down the specific refactoring you need from a menu, and the IDE does the rest of the hard work. You are prompted appropriately by dialog boxes for new names for things that need naming, and for similar input. You can then immediately rerun your tests to make sure that the change didn’t break anything. If anything was broken, you can easily undo the refactoring and investigate.


Add Parameter

A method needs more information from its caller.

Add a parameter for an object that can pass on this information.

Before: Customer.getContact()
After:  Customer.getContact(date)

Inverse of: Remove Parameter

Naming: In IDEs this refactoring is usually done as part of “Change Method Signature”
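A sketch of Add Parameter in Python (the class and the date-keyed contact store are illustrative, not from Fowler’s text):

```python
# Before: get_contact() has no way to know which contact the caller wants.
class Customer:
    def __init__(self, contacts_by_date):
        self._contacts = contacts_by_date

    def get_contact(self):
        # forced to pick arbitrarily; it lacks information from the caller
        return next(iter(self._contacts.values()))

# After "Add Parameter": the caller passes on the extra information.
class CustomerRefactored:
    def __init__(self, contacts_by_date):
        self._contacts = contacts_by_date

    def get_contact(self, date):
        return self._contacts[date]
```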

Refactoring a Database – a Major and Typical Variant of Refactoring
“A database refactoring is a process or act of making simple changes to your database schema that improves its design while retaining both its behavioral and informational semantics.
It includes refactoring either structural aspects of the database such as table and view definitions or functional aspects such as stored procedures and triggers etc. Hence, it can be often thought of as the way to normalize your database schema.”
For a better understanding and appreciation of the concept, let us consider the example of a typical database refactoring technique named Split Column, in which you replace a single table column with two or more other columns. For example, you are working on the PERSON table in your database and discover that the DATE column is being used for two distinct purposes: a) to store the birth date when the person is a customer, and b) to store the hire date when the person is an employee. This becomes a problem if the application must handle a person who is both a customer and an employee. So, before implementing such a new requirement, we need to fix the database schema by replacing the DATE column with equivalent BirthDate and HireDate columns. Importantly, to maintain the behavioral semantics of the database schema we need to update all the supporting source code that accessed the DATE column to work with the two newly introduced columns. Likewise, to maintain the informational semantics we need to write a migration script that loops through the table, determines the appropriate type, and then copies the existing date data into the appropriate column.
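The Split Column steps can be sketched with an in-memory SQLite database; the table layout and migration logic below are illustrative, not a production script:

```python
import sqlite3

# The schema before refactoring: one overloaded DATE column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER, type TEXT, date TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'customer', '1980-05-01')")
conn.execute("INSERT INTO person VALUES (2, 'employee', '2015-09-14')")

# Schema change: introduce the two replacement columns.
conn.execute("ALTER TABLE person ADD COLUMN birth_date TEXT")
conn.execute("ALTER TABLE person ADD COLUMN hire_date TEXT")

# Migration script: loop through the table, determine the appropriate type,
# and copy the existing date data into the appropriate column.
rows = conn.execute("SELECT id, type, date FROM person").fetchall()
for pid, ptype, date in rows:
    column = "birth_date" if ptype == "customer" else "hire_date"
    # column comes from a fixed whitelist above, so interpolation is safe here
    conn.execute(f"UPDATE person SET {column} = ? WHERE id = ?", (date, pid))
conn.commit()
```

A real refactoring would finish by dropping the old DATE column once all supporting source code has been moved to the new columns.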

Classification of Database Refactoring
The database refactoring process is classified into the following major categories:
1. Data quality – the database refactoring process which largely focuses on improving the quality of the data and information that resides within the database. Examples include introducing column constraints and replacing the type code with some boolean values, etc.
2. Structural – as the name implies, this database refactoring process is intended to change the database schema. Examples include renaming a column or splitting a column.
3. Referential Integrity – this is a kind of structural refactoring which is intended to refactor the database to ensure referential integrity. Examples include introducing cascading delete.
4. Architectural – this is a kind of structural refactoring which is intended to refactor one type of database item to another type.
5. Performance – this is a kind of structural refactoring which is aimed at improving the performance of the database. Examples include introducing an alternate index to speed up searches during data selection.
6. Method – a refactoring technique which is intended to change a method (typically a stored procedure, stored function or trigger, etc.) to improve its quality. Examples include renaming a stored procedure to make it easier to reference and understand.
7. Non-Refactoring Transformations – this type of transformation changes the database schema in a way that also changes its semantics. Examples include adding a new column to an existing table.
Why isn’t Database Refactoring Easy?
Generally, database refactoring is presumed to be a difficult and/or complicated task when compared to code refactoring, not just because of the need to give thoughtful consideration to behavioral and informational semantics, but due to a distinct attribute referred to as coupling. Coupling is a measure of the degree of dependency between two entities/items: the more coupling there is between entities/items, the greater the likelihood that a change in one will require a change in another. Hence, coupling is the root cause of most of the issues in database refactoring, i.e. the more things your database is coupled to, the harder it is to refactor. Unfortunately, the majority of relational databases are coupled to a wide variety of things, as listed below:

■ Application source code
■ Source code that facilitates data loading
■ Code that facilitates data extraction
■ Underlying Persistent layers/frameworks that govern the overall application process flow
■ The respective database schema
■ Data migration scripts, etc.

Refactoring Steps – Database Perspective
Generally, the need to refactor the database schema will be identified by an application developer who is trying to implement a new requirement or fix a defect. The application developer describes the required change to the project’s DBA, and then refactoring begins. As part of this exercise, the DBA will typically work through all or a few of the following steps in chronological order:
1. Most importantly, verify whether database refactoring is required or not – this is the first thing that the DBA does, and it is where they will determine whether database refactoring is needed and/or if it is the right one to perform. Now the next important thing is to assess the overall impact of the refactoring.

2. If it is inevitable, choose the most appropriate database refactoring – this important step is about having several choices for implementing new logic and structures within a database and choosing the right one.

3. Deprecate the original schema – this is not a straightforward step, because you cannot simply make an instant change to the database schema while retaining its behavior. Instead, adopt an approach that keeps both the old and the new schema working in parallel for a while, to give the other teams the time they need to refactor and redeploy their applications.
4. Modify the schema – this step is intended to make the requisite changes to the schema and ensure that the necessary logs are also updated accordingly, e.g. database change log which is typically the source code for implementing all database schema changes and update log which contains the source code for future changes to the database schema.
5. Migrate the data – this is the crucial step which involves migrating and/or copying the data from old versions of the schema to the new.
6. Modify all related external programs – this step is intended to ensure that all the programs which access the portion of the database schema that is the subject of refactoring are updated to work with the new version of the database schema.
7. Conduct regression tests – once the changes to the application code and database schema are in place, run the regression test suite to ensure that everything is working correctly.
8. Keep the team informed about the changes made, and version control the work – this is an important step because the database is a shared resource; at a minimum it is shared by the application development team. So, it is the prime responsibility of the DBA to keep the team informed about the changes made to the database. Moreover, since database refactoring invariably involves DDL, change scripts, data migration scripts, data-model-related scripts, test data and its generation code, etc., all these scripts have to be put under configuration management by checking them into a version control system for better versioning, control, and consistency.

Once the database schema has been refactored successfully in the application development sandbox (a technical environment where your software, including both your application code and database schema, are developed and unit tested), the team can go ahead with refactoring the requisite Integration, Test/QA, and Production sandboxes as well, to ensure that the changes introduced are available and uniform across all environments.

Refactor Unit Tests

Unit test the current and rewritten code

Unit tests are tests that verify small sections of the code. Ideally each test is independent, and stubs and drivers are used to get control over the environment. Since refactoring deals with small sections of code, unit tests provide the correct scope.

Refactor code that has no existing unit tests

When you work with very old code, you generally do not have unit tests. So can you just start refactoring? No; first add unit tests to the existing code. After refactoring, these unit tests should still hold. In this way you improve the maintainability of the code as well as its quality. This is a complex task. First you need to find out what the functionality of the code is. Then you need to think of test cases that properly cover that functionality. To discover the functionality, you provide several inputs to the code and observe the outputs. Functional equivalence is proven when the code is input/output conformant to the original code.
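One common way to do this is with characterization tests: probe the legacy code with several inputs, record the outputs, and lock that observed behavior in. The shipping-cost rules below are invented for the sketch:

```python
# Hypothetical legacy code with no tests and undocumented rules.
def legacy_shipping_cost(weight):
    if weight <= 0:
        return 0
    if weight < 5:
        return 7
    return 7 + (weight - 5) * 2

# Observed input/output pairs, recorded by probing the code.
CHARACTERIZATION = {-1: 0, 0: 0, 1: 7, 4: 7, 5: 7, 10: 17}

def is_equivalent(fn):
    # Functional equivalence: the refactored code must be input/output
    # conformant to the original on every recorded case.
    return all(fn(w) == expected for w, expected in CHARACTERIZATION.items())

assert is_equivalent(legacy_shipping_cost)  # legacy code passes its own pins
```

Any rewrite must keep `is_equivalent` true before it can replace the legacy function.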

Refactor to increase the quality of the existing unit tests

You also see code which contains badly designed unit tests. For example, a unit test may verify multiple scenarios at once. Usually this is caused by not properly decoupling the code from its dependencies. This is undesirable because a test must not depend on the state of the environment. A solution is to refactor the code to support substitutable dependencies, which allows the test to use a test stub or mock object. A unit test that verifies three scenarios at once can then be split into three unit tests which test the scenarios separately, and code that reads the system clock directly can be rewritten with a configurable time provider; the test then uses its own time provider and has complete control over the environment.
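A minimal sketch of a substitutable time provider (the `Greeter` class and its rules are invented for illustration):

```python
import datetime

# The code under test accepts an injected time provider instead of
# reading the system clock directly.
class Greeter:
    def __init__(self, now=datetime.datetime.now):
        self._now = now  # substitutable dependency; defaults to real clock

    def greeting(self):
        return "Good morning" if self._now().hour < 12 else "Good afternoon"

# The test supplies its own time provider and has complete control over
# the environment; it no longer depends on the real time of day.
def fixed_clock(hour):
    return lambda: datetime.datetime(2024, 1, 1, hour, 0)

assert Greeter(now=fixed_clock(9)).greeting() == "Good morning"
assert Greeter(now=fixed_clock(15)).greeting() == "Good afternoon"
```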

Every change in the code needs to be tested. Therefore testing is required when refactoring. You test the changes at different levels. Since a small section of code is changed, unit testing seems the most fitting level. But do not forget the business value! Regression testing is of vital importance for the business.

Test-driven development (TDD)

Test-driven development (TDD) is an advanced technique of using automated unit tests to drive the design of software and force decoupling of dependencies. The result of using this practice is a comprehensive suite of unit tests that can be run at any time to provide feedback that the software is still working. This technique is heavily emphasized by those using Agile development methodologies.

The motto of test-driven development is “Red, Green, Refactor.”

  • Red: Create a test and make it fail.
  • Green: Make the test pass by any means necessary.
  • Refactor: Change the code to remove duplication in your project and to improve the design while ensuring that all tests still pass.

The Red/Green/Refactor cycle is repeated very quickly for each new unit of code.
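One such cycle might look like this in Python (the `describe` function is a made-up, fizzbuzz-style unit):

```python
# Red: write the failing test first; describe() does not exist yet, so
# calling test_describe() at this point would raise a NameError.
def test_describe():
    assert describe(3) == "fizz"
    assert describe(4) == "4"

# Green: the simplest code that makes the test pass.
def describe(n):
    if n % 3 == 0:
        return "fizz"
    return str(n)

test_describe()  # green

# Refactor: reshape the code while keeping the test green.
def describe(n):  # replaces the "green" version above
    return "fizz" if n % 3 == 0 else str(n)

test_describe()  # still green
```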

Key Benefits of Re-factoring
From a system/application standpoint, listed below are summaries of the key benefits that can be achieved seamlessly when implementing the refactoring process in a disciplined fashion:

  • Firstly, it improves the overall extensibility of the software.
  • Reduces and optimizes the code maintenance cost.
  • Facilitates highly standardized and organized code.
  • Ensures that the system architecture is improved while retaining the behavior.
  • Guarantees three essential attributes: readability, understandability, and modularity of the code.
  • Ensures constant improvement in the overall quality of the system.

Justifying the refactoring task might be very difficult, but it is not impossible. Here are some tips for justifying the need for refactoring.
1. Future business changes will require less time. Refactoring will not give an immediate return but, in the long run, adding features will be less expensive as the code will become easier to maintain. Before refactoring, the code is fit for machine consumption but after refactoring it is fit for human as well as machine consumption.
2. Bugs will be fixed during refactoring. Hidden bugs or logic buried in complicated, unnecessary loops will be exposed, which might result in fixing some longstanding non-reproducible issues.
3. The current application will have a longer life. Prevention is better than cure. Refactoring can be considered to be a prevention exercise which will help to optimize the structure of the application for future enhancements.
4. There might be performance gains. You cannot promise any apparent or measurable performance gain. But if you are planning to refactor to achieve a performance gain, then you should have measurable counters showing the performance of the current application before you start refactoring; after each change, the performance counters should be recalculated to check the optimization. Refactoring may also reduce the lines of code, making the application less expensive to maintain in the long run. During refactoring of your algorithm, you should follow the DRY (Don’t Repeat Yourself) principle. Any application that has survived for 6 months to 1 year will have ample places to remove duplication of code.
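A small DRY sketch (the report helpers are hypothetical): two functions that each repeated the same currency formatting now share one helper, so a formatting change is made in exactly one place:

```python
# The duplicated formatting logic, extracted once.
def _as_currency(amount):
    return f"${amount:,.2f}"

def sales_line(region, amount):
    return f"{region}: {_as_currency(amount)}"

def refund_line(region, amount):
    return f"{region} refunds: {_as_currency(amount)}"
```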

Developers do not use the full potential of the refactoring tools available on the market. This might be due to a lack of knowledge or the pressure of timelines. During refactoring, these tools are extremely helpful and valuable, as they reduce the chances of introducing an error when making big changes:

  • ReSharper – Visual Studio add-on for .NET
  • Xcode for Objective-C
  • IntelliJ IDEA for Java

Refactoring using the right tools and good software development practices will be a boon for any application’s long life and sustenance. Refactoring is an opportunity to solidify the foundation of an existing application that might have become weaker after adding a lot of changes and enhancements. If you are making changes to the same piece of code for the third time, it means there is some technical debt that you have created and there is a need to refactor this code.

PEARL XI : DevOps Originated from Enterprise System Management and Agile S/W Methodology

PEARL XI : DevOps (a portmanteau of development and operations) is a software development lifecycle approach that stresses communication, collaboration, and integration between software developers and information technology (IT) operations professionals. Many of the ideas (and people) involved in DevOps originated from the Enterprise Systems Management and Agile software development movements.

DevOps - Continuous Value

IT now powers most businesses. The central role  that IT plays translates into huge demands on the IT  staff to develop and deploy new applications and  services at an accelerated pace. To meet this demand, many software development organizations  are applying Lean principles through such approaches as Agile software development. Influenced heavily by Lean methodology, Agile methodology  is based on frequent, customer-focused releases  and strives to eliminate all steps that don’t add value  for the customer. Using Agile methodology, development teams are able to shrink development cycles dramatically and increase application quality.
Unfortunately, the increasing number of software  releases, growing complexity, shrinking deployment time frames, and limited budgets are presenting the  operations staff with unprecedented challenges.  Operations can begin to address these challenges  by learning from software developers and adopting  Lean methodology. That requires re-evaluating current processes, ferreting out sources of waste, and  automating wherever possible.

According to the Lean Enterprise Institute, “The core idea [of Lean] is to maximize customer value while  minimizing waste. Simply, Lean means creating  more value for customers with fewer resources.”

This involves a five-step process for guiding the implementation of Lean techniques:
1. Specify value from the standpoint of the end customer.
2. Identify all the steps in the value stream, eliminating whenever possible those steps that do not create value.
3. Make the value-creating steps occur in tight sequence so the product will flow smoothly toward the customer.
4. As flow is introduced, let customer demand determine the time to market.
5. Once value is specified, value streams are identified, wasted steps are removed, and customer-demand-centric flow is established, begin the process again and continue it until a state of perfection is reached in which perfect value is created with no waste.
Clearly, Lean is not a one-shot proposition. It’s a reiterative process of continuous improvement.
Bridge the DevOps gap
There are obstacles to bringing Lean methodology to operations. One of the primary ones is the cultural difference between development and operations. Developers are usually driven to embrace the latest technologies and methodologies. Agile principles mean that they are aligning more closely with business requirements, and the business has an imperative to move quickly to stay competitive. Consequently, the development team is incentivized to move applications from concept to market as quickly as possible.
The culture of operations is typically cautious and deliberate. Operations staff are incentivized to maintain stability and business continuity. They are well aware of the consequences and high visibility of problems, such as performance slowdowns and outages, caused by improperly handled releases.
As a result, there is a natural clash between the business-driven need for speed on the development side and the conservative inertia on the operations side. Each group has different processes and ways of looking at things.
The result is often called the DevOps gap, and the DevOps movement has arisen out of the need to address this disconnect. DevOps is an approach that looks to bring the benefits of Agile and Lean methodologies into operations, reducing the barriers to delivering more value for the customer and aligning with the business. It stresses the importance of communication, collaboration, and integration between the two groups, and even combining responsibilities.

Today, operations teams find themselves at a critical decision point. They can adopt the spirit of DevOps and strive to close the gap. That requires working more closely with development. It means getting involved earlier in the development cycle instead of waiting for new applications and services to “come over the fence.” Conversely, developers will need to be more involved in application support. The best way to facilitate this change is to follow the development team’s lead in adopting Lean methodology by reducing waste and focusing on customer value.
On the other hand, not closing the gap can have serious repercussions for operations. In frustration, developers may bypass operations entirely and go right to the cloud. This is already occurring in some companies.

Another challenge that operations teams face is in how to take the new intellectual property that the development organizations have built for the business and get it out to customers as quickly as possible, with the least number of errors and at the lowest cost. That requires creating a release process that is fast, efficient, and repeatable. That’s where Lean methodology provides the most value.

DevOps is a response to the interdependence of software development and IT operations. It aims to help an organization rapidly produce software products and services.

A DevOps approach applies agile and lean thinking principles to all stakeholders in an organization who develop, operate, or benefit from the business’s software systems, including customers, suppliers, and partners. By extending lean principles across the entire software supply chain, DevOps capabilities will improve productivity through accelerated customer feedback cycles, unified measurements and collaboration across an enterprise, and reduced overhead, duplication, and rework.

Companies with very frequent releases may require a DevOps awareness or orientation program. Flickr developed a DevOps approach to support a business requirement of ten deployments per day; this daily deployment cycle would be much higher at organizations producing multi-focus or multi-function applications. This is referred to as continuous deployment or continuous delivery  and is frequently associated with the lean startup methodology. Working groups, professional associations and blogs have formed on the topic since 2009.

DevOps aids software application release management for a company by standardizing development environments. Events can be tracked more easily, and documented process control and granular reporting issues can be resolved. Companies with release/deployment automation problems usually have existing automation but want to manage and drive this automation more flexibly, without needing to enter everything manually at the command line. Ideally, this automation can be invoked by non-operations resources in specific non-production environments. Developers are given more control over the environment, and infrastructure staff gain a more application-centric understanding.

Simple processes become clearly articulated using a DevOps approach. The goal is to maximize the predictability, efficiency, security and maintainability of operational processes. This objective is very often supported by automation.

DevOps integration targets product delivery, quality testing, feature development, and maintenance releases in order to improve reliability and security and to achieve faster development and deployment cycles.

The focus of Lean is on delivering value to the customer and doing so as quickly and efficiently as possible. It is flow oriented rather than batch oriented. Its purpose is to smooth the flow of the value stream and make it customer centric.

DevOps incorporates lean thinking and agile methodology as follows:

  • Eliminate any activity that is not necessary for learning what  customers want. This emphasizes fast, continuous iterations  and customer insight with a feedback loop.
  • Eliminate wait times and delays caused by manual processes  and reliance on tribal knowledge.
  • Enable knowledge workers, business analysts, developers, testers, and other domain experts to focus on creative activities (not procedural activities) that help sustain innovation, and avoid expensive and dangerous organization and technology “resets.”
  • Optimize risk management by steering with meaningful  delivery analytics that illuminate validated learning by  reducing uncertainty in ways that can be measured.

The first step for operations in adopting Lean methodology is to understand the big picture. That means not only developing an understanding of the end-to-end release process but also understanding the release process within the overall context of the DevOps plan, build, and run cycle. In this cycle, development plans a new application based on the requirements of the business, builds the application, and then releases it to operations. Operations then assumes responsibility for running the application.
In examining processes, therefore, operations should not only look at the release process itself but also at the process before the release to determine where opportunities lie for closer cooperation between the two groups. For example, operations may see a way for development to improve the staging process for operational production of an application.
Release process management (RPM) solutions are available that enable IT to map out and document the entire application lifecycle process, end to end, from planning through release to retirement. These solutions provide a collaboration platform that can bring operations and development closer together and provide that “big picture” visibility so vital to Lean. They also enable operations to consolidate release processes that are fragmented across spreadsheets, hand-written notes, and various other places.
In examining the release process itself, operations should look for areas to tighten the flow and eliminate unnecessary tasks. The operations group in one company, for example, examined the release process and found that it was re-provisioning the same servers three times when it was only necessary to do so once.
Anything that doesn’t directly contribute to customer value (like unnecessary meetings, approvals, and communication) should be considered for elimination.
Automate for consistency and speed
Manual procedures are major contributors to waste. For example, an existing release process may call for a database administrator (DBA) to update a particular database manually. This manual effort is inefficient and susceptible to errors. It’s also unlikely to be done in a consistent fashion: If there are several DBAs, each one may build a database differently.
Automation eliminates waste as well as a major source of errors. Automation ensures that processes are repeatable and consistently applied, while also ensuring frictionless compliance with corporate policies and external regulations. Deployment automation and configuration management tools can help by automating a wide variety of processes based on best practices. For Lean methodology to really work, processes must be predictable and consistent. That means that simple automation is not enough. The delivery of the entire software stack should be automated. This means that all environment builds — whether in pre- or post-production — should be completely automated. Also, the software deployment process must be completely automated, including code, content, configurations, and whatever else is required.

Automate manual and overhead activities (enabling continuous delivery) such as change propagation and orchestration, traceability, measurement, progress reporting, etc.
By automating the whole software stack, it becomes much easier to ensure compliance with operations and security. This can save vast amounts of time usually wasted waiting on security approval for new application deployments.

It is preferable to automate time-consuming operational policies such as initiating the required change request approvals, configuring performance monitoring, and so on. Mundane manual tasks like these create the most waste.
Before diving into automation, however, it’s essential for operations to map out and fully understand the end-to-end release process. When you use a release process management (RPM) platform to drive the end-to-end process, the team can review the process holistically to uncover sources of waste and determine where to apply automation tools to best streamline the process, eliminate waste, and accelerate delivery.
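As a rough illustration of the repeatable, fail-fast releases this enables, the sketch below models a release pipeline as an ordered list of steps that halts at the first failure. The step names, their order, and the environment check are hypothetical, not the API of any particular RPM or deployment automation tool.

```python
# Minimal sketch of an automated release pipeline (illustrative only).
# Step names and order are assumptions, not any specific vendor's tool.

def run_pipeline(steps, environment):
    """Run each named step in order; stop at the first failure."""
    results = []
    for name, action in steps:
        ok = action(environment)
        results.append((name, ok))
        if not ok:
            break  # fail fast: later steps never run against a bad build
    return results

# Hypothetical steps: each returns True on success.
steps = [
    ("provision", lambda env: True),   # build the environment once, not three times
    ("deploy_code", lambda env: True),
    ("apply_config", lambda env: env in ("staging", "production")),
    ("smoke_test", lambda env: True),
]

results = run_pipeline(steps, "staging")
```

Because every run executes the same scripted steps in the same order, the process stays consistent across environments, which is exactly the predictability Lean requires.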
Measure success and continually improve
Lean is an iterative approach to continuous improvement, and iteration necessitates feedback.

Consequently, operations must establish a means of tracking the impact of adopting Lean methodology. In establishing the feedback metrics, keep in mind that the primary purpose of Lean methodology is not just to smooth and accelerate the release cycle; it’s also to create more value for customers and do it with fewer resources.
Consequently, operations should measure not only the increase in speed of releases but also the impact of the releases on cost and on customer value. For example, did the release result in a spike in the number of service desk incidents? This would not only increase support costs but also would degrade the customer experience. Or did the lack of capacity planning result in over-taxed infrastructure and degrade end-user performance? Here, it’s important to monitor application performance and availability from the customer’s perspective. Customers are not interested in the performance metrics of the individual IT infrastructure components that support a service.
They care about the overall user experience. In particular, how quickly did they complete their transactions end to end?
Application Performance Management (APM) solutions can track and report on a wide variety of metrics, including customer experience. These metrics provide valuable feedback to both the operations and development teams in measuring the impact of Lean implementation and identifying areas that require further attention. With these solutions in place, operations can operate in a mode of continuous improvement.
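The kind of feedback metric described above can be sketched in a few lines: compare post-release incident volume against the pre-release baseline to flag a release that degraded the customer experience. The daily counts and the 1.5× threshold below are illustrative assumptions, not output from any APM product.

```python
# Sketch: flag a release whose post-release incident rate spikes.
# Thresholds and data are illustrative assumptions.

def incident_spike(before, after, factor=1.5):
    """True if mean daily incidents after a release exceed the
    pre-release mean by more than `factor`."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(after) > factor * mean(before)

# Daily service-desk incident counts around a hypothetical release.
week_before = [4, 5, 3, 4, 4]
week_after  = [9, 8, 10, 7, 9]

flagged = incident_spike(week_before, week_after)
```

A spike flagged this way feeds directly back into the next iteration: the release that caused it becomes a candidate for root-cause analysis.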

Use meaningful measurement and monitoring of progress (enabling continuous optimization) for improved visibility across the organization, including the software value delivery supply chain.

IBM DevOps Platform

IBM provides an open, standards-based DevOps platform that supports a continuous innovation, feedback and improvement lifecycle, enabling a business to plan, track, manage, and automate all aspects of continuously delivering business ideas. At the same time, the business is able to manage both existing and new workloads in enterprise-class systems and open the door to innovation with cloud and mobile solutions. This capability includes an iterative set of quality checks and verification phases that each product or piece of application code must pass before release to customers. The IBM solution provides a continuous feedback loop for all aspects of the delivery process (e.g., customer experience and sentiments, quality metrics, service level agreements, and environment data) and enables continuous testing of ideas and capabilities with end users in a customer facing environment.
IBM’s DevOps solution consists of the open, standards-based platform, DevOps Foundation services, with end-to-end DevOps lifecycle capabilities. To accommodate varying levels of maturity within an IT team’s delivery processes, it defines four adoption paths: plan and measure, develop and test, release and deploy, and monitor and optimize.

Plan and measure: This adoption path consists of one major practice:
Continuous business planning: Continuous business planning employs lean principles to start small: identify the outcomes and resources needed to test the business vision and value, adapt and adjust continually, measure actual progress, learn what customers really want, and shift direction with agility, updating the plan accordingly.

Develop and test: This adoption path consists of two major practices:
Collaborative development: Collaborative development enables collaboration between business, development, and QA organizations—including contractors and vendors in outsourced projects spread across time zones—to deliver innovative, quality software continuously. This includes support for polyglot programming and multiplatform development, elaboration of ideas, and creation of user stories complete with cross-team change and lifecycle management.
Collaborative development includes the practice of continuous integration, which promotes frequent team integrations and automatic builds. By integrating the system more frequently, integration issues are identified earlier when they are easier to fix, and the overall integration effort is reduced via continuous feedback as the project shows constant and demonstrable progress.
Continuous testing: Continuous testing reduces the cost of testing while helping development teams balance quality and speed. It eliminates testing bottlenecks through virtualized dependent services, and it simplifies the creation of virtualized test environments that can be easily deployed, shared, and updated as systems change. These capabilities reduce the cost of provisioning and maintaining test environments and shorten test cycle times by allowing integration testing earlier in the lifecycle.
Release and deploy: This adoption path consists of one major practice:
Continuous release and deployment: Continuous release and deployment provides a continuous delivery pipeline that automates deployments to test and production environments.
It reduces the amount of manual labor, resource wait-time, and rework by means of push-button deployments that allow higher frequency of releases, reduced errors, and end-to-end transparency for compliance.
Monitor and optimize: This adoption path consists of two major practices:
Continuous monitoring: Continuous monitoring offers enterprise-class, easy-to-use reporting that helps developers and testers understand the performance and availability of their application, even before it is deployed to production.
The early feedback provided by continuous monitoring is vital for lowering the cost of errors and change, and for steering projects toward successful completion.

Continuous customer feedback and optimization:
Continuous customer feedback provides the visual evidence and full context for analyzing customer behavior and pinpointing customer pain points. Feedback can be applied during
both pre- and post-production phases to maximize the value of every customer visit and ensure that more transactions are completed successfully. This allows immediate visibility into the sources of customer struggles that affect their behavior and impact business.

Benefits of the IBM DevOps solution

By adopting this solution, organizations can unlock new business opportunities:

  • Deliver a differentiated and engaging customer experience that builds customer loyalty and increases market share by continuously obtaining and responding to customer feedback
  • Obtain fast-mover advantage to capture markets with quicker time to value based on software-based innovation, with improved predictability and success
  • Increase capacity to innovate by reducing waste and rework in order to shift resources to higher-value activities

Keep up with the future
By adopting Lean methodology, operations teams can catch up with and even get ahead of the large and rapidly increasing amount of new and updated services flowing from Agile-accelerated development teams. And they can do so without increasing costs or jeopardizing stability and business continuity.
In so doing, operations can help increase customer value, which has a direct effect on revenue, competitiveness, and the brand. Moreover, the operations team will have the metrics to demonstrate its contribution to the business. That enables the team to transform its image in the organization from software-release speed barrier to high-velocity enabler.

Traditional approaches to software development and delivery are no longer sufficient. Manual processes are error prone, break down, and create waste and delayed response.
Businesses can’t afford to focus on cost while neglecting speed of delivery, or choose speed over managing risk. A DevOps approach offers a powerful solution to these challenges.
DevOps reduces time to customer feedback, increases quality, reduces risk and cost, and unifies process, culture, and tools across the end-to-end lifecycle—which includes adoption paths to plan and measure, develop and test, release and deploy, and monitor and optimize.

PEARL XIX : Effective Steps to reduce technical debt: An agile approach

In every codebase there are dark corners and alleys that developers fear: code that’s impossibly brittle; code that bites back with regression bugs; code that, when you attempt to follow it, leads you into chaos.

Ward Cunningham created a beautiful metaphor for the hard-to-change, error-prone parts of code when he likened it to financial debt. Technical debt prevents you from moving forward, from profiting, from staying “in the black.” As in the real world, there’s cheap debt, debt with an interest lower than you can make in a low-risk financial instrument. Then there’s the expensive stuff, the high-interest credit card fees that pile on even more debt.

The impact of accumulated technical debt can be decreased efficiency, increased cost, and extended delays in the maintenance of existing systems. This can directly jeopardize operations, undermining the stability and reliability of the business over time. It can also stymie the ability to innovate and grow.

DB Systel, a subsidiary of Deutsche Bahn, is one of Germany’s leading information technology and communications providers, running approximately 500 high-availability business systems for its customers. In order to keep this complex environment—a mix of packaged and in-house–developed systems that range from mainframe to mobile—running efficiently while continuing to address the needs of its customers, DB Systel decided to embed processes and tools within its development and maintenance activities to actively address its technical debt.

DB Systel’s software developers have employed new tools during development so they can detect and correct errors more efficiently. Using a software analysis and measurement platform from CAST, DB Systel has been able to uncover architectural hot spots and transactions in its core systems that carry significant structural risk. DB Systel is now better able to track the nonfunctional quality characteristics of its systems and precisely measure changes in architecture- and code-level technical debt within these applications to prioritize the areas with highest impact.

By implementing this strategy at the architecture level, DB Systel has seen a reduction in time spent on error detection and an increased focus on leading-practice development techniques. The company also noticed a rise in employees’ intrinsic motivation as a result of using CAST. With an effective technical debt management process in place, DB Systel is mitigating the possibility of software deterioration while also enriching application quality.

Technical debt is a drag. It can kill productivity, making maintenance annoying, difficult, or, in some cases, impossible. Beyond the obvious economic downside, there’s a real psychological cost to technical debt. No developer enjoys sitting down to his computer in the morning knowing he’s about to face impossibly brittle, complicated source code. The frustration and helplessness thus engendered is often a root cause of more systemic problems, such as developer turnover— just one of the real economic costs of technical debt.

However, the consequences of failing to identify and measure technical debt can be significant. An application with a lot of technical debt may not be able to fulfill its business purpose and may never reach production. Or technical debt may require weeks or months of remedial refactoring before the application emerges into production. At best, it could reach production, but be limited in its ability to meet users’ needs.

Every codebase contains some measure of technical debt. One class of debt is fairly harmless: byzantine dependencies among bizarrely named types in stable, rarely modified recesses of the system. Another is sloppy code that is easily fixed on the spot but often ignored in the rush to address higher-priority problems. There are many more examples.

This section outlines a general workflow and several tactics for dealing with high-interest debt.

In order to fix technical debt, the team needs to cultivate buy-in from stakeholders and teammates alike. To do this, they need to start thinking systemically. Systems thinking is long-range thinking. It is investment thinking: the idea that effort you put in today will let you progress at a predictable and sustained pace in the future.

Technical debt (also known as design debt or code debt) is a neologism metaphor referring to the eventual consequences of poor software architecture and software development within a code-base. The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy.

As a change is started on a codebase, there is often the need to make other coordinated changes at the same time in other parts of the codebase or documentation. The other required, but uncompleted changes, are considered debt that must be paid at some point in the future. Just like financial debt, these uncompleted changes incur interest on top of interest, making it cumbersome to build a project. Although the term is used in software development primarily, it can also be applied to other professions.

Common causes of technical debt include (a combination of):

  • Business pressures, where the business pushes to release something before all of the necessary changes are complete, building up technical debt comprising those uncompleted changes
  • Lack of process or understanding, where businesses are blind to the concept of technical debt, and make decisions without considering the implications
  • Lack of loosely coupled components, where functions are hard-coded; when business needs change, the software is inflexible.
  • Lack of test suite, which encourages quick and risky band-aids to fix bugs.
  • Lack of documentation, where code is created without necessary supporting documentation. The work to create that documentation represents a debt that must be paid.
  • Lack of collaboration, where knowledge isn’t shared around the organization and business efficiency suffers, or junior developers are not properly mentored
  • Parallel development at the same time on two or more branches can cause the build up of technical debt because of the work that will eventually be required to merge the changes into a single source base. The more changes that are done in isolation, the more debt that is piled up.
  • Delayed refactoring – As the requirements for a project evolve, it may become clear that parts of the code have become unwieldy and must be refactored in order to support future requirements. The longer that refactoring is delayed, and the more code is written to use the current form, the more debt that piles up that must be paid at the time the refactoring is finally done.
  • Lack of knowledge, when the developer simply doesn’t know how to write elegant code.

“Interest payments” are both in the necessary local maintenance and the absence of maintenance by other users of the project. Ongoing development in the upstream project can increase the cost of “paying off the debt” in the future. One pays off the debt by simply completing the uncompleted work.

The build-up of technical debt is a major cause of missed deadlines. It is difficult to estimate exactly how much work is necessary to pay off the debt. For each change that is initiated, an uncertain amount of uncompleted work is committed to the project. The deadline is missed when the project realizes that there is more uncompleted work (debt) than there is time to complete it. To have predictable release schedules, a development team should limit the amount of work in progress in order to keep the amount of uncompleted work (or debt) small at all times.

“As an evolving program is continually changed, its complexity, reflecting deteriorating structure, increases unless work is done to maintain or reduce it.”
— Meir Manny Lehman, 1980
While Manny Lehman’s Law already indicated that evolving programs continually add to their complexity and deteriorating structure unless work is done to maintain it, Ward Cunningham first drew the comparison between technical complexity and debt in a 1992 experience report:

“Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.”
— Ward Cunningham, 1992
In his 2004 text, Refactoring to Patterns, Joshua Kerievsky presents a comparable argument concerning the costs associated with architectural negligence, which he describes as “design debt”.

“…doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.”

–Martin Fowler
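Fowler’s interest-versus-principal trade-off can be made concrete with a small worked example. The effort figures below (two days of “interest” per sprint, a five-day refactoring as the “principal”) are illustrative assumptions, not measurements.

```python
# Worked example of the interest-vs-principal trade-off
# (effort figures are illustrative assumptions).

def cumulative_cost(sprints, interest_per_sprint, principal=0, payoff_sprint=None):
    """Total extra effort (in days) over `sprints` if the debt is repaid
    (refactored) at `payoff_sprint`, or never if payoff_sprint is None."""
    cost = 0
    for s in range(1, sprints + 1):
        if payoff_sprint is not None and s == payoff_sprint:
            cost += principal            # one-time refactoring effort
        if payoff_sprint is None or s < payoff_sprint:
            cost += interest_per_sprint  # drag from the quick-and-dirty design
    return cost

# Paying 2 days of "interest" every sprint for 10 sprints...
never_pay = cumulative_cost(10, interest_per_sprint=2)
# ...versus a 5-day refactoring in sprint 3.
pay_early = cumulative_cost(10, interest_per_sprint=2, principal=5, payoff_sprint=3)
```

Under these assumptions, never paying costs 20 days of drag over ten sprints, while refactoring in sprint 3 costs 9 days total: the principal is quickly recovered through the interest no longer paid.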

“Technical Debt” refers to delayed technical work that is incurred when technical shortcuts are taken, usually in pursuit of calendar-driven software schedules. Just like financial debt, some technical debts can serve valuable business purposes. Other technical debts are simply counterproductive. The ability to take on debt safely, track their debt, manage their debt, and pay down their debt varies among different organizations. Explicit decision making before taking on debt and more explicit tracking of debt are advised.

–Steve McConnell

Activities that might be postponed include documentation, writing tests, attending to TODO comments and tackling compiler and static code analysis warnings. Other instances of technical debt include knowledge that isn’t shared around the organization and code that is too confusing to be modified easily.

In open source software, postponing sending local changes to the upstream project is a technical debt.

The basic workflow for tackling technical debt—indeed, any kind of improvement—is repeatable. Essentially, there are four steps:

  1. Identify where the debt is. How much is each debt item affecting the company’s bottom line and the team’s productivity?
  2. Build a business case and forge a consensus on priority with those affected by the debt, both team and stakeholders.
  3. Fix the debt on the chosen item head-on with proven tactics.
  4. Repeat. Go back to step 1 to identify additional debt and hold the line on the improvements made.

Agile Approach to Technical Debt

Involve the Product Owner and “promote” him to be the sponsor of technical debt reduction.

Sometimes it’s hard to find debt, especially if a team is new to a codebase. In cases where there’s no collective memory or oral tradition to draw on, the team can use a static analysis tool such as NDepend to probe the code for the more troublesome spots.

Determining test coverage can be another valuable technique for discovering hidden debt.

Use the log feature of the version control system to generate a report of changes over the last month or two. Find the parts of the system that receive the most activity (changes or additions) and scrutinize them for technical debt. This will help to find the bottlenecks that are challenging today; there’s very little value in fixing debt in those parts of the system that change rarely.
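This churn analysis can be sketched in a few lines. The file paths below are hypothetical; in practice, the input would be the file list produced by the version-control log (for example, `git log --since="2 months" --name-only --pretty=format:`).

```python
# Sketch: rank files by recent change frequency ("churn") from a
# version-control log. Paths are hypothetical placeholders for real log output.

from collections import Counter

def churn(changed_paths, top=3):
    """Return the `top` most frequently changed paths as (path, count) pairs."""
    return Counter(changed_paths).most_common(top)

# One entry per file change, as a VCS log would report them.
log_paths = [
    "billing/invoice.py", "billing/invoice.py", "ui/form.py",
    "billing/invoice.py", "core/legacy.py", "billing/invoice.py",
    "core/legacy.py", "ui/form.py",
]
hotspots = churn(log_paths)
```

The files at the top of this ranking are where interest on the debt is actually being paid, and therefore where refactoring effort returns the most value.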

Inventory and structure known technical debt

Having convinced the product owner, it is time to collect and inventory known technical problems and map them onto a structure that visualizes the system and project landscape.

It is not about completely understanding every topic. It is about finding a proper structure, identifying the most important issues, and mapping them onto that structure. It’s about extracting knowledge about the systems from people’s heads to develop a common picture of existing technical problems.

Write the names/identifiers of all applications and modules the team owns on cards, and pin these cards on a whiteboard. Next, extract to-dos (to solve existing problems) from all documentation media used (wiki, Jira, Confluence, code documentation, paper), write them on post-its, and stick them next to the application they belong to. The board should be accessible to all team members over a period of several days, and every team member is responsible for completing, restructuring, and correcting it during this period, so that the team ends up with a rounded portfolio of the existing debt in its systems.

Having collected and understood the work required to reduce the technical debt within the systems, the team now needs a baseline for defining a good strategy – a repayment plan. Costs and benefits should therefore be estimated.

Obtaining consensus is key: we want the majority of team members to support the selected improvement initiative. Luke Hohmann’s “Buy a Feature” approach from his book Innovation Games can help build that consensus.

  1. Generate a short list (5-9 items) of things you want to improve. Ideally these items are in your short-term path.
  2. Qualify the items in terms of difficulty. We can use the abstract notion of a T-shirt size: small, medium, large, or extra-large.
  3. Give your features a price based on their size. For example, small items may cost $50, medium items $100, and so on.
  4. Give everyone a certain amount of money. The key here is to introduce scarcity into the game. You want people to have to pool their money to buy the features they’re interested in. You want to price, say, medium features at a cost where no one individual can buy them. It’s valuable to find where more than a single individual sees the priority since you’re trying to build consensus.
  5. Run a short game, perhaps 20 or 30 minutes in length, where people can discuss, collude, and pitch their case. This can be quite chaotic and also quite fun, and you’ll see where the seats of influence are in your team.
  6. Review the items that were bought and by what margins they were bought. You can choose to rank your list by the purchased features or, better yet, use the results of the Buy a Feature game in combination with other techniques, such as an awareness of the next release plan.
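The mechanics of the game can be sketched as follows. The prices, feature names, players, and bids below are illustrative assumptions; the point is that no single player can afford a medium feature alone, so buying one demonstrates consensus.

```python
# Sketch of the "Buy a Feature" mechanics (all values are illustrative).

PRICES = {"S": 50, "M": 100, "L": 200, "XL": 400}  # price per T-shirt size

def buy_features(bids):
    """bids: {feature: (size, {player: amount})}. A feature is bought only
    when pooled bids meet its price; returns bought features ranked by pool."""
    bought = []
    for feature, (size, player_bids) in bids.items():
        pool = sum(player_bids.values())
        if pool >= PRICES[size]:
            bought.append((feature, pool))
    return sorted(bought, key=lambda item: -item[1])

bids = {
    "automate deploys": ("M", {"ann": 60, "bo": 50}),   # bought only by pooling
    "rewrite parser":   ("XL", {"cy": 100}),            # under-funded, not bought
    "add CI server":    ("M", {"ann": 40, "dee": 80}),
}
result = buy_features(bids)
```

Ranking the purchased items by pool size shows not only what the team wants fixed, but how broad the support for each fix is.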

Taking on some judicious technical debt can be an appropriate decision to meet schedules or to prototype a new feature set, as long as the decision was made with a clear understanding of the costs involved later in the project, such as code refactoring.

As Martin Fowler says, “The useful distinction isn’t between debt or non-debt, but between prudent and reckless debt.”


Technical debt actually begets more technical debt over time, as its state diagram depicts.

Load Testing as a Practice to identify Technical Debt

Load testing exposes weaknesses in an application that cannot be found through traditional functional testing. Those weaknesses are generally reflected in the application’s inability to scale appropriately. Testers are also typically already planning to perform load testing at some point prior to the production release of the application.

Load testing involves enabling virtual users to execute predetermined actions simultaneously against the application. The scripts exercise features either singly or in sequences expected to be common among production users.

Load testing looks at the characteristics of an application under a simulated load, similar to the way it might operate in a production environment. At the highest level, it determines if an application will support the number of simultaneous users specified in the project requirements.

However, it does more than that. By looking at system characteristics as you increase the number of simultaneous users, you can make some useful statements regarding what resources are being stressed, and where in the application they are being stressed. With this information, the team can identify weaknesses in the application that are generally the result of incurred technical debt, thereby providing the basis for identifying the debt.
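A bare-bones sketch of the idea: many concurrent “virtual users” run the same transaction while success and latency are recorded. The transaction here is a stub that merely sleeps; a real load test would exercise the application under test end to end and scale the user count up while watching how the latency curve bends.

```python
# Minimal load-test sketch (illustrative; the transaction is a stub).

import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one end-to-end user transaction."""
    time.sleep(0.001)
    return True

def one_user(runs):
    """One virtual user: run the transaction `runs` times, timing each run."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        ok = transaction()
        samples.append((ok, time.perf_counter() - start))
    return samples

def load_test(users, runs_per_user=5):
    """Return (success_rate, worst_latency_seconds) under `users` virtual users."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(one_user, runs_per_user) for _ in range(users)]
        samples = [s for f in futures for s in f.result()]
    return (sum(ok for ok, _ in samples) / len(samples),
            max(lat for _, lat in samples))

rate, worst = load_test(users=8)
```

Repeating the run at increasing user counts and comparing the success rate and worst-case latency is what reveals where the application stops scaling.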

Some automation and measurement tools are required to successfully identify and assess technical debt with load testing.

Coding / Testing Practices

Management has to make the time through proactive investment, but so does the team. Each team member needs to invest in their own knowledge and education on how to write clean code, their business domain, and how to do their jobs optimally. While teams learn during the project through retrospectives, design reviews, and pair programming, teams should also learn agile engineering practices for design, development, and testing. Whether through courses, conferences, user groups, podcasts, web sites, or books – there are many options for learning better coding practices to reduce technical debt.

Design Principles and Techniques

Additionally, architects need to learn about evolutionary design principles and refactoring techniques for fixing poor designs today and building better designs tomorrow. Lastly, a governance group should meet periodically to review performance and plan future system changes to further reduce technical debt.

Definition of Done

Establish a common “definition of done” for each requirement, user story, or use case, and ensure it is validated with the business before development begins. A simple format such as “this story is done when: <list of criteria>” works well. The Product Owner presents “done” to the Developers, User Interface Designers, Testers, and Analysts, and together they collaboratively work out the finer implementation details. Set expectations with developers that only stories meeting “done” (as validated by the testers) will be accepted and contribute towards velocity. Similarly, set expectations with management and analysts that only stories that are “ready” are scheduled for development, to ensure poor requirements don’t cause further technical debt.

In all popular languages and platforms today, open source and commercial tools are available to automate builds, the continuous integration of code changes, unit testing, acceptance testing, deployments, database setup, performance testing, and many other common manual activities. In addition to reducing manual effort, automation reduces the risk of mistakes and over-reliance on one individual for performing critical activities. First set up automated builds (e.g., Ant, NAnt, or rake), followed by continuous integration (e.g., Hudson). Next set up automated unit testing (e.g., JUnit, NUnit, or RSpec) and acceptance testing (e.g., FitNesse and Selenium). Finally, set up automated deployments (e.g., Capistrano or custom shell scripts). It’s amazing what a few focused team members can accomplish in a relatively short period of time if given time to focus on automating common activities to reduce technical debt.
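As a small illustration of the automated unit testing mentioned above, here is a hypothetical unit under test with tests written in Python’s unittest (standing in for JUnit/NUnit/RSpec). The `priced_order` function and its rules are invented purely for the example.

```python
# Illustrative automated unit tests; the unit under test is hypothetical.

import unittest

def priced_order(quantity, unit_price, discount=0.0):
    """Total price for an order line, applying a fractional discount."""
    if quantity < 0 or not 0.0 <= discount <= 1.0:
        raise ValueError("invalid order")
    return round(quantity * unit_price * (1 - discount), 2)

class PricedOrderTest(unittest.TestCase):
    def test_plain_order(self):
        self.assertEqual(priced_order(3, 9.99), 29.97)

    def test_discount_applied(self):
        self.assertEqual(priced_order(2, 100.0, discount=0.25), 150.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            priced_order(1, 10.0, discount=2.0)

# In a CI build this suite would run on every commit, e.g. via `python -m unittest`.
```

Run automatically on every commit, a suite like this is what makes the “quick and risky band-aid” fixes described earlier unnecessary.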

Consider rating and rewarding developers on the quality of their code. In some cases, a few highly skilled developers may be better than volumes of mediocre resources whose work may require downstream reversal of debt. Regularly run code complexity reviews and technical debt assessments, sharing the results across the team. Not only can specific examples help the team improve, but trends can signal that a project is headed in the wrong direction or encountering unexpected complexity.

PEARL XXV : Scaled Agile Framework® pronounced SAFe™


Scaled Agile Framework®, pronounced SAFe™ – all individuals and enterprises can benefit from the application of these innovative and empowering scaled agile methods.

SAFe Core Values

Our modern world runs on software. In order to keep pace, we build increasingly complex and sophisticated software systems. Doing so requires larger teams and continuously rethinking the methods and practices – part art, science, engineering, mathematics, social science – that we use to organize and manage these important activities. The Scaled Agile Framework represents one such set of advances: an interactive knowledge base for implementing agile practices at enterprise scale. The Scaled Agile Framework®, or SAFe, provides a recipe for adopting Agile at enterprise scale; it is illustrated in the big picture. As Scrum is to the Agile team, SAFe is to the Agile enterprise. SAFe tackles the tough issues – architecture, integration, funding, governance, and roles at scale. It is field-tested and enterprise-friendly. SAFe is the brainchild of Dean Leffingwell; as Ken Schwaber and Jeff Sutherland are to Scrum, Dean Leffingwell is to SAFe. SAFe is based on Lean and Agile principles. There are three levels in SAFe:

  • Team
  • Program
  • Portfolio
Scaled Agile Framework big picture

At the Team Level: Scrum with XP engineering practices are used. Design/Build/Test (DBT) teams deliver working, fully tested software every two weeks.  There are five to nine members of each team.

The Scrum team is renamed as the DBT team (from Design / Build / Test) and the sprint review is described as the sprint demo.

One positive aspect of SAFe is its alignment between team and business objectives during PSI (Potentially Shippable Increment) planning.

It makes it easier to see the connection between the company roadmap/vision and day-to-day work. A high-level view of the business and architectural needs behind the company’s investment, and of how they connect to a particular epic at the program level and then to the stories implemented at the team level, is helpful during planning.

Similarly, HIP sprints (from Hardening / Innovation / Planning) are scheduled at the end of each PSI.


A spike is a story or task aimed at answering a question or gathering information, rather than at producing shippable product.

In practice, the spikes teams take on are often proof-of-concept types of activities. The definition above says that the work is not focused on the finished product. It may even be designed to be thrown away at the end. This gives your product owner the proper expectation that you will most likely not directly implement the spike solution. During the course of the sprint, you may discover that what you learned in the spike cannot be implemented for any practical purpose. Or you may discover that the work can be used for great benefit on future stories. Either way, the intention of the spike is not to implement the completed work “as is.”

There are two other characteristics that spikes should have:

  1. Have clear objectives and outcomes for the spike. Be clear on the knowledge you are trying to gain and the problem(s) you are trying to address. It’s easy for a team to stray off into something interesting and related, but not relevant.
  2. Be timeboxed. Spikes should be timeboxed so you do just enough work that’s just good enough to get the value required.

At the Program Level:

Features are services provided by the system that fulfill stakeholders' needs. They are maintained in the program backlog and are sized to fit in a PSI/Release so that each PSI/Release delivers conceptual integrity. Features bridge the gap between user stories and epics.

SAFe defines an Agile Release Train (ART). As the iteration is to the team, the train is to the program. The ART (or train) is the primary vehicle for value delivery at the program level. It delivers a value stream for the organization. SAFe is three-letter-acronym (TLA) heaven – DBT, ART, RTE, PSI, NFR, RMT and I&A! Between 5 and 10 teams work together on a train. They synchronize their release boundaries and their iteration boundaries. Every 10 weeks (5 iterations) a train delivers a Potentially Shippable Increment (PSI). A demo and inspect-and-adapt sessions are held, and planning begins for the next PSI. PSIs provide a steady cadence for the development cycle. They are separate from the concept of market releases, which can happen more or less frequently and on a different schedule. New program-level roles are defined:

  •  System Team
  •  Product Manager
  •  System Architect
  •  Release Train Engineer (RTE)
  •  UX and Shared Resources (e.g., security, DBA)
  •  Release Management Team

In IT/PMI environments the Program Manager or Senior Project Manager might fill one of two roles. If they have deep domain expertise, they are likely to fill the Product Manager role. If they have strong people-management skills and understand the logistics of release, they often become the Release Train Engineer. SAFe makes a distinction between content (what the system does) and design (how the system does it). There is separate “authority” for content and design. The Product Manager (Program Manager) has content authority at the program level. He or she defines and prioritizes the program backlog.

SAFe defines an artifact hierarchy of Epics – Features – User Stories. The program backlog is a prioritized list of features. Features can originate at the program level, or they can derive from epics defined at the portfolio level. Features decompose to user stories, which flow to team-level backlogs. Features are prioritized based on Don Reinertsen's Weighted Shortest Job First (WSJF) economic decision framework. The System Architect has design authority at the program level. He collaborates day to day with the teams, ensuring that non-functional requirements (NFRs) are met. He works with the enterprise architect at the portfolio level to ensure that there is sufficient architectural runway to support upcoming user and business needs. The UX Designer(s) provide UI design, UX guidelines, and design elements for the teams. In a similar manner, shared specialists provide services such as security, performance, and database administration across the teams. The Release Train Engineer (RTE) is the uber-ScrumMaster. The Release Management Team is a cross-functional team – with representation from marketing, development, quality, operations, and deployment – that approves frequent releases of quality solutions to customers.
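As an illustration of the WSJF idea, the ranking can be sketched in a few lines of Python. This is a minimal sketch, not SAFe's official tooling; the feature names and scores below are invented for the example, and the cost-of-delay components use the relative scores typical of the technique.

```python
# Hypothetical sketch of Weighted Shortest Job First (WSJF) prioritization.
# Cost of Delay = user/business value + time criticality + risk reduction /
# opportunity enablement, each scored on a relative scale.

def wsjf(business_value, time_criticality, risk_reduction, job_size):
    """Return the WSJF score: cost of delay divided by job size."""
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_size

# Invented example features with relative scores.
features = [
    {"name": "Single sign-on", "bv": 8, "tc": 13, "rr": 5, "size": 8},
    {"name": "Audit logging",  "bv": 5, "tc": 3,  "rr": 8, "size": 3},
    {"name": "Dark mode",      "bv": 3, "tc": 1,  "rr": 1, "size": 5},
]

# Highest WSJF score first: the shortest, most valuable job wins.
ranked = sorted(
    features,
    key=lambda f: wsjf(f["bv"], f["tc"], f["rr"], f["size"]),
    reverse=True,
)
for f in ranked:
    print(f["name"], round(wsjf(f["bv"], f["tc"], f["rr"], f["size"]), 2))
```

Note how the small "Audit logging" job outranks the higher-value but larger "Single sign-on": dividing by job size is what makes the framework favor the weighted shortest job.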

For agility at scale, a modest amount of modeling has been introduced to support the vision, upcoming features, and ongoing extension of the architectural runway for each Agile Release Train.

The Agile Release Train is a long-lived team of Agile teams, typically consisting of 50 to 125 individuals, that serves program-level value delivery in SAFe. Using a common sprint cadence, each train has dedicated resources to continuously define, build, test, and deliver value to one of the enterprise value streams. Teams are aligned to a common mission via a single program backlog and include the program management, architectural, UX guidance, and Release Train Engineer roles. Each train produces a valuable and evaluable system-level Potentially Shippable Increment at least every 8 to 12 weeks, in accordance with the PSI objectives established by the teams during each release planning event, though a team can release at any time according to market needs.

Cadence is what gives a team a feeling of demarcation, progression, resolution, or flow – a pattern that allows the team to know what they are doing and when it will be done. For very small or mature teams, this cadence could be complex, arrhythmic, or syncopated. Even so, it is enough to allow a team to make reliable commitments, because recognizing their cadence allows them to understand their capability or capacity.

The program backlog is the single, definitive repository for all the work anticipated by the program. The backlog is created from the breakdown of business and architectural epics into features that will address user needs and deliver business benefits. The purpose of the program roadmap is to establish alignment across all teams, while also providing predictability for the deliverables over an established time horizon.

Program epics affect a single release train.

SAFe provides a cadence-based approach to the delivery of value via PSIs: schedule, manage, and govern your synchronized PSIs.

Shared iteration schedules allow multiple teams to stay on the same cadence and facilitate roll-up reporting. Release capacity planning allows you to scale agile initiatives across multiple teams and deploy more predictable releases. Cross-team dependencies are quickly identified and made visible to the entire program.

At the Portfolio Level:

The Portfolio Vision defines how the enterprise’s business strategy will be achieved.

In the Scaled Agile Framework, the Portfolio Level is the highest and most strategic layer, where programs are aligned to the company's business strategy and investment approach.

Program Portfolio Management (PPM) has a central role in strategy, investment funding, program management, and governance. Investment themes drive budget allocations. Themes are set as part of the budgeting process, with a lifespan of 6–12 months.

Epics are enterprise initiatives that are sufficiently substantial in scope that they warrant analysis and an understanding of their potential ROI. Epics require a lightweight business case that elaborates the business and technology impact and the implementation strategy. Epics are generally cross-cutting: they impact multiple organizations, budgets, and release trains, and occur over multiple PSIs.

Portfolio epics affect multiple release trains. Epics cut across all three business dimensions: Time (multiple PSIs, years), Scope (release trains, applications, solutions, and business platforms), and Organization (departments, business units, partners, the end-to-end business value chain).

The portfolio philosophy is centralized strategy with local execution. Epics define large development initiatives that encapsulate the new development necessary to realize the benefits of investment themes. Program Portfolio Management represents the individuals responsible for strategy, investment funding, program management, and governance. They are the stewards of the portfolio vision: they define the relevant value streams, control the budget through investment themes, define and prioritize cross-cutting portfolio backlog epics, guide Agile Release Trains, and report to the business on investment spend and program progress. SAFe provides seven transformation patterns to lead the organization to program portfolio management:

  • Decentralized decision making
  • Demand management, continuous value flow
  • Lightweight, epic-only business cases
  • Decentralized rolling-wave planning
  • Agile estimating and planning
  • Self-organizing, self-managing Agile Release Trains
  • Objective, fact-based measures and milestones

Rolling Wave Planning is the process of planning a project in waves as the project proceeds and later details become clearer. Work to be done in the near term is planned in detail, while later work is based on high-level assumptions; high-level milestones are also set. As the project progresses, the risks, assumptions, and milestones originally identified become more defined and reliable. Rolling wave planning is useful, for instance, when there is an extremely tight schedule or timeline to adhere to, and where waiting for more thorough up-front planning would have placed the schedule into an unacceptable negative schedule variance.

This is an approach that iteratively plans for a project as it unfolds, similar to the techniques used in Scrum (development) and other forms of Agile software development.

Progressive Elaboration is what occurs in this rolling wave planning process: over time, work packages are elaborated in greater detail. As the weeks and months pass, the missing, more elaborated detail is provided for the work packages as they appear on the planning horizon.

Investment themes represent the set of initiatives that drive the enterprise's investment in systems, products, applications, and services. Epics can be grouped by investment theme, and relative capacity allocations can then be visualized to determine whether planned epics are in alignment with the overall business strategy. Epics are large-scale development initiatives that realize the value of investment themes.



There are business epics (customer-facing) and architectural epics (technology solutions). Business and architectural epics are managed in parallel Kanban systems. Objective metrics support IT governance and continuous improvement. Enterprise architecture is a first class citizen.  The concept of Intentional Architecture provides a set of planned initiatives to enhance solution design, performance, security and usability. SAFe patterns provide a transformation roadmap.

Architectural runway exists when the enterprise's platforms have sufficient existing technological infrastructure (code) to support the implementation of the highest-priority features without excessive, delay-inducing redesign. To maintain some degree of runway, the enterprise must continuously invest in refactoring and extending existing platforms.

SAFe suggests the development and implementation of kanban systems for business and architecture portfolio epics.

The architectural epic kanban system brings visibility, Work in Process (WIP) limits, and continuous flow to portfolio-level architectural epics. This kanban system has four states: funnel, backlog, analysis, and implementation. The architectural epic kanban is typically under the auspices of the CTO/technology office, which includes the enterprise and system architects.

The business epic kanban system brings visibility, Work in Process (WIP) limits, and continuous flow to portfolio-level business epics. It has the same four states: funnel, backlog, analysis, and implementation. The business epic kanban is typically under the auspices of program portfolio management, comprised of those executives and business owners who have responsibility for implementing business strategy.
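The mechanics of such a kanban system can be sketched in Python. This is only an illustrative model: the four states match those named above, but the WIP limits, epic names, and the board API are invented for the example.

```python
# Illustrative sketch of a portfolio-epic kanban board.
# States match the text above; limits and epic names are invented.
WIP_LIMITS = {"funnel": None, "backlog": None, "analysis": 3, "implementation": 2}

class KanbanBoard:
    def __init__(self):
        # One list of epics per state, all starting empty.
        self.states = {state: [] for state in WIP_LIMITS}

    def add(self, epic):
        # New epics always enter through the funnel.
        self.states["funnel"].append(epic)

    def move(self, epic, src, dst):
        # Enforce the WIP limit of the destination state, if it has one.
        limit = WIP_LIMITS[dst]
        if limit is not None and len(self.states[dst]) >= limit:
            raise RuntimeError(f"WIP limit reached for {dst}")
        self.states[src].remove(epic)
        self.states[dst].append(epic)

board = KanbanBoard()
for epic in ["E1", "E2", "E3", "E4"]:
    board.add(epic)
board.move("E1", "funnel", "backlog")
board.move("E1", "backlog", "analysis")  # allowed: analysis is under its limit
```

The WIP limit on the analysis state is what forces the continuous-flow behavior: a fifth epic cannot enter analysis until one of the current ones moves on to implementation.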

Value streams


Lean Approach

The Scaled Agile Framework is based on a number of trends in modern software engineering:

  • Lean Thinking
  • Product Development flow
  • Agile Development
Dean Leffingwell and Lean Thinking


Agile provides the tools needed to empower and engage development teams to achieve unprecedented levels of productivity, quality, and engagement. The SAFe House of Lean provides the following constructs:

  • The Goal: Value – sustainably the shortest lead time; best quality and value to people
  • Respect for people
  • Kaizen (continuous improvement)
  • Principles of product development flow
  • Foundation: Management – the Lean-thinking manager-teacher

Investment themes reflect how a portfolio allocates its budget across the initiatives that make up the portfolio's business strategy. Investment themes are portfolio-level capacity allocations: each theme receives the resources implied by its budget.

PEARL X : Behavior Driven Development

PEARL X : Behavior Driven Development provides stakeholder value through collaboration throughout the entire project

Behavior-driven development was developed by Dan North in response to issues encountered while teaching test-driven development:

  • Where to start in the process
  • What to test and what not to test
  • How much to test in one go
  • What to call the tests
  • How to understand why a test fails
At the heart of BDD is a rethinking of the approach to unit testing and acceptance testing that North arrived at while dealing with these issues. For example, he proposes that unit test names be whole sentences starting with the word “should”, and that tests be written in order of business value.
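North's naming suggestion can be illustrated with a small, hypothetical Python example. The Account class and its behavior are invented purely for the illustration; the point is that each test name reads as a sentence describing what the unit should do.

```python
import unittest

# Invented example class: a trivial bank account.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class AccountBehaviour(unittest.TestCase):
    # Each test name is a whole sentence starting with "should",
    # describing one expected behavior of the unit.
    def test_should_reduce_balance_on_withdrawal(self):
        account = Account(balance=100)
        account.withdraw(30)
        self.assertEqual(account.balance, 70)

    def test_should_refuse_withdrawal_beyond_balance(self):
        account = Account(balance=10)
        with self.assertRaises(ValueError):
            account.withdraw(50)
```

Reading the test names aloud ("Account should reduce balance on withdrawal…") yields a behavioral specification, which is exactly the shift in emphasis BDD proposes.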

At its core, behavior-driven development is a specialized version of test-driven development which focuses on behavioral specification of software units.

Test-driven development is a software development methodology which essentially states that for each unit of software, a software developer must:

  • define a test set for the unit first;
  • then implement the unit;
  • finally verify that the implementation of the unit makes the tests succeed.
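The three steps above can be sketched in Python with a deliberately simple unit; the leap-year function is chosen only for the illustration.

```python
# A minimal sketch of the TDD cycle described above.
# Step 1: define the test set for the unit first
# (at this point the test fails, because is_leap_year does not exist yet).
def test_leap_year():
    assert is_leap_year(2000) is True   # divisible by 400
    assert is_leap_year(1900) is False  # divisible by 100 but not 400
    assert is_leap_year(2024) is True   # divisible by 4
    assert is_leap_year(2023) is False

# Step 2: implement the unit.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: verify that the implementation makes the tests succeed.
test_leap_year()
```

In practice the cycle is run repeatedly in small increments (red, green, refactor) rather than once, but the three-step shape is the same.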

This definition is rather non-specific in that it allows tests in terms of high-level software requirements, low-level technical details or anything in between. The original developer of BDD (Dan North) came up with the notion of BDD because he was dissatisfied with the lack of any specification within TDD of what should be tested and how. One way of looking at BDD therefore, is that it is a continued development of TDD which makes more specific choices than TDD.

Behavior Driven Development

Behavior-driven development specifies that tests of any unit of software should be specified in terms of the desired behavior of the unit. Borrowing from agile software development, the “desired behavior” in this case consists of the requirements set by the business – that is, the desired behavior that has business value for whatever entity commissioned the software unit under construction. Within BDD practice, this is referred to as BDD being an “outside-in” activity.

BDD practices

The practices of BDD include:

  • Establishing the goals of different stakeholders required for a vision to be implemented
  • Drawing out features which will achieve those goals using feature injection
  • Involving stakeholders in the implementation process through outside–in software development
  • Using examples to describe the behavior of the application, or of units of code
  • Automating those examples to provide quick feedback and regression testing
  • Using ‘should’ when describing the behavior of software to help clarify responsibility and allow the software’s functionality to be questioned
  • Using ‘ensure’ when describing responsibilities of software to differentiate outcomes in the scope of the code in question from side-effects of other elements of code.
  • Using mocks to stand-in for collaborating modules of code which have not yet been written
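The last practice – using mocks to stand in for collaborators that have not been written yet – can be sketched with Python's `unittest.mock`. The OrderService and its payment-gateway interface are invented for the example; only the agreed interface (`charge`) matters.

```python
from unittest.mock import Mock

# Invented example: OrderService depends on a payment gateway
# module that has not been written yet.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # The collaborator is called through an agreed interface.
        receipt = self.gateway.charge(amount)
        return {"status": "placed", "receipt": receipt}

# A Mock stands in for the missing gateway module.
gateway = Mock()
gateway.charge.return_value = "receipt-001"

service = OrderService(gateway)
order = service.place_order(42)

# The interaction with the collaborator can be verified
# even though the real gateway does not exist yet.
gateway.charge.assert_called_once_with(42)
print(order)  # {'status': 'placed', 'receipt': 'receipt-001'}
```

This lets the outside-in flow proceed: the caller's behavior is specified and tested first, and the mocked interface becomes the specification for the collaborator written later.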

Domain-Driven Design (DDD) is a collection of principles and patterns that help developers craft elegant object systems. Properly applied it can lead to software abstractions called domain models. These models encapsulate complex business logic, closing the gap between business reality and code.


BDD is driven by business value; that is, the benefit to the business which accrues once the application is in production. The only way in which this benefit can be realized is through the user interface(s) to the application, usually (but not always) a GUI.

In the same way, each piece of code, starting with the UI, can be considered a stakeholder of the other modules of code which it uses. Each element of code provides some aspect of behavior which, in collaboration with the other elements, provides the application behavior.

The first piece of production code that BDD developers implement is the UI. Developers can then benefit from quick feedback as to whether the UI looks and behaves appropriately. Through code, and using principles of good design and refactoring, developers discover collaborators of the UI, and of every unit of code thereafter. This helps them adhere to the principle of YAGNI, since each piece of production code is required either by the business, or by another piece of code already written.

YAGNI: You Aren't Gonna Need It

Behavior-Driven Development (BDD) is an agile process designed to keep the focus on stakeholder value throughout the whole project. The premise of BDD is that the requirement has to be written in a way that everyone understands it – business representative, analyst, developer, tester, manager, etc. The key is to have a unique set of artifacts that are understood and used by everyone.

User stories are the central axis around which a software project rotates. Developers use user stories to capture requirements and to express customer expectations. User stories provide the unit of effort that project management uses to plan and to track progress. Estimations are made against user stories, and user stories are where software design begins. User stories help to shape a system’s usability and user experience.

User stories express requirements in terms of The Role, The Goal, and The Motivation.

A BDD story is written by the whole team and used as both requirements and executable test cases. It is a way to perform test-driven development (TDD) with a clarity that cannot be accomplished with unit testing. It is a way to describe and test functionality in (almost) natural language.


BDD Story Format
Even though there are different variations of the BDD story template, they all have two common elements: narrative and scenario. Each narrative is followed by one or more scenarios.

The BDD story format looks like this:

In order to [benefit]
As a [role]
I want to [feature]
Scenario: [description]
Given [context or precondition]
When [event or action]
Then [outcome validation]
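The template can be made concrete with a small, hand-rolled Python sketch. No particular BDD framework is assumed here; the App class, the login behavior, and the messages are all invented for the illustration.

```python
# Hand-rolled sketch of the template above:
#   Narrative: In order to reach my dashboard / As a visitor / I want to log in
#   Scenario: successful login shows a welcome message

class App:  # invented stand-in for the system under test
    def __init__(self):
        self.screen = "home"
        self.message = ""

    def log_in(self, username):
        self.screen = "dashboard"
        self.message = f"Welcome, {username}!"

# Given visitor is on the home screen
app = App()
assert app.screen == "home"

# When user logs in
app.log_in("alice")

# Then welcome message is displayed
assert app.message == "Welcome, alice!"
```

Each Given/When/Then line maps directly onto a precondition, an action, and an outcome validation, which is what makes the scenario double as an executable test case.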

“User stories are a promise for a conversation” (Ron Jeffries)
A BDD story consists of a narrative and one or more scenarios. A narrative is a short, simple description of a feature told from the perspective of a person or role that requires the new functionality. The intention of the narrative is NOT to provide a complete description of what is to be developed but to provide a basis for communication between all interested parties (business, analysts, developers, testers, etc.) The narrative shifts the focus from writing features to discussing them.
Even though it is usually very short, it tries to answer three basic questions that are often overlooked in traditional requirements.
What is the benefit or value that should be produced (In order to)?
Who needs it (As a)? And what is a feature or goal (I want to)?
With those questions answered, the team can start defining the best solution in collaboration with the stakeholders.

The narrative is further defined through scenarios that provide a definition of done: acceptance criteria that confirm that the functionality developed from the narrative fulfills expectations.

It is important to remember that the written part of a BDD story is incomplete until discussions about that narrative occur and scenarios are written. Only the whole story (narrative and one or more scenarios) represents a full description of the functionality and definition of done.

If more information is needed, narratives can point to a diagram, workflow, spreadsheet, or any other external document.

Since narratives have some characteristics of traditional requirements, it is important to describe the distinctions. The two most important differences concern precision and planning.

Narratives favor verbal communication. Written language is often imprecise, and team members and stakeholders might interpret the requirement in different ways.

Verbal communication wins over written.
Consider, as another example, the following requirement statement relating to a registration screen: “The system shall allow the user to register using 16 character username and 8 character password”.

It is unclear whether the username must be exactly 16 characters, any length up to 16 characters, or any length with a minimum of 16 characters. In this particular case, the business analyst removed any doubt as soon as clarification was requested.

However, there are many other cases in which developers take requirements as a final product and simply implement them as they understand them. In those cases they might not understand the reasons behind the requirements but just “follow specifications”. They might have a better solution in mind that never gets discussed.

IEEE 830 style requirements (“The system shall…”) often consist of hundreds or even thousands of statements. Planning such a large number of statements is extremely difficult. There are too many of them to prioritize and estimate, and it is hard to understand which functionalities should be developed. That is especially evident when those statements are separated into different sections representing different parts of the system or product. Without adequate prioritization, estimation, and description of the functionality itself, it is very hard to accomplish an iterative and incremental development process. Even if there is some kind of iteration plan, it can take a long time for a completed functionality to be delivered, since the development of isolated parts of the system proceeds in a different order and at a different speed.

Narratives are not requirement statements
The Computer Society of the Institute of Electrical and Electronics Engineers (IEEE) has published a set of guidelines on how to write software requirements specifications. This document is known as IEEE Standard 830, and it was last updated in 1998. One of the characteristics of an IEEE 830 statement is the use of the phrase “The system shall…”. Examples would be:

The system shall allow the user to login using a username and password.

The system shall have a login confirmation screen.

The system shall allow 3 unsuccessful login attempts.

Writing requirements in this way has many disadvantages: it is error-prone and time-consuming, to name but two. Two other important disadvantages are that such documents are boring and too long to read.
This might seem irrelevant until you realize the implications. If reviewers – and, where there is such a process, those who need to sign off the requirements – do not read the document thoroughly, skipping sections out of boredom or because a section does not affect them, many things will be missed. Moreover, a big document written at that level often prevents readers from understanding the big picture and the real goal of the project.

A Waterfall model combined with IEEE 830 requirements tends to plan everything in advance, define all details, and hope that the project execution will be flawless. In reality, there are almost no successful software projects that manage to accomplish these goals. Requirements change over time resulting in “change requests”. Changes are unavoidable and only through constant communication and short iterations can the team reduce the impact of these changes. IEEE 830 statements are a big document in the form of a checklist. Written, done, forgotten, the overall understanding is lost. The need for constant reevaluation is nonexistent.
Consider the following requirements:

  • The product shall have 4 wheels.
  • The product shall have a steering wheel.
  • The product shall be powered by electricity.
  • The product shall be produced in different colors.

Each of those statements can be developed and tested independently and assembled at the end of the process. The first image in someone’s head might be an electrically powered car.
That image is incorrect. The product does have four wheels, is powered by electricity (rechargeable batteries), and can be purchased in different colors – but it is a toy car.

That is probably not what individual would think from reading those statements. A better description would be:

In order to provide entertainment for children
As a parent
I want a small-sized car
By looking at this narrative, it is clear what the purpose is (entertainment for children), who needs it (parents), and what it is (a small-sized car). It does not provide all the details, since the main purpose is to establish the communication that will result in more information and a better understanding of someone’s needs.

That process might end with one narrative being split into many. Further on, scenarios produced from that narrative act as acceptance criteria, tests, and definition of done.

Who can write narratives?
Anyone can write narratives. Teams that are switching to Agile tend to have business analysts as the writers and owners of narratives, or even of whole BDD stories (a narrative with one or more scenarios).
In more mature agile teams, the product owner has the responsibility to make sure that there is a product backlog with BDD stories. That does not mean that he or she writes them all. Each member of the team can write BDD stories or parts of them (narrative or scenario).
Whether all the narratives are written by one person (customer, business analyst, or product owner) or by anyone on the team (developers, testers, etc.) usually depends on the type of organization and its customers. Organizations that are used to “traditional” requirements, and to procedures that require those requirements to be “signed off” before the project starts, often struggle during their transition to Agile and iterative development. In cases like this, having one person (usually a business analyst) as the owner and writer of narratives might make for a smoother transition towards team ownership and lower the impact on the organization.

A good BDD narrative uses the “INVEST” model:

  •  Independent. Reduced dependencies = easier to plan.
  •  Negotiable. Details added via collaboration.
  •  Valuable. Provides value to the customer.
  •  Estimable. Too big or too vague = not estimable.
  •  Small. Can be done in less than a week by the team.
  •  Testable. Good acceptance criteria defined as scenarios.

While IEEE 830 requirements are focused on system operations, BDD narratives focus on customer value. They encourage looseness of information in order to foster a higher level of collaboration between stakeholders and the team. The actual work being done is accomplished through collaboration revolving around the narrative that becomes more detailed through scenarios as the development progresses. Narratives are at higher level than IEEE 830 requirements. Narratives are followed by collaboratively developed scenarios which define when the BDD story meets the expectations.

Even though narratives can be written by anyone, it is often the result of conversations between the product owner or business analyst and the business stakeholder.
Scenarios describe interactions between user roles and the system. They are written in plain language with minimal technical details so that all stakeholders (customer, developers, testers, designers, marketing managers, etc.) can have a common base for use in discussions, development, and testing.
Scenarios are the acceptance criteria of the narrative. They represent the definition of done. Once all scenarios have been implemented, the story is considered finished. Scenarios can be written by anyone, with testers leading the effort.

The whole process should be iterative within the sprint; as the development of the BDD story progresses, new scenarios can be written to cover cases not thought of before. The initial set of scenarios should cover the “happy path”. Alternative paths should be added progressively during the duration of the sprint.

Scenarios consist of a description and given, when, and then steps.
The scenario description is a short explanation of what the scenario does. It should be possible to understand the scenario from its description. It should not contain details and should not be longer than ten words.
Steps are a sequence of preconditions, events, and outcomes of a scenario. Each step must start with words given, when or then.
The Given step describes the context or precondition that needs to be fulfilled.

Given visitor is on the home screen

The When step describes an action or some event.

When user logs in

The Then step describes an outcome.

Then welcome message is displayed

Any number of given, when and then steps can be combined, but at least one of each must be present. BDD steps increase the quality of conversations by forcing participants to think in terms of pre-conditions that allow users to perform actions that result in some outcomes. By using those three types of steps, the quality of the interactions between team members and stakeholders increases.

The following process should be followed.
1. Write and discuss narrative.
2. Write and discuss short descriptions of scenarios.
3. Write steps for each scenario.
4. Repeat steps 2 and 3 during the development of the story.

By starting only with scenario descriptions, we are creating a basis that will be further developed through steps. It allows us to discuss different aspects of the narrative without going into the details of all the steps required for each of the scenarios. Do not spend too much time writing descriptions of all possible scenarios. New ones will be written later.
Once each scenario has been fully written (description and steps) new possibilities and combinations will be discovered, resulting in more scenarios.

Each action or set of actions (when steps) is followed by one or more outcomes (then steps). Even though such a scenario provides a solid base, several steps may still be missing. This situation is fairly common, because many steps are not obvious from the start. Additional preconditions, actions, and outcomes become apparent only after the first version of the scenario has been written.

This scenario covers one of many different combinations. It describes the “happy path”, where all actions have been performed successfully. To specify alternative paths, we can copy this scenario and modify it a bit.

This scenario was not written and fully perfected at the first attempt, but through several iterations. With each version of the scenario, new questions were asked and new possibilities were explored.
The process of writing one scenario can take several days or even weeks. It can be done in parallel with code development. As soon as the first version of the scenario has been completed, development can start. As development progresses, unexpected situations will arise and will need to be reflected in the scenarios.

Behavior-driven development borrows the concept of the ubiquitous language from domain driven design. A ubiquitous language is a (semi-)formal language that is shared by all members of a software development team — both software developers and non-technical personnel. The language in question is both used and developed by all team members as a common means of discussing the domain of the software in question.  In this way BDD becomes a vehicle for communication between all the different roles in a software project.

BDD uses the specification of desired behavior as a ubiquitous language for the project team members. This is the reason that BDD insists on a semi-formal language for behavioral specification: some formality is a requirement for being a ubiquitous language. In addition, having such a ubiquitous language creates a domain model of specifications, so that specifications may be reasoned about formally. This model is also the basis for the different BDD-supporting software tools that are available.

Much like test-driven design practice, behavior-driven development assumes the use of specialized support tooling in a project. Inasmuch as BDD is, in many respects, a more specific version of TDD, the tooling for BDD is similar to that for TDD, but makes more demands on the developer than basic TDD tooling.

Tooling principles

In principle a BDD support tool is a testing framework for software, much like the tools that support TDD. However, where TDD tools tend to be quite free-format in what is allowed for specifying tests, BDD tools are linked to the definition of the ubiquitous language discussed earlier.

As discussed, the ubiquitous language allows business analysts to write down behavioral requirements in a way that will also be understood by developers. The principle of BDD support tooling is to make these same requirements documents directly executable as a collection of tests. The exact implementation of this varies per tool, but agile practice has come up with the following general process:

  • The tooling reads a specification document.
  • The tooling directly understands the completely formal parts of the ubiquitous language. Based on this, the tool breaks each scenario up into meaningful clauses.
  • Each individual clause in a scenario is transformed into some sort of parameter for a test for the user story. This part requires project-specific work by the software developers.
  • The framework then executes the test for each scenario, with the parameters from that scenario.
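As a rough sketch of the first two steps, a scenario can be broken into keyword-plus-clause pairs with a few lines of code. The class below is a hypothetical illustration of the idea, not code from JBehave or any other real framework:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the clause-splitting step: each line of a
// scenario is matched against the formal keywords of the ubiquitous
// language and split into a keyword plus its free-text clause.
public class ClauseSplitter {

    // A parsed clause: the formal keyword plus the remaining text.
    public record Clause(String keyword, String text) {}

    public static List<Clause> split(String scenario) {
        List<Clause> clauses = new ArrayList<>();
        for (String line : scenario.split("\n")) {
            String trimmed = line.trim();
            for (String keyword : new String[] {"Given", "When", "Then"}) {
                if (trimmed.startsWith(keyword + " ")) {
                    clauses.add(new Clause(keyword,
                            trimmed.substring(keyword.length() + 1)));
                    break;
                }
            }
        }
        return clauses;
    }

    public static void main(String[] args) {
        String scenario = String.join("\n",
                "Given a 5 by 5 game",
                "When I toggle the cell at (3, 2)",
                "Then the grid should look like ...");
        split(scenario).forEach(c ->
                System.out.println(c.keyword() + " -> " + c.text()));
    }
}
```

Each resulting clause would then be handed to project-specific test code, as the third step describes.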

Dan North has developed a number of frameworks that support BDD (including JBehave and RBehave), whose operation is based on the template that he suggested for recording user stories. These tools use a textual description for use cases, and several other tools (such as CBehave) have followed suit. However, this format is not required, so there are other tools that use other formats as well. For example, FitNesse (which is built around decision tables) has also been used to roll out BDD.

Tooling examples

There are several different examples of BDD software tools in use in projects today, for different platforms and programming languages.

Possibly the most well-known is JBehave, which was developed by Dan North. The following is an example taken from that project:

Consider an implementation of the Game of Life. A domain expert (or business analyst) might want to specify what should happen when someone is setting up a starting configuration of the game grid. To do this, he might want to give an example of a number of steps taken by a person who is toggling cells. Skipping over the narrative part, he might do this by writing up the following scenario into a plain text document (which is the type of input document that JBehave reads):

Given a 5 by 5 game
When I toggle the cell at (3, 2)
Then the grid should look like
When I toggle the cell at (3, 1)
Then the grid should look like
When I toggle the cell at (3, 2)
Then the grid should look like

JBehave recognizes the terms Given (a precondition which defines the start of a scenario), When (an event trigger) and Then (a postcondition which must be verified as the outcome of the action that follows the trigger) as the formal-language keywords of the scenario. Based on this, JBehave is capable of reading the text file containing the scenario and parsing it into clauses (a set-up clause and then three event triggers with verifiable conditions). JBehave then takes these clauses and passes them on to code that is capable of setting up a test, responding to the event triggers and verifying the outcome. This code must be written by the developers in the project team (in Java, because that is the platform JBehave is based on). In this case, the code might look like this:

private Game game;
private StringRenderer renderer;

@Given("a $width by $height game")
public void theGameIsRunning(int width, int height) {
    game = new Game(width, height);
    renderer = new StringRenderer();
}

@When("I toggle the cell at ($column, $row)")
public void iToggleTheCellAt(int column, int row) {
    game.toggleCellAt(column, row);
}

@Then("the grid should look like $grid")
public void theGridShouldLookLike(String grid) {
    assertThat(renderer.asString(), equalTo(grid));
}

The code has a method for every type of clause in a scenario. JBehave will identify which method goes with which clause through the use of annotations and will call each method in order while running through the scenario. The text in each clause in the scenario is expected to match the template text given in the code for that clause (for example, a Given in a scenario is expected to be followed by a clause of the form “a X by Y game”). JBehave supports the matching of actual clauses to templates and has built-in support for picking terms out of the template and passing them to methods in the test code as parameters. The test code provides an implementation for each clause type in a scenario which interacts with the code that is being tested and performs an actual test based on the scenario. In this case:

  • The theGameIsRunning method reacts to a Given clause by setting up the initial game grid.
  • The iToggleTheCellAt method reacts to a When clause by firing off the toggle event described in the clause.
  • The theGridShouldLookLike method reacts to a Then clause by comparing the actual state of the game grid to the expected state from the scenario.

The primary function of this code is to be a bridge between a text file with a story and the actual code being tested. Note that the test code has access to the code being tested (in this case an instance of Game) and is deliberately simple in nature (it has to be; otherwise a developer would end up having to write tests for his tests).
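The template matching described above can be approximated with standard regular expressions. The sketch below is a simplified, hypothetical reimplementation of that idea (JBehave's real matcher is more elaborate): the template text is quoted literally, and each $-placeholder becomes a capturing group.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A sketch (not JBehave's actual implementation) of how a clause template
// such as "a $width by $height game" can be compiled into a regular
// expression that matches a concrete clause and captures the $-parameters.
public class TemplateMatcher {

    // Quote the template literally, then turn each $name placeholder
    // into a capturing group by breaking out of the \Q...\E quoting.
    public static Pattern toPattern(String template) {
        String regex = Pattern.quote(template)
                .replaceAll("\\$\\w+", "\\\\E(.+?)\\\\Q");
        return Pattern.compile(regex);
    }

    // Return the captured parameter values, or null if the clause
    // does not match the template.
    public static List<String> extract(String template, String clause) {
        Matcher m = toPattern(template).matcher(clause);
        if (!m.matches()) return null;
        List<String> params = new ArrayList<>();
        for (int i = 1; i <= m.groupCount(); i++) params.add(m.group(i));
        return params;
    }

    public static void main(String[] args) {
        System.out.println(extract("a $width by $height game", "a 5 by 5 game"));
        // prints [5, 5]
    }
}
```

A real framework would additionally convert the captured strings to the declared parameter types (int, String, and so on) before invoking the annotated method.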

Finally, in order to run the tests, JBehave requires some plumbing code that identifies the text files which contain scenarios and which inject dependencies (like instances of Game) into the test code. This plumbing code is not illustrated here, since it is a technical requirement of JBehave and does not relate directly to the principle of BDD-style testing.

Story versus specification

A separate subcategory of behavior-driven development is formed by tools that use specifications as an input language rather than user stories. An example of this style is the RSpec tool that was also developed by Dan North. Specification tools don’t use user stories as an input format for test scenarios but rather use functional specifications for units that are being tested. These specifications often have a more technical nature than user stories and are usually less convenient for communication with business personnel than are user stories. An example of a specification for a stack might look like this:

Specification: Stack

When a new stack is created
Then it is empty

When an element is added to the stack
Then that element is at the top of the stack

When a stack has N elements 
And element E is on top of the stack
Then a pop operation returns E
And the new size of the stack is N-1

Such a specification may exactly specify the behavior of the component being tested, but is less meaningful to a business user. As a result, specification-based testing is seen in BDD practice as a complement to story-based testing and operates at a lower level. Specification testing is often seen as a replacement for free-format unit testing.
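To see how directly such a specification maps onto executable checks, the sketch below restates the three stack clauses as plain Java checks, with java.util.ArrayDeque standing in for the stack; a specification tool would express the same expectations in its own DSL rather than as hand-written conditionals.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The stack specification above, restated as plain executable checks.
// ArrayDeque is used here as the stack implementation under test.
public class StackSpecification {

    public static boolean specificationHolds() {
        // When a new stack is created, then it is empty.
        Deque<String> stack = new ArrayDeque<>();
        if (!stack.isEmpty()) return false;

        // When an element is added to the stack,
        // then that element is at the top of the stack.
        stack.push("a");
        stack.push("b");
        if (!"b".equals(stack.peek())) return false;

        // When a stack has N elements and element E is on top of the stack,
        // then a pop operation returns E and the new size of the stack is N-1.
        int n = stack.size();
        String e = stack.peek();
        if (!stack.pop().equals(e)) return false;
        return stack.size() == n - 1;
    }

    public static void main(String[] args) {
        System.out.println("specification holds: " + specificationHolds());
    }
}
```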

Specification testing tools like RSpec and JDave are somewhat different in nature from tools like JBehave. Since they are seen as alternatives to basic unit testing tools like JUnit, these tools tend to favor forgoing the separation of story and testing code and prefer embedding the specification directly in the test code instead. For example, an RSpec test for a hashtable might look like this:

describe Hash do
  before(:each) do
    @hash = { :hello => 'world' }
  end

  it "should return a blank instance" do
    Hash.new.should eql({})
  end

  it "should hash the correct information in a key" do
    @hash[:hello].should eql('world')
  end
end

This example shows a specification in readable language embedded in executable code. In this case the tool formalizes the specification language into the language of the test code by adding methods named it and should. There is also the concept of a specification precondition: the before section establishes the preconditions that the specification is based on.

Cucumber lets software development teams describe how software should behave in plain text. The text is written in a business-readable domain-specific language and serves as documentation, automated tests and development-aid – all rolled into one format.

Cucumber works with Ruby, Java, .NET, Flex or web applications written in any language. It has been translated to over 40 spoken languages.

Cucumber also supports more succinct tests in tables – similar to what FIT does. Users can view the examples and documentation to learn more about Cucumber tables.

Gherkin gives us a lightweight structure for documenting examples of the behavior our stakeholders want, in a way that it can be easily understood both by the stakeholders and by Cucumber. Although we can call Gherkin a programming language, its primary design goal is human readability, meaning you can write automated tests that read like documentation.

Using mocks

BDD proponents claim that the use of “should” and “ensureThat” in BDD examples encourages developers to question whether the responsibilities they’re assigning to their classes are appropriate, or whether they can be delegated or moved to another class entirely. Practitioners use an object which is simpler than the collaborating code, and provides the same interface but more predictable behavior. This is injected into the code which needs it, and examples of that code’s behavior are written using this object instead of the production version.

These objects can either be created by hand, or created using a mocking framework such as mock.

Questioning responsibilities in this way, and using mocks to fulfill the required roles of collaborating classes, encourages the use of Role-based Interfaces. It also helps to keep the classes small and loosely coupled.


Agile software development 

Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes tight iterations throughout the development cycle.

Agile Development

The Agile Manifesto introduced the term in 2001. Since then, the Agile Movement, with all its values, principles, methods, practices, tools, champions and practitioners, philosophies and cultures, has significantly changed the landscape of modern software engineering and commercial software development in the Internet era.

Martin Fowler is widely recognized as one of the key founders of the agile methods.
Incremental software development methods have been traced back to 1957. In 1974, a paper by E. A. Edmonds introduced an adaptive software development process. Concurrently and independently, the same methods were developed and deployed by the New York Telephone Company’s Systems Development Center under the direction of Dan Gielan. In the early 1970s, Tom Gilb started publishing the concepts of Evolutionary Project Management (EVO), which has evolved into Competitive Engineering. During the mid-to-late 1970s, Gielan lectured extensively throughout the U.S. on this methodology, its practices, and its benefits.

So-called lightweight agile software development methods evolved in the mid-1990s as a reaction against the heavyweight waterfall-oriented methods, which were characterized by their critics as being heavily regulated, regimented, micromanaged and overly incremental approaches to development.

Proponents of lightweight agile methods contend that they are returning to development practices that were present early in the history of software development.

Agile Methodologies

Early implementations of agile methods include Rational Unified Process (1994), Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven Development (1997), and Dynamic Systems Development Method (DSDM) (1995). These are now collectively referred to as agile methodologies, after the Agile Manifesto was published in 2001.

On February 11-13, 2001, at The Lodge at Snowbird ski resort in the Wasatch mountains of Utah, seventeen people met to talk, ski, relax, and try to find common ground and of course, to eat. What emerged was the Agile Software Development Manifesto. Representatives from Extreme Programming, SCRUM, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others sympathetic to the need for an alternative to documentation driven, heavyweight software development processes convened.

Now, a bigger gathering of organizational anarchists would be hard to find, so what emerged from this meeting was symbolic: a Manifesto for Agile Software Development signed by all participants. The only concern with the term agile came from Martin Fowler (a Brit, for those who don’t know him), who allowed that most Americans didn’t know how to pronounce the word agile.

Alistair Cockburn’s initial concerns reflected the early thoughts of many participants: “I personally didn’t expect that this particular group of agilites to ever agree on anything substantive.”
But his post-meeting feelings were also shared: “Speaking for myself, I am delighted by the final phrasing [of the Manifesto]. I was surprised that the others appeared equally delighted by the final phrasing. So we did agree on something substantive.”

Naming themselves “The Agile Alliance,” this group of independent thinkers about software development, and sometimes competitors to each other, agreed on the Manifesto for Agile Software Development.

But while the Manifesto provides some specific ideas, there is a deeper theme that drives many, but not all, to be sure, members of the alliance. At the close of the two-day meeting, Bob Martin joked that he was about to make a “mushy” statement. But while tinged with humor, few disagreed with Bob’s sentiments that they all felt privileged to work with a group of people who held a set of compatible values, a set of values based on trust and respect for each other and promoting organizational models based on people, collaboration, and building the types of organizational communities in which they would want to work.

At the core, Jim Highsmith believes

Agile Methodologists are really about “mushy” stuff: about delivering good products to customers by operating in an environment that does more than talk about “people as our most important asset” but actually “acts” as if people were the most important, and loses the word “asset”. So in the final analysis, the meteoric rise of interest in, and sometimes tremendous criticism of, Agile Methodologies is about the mushy stuff of values and culture.

For example, Jim thinks that ultimately, Extreme Programming has mushroomed in use and interest, not because of pair-programming or refactoring, but because, taken as a whole, the practices define a developer community freed from the baggage of Dilbertesque corporations.

Kent Beck told the story of an early job in which he estimated a programming effort of six weeks for two people. After his manager reassigned the other programmer at the beginning of the project, he completed the project in twelve weeks and felt terrible about himself! The boss of course harangued Kent about how slow he was throughout the second six weeks. Kent, somewhat despondent because he was such a “failure” as a programmer, finally realized that his original estimate of 6 weeks was extremely accurate for 2 people and that his “failure” was really the manager’s failure; indeed, the failure of the standard “fixed” process mindset that so frequently plagues our industry.

According to Jim Highsmith:
This type of situation goes on every day: marketing, or management, or external customers, internal customers, and, yes, even developers don’t want to make hard trade-off decisions, so they impose irrational demands through the imposition of corporate power structures. This isn’t merely a software development problem; it runs throughout Dilbertesque organizations.
In order to succeed in the new economy, to move aggressively into the era of e-business, e-commerce, and the web, companies have to rid themselves of their Dilbert manifestations of make-work and arcane policies. This freedom from the inanities of corporate life attracts proponents of Agile Methodologies, and scares the begeebers out of traditionalists. Quite frankly, the Agile approaches scare corporate bureaucrats, at least those that are happy pushing process for process’ sake versus trying to do the best for the “customer” and deliver something timely and tangible and “as promised”, because they run out of places to hide.

The meeting at Snowbird was incubated at an earlier get together of Extreme Programming proponents, and a few “outsiders,” organized by Kent Beck at the Rogue River Lodge in Oregon in the spring of 2000. At the Rogue River meeting attendees voiced support for a variety of “Light” methodologies, but nothing formal occurred. During 2000 a number of articles were written that referenced the category of “Light” or “Lightweight” processes. A number of these articles referred to “Light methodologies, such as Extreme Programming, Adaptive Software Development, Crystal, and SCRUM”. In conversations, no one really liked the moniker “Light”, but it seemed to stick for the time being.

In September 2000, Bob Martin from Object Mentor in Chicago, started the next meeting ball rolling with an email; “I’d like to convene a small (two day) conference in the January to February 2001 timeframe here in Chicago. The purpose of this conference is to get all the lightweight method leaders in one room. All of you are invited; and I’d be interested to know who else I should approach.” Bob set up a Wiki site and the discussions raged.

Early on, Alistair Cockburn weighed in with an epistle that identified the general disgruntlement with the word Light: “I don’t mind the methodology being called light in weight, but I’m not sure I want to be referred to as a lightweight attending a lightweight methodologists meeting. It somehow sounds like a bunch of skinny, feebleminded lightweight people trying to remember what day it is.”

The fiercest debate was over location! There was serious concern about Chicago in wintertime (cold and nothing fun to do); Snowbird, Utah (cold, but fun things to do, at least for those who ski on their heads, like Martin Fowler tried on day one); and Anguilla in the Caribbean (warm and fun, but time-consuming to get to). In the end, Snowbird and skiing won out; however, a few people like Ron Jeffries want a warmer place next time.

The Agile Manifesto reads, in its entirety, as follows:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

  • Kent Beck
  • James Grenning
  • Robert C. Martin
  • Mike Beedle
  • Jim Highsmith
  • Steve Mellor
  • Arie van Bennekum
  • Andrew Hunt
  • Ken Schwaber
  • Alistair Cockburn
  • Ron Jeffries
  • Jeff Sutherland
  • Ward Cunningham
  • Jon Kern
  • Dave Thomas
  • Martin Fowler
  • Brian Marick

In 2001, the above authors drafted the agile manifesto. This declaration may be freely copied in any form, but only in its entirety through this notice.

The meaning of the manifesto items on the left within the agile software development context are:

Individuals and interactions – in agile development, self-organization and motivation are important, as are interactions like co-location and pair programming.
Working software – working software will be more useful and welcome than just presenting documents to clients in meetings.
Customer collaboration – requirements cannot be fully collected at the beginning of the software development cycle, therefore continuous customer or stakeholder involvement is very important.
Responding to change – agile development is focused on quick responses to change and continuous development.
Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith commented that the Agile movement was not opposed to methodology:

The Agile movement is not anti-methodology, in fact, many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of never-maintained and rarely-used information. We plan, but recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as "hackers" are ignorant of both the methodologies and the original definition of the term hacker.
—Jim Highsmith, History: The Agile Manifesto

Agile principles
The Agile Manifesto is based on twelve principles:

  • Customer satisfaction by rapid delivery of useful software
  • Welcome changing requirements, even late in development
  • Working software is delivered frequently (weeks rather than months)
  • Working software is the principal measure of progress
  • Sustainable development, able to maintain a constant pace
  • Close, daily cooperation between business people and developers
  • Face-to-face conversation is the best form of communication (co-location)
  • Projects are built around motivated individuals, who should be trusted
  • Continuous attention to technical excellence and good design
  • Simplicity—the art of maximizing the amount of work not done—is essential
  • Self-organizing teams
  • Regular adaptation to changing circumstances

Later, Ken Schwaber and others founded the Scrum Alliance and created the Certified Scrum Master programs and their derivatives. Ken left the Scrum Alliance in the fall of 2009 and founded Scrum.org to further improve the quality and effectiveness of Scrum.

In 2005, a group headed by Alistair Cockburn and Jim Highsmith wrote an addendum of project management principles, the Declaration of Interdependence, to guide software project management according to agile development methods.

In 2009, a movement spearheaded by Robert C Martin wrote an extension of software development principles, the Software Craftsmanship Manifesto, to guide agile software development according to professional conduct and mastery.


The term XP predates the term Agile by several years. XP stands for Extreme Programming, and is a suite of practices, principles, and values invented by Kent Beck in the late ‘90s. Nowadays the principles and values are not as well known, but the practices survive. Those practices are:

The Planning Game

Development proceeds in very short iterations, typically 1-2 weeks in duration. Prior to each iteration features are broken down into very small stories. Stories are estimated by developers and then chosen by stakeholders based on their estimated cost and business value. The sum of story estimates planned for the current iteration cannot exceed the sum of estimates completed in the previous iteration.
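The capacity rule above can be sketched as a simple filter: stories are accepted, in priority order, only while the running total of estimates stays within last iteration's completed points. The class name and the greedy skip-and-continue policy below are illustrative assumptions, not part of any XP tool (in practice, stakeholders choose the stories themselves within the limit):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Planning Game capacity rule: the sum of
// estimates planned for the current iteration cannot exceed the sum of
// estimates completed in the previous iteration.
public class PlanningGame {

    public static List<Integer> selectStories(List<Integer> estimates,
                                              int completedLastIteration) {
        List<Integer> planned = new ArrayList<>();
        int committed = 0;
        for (int estimate : estimates) {   // estimates in priority order
            if (committed + estimate <= completedLastIteration) {
                planned.add(estimate);
                committed += estimate;
            }
        }
        return planned;
    }

    public static void main(String[] args) {
        // 13 points were completed last iteration; stories in priority order.
        System.out.println(selectStories(List.of(5, 3, 8, 2), 13));
        // prints [5, 3, 2]: the 8-point story would push the total past 13
    }
}
```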

Whole Team

The team consists of developers, business analysts, QA, project managers, etc. The team works together in a lab space or open area where collaboration and communication are maximized.

Acceptance Tests

Stories and features are defined by automated tests written by the business analysts and QA. No story or feature can be said to be done until the suite of acceptance tests that define it is passing.

Small Releases

Systems are released to production or pre-production very frequently. An interval of 2-3 months is the maximum; the minimum can be once per iteration.

Continuous Integration

The whole system is built and tested end-to-end several times each day. While new tests are made to pass, no previously passing tests are allowed to break. Developers must continuously keep the system in a deployable state.

Collective Ownership

Code, and other work artifacts, are not owned by individuals. Any member of the team may work on any artifact at any time.

Coding Standard

Code, and other work artifacts, look as if they were written by the team. Each team member follows the team standard for format and appearance of the artifacts.


Metaphor

Names within code and other work artifacts are chosen to be evocative of the system being created.

Sustainable Pace

Building software is a marathon, not a sprint. Team members must run at a rate they can sustain for the long haul. Overtime must be carefully controlled and limited. Tired people do not win.

Pair Programming

Code and other work artifacts are produced by pairs of individuals working together. One member of the pair is responsible for the task at hand, and the other helps out. Pairs change frequently (every two hours or so) but responsibility stays with the owner.

Pair programming, an agile development technique used by XP.

The affordability of pair programming is a key issue. If it is much more expensive, managers simply will not permit it. Skeptics assume that incorporating pair programming will double code development expenses and critical manpower needs. Along with code development costs, however, other expenses, such as quality assurance and field support costs, must also be considered. IBM reported spending about $250 million repairing and reinstalling fixes to 30,000 customer-reported problems. That is over $8,000 for each defect!

In 1999, a controlled experiment at the University of Utah investigated the economics of pair programming. Advanced undergraduates in a Software Engineering course participated in the experiment. One third of the class coded class projects as they had for years: by themselves.
The rest of the class completed their projects with a collaborative partner. After the initial adjustment period in the first program (the “jelling” assignment), the pairs spent only about 15% more time on the program than the individuals. Development costs certainly do not double with pair programming!
Significantly, the resulting code had about 15% fewer defects, and these results are statistically significant. The initial 15% increase in code development expense is recovered in the reduction in defects.
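A back-of-the-envelope calculation shows how the trade-off plays out: 15% more development effort against 15% fewer defects, each avoided defect saving roughly the $8,000 cited above. The concrete numbers in the example (a $50,000 solo effort shipping 10 defects) are illustrative assumptions, not data from the study:

```java
// Back-of-the-envelope arithmetic for the figures quoted above: a 15%
// increase in development effort against 15% fewer field defects.
public class PairProgrammingEconomics {

    public static double netCost(double soloDevCost, int soloDefects,
                                 double costPerDefect) {
        double pairDevCost = soloDevCost * 1.15;   // 15% more time
        double pairDefects = soloDefects * 0.85;   // 15% fewer defects
        double soloTotal = soloDevCost + soloDefects * costPerDefect;
        double pairTotal = pairDevCost + pairDefects * costPerDefect;
        return pairTotal - soloTotal;  // negative means pairing is cheaper
    }

    public static void main(String[] args) {
        // A $50,000 solo effort that would ship 10 defects at $8,000 each:
        // $7,500 of extra development time versus $12,000 of defect savings,
        // so pairing comes out $4,500 cheaper under these assumptions.
        System.out.printf("net cost of pairing: $%.0f%n",
                netCost(50_000, 10, 8_000));
    }
}
```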

There are many specific agile development methods. Most promote development, teamwork, collaboration, and process adaptability throughout the life-cycle of the project.

Test Driven Development

Developers are not allowed to write production code until they have written a failing unit test. They may not write more of a unit test than is sufficient to fail. They may not write more production code than is sufficient to pass the failing test. The unit tests are maintained and executed as part of the build process. No previously passing unit test is allowed to fail.
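In miniature, that red-green cycle might look like the sketch below. The word-counting example and all names are hypothetical, and plain boolean checks stand in for a unit testing framework:

```java
// A miniature illustration of the TDD rules above: the test exists first
// and fails, then just enough production code is written to make it pass,
// and the test stays in the build permanently.
public class TddCycle {

    // Step 3: only enough production code to make the failing test pass.
    public static int countWords(String text) {
        String trimmed = text.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }

    // Steps 1-2: written before countWords existed, when it was failing;
    // it is kept and executed as part of every build from then on.
    public static boolean testCountWords() {
        return countWords("") == 0
            && countWords("one") == 1
            && countWords("  two  words ") == 2;
    }

    public static void main(String[] args) {
        System.out.println("tests pass: " + testCountWords());
    }
}
```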


Refactoring

Code, and other work artifacts, are continuously reviewed and kept as clean as possible. It is not sufficient that code works; it must also be clean.

Simple Design

The simplest design that suffices for the task at hand, is the right design. More complex and general designs may become useful later, but not now. We do not wish to carry the weight of that complexity while it is not needed. Sufficient for the day are the complexities therein.

Iterative, incremental and evolutionary

Agile methods break tasks into small increments with minimal planning and do not directly involve long-term planning. Iterations are short time frames (timeboxes) that typically last from one to four weeks. Each iteration involves a cross-functional team working in all functions: planning, requirements analysis, design, coding, unit testing, and acceptance testing. At the end of the iteration a working product is demonstrated to stakeholders. This minimizes overall risk and allows the project to adapt to changes quickly. An iteration might not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration. Multiple iterations might be required to release a product or new features.

Scrum Cycle


Efficient and face-to-face communication

No matter what development disciplines are required, each agile team will contain a customer representative, e.g. Product Owner in Scrum. This person is appointed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer mid-iteration questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (ROI) and ensuring alignment with customer needs and company goals.

Information Radiators

In agile software development, an information radiator is a (normally large) physical display located prominently in an office, where passers-by can see it. It presents an up-to-date summary of the status of a software project or other product. The name was coined by Alistair Cockburn, and described in his 2002 book Agile Software Development. A build light indicator may be used to inform a team about the current status of their project.

Very short feedback loop and adaptation cycle

A common characteristic of agile development is the daily status meeting or “stand-up”, e.g. the Daily Scrum. In a brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are.

Quality focus

Specific tools and techniques, such as continuous integration, automated unit testing, pair programming, test-driven development, design patterns, domain-driven design, code refactoring and other techniques are often used to improve quality and enhance project agility.

Compared to traditional software engineering, agile development is mainly targeted at complex systems and projects with dynamic, non-deterministic and non-linear characteristics, where accurate estimates, stable plans and predictions are often hard to get in early stages, and big up-front designs and arrangements would probably cause a lot of waste, i.e. would not be economically sound. These basic arguments and precious industry experiences, learned from years of successes and failures, have helped shape Agile’s favoring of adaptive, iterative and evolutionary development.

Adaptive vs. Predictive
Development methods exist on a continuum from adaptive to predictive. Agile methods lie on the adaptive side of this continuum. One key feature of adaptive development methods is a “Rolling Wave” approach to schedule planning, which identifies milestones but leaves flexibility in the path to reach them, and also allows for the milestones themselves to change. Adaptive methods focus on adapting quickly to changing realities. When the needs of a project change, an adaptive team changes as well. An adaptive team will have difficulty describing exactly what will happen in the future. The further away a date is, the more vague an adaptive method will be about what will happen on that date. An adaptive team cannot report exactly what tasks they will do next week, but only which features they plan for next month. When asked about a release six months from now, an adaptive team might be able to report only the mission statement for the release, or a statement of expected value vs. cost.

Predictive methods, in contrast, focus on analysing and planning the future in detail and cater for known risks. In the extremes, a predictive team can report exactly what features and tasks are planned for the entire length of the development process. Predictive methods rely on effective early phase analysis and if this goes very wrong, the project may have difficulty changing direction. Predictive teams will often institute a Change Control Board to ensure that only the most valuable changes are considered.

Risk analysis can be used to choose between adaptive (agile or value-driven) and predictive (plan-driven) methods.

Iterative vs. Waterfall
One of the differences between agile and waterfall is that testing of the software is conducted at different stages of the software development life cycle. In the Waterfall model, there is always a separate testing phase near the completion of an implementation phase. In Agile, however, and especially in Extreme Programming, testing is usually done concurrently with coding, or at the very least testing begins in early iterations.

After almost a decade of mismanagement and waste at the FBI, its CIO turned the agency’s maligned case management implementation into an agile project. Two years later, the system is live. This relative success, as well as the example of other federal agencies, shows that agile can work in Washington.

The U.S. government is also serious about Agile: not only is agile part of Federal CIO Steven VanRoekel’s “Future First” initiative, but the Government Accountability Office (GAO) has issued a report on the federal government’s use of agile.

GAO identified 32 practices and approaches as effective for applying Agile software development methods to IT projects. The practices generally align with five key software development project management activities: strategic planning, organizational commitment and collaboration, preparation, execution, and evaluation. Officials who have used Agile methods on federal projects generally agreed that these practices are effective. Specifically, each practice was used and found effective by officials from at least one agency, and ten practices were used and found effective by officials from all five agencies.

Code vs. Documentation
In a letter to IEEE Computer, Steven Rakitin expressed cynicism about agile development, calling an article supporting agile software development “yet another attempt to undermine the discipline of software engineering” and translating “Working software over comprehensive documentation” as “We want to spend all our time coding. Remember, real programmers don’t write documentation.”

This is disputed by proponents of Agile software development, who state that developers should write documentation if that is the best way to achieve the relevant goals, but that there are often better ways to achieve those goals than writing static documentation. Scott Ambler states that documentation should be “Just Barely Good Enough” (JBGE): too much or overly comprehensive documentation usually causes waste, and developers rarely trust detailed documentation because it is usually out of sync with the code, while too little documentation may also cause problems for maintenance, communication, learning and knowledge sharing. Alistair Cockburn wrote of the Crystal method:

Crystal considers development to be a series of co-operative games, and the provision of documentation is intended to be enough to help the next win at the next game. The work products for Crystal include use cases, risk list, iteration plan, core domain models, and design notes to inform on choices...however there are no templates for these documents and descriptions are necessarily vague, but the objective is clear, just enough documentation for the next game. I always tend to characterize this to my team as: what would you want to know if you joined the team tomorrow.
—Alistair Cockburn

Agile methods
Well-known agile software development methods and/or process frameworks include:

  • Adaptive Software Development (ASD)
  • Agile Modeling
  • Agile Unified Process (AUP)
  • Crystal Methods (Crystal Clear)
  • Disciplined Agile Delivery
  • Dynamic Systems Development Method (DSDM)
  • Extreme Programming (XP)
  • Feature Driven Development (FDD)
  • Lean software development
  • Scrum
  • Scrum-ban

Software development life-cycle support

The agile methods focus on different aspects of the software development life cycle. Some focus on the practices (e.g. XP, Pragmatic Programming, Agile Modeling), while others focus on managing software projects (e.g. Scrum). Some approaches provide full coverage of the development life cycle (e.g. DSDM, IBM RUP), while most are suitable from the requirements specification phase onwards (FDD, for example). There is thus a clear difference between the various agile methods in this regard.

Agile practices
Agile development is supported by a bundle of concrete practices suggested by the agile methods, covering areas like requirements, design, modeling, coding, testing, project management, process, quality, etc. Some notable agile practices include:

  • Acceptance test-driven development (ATDD)
  • Agile Modeling
  • Backlogs (Product and Sprint)
  • Behavior-driven development (BDD)
  • Cross-functional team
  • Continuous integration (CI)
  • Domain-driven design (DDD)
  • Information radiators (Scrum board, Kanban board, Task board, Burndown chart)
  • Iterative and incremental development (IID)
  • Pair programming
  • Planning poker
  • Refactoring
  • Scrum meetings (Sprint planning, Daily scrum, Sprint review and retrospective)
  • Test-driven development (TDD)
  • Agile testing
  • Timeboxing
  • Use case
  • User story
  • Story-driven modeling
  • Velocity tracking

The Agile Alliance has provided a comprehensive online collection with a guide map to applying agile practices.
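Two of the practices listed above, velocity tracking and the burndown chart, reduce to simple arithmetic over story points. The sketch below shows both; the sprint numbers are made up purely for illustration.

```python
# Illustrative sketch of two of the metrics named above:
# velocity (story points completed per sprint) and a sprint burndown
# (points remaining at the end of each day).

def velocity(completed_points: list) -> float:
    """Average story points completed over recent sprints."""
    return sum(completed_points) / len(completed_points)

def burndown(total_points: int, done_per_day: list) -> list:
    """Points remaining at the end of each day of the sprint."""
    remaining, series = total_points, []
    for done in done_per_day:
        remaining -= done
        series.append(remaining)
    return series

if __name__ == "__main__":
    print(velocity([21, 18, 24]))         # 21.0
    print(burndown(30, [4, 6, 3, 7, 5]))  # [26, 20, 17, 10, 5]
```

Plotted against an ideal straight line from the sprint’s total down to zero, the burndown series is exactly the kind of chart teams post as an information radiator.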

This section explains how Primavera Systems, a vendor of project portfolio management solutions, turned around its development organization in 2003. In terms of value to the company, the development organization went from having low confidence in its ability to deliver and repeatedly failing to meet expectations, to being cheered for a release that was the hit of their user conference, with good quality and twice the expected functionality. Bonuses were forthcoming for this release. Magic? No, just leadership, hard work, and a process that turned the leadership and hard work into results: the Agile processes.

Primavera, a 21-year-old software company, sells project portfolio management solutions to help firms manage all their projects, programs, and resources. Primavera was thriving, and its growth was leading to increasingly complex client needs; this put a strain on its ability to release a product that pleased its entire customer base. Throughout 2002, the development organization worked overtime to develop release 3.5. As with other projects in the past, the last three months were particularly bad; the developers sacrificed weekends and home life to get the release out with all of the new requirements. The result: a release seen by management as incomplete and three weeks late, and an exhausted team with low morale.

Primavera decided to try the Agile development processes Scrum and XP to fix its problems. Scrum is an overarching process for planning and managing development projects, while XP prescribes individual team practices that help developers, analysts, testers and managers perform at peak efficiency. Though they are often implemented separately, Scrum and XP are even more effective when implemented together. Primavera adopted Scrum first to improve the way it managed product development, then adopted XP practices to upgrade its product quality, and then customized the amalgam to suit its own needs.

The result of Primavera’s experiment is a highly satisfied customer base, and a highly motivated, energetic development environment. Of equal value, everyone within Primavera now has a process for working together to build the best releases possible, and is aware of, and participates in, the tradeoff decisions involved. People who hadn’t had a chance to work together in years put their shoulders to making each release a success, from the CEO, CTO and VPs to the entire development organization. When the experiment started, Primavera was a very quiet, subdued place to work. It now feels like a vibrant community.

“We pull in a lot of feedback from all of our customers and look for the similarities across conversations with resource managers, functional managers, program managers, and executives,” says Michael Shomberg, Primavera Vice President of Marketing. “These methodologies are very empowering. Decisions are driven down to where knowledge is applied. Decisions are better and communication back to the customers is real and exciting. There are no over-promises or expectations that run the risk of disappointment, because the customer sees on the screen what they had in their head – or better. That’s the wow we want to experience, with our customers and everyone in our company.”

Method tailoring
In the literature, different terms refer to the notion of method adaptation, including ‘method tailoring’, ‘method fragment adaptation’ and ‘situational method engineering’. Method tailoring is defined as:

A process or capability in which human agents determine a system development approach for a specific project situation through responsive changes in, and dynamic interplays between contexts, intentions, and method fragments.

Potentially, almost all agile methods are suitable for method tailoring. Even the DSDM method is being used for this purpose and has been successfully tailored in a CMM context. Situation-appropriateness can be considered a distinguishing characteristic between agile methods and traditional software development methods, with the latter being much more rigid and prescriptive. The practical implication is that agile methods allow project teams to adapt working practices according to the needs of individual projects. Practices are concrete activities and products that are part of a method framework. At a more extreme level, the philosophy behind the method, consisting of a number of principles, could be adapted (Aydin, 2004).

Extreme Programming (XP) makes the need for method adaptation explicit. One of the fundamental ideas of XP is that no one process fits every project, but rather that practices should be tailored to the needs of individual projects. Partial adoption of XP practices, as suggested by Beck, has been reported on several occasions.

Mehdi Mirakhorli proposes a tailoring practice that provides a sufficient road map and guidelines for adapting all the practices. RDP (rule-description-practice) Practice is designed for customizing XP. This practice, first proposed as a long research paper in the APSO workshop at the ICSE 2008 conference, is currently the only proposed and applicable method for customizing XP. Although it is specifically a solution for XP, this practice could be extended to other methodologies. At first glance, it appears to belong to the category of static method adaptation, but experience with RDP Practice suggests that it can be treated as dynamic method adaptation. The distinction between static and dynamic method adaptation is subtle.

Sabre Airline Solutions adopted XP in 2001. With its new model, Sabre does iterative development in small, simple steps. The company uses two-week iterations, and customers see a new release every one to three months. Features, called “stories,” are expressed in user terms and must be simple enough to be coded, tested and integrated in two weeks or less.
Automated unit tests (against the programmer’s criteria) and broader acceptance tests (against customer requirements) must be passed at the end of each iteration before the next can begin. Unit and acceptance tests for each feature are written before the feature is coded. If a developer has trouble writing a test, he doesn’t clearly understand the feature.
Actual coding is done in pairs by teams in open labs, promoting collective ownership of code, although individuals sometimes do the simplest tasks. Programmers are re-paired frequently, often every day or two. They sign up for the tasks they want to do and the person they want to pair with.
Every project team has an “XP coach” and an application subject-matter expert called the XP customer. The XP customer stays in or near the programming lab all or most of the time. He decides on and prioritizes product features, writes the stories for programmers and signs off on the results.
“Refactoring” code—rewriting it not to fix bugs or add features but to make it less redundant and more maintainable—is strongly encouraged. Sabre says the concept hardly existed at the company before XP because it was too difficult.
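Refactoring of the kind Sabre encourages can be shown in miniature: restructure redundant code without changing its behaviour, then verify the behaviour with the existing tests. The functions below are hypothetical examples for this sketch, not Sabre code.

```python
# Refactoring sketch: behaviour-preserving restructuring of redundant
# code. Both functions return identical results; only structure changes.

# Before: duplicated formatting logic in each branch.
def describe_old(name, price):
    if price >= 100:
        return name + ": $" + str(price) + " (premium)"
    else:
        return name + ": $" + str(price) + " (standard)"

# After: the shared part is factored out; only the varying part branches.
def describe_new(name, price):
    tier = "premium" if price >= 100 else "standard"
    return f"{name}: ${price} ({tier})"

# The safety net: tests confirm behaviour is unchanged.
assert describe_old("Widget", 120) == describe_new("Widget", 120)
assert describe_old("Bolt", 5) == describe_new("Bolt", 5)
print("refactoring preserved behaviour")
```

This is also why Sabre’s automated unit tests matter for refactoring: without a passing test suite to rerun, there is no cheap way to confirm that a restructuring changed nothing observable.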

Comparison with other methods


Agile methods have much in common with the Rapid Application Development techniques of the 1980s and 1990s as espoused by James Martin and others. In addition to technology-focused methods, customer- and design-centered methods, such as Visualization-Driven Rapid Prototyping developed by Brian Willison, work to engage customers and end users to facilitate agile software development.


In 2008 the Software Engineering Institute (SEI) published the technical report “CMMI or Agile: Why Not Embrace Both” to make clear that the Capability Maturity Model Integration and Agile can co-exist. Modern CMMI-compatible development processes are also iterative. The CMMI Version 1.3 includes tips for implementing Agile and CMMI process improvement together.

Measuring agility
While agility can be seen as a means to an end, a number of approaches have been proposed to quantify agility. Agility Index Measurements (AIM) score projects against a number of agility factors to achieve a total. The similarly named Agility Measurement Index scores developments against five dimensions of a software project (duration, risk, novelty, effort, and interaction). Other techniques are based on measurable goals. Another study, using fuzzy mathematics, has suggested that project velocity can be used as a metric of agility. There are also agile self-assessments to determine whether a team is using agile practices (Nokia test, Karlskrona test, 42 points test).
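To make the idea of a dimension-based score concrete, the sketch below computes a weighted average over the five dimensions mentioned above. The 1–5 rating scale and equal default weights are assumptions for illustration; this is not the published AIM or AMI formula.

```python
# Hedged sketch of an agility-index-style score: a weighted average of
# ratings on the five dimensions the text mentions. Scale and weights
# are illustrative assumptions, not the published metrics.

DIMENSIONS = ("duration", "risk", "novelty", "effort", "interaction")

def agility_score(ratings, weights=None):
    """Weighted average of 1-5 ratings across the five dimensions."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / total_weight

if __name__ == "__main__":
    project = {"duration": 4, "risk": 3, "novelty": 5,
               "effort": 2, "interaction": 4}
    print(agility_score(project))  # 3.6
```

A team that considered, say, interaction more important than duration could pass a non-uniform weights dictionary, which is precisely the kind of tuning that makes the practical comparability of such metrics debatable.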

While such approaches have been proposed to measure agility, the practical application of such metrics is still debated. There is agile software development ROI data available from the CSIAC ROI Dashboard.

Experience and adoption

One of the early studies reporting gains in quality, productivity, and business satisfaction from using Agile methods was a survey conducted by Shine Technologies from November 2002 to January 2003. A similar survey conducted in 2006 by Scott Ambler, the Practice Leader for Agile Development with IBM Rational’s Methods Group, reported similar benefits. Others claim that agile development methods are still too young to require extensive academic proof of their success.

Large-scale and distributed Agile
Large-scale agile software development remains an active research area. Agile development has been widely seen as more suitable for certain types of environment, including small teams of experts. Positive reception towards Agile methods has also been observed in the embedded domain across Europe in recent years.

Some things that may negatively impact the success of an agile project are:

  • Large-scale development efforts (>20 developers), though scaling strategies and evidence of some large projects have been described.
  • Distributed development efforts (non-colocated teams). Strategies have been described in Bridging the Distance and Using an Agile Software Process with Offshore Development.
  • Forcing an agile process on a development team.
  • Mission-critical systems where failure is not an option at any cost (e.g. software for avionics).
The early successes, challenges and limitations encountered in the adoption of agile methods in a large organization have also been documented.

Agile offshore
In terms of outsourcing agile development, Michael Hackett, senior vice president of LogiGear Corporation, has stated that “the offshore team … should have expertise, experience, good communication skills, inter-cultural understanding, trust and understanding between members and groups and with each other.”

Agile methodologies can be inefficient in large organizations and certain types of projects. Agile methods seem best for developmental and non-sequential projects. Many organizations believe that agile methodologies are too extreme and adopt a hybrid approach that mixes elements of agile and plan-driven approaches.

The term “agile” has also been criticized as being a management fad that simply describes existing good practices under new jargon, promotes a “one size fits all” mindset towards development strategies, and wrongly emphasizes method over results.

Alistair Cockburn organized a celebration of the 10th anniversary of the Agile Manifesto in Snowbird, Utah on February 12, 2011, gathering some 30+ people who had been involved at the original meeting and since. A list of about 20 elephants in the room (“undiscussable” agile topics/issues) was collected, covering aspects such as the alliances, failures and limitations of agile practices and context (possible causes: commercial interests, decontextualization, no obvious way to make progress based on failure, limited objective evidence, cognitive biases and reasoning fallacies), and politics and culture.

As Philippe Kruchten wrote in the end:
The agile movement is in some ways a bit like a teenager: very self-conscious, checking constantly its appearance in a mirror, accepting few criticisms, only interested in being with its peers, rejecting en bloc all wisdom from the past, just because it is from the past, adopting fads and new jargon, at times cocky and arrogant. But I have no doubts that it will mature further, become more open to the outside world, more reflective, and also therefore more effective.

Applications Outside of Software Development
Agile methods have been extensively used for development of software products and some of them use certain characteristics of software, such as object technologies.[54] However, these techniques can be applied to the development of non-software products, such as computers, motor vehicles, medical devices, food, and clothing; see Flexible product development.

Agile development paradigms can be used in other areas of life, such as raising children. Its success in child development might be founded on some basic management principles: communication, adaptation and awareness. Bruce Feiler has shown that the basic Agile Development paradigms can be applied to household management and raising children. In his TED Talk, “Agile programming — for your family”, he describes how these paradigms brought significant changes to his household, such as the kids doing the dishes and taking out the trash, and a decrease in his children’s emotional outbursts, which in turn increased their emotional stability. In this sense, agile development is more than a set of software development rules; it can be something simpler and broader, like a problem-solving guide.