PEARL XVIII : Elucidation on ATDD – Acceptance Test Driven Development

TDD helps software developers produce working, high-quality code that’s maintainable and, most of all, reliable. Customers are rarely, however, interested in buying code. Customers want software that helps them to be more productive, make more money, maintain or improve operational capability, take over a market, and so forth. This is what we need to deliver with our software—functionality to support business function or market needs. Acceptance test-driven development (acceptance TDD) is what helps developers build high-quality software that fulfills the business’s needs as reliably as TDD helps ensure the software’s technical quality.

Acceptance Test Driven Development (ATDD) is a practice in which the whole team collaboratively discusses acceptance criteria, with examples, and then distills them into a set of concrete acceptance tests before development begins. It’s the best way to ensure that we all have the same shared understanding of what it is we’re actually building. It’s also the best way to ensure we have a shared definition of Done.

Acceptance TDD helps coordinate software projects in a way that lets us deliver exactly what the customer wants when they want it, and that doesn’t let us implement the required functionality only halfway.

An essential property of acceptance TDD is that it’s a team activity and a team process.

Acceptance tests are specifications for the desired behavior and functionality of a system. They tell us, for a given user story, how the system handles certain conditions and inputs and with what kinds of outcomes. There are a number of properties that an acceptance test should exhibit:

An important property of acceptance tests is that they use the language of the domain and the customer instead of geek-speak only the programmer understands. This is the fundamental requirement for having the customer involved in the creation of acceptance tests and helps enormously with the job of validating that the tests are correct and sufficient. Scattering too much technical lingo into our tests makes us more vulnerable to having a requirement bug sneak into a production release—because the customer’s eyes glaze over when reading geek-speak and the developers are drawn to the technology rather than the real issue of specifying the right thing.

By using a domain language in specifying our tests, we are also not unnecessarily tied to the implementation, which is useful since we need to be able to refactor our system effectively. By using domain language, the changes we need to make to our existing tests when refactoring are typically non-existent or at most trivial.

Concise, precise, and unambiguous

Largely for the same reasons we write our acceptance tests using the domain’s own language, we want to keep our tests simple and concise. We write each of our acceptance tests to verify a single aspect or scenario relevant to the user story at hand. We keep our tests uncluttered, easy to understand, and easy to translate to executable tests. The less ambiguity involved, the better we are at avoiding mistakes and at working with our tests.

We might write our stories as simple reminders in the form of a bulleted list, or we might opt to spell them out as complete sentences describing the expected behavior. In either case, the goal is to provide just enough information for us to remember the important things we need to discuss and test for, rather than documenting those details beforehand. Card, conversation, confirmation—these are the three Cs that make up a user story. Those same three Cs could be applied to acceptance tests as well.

Yet another common property of acceptance tests is that they might not be implemented (translation: automated) using the same programming language as the system they are testing. Whether this is the case depends on the technologies involved and on the overall architecture of the system under test. For example, some programming languages are easier to inter-operate with than others. Similarly, it is easy to write acceptance tests for a web application through the HTTP protocol with practically any language we want, but it’s often impossible to run acceptance tests for embedded software written in any language other than that of the system itself.

The main reason for choosing a different programming language for implementing acceptance tests than the one we’re using for our production code (and, often, unit tests) is that the needs of acceptance tests are often radically different from the properties of the programming language we use for implementing our system. To give you an example, a particular real-time system might be feasible to implement only with native C code, whereas it would be rather verbose, slow, and error-prone to express tests for the same real-time system in C compared to, for example, a scripting language.

The ideal syntax for expressing our acceptance tests could be a declarative, tabular structure such as a spreadsheet, or it could be something closer to a sequence of higher-level actions written in plain English. If we want to have our customer collaborate with developers on our acceptance tests, a full-blown programming language such as Java, C/C++, or C# is likely not an option. “Best tool for the job” means more than technically best, because the programmer’s job description also includes collaborating with the customer.
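To make this concrete, here is a made-up sketch (not from the original) of what a declarative, tabular test might look like for a hypothetical bank-transfer rule; each row is one case the customer can read and verify at a glance:

    starting balance | transfer amount | expected outcome
    100.00           | 50.00           | transfer accepted, balance 50.00
    100.00           | 150.00          | rejected: insufficient funds
    100.00           | -10.00          | rejected: invalid amount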

The acceptance TDD cycle

In its simplest form, the process of acceptance test-driven development can be expressed as the simple cycle illustrated by figure 1.

Figure 1. The acceptance TDD cycle

This cycle continues throughout the iteration as long as we have more stories to implement, starting over again from picking a user story; then writing tests for the chosen story, then turning those tests into automated, executable tests; and finally implementing the functionality to make our acceptance tests pass.

In practice, of course, things aren’t always that simple. We might not yet have user stories, the stories might be ambiguous or even contradictory, the stories might not have been prioritized, the stories might have dependencies that affect their scheduling, and so on.

Step 1: Pick a user story

The first step is to decide which story to work on next. That’s not always an easy job but, fortunately, most of the time we’ll already have some relative priorities in place for all the stories in our iteration’s work backlog. Assuming that we have such priorities, the simplest way to go is to always pick the story that’s on top of the stack—that is, the story that’s considered the most important of those remaining. Again, sometimes, it’s not that simple.

Generally speaking, the stories come from the various planning meetings held throughout the project, where the customer informally describes new features, providing examples to illustrate how the system should work in each situation. In those meetings, the developers and testers typically ask questions about the features, making the meetings a medium for intense learning and discussion. Some of that information gets documented on a story card (whether virtual or physical), and some of it remains as tacit knowledge. In those same planning meetings, the customer prioritizes the stack of user stories by their business value (including business risk) and technical risk (as estimated by the team).

There are times when the highest-priority story requires skills that we don’t possess, or that we feel we don’t have enough of. In those situations, we might want to skip to the next story to see whether it makes more sense for us to work on it. Teams that have adopted pair programming don’t suffer from this problem as often. When working in pairs, even the most cross-functional team can usually accommodate by adjusting their current pairs in a way that frees the necessary skills for picking the highest-priority story from the pile.

The least qualified person

The traditional way of dividing work on a team is for everyone to do what they do best. It’s intuitive. It’s safe. But it might not be the best way of completing the task. Arlo Belshee presented an experience report at the Agile 2005 conference, where he described how his company had started consciously tweaking the way they work and measuring what works and what doesn’t. Among their findings about stuff that worked was a practice of giving tasks to the least qualified person.

There can be more issues to deal with regarding picking user stories, but most of the time the solution comes easily through judicious application of common sense. For now, let’s move on to the second step in our process: writing tests for the story we’ve just picked.

Step 2: Write tests for a story

With a story card in hand (or onscreen if you’ve opted for managing your stories online), our next step is to write tests for the story.

The first thing to do is, of course, get together with the customer. In practice, this means having a team member sit down with the customer (they’re the one who should own the tests, remember?) and start sketching out a list of tests for the story in question.

As usual, there are personal preferences for how to go about doing this, but a common preference is to quickly sketch out a list of rough scenarios or aspects of the story we want to test in order to say that the feature has been implemented correctly. There’s time to elaborate on those rough scenarios later, when we’re implementing the story or implementing the acceptance tests. At this time, however, we’re only talking about coming up with a bulleted list of things we need to test—things that have to work in order for us to claim the story is done.
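As an illustration, a rough list for the incoming-support-call story used in figure 2 later on might read (the details here are assumed for the example):

  • Valid account number: the customer’s information is displayed onscreen.
  • Invalid account number: the agent is told that no such account exists.
  • Missing account number: the agent can look up the customer by name.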

On timing

Especially in projects that have been going on for a while already, the customer and the development team probably have some kind of an idea of what’s going to get scheduled into the next iteration in the upcoming planning meeting. In such projects, the customer and the team have probably spent some time during the previous iteration sketching acceptance tests for the features most likely to get picked in the next iteration’s planning session. This means that we might be writing acceptance tests for stories that we’re not going to implement until maybe a couple of weeks from now. We also might think of missing tests during implementation, for example, so this test-writing might happen pretty much at any point in time between writing the user story and the moment when the customer accepts the story as completed.

Once we have such a rough list, we start elaborating the tests, adding more detail and discussing how this and that should work, whether there are any specifics about the user interface the customer would like to dictate, and so forth. Depending on the type of feature, the tests might be a set of interaction sequences or flows, or they might be a set of inputs and expected outputs. Often, especially with flow-style tests, the tests specify some kind of a starting state, a context the test assumes is part of the system.

Other than the level of detail and the sequence in which we work to add that detail, there’s a question of when—or whether—to start writing the tests into an executable format. Witness step 3 in our process: automating the tests.

Step 3: Automate the tests

The next step, once we’ve got acceptance tests written down on the back of a story card, on a whiteboard, in some electronic format, or on pink napkins, is to turn those tests into something we can execute automatically and get back a simple pass-or-fail result. Whereas we’ve called the previous step writing tests, we might call this step implementing or automating those tests.

To avoid potential confusion about how the executable acceptance tests differ from the acceptance tests we wrote earlier, we can simply call the automated versions executable tests.

We might turn acceptance tests into an executable format using a variety of approaches and tools. The most popular category of tools these days seems to be what we call table-based tools. Their premise is that the tabular format of tables, rows, and columns makes it easy for us to specify our tests in a way that’s both human and machine readable. Figure 2 presents an example of how we might draft an executable test for the first test, “Valid account number”.

Figure 2. Example of an executable test, sketched on a piece of paper

In figure 2, we’ve outlined the steps we’re going to execute as part of our executable test in order to verify that the case of an incoming support call with a valid account number is handled as expected, displaying the customer’s information onscreen. Our test is already expressed in a format that’s easy to turn into a tabular table format using our tool of choice—for example, something that eats HTML tables and translates their content into a sequence of method invocations to Java code according to some documented rules.

The inevitable fact is that, most of the time, no such tool is available that would understand our domain-language tests in our table format and be able to wire those tests into calls to the system under test. In practice, we’ll have to do that wiring ourselves anyway—most likely the developers or testers will do so using a programming language.

To summarize this duality of turning acceptance tests into executable tests, we’re dealing with expressing the tests in a format that’s both human and machine readable and with writing the plumbing code to connect those tests to the system under test.
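To make this duality concrete, here is a minimal, self-contained Java sketch of the plumbing idea; the SupportDesk stub and all names are invented for illustration and do not belong to any particular tool (real table-based tools such as FIT provide the table parsing and reporting for us):

    // The "plumbing" that connects domain vocabulary to the system under test.
    // Everything here is a hypothetical stand-in, not a real tool's API.
    import java.util.HashMap;
    import java.util.Map;

    public class SupportCallFixtureDemo {

        // Stub standing in for the real system under test.
        static class SupportDesk {
            private final Map<String, String> accounts = new HashMap<>();
            private String displayedCustomer;

            SupportDesk() { accounts.put("12345", "Jane Doe"); }

            void receiveCall(String accountNumber) {
                displayedCustomer = accounts.get(accountNumber); // null if unknown
            }

            String displayedCustomer() { return displayedCustomer; }
        }

        // Each row or step of the test table maps to one of these methods:
        // domain vocabulary on the outside, API calls on the inside.
        static class SupportCallFixture {
            private final SupportDesk desk = new SupportDesk();

            void incomingCallWithAccountNumber(String accountNumber) {
                desk.receiveCall(accountNumber);
            }

            boolean customerInfoIsDisplayed() {
                return desk.displayedCustomer() != null;
            }
        }

        public static void main(String[] args) {
            SupportCallFixture fixture = new SupportCallFixture();
            fixture.incomingCallWithAccountNumber("12345"); // a step from the test table
            System.out.println(fixture.customerInfoIsDisplayed()
                    ? "PASS: customer info displayed"
                    : "FAIL: customer info not displayed");
        }
    }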

On style

The example in figure 2 is a flow-style test, based on a sequence of actions and parameters for those actions. This is not the only style at our disposal, however. A declarative approach to expressing the desired functionality or business rule can often yield more compact and more expressive tests than what’s possible with flow-style tests.

Yet our goal should—once again—be to keep our tests as simple and to the point as possible, ideally speaking in terms of what we’re doing instead of how we’re doing it.

With regard to writing things down, there are variations in how different teams do this. Some start writing the tests right away in an electronic format using a word processor; some even go so far as to write them directly in an executable syntax. Some teams run their tests as early as during the initial authoring session. Other people prefer to work on the tests alongside the customer using a physical medium, leaving the running of the executable tests for a later time. For example, agilists like to sketch the executable tests on a whiteboard or a piece of paper first, and pick up the computerized tools only when they’ve got something they’re relatively sure won’t need to be changed right away.

The benefit is that we’re less likely to fall prey to the technology—agilists have noticed that tools often steal too much focus from the topic, which we don’t want. Using software also has this strange effect of making the artifacts being worked on seem more formal, more final, and thus in need of more polishing up. All that costs time and money, keeping us from the important work.

In projects where the customer’s availability is the bottleneck, especially in the beginning of an iteration (and this is the case more often than not), it makes a lot of sense to have a team member do the possibly laborious or uninteresting translation step on their own rather than keep the customer from working on elaborating tests for other stories. The downside to having the team member formulate the executable syntax alone is that the customer might feel less ownership in the acceptance tests in general—after all, it’s not the exact same piece they were working on. Furthermore, depending on the chosen test-automation tool and its syntax, the customer might even have difficulty reading the acceptance tests once they’ve been shoved into the executable format dictated by the tool.

Let’s consider a case where our test-automation tool is a framework for which we express our tests in a simple but powerful scripting language such as Ruby. Figure 3 highlights how the customer is likely to feel less ownership of the implemented acceptance test than of the sketch they participated in writing. Although the executable snippet of Ruby code certainly reads nicely to a programmer, it’s not so easy for a non-technical person to relate to.

Figure 3. Contrast between a sketch and an actual, implemented executable acceptance test

Another aspect to take into consideration is whether we should make all tests executable to start with or whether we should automate one test at a time as we progress with the implementation. Some teams—and this is largely dependent on the level of certainty regarding the requirements—do fine by automating all known tests for a given story up front before moving on to implementing the story.

Some teams prefer moving in baby steps like they do in regular test-driven development, implementing one test, implementing the respective slice of the story, implementing another test, and so forth. The downside to automating all tests up front is, of course, that we’re risking more unfinished work—inventory, if you will—than we would be if we’d implemented one slice at a time. Agilists’ preference is strongly on the side of implementing acceptance tests one at a time rather than trying to get them all done in one big burst. It should be mentioned, though, that elaborating acceptance tests toward their executable form during planning sessions could help a team understand the complexity of the story better and, thus, aid in making better estimates.

Many of the decisions regarding physical versus electronic medium, translating to executable syntax together or not, and so forth also depend to a large degree on the people. Some customers have no trouble working on the tests directly in the executable format (especially if the tool supports developing a domain-specific language). Some customers don’t have trouble identifying with tests that have been translated from their writing. As in so many aspects of software development, it depends.

Regardless of our choice of how many tests to automate at a time, after finishing this step of the cycle we have at least one acceptance test turned into an executable format; and before we proceed to implementing the functionality in question, we will have also written the necessary plumbing code for letting the test-automation tool know what those funny words mean in terms of technology. That is, we will have identified what the system should do when we say “select a transaction” or “place a call”—in terms of the programming API or other interface exposed by the system under test.
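The following toy Java sketch shows one way such a mapping could work; the phrases and actions are made up for illustration, and real tools provide their own fixture or step-definition mechanisms for this:

    // Teaching the harness what domain phrases mean, as a plain lookup table.
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    public class StepGlue {
        private final Map<String, Consumer<String>> steps = new HashMap<>();

        public StepGlue() {
            // "place a call" in a test becomes an API call (stubbed here as printing)
            steps.put("place a call", number -> System.out.println("dialing " + number));
            steps.put("select a transaction", id -> System.out.println("selecting transaction " + id));
        }

        public void run(String phrase, String argument) {
            Consumer<String> step = steps.get(phrase);
            if (step == null) {
                throw new IllegalArgumentException("unknown step: " + phrase);
            }
            step.accept(argument);
        }

        public static void main(String[] args) {
            StepGlue glue = new StepGlue();
            glue.run("place a call", "555-1234");
            glue.run("select a transaction", "42");
        }
    }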

To put it another way, once we’ve gotten this far, we have an acceptance test that we can execute and that tells us that the specified functionality is missing. The next step is naturally to make that test pass—that is, implement the functionality to satisfy the failing test.

Step 4: Implement the functionality

Next on our plate is to come up with the functionality that makes our newly minted acceptance test(s) pass. Acceptance test-driven development doesn’t say how we should implement the functionality; but, needless to say, it is generally considered best practice among practitioners of acceptance TDD to do the implementation using test-driven development.

In general, a given story represents a piece of customer-valued functionality that is split—by the developers—into a set of tasks required for creating that functionality. It is these tasks that the developer then proceeds to tackle using whatever tools necessary, including TDD. When a given task is completed, the developer moves on to the next task, and so forth, until the story is completed—which is indicated by the acceptance tests executing successfully.

In practice, this process means plenty of small iterations within iterations. Figure 4 visualizes this transition to and from test-driven development inside the acceptance TDD process.

Figure 4. The relationship between test-driven development and acceptance test-driven development

As we can see, the fourth step of the acceptance test-driven development cycle, implementing the necessary functionality to fix a failing acceptance test, can be expanded into a sequence of smaller TDD cycles of test-code-refactor, building up the missing functionality in a piecemeal fashion until the acceptance test passes.
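To make the nesting concrete, one pass through the inner cycle might look like the following Java sketch using JUnit; the AccountLookup class is hypothetical, invented here only to show the test-first rhythm:

    // The failing unit test comes first...
    import org.junit.Test;
    import static org.junit.Assert.*;

    import java.util.HashMap;
    import java.util.Map;

    public class AccountLookupTest {
        @Test
        public void findsCustomerByValidAccountNumber() {
            AccountLookup lookup = new AccountLookup();
            lookup.register("12345", "Jane Doe");
            assertEquals("Jane Doe", lookup.customerFor("12345"));
        }
    }

    // ...followed by just enough production code to make it pass, then refactoring:
    class AccountLookup {
        private final Map<String, String> customers = new HashMap<>();

        void register(String account, String name) { customers.put(account, name); }

        String customerFor(String account) { return customers.get(account); }
    }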

While the developer is working on a story, frequently consulting with the customer on how this and that ought to work, there will undoubtedly be occasions when the developer comes up with a scenario—a test—that the system should probably handle in addition to those the customer and developer have already written down. Being rational creatures, we add those acceptance tests to our list, perhaps after asking the customer what they think of the test. After all, they might not assign as much value to the given aspect or functionality of the story as we the developers might.

At some point, we’ve iterated through all the tasks and all the tests we’ve identified for the story, and the acceptance tests are happily passing. At this point, depending on whether we opted for automating all tests up front or automating them just in time, we either go back to Step 3 to automate another test or to Step 1 to pick a brand-new story to work on.

Getting acceptance tests passing is intensive work.

Acceptance TDD inside an iteration

A healthy iteration consists mostly of hard work. Spend too much time in meetings or planning ahead, and you’re soon behind the iteration schedule and need to de-scope. Given a clear goal for the iteration, good user stories, and access to someone to answer our questions, most of the iteration should be spent in small cycles of a few hours to a couple of days writing acceptance tests, collaborating with the customer where necessary, making the tests executable, and implementing the missing functionality with test-driven development.

As such, the four-step acceptance test-driven development cycle of picking a story, writing tests for the story, implementing the tests, and implementing the story is only a fraction of the larger continuum of a whole iteration made of multiple—even up to dozens—of user stories, depending on the size of your team and the size of your stories. In order to gain understanding of how the small four-step cycle for a single user story fits into the iteration, we’re going to touch the zoom dial and see what an iteration might look like on a time line with the acceptance TDD–related activities scattered over the duration of a single iteration.

Figure 5 is an attempt to describe what such a time line might look like for a single iteration with nine user stories to implement. Each of the bars represents a single user story moving through the steps of writing acceptance tests, implementing acceptance tests, and implementing the story itself. In practice, there could (and probably would) be more iterations within each story, because we generally don’t write and implement all acceptance tests in one go but rather proceed through tests one by one.

Figure 5. Putting acceptance test-driven development on a time line

Notice how the stories get completed almost from the beginning of the iteration? That’s the secret ingredient that acceptance TDD packs to provide indication of real progress. Our two imaginary developers (or pairs of developers and/or testers, if we’re pair programming) start working on the next-highest priority story as soon as they’re done with their current story. The developers don’t begin working on a new story before the current story is done. Thus, there are always two user stories getting worked on, and functionality gets completed throughout the iteration.

So, if the iteration doesn’t include writing the user stories, where are they coming from? As you may know if you’re familiar with agile methods, there is usually some kind of a planning meeting in the beginning of the iteration where the customer decides which stories get implemented in that iteration and which stories are left in the stack for the future. Because we’re scheduling the stories in that meeting, clearly we’ll have to have those stories written before the meeting, no?

That’s where continuous planning comes into the picture.

Continuous planning

Although an iteration should ideally be an autonomous, closed system that includes everything necessary to meet the iteration’s goal, it is often necessary—and useful—to prepare for the next iteration during the previous one by allocating some amount of time for pre-iteration planning activities. Suggestions regarding the time we should allocate for this continuous planning range from 10 to 15% of the team’s total time available during the iteration. As usual, it’s good to start with something that has worked for others and, once we’ve got some experience doing things that way, begin zeroing in on a number that seems to work best in our particular context.
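For example, for a five-person team running two-week iterations, 10% of the team’s total time works out to roughly five person-days of preparation spread across the iteration.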

In practice, these pre-iteration planning activities might involve going through the backlog of user stories, identifying stories that are most likely to get scheduled for the next iteration, identifying stories that have been rendered obsolete, and so forth. This ongoing pre-iteration planning is also the context in which we carry out the writing of user stories and, to some extent, the writing of the first acceptance tests. The rationale here is to be prepared for the next iteration’s beginning when the backlog of stories is put on the table. At that point, the better we know our backlog, the more smoothly the planning session goes, and the faster we get back to work, crunching out valuable functionality for our customer.

By writing, estimating, splitting if necessary, and prioritizing user stories before the planning meeting, we ensure quick and productive planning meetings and are able to get back to delivering valuable features sooner.

It would be nice if we had all acceptance tests implemented (and failing) before we start implementing the production code. That is often not a realistic scenario, however, because tests require effort as well—they don’t just appear from thin air—and investing our time in implementing the complete set of acceptance tests up front doesn’t make any more sense than big up-front design does in the larger scale. It is much more efficient to implement acceptance tests as we go, user story by user story.

Teams that have dedicated testing personnel can have the testing engineers work together with the customer to make acceptance tests executable while developers start implementing the functionality for the stories. Most teams, however, are much more homogeneous in this regard and participate in writing and implementing acceptance tests together, with nobody designated as “the acceptance test guy.”

The process is largely dependent on the availability of the customer and the test and software engineers. If your customer is only onsite for a few days in the beginning of each iteration, you probably need to do some trade-offs in order to make the most out of those few days and defer work that can be deferred until after the customer is no longer available. Similarly, somebody has to write code, and it’s likely not the customer who’ll do that; software and test engineers need to be involved at some point.

We start from those stories we’ll be working on first, of course, and implement the user story in parallel with automating the acceptance tests that we’ll use to verify our work. And, if at all possible, we avoid having the same person implement the tests and the production code in order to minimize our risk of human nature playing its tricks on us.

Again, we want to guard against putting too much up-front effort into automating our acceptance tests—we might end up with a huge bunch of tests but no working software. It’s much better to proceed in small steps, delivering one story at a time. No matter how valuable our acceptance tests are to us, their value to the customer is negligible without the associated functionality.

The mid-iteration sanity check

Agilists like to have an informal sanity check in the middle of an iteration. At that point, we should have approximately half of the stories scheduled for the iteration running and passing. This might not be the case for the first iteration, due to having to build up more infrastructure than in later iterations; but, especially as we get better at estimating our stories, it should always be in the rough vicinity of having 50% of the stories passing their tests.

Of course, we’ll be tracking story completion throughout the iteration. Sometimes we realize early on that our estimated burn rate was clearly off, and we must adjust the backlog immediately and accordingly. By the middle of an iteration, however, we should generally be pretty close to having half the stories for the iteration completed. If not, the chances are that there’s more work to do than the team’s capacity can sustain, or the stories are too big compared to the iteration length.

A story’s burn-down rate is consistently a more accurate source of prediction than an inherently optimistic software developer. If it looks like we’re not going to live up to our planned iteration content, we decrease our load.

Decreasing the load

When it looks like we’re running out of time, we decrease the load. We don’t work harder (or smarter). We’re way past that illusion. We don’t want to sacrifice quality, because producing good quality guarantees the sustainability of our productivity, whereas bad quality only creates more rework and grinds our progress to a halt. We also don’t want to have our developers burn out from working overtime, especially when we know that working overtime doesn’t make any difference in the long run. Instead, we adjust the one thing we can adjust: the iteration’s scope. In general, there are three ways to do that: swap, drop, and split. Tom DeMarco and Timothy Lister have done a great favor to our industry with their best-selling books Slack (DeMarco; Broadway, 2001) and Peopleware (DeMarco, Lister; Dorset House, 1999), which explain how overtime reduces productivity.

Swapping stories is simple. We trade one story for another, smaller one, thereby decreasing our workload. Again, we must consult the customer in order to assure that we still have the best possible content for the current iteration, given our best knowledge of how much work we can complete.

Dropping user stories is almost as straightforward as swapping them. “This low-priority story right here, we won’t do in this iteration. We’ll put it back into the product backlog.” But dropping the lowest-priority story might not always be the best option, considering the overall value delivered by the iteration—that particular story might be of low priority in itself, but it might also be part of a bigger whole that our customer cares about. We don’t want to optimize locally. Instead, we want to make sure that what we deliver in the end of the iteration is a cohesive whole that makes sense and can stand on its own.

The third way to decrease our load, splitting, is a bit trickier than dropping and swapping.

Splitting stories

How do we split a story we already tried hard to keep as small as possible during the initial planning game? In general, we can split stories by function or by detail (or both). Consider a story such as “As a regular user of the online banking application, I want to optionally select the recipient information for a bank transfer from a list of most frequently and recently used accounts based on my history so that I don’t have to type in the details for the recipients every time.”

Splitting this story by function could mean dividing the story into “…from a list of recently used accounts” and “…from a list of most frequently used accounts.” Plus, depending on what the customer means by “most frequently and recently used,” we might end up adding another story along the lines of “…from a weighted list of most frequently and recently used accounts” where the weighted list uses an algorithm specified by the customer. Having these multiple smaller stories, we could then start by implementing a subset of the original, large story’s functionality and then add to it by implementing the other slices, building on what we have implemented for the earlier stories.

Splitting it by detail could result in separate stories for remembering only the account numbers, then also the recipient names, then the VAT numbers, and so forth. The usefulness of this approach is greatly dependent on the distribution of the overall effort between the details—if most of the work is in building the common infrastructure rather than in adding support for one more detail, then splitting by function might be a better option. On the other hand, if a significant part of the effort is in, for example, manually adding stuff to various places in the code base to support one new persistent field, splitting by detail might make sense.

Regardless of the chosen strategy, the most important thing to keep in mind is that, after the splitting, the resulting user stories should still represent something that makes sense—something valuable—to the customer.

PEARL IV : INVEST in good User Stories and SMART Tasks

In software development and product management, a user story is one or more sentences in the everyday or business language of the end user or user of a system that captures what a user does or needs to do as part of his or her job function. User stories are used with agile software development methodologies as the basis for defining the functions a business system must provide, and to facilitate requirements management. It captures the ‘who’, ‘what’ and ‘why’ of a requirement in a simple, concise way, often limited in detail by what can be hand-written on a small paper notecard.

User stories originated with Extreme Programming (XP), whose first written description in 1998 only claimed that customers defined project scope “with user stories, which are like use cases”. Rather than being offered as a distinct practice, they were described as one of the “game pieces” used in the planning game. However, much of the later literature concentrated on arguing all the ways in which user stories are “unlike” use cases, trying to answer in a more practical manner how requirements are handled in XP and, more generally, in agile projects. This drove the emergence, over the years, of a more sophisticated account of user stories.

In 2001, Ron Jeffries proposed the well-known Three C’s formula, i.e. Card, Conversation, Confirmation, to capture the components of a user story:

A Card (or often a Post-it note) is a physical token giving tangible and durable form to what would otherwise only be an abstraction;

A Conversation takes place at different times and places during a project between the various stakeholders concerned by the given feature (customers, users, developers, testers, etc.); it is largely verbal but most often supplemented by documentation;

The Confirmation, the more formal the better, ensures that the objectives the conversation revolved around have finally been reached.

A pidgin language is a simplified language, usually used for trade, that allows people who can’t communicate in their native language to nonetheless work together. User stories act like this. We don’t expect customers or users to view the system the same way that programmers do; stories act as a pidgin language where both sides can agree enough to work together effectively.

User stories are written by or for business users or customers as a primary way to influence the functionality of the system being developed. User stories may also be written by developers to express non-functional requirements (security, performance, quality, etc.), though primarily it is the task of a product manager to ensure user stories are captured.

Agile projects, especially Scrum, use a product backlog, which is a prioritized list of the functionality to be developed in a product or service. Although product backlog items can be whatever the team desires, user stories have emerged as the best and most popular form of product backlog items.

While a product backlog can be thought of as a replacement for the requirements document of a traditional project, it is important to remember that the written part of an agile user story (“As a user, I want …”) is incomplete until the discussions about that story occur.

It’s often best to think of the written part as a pointer to the real requirement. User stories could point to a diagram depicting a workflow, a spreadsheet showing how to perform a calculation, or any other artifact the product owner or team desires.

When the time comes for creating user stories, one of the developers gets together with a customer representative, e.g. a product manager (or product owner in Scrum), who has the responsibility for formulating the user stories. The developer may use a series of questions to get the customer representative going, such as asking about the desirability of some particular functionality, but must take care not to dominate the idea-creation process.

A user story is not a specification, but a communication and collaboration tool. Stories should never be handed off to the development team. They should rather be part of a conversation: the product owner and the team should discuss the stories, or even better, write them together. This leverages the creativity and the knowledge of the team and usually results in better user stories.

As the customer representative conceives a user story, it is written down on a note card (e.g. 3×5 inches or 8×13 cm) with a name and a brief description. If the developer and the customer representative find a user story deficient in some way (too large, complicated, or imprecise), it is rewritten until satisfactory – often using the INVEST guidelines. Commonly, user stories are not considered definitive once they have been written down, since requirements tend to change throughout the development lifecycle; agile processes handle this by not carving requirements in stone up front.

A Scrum epic is a large user story. There’s no magic threshold at which we call a particular story an epic. It just means “big user story.”

A “theme” is a collection of related user stories.

Important considerations for writing user stories:

  1. Stakeholders write user stories. An important concept is that your project stakeholders write the user stories, not the developers. User stories are simple enough that people can learn to write them in a few minutes, so it makes sense that the domain experts (the stakeholders) write them. As its name suggests, a user story describes how a customer or a user employs the product. You should therefore tell the stories from the user’s perspective.
  2. Personas: A great way to capture your insights about the users and customers is to use personas. But there is more to it: the persona goals help you discover your stories. Simply ask yourself: what functionality does the product have to provide to meet the goals of the personas? A user persona is a representation of the goals and behavior of a hypothesized group of users. In most cases, personas are synthesized from data collected from interviews with users. They are captured in 1–2 page descriptions that include behavior patterns, goals, skills, attitudes, and environment, with a few fictional personal details to make the persona a realistic character. For each product, more than one persona is usually created, but one persona should always be the primary focus for the design.
  3. Use the simplest tool: User stories are often written on index cards, as you see in Figure 2 (below). Index cards are very easy to work with and are therefore an inclusive modeling technique.
  4. Remember non-functional requirements: Stories can be used to describe a wide variety of requirements types. For example, in Figure 1 (below), the “Students can purchase parking passes online” user story is a usage requirement similar to a use case, whereas “Transcripts will be available online via a standard browser” is closer to a technical requirement.
  5. Indicate the estimated size: You can see that the user story card in Figure 2 includes an estimate for the effort to implement the user story. One way to estimate is to assign user story points to each card, a relative indication of how long it will take a pair of programmers to implement the story. If the team knows that it currently takes them on average 2.5 hours per point, then the user story in Figure 2 will take around 10 hours to implement.
  6. Indicate the priority: Requirements, including defects identified as part of your independent parallel testing activities or by your operations and support efforts, are prioritized by your project stakeholders (or representatives thereof, such as product owners) and added to the stack in the appropriate place. You can easily maintain a stack of prioritized requirements by moving the cards around in the stack as appropriate. You can see that the user story card includes an indication of the priority; agilists often use a scale of one to ten, with one being the highest priority. Other prioritization approaches are possible: priorities of High/Medium/Low are often used instead of numbers, and some people will even assign each card its own unique priority order number (e.g. 344, 345, …). You want to indicate the priority somehow in case you drop the deck of cards, or if you’re using more sophisticated electronic tooling. Pick a strategy that works well for your team. You also see that the priority changed at some point in the past; this is a normal thing, and it motivates the team to move the card to another point in the stack. The implication is that your prioritization strategy needs to support this sort of activity. The advice is to keep it simple.
  7. Optionally include a unique identifier: The card also includes a unique identifier for the user story, in this case 173. The only reason to do this is if you need to maintain some sort of traceability between the user story and other artifacts, in particular acceptance tests.

Figure 1. Example user stories.

  • Students can purchase monthly parking passes online.
  • Parking passes can be paid via credit cards.
  • Parking passes can be paid via PayPal.
  • Professors can input student marks.
  • Students can obtain their current seminar schedule.
  • Students can order official transcripts.
  • Students can only enroll in seminars for which they have prerequisites.
  • Transcripts will be available online via a standard browser.

Figure 2. User story card (informal, high level).

Format

A team at Connextra developed the traditional user-story template in 2001:

“As a <role>, I want <goal/desire> so that <benefit>”

Mike Cohn, a well-known author on user stories, regards the “so that” clause as optional:

“As a <role>, I want <goal/desire>”

Chris Matts suggested that “hunting the value” was the first step in successfully delivering software, and proposed this alternative as part of Feature Injection:

“In order to <receive benefit> as a <role>, I want <goal/desire>”

Another template based on the Five Ws specifies:

“As <who> <when> <where>, I <what> because <why>.”
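For example, the first story of Figure 1 below could be written in the Connextra format as “As a student, I want to purchase a monthly parking pass online so that I don’t have to queue at the parking office” (the “so that” clause is an assumed motivation, added for illustration).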

User Scenarios

A user scenario expands upon your user stories by including details about how a system might be interpreted, experienced, and used. Like user stories, you might imagine several scenarios for each persona group that you anticipate will make up your audience. Your scenarios should anticipate the user’s goal, specify any assumed knowledge, and speculate on the details of the user’s interaction experience.

User Stories and Planning

There are two areas where user stories affect the planning process on agile projects:

  1. Scheduling. In the agile change management process, work items, including stories, are addressed in priority order. The implication is that the priority assigned to a story affects when the work will be done to implement that requirement. Project stakeholders are responsible for prioritizing requirements. A numerical prioritization strategy may be used (perhaps on a scale of 1 to 20), or a MoSCoW (Must/Should/Could/Won’t) approach. Stakeholders also have the right to define new requirements, change their minds about existing requirements, and even reprioritize requirements as they see fit. However, stakeholders must also be responsible for making decisions and providing information in a timely manner.
  2. Estimating. Developers are responsible for estimating the effort required to implement the things they will work on, including stories. The implication is that because you can only do so much work in an iteration, the size of the work items (including stories) affects when those work items will be addressed. Although you may fear that developers don’t have the requisite estimating skills, and this is often true at first, the fact is that it doesn’t take long for people to get pretty good at estimating when they know they’re going to have to live up to those estimates. If you’ve adopted the pair programming practice, then a user story must be implementable by two people in a single iteration/sprint. Therefore, if you’re working in one-week iterations, each user story must describe less than one week’s worth of work. Of course, if you aren’t taking a non-solo development approach such as pair programming, the user story would need to be implementable by a single person within a single iteration. Large stories, sometimes called epics, would need to be broken up into smaller stories to meet this criterion. A toy sketch of this capacity-driven scheduling follows the list.
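Here is that idea as a toy, self-contained Java sketch: stories are considered in priority order and pulled into the iteration only while the capacity lasts. The stories and point values are invented, loosely borrowed from Figure 1:

    import java.util.ArrayList;
    import java.util.List;

    public class IterationPlanner {
        record Story(String name, int points) {}

        // Walk the backlog in priority order, taking each story that still fits.
        static List<Story> fill(List<Story> backlog, int capacity) {
            List<Story> iteration = new ArrayList<>();
            int used = 0;
            for (Story s : backlog) {
                if (used + s.points() <= capacity) {
                    iteration.add(s);
                    used += s.points();
                }
            }
            return iteration;
        }

        public static void main(String[] args) {
            List<Story> backlog = List.of(
                    new Story("purchase parking passes online", 5),
                    new Story("pay via credit card", 3),
                    new Story("pay via PayPal", 3),
                    new Story("order official transcripts", 8));
            // With a capacity of 10 points, only the first two stories fit.
            System.out.println(fill(backlog, 10));
        }
    }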

INVEST

Bill Wake introduced the INVEST mnemonic in his seminal post on creating better stories, suggesting they should be Independent, Negotiable, Valuable, Estimable, Small, and Testable. Wake was an acute observer of common disconnects between product people and developers, and his mnemonic remains an excellent checklist for the creation of high-quality product design inputs.

The acronym “INVEST” can remind you that good stories are:

I – Independent
N – Negotiable
V – Valuable
E – Estimable
S – Small
T – Testable

Independent

Ideally, you should be able to implement your stories in any order while still realizing the benefit of seeing something working at the end. This is important for two reasons. First, you lose a lot of the adaptability agile’s supposed to deliver if your stories have (many) interdependencies. Second, it makes it much more likely that you’ll end up at the end of an iteration without a working product that you can validate to make sure you’re moving in a valuable direction.

Negotiable… and Negotiated

User stories are not requirements. A good story is negotiable. It is not an explicit contract for features; rather, details will be co-created by the customer and programmer during development. A good story captures the essence, not the details. Over time, the card may acquire notes, test ideas, and so on, but we don’t need these to prioritize or schedule stories. They’re not just another template to put on a piece of paper; they’re part of a process where the product person (the original drafter of the stories) co-creates a working incarnation of the story with the implementer. This way, the original author gets someone to help them improve or “edit” their story and make sure the story is interpreted into product in a way that delivers something valuable for the user.

Valuable

A story needs to be valuable. We don’t care about value to just anybody; it needs to be valuable to the customer. Developers may have (legitimate) concerns, but these need to be framed in a way that makes the customer perceive them as important.

This is especially an issue when splitting stories. Think of a whole story as a multi-layer cake, e.g., a network layer, a persistence layer, a logic layer, and a presentation layer. When we split a story, we’re serving up only part of that cake. We want to give the customer the essence of the whole cake, and the best way is to slice vertically through the layers. Developers often have an inclination to work on only one layer at a time (and get it “right”); but a full database layer (for example) has little value to the customer if there’s no presentation layer.

Making each slice valuable to the customer supports XP’s pay-as-you-go attitude toward infrastructure.

Stories are meant to communicate information. Don’t hide them on a network drive, the corporate intranet, or in a licensed tool. Make them visible instead, for instance, by putting them up on the wall. A great tool to discover, visualize, and manage your stories is a Product Canvas pasted on the wall.

Estimable

A good story can be estimated. We don’t need an exact estimate, but just enough to help the customer rank and schedule the story’s implementation. Being estimable is partly a function of being negotiated, as it’s hard to estimate a story we don’t understand. It is also a function of size: bigger stories are harder to estimate. Finally, it’s a function of the team: what’s easy to estimate will vary depending on the team’s experience. (Sometimes a team may have to split a story into a (time-boxed) “spike” that will give the team enough information to make a decent estimate, and the rest of the story that will actually implement the desired feature.)

It should be possible for a developer with relevant experience to roughly estimate a story. That estimate’s then used by the product person to prioritize their list of stories, taking account of both value and cost (in terms of developer time).

Small

Large stories (epics) are hard to estimate and hard to plan; they don’t fit well into single sprints.

  • Compound story: an epic that comprises multiple shorter stories.
  • Complex story: a story that is inherently large and cannot easily be disaggregated into constituent stories.

Compound stories sometimes hide assumptions. Split compound stories along operational boundaries (CRUD) or data boundaries.

Good stories tend to be small. Stories typically represent at most a few person-weeks worth of work. (Some teams restrict them to a few person-days of work.) Above this size, it seems to be too hard to know what’s in the story’s scope. Saying, “it would take me more than a month” often implicitly adds, “as I don’t understand what-all it would entail.” Smaller stories tend to get more accurate estimates.

Story descriptions can be small too (and putting them on an index card helps make that happen). Alistair Cockburn described the cards as tokens promising a future conversation. Remember, the details can be elaborated through conversations with the customer.

Detail can be added to user stories in two ways:

  • By splitting a user story into multiple, smaller user stories.
  • By adding “conditions of satisfaction.”

When a relatively large story is split into multiple, smaller agile user stories, it is natural to assume that detail has been added. After all, more has been written.

A condition of satisfaction is simply a high-level acceptance test that will be true after the agile user story is complete.
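For example (a made-up illustration), conditions of satisfaction for the “Students can order official transcripts” story from Figure 1 might read:

  • Verify that the order is confirmed onscreen and by email.
  • Verify that a transcript cannot be ordered for another student’s account.
  • Verify that the transcript lists all completed seminars.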

Epics are big, coarse-grained user stories. Starting with epics allows you to sketch the product functionality without committing to the details. This is particularly helpful for new products and new features: It allows you to capture the rough scope, and it buys you time to learn more about the users and how to best meet their needs. It also reduces the time and effort required to integrate new insights.

A story map is a graphical, two-dimensional product backlog. At the top of the map are big user stories, which are sometimes “epics” as Mike Cohn describes them, and other times correspond to “themes” or “activities”. These grouping units are created by orienting at the user’s workflow or “the order you’d explain the behavior of the system”. Vertically, below the epics, the actual story cards are allocated and ordered by priority. The first horizontal row is a “walking skeleton” and below that represents increasing sophistication.

In this way it becomes possible to describe even big systems without losing the big picture.

Testable

A good story is testable. Writing a story card carries an implicit promise: “I understand what I want well enough that I could write a test for it.” Several teams have reported that by requiring customer tests before implementing a story, the team is more productive. “Testability” has always been a characteristic of good requirements; actually writing the tests early helps us know whether this goal is met.

Strive for 90% Test Automation.

If a customer doesn’t know how to test something, this may indicate that the story isn’t clear enough, or that it doesn’t reflect something valuable to them, or that the customer just needs help in testing.

A team can treat non-functional requirements (such as performance and usability) as things that need to be tested. Figuring out how to operationalize these tests will help the team learn the true needs.
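As a sketch of what operationalizing a performance requirement might look like, the following self-contained Java toy times a stand-in operation against an assumed 200 ms budget; a real test would exercise the actual system and average over many runs:

    public class ResponseTimeCheck {
        // Stand-in for the real operation under test.
        static void search() {
            try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }

        public static void main(String[] args) {
            long start = System.nanoTime();
            search();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(elapsedMs <= 200
                    ? "PASS: search completed in " + elapsedMs + " ms (budget 200 ms)"
                    : "FAIL: search took " + elapsedMs + " ms (budget 200 ms)");
        }
    }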

For all these attributes, the feedback cycle of proposing, estimating, and implementing stories will help teach the team what it needs to know.

Agile User Stories Prioritization

The aim of user story prioritization is to contribute to business value, described in terms of the return on investment of the software product. For a product to be successful, it is important to find the right balance among the competing user stories. From the customer’s point of view, the activity of continuous user story prioritization forms the very core of today’s agile processes.

A continuous requirements prioritization process, seen from the customer’s point of view, forms the essence of today’s agile approaches. An essential feature of any agile approach is an explicit focus on creating business value for the customer. Agile practitioners deem this approach especially valuable for software producers in circumstances that involve highly uncertain requirements, experimentation with new development technology, and customers willing to explore the ways in which the developing product can assist their business goals.

The continuous prioritization of requirements during the project plays a fundamental role in accomplishing business value creation. Requirements (re)prioritization at inter-iteration time is the means to align expert decisions with the business strategy that targets that business value.

Creating a great user experience (UX) requires more than writing user stories. Also consider the user journeys and interactions, the visual design, and the nonfunctional properties of your product. This results in a holistic description that makes it easier to identify risks and assumptions, and it increases the chances of creating a product with the right user experience.

Agile Development SMART Tasks

A task communicates the need to do some work. Each team member can define tasks to represent the work that they need to accomplish. For example, a developer can define development tasks to implement user stories. A tester can define test tasks to assign the job of writing and running test cases. A task can also be used to signal regressions or to suggest that exploratory testing should be performed. Also, a team member can define a task to represent generic work for the project.

You link tasks to a user story to track the progress of work that has occurred to complete that story. After you define a task, you can link it to the user story that it implements.

During the Iteration Planning meeting, stories and sometimes defects are broken down into tasks. The tasks are all the work the team must complete to develop, test and accept the story as done.  Tasks typically range in size from 1 hour to 2 days, depending on the length of the iteration. Two-week iterations ideally have tasks of 8 hours or less, and four week iterations have tasks sized at 2 days or less. Tasks larger than these guidelines should be broken down further to allow the team to incrementally complete the work and show progress. Keep tasks small. Estimate all tasks in hours. Iteration tracking will be done at story level to keep the team focused on business value.

The entire team works together to come up with the tasks for all the stories identified for the iteration. The majority of the tasks are determined during the iteration planning meeting, but in some cases the team may discover tasks as the work starts. It is appropriate to add and/or remove tasks on stories during the iteration.

When a task involves an external group – perhaps you want to demo your system to some project stakeholders, or deploy a testable version into your QA/test environment – then you should consider including the task as a reminder to coordinate with those groups.

Tasks may include the traditional steps in a development lifecycle (although limited to the feature in question, not the entire product). For instance: Design, Development, Unit Testing, System Testing, UAT (User Acceptance Testing), Documentation, etc. Remember, agile software development methods do not exclude these steps. Agile methods just advocate doing the steps feature-by-feature, just in time, instead of in big phases.

At the end of the day, individuals update the hours remaining on the tasks they have taken responsibility for. This makes it possible to maintain the burn-down chart.
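As a toy illustration of how those daily updates feed the burn-down chart, this self-contained Java sketch sums the remaining hours per day and prints a crude chart; the numbers are invented:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class BurnDown {
        public static void main(String[] args) {
            // Remaining hours per task, recorded at the end of each day.
            Map<String, int[]> remainingByDay = new LinkedHashMap<>();
            remainingByDay.put("Mon", new int[] {8, 6, 4});
            remainingByDay.put("Tue", new int[] {6, 4, 2});
            remainingByDay.put("Wed", new int[] {3, 2, 0});

            for (Map.Entry<String, int[]> day : remainingByDay.entrySet()) {
                int total = 0;
                for (int hours : day.getValue()) total += hours;
                System.out.println(day.getKey() + ": " + total + "h " + "#".repeat(total));
            }
        }
    }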

Tasks make up the micro work-breakdown structure that teams can use to facilitate coordination, estimation, status tracking, and the assignment of individual responsibilities, helping to assure completion of the stories and thereby of the iteration.

There is an acronym for creating effective goals: “SMART” –

S – Specific
M – Measurable
A – Achievable
R – Relevant
T – Time-boxed
These are good characteristics for tasks as well.

Specific

A task needs to be specific enough that everyone can understand what’s involved in it. This helps keep other tasks from overlapping, and helps people understand whether the tasks add up to the full story.

Measurable

The key measure is, “can we mark it as done?” The team needs to agree on what that means, but it should include “does what it is intended to,” “tests are included,” and “the code has been refactored.”

Achievable

The task owner should expect to be able to achieve a task. XP teams have a rule that anybody can ask for help whenever they need it; this certainly includes ensuring that task owners are up to the job.

Relevant

Every task should be relevant, contributing to the story at hand. Stories are broken into tasks for the benefit of developers, but a customer should still be able to expect that every task can be explained and justified.

Time-Boxed

A task should be time-boxed: limited to a specific duration. This doesn’t need to be a formal estimate in hours or days, but there should be an expectation so people know when they should seek help. If a task is harder than expected, the team needs to know it must split the task, change players, or do something to help the task (and story) get done.

As you discuss stories, write cards, and split stories, the INVEST acronym can help remind you of characteristics of good stories. When creating a task plan, applying the SMART acronym can improve your tasks.

Scrum and Sprints

The task board is the single most important information radiator that an agile team has. A task board illustrates the progress that an agile team is making in achieving their sprint goals. Usually the task board is located in an area that is central to the team.

All of these task boards have a few simple things in common:

1) 4 basic columns (there can be more)

  1. Stories
  2. To Do
  3. In Process (In progress, WIP, etc.)
  4. Complete (Completed, Done, etc.)

2) A Row for each story

Figure: a simple task board, and a more complicated version that follows the same basic structure with additional columns.