- Contents
- Preface
- Acknowledgments
- About the author
- About the cover illustration
- Higher product quality
- Less rework
- Better work alignment
- Remember
- Deriving scope from goals
- Specifying collaboratively
- Illustrating using examples
- Validating frequently
- Evolving a documentation system
- A practical example
- Business goal
- An example of a good business goal
- Scope
- User stories for a basic loyalty system
- Key examples
- Key examples: Free delivery
- Free delivery
- Examples
- Living documentation
- Remember
- Tests can be good documentation
- Remember
- How to begin changing the process
- Focus on improving quality
- Start with functional test automation
- When: Testers own test automation
- Use test-driven development as a stepping stone
- When: Developers have a good understanding of TDD
- How to begin changing the team culture
- Avoid “agile” terminology
- When: Working in an environment that’s resistant to change
- Ensure you have management support
- Don’t make test automation the end goal
- Don’t focus on a tool
- Keep one person on legacy scripts during migration
- When: Introducing functional automation to legacy systems
- Track who is running—and not running—automated checks
- When: Developers are reluctant to participate
- Global Talent Management team at Ultimate Software
- Sky Network Services
- Dealing with sign-off and traceability
- Get sign-off on exported living documentation
- When: Signing off iteration by iteration
- When: Signing off longer milestones
- Get sign-off on “slimmed down use cases”
- When: Regulatory sign-off requires details
- Introduce use case realizations
- When: All details are required for sign-off
- Warning signs
- Watch out for tests that change frequently
- Watch out for boomerangs
- Watch out for organizational misalignment
- Watch out for just-in-case code
- Watch out for shotgun surgery
- Remember
- Building the right scope
- Understand the “why” and “who”
- Understand where the value is coming from
- Understand what outputs the business users expect
- Have developers provide the “I want” part of user stories
- When: Business users trust the development team
- Collaborating on scope without high-level control
- Ask how something would be useful
- Ask for an alternative solution
- Make sure teams deliver complete features
- When: Large multisite projects
- Further information
- Remember
- Why do we need to collaborate on specifications?
- The most popular collaborative models
- Try big, all-team workshops
- Try smaller workshops (“Three Amigos”)
- Pair-writing
- When: Mature products
- Have developers frequently review tests before an iteration
- When: Analysts writing tests
- Try informal conversations
- When: Business stakeholders are readily available
- Preparing for collaboration
- Hold introductory meetings
- When: Project has many stakeholders
- Involve stakeholders
- Undertake detailed preparation and review up front
- When: Remote stakeholders
- Prepare only initial examples
- Don’t hinder discussion by overpreparing
- Choosing a collaboration model
- Remember
- Illustrating using examples: an example
- Examples should be precise
- Don’t have yes/no answers in your examples
- Avoid using abstract classes of equivalence
- Ask for an alternative way to check the functionality
- When: Complex/legacy infrastructures
- Examples should be realistic
- Avoid making up your own data
- When: Data-driven projects
- Get basic examples directly from customers
- When: Working with enterprise customers
- Examples should be easy to understand
- Avoid the temptation to explore every combinatorial possibility
- Look for implied concepts
- Illustrating nonfunctional requirements
- Get precise performance requirements
- When: Performance is a key feature
- Try the QUPER model
- When: Sliding scale requirements
- Use a checklist for discussions
- When: Cross-cutting concerns
- Build a reference example
- When: Requirements are impossible to quantify
- Remember
- Free delivery
- Examples should be precise and testable
- When: Working on a legacy system
- Don’t get trapped in user interface details
- When: Web projects
- Use a descriptive title and explain the goal using a short paragraph
- Show and keep quiet
- Don’t overspecify examples
- Start with basic examples; then expand through exploring
- When: Describing rules with many parameter combinations
- In order to: Make the test easier to understand
- When: Dealing with complex dependencies/referential integrity
- Apply defaults in the automation layer
- Don’t always rely on defaults
- When: Working with objects with many attributes
- Remember
- Is automation required at all?
- Starting with automation
- When: Working on a legacy system
- Plan for automation upfront
- Don’t postpone or delegate automation
- Avoid automating existing manual test scripts
- Gain trust with user interface tests
- Don’t treat automation code as second-grade code
- Describe validation processes in the automation layer
- Don’t replicate business logic in the test automation layer
- Automate along system boundaries
- When: Complex integrations
- Don’t check business logic through the user interface
- Automate below the skin of the application
- Automating user interfaces
- Specify user interface functionality at a higher level of abstraction
- When: User interface contains complex logic
- Avoid recorded UI tests
- Set up context in a database
- Test data management
- Avoid using prepopulated data
- When: Specifying logic that’s not data driven
- Try using prepopulated reference data
- When: Data-driven systems
- Pull prototypes from the database
- When: Legacy data-driven systems
- Remember
- Reducing unreliability
- When: Working on a system with bad automated test support
- Identify unstable tests using CI test history
- Set up a dedicated continuous validation environment
- Employ fully automated deployment
- Create simpler test doubles for external systems
- When: Working with external reference data sources
- Selectively isolate external systems
- When: External systems participate in work
- Try multistage validation
- When: Large/multisite groups
- Execute tests in transactions
- Run quick checks for reference data
- When: Data-driven systems
- Wait for events, not for elapsed time
- Make asynchronous processing optional
- Getting feedback faster
- Introduce business time
- When: Working with temporal constraints
- Break long test packs into smaller modules
- Avoid using in-memory databases for testing
- When: Data-driven systems
- Separate quick and slow tests
- When: A small number of tests take most of the time to execute
- Keep overnight packs stable
- When: Slow tests run only overnight
- Create a current iteration pack
- Parallelize test runs
- When: You can get more than one test environment
- Try disabling less risky tests
- When: Test feedback is very slow
- Managing failing tests
- Create a known regression failures pack
- Automatically check which tests are turned off
- When: Failing tests are disabled, not moved to a separate pack
- Remember
- Living documentation should be easy to understand
- Look for higher-level concepts
- Avoid using technical automation concepts in tests
- When: Stakeholders aren’t technical
- Living documentation should be consistent
- When: Web projects
- Document your building blocks
- Living documentation should be organized for easy access
- Organize current work by stories
- Reorganize stories by functional areas
- Organize along UI navigation routes
- When: Documenting user interfaces
- Organize along business processes
- When: End-to-end use case traceability required
- Listen to your living documentation
- Remember
- Starting to change the process
- Optimizing the process
- The current process
- The result
- Key lessons
- Changing the process
- The current process
- Key lessons
- Changing the process
- Optimizing the process
- Living documentation as competitive advantage
- Key lessons
- Changing the process
- Improving collaboration
- The result
- Key lessons
- Changing the process
- Living documentation
- Current process
- Key lessons
- Changing the process
- Current process
- Key lessons
- Collaboration requires preparation
- There are many different ways to collaborate
- Looking at the end goal as business process documentation is a useful model
- Long-term value comes from living documentation
- Index
Chapter 4: Initiating the changes
Their development process starts with the project manager, who selects the stories for an iteration in advance. The business analysts work with remote stakeholders to prepare the acceptance criteria in detail before an iteration starts. They use examples of existing specifications to drive the structure of the new ones. If the new specifications are significantly different from any existing ones, a pair of developers will review the tests to provide early feedback and ensure the tests can be automated.
Their iterations start every second Wednesday. When an iteration starts, the whole team gets together for a planning meeting and reviews the upcoming stories in order of priority. The goal is to ensure that all the developers understand what the story is about, to estimate the stories, and to check for technical dependencies that would make a different order of delivery better. They’ll also break the story down into tasks for development. By the time a story is reviewed in a planning meeting, the acceptance criteria for that story are generally already well specified.
The team occasionally discovers that a story wasn’t well understood. When they were running weeklong iterations, such stories could disrupt the flow, but two-week iterations allow them to handle such cases without much interruption to the overall process.
The stories are then implemented by pairs of developers. After a pair of developers implements a story and all the related tests pass, a business analyst will spend some time doing exploratory testing. If the analyst discovers unexpected behavior, or realizes that the team has not understood all the effects of a story on the existing system, he will extend the specification with relevant examples and send the story back to the developers.
Sky Network Services
The Sky Network Services (SNS) group at the British Sky Broadcasting Company maintains a system for broadband provisioning. The group consists of six teams, and each team has five to six developers and one or two testers. The entire group shares six business analysts. Because the teams maintain separate functional components that differ in terms of maturity, each team has a slightly different process.
The entire group runs a process based on Scrum in two-week sprints. One week before a sprint officially starts, they organize a preplanning coordination meeting attended by two or three people from each team. The goal of that meeting is to prioritize the stories. By the time they meet, the business analysts have already collected and specified some high-level acceptance criteria for each story. After the meeting, testers will start writing the specifications with examples, typically collaborating with a business analyst. Before the sprint officially starts, each team will have at least one or two stories with detailed specifications with examples ready for automation.
54 Specification by Example
The iterations start every second Wednesday, with a cross-team planning meeting to inform everyone about overall progress and the business goals of the upcoming iteration. The teams then go into individual planning meetings. Some teams spend 15 minutes going through the stories briefly; others spend a few hours going over the details. Rakesh Patel, a developer who works on the project, says that this is mostly driven by the maturity of the underlying component:
At the moment we’re working on a component that’s been there for a long time and adding more messages to it. There is not really much need for the whole team to know what that involves; we can wait until we pick up the story card. Some other teams are building a new GUI, completely new functionality, and then it might be more appropriate for the whole team to sit down and discuss this in depth and try to work out what needs to be done. At that point we might also discuss nonfunctional requirements, architecture, etc.
After the planning meeting, the developers start working on the stories that already have specifications with examples. Business analysts and testers work on completing the acceptance criteria for all the stories planned for the iteration. Once they finish the specifications for a story, they’ll meet with a developer designated to be the “story champion” (see the sidebar) and go through the tests. If everyone thinks they have enough information to complete a story, the story card gets a blue sticker, which means that it’s ready for development.
Once development is completed, the business analysts will review the specifications again and put a red sticker on the card. Testers will then run additional tests on the story and add a green sticker when the tests pass. Some stories will involve their support team, database administrators, or system administrators, who review the story after the testers. Database administrators put a gold star on the story card, and system administrators add a silver star when they’ve reviewed the results. Stickers ensure that everyone who needs to be involved in a story knows about it.
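The sticker convention amounts to an ordered sign-off checklist per story card: blue, red, and green for every story, plus a gold or silver star when DBA or sysadmin review applies. As a rough illustration only (the class, the sticker strings, and the readiness check below are hypothetical, not the SNS team's actual tooling), the idea could be sketched like this:

```python
# Hypothetical sketch of a sticker-based sign-off checklist; names and
# structure are illustrative, not taken from the SNS team's process tooling.

CORE_STEPS = ("blue", "red", "green")  # ready for dev, analyst re-review, tests pass

class StoryCard:
    def __init__(self, title, needs_dba=False, needs_sysadmin=False):
        self.title = title
        self.stickers = []
        # Stickers that must be on the card before the story counts as done.
        self.required = list(CORE_STEPS)
        if needs_dba:
            self.required.append("gold star")    # database administrator review
        if needs_sysadmin:
            self.required.append("silver star")  # system administrator review

    def add_sticker(self, sticker):
        self.stickers.append(sticker)

    def is_done(self):
        # Done only when everyone who needs to be involved has signed off.
        return all(s in self.stickers for s in self.required)

card = StoryCard("Add loyalty points", needs_dba=True)
for sticker in ("blue", "red", "green"):
    card.add_sticker(sticker)
assert not card.is_done()      # still waiting on the DBA's gold star
card.add_sticker("gold star")
assert card.is_done()
```

The point of the model is the same as the point of the stickers: a story's status, and whose review is still outstanding, is visible at a glance.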
Instead of big specification meetings, the teams at SNS organize a flow process. They still have a two-phase specification process, with business analysts and testers preparing all the examples upfront and then reviewing them with a developer later. This gives developers more time to focus on development work. Instead of involving all the developers in the review to ensure they understand a story, the story champion is effectively responsible for transferring the information while pairing with other developers.