- •Contents
- •Preface
- •Acknowledgments
- •About the author
- •About the cover illustration
- •Higher product quality
- •Less rework
- •Better work alignment
- •Remember
- •Deriving scope from goals
- •Specifying collaboratively
- •Illustrating using examples
- •Validating frequently
- •Evolving a documentation system
- •A practical example
- •Business goal
- •An example of a good business goal
- •Scope
- •User stories for a basic loyalty system
- •Key examples
- •Key examples: Free delivery
- •Free delivery
- •Examples
- •Living documentation
- •Remember
- •Tests can be good documentation
- •Remember
- •How to begin changing the process
- •Focus on improving quality
- •Start with functional test automation
- •When: Testers own test automation
- •Use test-driven development as a stepping stone
- •When: Developers have a good understanding of TDD
- •How to begin changing the team culture
- •Avoid “agile” terminology
- •When: Working in an environment that’s resistant to change
- •Ensure you have management support
- •Don’t make test automation the end goal
- •Don’t focus on a tool
- •Keep one person on legacy scripts during migration
- •When: Introducing functional automation to legacy systems
- •Track who is running—and not running—automated checks
- •When: Developers are reluctant to participate
- •Global talent management team at Ultimate Software
- •Sky Network Services
- •Dealing with sign-off and traceability
- •Get sign-off on exported living documentation
- •When: Signing off iteration by iteration
- •When: Signing off longer milestones
- •Get sign-off on “slimmed down use cases”
- •When: Regulatory sign-off requires details
- •Introduce use case realizations
- •When: All details are required for sign-off
- •Warning signs
- •Watch out for tests that change frequently
- •Watch out for boomerangs
- •Watch out for organizational misalignment
- •Watch out for just-in-case code
- •Watch out for shotgun surgery
- •Remember
- •Building the right scope
- •Understand the “why” and “who”
- •Understand where the value is coming from
- •Understand what outputs the business users expect
- •Have developers provide the “I want” part of user stories
- •When: Business users trust the development team
- •Collaborating on scope without high-level control
- •Ask how something would be useful
- •Ask for an alternative solution
- •Make sure teams deliver complete features
- •When: Large multisite projects
- •Further information
- •Remember
- •Why do we need to collaborate on specifications?
- •The most popular collaborative models
- •Try big, all-team workshops
- •Try smaller workshops (“Three Amigos”)
- •Pair-writing
- •When: Mature products
- •Have developers frequently review tests before an iteration
- •When: Analysts writing tests
- •Try informal conversations
- •When: Business stakeholders are readily available
- •Preparing for collaboration
- •Hold introductory meetings
- •When: Project has many stakeholders
- •Involve stakeholders
- •Undertake detailed preparation and review up front
- •When: Remote stakeholders
- •Prepare only initial examples
- •Don’t hinder discussion by overpreparing
- •Choosing a collaboration model
- •Remember
- •Illustrating using examples: an example
- •Examples should be precise
- •Don’t have yes/no answers in your examples
- •Avoid using abstract classes of equivalence
- •Ask for an alternative way to check the functionality
- •When: Complex/legacy infrastructures
- •Examples should be realistic
- •Avoid making up your own data
- •When: Data-driven projects
- •Get basic examples directly from customers
- •When: Working with enterprise customers
- •Examples should be easy to understand
- •Avoid the temptation to explore every combinatorial possibility
- •Look for implied concepts
- •Illustrating nonfunctional requirements
- •Get precise performance requirements
- •When: Performance is a key feature
- •Try the QUPER model
- •When: Sliding scale requirements
- •Use a checklist for discussions
- •When: Cross-cutting concerns
- •Build a reference example
- •When: Requirements are impossible to quantify
- •Remember
- •Free delivery
- •Examples should be precise and testable
- •When: Working on a legacy system
- •Don’t get trapped in user interface details
- •When: Web projects
- •Use a descriptive title and explain the goal using a short paragraph
- •Show and keep quiet
- •Don’t overspecify examples
- •Start with basic examples; then expand through exploring
- •When: Describing rules with many parameter combinations
- •In order to: Make the test easier to understand
- •When: Dealing with complex dependencies/referential integrity
- •Apply defaults in the automation layer
- •Don’t always rely on defaults
- •When: Working with objects with many attributes
- •Remember
- •Is automation required at all?
- •Starting with automation
- •When: Working on a legacy system
- •Plan for automation upfront
- •Don’t postpone or delegate automation
- •Avoid automating existing manual test scripts
- •Gain trust with user interface tests
- •Don’t treat automation code as second-grade code
- •Describe validation processes in the automation layer
- •Don’t replicate business logic in the test automation layer
- •Automate along system boundaries
- •When: Complex integrations
- •Don’t check business logic through the user interface
- •Automate below the skin of the application
- •Automating user interfaces
- •Specify user interface functionality at a higher level of abstraction
- •When: User interface contains complex logic
- •Avoid recorded UI tests
- •Set up context in a database
- •Test data management
- •Avoid using prepopulated data
- •When: Specifying logic that’s not data driven
- •Try using prepopulated reference data
- •When: Data-driven systems
- •Pull prototypes from the database
- •When: Legacy data-driven systems
- •Remember
- •Reducing unreliability
- •When: Working on a system with bad automated test support
- •Identify unstable tests using CI test history
- •Set up a dedicated continuous validation environment
- •Employ fully automated deployment
- •Create simpler test doubles for external systems
- •When: Working with external reference data sources
- •Selectively isolate external systems
- •When: External systems participate in work
- •Try multistage validation
- •When: Large/multisite groups
- •Execute tests in transactions
- •Run quick checks for reference data
- •When: Data-driven systems
- •Wait for events, not for elapsed time
- •Make asynchronous processing optional
- •Getting feedback faster
- •Introduce business time
- •When: Working with temporal constraints
- •Break long test packs into smaller modules
- •Avoid using in-memory databases for testing
- •When: Data-driven systems
- •Separate quick and slow tests
- •When: A small number of tests take most of the time to execute
- •Keep overnight packs stable
- •When: Slow tests run only overnight
- •Create a current iteration pack
- •Parallelize test runs
- •When: You can get more than one test environment
- •Try disabling less risky tests
- •When: Test feedback is very slow
- •Managing failing tests
- •Create a known regression failures pack
- •Automatically check which tests are turned off
- •When: Failing tests are disabled, not moved to a separate pack
- •Remember
- •Living documentation should be easy to understand
- •Look for higher-level concepts
- •Avoid using technical automation concepts in tests
- •When: Stakeholders aren’t technical
- •Living documentation should be consistent
- •When: Web projects
- •Document your building blocks
- •Living documentation should be organized for easy access
- •Organize current work by stories
- •Reorganize stories by functional areas
- •Organize along UI navigation routes
- •When: Documenting user interfaces
- •Organize along business processes
- •When: End-to-end use case traceability required
- •Listen to your living documentation
- •Remember
- •Starting to change the process
- •Optimizing the process
- •The current process
- •The result
- •Key lessons
- •Changing the process
- •The current process
- •Key lessons
- •Changing the process
- •Optimizing the process
- •Living documentation as competitive advantage
- •Key lessons
- •Changing the process
- •Improving collaboration
- •The result
- •Key lessons
- •Changing the process
- •Living documentation
- •Current process
- •Key lessons
- •Changing the process
- •Current process
- •Key lessons
- •Collaboration requires preparation
- •There are many different ways to collaborate
- •Looking at the end goal as business process documentation is a useful model
- •Long-term value comes from living documentation
- •Index
Chapter 10 Validating frequently
For me, this is quite a controversial idea. I’m of the opinion that if a test is worth running, it’s worth running all the time.5
Most of the tests at uSwitch run through the web user interface, so they take a long time and are quite expensive to maintain, which is what pushed them in this direction. With a system that doesn’t make you run tests end to end all the time, I’d prefer trying to automate the tests differently so that they don’t cost so much (see the “Automate below the skin of the application” section in chapter 9).
One of the reasons why uSwitch can afford to disable tests is that they have a separate system for monitoring user experience on their production website, which tells them if users start seeing errors or if the usage of a particular feature suddenly drops.
After covering faster feedback and more reliable validation results, it’s time to tackle something much more controversial. As their living documentation systems grew, many teams realized that they occasionally need to live with failing tests. In the following section, I present some good ideas for dealing with failing tests that you might not be able to fix straightaway.
Managing failing tests
In Bridging the Communication Gap, I wrote that failing tests should never be disabled but fixed straightaway. The research for this book changed my perspective on that issue slightly. Some teams had hundreds or thousands of checks in continuous validation, validating functionality that was built over several years. The calculation engine system at Weyerhaeuser (mentioned in the “Higher product quality” section in chapter 1) is continuously validated by more than 30,000 checks, according to Pierre Veragen.
Frequently validating so many specifications will catch many problems. On the other hand, running so many checks often means that the feedback will be slow, so problems won’t be instantly spotted or solved. Some issues might also require clarification from business users or might be a lesser priority to fix than the changes that should go live as part of the current iteration, so we won’t necessarily be able to fix all the problems as soon as we spot them.
This means that some tests in the continuous validation pack will fail and stay broken for a while. When a test pack is broken, people tend not to look for additional problems they might have caused, so just leaving these tests as they are isn’t a good idea. Here are some tips for managing functional regression in the system.
5 To be fair, I picked this idea up from David Evans, so if you want to quote it, use him as a reference.
Create a known regression failures pack
Similar to the idea of creating a separate pack of executable specifications for the current iteration, many teams have created a specific pack to contain tests that are expected to fail.
When you discover a regression failure and decide not to fix it straightaway, moving that test into a separate pack allows the tests to fail without breaking the main validation set.
Grouping failing tests in a separate pack also allows us to track the number of such problems and prevent relaxing the rules around temporary failures from becoming a “get out of jail free” card that will cause problems to pile up.
A separate pack also allows us to run all failing tests periodically. Even if the test fails, it’s worth running to check if there are any additional failures. At RainStor, they mark such tests as bugs but still execute them to check for any further functional regression. Adam Knight says:
One day you might have the test failing because of trailing zeros. If the next day that test fails, you might not want to check it because you know the test fails. But it might be returning a completely wrong numeric result.
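The book doesn’t prescribe a tool for this practice. As a sketch of the “mark it as a bug but keep executing it” idea, Python’s standard unittest module offers @unittest.expectedFailure: the deferred test still runs on every build, so a completely different failure mode still surfaces, and an “unexpected success” is reported if the test quietly starts passing. The function, class, and bug names below are invented for illustration, not taken from the text.

```python
import unittest

def format_amount(value: float) -> str:
    """Toy production function standing in for the real system under test."""
    return f"{value:.2f}"

class MainPack(unittest.TestCase):
    """Executable specifications that must always pass."""
    def test_two_decimal_places(self):
        self.assertEqual(format_amount(2.0), "2.00")

class KnownRegressionPack(unittest.TestCase):
    """Triaged, deliberately deferred failures -- still executed so that
    additional failure modes in the same area are not masked."""
    @unittest.expectedFailure
    def test_trailing_zeros_bug(self):
        # Encodes the desired behaviour, which the current code does not
        # yet meet (hypothetical bug); the runner reports this as an
        # expected failure rather than breaking the main validation set.
        self.assertEqual(format_amount(1.5), "1.5")
```

Running `python -m unittest mymodule.MainPack` executes only the always-green pack; a full run still exercises the known-regression pack and reports its tests as expected failures.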
A potential risk with a pack for regression failures is that it can become a “get out of jail free” card for quality problems that are identified late in a delivery phase. Use the pack for known failures only as a temporary holding space for problems that require further clarification. Catching issues late is a warning sign of an underlying process problem. A good strategy to avoid falling into this trap is to limit the number of tests that can be moved to a known regression pack. Some tools, such as Cucumber, even support automated checking for such limits.
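The Cucumber feature alluded to here is its tag-limit syntax (in older Cucumber versions, a tag expression such as `--tags @known_regression:5` fails the run when more than five scenarios carry the tag). Where the test framework has no built-in limit, a small CI check can enforce the same ceiling. The sketch below counts unittest expectedFailure tests against an assumed team limit; the threshold, class, and test names are illustrative, not from the book.

```python
import sys
import unittest

MAX_KNOWN_REGRESSIONS = 5  # assumed team policy; choose your own ceiling

class _DemoPack(unittest.TestCase):
    """Stand-in pack with one deliberately deferred failure."""
    @unittest.expectedFailure
    def test_deferred_bug(self):
        self.assertTrue(False)

    def test_healthy(self):
        self.assertTrue(True)

def count_expected_failures(suite: unittest.TestSuite) -> int:
    """Recursively count tests decorated with @unittest.expectedFailure."""
    total = 0
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            total += count_expected_failures(item)
        else:
            method = getattr(item, item._testMethodName)
            # unittest marks decorated test methods with this attribute
            if getattr(method, "__unittest_expecting_failure__", False):
                total += 1
    return total

deferred = count_expected_failures(
    unittest.defaultTestLoader.loadTestsFromTestCase(_DemoPack)
)
if deferred > MAX_KNOWN_REGRESSIONS:
    sys.exit(f"{deferred} known regressions exceed the limit of {MAX_KNOWN_REGRESSIONS}")
```

Wired into the build before the release gate, a check like this stops the pipeline as soon as the deferred-failure count grows past the agreed ceiling.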
Creating a separate pack where all failing tests are collected is good from the project management perspective, because we can monitor it and take action if it starts growing too much. One or two less-important issues might not be a cause to stop a release to production, but if the pack grows to dozens of tests, it’s time to stop and clean up before things get out of hand. If a test spends a long time in the known regression pack, that might be an argument to drop the related functionality because nobody cares about it.