- •Contents
- •Preface
- •Acknowledgments
- •About the author
- •About the cover illustration
- •Higher product quality
- •Less rework
- •Better work alignment
- •Remember
- •Deriving scope from goals
- •Specifying collaboratively
- •Illustrating using examples
- •Validating frequently
- •Evolving a documentation system
- •A practical example
- •Business goal
- •An example of a good business goal
- •Scope
- •User stories for a basic loyalty system
- •Key examples
- •Key examples: Free delivery
- •Free delivery
- •Examples
- •Living documentation
- •Remember
- •Tests can be good documentation
- •Remember
- •How to begin changing the process
- •Focus on improving quality
- •Start with functional test automation
- •When: Testers own test automation
- •Use test-driven development as a stepping stone
- •When: Developers have a good understanding of TDD
- •How to begin changing the team culture
- •Avoid “agile” terminology
- •When: Working in an environment that’s resistant to change
- •Ensure you have management support
- •Don’t make test automation the end goal
- •Don’t focus on a tool
- •Keep one person on legacy scripts during migration
- •When: Introducing functional automation to legacy systems
- •Track who is running—and not running—automated checks
- •When: Developers are reluctant to participate
- •Global talent management team at Ultimate Software
- •Sky Network Services
- •Dealing with sign-off and traceability
- •Get sign-off on exported living documentation
- •When: Signing off iteration by iteration
- •When: Signing off longer milestones
- •Get sign-off on “slimmed down use cases”
- •When: Regulatory sign-off requires details
- •Introduce use case realizations
- •When: All details are required for sign-off
- •Warning signs
- •Watch out for tests that change frequently
- •Watch out for boomerangs
- •Watch out for organizational misalignment
- •Watch out for just-in-case code
- •Watch out for shotgun surgery
- •Remember
- •Building the right scope
- •Understand the “why” and “who”
- •Understand where the value is coming from
- •Understand what outputs the business users expect
- •Have developers provide the “I want” part of user stories
- •When: Business users trust the development team
- •Collaborating on scope without high-level control
- •Ask how something would be useful
- •Ask for an alternative solution
- •Make sure teams deliver complete features
- •When: Large multisite projects
- •Further information
- •Remember
- •Why do we need to collaborate on specifications?
- •The most popular collaborative models
- •Try big, all-team workshops
- •Try smaller workshops (“Three Amigos”)
- •Pair-writing
- •When: Mature products
- •Have developers frequently review tests before an iteration
- •When: Analysts writing tests
- •Try informal conversations
- •When: Business stakeholders are readily available
- •Preparing for collaboration
- •Hold introductory meetings
- •When: Project has many stakeholders
- •Involve stakeholders
- •Undertake detailed preparation and review up front
- •When: Remote stakeholders
- •Prepare only initial examples
- •Don’t hinder discussion by overpreparing
- •Choosing a collaboration model
- •Remember
- •Illustrating using examples: an example
- •Examples should be precise
- •Don’t have yes/no answers in your examples
- •Avoid using abstract classes of equivalence
- •Ask for an alternative way to check the functionality
- •When: Complex/legacy infrastructures
- •Examples should be realistic
- •Avoid making up your own data
- •When: Data-driven projects
- •Get basic examples directly from customers
- •When: Working with enterprise customers
- •Examples should be easy to understand
- •Avoid the temptation to explore every combinatorial possibility
- •Look for implied concepts
- •Illustrating nonfunctional requirements
- •Get precise performance requirements
- •When: Performance is a key feature
- •Try the QUPER model
- •When: Sliding scale requirements
- •Use a checklist for discussions
- •When: Cross-cutting concerns
- •Build a reference example
- •When: Requirements are impossible to quantify
- •Remember
- •Free delivery
- •Examples should be precise and testable
- •When: Working on a legacy system
- •Don’t get trapped in user interface details
- •When: Web projects
- •Use a descriptive title and explain the goal using a short paragraph
- •Show and keep quiet
- •Don’t overspecify examples
- •Start with basic examples; then expand through exploring
- •When: Describing rules with many parameter combinations
- •In order to: Make the test easier to understand
- •When: Dealing with complex dependencies/referential integrity
- •Apply defaults in the automation layer
- •Don’t always rely on defaults
- •When: Working with objects with many attributes
- •Remember
- •Is automation required at all?
- •Starting with automation
- •When: Working on a legacy system
- •Plan for automation upfront
- •Don’t postpone or delegate automation
- •Avoid automating existing manual test scripts
- •Gain trust with user interface tests
- •Don’t treat automation code as second-grade code
- •Describe validation processes in the automation layer
- •Don’t replicate business logic in the test automation layer
- •Automate along system boundaries
- •When: Complex integrations
- •Don’t check business logic through the user interface
- •Automate below the skin of the application
- •Automating user interfaces
- •Specify user interface functionality at a higher level of abstraction
- •When: User interface contains complex logic
- •Avoid recorded UI tests
- •Set up context in a database
- •Test data management
- •Avoid using prepopulated data
- •When: Specifying logic that’s not data driven
- •Try using prepopulated reference data
- •When: Data-driven systems
- •Pull prototypes from the database
- •When: Legacy data-driven systems
- •Remember
- •Reducing unreliability
- •When: Working on a system with bad automated test support
- •Identify unstable tests using CI test history
- •Set up a dedicated continuous validation environment
- •Employ fully automated deployment
- •Create simpler test doubles for external systems
- •When: Working with external reference data sources
- •Selectively isolate external systems
- •When: External systems participate in work
- •Try multistage validation
- •When: Large/multisite groups
- •Execute tests in transactions
- •Run quick checks for reference data
- •When: Data-driven systems
- •Wait for events, not for elapsed time
- •Make asynchronous processing optional
- •Getting feedback faster
- •Introduce business time
- •When: Working with temporal constraints
- •Break long test packs into smaller modules
- •Avoid using in-memory databases for testing
- •When: Data-driven systems
- •Separate quick and slow tests
- •When: A small number of tests take most of the time to execute
- •Keep overnight packs stable
- •When: Slow tests run only overnight
- •Create a current iteration pack
- •Parallelize test runs
- •When: You can get more than one test environment
- •Try disabling less risky tests
- •When: Test feedback is very slow
- •Managing failing tests
- •Create a known regression failures pack
- •Automatically check which tests are turned off
- •When: Failing tests are disabled, not moved to a separate pack
- •Remember
- •Living documentation should be easy to understand
- •Look for higher-level concepts
- •Avoid using technical automation concepts in tests
- •When: Stakeholders aren’t technical
- •Living documentation should be consistent
- •When: Web projects
- •Document your building blocks
- •Living documentation should be organized for easy access
- •Organize current work by stories
- •Reorganize stories by functional areas
- •Organize along UI navigation routes
- •When: Documenting user interfaces
- •Organize along business processes
- •When: End-to-end use case traceability required
- •Listen to your living documentation
- •Remember
- •Starting to change the process
- •Optimizing the process
- •The current process
- •The result
- •Key lessons
- •Changing the process
- •The current process
- •Key lessons
- •Changing the process
- •Optimizing the process
- •Living documentation as competitive advantage
- •Key lessons
- •Changing the process
- •Improving collaboration
- •The result
- •Key lessons
- •Changing the process
- •Living documentation
- •Current process
- •Key lessons
- •Changing the process
- •Current process
- •Key lessons
- •Collaboration requires preparation
- •There are many different ways to collaborate
- •Looking at the end goal as business process documentation is a useful model
- •Long-term value comes from living documentation
- •Index
196 Specification by Example
I’ve presented the most common ways of organizing the specifications here, but you don’t have to stop with these. Find your own way of structuring documents that makes it intuitive for your business users, testers, and developers to find what they’re looking for.
Listen to your living documentation
At first, many teams didn’t understand that living documentation closely reflects the domain model of the system it describes. If the design of a system is driven with executable specifications, the same ubiquitous language and domain models will be used in both the specifications and software.
Spotting incidental complexity in executable specifications is a good indicator that you can simplify the system and make it easier to use and maintain. Channing Walton refers to this approach as “listen to your tests.” He worked on an order-management system at UBS where the acceptance criteria for workflows were complex. He says:
If a test is too complicated, it’s telling you something about the system. Workflow testing was very painful. There was too much going on and tests were very complicated. Developers started asking why the tests were so complicated. It turned out that workflows were overcomplicated by each department not really knowing what the others are doing. Tests helped because they put everything together, so people could see that another department is doing validations and handling errors as well. The whole thing got reduced to something much simpler.
Automating executable specifications forces developers to experience what it’s like to use their own system, because they have to use the interfaces designed for clients. If executable specifications are hard to automate, this means that the client APIs aren’t easy to use, which means it’s time to start simplifying the APIs. This was one of the biggest lessons for Pascal Mestdach:
The way you write your tests defines how you write and design your code. If you need to persist patient data in a part of your test, to do that you need to make a data set, fill a data set with four tables, call a huge method to persist it, and call some setup methods for that class. That makes it really hard to come to the part where it actually tests your scenario. If your setup is hard, the tests will be hard. But then, persisting a patient in real code is going to be hard.
Markus Gärtner points out that long setups signal bad API design:
When you notice a long setup, think about the user of your API and the stuff you’re creating. This will become the business of someone to deal with your complicated API. Do you really want to do this?
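A minimal sketch of this idea in Python (all names here, such as `Patient`, `a_patient`, and `can_discharge`, are hypothetical and not from any real API): pushing incidental detail into a defaults-applying builder in the automation layer keeps each test focused on the one thing it actually checks.

```python
# Hypothetical example: a long setup obscures what the test is really about.

class Patient:
    def __init__(self, name, age, ward, insurance_id):
        self.name = name
        self.age = age
        self.ward = ward
        self.insurance_id = insurance_id

def can_discharge(patient):
    # Stand-in business rule for the sketch.
    return patient.age >= 18

# Verbose style: every test repeats incidental detail that doesn't matter to it.
def test_discharge_verbose():
    patient = Patient(name="Ann", age=42, ward="B2", insurance_id="INS-001")
    assert can_discharge(patient)

# "Listening to the test": apply defaults in the automation layer, so each
# specification states only the attributes relevant to its scenario.
def a_patient(**overrides):
    defaults = dict(name="Ann", age=42, ward="B2", insurance_id="INS-001")
    defaults.update(overrides)
    return Patient(**defaults)

def test_discharge_of_adult():
    assert can_discharge(a_patient(age=42))

def test_no_discharge_of_minor():
    assert not can_discharge(a_patient(age=12))
```

The builder mirrors the “Apply defaults in the automation layer” pattern from the table of contents: if writing `a_patient(...)` is still painful, that pain is telling you something about the production API itself.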
Living documentation maintenance problems can also provide a hint that the architecture of the system is suboptimal. Ian Cooper said that they often broke many tests in their living documentation system with small domain code changes, an example of shotgun surgery. That led him to investigate how to improve the design of the system:
It’s an indicator that your architecture is wrong. At first you struggle against it, and then you begin to realize that the problem is not FitNesse, but how you let it interact with your application.
Cooper suggested looking at living documentation as an alternative user interface to the system. If this interface is hard to write and maintain, the real user interface will also be hard to write and maintain.
If a concept is defined through complex interactions in the living documentation, that probably means that the same complex interactions exist in the programming language code. If two concepts are described similarly in the living documentation, this probably means the domain model also contains this duplication. Instead of ignoring complex specifications, we can use them as a warning sign that the domain model should be changed or that the underlying software should be cleaned up.
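As a small illustration (the rule and all names here are invented for this sketch, echoing the book’s free-delivery example), naming a repeated concept once removes the duplication from both the specifications and the model:

```python
# Hypothetical stand-in for a domain rule.
def free_delivery(order):
    return order["books"] >= 5 and order["vip"]

# Before: two near-duplicate specifications restate the same raw conditions
# with minor variations -- a hint that a concept is missing a name.
def test_vip_with_five_books_gets_free_delivery():
    assert free_delivery({"books": 5, "vip": True})

def test_vip_with_ten_books_gets_free_delivery():
    assert free_delivery({"books": 10, "vip": True})

# After: naming the higher-level concept ("qualifying order") once makes the
# documentation shorter and exposes the duplication in the domain model.
def qualifying_order(books=5, vip=True):
    return {"books": books, "vip": vip}

def test_qualifying_order_gets_free_delivery():
    assert free_delivery(qualifying_order())

def test_non_vip_order_never_qualifies():
    assert not free_delivery(qualifying_order(vip=False))
```

The refactoring is in the specifications first; whether the production code then grows a matching `QualifyingOrder` concept is exactly the kind of design feedback this section describes.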
Remember
•To get the most out of your living documentation system, keep it consistent and make sure that the individual executable specifications are easy to understand and easy to access for everyone, including business users.
•Evolve the ubiquitous language and use it consistently.
•As the system evolves, watch out for long specifications or several small ones that explain the same thing with minor variations. Look for concepts at a higher level of abstraction that would make these things easier to explain.
•Organize the living documentation system into a hierarchy that allows you to easily find all the specifications for the current iteration and any feature that was previously implemented.
PART 3
Case studies
12
uSwitch
uSwitch.com is one of the busiest UK websites. The website compares prices and services for a variety of companies and products, including energy suppliers, credit cards, and insurance providers. The complexity of their software system is driven by high scalability as well as complex integrations with a large number of external partners.
uSwitch is an interesting case study because it illustrates how a company working in a Waterfall process with separate development and testing teams on a problematic legacy environment can still transition to a much better way of delivering quality software. uSwitch has completely overhauled their software delivery process over the course of three years.
At uSwitch, I interviewed Stephen Lloyd, Tony To, Damon Morgan, Jon Neale, and Hemal Kuntawala. When I asked them about their software process, their general answer was, “Someone suggests an idea in the morning and then it gets implemented and goes live.” Early in my career, I worked for a company with a software process that could be described the same way—and that experienced fireworks on the production systems almost daily. But in the case of uSwitch, the quality of the product and the speed with which they deliver features is enviable.
Although uSwitch didn’t set out to implement Specification by Example in particular, their current process contains the most important patterns described in this book, including deriving scope from goals, collaborating on specifications, and automating executable specifications. To improve the software development process, they focused on improving product quality by constantly looking for and addressing obstacles to quality.
When looking for a better way to align development and testing, they automated tests in human-readable form. After that, they discovered that tests can become specifications. Moving to executable specifications got them to collaborate better. In the course