- •Contents
- •Preface
- •Acknowledgments
- •About the author
- •About the cover illustration
- •Higher product quality
- •Less rework
- •Better work alignment
- •Remember
- •Deriving scope from goals
- •Specifying collaboratively
- •Illustrating using examples
- •Validating frequently
- •Evolving a documentation system
- •A practical example
- •Business goal
- •An example of a good business goal
- •Scope
- •User stories for a basic loyalty system
- •Key examples
- •Key examples: Free delivery
- •Free delivery
- •Examples
- •Living documentation
- •Remember
- •Tests can be good documentation
- •Remember
- •How to begin changing the process
- •Focus on improving quality
- •Start with functional test automation
- •When: Testers own test automation
- •Use test-driven development as a stepping stone
- •When: Developers have a good understanding of TDD
- •How to begin changing the team culture
- •Avoid “agile” terminology
- •When: Working in an environment that’s resistant to change
- •Ensure you have management support
- •Don’t make test automation the end goal
- •Don’t focus on a tool
- •Keep one person on legacy scripts during migration
- •When: Introducing functional automation to legacy systems
- •Track who is running—and not running—automated checks
- •When: Developers are reluctant to participate
- •Global talent management team at Ultimate Software
- •Sky Network Services
- •Dealing with sign-off and traceability
- •Get sign-off on exported living documentation
- •When: Signing off iteration by iteration
- •When: Signing off longer milestones
- •Get sign-off on “slimmed down use cases”
- •When: Regulatory sign-off requires details
- •Introduce use case realizations
- •When: All details are required for sign-off
- •Warning signs
- •Watch out for tests that change frequently
- •Watch out for boomerangs
- •Watch out for organizational misalignment
- •Watch out for just-in-case code
- •Watch out for shotgun surgery
- •Remember
- •Building the right scope
- •Understand the “why” and “who”
- •Understand where the value is coming from
- •Understand what outputs the business users expect
- •Have developers provide the “I want” part of user stories
- •When: Business users trust the development team
- •Collaborating on scope without high-level control
- •Ask how something would be useful
- •Ask for an alternative solution
- •Make sure teams deliver complete features
- •When: Large multisite projects
- •Further information
- •Remember
- •Why do we need to collaborate on specifications?
- •The most popular collaborative models
- •Try big, all-team workshops
- •Try smaller workshops (“Three Amigos”)
- •Pair-writing
- •When: Mature products
- •Have developers frequently review tests before an iteration
- •When: Analysts writing tests
- •Try informal conversations
- •When: Business stakeholders are readily available
- •Preparing for collaboration
- •Hold introductory meetings
- •When: Project has many stakeholders
- •Involve stakeholders
- •Undertake detailed preparation and review up front
- •When: Remote stakeholders
- •Prepare only initial examples
- •Don’t hinder discussion by overpreparing
- •Choosing a collaboration model
- •Remember
- •Illustrating using examples: an example
- •Examples should be precise
- •Don’t have yes/no answers in your examples
- •Avoid using abstract classes of equivalence
- •Ask for an alternative way to check the functionality
- •When: Complex/legacy infrastructures
- •Examples should be realistic
- •Avoid making up your own data
- •When: Data-driven projects
- •Get basic examples directly from customers
- •When: Working with enterprise customers
- •Examples should be easy to understand
- •Avoid the temptation to explore every combinatorial possibility
- •Look for implied concepts
- •Illustrating nonfunctional requirements
- •Get precise performance requirements
- •When: Performance is a key feature
- •Try the QUPER model
- •When: Sliding scale requirements
- •Use a checklist for discussions
- •When: Cross-cutting concerns
- •Build a reference example
- •When: Requirements are impossible to quantify
- •Remember
- •Free delivery
- •Examples should be precise and testable
- •When: Working on a legacy system
- •Don’t get trapped in user interface details
- •When: Web projects
- •Use a descriptive title and explain the goal using a short paragraph
- •Show and keep quiet
- •Don’t overspecify examples
- •Start with basic examples; then expand through exploring
- •When: Describing rules with many parameter combinations
- •In order to: Make the test easier to understand
- •When: Dealing with complex dependencies/referential integrity
- •Apply defaults in the automation layer
- •Don’t always rely on defaults
- •When: Working with objects with many attributes
- •Remember
- •Is automation required at all?
- •Starting with automation
- •When: Working on a legacy system
- •Plan for automation upfront
- •Don’t postpone or delegate automation
- •Avoid automating existing manual test scripts
- •Gain trust with user interface tests
- •Don’t treat automation code as second-grade code
- •Describe validation processes in the automation layer
- •Don’t replicate business logic in the test automation layer
- •Automate along system boundaries
- •When: Complex integrations
- •Don’t check business logic through the user interface
- •Automate below the skin of the application
- •Automating user interfaces
- •Specify user interface functionality at a higher level of abstraction
- •When: User interface contains complex logic
- •Avoid recorded UI tests
- •Set up context in a database
- •Test data management
- •Avoid using prepopulated data
- •When: Specifying logic that’s not data driven
- •Try using prepopulated reference data
- •When: Data-driven systems
- •Pull prototypes from the database
- •When: Legacy data-driven systems
- •Remember
- •Reducing unreliability
- •When: Working on a system with bad automated test support
- •Identify unstable tests using CI test history
- •Set up a dedicated continuous validation environment
- •Employ fully automated deployment
- •Create simpler test doubles for external systems
- •When: Working with external reference data sources
- •Selectively isolate external systems
- •When: External systems participate in work
- •Try multistage validation
- •When: Large/multisite groups
- •Execute tests in transactions
- •Run quick checks for reference data
- •When: Data-driven systems
- •Wait for events, not for elapsed time
- •Make asynchronous processing optional
- •Getting feedback faster
- •Introduce business time
- •When: Working with temporal constraints
- •Break long test packs into smaller modules
- •Avoid using in-memory databases for testing
- •When: Data-driven systems
- •Separate quick and slow tests
- •When: A small number of tests take most of the time to execute
- •Keep overnight packs stable
- •When: Slow tests run only overnight
- •Create a current iteration pack
- •Parallelize test runs
- •When: You can get more than one test environment
- •Try disabling less risky tests
- •When: Test feedback is very slow
- •Managing failing tests
- •Create a known regression failures pack
- •Automatically check which tests are turned off
- •When: Failing tests are disabled, not moved to a separate pack
- •Remember
- •Living documentation should be easy to understand
- •Look for higher-level concepts
- •Avoid using technical automation concepts in tests
- •When: Stakeholders aren’t technical
- •Living documentation should be consistent
- •When: Web projects
- •Document your building blocks
- •Living documentation should be organized for easy access
- •Organize current work by stories
- •Reorganize stories by functional areas
- •Organize along UI navigation routes
- •When: Documenting user interfaces
- •Organize along business processes
- •When: End-to-end use case traceability required
- •Listen to your living documentation
- •Remember
- •Starting to change the process
- •Optimizing the process
- •The current process
- •The result
- •Key lessons
- •Changing the process
- •The current process
- •Key lessons
- •Changing the process
- •Optimizing the process
- •Living documentation as competitive advantage
- •Key lessons
- •Changing the process
- •Improving collaboration
- •The result
- •Key lessons
- •Changing the process
- •Living documentation
- •Current process
- •Key lessons
- •Changing the process
- •Current process
- •Key lessons
- •Collaboration requires preparation
- •There are many different ways to collaborate
- •Looking at the end goal as business process documentation is a useful model
- •Long-term value comes from living documentation
- •Index
Chapter 9 Automating validation without changing specifications 161
for an object, we can specify only those that are important to locate a good prototype. This makes the specifications easier to understand.
Automating the validation of specifications without changing them is conceptually different from traditional test automation, which is why so many teams struggle with it when they get started with Specification by Example. We automate specifications to get fast feedback, but our primary goal should be to create executable specifications that are easily accessible and human readable, not just to automate a validation process. Once our specifications are executable, we can validate them frequently to build a living documentation system. We’ll cover those ideas in the next two chapters.
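To picture the split between business-readable specifications and the layer that executes them, here is a minimal sketch in Python. It uses the book’s loyalty-system example (VIP customers earning free delivery), but the class names, the `InMemorySystem` stand-in, and the five-book threshold are invented for illustration, not taken from any real implementation:

```python
# Hypothetical automation layer: the specification speaks in business
# terms ("a VIP customer who ordered 5 books gets free delivery");
# this layer decides HOW that is checked against the system under test.

DEFAULT_CUSTOMER = {"name": "Test Customer", "vip": False, "books_ordered": 0}

class LoyaltyAppDriver:
    """Translates business-language steps into calls on the system under test."""

    def __init__(self, system):
        self.system = system  # a real API client, or an in-memory fake

    def given_customer(self, **important_attributes):
        # Apply defaults for everything the example doesn't mention,
        # so specifications list only the attributes that matter.
        customer = {**DEFAULT_CUSTOMER, **important_attributes}
        return self.system.create_customer(customer)

    def qualifies_for_free_delivery(self, customer_id):
        return self.system.delivery_quote(customer_id) == 0

class InMemorySystem:
    """Stand-in for the real application, just enough to run the example."""
    def __init__(self):
        self.customers = {}
    def create_customer(self, customer):
        customer_id = len(self.customers) + 1
        self.customers[customer_id] = customer
        return customer_id
    def delivery_quote(self, customer_id):
        c = self.customers[customer_id]
        return 0 if c["vip"] and c["books_ordered"] >= 5 else 10

driver = LoyaltyAppDriver(InMemorySystem())
vip = driver.given_customer(vip=True, books_ordered=5)
print(driver.qualifies_for_free_delivery(vip))  # prints True
```

The point of the sketch is the division of labor: if the free-delivery rule moves from the checkout API to a pricing service, only `LoyaltyAppDriver` changes, while the specifications stay untouched.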
Remember
•Refined specifications should be automated with as little change as possible.
•The automation layer should define how something is tested; specifications should define what is to be tested.
•Use the automation layer to translate between the business language and user interface concepts, APIs, and databases. Create higher-level reusable components for specifications.
•Automate below the user interface if possible.
•Don’t rely too much on existing data if you don’t have to.
10
Validating frequently
“Stray too far out of your lane and your attention is immediately riveted by a loud, vibrating baloop baloop baloop.”
In the 1950s, the California Department of Transportation had a problem with motorway lane markers. The lines were wearing out, and someone had to repaint them every season. This was costly, caused disruption to traffic, and was dangerous for the people charged with that task.
Dr. Elbert Dysart Botts worked on solving that problem and experimented with more-reflective paint, but this proved to be a dead end. Thinking outside the box, he invented raised lane markers, called Botts’ Dots. Botts’ Dots were visible by day or night, regardless of the weather. They didn’t wear out as easily as painted lane markers. Instead of relying just on drivers’ sense of sight, Botts’ Dots cause a tactile vibration and audible rumbling when drivers move across designated travel lanes. This feedback proved to be one of the most important safety features on highways, alerting inattentive drivers to potential danger when they drift from their lane.
Botts’ Dots were introduced to software development as one of the original 12 Extreme Programming practices, called continuous integration (CI). Continuous integration alerts inattentive software teams when they start to drift from a product that can be built and packaged. A dedicated continuous integration system frequently builds the product and runs tests to ensure that the system doesn’t work only on a developer’s machine. By flagging potential problems quickly, this practice allows us to stay in the middle of the lane and take small and cheap corrective action when needed. Continuous integration ensures that once the product is built right, it stays right.
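Mechanically, a CI server is little more than a loop that notices changes, rebuilds, and re-runs the tests; everything else is scheduling and reporting. The following toy sketch makes that explicit, with the version-control history and the build and test steps faked out as plain callables for illustration:

```python
# Toy continuous-integration loop: for each new revision, rebuild and
# run the tests, flagging failures immediately instead of letting
# problems accumulate.

def run_ci(revisions, build, run_tests):
    """revisions: iterable of commit ids; build/run_tests: callables
    returning True on success. Returns a list of (revision, status)."""
    results = []
    for revision in revisions:
        if not build(revision):
            results.append((revision, "build broken"))
            continue  # no point running tests against a broken build
        status = "green" if run_tests(revision) else "tests failing"
        results.append((revision, status))
    return results

# Fake project history: revision 2 introduces a failing test.
history = run_ci(
    revisions=[1, 2, 3],
    build=lambda rev: True,
    run_tests=lambda rev: rev != 2,
)
print(history)  # [(1, 'green'), (2, 'tests failing'), (3, 'green')]
```

A real CI server (CruiseControl, Hudson, TeamCity, and their successors) adds polling or commit hooks, isolated build environments, and notifications, but the feedback loop is the same.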
1 http://articles.latimes.com/1997-03-07/local/me-35781_1_botts-dots
The same principles apply to building the right product. Once the right product is built, we want to ensure that it stays right. If it drifts from the designated direction, we can solve the problem much more easily and cheaply if we know about it quickly and don’t let problems accumulate. We can frequently validate executable specifications. A continuous-build server2 can frequently check all the specifications and ensure that the system still satisfies them.
Continuous integration is a well-documented software practice, and many other authors have already done a good job of explaining it in detail. I don’t wish to repeat how to set up continuous-build and integration systems in general, but some particular challenges for frequent validation of executable specifications are important for the topic of this book.
Many teams who used Specification by Example to extend existing systems found that executable specifications have to run against a real database with realistic data, external services, or a fully deployed website. Functional acceptance tests check the functionality across many components, and if the system isn’t built up front to be testable, such checks often require the entire system to be integrated and deployed. This causes three groups of problems for frequent validation in addition to those usual for continuous integration with just technical (unit) tests:
•Unreliability caused by environmental dependencies—Unit tests are largely independent of the test environment, but executable specifications might depend heavily on the rest of the ecosystem they run in. Environment issues can cause the tests to fail even if the programming language code is correct. To gain confidence in acceptance test results, we have to solve or mitigate these environmental problems and make the test execution reliable.
•Slower feedback—Functional acceptance tests on brownfield projects will often run an order of magnitude slower than unit tests. I’d consider a unit test pack slow if it runs for several minutes. Anything close to 10 minutes would be definitely too slow, and I’d seriously start investigating how to make it run faster. On the other hand, I’ve seen acceptance test packs that had to run for several hours and couldn’t be optimized without reducing confidence in the results. Such slow overall feedback requires us to come up with a solution to get fast feedback from selected parts of the system on demand.
•Managing failed tests—A large number of coarse-grained functional tests that depend on many moving parts required some teams, especially when they started implementing Specification by Example, to manage failing tests instead of fixing them straight away.
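The first group of problems is often mitigated by swapping a flaky external dependency for a simpler, deterministic test double behind the same interface. A minimal sketch in Python; the exchange-rate service and its interface are invented for illustration:

```python
# A simple test double for an external reference-data service, so that
# executable specifications don't fail just because the real service
# is slow, unavailable, or returning different data today.

class ExchangeRateServiceDouble:
    """Stands in for a remote rate service; returns canned,
    deterministic rates instead of making network calls."""
    def __init__(self, rates):
        self.rates = rates
    def rate(self, from_currency, to_currency):
        return self.rates[(from_currency, to_currency)]

def convert(amount, from_currency, to_currency, rate_service):
    # Production code depends only on the service interface, so a
    # double can be injected when executable specifications run.
    return round(amount * rate_service.rate(from_currency, to_currency), 2)

double = ExchangeRateServiceDouble({("GBP", "USD"): 1.25})
print(convert(100, "GBP", "USD", double))  # prints 125.0
```

Because the double is injected rather than hard-wired, the same specification can later run against the real service in a separate, slower integration pack.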
2 Software that automatically builds, packages, and executes tests when anyone changes any code in the version control system. If you’ve never heard of one, google CruiseControl, Hudson, or TeamCity.