- •Contents
- •Preface
- •Acknowledgments
- •About the author
- •About the cover illustration
- •Higher product quality
- •Less rework
- •Better work alignment
- •Remember
- •Deriving scope from goals
- •Specifying collaboratively
- •Illustrating using examples
- •Validating frequently
- •Evolving a documentation system
- •A practical example
- •Business goal
- •An example of a good business goal
- •Scope
- •User stories for a basic loyalty system
- •Key examples
- •Key examples: Free delivery
- •Free delivery
- •Examples
- •Living documentation
- •Remember
- •Tests can be good documentation
- •Remember
- •How to begin changing the process
- •Focus on improving quality
- •Start with functional test automation
- •When: Testers own test automation
- •Use test-driven development as a stepping stone
- •When: Developers have a good understanding of TDD
- •How to begin changing the team culture
- •Avoid “agile” terminology
- •When: Working in an environment that’s resistant to change
- •Ensure you have management support
- •Don’t make test automation the end goal
- •Don’t focus on a tool
- •Keep one person on legacy scripts during migration
- •When: Introducing functional automation to legacy systems
- •Track who is running—and not running—automated checks
- •When: Developers are reluctant to participate
- •Global talent management team at Ultimate Software
- •Sky Network Services
- •Dealing with sign-off and traceability
- •Get sign-off on exported living documentation
- •When: Signing off iteration by iteration
- •When: Signing off longer milestones
- •Get sign-off on “slimmed down use cases”
- •When: Regulatory sign-off requires details
- •Introduce use case realizations
- •When: All details are required for sign-off
- •Warning signs
- •Watch out for tests that change frequently
- •Watch out for boomerangs
- •Watch out for organizational misalignment
- •Watch out for just-in-case code
- •Watch out for shotgun surgery
- •Remember
- •Building the right scope
- •Understand the “why” and “who”
- •Understand where the value is coming from
- •Understand what outputs the business users expect
- •Have developers provide the “I want” part of user stories
- •When: Business users trust the development team
- •Collaborating on scope without high-level control
- •Ask how something would be useful
- •Ask for an alternative solution
- •Make sure teams deliver complete features
- •When: Large multisite projects
- •Further information
- •Remember
- •Why do we need to collaborate on specifications?
- •The most popular collaborative models
- •Try big, all-team workshops
- •Try smaller workshops (“Three Amigos”)
- •Pair-writing
- •When: Mature products
- •Have developers frequently review tests before an iteration
- •When: Analysts writing tests
- •Try informal conversations
- •When: Business stakeholders are readily available
- •Preparing for collaboration
- •Hold introductory meetings
- •When: Project has many stakeholders
- •Involve stakeholders
- •Undertake detailed preparation and review up front
- •When: Remote stakeholders
- •Prepare only initial examples
- •Don’t hinder discussion by overpreparing
- •Choosing a collaboration model
- •Remember
- •Illustrating using examples: an example
- •Examples should be precise
- •Don’t have yes/no answers in your examples
- •Avoid using abstract classes of equivalence
- •Ask for an alternative way to check the functionality
- •When: Complex/legacy infrastructures
- •Examples should be realistic
- •Avoid making up your own data
- •When: Data-driven projects
- •Get basic examples directly from customers
- •When: Working with enterprise customers
- •Examples should be easy to understand
- •Avoid the temptation to explore every combinatorial possibility
- •Look for implied concepts
- •Illustrating nonfunctional requirements
- •Get precise performance requirements
- •When: Performance is a key feature
- •Try the QUPER model
- •When: Sliding scale requirements
- •Use a checklist for discussions
- •When: Cross-cutting concerns
- •Build a reference example
- •When: Requirements are impossible to quantify
- •Remember
- •Free delivery
- •Examples should be precise and testable
- •When: Working on a legacy system
- •Don’t get trapped in user interface details
- •When: Web projects
- •Use a descriptive title and explain the goal using a short paragraph
- •Show and keep quiet
- •Don’t overspecify examples
- •Start with basic examples; then expand through exploring
- •When: Describing rules with many parameter combinations
- •In order to: Make the test easier to understand
- •When: Dealing with complex dependencies/referential integrity
- •Apply defaults in the automation layer
- •Don’t always rely on defaults
- •When: Working with objects with many attributes
- •Remember
- •Is automation required at all?
- •Starting with automation
- •When: Working on a legacy system
- •Plan for automation upfront
- •Don’t postpone or delegate automation
- •Avoid automating existing manual test scripts
- •Gain trust with user interface tests
- •Don’t treat automation code as second-grade code
- •Describe validation processes in the automation layer
- •Don’t replicate business logic in the test automation layer
- •Automate along system boundaries
- •When: Complex integrations
- •Don’t check business logic through the user interface
- •Automate below the skin of the application
- •Automating user interfaces
- •Specify user interface functionality at a higher level of abstraction
- •When: User interface contains complex logic
- •Avoid recorded UI tests
- •Set up context in a database
- •Test data management
- •Avoid using prepopulated data
- •When: Specifying logic that’s not data driven
- •Try using prepopulated reference data
- •When: Data-driven systems
- •Pull prototypes from the database
- •When: Legacy data-driven systems
- •Remember
- •Reducing unreliability
- •When: Working on a system with bad automated test support
- •Identify unstable tests using CI test history
- •Set up a dedicated continuous validation environment
- •Employ fully automated deployment
- •Create simpler test doubles for external systems
- •When: Working with external reference data sources
- •Selectively isolate external systems
- •When: External systems participate in work
- •Try multistage validation
- •When: Large/multisite groups
- •Execute tests in transactions
- •Run quick checks for reference data
- •When: Data-driven systems
- •Wait for events, not for elapsed time
- •Make asynchronous processing optional
- •Getting feedback faster
- •Introduce business time
- •When: Working with temporal constraints
- •Break long test packs into smaller modules
- •Avoid using in-memory databases for testing
- •When: Data-driven systems
- •Separate quick and slow tests
- •When: A small number of tests take most of the time to execute
- •Keep overnight packs stable
- •When: Slow tests run only overnight
- •Create a current iteration pack
- •Parallelize test runs
- •When: You can get more than one test environment
- •Try disabling less risky tests
- •When: Test feedback is very slow
- •Managing failing tests
- •Create a known regression failures pack
- •Automatically check which tests are turned off
- •When: Failing tests are disabled, not moved to a separate pack
- •Remember
- •Living documentation should be easy to understand
- •Look for higher-level concepts
- •Avoid using technical automation concepts in tests
- •When: Stakeholders aren’t technical
- •Living documentation should be consistent
- •When: Web projects
- •Document your building blocks
- •Living documentation should be organized for easy access
- •Organize current work by stories
- •Reorganize stories by functional areas
- •Organize along UI navigation routes
- •When: Documenting user interfaces
- •Organize along business processes
- •When: End-to-end use case traceability required
- •Listen to your living documentation
- •Remember
- •Starting to change the process
- •Optimizing the process
- •The current process
- •The result
- •Key lessons
- •Changing the process
- •The current process
- •Key lessons
- •Changing the process
- •Optimizing the process
- •Living documentation as competitive advantage
- •Key lessons
- •Changing the process
- •Improving collaboration
- •The result
- •Key lessons
- •Changing the process
- •Living documentation
- •Current process
- •Key lessons
- •Changing the process
- •Current process
- •Key lessons
- •Collaboration requires preparation
- •There are many different ways to collaborate
- •Looking at the end goal as business process documentation is a useful model
- •Long-term value comes from living documentation
- •Index
Chapter 4 Initiating the changes
related to releasing a product. At LMAX, an electronic trading exchange, Jodie Parker made all the release activities visible by creating a release candidate board, which showed a high-level view of the three teams’ progress. It featured the status of all the planned deliverables, the main focus of the release, a set of tasks that would have to be done before a release, and critical issues that would have to be addressed before a release. All the teams could see this information and then come up with suggestions for improving delivery flow.
Start with functional test automation
When: Applying to an existing project
The majority of teams I interviewed adopted Specification by Example by starting with functional test automation and then gradually moving from testing after development to using executable specifications to guide development. That seems to be the path of least resistance for projects that already have a lot of code and that require testers to run their verifications manually.
Several teams sought to solve the problem of bottlenecks at the testing phase, a result of testers having to constantly catch up with development. With short delivery cycles (weeks or even days), extensive manual testing is impossible. Testing then piles up at the end of one iteration and spills over into the next, disrupting flow. Functional test automation removes the bottleneck and engages testers with developers, motivating them to participate in the process change. Markus Gärtner says:
For a tester who comes from “testing is the bottleneck” and continuous fighting against development changes, it was very, very, very appealing to provide valuable feedback even before a bug was fixed with automated tests. This is a motivating vision to work toward.
If you don’t already have functional test automation, know that this is a low-hanging fruit: an easy way to start the journey to Specification by Example that provides immediate benefits.
Automating functional tests works well as a first phase of adoption of Specification by Example, for several reasons:
•It brings immediate benefit. With automated tests, the length of the testing phase is significantly reduced, as is the number of issues escaping to production.
•Effective test automation requires a collaboration of developers and testers and starts to break down the divide between these two groups.
•Legacy products rarely have a design that supports easy testing. Starting with functional test automation forces the team to address this and make the architecture more testable, as well as sort out any issues around test reliability and test environments. This prepares the ground for automating executable specifications later.
•When most testing is manual and the team works in short cycles, testers are often a bottleneck in the process. This makes it virtually impossible for them to engage in anything else. Test automation gives them time to participate in specification workshops and start looking at other activities, such as exploratory testing.
•Initial test automation enables the team to run more tests, and run them more frequently, than manual testing. This often flushes out bugs and inconsistencies, and the sudden increase in visibility helps business stakeholders see the value of test automation.
•Writing and initially automating functional tests often requires involvement of business users, who have to decide whether an inconsistency is a bug or the way the system should work. This leads to collaboration among testers, developers, and business users. It also requires the team to find ways to automate tests so business users can understand them, preparing the way for executable specifications.
•Faster feedback helps developers see the value of test automation.
•Automating functional tests helps team members understand the tools required to automate executable specifications.
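The kind of automated check business users can read is easiest to see concretely. Below is a minimal sketch, assuming the book’s running free-delivery example (VIP customers ordering a minimum number of books); the rule, the `qualifies_for_free_delivery` function, and the example values are illustrative assumptions, not code from any team described here:

```python
# A hypothetical free-delivery rule: VIP customers ordering at least
# five books qualify. The rule and function name are illustrative.
def qualifies_for_free_delivery(customer_type: str, books_in_order: int) -> bool:
    """Free delivery applies to VIP customers ordering five or more books."""
    return customer_type == "VIP" and books_in_order >= 5

# Key examples laid out as a table a business user could review directly,
# rather than buried in test-script plumbing.
EXAMPLES = [
    # customer_type, books_in_order, free delivery expected?
    ("VIP",      5, True),
    ("VIP",      4, False),
    ("Regular",  5, False),
    ("Regular", 10, False),
]

def test_free_delivery_examples():
    for customer_type, books, expected in EXAMPLES:
        actual = qualifies_for_free_delivery(customer_type, books)
        assert actual == expected, (
            f"{customer_type} with {books} books: "
            f"expected {expected}, got {actual}"
        )

if __name__ == "__main__":
    test_free_delivery_examples()
    print("all examples pass")
```

The point of the table layout is that a business user deciding whether an inconsistency is a bug can read the examples without understanding the automation code beneath them.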
Isn’t this just shifting work?
A common objection to freeing up testers by getting programmers to collaborate on test automation is that programmers will have more work and this will slow down delivery of functionality. In fact, the general trend in the industry is for teams to have more programmers than testers, so moving work from testers to developers is not necessarily bad; it might remove a bottleneck in your process.
Implementing functional test automation will get the team to work more closely together and prepare the system for the use of executable specifications later. To get the most out of this approach, automate functional tests using a tool for executable specifications and design them well, using the ideas from chapter 9 and chapter 11. Automating tests using traditional tester record-and-replay tools won’t give you the benefits you need.
Start by automating high-risk parts of the system
Working to cover all of a legacy system with automated tests is a futile effort. If you use functional test automation as a step to Specification by Example, develop enough tests to show the value of test automation and get used to the tools. After that, start implementing executable specifications for changes as they come and build up test coverage gradually.

To get the most out of initial functional test automation, focus on automating the parts of the system that are risky; these are the areas where problems can cost you a lot of money. Preventing issues there will demonstrate immediate value. Good functional test coverage will give your team more confidence. The benefits from automating parts with less risk are probably not as noteworthy.†

† See http://gojko.net/2011/02/08/test-automation-strategy-for-legacy-systems
Introduce a tool for executable specifications
When: Testers own test automation
On projects that already have functional test automation fully owned by testers, a key challenge is to break the imaginary wall between testers and developers. There’s no need to prove the value of test automation or to flush out issues around test environments, but the mindset of the team has to become more collaborative.
This problem is mostly cultural (more on that soon), but sometimes it’s also financial. With an expensive test automation framework such as QTP, licensed per seat, developers and business analysts are intentionally kept far from the tests. Once the attitude toward collaboration changes, a team can work together on specifications and automate the validation without changing them.
Several teams got pushed in this direction when they had a problem that couldn’t be appropriately tested with their existing automation toolkit. They used this situation as a good excuse to start using one of the automation tools for executable specifications. (See the “Tools” section in the appendix for examples, or additional articles on tools at http://specificationbyexample.com.)
Teams discovered that using an automation tool for executable specifications got developers more engaged in test automation and provided business users with a greater understanding of the tests.
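What such a tool does can be sketched in miniature: it binds a business-readable, plain-text example table to automation code and reports any rows where the system disagrees. Real tools such as FitNesse or Cucumber do this far more robustly; the spec text, free-delivery rule, and column names below are hypothetical, reusing the book’s running example:

```python
# A hand-rolled miniature of an executable-specification runner.
# The pipe-separated table is the business-readable specification;
# run_spec() binds it to the system under test.
SPEC = """\
customer_type | books_in_order | free_delivery
VIP           | 5              | yes
VIP           | 4              | no
Regular       | 5              | no
"""

def qualifies_for_free_delivery(customer_type: str, books_in_order: int) -> bool:
    # Illustrative system under test: VIPs ordering 5+ books qualify.
    return customer_type == "VIP" and books_in_order >= 5

def run_spec(spec_text: str) -> list:
    """Parse the table and return the rows the system fails to satisfy."""
    lines = [line for line in spec_text.strip().splitlines() if line.strip()]
    header = [cell.strip() for cell in lines[0].split("|")]
    failures = []
    for line in lines[1:]:
        row = dict(zip(header, (cell.strip() for cell in line.split("|"))))
        expected = row["free_delivery"] == "yes"
        actual = qualifies_for_free_delivery(row["customer_type"],
                                             int(row["books_in_order"]))
        if actual != expected:
            failures.append(row)
    return failures

if __name__ == "__main__":
    assert run_spec(SPEC) == [], "living documentation is out of date"
    print("specification validated")
```

Because the table is the artifact both sides read, developers stay engaged with the automation while business users can review the examples themselves.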