
- •Practical Unit Testing with JUnit and Mockito
- •Table of Contents
- •About the Author
- •Acknowledgments
- •Preface
- •Preface - JUnit
- •Part I. Developers' Tests
- •Chapter 1. On Tests and Tools
- •1.1. An Object-Oriented System
- •1.2. Types of Developers' Tests
- •1.2.1. Unit Tests
- •1.2.2. Integration Tests
- •1.2.3. End-to-End Tests
- •1.2.4. Examples
- •1.2.5. Conclusions
- •1.3. Verification and Design
- •1.5. Tools Introduction
- •Chapter 2. Unit Tests
- •2.1. What is a Unit Test?
- •2.2. Interactions in Unit Tests
- •2.2.1. State vs. Interaction Testing
- •2.2.2. Why Worry about Indirect Interactions?
- •Part II. Writing Unit Tests
- •3.2. Class To Test
- •3.3. Your First JUnit Test
- •3.3.1. Test Results
- •3.4. JUnit Assertions
- •3.5. Failing Test
- •3.6. Parameterized Tests
- •3.6.1. The Problem
- •3.6.2. The Solution
- •3.6.3. Conclusions
- •3.7. Checking Expected Exceptions
- •3.8. Test Fixture Setting
- •3.8.1. Test Fixture Examples
- •3.8.2. Test Fixture in Every Test Method
- •3.8.3. JUnit Execution Model
- •3.8.4. Annotations for Test Fixture Creation
- •3.9. Phases of a Unit Test
- •3.10. Conclusions
- •3.11. Exercises
- •3.11.1. JUnit Run
- •3.11.2. String Reverse
- •3.11.3. HashMap
- •3.11.4. Fahrenheit to Celsius with Parameterized Tests
- •3.11.5. Master Your IDE
- •Templates
- •Quick Navigation
- •Chapter 4. Test Driven Development
- •4.1. When to Write Tests?
- •4.1.1. Test Last (AKA Code First) Development
- •4.1.2. Test First Development
- •4.1.3. Always after a Bug is Found
- •4.2. TDD Rhythm
- •4.2.1. RED - Write a Test that Fails
- •How To Choose the Next Test To Write
- •Readable Assertion Message
- •4.2.2. GREEN - Write the Simplest Thing that Works
- •4.2.3. REFACTOR - Improve the Code
- •Refactoring the Tests
- •Adding Javadocs
- •4.2.4. Here We Go Again
- •4.3. Benefits
- •4.4. TDD is Not Only about Unit Tests
- •4.5. Test First Example
- •4.5.1. The Problem
- •4.5.2. RED - Write a Failing Test
- •4.5.3. GREEN - Fix the Code
- •4.5.4. REFACTOR - Even If Only a Little Bit
- •4.5.5. First Cycle Finished
- •‘The Simplest Thing that Works’ Revisited
- •4.5.6. More Test Cases
- •But is It Comparable?
- •Comparison Tests
- •4.6. Conclusions and Comments
- •4.7. How to Start Coding TDD
- •4.8. When not To Use Test-First?
- •4.9. Should I Follow It Blindly?
- •4.9.1. Write Good Assertion Messages from the Beginning
- •4.9.2. If the Test Passes "By Default"
- •4.10. Exercises
- •4.10.1. Password Validator
- •4.10.2. Regex
- •4.10.3. Booking System
- •Chapter 5. Mocks, Stubs, Test Spies
- •5.1. Introducing Mockito
- •5.1.1. Creating Test Doubles
- •5.1.2. Expectations
- •5.1.3. Verification
- •5.1.4. Conclusions
- •5.2. Types of Test Double
- •5.2.1. Code To Be Tested with Test Doubles
- •5.2.2. The Dummy Object
- •5.2.3. Test Stub
- •5.2.4. Test Spy
- •5.2.5. Mock
- •5.3. Putting it All Together
- •5.4. Example: TDD with Test Doubles
- •5.4.2. The Second Test: Send a Message to Multiple Subscribers
- •Refactoring
- •5.4.3. The Third Test: Send Messages to Subscribers Only
- •5.4.4. The Fourth Test: Subscribe More Than Once
- •Mockito: How Many Times?
- •5.4.5. The Fifth Test: Remove a Subscriber
- •5.4.6. TDD and Test Doubles - Conclusions
- •More Test Code than Production Code
- •The Interface is What Really Matters
- •Interactions Can Be Tested
- •Some Test Doubles are More Useful than Others
- •5.5. Always Use Test Doubles… or Maybe Not?
- •5.5.1. No Test Doubles
- •5.5.2. Using Test Doubles
- •No Winner So Far
- •5.5.3. A More Complicated Example
- •5.5.4. Use Test Doubles or Not? - Conclusion
- •5.6. Conclusions (with a Warning)
- •5.7. Exercises
- •5.7.1. User Service Tested
- •5.7.2. Race Results Enhanced
- •5.7.3. Booking System Revisited
- •5.7.4. Read, Read, Read!
- •Part III. Hints and Discussions
- •Chapter 6. Things You Should Know
- •6.1. What Values To Check?
- •6.1.1. Expected Values
- •6.1.2. Boundary Values
- •6.1.3. Strange Values
- •6.1.4. Should You Always Care?
- •6.1.5. Not Only Input Parameters
- •6.2. How to Fail a Test?
- •6.3. How to Ignore a Test?
- •6.4. More about Expected Exceptions
- •6.4.1. The Expected Exception Message
- •6.4.2. Catch-Exception Library
- •6.4.3. Testing Exceptions And Interactions
- •6.4.4. Conclusions
- •6.5. Stubbing Void Methods
- •6.6. Matchers
- •6.6.1. JUnit Support for Matcher Libraries
- •6.6.2. Comparing Matcher with "Standard" Assertions
- •6.6.3. Custom Matchers
- •6.6.4. Advantages of Matchers
- •6.7. Mockito Matchers
- •6.7.1. Hamcrest Matchers Integration
- •6.7.2. Matchers Warning
- •6.8. Rules
- •6.8.1. Using Rules
- •6.8.2. Writing Custom Rules
- •6.9. Unit Testing Asynchronous Code
- •6.9.1. Waiting for the Asynchronous Task to Finish
- •6.9.2. Making Asynchronous Synchronous
- •6.9.3. Conclusions
- •6.10. Testing Thread Safe
- •6.10.1. ID Generator: Requirements
- •6.10.2. ID Generator: First Implementation
- •6.10.3. ID Generator: Second Implementation
- •6.10.4. Conclusions
- •6.11. Time is not on Your Side
- •6.11.1. Test Every Date (Within Reason)
- •6.11.2. Conclusions
- •6.12. Testing Collections
- •6.12.1. The TDD Approach - Step by Step
- •6.12.2. Using External Assertions
- •Unitils
- •Testing Collections Using Matchers
- •6.12.3. Custom Solution
- •6.12.4. Conclusions
- •6.13. Reading Test Data From Files
- •6.13.1. CSV Files
- •6.13.2. Excel Files
- •6.14. Conclusions
- •6.15. Exercises
- •6.15.1. Design Test Cases: State Testing
- •6.15.2. Design Test Cases: Interactions Testing
- •6.15.3. Test Collections
- •6.15.4. Time Testing
- •6.15.5. Redesign of the TimeProvider class
- •6.15.6. Write a Custom Matcher
- •6.15.7. Preserve System Properties During Tests
- •6.15.8. Enhance the RetryTestRule
- •6.15.9. Make an ID Generator Bulletproof
- •Chapter 7. Points of Controversy
- •7.1. Access Modifiers
- •7.2. Random Values in Tests
- •7.2.1. Random Object Properties
- •7.2.2. Generating Multiple Test Cases
- •7.2.3. Conclusions
- •7.3. Is Set-up the Right Thing for You?
- •7.4. How Many Assertions per Test Method?
- •7.4.1. Code Example
- •7.4.2. Pros and Cons
- •7.4.3. Conclusions
- •7.5. Private Methods Testing
- •7.5.1. Verification vs. Design - Revisited
- •7.5.2. Options We Have
- •7.5.3. Private Methods Testing - Techniques
- •Reflection
- •Access Modifiers
- •7.5.4. Conclusions
- •7.6. New Operator
- •7.6.1. PowerMock to the Rescue
- •7.6.2. Redesign and Inject
- •7.6.3. Refactor and Subclass
- •7.6.4. Partial Mocking
- •7.6.5. Conclusions
- •7.7. Capturing Arguments to Collaborators
- •7.8. Conclusions
- •7.9. Exercises
- •7.9.1. Testing Legacy Code
- •Part IV. Listen and Organize
- •Chapter 8. Getting Feedback
- •8.1. IDE Feedback
- •8.1.1. Eclipse Test Reports
- •8.1.2. IntelliJ IDEA Test Reports
- •8.1.3. Conclusion
- •8.2. JUnit Default Reports
- •8.3. Writing Custom Listeners
- •8.4. Readable Assertion Messages
- •8.4.1. Add a Custom Assertion Message
- •8.4.2. Implement the toString() Method
- •8.4.3. Use the Right Assertion Method
- •8.5. Logging in Tests
- •8.6. Debugging Tests
- •8.7. Notifying The Team
- •8.8. Conclusions
- •8.9. Exercises
- •8.9.1. Study Test Output
- •8.9.2. Enhance the Custom Rule
- •8.9.3. Custom Test Listener
- •8.9.4. Debugging Session
- •Chapter 9. Organization Of Tests
- •9.1. Package for Test Classes
- •9.2. Name Your Tests Consistently
- •9.2.1. Test Class Names
- •Splitting Up Long Test Classes
- •Test Class Per Feature
- •9.2.2. Test Method Names
- •9.2.3. Naming of Test-Double Variables
- •9.3. Comments in Tests
- •9.4. BDD: ‘Given’, ‘When’, ‘Then’
- •9.4.1. Testing BDD-Style
- •9.4.2. Mockito BDD-Style
- •9.5. Reducing Boilerplate Code
- •9.5.1. One-Liner Stubs
- •9.5.2. Mockito Annotations
- •9.6. Creating Complex Objects
- •9.6.1. Mummy Knows Best
- •9.6.2. Test Data Builder
- •9.6.3. Conclusions
- •9.7. Conclusions
- •9.8. Exercises
- •9.8.1. Test Fixture Setting
- •9.8.2. Test Data Builder
- •Part V. Make Them Better
- •Chapter 10. Maintainable Tests
- •10.1. Test Behaviour, not Methods
- •10.2. Complexity Leads to Bugs
- •10.3. Follow the Rules or Suffer
- •10.3.1. Real Life is Object-Oriented
- •10.3.2. The Non-Object-Oriented Approach
- •Do We Need Mocks?
- •10.3.3. The Object-Oriented Approach
- •10.3.4. How To Deal with Procedural Code?
- •10.3.5. Conclusions
- •10.4. Rewriting Tests when the Code Changes
- •10.4.1. Avoid Overspecified Tests
- •10.4.2. Are You Really Coding Test-First?
- •10.4.3. Conclusions
- •10.5. Things Too Simple To Break
- •10.6. Conclusions
- •10.7. Exercises
- •10.7.1. A Car is a Sports Car if …
- •10.7.2. Stack Test
- •Chapter 11. Test Quality
- •11.1. An Overview
- •11.2. Static Analysis Tools
- •11.3. Code Coverage
- •11.3.1. Line and Branch Coverage
- •11.3.2. Code Coverage Reports
- •11.3.3. The Devil is in the Details
- •11.3.4. How Much Code Coverage is Good Enough?
- •11.3.5. Conclusion
- •11.4. Mutation Testing
- •11.4.1. How does it Work?
- •11.4.2. Working with PIT
- •11.4.3. Conclusions
- •11.5. Code Reviews
- •11.5.1. A Three-Minute Test Code Review
- •Size Heuristics
- •But do They Run?
- •Check Code Coverage
- •Conclusions
- •11.5.2. Things to Look For
- •Easy to Understand
- •Documented
- •Are All the Important Scenarios Verified?
- •Run Them
- •Date Testing
- •11.5.3. Conclusions
- •11.6. Refactor Your Tests
- •11.6.1. Use Meaningful Names - Everywhere
- •11.6.2. Make It Understandable at a Glance
- •11.6.3. Make Irrelevant Data Clearly Visible
- •11.6.4. Do not Test Many Things at Once
- •11.6.5. Change Order of Methods
- •11.7. Conclusions
- •11.8. Exercises
- •11.8.1. Clean this Mess
- •Appendix A. Automated Tests
- •A.1. Wasting Your Time by not Writing Tests
- •A.1.1. And what about Human Testers?
- •A.1.2. One More Benefit: A Documentation that is Always Up-To-Date
- •A.2. When and Where Should Tests Run?
- •Appendix B. Running Unit Tests
- •B.1. Running Tests with Eclipse
- •B.1.1. Debugging Tests with Eclipse
- •B.2. Running Tests with IntelliJ IDEA
- •B.2.1. Debugging Tests with IntelliJ IDEA
- •B.3. Running Tests with Gradle
- •B.3.1. Using JUnit Listeners with Gradle
- •B.3.2. Adding JARs to Gradle’s Tests Classpath
- •B.4. Running Tests with Maven
- •B.4.1. Using JUnit Listeners and Reporters with Maven
- •B.4.2. Adding JARs to Maven’s Tests Classpath
- •Appendix C. Test Spy vs. Mock
- •C.1. Different Flow - and Who Asserts?
- •C.2. Stop with the First Error
- •C.3. Stubbing
- •C.4. Forgiveness
- •C.5. Different Threads or Containers
- •C.6. Conclusions
- •Appendix D. Where Should I Go Now?
- •Bibliography
- •Glossary
- •Index
- •Thank You!

Chapter 2. Unit Tests
2.2. Interactions in Unit Tests
To understand what should be tested by unit tests, and how, we need to take a closer look at the interactions between the test class and the SUT, and between the SUT and its DOCs[1].
First, some theory in the form of a diagram. Figure 2.1 shows possible interactions between an SUT and other entities.
Figure 2.1. Types of collaboration with an SUT
Two interactions are direct, and involve the SUT and its client (a test class, in this case). These two are very easy to act upon - they are directly "available" from within the test code. The other two interactions are indirect: they involve the SUT and its DOCs. In this case, the client (a test class) has no way of directly controlling the interactions.
Another possible classification divides up interactions into inputs (the SUT receiving some message) and outputs (the SUT sending a message). When testing, we will use direct and indirect inputs to set the SUT in a required state and to invoke its methods. The direct and indirect outputs of the SUT are expressions of the SUT’s behaviour; this means we shall use them to verify whether the SUT is working properly.
Table 2.1 summarizes the types of possible collaboration between an SUT and DOCs. The first column - "type of interaction" - describes the type of collaboration from the SUT’s point of view. A test class acts as a client (someone who uses the SUT); hence its appearance in the "involved parties" column.
Table 2.1. Types of collaboration with an SUT within test code
| type of interaction | involved parties | description |
|---|---|---|
| direct input | Test class & SUT | Calls to the methods of the SUT’s API. |
| direct output | Test class & SUT | Values returned by the SUT to the test class after calling some SUT method. |
| indirect output | SUT & DOCs | Arguments passed by the SUT to a method of one of its collaborators. |
| indirect input | SUT & DOCs | Value returned (or an exception thrown) to the SUT by collaborators, after it called some of their methods. |
[1] An SUT is a thing being tested; DOCs are its collaborators. Both terms are introduced in Section 1.2.
A code example will make all of this clear. Let’s imagine some financial service (a FinancialService class) which, based on the client’s last payment and the client’s type (whatever that would be), calculates some "bonus".
Listing 2.1. Example class to present various types of interaction in unit tests
public class FinancialService {

    // definition of fields and other methods omitted

    public BigDecimal calculateBonus(long clientId, BigDecimal payment) {
        Short clientType = clientDAO.getClientType(clientId);
        BigDecimal bonus = calculator.calculateBonus(clientType, payment);
        clientDAO.saveBonusHistory(clientId, bonus);
        return bonus;
    }
}
As you can see, the SUT’s calculateBonus() method takes two parameters (clientId and payment) and interacts with two collaborators (clientDAO and calculator). In order to test the calculateBonus() method thoroughly, we need to control both the input parameters (direct inputs) and the values returned from its collaborators (indirect inputs). Then we will be able to see whether the returned value (direct output) is correct.
Table 2.2 summarizes the types of interaction that happen within the calculateBonus() method, and that are important from the test point of view.
Table 2.2. Collaborations within the calculateBonus() method
| type of interaction | involved parties | description |
|---|---|---|
| direct input | Test class & SUT | Direct call of the calculateBonus() method of the SUT with clientId and payment arguments. |
| direct output | Test class & SUT | bonus value returned by the SUT to the test class after it called the calculateBonus() method. |
| indirect output | SUT & DOCs | clientId and bonus passed by the SUT to the saveBonusHistory() method of clientDAO; clientType and payment passed by the SUT to the calculateBonus() method of calculator. |
| indirect input | SUT & DOCs | clientType returned by clientDAO, and bonus returned by calculator, to the SUT. |
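The interactions from Table 2.2 can be exercised with hand-rolled test doubles (a technique discussed at length in Chapter 5). The sketch below is illustrative: the ClientDAO and BonusCalculator interfaces, constructor injection, and the concrete values are all assumptions, since the book does not show these parts of the code. The double stubs both indirect inputs and records the indirect output, so the test controls and verifies every interaction.

```java
import java.math.BigDecimal;

public class FinancialServiceDemo {

    // Collaborator interfaces - assumed shapes, the book does not show them
    interface ClientDAO {
        Short getClientType(long clientId);
        void saveBonusHistory(long clientId, BigDecimal bonus);
    }

    interface BonusCalculator {
        BigDecimal calculateBonus(Short clientType, BigDecimal payment);
    }

    // The SUT, with both collaborators injected through the constructor
    static class FinancialService {
        private final ClientDAO clientDAO;
        private final BonusCalculator calculator;

        FinancialService(ClientDAO clientDAO, BonusCalculator calculator) {
            this.clientDAO = clientDAO;
            this.calculator = calculator;
        }

        BigDecimal calculateBonus(long clientId, BigDecimal payment) {
            Short clientType = clientDAO.getClientType(clientId);
            BigDecimal bonus = calculator.calculateBonus(clientType, payment);
            clientDAO.saveBonusHistory(clientId, bonus);
            return bonus;
        }
    }

    // Hand-rolled test double: stubs the indirect input (getClientType)
    // and records the indirect output (the saveBonusHistory() call)
    static class RecordingClientDAO implements ClientDAO {
        Long savedClientId;
        BigDecimal savedBonus;

        public Short getClientType(long clientId) { return (short) 7; }

        public void saveBonusHistory(long clientId, BigDecimal bonus) {
            this.savedClientId = clientId;
            this.savedBonus = bonus;
        }
    }

    // Exercises all four interaction types from Table 2.2
    // and returns the direct output for inspection
    public static BigDecimal runScenario() {
        RecordingClientDAO dao = new RecordingClientDAO();
        // Stubbed calculator: the second indirect input under our control
        BonusCalculator calculator = (type, payment) -> payment.multiply(BigDecimal.TEN);

        FinancialService sut = new FinancialService(dao, calculator);
        BigDecimal bonus = sut.calculateBonus(42L, new BigDecimal("5")); // direct input

        // indirect output, captured by the test double
        if (dao.savedClientId != 42L || !bonus.equals(dao.savedBonus)) {
            throw new AssertionError("saveBonusHistory() called with wrong arguments");
        }
        return bonus; // direct output
    }

    public static void main(String[] args) {
        System.out.println("bonus = " + runScenario());
    }
}
```

Note that a real test would use JUnit assertions (and, from Chapter 5 onwards, Mockito) rather than plain if-statements; the structure of the interactions, however, stays exactly the same.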
2.2.1. State vs. Interaction Testing
Let us now recall the simple abstraction of an OO system shown in Figure 1.1. It shows how two kinds of classes - workers and managers - cooperate in order to fulfill a request issued by a client. The book describes unit testing of both kinds of classes. First we shall dive into the world of workers, because we want to make sure that the computations they do, and the values they return, are correct. This part of unit testing - called state testing - is really simple, and has been well understood for many years. This kind of test uses direct inputs and outputs. We will discuss state testing in Chapter 3, Unit Tests with no Collaborators.
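State testing in its entirety can be sketched in a few lines: pass a direct input, assert on the direct output, and there is nothing else to check. The converter class below is illustrative and does not come from the book.

```java
// A "worker" class: pure computation, no collaborators
public class TemperatureConverter {

    public static double fahrenheitToCelsius(double fahrenheit) {
        return (fahrenheit - 32) * 5 / 9;
    }

    public static void main(String[] args) {
        // State testing: direct input in, direct output asserted - that is all
        double celsius = fahrenheitToCelsius(212.0);
        if (celsius != 100.0) throw new AssertionError(celsius);
        System.out.println("212F = " + celsius + "C");
    }
}
```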
Then we will move on to the more demanding topic of interactions testing. We will concentrate on the work of managers, and on how messages are passed between collaborators. This is a far trickier and less intuitive kind of testing. Every so often, new ideas and tools emerge, and there are still lively discussions going on about how to properly test interactions. What is really scary is that interaction tests can sometimes do more harm than good, so we will concentrate not only on how but also on whether such tests should be written at all. This kind of test concentrates on indirect outputs. We will discuss interactions testing in Chapter 5, Mocks, Stubs, Test Spies.
Testing of direct outputs is also called "state verification", while testing of indirect outputs is called "behaviour verification" (see [fowler2007]).
2.2.2. Why Worry about Indirect Interactions?
An object-oriented zealot could, at this point, start yelling at me: "Ever heard of encapsulation and information hiding? So why on earth should we worry about what methods were called by the SUT on its collaborators? Why not leave it as an implementation detail of the SUT? If this is a private part of the SUT implementation, then we should not touch it at all."
This sounds reasonable, doesn’t it? If only we could test our classes thoroughly, just using their API! Unfortunately, this is not possible.
Consider a simple example of retrieving objects from a cache.
Let us remember what the general idea of a cache is. There are two storage locations, the "real one", with vast capacity and average access time, and the "cache", which has much smaller capacity but much faster access time[2]. Let us now define a few requirements for a system with a cache. This will not be a fully-fledged cache mechanism, but will be sufficient to illustrate the problem we encounter.
When asked for an object with key X, our system with its cache should act according to the following simple rules:
1. if the object with key X is not in any storage location, the system will return null,
2. if the object with key X exists in any storage location, it will be returned,
   a. if it exists in the cache storage, it will be returned from this storage location,
   b. the main storage location will be searched only if the object with key X does not exist in the cache storage[3].
The point is, of course, to have a smart caching strategy that will increase the cache hit ratio[4] - but this is not really relevant to our discussion. What we are concerned with are the outputs (returned values) and the interactions between the SUT and its collaborators.
If you consider the requirements listed above, you will notice that with state testing we can only verify two of them - 1 and 2. This is because state testing respects objects’ privacy. It does not allow one to see what the object is doing internally - something which, in our case, means that it cannot verify from which storage area the requested object has been retrieved. Thus, requirements 2a and 2b cannot be verified using state testing.

[2] In fact, it would be more correct to say that access to one storage area is cheaper than to the other. Usually, the unit of cost is time-related, so we will make this simplification here.

[3] Requirements 2a and 2b could also be expressed as follows: "first search in the cache storage, then in the main storage".

[4] Which basically means that most of the items will be in the cache when requested, and the number of queries to the real storage will be minimized.
This is illustrated in the picture below. Our SUT, which consists of two storage locations (a fast cache storage and a slower real storage), is accessible via a single get() method. The client, who sends requests to the SUT, knows nothing about its internal complexity.
Figure 2.2. Is this storage working correctly or not?
Ideally, when a request comes in, the cache storage is searched first and then, in case the cache storage does not have an entry with the given key (X in this example), the main storage is searched. However, if the SUT is not implemented correctly, it may look into the main storage without checking the faster cache storage at all. The client who waits for an object with the given key cannot distinguish between these two situations. All he knows is that he requested an object with key X and that he got it.
In order to really verify whether our system is working as it is supposed to or not, interaction testing must be applied. The order of calls to collaborators - cache and real storage - must be checked. Without this, we cannot say whether the system is working or not.
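This order-of-calls check can be made concrete with a test spy that records which storage was queried, and in what sequence. The sketch below is illustrative - the Storage interface, the CachingStore class and its injection style are assumptions, not the book’s code - but it shows the essential point: only by observing the indirect interactions can we tell that the cache was consulted first.

```java
import java.util.ArrayList;
import java.util.List;

public class CacheOrderDemo {

    interface Storage {
        Object get(String key);
    }

    // Test spy: records which storage was queried, in order of the calls
    static class SpyStorage implements Storage {
        final String name;
        final List<String> callLog;
        final Object value; // what this storage "contains" (null = miss)

        SpyStorage(String name, List<String> callLog, Object value) {
            this.name = name;
            this.callLog = callLog;
            this.value = value;
        }

        public Object get(String key) {
            callLog.add(name); // the interaction we want to observe
            return value;
        }
    }

    // The SUT: a cache-backed store (illustrative implementation)
    static class CachingStore {
        private final Storage cache;
        private final Storage mainStorage;

        CachingStore(Storage cache, Storage mainStorage) {
            this.cache = cache;
            this.mainStorage = mainStorage;
        }

        Object get(String key) {
            Object cached = cache.get(key);
            return cached != null ? cached : mainStorage.get(key);
        }
    }

    // Runs a cache-miss scenario and returns the recorded call order
    public static List<String> recordedOrder() {
        List<String> callLog = new ArrayList<>();
        Storage cache = new SpyStorage("cache", callLog, null);        // cache miss
        Storage main = new SpyStorage("main", callLog, "some value");  // hit in main storage
        new CachingStore(cache, main).get("X");
        return callLog;
    }

    public static void main(String[] args) {
        List<String> order = recordedOrder();
        // Interaction test: on a cache miss, the cache MUST be asked first
        if (!List.of("cache", "main").equals(order)) throw new AssertionError(order);
        System.out.println("call order: " + order);
    }
}
```

A state test could only assert that get("X") returned "some value"; both the correct and the broken implementation would pass it. The spy’s call log is what distinguishes them.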
This simple example proves that verification of the observable behaviour of the SUT (its direct outputs) is not enough. Similar issues arise when testing managers (see Section 1.1), which coordinate the efforts of others. As mentioned previously, such coordinating classes are quite popular in OO systems. This is why we will be spending a great deal of time discussing techniques, tools and issues related to indirect outputs testing.
But to begin with let’s concentrate on the simpler case. In the next section we will learn how to test simple objects that do not have any collaborators.