CHAPTER 6
Coarse-grained testing with stubs
And yet it moves.
—Galileo
As you develop your applications, you will find that the code you want to test depends on other classes, which themselves depend on other classes, which depend on the environment. For example, you might be developing an application that uses JDBC to access a database, a J2EE application (one that relies on a J2EE container for security, persistence, and other services), an application that accesses a filesystem, or an application that connects to some resource using HTTP, SOAP, or another protocol.
For applications that depend on an environment, writing unit tests is a challenge. Your tests need to be stable, and when you run them over and over, they need to yield the same results. You need a way to control the environment in which they run. One solution is to set up the real required environment as part of the tests and run the tests from within that environment. In some cases, this approach is practical and brings real added value (see chapter 8, which discusses in-container testing). However, it works well only if you can set up the real environment on your development platform, which isn’t always the case.
For example, if your application uses HTTP to connect to a web server provided by another company, you usually won’t have that server application available in your development environment. So, you need a way to simulate that server so you can still write tests for your code.
Or, suppose you are working with other developers on a project. What if you want to test your part of the application, but the other part isn’t ready? One solution is to simulate the missing part by replacing it with a fake that behaves the same way.
There are two strategies for providing these fake objects: stubbing and using mock objects. Stubs, the original solution, are still very popular, mostly because they allow you to test code without changing it to make it testable. This is not the case with mock objects. This chapter is dedicated to stubbing, and chapter 7 covers mock objects.
6.1 Introducing stubs
Stubs are a mechanism for faking the behavior of real code — code that may already exist or that may not have been written yet. Stubs let you test one portion of a system even when the other portions are not available. They usually require no change to the code you are testing; instead, they adapt to the existing code so that they integrate seamlessly.
DEFINITION stub—A stub is a portion of code that is inserted at runtime in place of the real code, in order to isolate calling code from the real implementation. The intent is to replace a complex behavior with a simpler one that allows independent testing of some portion of the real code.
Here are some examples of when you might use stubs:
■When you cannot modify an existing system because it is too complex and fragile
■For coarse-grained testing, such as integration testing between different subsystems
Stubs usually provide very good confidence in the system being tested. With stubs, you are not modifying the objects under test, and what you are testing is the same as what will execute in production. Tests involving stubs are usually executed in their running environment, providing additional confidence.
On the downside, stubs are usually hard to write, especially when the system to fake is complex. The stub needs to implement the same logic as the code it is replacing, and that is difficult to get right for complex logic. This issue often leads to having to debug the stubs! Here are some cons of stubbing:
■Stubs are often complex to write and need debugging themselves.
■Stubs can be difficult to maintain because they’re complex.
■A stub does not lend itself well to fine-grained unit testing.
■Each situation requires a different strategy.
In general, stubs are better adapted for replacing coarse-grained portions of code. You would usually use stubs to replace a full-blown external system like a filesystem, a connection to a server, a database, and so forth. Using stubs to replace a method call to a single class can be done, but it is more difficult. (We will demonstrate how to do this with mock objects in chapter 7.)
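The core idea can be sketched in a few lines of plain Java. The following is a minimal, hypothetical example (ExchangeRateService, PriceConverter, and StubRateService are not from the book's sample): the code under test depends on an interface whose production implementation would call a remote server, so a stub with canned, deterministic behavior can be plugged in without modifying the code under test.

```java
// Hypothetical interface; the real implementation would call a remote server.
interface ExchangeRateService {
    double getRate(String from, String to);
}

// The code under test: it depends only on the interface, not on any server.
class PriceConverter {
    private final ExchangeRateService rates;
    PriceConverter(ExchangeRateService rates) { this.rates = rates; }
    double convert(double amount, String from, String to) {
        return amount * rates.getRate(from, to);
    }
}

// The stub: canned, deterministic behavior in place of the remote call.
class StubRateService implements ExchangeRateService {
    public double getRate(String from, String to) { return 2.0; }
}

public class StubSketch {
    public static void main(String[] args) {
        PriceConverter converter = new PriceConverter(new StubRateService());
        System.out.println(converter.convert(10.0, "USD", "EUR")); // prints 20.0
    }
}
```

Because the stub always returns the same rate, the test yields the same result on every run — exactly the stability this chapter asks of unit tests.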
6.2 Practicing on an HTTP connection sample
To demonstrate what stubs can do, let's build some stubs for a simple application that opens an HTTP connection to a URL and reads its content. Figure 6.1 shows the sample application (limited to a WebClient.getContent method) performing an HTTP connection to a remote web resource. We have supposed that the remote web resource is a servlet, which by some means (say, by calling a JSP) generates an HTML response. Figure 6.1 is what we called the real code in the stub definition.

Figure 6.1 The sample application makes an HTTP connection to a remote web resource: WebClient.getContent(URL) calls url.openConnection() to reach a web server over HTTP, where a web resource (a doGet method) generates an HTML response. This is the real code in the stub definition.
Our goal in this chapter is to unit-test the getContent method by stubbing the remote web resource, as demonstrated in figure 6.2. As you can see, you replace the servlet web resource with the stub, a simple HTML page returning whatever you need for the TestWebClient test case.
This approach allows you to test the getContent method independently of the implementation of the web resource (which in turn could call several other objects down the execution chain, possibly up to a database).
Figure 6.2 Adding a test case and replacing the real web resource with a stub: a TestWebClient test case (a TestCase with a testGetConnect method) exercises WebClient.getContent, while the web server now serves a canned HTML page (<html><head/><body/></html>) in place of the real web resource.

The important point to notice with stubbing is that getContent has not been modified to accept the stub. The stub is transparent to the application under test. In order to allow this, the external code to be replaced needs to have a well-defined interface and allow plugging in different implementations (the stub, for example). In the example in figure 6.1, the interface is URLConnection, which cleanly isolates the implementation of the page from its caller.
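This plugging point can be seen in miniature with nothing but the JDK. In the following sketch (the class name UrlSeamSketch and the canned body are ours, not the book's), a custom URLStreamHandler hands back a URLConnection that serves content from memory, and the caller reads the stream exactly as it would from a real server; section 6.4 develops this technique fully.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class UrlSeamSketch {
    // Build a URL whose connection is a canned, in-memory implementation.
    static URL withStubbedConnection(final String body) throws Exception {
        URLStreamHandler handler = new URLStreamHandler() {
            protected URLConnection openConnection(URL u) {
                return new URLConnection(u) {
                    public void connect() { /* nothing to open: stubbed */ }
                    public InputStream getInputStream() {
                        return new ByteArrayInputStream(body.getBytes());
                    }
                };
            }
        };
        // The handler passed here is used instead of the default HTTP handler.
        return new URL("http", "localhost", 80, "/", handler);
    }

    // Convenience wrapper so callers need not handle checked exceptions.
    static String fetch(String body) {
        try {
            InputStream is = withStubbedConnection(body).openConnection().getInputStream();
            return new String(is.readAllBytes());
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(fetch("<html/>")); // prints <html/>
    }
}
```

The caller never learns that no server was involved, which is what makes URLConnection such a clean seam for stubbing.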
Let's see a stub in action using the simple HTTP connection sample. Listing 6.1, taken from the sample application, shows a code snippet that opens an HTTP connection to a given URL and reads the content found at that URL. Imagine that this method is one part of a bigger application, and let's unit-test it.
Listing 6.1 Sample method opening an HTTP connection
package junitbook.coarse.try1;

import java.net.URL;
import java.net.HttpURLConnection;
import java.io.InputStream;
import java.io.IOException;

public class WebClient
{
    public String getContent(URL url)
    {
        StringBuffer content = new StringBuffer();
        try
        {
            HttpURLConnection connection =
                (HttpURLConnection) url.openConnection();    // b Open HTTP connection to URL
            connection.setDoInput(true);
            InputStream is = connection.getInputStream();    // c Start reading remote data
            byte[] buffer = new byte[2048];
            int count;
            while (-1 != (count = is.read(buffer)))          // d Read all data in stream
            {
                content.append(new String(buffer, 0, count));
            }
        }
        catch (IOException e)
        {
            return null;                                     // e Return null on error
        }
        return content.toString();
    }
}
b Open an HTTP connection using the HttpURLConnection class.
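As a preview of where the chapter is heading (sections 6.4.1 and 6.4.2 build this for real), here is a sketch of testing getContent with no network access at all: a stub subclass of HttpURLConnection returns canned content, and a custom URLStreamHandler plugs it in behind the URL, so WebClient itself is left unchanged. All helper class and method names other than WebClient.getContent are our own, and the chapter's actual code may differ.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

// Stub connection: canned content instead of a socket.
class StubHttpURLConnection extends HttpURLConnection {
    StubHttpURLConnection(URL url) { super(url); }
    public void connect() { /* no network access: stubbed */ }
    public void disconnect() { }
    public boolean usingProxy() { return false; }
    public InputStream getInputStream() {
        return new ByteArrayInputStream("It works".getBytes());
    }
}

// The class under test, copied unchanged from listing 6.1 (package omitted).
class WebClient {
    public String getContent(URL url) {
        StringBuffer content = new StringBuffer();
        try {
            HttpURLConnection connection =
                (HttpURLConnection) url.openConnection();
            connection.setDoInput(true);
            InputStream is = connection.getInputStream();
            byte[] buffer = new byte[2048];
            int count;
            while (-1 != (count = is.read(buffer))) {
                content.append(new String(buffer, 0, count));
            }
        } catch (IOException e) {
            return null;
        }
        return content.toString();
    }
}

public class StubbedWebClientSketch {
    // Build a URL whose stream handler hands back the stub connection.
    static String fetchContent() {
        try {
            URL url = new URL("http", "localhost", 80, "/", new URLStreamHandler() {
                protected URLConnection openConnection(URL u) {
                    return new StubHttpURLConnection(u);
                }
            });
            return new WebClient().getContent(url);
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchContent()); // prints It works
    }
}
```

Note that getContent casts to HttpURLConnection, which is why the stub extends HttpURLConnection rather than the plainer URLConnection.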