
Chapter 8 Test-Driven Development

been built previously. And in some development organizations, this approach is even mistakenly named "test-driven development," which is flat wrong.

Like I said, plain old unit testing is better than no unit testing at all. Nonetheless, this approach has a few disadvantages:

•	There is no compulsion to write the unit tests afterward. Once a feature works (…or rather seems to work), there is little motivation to retrofit the code with unit tests. It's no fun, and the temptation to move on to the next exciting task is just too great for many developers.

•	The resulting code can be difficult to test. Often it is not so easy to retrofit existing code with unit tests, because the initial developers didn't set great store by its testability. This tends to favor the emergence of tightly coupled code.

•	It is not easy to reach pretty high test coverage with retrofitted unit tests. Writing unit tests after the code tends to let some issues or bugs slip through.

Test-Driven Development as a Game Changer

Test-driven development (TDD) turns traditional development completely around. For developers who have not yet dealt with TDD, this approach represents a paradigm shift.

As a so-called test-first approach and in contrast to POUT, TDD does not allow any production code to be written before the associated test that justifies that code has been written. In other words, TDD means that we always write the test for a new feature or function before we write the corresponding production code. This is done strictly step by step: after each implemented test, just enough production code is written to make that test pass, and no more! This cycle is repeated as long as there are still unrealized requirements for the module to be developed.

At first glance, it seems to be paradoxical and a little bit absurd to write a unit test for something that does not yet exist. How can this work?

Don’t worry, it works. After we have discussed the process behind TDD in detail in the next section, all doubts will hopefully be eliminated.


The Workflow of TDD

When performing test-driven development, the steps depicted in Figure 8-2 are run through repeatedly until all known requirements for the unit to be developed are satisfied.

Figure 8-2.  The detailed workflow of TDD as a UML activity diagram

First of all, it is remarkable that the first action after the initial node, which is labeled "Start Doing TDD," is that the developer should think about which requirement to satisfy next. What kinds of requirements are meant here?

Well, first and foremost there are requirements that must be fulfilled by a software system. This applies both to the requirements of the business stakeholders on the top level regarding the whole system, and to the requirements residing on lower abstraction levels, that is, requirements for components, classes, and functions, which were derived from the business stakeholders' requirements. With TDD and its test-first approach, requirements are nailed down firmly by unit tests, in fact before the production code is written. In our case of a test-first approach for the development of units, that is, at the lowest level of the test pyramid (see Figure 2-1 in Chapter 2), of course the requirements at the lowest level are meant here. Naturally, such a test-first approach can also be applied at the higher levels of abstraction, for instance in acceptance test–driven development (ATDD), a development methodology that encompasses acceptance testing but requires the acceptance tests to be written before developers begin coding.

Next, a small test is written, and with it the public interface (API) of the unit is designed. This might be surprising, because in the first run through this cycle we still have not written any production code. So, what interface can be designed here if we have a blank piece of paper?

Well, the simple answer is this: that “blank piece of paper” is exactly what we want to fill in now, but coming from a different perspective than usual. We take the perspective of a future external client of the piece of software to be developed. We use a small test to define how we want to use the code to be developed. In other words, this is the step that should lead to well-testable and thus also well-usable software units.

After we have written the appropriate lines in the test, we must, of course, also satisfy the compiler and provide the interface requested by the test.
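To make this step concrete, here is a minimal sketch of such a first test, written before any production code exists. The Google Test framework, the Roman numerals example, and the convertToRoman() function are assumptions chosen for illustration only; they are not prescribed by TDD itself.

```cpp
// A first, tiny unit test: it is written before any production code exists
// and thereby defines how a future client wants to use the unit.
// (Assumes Google Test and linking against gtest_main.)
#include <gtest/gtest.h>
#include <string>

// This declaration is what we must provide to satisfy the compiler;
// the name and signature are hypothetical and emerge from the test's needs.
std::string convertToRoman(unsigned int arabicNumber);

TEST(RomanNumeralsTest, ConversionOf1ShouldReturnI) {
  ASSERT_EQ("I", convertToRoman(1));
}
```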

Then immediately the next surprise: the newly written unit test must (initially) fail. Why?

Simple answer: we have to make sure that the test can fail at all. Even a unit test can itself be implemented incorrectly and, for example, always pass, no matter what we’re doing in the production code. So, we have to ensure that the newly written test is armed.
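Continuing the hypothetical example from above, the production code at this stage is only a stub that lets the test compile and run, but deliberately returns a wrong result, so we can watch the new test fail (RED) and know that it is armed:

```cpp
// Stub implementation: just enough to satisfy the compiler.
// It intentionally returns a wrong result so that the new test fails first.
#include <string>

std::string convertToRoman(unsigned int /*arabicNumber*/) {
  return "";  // deliberately wrong; the assertion must be able to fail
}
```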

Now we are getting to the climax of this small workflow: we write just enough production code—and not a single line more!—so that the new unit test (…and any previously existing tests) passes! It is very important to be disciplined at this point and not write more code than required (remember the KISS principle from Chapter 3). It is up to the developer to decide what is appropriate in each situation. Sometimes a single line of code, or even just one statement, is sufficient; in other cases you need to call a library function. If the latter is the case, the time has now come to think about how to integrate and use this library, and especially how to replace it with a test double (see the section about test doubles in Chapter 2).
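In the running example, sketched under the same assumptions as above, "just enough production code" for the single requirement covered so far can be as trivial as a hard-coded return value; later tests will force the real logic to emerge:

```cpp
// GREEN: the simplest implementation that makes the current test pass.
// No speculative generality; further requirements will be driven by further tests.
#include <string>

std::string convertToRoman(unsigned int /*arabicNumber*/) {
  return "I";  // sufficient for the only requirement tested so far
}
```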

If we now run the unit tests and we have done everything right, the tests will pass. We have reached a remarkable point in the process. If the tests pass, we always have 100% unit test coverage at this step. Always! Not only 100% in the sense of a technical test coverage metric, such as condition coverage, branch coverage, or statement coverage. No, much more important is that we have 100% unit test coverage regarding the requirements that have been implemented at this point! And yes, at this point there may still be some or many unimplemented requirements for the piece of code to be developed. This is okay, because we will go through the TDD cycle again and again until all requirements are satisfied. But for the subset of requirements that are already satisfied at this point, we have 100% unit test coverage.

This fact gives us tremendous power! With this gapless safety net of unit tests, we can now carry out fearless refactorings. Code smells (e.g., duplicated code) or design issues can be fixed. We do not need to be afraid to break functionality, because regularly executed unit tests will give us immediate feedback about that. And the pleasant thing is this: if one or more tests fail during the refactoring phase, the code change that led to it was a very small one.
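For instance, in the hypothetical Roman numerals example, after a few more RED-GREEN cycles (tests for 2, 3, 10, and so on) the hard-coded returns and repeated if statements turn into duplicated code. With all tests passing, such a smell can be refactored away safely, for example into a table-driven loop, without changing the observable behavior:

```cpp
// REFACTOR: duplication from earlier GREEN steps is replaced by a
// table-driven conversion; the existing unit tests guard the behavior.
#include <string>
#include <utility>
#include <vector>

std::string convertToRoman(unsigned int arabicNumber) {
  const std::vector<std::pair<unsigned int, std::string>> conversionTable{
      {1000, "M"}, {900, "CM"}, {500, "D"}, {400, "CD"},
      {100, "C"},  {90, "XC"},  {50, "L"},  {40, "XL"},
      {10, "X"},   {9, "IX"},   {5, "V"},   {4, "IV"},  {1, "I"}};

  std::string romanNumeral;
  for (const auto& [arabicValue, romanDigits] : conversionTable) {
    while (arabicNumber >= arabicValue) {
      romanNumeral += romanDigits;
      arabicNumber -= arabicValue;
    }
  }
  return romanNumeral;
}
```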

After the refactoring has been completed, we can continue the TDD cycle and implement another requirement that has not yet been fulfilled. If there are no more requirements, we are done.

Figure 8-2 depicts the TDD cycle with many details. Boiled down to its three essential main steps as depicted in Figure 8-3, the TDD cycle is often referred to as “RED – GREEN – REFACTOR.”

Figure 8-3.  The core workflow of TDD
