Tomek Kaczanowski - Practical Unit Testing with JUnit and Mockito - 2013

Chapter 8. Getting Feedback

First of all, you will want to know which test has failed, and which assertion has not been met. Secondly, you will want to be able to navigate with ease directly to the test code which failed. Thirdly, you will want to rerun tests (all of them, or maybe only the failed ones) after having introduced some changes to the code. All of this is supported by IntelliJ IDEA.

Figure 8.5. IntelliJ IDEA: failed tests

It is possible to configure the Test Runner Tab so that after having executed the tests it focuses on the first failed test. Figure 8.5 shows such a scenario. The assertion error is printed along with the precise lines of test code where the verification failed. They are clickable, so you can easily move to the test code and start fixing things.

8.1.3. Conclusion

As we have observed in the course of this section, both IntelliJ IDEA and Eclipse provide highly readable test execution reports. The overall result is clearly visible in the form of a green or red bar. The results of each test are also shown. In the case of passed tests, only a minimal amount of information is printed, so the screen does not become clogged up with unimportant data. However, in the event of failure, both IDEs show more of the data, and this helps you to fix the bug right away. They offer quick navigation from assertion error to test code.

8.2. JUnit Default Reports

Usually you will run your tests with an IDE. However, this is not always the case, and sometimes you will also need to look into the report files generated by JUnit.

JUnit itself generates one file per executed test class. In the case of Maven you will find two files for each test class - with .txt and .xml extensions - in the target/surefire-reports directory. Their content is equivalent, but the format differs. In the case of Gradle, you will find .xml files in the build/reports/tests directory. In addition, build tools generate some HTML reports, which are probably more agreeable to work with. Maven puts them in the target/site directory, while Gradle puts them in the build/reports/tests directory.
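For orientation, the .xml files follow the widely used JUnit/Surefire report format, which most CI servers can parse. A report for a single test class looks roughly like this (the class name, test names, counts and timings below are invented for illustration):

```xml
<testsuite name="com.example.CalculatorTest" tests="3"
           failures="1" errors="0" skipped="0" time="0.042">
  <testcase name="shouldAddTwoNumbers" classname="com.example.CalculatorTest" time="0.011"/>
  <testcase name="shouldDetectOverflow" classname="com.example.CalculatorTest" time="0.009">
    <failure type="java.lang.AssertionError">expected:&lt;1&gt; but was:&lt;2&gt;</failure>
  </testcase>
  <testcase name="shouldHandleZero" classname="com.example.CalculatorTest" time="0.008"/>
</testsuite>
```

The corresponding .txt file carries the same counts in a human-readable summary line.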

Build tools allow you to configure the output folder for test reports. Please consult the documentation if you are not happy with the default settings.
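In Maven, for example, the Surefire plugin exposes a reportsDirectory configuration parameter for this purpose; a minimal sketch (the version number and target directory below are illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.2</version>
  <configuration>
    <!-- write test reports somewhere other than target/surefire-reports -->
    <reportsDirectory>${project.build.directory}/my-test-reports</reportsDirectory>
  </configuration>
</plugin>
```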


Figure 8.6 shows an overview part of the test execution report generated by Gradle.

Figure 8.6. Test execution report - an overview

The reports allow you to "drill down": you can see the number of passed and failed tests, first for each package, then for individual classes. It is also possible to view the log of each test method (so, for example, you can check what exceptions were thrown, etc.).

Figure 8.7. Test execution report - details

8.3. Writing Custom Listeners

"Getting feedback" from test execution can mean two things. Usually we are interested in the details of the test execution after all the tests have finished (i.e. we want to know the results they finished with), but sometimes we would like to have some feedback during the execution of our tests.

There are many reasons for implementing a custom reporting mechanism. You might be interested in, for example:

- getting more detailed information printed to the console,
- implementing a GUI progress-bar widget, which would show how the execution of tests is progressing,
- taking a screenshot after each failed Selenium test executed with the JUnit framework,
- writing test results to the database.


All this is possible thanks to the use of rules (see Section 6.8). The TestWatcher class is "a base class for Rules that take note of the testing action, without modifying it". It provides five protected methods which we can override:

- protected void succeeded(Description description), invoked when a test succeeds,
- protected void failed(Throwable e, Description description), invoked when a test fails,
- protected void skipped(AssumptionViolatedException e, Description description), invoked when a test is skipped due to a failed assumption (see Section 6.3),
- protected void starting(Description description), invoked when a test is about to start,
- protected void finished(Description description), invoked when a test method finishes (whether passing or failing).

An object of the Description class, passed to all of the above methods, contains information about the test class and the test method. And this is exactly what is required to implement a custom test listener that would then print the execution time of each test.

The implementation is trivial. All we need to do is remember the start time of each test and, after it has finished, calculate and print the execution time. Piece of cake!¹

Listing 8.1. Implementation of test time execution listener

import java.util.HashMap;
import java.util.Map;

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class TimeTestListener extends TestWatcher {

    private final Map<String, Long> startTimes = new HashMap<>();

    @Override
    protected void starting(Description description) {
        startTimes.put(description.getMethodName(),
                System.currentTimeMillis());
    }

    @Override
    protected void finished(Description description) {
        long executionTime = System.currentTimeMillis()
                - startTimes.get(description.getMethodName());
        System.out.println(description.getMethodName()
                + ": " + executionTime);
    }
}

Our custom listener extends the TestWatcher class and overrides two of its methods: starting() and finished().

And that is it! Now, we need to make our tests use it. Since the TestWatcher class is a rule (it implements the TestRule interface), we will use exactly the same syntax as we learned in Section 6.8.
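In practice this means annotating a public field of this type with @Rule in the test class. A minimal sketch (the test class and method names below are invented for illustration):

```java
import org.junit.Rule;
import org.junit.Test;

public class MoneyTest {

    // register the listener so it is notified about every test of this class
    @Rule
    public TimeTestListener timeListener = new TimeTestListener();

    @Test
    public void shouldPrintItsExecutionTime() {
        // any test logic; the rule prints this method's execution time
    }
}
```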

¹ Obviously the solution I present here is somewhat oversimplified. For example, it won't work properly with two test methods with identical names from two different classes.
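One way to remove that limitation is to key the map on the fully qualified name (class plus method) instead of the method name alone; in the rule itself the same key can be built from description.getClassName() and description.getMethodName(). The sketch below isolates just that bookkeeping in a plain class, so it can be seen in isolation (the class name QualifiedTimer is my own, not from the book):

```java
import java.util.HashMap;
import java.util.Map;

public class QualifiedTimer {

    private final Map<String, Long> startTimes = new HashMap<>();

    private String key(String className, String methodName) {
        // e.g. "com.example.FooTest.shouldWork" - unique across classes
        return className + "." + methodName;
    }

    public void starting(String className, String methodName) {
        startTimes.put(key(className, methodName), System.currentTimeMillis());
    }

    public long elapsedMillis(String className, String methodName) {
        return System.currentTimeMillis() - startTimes.get(key(className, methodName));
    }
}
```

Two tests named shouldWork in different classes now get distinct entries, so their timings no longer overwrite each other.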
