Rachel M. Carmena

DO's and DON'Ts when writing tests

Published: 26 April 2019
Last updated: 11 August 2021

This post includes some small tips from my experience writing tests. I'll update it whenever I have more things to share.

Don’t mix production code and test code

A long time ago, I reviewed a module which was responsible for receiving messages and doing some actions afterwards.

However, I found production code both for sending messages and for receiving them. Why? Because the sending code was used by a test.

It was very confusing: the production code was communicating the wrong responsibility. In that case, the code for sending messages belonged in the test code, not in the production code.

Another case: adding object comparison capabilities to production code when they are only used by assertions. This is known as Equality Pollution. Check your testing framework, because most of them include options to avoid this issue.
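For example, a minimal sketch with Hamcrest's samePropertyValuesAs, which compares bean properties so that the production class doesn't need an equals method just for the tests (Invoice, InvoiceService and anOrder() are hypothetical names, not from any real project):

    import static org.hamcrest.MatcherAssert.assertThat;
    import static org.hamcrest.beans.SamePropertyValuesAs.samePropertyValuesAs;

    import org.junit.jupiter.api.Test;

    class InvoiceServiceTest {

        @Test
        void creates_an_invoice_for_an_order() {
            // Invoice, InvoiceService and anOrder() are hypothetical names
            Invoice expected = new Invoice("INV-001", 100.0);

            Invoice actual = new InvoiceService().createFor(anOrder());

            // Compares JavaBean properties one by one, so Invoice doesn't
            // need equals()/hashCode() methods that only tests would use
            assertThat(actual, samePropertyValuesAs(expected));
        }
    }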

So, don’t add extra code in production code if it’s only used by tests.

Decouple production code from test code

There are software development teams following this rule: one test class per production class (or one test file per production file, or something similar, depending on the programming language). And I've seen these effects:

  • Writing tests is a boring task.
  • People don’t write tests because of being convinced about the advantages.
  • Writing tests seems like a waste of time.
  • Every single change breaks several tests.
  • Refactoring production code is a pain because of a huge amount of coupled tests.
  • Tests are focused on implementation: they are not good documentation.
  • There are a lot of tests that don’t provide any value or don’t make sense.
  • Sooner or later, tests are skipped.

Test code and production code have different purposes, so they must “live” independently.

Test code should be focused on behaviours, intentions or capabilities:

  • Changes in production code won’t imply changes in test code and vice versa.
  • Refactoring will be possible.
  • Tests will be good documentation.
  • Writing tests will be a meaningful task.
  • People will understand the advantages of starting with tests.
  • Tests will be kept updated.
  • Tests will be valuable and appreciated.

As Kent C. Dodds says:

The more your tests resemble the way your software is used, the more confidence they can give you.
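For example, a minimal sketch of a behaviour-focused test (ShoppingCart is a hypothetical entry point, and the discount rule is invented for the example):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class DiscountsTest {

        // Named after a behaviour, not after a production class, so the
        // internals can be refactored without breaking this test
        @Test
        void applies_a_ten_percent_discount_to_orders_over_100() {
            ShoppingCart cart = new ShoppingCart(); // hypothetical class
            cart.add("book", 120.0);

            assertEquals(108.0, cart.total(), 0.001);
        }
    }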

Listen to test code

Just as it’s useful to listen to production code, it’s also useful to listen to test code.

When it’s said listen to the code, it’s related to all the source code.

If we face difficulties when writing a test, it's a signal that should be analysed carefully to find the root causes and fix them.

Another case (or smell): too much logic or too many implementation details in the tests. What is actually being tested? How could the test break?
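For example, a minimal sketch of that smell, assuming a hypothetical PriceCalculator that adds a 21% tax:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;

    import org.junit.jupiter.api.Test;

    class TotalPriceTest {

        // Smell: the test mirrors the production logic, so a bug in that
        // logic passes unnoticed (both sides compute the same wrong value)
        @Test
        void smelly_version_with_logic_in_the_test() {
            List<Double> prices = List.of(10.0, 20.0);
            double expected = 0;
            for (double price : prices) {
                expected += price * 1.21; // re-implemented tax calculation
            }
            assertEquals(expected, new PriceCalculator().totalOf(prices), 0.001);
        }

        // Better: a plain, precalculated expectation
        @Test
        void total_includes_a_21_percent_tax() {
            assertEquals(36.3, new PriceCalculator().totalOf(List.of(10.0, 20.0)), 0.001);
        }
    }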

Think twice before removing duplicated test code

A long time ago, I was reviewing a test class with 4 extracted parameterized methods (for 4 different types of expectations) and 6 extracted methods (for 6 different JSON files). I sat with the author of that class and we applied the inline method refactoring (in other words, replacing each method call with the corresponding source code). After that, we realized that the test class was more readable and understandable without those 10 additional methods, despite having duplicated source code.

As it’s said in Fifty quick ideas to improve your tests by Gojko Adzic, David Evans and Tom Roden:

It’s far better to optimise tests for reading than for writing (…) if you need to compromise either ease of maintenance or readability, keep readability.

Consider test code as important as production code

All the source code should be cared for and reviewed when refactoring, not only the production code.

Test code is part of the documentation of your project.

Communicate through proper test names

I find it easier to write proper test names when following this piece of advice by Sandro Mancuso: write the parts of a test in reverse order, starting with the assertion.

In this way, you'll see the test name right next to the assertion and you'll be able to check whether it makes sense and communicates the same thing.

Furthermore, you won't waste time arranging test data and acting, only to realize while writing the assertion that the test makes no sense.

On the other hand, it forces you to focus on checking a single thing. As it’s said in Fifty quick ideas to improve your tests by Gojko Adzic, David Evans and Tom Roden:

Write assertions first. (…) Tests that are written bottom up, by doing the outputs first, tend to be shorter and more directly explain the purpose.
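For example, a minimal sketch of that bottom-up order (ShoppingCart is a hypothetical class; the comments show the order in which each line was written):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class EmptyCartTest {

        @Test
        void an_empty_cart_has_no_discount() {
            ShoppingCart emptyCart = new ShoppingCart(); // written third: arrange

            double discount = emptyCart.discount();      // written second: act

            assertEquals(0.0, discount);                 // written first: assert
        }
    }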

They also include an interesting recommendation about test names:

Avoid using conjunctions (and, or, not) in test names. Conjunctions are a sign the test is trying to do too much, or lacks focus.

Consider the possibility of extending the testing framework

If the testing framework doesn't include a matcher that you need, consider the possibility of creating a custom one to fit your testing needs.

For example, I created a custom Hamcrest Matcher for Golden Master Refactoring, although now I’d recommend the use of ApprovalTests to get the same thing ;)
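Here's a minimal sketch of a custom Hamcrest matcher (the JSON-object check is invented for the example):

    import org.hamcrest.Description;
    import org.hamcrest.Matcher;
    import org.hamcrest.TypeSafeMatcher;

    // Invented example: a matcher that checks a String looks like a JSON object
    public class IsJsonObject extends TypeSafeMatcher<String> {

        @Override
        protected boolean matchesSafely(String text) {
            String trimmed = text.trim();
            return trimmed.startsWith("{") && trimmed.endsWith("}");
        }

        @Override
        public void describeTo(Description description) {
            description.appendText("a JSON object");
        }

        public static Matcher<String> jsonObject() {
            return new IsJsonObject();
        }
    }

It could be used as assertThat(response, jsonObject()), and the description given in describeTo will appear in the failure message.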

Don’t check only a few properties when comparing objects

I’ve seen problems because of comparing only 1 or 2 properties from objects with more properties.

Testing frameworks include specific assertions to facilitate this kind of comparison: field-by-field comparison, deep equality, etc.

So, check that objects are fully compared.
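For example, a minimal sketch with AssertJ's recursive comparison (Customer, CustomerMapper and the helper methods are hypothetical names):

    import static org.assertj.core.api.Assertions.assertThat;

    import org.junit.jupiter.api.Test;

    class CustomerMapperTest {

        @Test
        void maps_every_field_of_the_customer() {
            // Customer, CustomerMapper and the helpers are hypothetical names
            Customer actual = new CustomerMapper().map(aCustomerDto());

            // Compares all the fields recursively, instead of 1 or 2
            // properties, so a newly added field can't escape the comparison
            assertThat(actual)
                .usingRecursiveComparison()
                .isEqualTo(expectedCustomer());
        }
    }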

Check the failure messages

I remember the day I spent more than fifteen minutes trying to understand an error raised by the testing framework. Finally, I realized that the expected and actual results were swapped in the assertion, so the message didn't make sense.

Although some testing frameworks have been improved to avoid these mistakes, stay alert.

Check that the tests will raise an understandable message if they fail: what's failing, the expected output and the actual output.
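For example, with JUnit the expected value goes first, and an optional message can make the failure clearer (Calculator is a hypothetical class):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class CalculatorTest {

        @Test
        void adds_two_numbers() {
            // In JUnit, the expected value goes first; swapping the
            // arguments produces a misleading failure message
            assertEquals(4, new Calculator().add(2, 2), "sum of 2 + 2");

            // On failure, JUnit would print something like:
            //   sum of 2 + 2 ==> expected: <4> but was: <5>
        }
    }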

Choose good and meaningful inputs for the tests

The inputs of the tests are not only important to check possible edge cases but also to show the tested behaviour.

Sometimes it’s possible to describe the properties of the behaviour under test: property-based testing.

On the other hand, if some input data isn't relevant for a test, it can be named with an any prefix (anyCustomer, anyDate, ...) to make that explicit.

Check that the test fails under the wrong circumstances

My cousin Francisco Moreno usually says:

Don’t trust a test if you didn’t see it fail.

How could we make a passing test fail?

  • Changing the input or the expected output.
  • Changing the related production code (similar to mutation testing), as sketched below.
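A minimal sketch of that second option (ShippingPolicy and its rule are invented for the example):

    class ShippingPolicy {

        // Hypothetical production code: orders over 100 ship for free.
        // A manual "mutant": change > to >= and run the whole test suite.
        // If every test still passes, nothing checks the boundary
        // (total == 100) and the green suite was giving false confidence.
        boolean qualifiesForFreeShipping(double total) {
            return total > 100;
        }
    }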

Failing tests during the red phase of TDD don't count here: this is about making an already passing test fail.

More details about my cousin's advice are available in Spanish.

Don’t mix behaviours in the same test

Every test should have just one reason to fail.
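For example, a minimal sketch where each behaviour of a hypothetical Order class has its own test:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class NewOrderTest {

        // Each behaviour has its own test, so each test
        // has exactly one reason to fail (Order is hypothetical)
        @Test
        void a_new_order_contains_no_items() {
            assertEquals(0, new Order().itemCount());
        }

        @Test
        void a_new_order_has_a_total_of_zero() {
            assertEquals(0.0, new Order().total());
        }
    }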

Be careful with the false sense of security

As Dijkstra said in The Humble Programmer:

(…) program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence.

On the other hand, a high value of code coverage isn’t a guarantee of the testing quality. I talked about it in 99% code coverage - Do we have a good safety net to change this legacy code?.

Friendly reminder

As Dani Latorre says (originally in Spanish; the translation is mine):

Tests are part of the documentation. So, writing tests for existing code should be an amazing opportunity to document the behaviour, not only to increase coverage.

Please, we need them to be understandable!

Update

About decoupling production code from test code

This is the abstract of the talk Are your tests really driving your development? by Nat Pryce and Steve Freeman at XPDay London 2006. It has been preserved thanks to Kevlin Henney, who included it in one of his articles or talks:

Everybody knows that TDD stands for Test Driven Development. However, people too often concentrate on the words “Test” and “Development” and don’t consider what the word “Driven” really implies. For tests to drive development they must do more than just test that code performs its required functionality: they must clearly express that required functionality to the reader. That is, they must be clear specifications of the required functionality. Tests that are not written with their role as specifications in mind can be very confusing to read.

On the other hand, in 2009, Michael Feathers and Steve Freeman gave a talk about Ten years of Test-Driven Development, where they recalled these thoughts by Kent Beck:

It said the way to program is to look at the input tape and manually type in the output tape you expect. Then you program until the actual and expected tapes match.

I thought, what a stupid idea. I want tests that pass, not tests that fail. Why would I write a test when I was sure it would fail? Well, I'm in the habit of trying stupid things out just to see what happens, so I tried it and it worked great.

I was finally able to separate logical from physical design. I’d always been told to do that but no one ever explained how.

and they gave this piece of advice, among others:

Separate what from how