#tech
#tests
#quality
12 min read

How to Write Good Unit Tests: 14 Tips

Andriy Obrizan

There's no doubt that covering an application with unit tests increases its quality.

Good unit tests make developers confident when implementing new features. They know when they've broken existing functionality and can fix it before those bugs reach production. On the other hand, a lousy test suite can make a developer's life miserable and waste a ton of time on maintaining and fixing the unit tests themselves.

Writing good unit tests isn't rocket science. It's just code. Follow best practices while gaining more experience, and eventually your test suite will become a valuable asset and your most trusted friend.

We'll cover some tips that will hopefully guide you in the right direction.

Test Small Pieces of Code in Isolation

Unlike integration tests, where your objective is to make sure that complete actions work in a close-to-production environment, unit tests should help you quickly find and isolate broken pieces of code. That’s why those functions and classes shouldn’t depend on anything other than mocks and stubs.

The more logic you're trying to cover in a single test, the harder it is to get solid coverage and to find the exact cause of a failure. We're not advocating that you mock and stub absolutely everything. It's ok to test a function that depends on three more classes that are also covered by unit tests. When something breaks, those tests will fail too, and you'll still be able to pinpoint the issue quickly. However, the process becomes more challenging: when multiple tests fail, you first have to figure out which failure is the root cause.

Follow Arrange, Act, Assert

AAA is a general approach to writing more readable unit tests.

In the first step, you arrange things for testing. It's where you set variables, instantiate objects, and do the rest of the setup required for the test to run. During this step, you also define the expected result. This has a couple of advantages. First, you're forced to figure out the expected result before calling the tested logic. Second, when reading the test, it's more convenient to see the expected output right after the inputs rather than mixed up with the rest of the code.

Then you act. Here you invoke the tested function and store its results. The arrange step naturally leads up to it, and once you have the results, it's time to inspect them.

Now you assert that the hypothesis is correct. It's the essence of a unit test, as this part is finally testing something. The goal is to check whether the obtained result matches the expected one. Unit test frameworks provide various assertion methods, sometimes called matchers, specifically for that.

Keep Tests Short

Short functions are much easier to read and understand. Since we’re testing one piece of logic at a time, the tests shouldn’t be longer than a few lines of code anyway.

Sometimes, however, the arrange logic might be pretty complex. While it might be a sign that there’s something wrong with the code design itself, high-level logic might have multiple dependencies that require quite a lot of boilerplate code to initialize the mocks and stubs.

Avoid copy-pasting this spaghetti everywhere. Unit tests aren't that different from regular code, so the DRY (don't repeat yourself) principle applies. Don't forget you'll have to maintain them in the future, after all. Favor composition over inheritance here: base test classes tend to become a pile of unrelated shared code very quickly.

Make Them Simple

Avoid complex logic in the tests. You don’t want to test them too.

It's tempting to write a bunch of generic logic that shortens the test code even further, which seems like a good thing from a DRY perspective. However, when things break, you'll often have to debug this logic together with the test to find the problem.

As the project becomes more complex, generic code might no longer work for all scenarios and would also grow in complexity to fit them all. Just like any other code, tests are also subject to refactoring. You’ll have to find a good balance between simplicity and repetitiveness.

On rare occasions, tests might naturally become quite complex. You might need custom assertion functions to test the results or a small framework for testing similar functionality. Just don’t forget to cover them as well. It’s not a crime to unit test the tests.

Cover Happy Path First

When testing new functionality, covering the happy path first has many advantages:

  • It helps polish the API before implementing it when using TDD;
  • Happy-path tests are the simplest ones to write;
  • Passing them gives some confidence that the arrange boilerplate is working;
  • Most importantly, they illustrate how to use the code being tested.

The tests serve as code documentation for the project. When working with pieces of existing functionality you’re unfamiliar with, check the tests and see how to initialize it and what results to expect. Good tests also demonstrate what to expect in edge cases and are often far more readable and straightforward than the code itself.

Test Edge Cases

Now that you've covered the happy path, it's time to go deeper. Test things that aren't supposed to happen too often: wrong input, missing arguments, empty data, exceptions in called functions, etc. Code coverage tools might help you find code branches that aren't tested yet. Don't get too excited, though, and stay sane. You probably shouldn't aim for 100% test coverage. Testing near-impossible scenarios is a waste of time unless you're writing firmware for a Mars rover component.

Some people like the tests to be completely independent of the tested algorithm. While this makes a lot of sense, as the code might change in the future, we like to think of the algorithm's edge cases and write separate tests for them. For example, when testing code that checks whether a point is inside a triangle, edge cases would be the vertices or points lying on an edge. Arguments that might cause divisions by zero, trigger specific branches, or skip the loop body are good examples. You should also be testing that the code throws exceptions when it should.

Write Tests Before Fixing Bugs

Once you've found code that isn't working as it should, consider writing a test that reproduces the bug. Fixing it by debugging the test in isolation from the rest of the application will be much quicker.

You’ll leave an excellent regression test to spot this bug in the future. And you’ll know that you’ve fixed it properly when the test that previously failed starts passing.

Make Them Performant

Unit tests should be able to run on every machine. Your team should be running them multiple times a day, both during local builds and in your CI. You want them to run fast.

Be sure to mock all external dependencies that might slow them down, like API calls, databases, or file system access. They're nearly impossible to make deterministic anyway. Avoid thread sleeps, waits, and timeouts in your tests. Even when testing timeouts, consider making them extremely short, just a few milliseconds. When testing multithreading or async race conditions, explicit triggering is more deterministic than blind waits.

Keep Them Stateless

Your tests shouldn't change anything outside their scope or leave side effects. If your tests only pass when they run in a specific order, there's something wrong with either them or the tested code. Tests should be independent of each other. Modern test frameworks typically run them in parallel by default, so you shouldn't rely on global variables or a previous test's side effects. That's one of the reasons why using globals is considered a bad practice: they also hide dependencies, make code tightly coupled, require care with multithreading, and so on.

If you need some complex repetitive arrangement, use the setup and teardown mechanisms provided by the framework. They are guaranteed to run before and after each test or the whole suite. That way, your tests will be deterministic whether they run individually or as part of the entire suite. The order won't make a difference.

Write Deterministic Tests

If a test passes, it should always pass, and if it fails, it should always fail. The time of day, the arrangement of the stars, or the tide level shouldn't affect this.

That's why relying on external dependencies is not a great idea. An API might be down, the database busy, or someone might decide to run the tests at midnight. When writing testable code, everything outside your control must be treated as a dependency that can affect the test. It's pretty common to use Date.now() directly here and there, but even this makes the code non-deterministic, as its behavior depends on the current time. Everything should be mocked or stubbed in the test code to make it reliable.

Use Descriptive Names

The first thing you see when the test is failing is its name. It should provide enough information to understand what exactly failed and what it was trying to do. Fortunately, good unit tests are specific with only one assertion, so they’re easy to name well.

Don't be afraid of long, descriptive names. You won't be referencing them elsewhere in the code 😄. For example, for an amount calculation test, it('should return 0 for an empty cart') is a lot better than it('works for 0') or it('empty cart'). The same goes for frameworks that use function names as test names: shouldReturnZeroForAnEmptyCart is still much better. Good test names also work as a table of contents when using tests as code documentation. You can quickly find the right test by looking at the names.

Test One Requirement at a Time

Don’t test the whole method at once. When covering individual requirements, the code becomes much more straightforward. You can pick a more specific name, and the test is less bloated, easier to read, and more maintainable.

If the requirements change, make changes to the corresponding tests. You don’t have to look through all of them and check what’s affected.

That's also a great way to know what code requires testing in the first place. When writing tests that guarantee your requirements, you might find some parts of the code that don't necessarily need testing. It might be some internal stuff that's used by code that directly fulfills the requirements. In this case, it's worth covering only to make troubleshooting easier when a requirements test fails.

While you probably shouldn't waste time testing trivial internal code, like function identity(x){ return x; }, you should still cover simple code that directly fulfills a requirement. For example, formatShortTime(date) is a one-liner with a decent date and time library, but it's required to use a specific format and is therefore worth testing. It helps catch regression bugs in the future when someone, for example, decides to get rid of multiple time-formatting functions and keep only one.

Favor Precise Assertions

There’s a reason why testing frameworks provide various assertion methods. They offer different ways to check the result, and they also show more specific error messages when an assertion fails, providing more context to see what’s wrong.

For example,


expect(result === expected).toBeTruthy();

will fail with


expect(received).toBeTruthy()

Received: false

while


expect(result).toBe(expected);

would provide more information about what exactly failed:


expect(received).toBe(expected) // Object.is equality

Expected: "John Doe"

Received: "JohnDoe"

Frameworks also provide various assertions for different ways of testing. For example, in Jest, toBe tests for exact equality using Object.is, while toEqual and toStrictEqual recursively check that objects have the same type and structure.

You'll want to use a special matcher for floating-point equality that ignores the tiny rounding errors caused by how floating-point numbers are represented in memory. In Jest, it's toBeCloseTo. Regular toEqual would work sometimes, but even this simple test expect(0.1 + 0.2).toEqual(0.3) would fail. Your tests have to be deterministic.

Run Tests Automatically

Developers often run tests while writing code to check that new changes haven't broken something. However, as a guarantee, the tests should also run automatically on every build. They should be part of your continuous integration process, and failed tests must be treated as a build failure that someone fixes immediately.

To prevent code with failing tests from getting into the repository, consider triggering tests on git push. For JavaScript and TypeScript projects, you can configure that using husky.

You can also run your tests on every commit, but that gets in the way during longer tasks when you want separate commits with fewer changes each. In our opinion, it's better to have short, explicit commits that might fail the tests than long, hard-to-follow ones that always pass.

Conclusion

Unless you're writing unit tests, you really can't refactor your code without accidentally breaking something. The codebase becomes harder to maintain, to the point where even fixing existing bugs is tricky. Good unit tests will let you know before your code makes its way into the repository.

It’s not rocket science, but being good at something always requires practice.