Questioning tests as documentation

I came across this statement recently.

In some ways, automated tests are like documentation – documentation that can't get out of date and is easier to discover. Instead of manually running code and reviewing the output, we'd write test code that runs the query and programmatically verifies the output. So any developer can run that test code to understand how the system is supposed to work and make sure it still works that way.

A couple of things about this seem fanciful when it comes to testing.

First, saying that tests are documentation that can't get out of date is mostly wishful thinking. Test code is subject to bit rot too. Sooner or later a project ends up with tests that no longer need to exist, or that verify behaviour nobody needs any more.

Second, the implication that a developer can run some test code and "understand how the system is supposed to work" simply isn't true in a lot of cases. It's true that the developer will know how the system is supposed to behave, but knowing the behaviour is not the same as understanding it.

Suppose you're navigating a code base. There's no doubt that having tests is better than not having them. Being able to verify the behaviour of a system is invaluable. Now suppose you add something to the code and write a test for it. The new test passes, but an existing one fails, and on inspection the failure seems to have nothing to do with the change you just made. The question becomes: why?

There are actually two questions to ask: why did the test fail, and why does the test exist? Not all tests make it clear why they exist. Documenting tests gets even less priority than documenting code, so unless the reason is obvious from the code itself, you may be in for some serious digging.
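To make that concrete, here's a minimal sketch (the function and the rounding scenario are hypothetical, invented for illustration) contrasting a test that records no reason for existing with one that does:

```python
# Hypothetical module under test; the names are illustrative only.
def normalise_price(pence):
    """Convert a price in pence to a pound value rounded to 2 places."""
    return round(pence / 100, 2)

# This test passes or fails, but says nothing about why it exists.
# A developer who breaks it years later is left guessing.
def test_normalise_price_edge():
    assert normalise_price(99949) == 999.49

# The same check with its motivation recorded. When it fails,
# the "why does this test exist?" question answers itself.
def test_normalise_price_edge_documented():
    # Regression test: an earlier rounding bug (hypothetical) dropped
    # the trailing digit, turning 999.49 into 999.5 on invoices.
    assert normalise_price(99949) == 999.49

test_normalise_price_edge()
test_normalise_price_edge_documented()
```

The assertion is identical in both; only the second tells a future reader whether the test still needs to be there.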

Basically, I don't think tests are a substitute for decent documentation and/or requirements. I used to think they were. I'm not convinced of that anymore. Software development does itself a disservice by treating them as the same thing.

-- Geoff