Jared's Weblog

Mon, 10 Oct 2005

Tests Rust

They really do. Think about it... when you first write automated tests, they work. They're cool and shiny. But then you get busy, so you don't run the tests every day. Eventually, you don't run them every week. It's like the abandoned bicycle in the backyard.

Over time, the automated tests become something you pull inside to use at the end of your product cycle. They become release tests... or at least you hope they will. But something happened to those tests you left out in the rain. They don't run cleanly anymore. They've rusted.

What does it mean for a test to rust? In the best case, the tests just squeak a lot. In the worst case, they won't run at all. But why?

First, the code being tested breaks. It happens. Developers are human. Since they aren't perfect, errors occur. So you find dozens (or hundreds!) of breaks when you finally run the tests. Unfortunately, with that many failures, most people assume the tests are bad and ignore them.

Second, the tests and the code get out of sync. The two need to march in lockstep. If a test doesn't know what the code is doing, it can't verify the response. So again we have a massive number of failures that get ignored.

Finally, trust fades over time. If your tests haven't been telling the developers anything useful, why should they listen to them? Trust is earned over time, not tossed over the fence at the end of the cycle. But the end of the cycle, when everyone is stressed and busy, is exactly when you finally run the tests. Even if only a few of them fail, the developers will ignore them.

The only way to keep your tests shiny and new is to use them. Run them.

How often should you run them? I'm glad you asked. :)

Assume your test has a basic value of V. I can't tell you what your test's value is. That depends on your product and your test.

But I can tell you that over time V can degrade. Every time your code is touched and the tests aren't run, you lose value. In fact, if R is the number of test runs and C is the number of code changes, the effective value is (R/C) * V.

You start with the basic value of the test, but you'll lose value if you only run the test every 10 code changes. You'll lose more if you only run it every 100.

So if your test has a value of 10 but you only run it once every 10 code changes, its effective value is (1/10) * 10 = 1. However, if you have a really simple test with a value of only 1 but you run it every time the code changes, its value to you is (1/1) * 1 = 1.
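Here's that arithmetic as a quick Python sketch. The function name and the numbers are only illustrations for the two cases above, not something any testing tool will hand you:

    def effective_value(runs, changes, base_value):
        """Effective value of a test: (runs / changes) * base_value."""
        if changes == 0:
            return base_value  # nothing has changed, so nothing has been lost
        return (runs / changes) * base_value

    # A valuable test that's rarely run...
    print(effective_value(runs=1, changes=10, base_value=10))  # 1.0
    # ...buys you no more than a trivial test run on every change.
    print(effective_value(runs=1, changes=1, base_value=1))    # 1.0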

The only way to maximize the return on your testing investment is to run your tests as often as possible, preferably after every code change.

The best way to do that? Continuous Integration. Add software that will build and test your product every time your code changes. It's not as hard as you might think, and the payoff is tremendous. There are lots of products available to help you out.
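Even before you pick a CI product, a dumb polling loop captures the idea. This is only a sketch: it assumes a Subversion working copy, uses "ant test" as a stand-in for whatever your real build-and-test command is, and the check on svn's output is deliberately crude:

    import subprocess
    import time

    BUILD_AND_TEST = ["ant", "test"]  # placeholder for your real build/test command
    POLL_SECONDS = 60

    def new_changes_arrived():
        # "svn update" reports "Updated to revision N." when it pulls changes
        # and "At revision N." when there is nothing new.
        result = subprocess.run(["svn", "update"], capture_output=True, text=True)
        return "Updated to revision" in result.stdout

    while True:
        if new_changes_arrived():
            build = subprocess.run(BUILD_AND_TEST)
            print("Tests passed" if build.returncode == 0
                  else "Tests FAILED -- go look now, while the change is fresh")
        time.sleep(POLL_SECONDS)

A real CI server adds reporting, history, and notification, but the core loop is no bigger than this: notice a change, build, test, tell someone.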

Don't spend all that time on your tests just to see them rust.

Jared

posted at: 23:56