This article is taken from the book The Art of Unit Testing. As part of a chapter on integrating unit testing into your current organization, this segment answers one of the key questions involving unit testing.
Author: Roy Osherove
This article is based on The Art of Unit Testing, to be published in January 2009. It is being reproduced here by permission from Manning Publications. Manning early access books and ebooks are sold exclusively through Manning. Visit the book's page for more information.
Let’s begin with some facts. Studies have shown that raising the overall code quality in a project can increase productivity and shorten schedules (Capers, Programming Productivity, 1986; Capers, Software Assessments, Benchmarks, and Best Practices, 2000). How does this square with the fact that writing tests adds coding time? The answer lies mostly in maintainability and the ease of fixing bugs.
This question is usually asked by the people on the front lines in terms of timing: team leads, project managers, and clients. There is a difference in viewpoints here. A team lead may really be asking, “So what should I tell my project manager when we go way past our due date?” They may actually think the process is useful and are just looking for ammunition for the upcoming uphill battle. They may also be asking the question not in terms of the whole product, but in terms of specific feature sets or functionality. A project manager or customer, on the other hand, will usually be talking in terms of full product releases.
The fact that different people care about different scopes is relevant because the answer can differ by scope. For example, unit testing might double the time it takes to implement a specific feature next month, yet the bottom-line release date for the product five months out may actually arrive sooner. To understand this, let’s look at a real example from a company I was involved with.
A tale of two features
One of the larger companies I was consulting with was starting to introduce unit testing into its process. Before going all in, the company ran a pilot project, in which a group of developers was tasked with adding a new feature to an existing large application. The company’s main livelihood was in creating this large, billing-style application and then customizing parts of it for various clients; it employed thousands of developers around the world.
The following measures were used to gauge the success of the pilot:
* The time it took the team to go through each of the development stages
* The overall time it took for the project to be released to the client
* The number of bugs found by the client after release
The pilot was measured against a similar feature built by a different team for a different client, with almost exactly the same feature size (both were customization efforts of the product for two different clients; one was done with unit tests, the other without).
Table 1 shows the differences in time:
Table 1: Team Progress and output measured with and without tests
Overall, the time to release the feature with tests was less than it was without tests. The teams were at roughly the same skill and experience level, and the features were of roughly equal complexity to implement.
Still, the managers on the team with the unit tests didn’t initially believe the pilot would be a success, because they looked only at the first row of the table as the criterion for success instead of the bottom line. In that first row, it takes twice as long to code the same feature (unit tests naturally cause you to write more code). It didn’t matter to them that the time “wasted” more than made up for itself once the QA team got hold of the product and found fewer bugs to deal with.
That’s why it’s important to emphasize that unit testing can increase the time it takes to implement a feature, but that this time is recouped through improved quality and maintainability over the product’s release cycle.