* Click the "update cart" button without checking any items. The page should simply refresh.
* Delete one item. Verify it is gone from the cart and not in the wishlist.
* Move one item to the wishlist. Verify it is there and no longer displayed in the cart.
* Delete all the items. The "keep shopping" button should appear.
* (and so on – you get the idea).
As the iteration gets underway, we write FitNesse tests for each story. The other tester on the team and I do this, collaborating closely with our business experts, who often prepare examples and test cases in spreadsheets. We use the build-operate-check pattern for our tests. Each test page starts with tables that build the inputs to the story: global parameters, data held in memory for the test to use, or, if necessary, actual data in the database. Next we invoke a method (not yet written) that will operate on those inputs with the actual code. Last, the test contains tables that verify the results, again reading data either from memory or from the database itself. Each test also has a setup and teardown method.
For the sample story above, about deleting items from a shopcart, the test might be organized this way:
1. Build a shopcart in memory, adding all the fields for each item, such as item number, description, price, and quantity.
2. Operate on the shopcart, specifying one item to delete.
3. Check to see that the shopcart contains the correct remaining items.
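To make the three steps concrete, here is a minimal, self-contained Java sketch of what the code under such a test might look like. `ShopCart`, its methods, and the item data are all invented for illustration; they are not the project's actual classes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shopcart model -- the names are illustrative only.
class ShopCart {
    static class Item {
        final String number;
        final String description;
        final double price;
        int quantity;

        Item(String number, String description, double price, int quantity) {
            this.number = number;
            this.description = description;
            this.price = price;
            this.quantity = quantity;
        }
    }

    private final List<Item> items = new ArrayList<>();

    void addItem(String number, String description, double price, int quantity) {
        items.add(new Item(number, description, price, quantity));
    }

    boolean deleteItem(String number) {
        return items.removeIf(item -> item.number.equals(number));
    }

    boolean contains(String number) {
        return items.stream().anyMatch(item -> item.number.equals(number));
    }

    int itemCount() {
        return items.size();
    }
}

public class DeleteItemTest {
    public static void main(String[] args) {
        // Build: load the cart in memory with all the item fields.
        ShopCart cart = new ShopCart();
        cart.addItem("1001", "Garden hose", 24.99, 1);
        cart.addItem("1002", "Trowel", 8.50, 2);

        // Operate: delete one item through the code under test.
        cart.deleteItem("1001");

        // Check: only the correct remaining items are in the cart.
        if (cart.contains("1001") || !cart.contains("1002") || cart.itemCount() != 1) {
            throw new AssertionError("unexpected cart contents");
        }
        System.out.println("delete-item check passed");
    }
}
```

In the FitNesse page, each phase would be a table: the build table's rows feed `addItem`, the operate table names the item to delete, and the check table's expected values are compared against `contains` and `itemCount`.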
Unless we’re fairly certain of the test design, we write only one or two test cases, so that we don’t lose a lot of work if a big refactoring is needed later. We show the test to a programmer and discuss whether it’s a good approach, making changes if needed. In true CTDD (customer test-driven development), the programmers would write the test fixture to automate these tests before writing the production code, just as they do with unit tests. On our project, the programmers usually look over the high-level test cases and write a first draft of the code. Then they take on the task of automating the FitNesse test. Often the test case reveals a requirement they neglected or misunderstood, and they go back and change the code accordingly. Once the methods for the FitNesse tests are working, we can go back and add test cases, often uncovering more design flaws or just plain bugs.
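As a rough illustration of the fixture the programmers write, the class bridging the test tables to the code might be shaped like this. A real FitNesse fixture would typically extend a class such as `fit.ColumnFixture`, which populates the public fields from table cells; that superclass is omitted here so the sketch stands alone and compiles by itself, and every name is hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical fixture for the "delete item" story.
// In real FitNesse this would extend fit.ColumnFixture, and the wiki
// table's cells would set the public fields and call these methods.
public class DeleteItemFixture {
    // In-memory "database" shared by the build, operate, and check tables.
    static final Map<String, Integer> cart = new LinkedHashMap<>(); // item number -> quantity

    public String itemNumber; // set from a table column

    // Build phase: each row adds an item to the in-memory cart.
    public void addItem(String number, int quantity) {
        cart.put(number, quantity);
    }

    // Operate phase: a row deletes the item named in the itemNumber column.
    public boolean deleted() {
        return cart.remove(itemNumber) != null;
    }

    // Check phase: a row asserts how many items remain.
    public int remainingItems() {
        return cart.size();
    }
}
```

The point of the shape is that each public field and method corresponds to a column in a test table, so testers and customers can add rows without touching the Java.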
Once our FitNesse tests are passing, they become part of our daily regression test suite, catching any flaws introduced later. But their most important function has already been served: Writing the tests has forced communication between customers and testers, customers and programmers, testers and programmers. The team had a good understanding of the story’s requirements before starting to write any code, and the resulting code has a good chance of meeting all the customer’s expectations.