I ran another test, and without the scripted tests to limit my observation, noticed something that raised my suspicions: the application became sluggish. Knowing that letting time pass in a system can cause some problems to intensify, I decided to try a different kind of test. I would follow the last part of the manual test script, wait a few minutes, and then thoroughly inspect the system. I ran this new test, and the system felt even more sluggish than before. The application messaging showed me the system was working properly, but the sluggish behaviour was a symptom of a larger problem not exposed by the original tests I had performed.
Investigating behind the user interface, I found that the application was silently failing: while it recorded database transactions as completing successfully, it was actually deleting the data. We had been losing data ever since I ran the first tests. Even though the tests appeared to pass, the application was failing in a serious way. If I had relied only on the scripted manual and automated tests, this would have gone undetected, resulting in a catastrophic failure in production. Furthermore, if I had taken the time to write the tests down first and then execute them, I would most likely have missed the window of opportunity that allowed me to find the source of the problem. Merely running the scripted tests felt like repeating a resolution without any tension, and it didn’t lead to any interesting discoveries. The interplay between the tension and resolution of exploratory testing ideas, on the other hand, quickly led to a very important discovery. Because of results like this, I don’t tend to use many procedural, pre-scripted manual test cases in my own work.
So how did I find a problem that was waiting like a time bomb for a customer to stumble upon? I treated the test scripts for what they were: imperfect sources of information that could severely limit my ability to observe useful information about the application. Before, during and after test execution, I designed and re-designed tests based on my observations. I also had a bad feeling when I ran the test. I’ve learned to investigate those unsettled feelings rather than suppress them, because feelings of tension don’t fit into a script or process; often, this rapid investigation leads to important discoveries. I didn’t let the scripts dictate what to test, or what success meant. I had confidence that skilled exploratory testing would confirm or refute the answers supplied by the scripted tests.
Testers who have learned to use their creativity and intelligence when testing come up with ways to manage their testing thought processes. Skilled exploratory testers use mental tricks to help keep their thinking sharp and consistent. Two tricks testers use to kick-start their brains are heuristics (problem-solving approaches) and mnemonics (memory aids).
Musicians use similar techniques, and may recognize “the circle of fifths” as a heuristic to follow if they get lost in an improvised performance. (This isn’t a guarantee, though; a heuristic may or may not work for you. When a heuristic is inappropriate, you simply try another.) Musicians tend to have large toolboxes of heuristics, and use mnemonics as well. One example is “Every Good Boy Does Fine”, which is used to remember the notes E-G-B-D-F on the lines of a staff. Skilled testers use similar tools to remember testing ideas and techniques.
I’m sometimes called in as an outside tester to test an application that is nearing completion. If the software product is new, a technique I might use is the “First-Time User” heuristic. With very little application information, and using only the information available to a first-time user, I begin testing. It’s important for me to know as little as possible about the application at this point, because once I know too much, I can no longer test the way a first-time user would.