Exploratory Testing: Finding the Music of Software Investigation


In testing, test scripts that are written down are also open to interpretation by the person executing them. Automating these tests is the only way to guarantee they will be repeated exactly the same way, but as with automating music, the lack of interpretation in execution can limit the results. A computer can only find the problems we predict and program it to find. Repeating scripted tests over and over can become boring and tedious, and may feel like resolution alone, without the vital tension created by curiosity. At the other end of the spectrum is improvisational testing: exploratory testing. Pure exploratory testing means that my next test is completely shaped by my current ideas, without any preconceptions. Pure scripted testing and pure exploratory testing sit at opposite ends of a continuum.

This analogy between music and software testing isn’t perfect, however. Music is performed for entertainment or as practice for musicians who are developing their skills; the end goal is the listeners’ enjoyment, skill development, and the pleasure of the musician. Software testing, on the other hand, isn’t generally done for entertainment; it is done to discover information. As Cem Kaner says, software testing is an investigative activity that provides quality-related information about software [2]. To gather different kinds of information, we want to be open to different interpretations and to be able to look at a problem in many different ways. In music, improvisation can have negative effects when used at an inappropriate time or in an inappropriate manner. (When a musician plays a wrong note, we really notice it.) In software testing, exploration and improvisation, even when done badly, can often lead to wonderful sources of new information. Inappropriate interpretations can be a hazard in a musical performance, but on software projects, accidents, or “playing the wrong notes,” can lead to important discoveries. Furthermore, software projects face risk, and exploratory testing allows us to adjust to new risks as they appear.

What does skilled exploratory testing look like? Here is an example of scripted and exploratory testing in action. In one test effort, I came across a manual test script and its automated counterpart, both written several releases earlier. They were for an application I was unfamiliar with, using technology I was barely acquainted with. Since I had never run these tests before, I ran the automated test first to learn more about what was being tested. It passed, but the test execution and results logging provided little information beyond “test passed.” To me, this is the equivalent of the emails I get that say: “Congratulations! You may already be a winner!” Statements like that, on their own and without some sort of corroboration, mean very little.
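To see the difference concretely, consider a made-up example (not the actual suite from this story): a check whose log can only say “test passed” carries far less information than one that records what was actually observed. In pytest-style Python, with a hypothetical banking_api fixture standing in for the application under test, the contrast might look like this:

    import logging

    log = logging.getLogger("example_tests")

    def test_transfer_pass_only(banking_api):
        # A bare assertion: a green result says only "nothing I checked failed".
        banking_api.transfer(source="A", target="B", amount=100)
        assert banking_api.balance("B") >= 100

    def test_transfer_with_observations(banking_api):
        # Record what was actually seen, so a "pass" can be corroborated later.
        before = banking_api.balance("B")
        receipt = banking_api.transfer(source="A", target="B", amount=100)
        after = banking_api.balance("B")
        log.info("before=%s receipt=%s after=%s", before, receipt, after)
        assert after == before + 100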

I didn’t learn much from my initial effort: running the automated test didn’t reveal more about the application or the technology. Since learning is an important part of testing work, I delved more deeply. I moved on to the manual test script and followed each step. When I got to the end, I checked for the expected results, and sure enough, the actual result I observed matched what the script predicted. Time to pass the test and move on, right? But I still didn’t understand exactly what was going on in the test, and I couldn’t take responsibility for those results on blind faith. That would violate my purpose as a tester; if I believed everything worked as advertised, why test at all? Furthermore, experience has taught me that tests can be wrong, particularly as they get out of date. Re-running the scripted tests provided no new information, so it was time to leave them behind.

One potential landmine in providing quality-related software information is tunnel vision. Scripted tests have a side effect of creating blinders, narrowing your observation space. To widen my observation possibilities, I began to transition from scripted testing to exploratory testing. I created new tests by adding variability to the existing manual test, and I got a better idea of what worked and what caused failures. I didn’t write these tests down because I wanted to adjust them on the fly and learn quickly; writing them down would have interrupted the flow of discovery, and I wasn’t yet sure which tests I would want to repeat later.
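To make “adding variability to the existing manual test” concrete, here is a minimal, hypothetical sketch (the application, fixture, and values are invented, and in the story above these variations were run by hand rather than written down); if some probes later proved worth repeating, they might be captured like this:

    import random
    import pytest

    # Hypothetical variations on one step of the scripted test: instead of the
    # single "expected" input from the manual script, probe around its edges.
    AMOUNTS = [0, 1, 99, 100, 101, 10_000]

    @pytest.mark.parametrize("amount", AMOUNTS)
    def test_transfer_varied_amounts(banking_api, amount):
        receipt = banking_api.transfer(source="A", target="B", amount=amount)
        # The original script only checked the final balance; also check that
        # the receipt agrees with what was requested.
        assert receipt.amount == amount

    def test_transfer_random_amount(banking_api):
        # A throwaway exploratory probe; print the input so a surprise is reproducible.
        amount = random.randint(0, 10_000)
        print(f"probing transfer with amount={amount}")
        banking_api.transfer(source="A", target="B", amount=amount)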
