Exploratory Testing

Great blog post by Anne-Marie Charrett on Courage in Exploratory Testing.

Exploratory Testing is tester-centric, meaning the tester is central to the testing taking place. The tester has the autonomy and the responsibility to make decisions about what to test, how to test, how much to test and when to stop. This may seem blatantly obvious to some, but it's surprising how many test teams don't work this way.

The downside is that management love their spreadsheets, with their percentages and their red, yellow and green colour coding, which is where that courage thing comes in.

In scripted testing, testers have artifacts which they can measure and count, giving an illusion of certainty, but really this is smoke-and-mirrors reporting and generally offers little genuine information. “We have reached 78% test coverage, with a DDR of 85%.”

I used to work in a testing team where we had spreadsheet-based test scripts that calculated “percentage complete” and “percent confidence” by counting the number of boxes ticked off. The figures those formulae calculated were far less meaningful than estimates based on gut feeling.
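To give a flavour of the arithmetic, here is a guess at the shape of those formulae (a reconstruction for illustration, not the actual spreadsheet): both numbers ultimately boil down to the same count of ticked boxes.

    // A sketch of the kind of calculation those spreadsheets did.
    // The inputs, and the idea that "confidence" reused the same count,
    // are assumptions for illustration only.
    public class TickBoxMetrics {
        public static void main(String[] args) {
            int boxesTicked = 78;
            int totalBoxes = 100;
            double percentComplete = 100.0 * boxesTicked / totalBoxes;
            double percentConfidence = percentComplete; // same count, relabelled
            System.out.printf("Complete: %.0f%%  Confidence: %.0f%%%n",
                    percentComplete, percentConfidence);
        }
    }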

For my current project I create test spreadsheets as I test, with “Test performed”, “Test Result” and “Pass/Fail” columns (there are other columns, but that’s the guts of it). This provides the auditability that management needs. There is no “Expected result” column, and that’s deliberate. Unfortunately there is no guarantee that management won’t see the thing as a test script and expect the whole lot to be re-run for regression testing…
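The shape of it, as a rough sketch (the column names come from the spreadsheet described above; the sample rows, file name and code are illustrative, not the real thing):

    import java.io.FileWriter;
    import java.io.IOException;

    // Illustrative only: records tests as they are performed, one row
    // per test, with no "Expected result" column.
    public class SessionLog {
        public static void main(String[] args) throws IOException {
            try (FileWriter out = new FileWriter("session-log.csv")) {
                out.write("Test performed,Test Result,Pass/Fail\n");
                out.write("Login with expired password,Prompted to reset,Pass\n");
                out.write("Paste 5000 chars into name field,Unhandled exception,Fail\n");
            }
        }
    }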


One Response to Exploratory Testing

  1. Michael Orton says:

    I am convinced we could not have delivered the last big project I worked on without a JUnit test suite which performed a basic regression test in about 4 minutes that would have taken hours to do manually. Yet the test suite has caught me out, in that it has to use a mock mechanism for database access, and we use a very powerful piece of middleware to hide the real database from the application layer. The snag is that this layer creates lots of clones of the objects during any update cycle and, unlike true SQL or any normal object-based system, you cannot assume that any other object you hold a reference to is at the same logical point in time in that update cycle. Bugs caused by this effect cannot be found by the mock database used by the JUnits.
    So we can regression test, and give management the numbers they want in terms of tests run and percentage passed, but just how meaningful those numbers are is questionable. In a world which requires the numbers to exist, even though few people read them and fewer still understand them, this is probably the best value for money we can give them.
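As a minimal sketch of the pitfall Michael describes (all class names here are invented, and the real middleware is not shown): an in-memory mock hands back the same instance on every fetch, so a test that relies on a held reference staying current passes, even though middleware that clones objects during the update cycle would leave that reference stale.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class CloneOnUpdatePitfallTest {

        static class Account {
            int balance;
            Account(int balance) { this.balance = balance; }
        }

        // A typical in-memory test double: returns the SAME instance
        // every time, unlike middleware that clones during updates.
        static class MockStore {
            private final Account account = new Account(100);
            Account fetch() { return account; }
            void credit(int amount) { account.balance += amount; }
        }

        @Test
        public void staleReferenceGoesUnnoticedAgainstTheMock() {
            MockStore store = new MockStore();
            Account held = store.fetch(); // reference taken before the update
            store.credit(50);             // the update cycle runs

            // Passes against the mock because fetch() and the held
            // reference are the same object. Cloning middleware would
            // leave `held` at balance 100, and code relying on it would
            // be wrong: a bug this kind of test cannot catch.
            assertEquals(150, held.balance);
        }
    }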