Requirements? Are they necessary?

In a thread on the Software Testing Club, one poster came up with this gem:

I repeat, if you have no requirements, you cannot test anything. I think it’s important to grasp that there is a fundamental difference between a test, and an experiment ;-)

To which I replied:

If you have no requirements, you have to make assumptions about how and what to test based on your knowledge of the domain and the system.

Being able to make those sorts of assumptions is part of the skill-set of a good tester. So is knowing when you lack sufficient domain knowledge to be able to make any meaningful assumptions.

It’s the difference between intelligent exploratory testing and mindless script-checking. I test using a mental model of the system. Detailed functional specifications are often a very important source for that model, and when they exist, a tester should certainly read them. But knowing who and what to ask when you have gaps in your mental model is just as important.

Exploratory Testing

Great blog post by Anne-Marie Charrett on Courage in Exploratory Testing.

Exploratory Testing is tester-centric, meaning the tester is central to the testing taking place. The tester has the autonomy and the responsibility to make decisions about what to test, how to test, how much to test and when to stop. This may seem blatantly obvious to some, but it’s surprising the number of test teams where this is not the case.

The downside is that management love their spreadsheets, with their percentages and red, yellow and green colour coding, which is where that courage thing comes in.

In scripted testing, testers have artifacts which they measure and count, giving an illusion of certainty, but really this is smoke-and-mirrors reporting and generally offers little genuine information. “We have reached 78% test coverage, with a DDR of 85%.”

I used to work in a testing team where we had spreadsheet-based test scripts that calculated “percentage complete” and “percent confidence” by counting the number of boxes ticked off. The figures those formulae calculated were far less meaningful than estimates based on gut feeling.
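To see why those figures were hollow, here is a minimal sketch of the kind of box-counting formula such a spreadsheet embeds (the step names and columns are illustrative, not from the original spreadsheets):

```python
# Sketch of a "percentage complete" metric driven purely by ticked boxes.
# It measures activity, not risk: a ticked box says nothing about what
# the step actually revealed, or whether the right things were tested.
steps = [
    {"step": "Open login page", "ticked": True},
    {"step": "Enter credentials", "ticked": True},
    {"step": "Verify dashboard", "ticked": False},
    {"step": "Check audit log", "ticked": False},
]

ticked = sum(1 for s in steps if s["ticked"])
percent_complete = 100 * ticked / len(steps)
print(f"Percentage complete: {percent_complete:.0f}%")  # prints "Percentage complete: 50%"
```

The formula is arithmetically correct and completely uninformative about test quality, which is the point.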

For my current project I create test spreadsheets as I test, with “Test performed”, “Test Result” and “Pass/Fail” columns (there are other columns, but that’s the guts of it). This provides the auditability that management needs. There is no “Expected result” column, and that’s deliberate. Unfortunately there is no guarantee that management won’t see the thing as a test script and expect the whole lot to be re-run for regression testing…
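The shape of that log can be sketched as a simple CSV export; the row content below is invented for illustration, and only the three columns named above are shown:

```python
import csv
import io

# Columns as described in the post; note the deliberate absence of
# an "Expected result" column -- this is a record of testing done,
# not a script to be replayed.
fieldnames = ["Test performed", "Test Result", "Pass/Fail"]
rows = [
    {
        "Test performed": "Import customer file with blank surname",
        "Test Result": "Row rejected with a validation message",
        "Pass/Fail": "Pass",
    },
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Because each row records what was actually done and observed, the log documents the exploration after the fact rather than prescribing it in advance.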

Posted in Testing & Software | Tagged | 1 Comment