Testing & Software Blog

The occasional thoughts of a freelance software tester, drawn from experience across the software development life-cycle.

The Ghost in the Machine

A thread in the Software Testing Club asked about the funniest bugs people had seen. This one of mine is as much about the way it was reported as it is about the bug itself.

Many years ago, we had a support call from a customer, saying “Help, our system is possessed by the spirit of a negligent Edwardian maintenance engineer”.

The function dealt with planned maintenance schedules, which had a frequency in weeks. To allow for two-year maintenance intervals, the frequency field was a 3-digit integer.

The customer had a business requirement that was not explicitly supported; they wanted planned maintenance jobs that would be performed on-demand based on criteria that were outside the scope of the system, rather than performed at fixed intervals. When the engineers decided the job needed doing, they’d update the record and set the due date.

So they entered the maximum permissible number into the “Frequency” field, 999, which worked out as just short of 20 years. Once the maintenance task was performed and completed, the system would obediently calculate the next due date some time in the next century. (I told you this was some time ago, didn’t I?)

Then the system started showing long-overdue maintenance tasks that were supposed to have been done in 1907.

We had hit an instance of what was later called the Y2K bug way back in 1987.

The irony was that the Oracle database at the time supported four-digit years, but the UI (built using a precursor to Oracle’s SQL*Forms) did not. The short-term workaround was to limit the value of the field to 520, buying enough time for Oracle’s UI tool to support 4-digit years properly. Later upgrades provided the “missing” functionality by making Frequency and Next Due Date optional fields.
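A minimal sketch of the arithmetic behind the bug, in Python purely for illustration (the original system was built on Oracle forms, and the function name here is my own invention): the next due date is the completion date plus the frequency in weeks, but the UI keeps only a two-digit year, so anything past 1999 wraps back into the 1900s.

```python
from datetime import date, timedelta

def next_due_date(completed: date, frequency_weeks: int) -> date:
    """Next due date as a two-digit-year UI would store it."""
    due = completed + timedelta(weeks=frequency_weeks)
    # The form kept only the last two digits of the year,
    # implicitly prefixing "19" when displaying it again.
    return date(1900 + due.year % 100, due.month, due.day)

# A job completed in early 1988 with the maximum frequency of
# 999 weeks comes due in 2007 -- which the two-digit year
# quietly turns into a long-overdue job from 1907.
print(next_due_date(date(1988, 1, 15), 999))
```

This also shows why 520 was a workable cap: 520 weeks is just under ten years, so a job completed any time before 1990 still came due on the right side of the year 2000.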

Posted in Testing & Software | Comments Off

I loathe having to go through the “forgotten password” rigamarole just so I can leave a comment on someone’s blog. Yet another way in which spam has ruined the internet.

Posted on by Tim Hall | 1 Comment

Would the world (or at least the web) be a far better place if Adobe Flash had never existed?

Posted on by Tim Hall | 2 Comments

Mitt Romney’s Fail Whale

A very interesting analysis of the failed deployment of Team Romney’s Project Orca. It has all the ingredients of a classic IT disaster, including a lack of proper stress testing in an environment resembling the actual deployment, and, most critical of all, wholly inadequate end-user training.

Field volunteers also got briefed via conference calls, and they too had no hands-on time with the application in advance of Election Day. There was a great deal of confusion among some volunteers in the days leading up to the election as they searched Android and Apple app stores for the Orca application, not knowing it was a Web app.

John Ekdahl, Jr., a Web developer and Romney volunteer, recounted on the Ace of Spades HQ blog that these preparatory calls were “more of the slick marketing speech type than helpful training sessions. I had some serious questions—things like ‘Has this been stress tested?’, ‘Is there redundancy in place?’, and ‘What steps have been taken to combat a coordinated DDOS attack or the like?’, among others. These types of questions were brushed aside (truth be told, they never took one of my questions). They assured us that the system had been relentlessly tested and would be a tremendous success.”

When the thing went live, it all went predictably pear-shaped.

As the Web traffic from volunteers attempting to connect to Orca mounted, the system crashed repeatedly because of bandwidth constraints. At one point the network connection to the campaign’s data center went down—apparently because the ISP shut it off. “They told us Comcast thought it was a denial of service attack and shut it down,” Dittuobu recounted.

You could ask what a spectacular failure of an IT implementation says about the candidate’s competence to be President of the United States.

Posted in Testing & Software | Tagged , | Comments Off

Requirements? Are they necessary?

In a thread on the Software Testing Club, one poster came up with this gem:

I repeat, if you have no requirements, you cannot test anything. I think it’s important to grasp that there is a fundamental difference between a test, and an experiment ;-)

To which I replied:

If you have no requirements, you have to make assumptions about how and what to test based on your knowledge of the domain and the system.

Being able to make those sorts of assumptions is part of the skill-set of a good tester. So is knowing when you lack sufficient domain knowledge to be able to make any meaningful assumptions.

It’s the difference between intelligent exploratory testing and mindless script-checking. I test using a mental model of the system. Detailed functional specifications are often a very important source for that model, and when they exist, a tester should certainly read them. But knowing who and what to ask when you have gaps in your mental model is just as important.

Posted in Testing & Software | Tagged | 8 Comments

I do like this software testing term. “A Lance Armstrong Bug”. It means that the code is passing all the tests, but it’s not behaving as it should.

Posted on by Tim Hall | Comments Off

This is why I will never buy an Amazon Kindle, and why I refuse to buy eBooks or music crippled by DRM. Because the vendor can take away what you thought you’d bought and paid for on a whim. Just like that…

Posted on by Tim Hall | 9 Comments

“Even though Excel is Microsoft and therefore supposed to be stable, there were serious bugs” – Actual quote from a meeting in which we decided to defenestrate a notoriously squamous and rugose spreadsheet.

Posted on by Tim Hall | Comments Off

A question for those of you who listen to music while you work. Are you more productive if you listen to music on random shuffle rather than listening to individual albums all the way through as Steve Wilson intended? Does it actually make any difference?

Posted on by Tim Hall | 1 Comment

Exploratory Testing

Great blog post by Anne-Marie Charrett on Courage in Exploratory Testing.

Exploratory Testing is tester centric, meaning the tester is central to the testing taking place. The tester has the autonomy and the responsibility to make decisions about what to test, how to test, how much to test and when to stop. This may seem blatantly obvious to some, but it’s surprising the number of test teams where this is not the case.

The downside is that management love their spreadsheets, with their percentages and their red, yellow and green colour coding, which is where that courage thing comes in.

In scripted testing, testers have artifacts which they measure and count, giving an illusion of certainty, but really this is smoke-and-mirrors reporting and generally offers little genuine information. “We have reached 78% test coverage, with a DDR of 85%”

I used to work in a testing team where we had spreadsheet-based test scripts that calculated “percentage complete” and “percent confidence” by counting the number of boxes ticked off. The figures those formulae calculated were far less meaningful than estimates based on gut feeling.

For my current project I create test spreadsheets as I test, with “Test performed”, “Test Result” and “Pass/Fail” columns (there are other columns, but that’s the guts of it). This provides the auditability that management needs. There is no “Expected result” column, and that’s deliberate. Unfortunately there is no guarantee that management won’t see the thing as a test script and expect the whole lot to be re-run for regression testing…

Posted in Testing & Software | Tagged | 1 Comment