Computing Blog

A blog about all aspects of computing and technology, from software development to social networking to commentary on the IT industry as a whole.

If you have an adversarial relationship between testers and developers, you’re almost certainly doing it wrong. The common enemy should be the bugs…

Posted by Tim Hall | Comments Off

On Patents and Copyright

I do wonder how much my position on the so-called “copyright wars” in the music industry is coloured by the way abuse of patent laws is wrecking the software industry, where I earn a living.

While I don’t know enough about the specifics of the recent patent lawsuit between Apple and Samsung to comment on that particular case, I know how much this sort of thing can stifle innovation and competition. When you have so-called “patent trolls”, companies whose entire business is to buy up obscure patents from defunct companies and then extract money through patent lawsuits, you know something is badly broken.

There’s a strong element of land-grabbing and rent-seeking about the whole thing, and the way parts of the music industry behave has a similar smell. That’s not to say I don’t believe creative artists are entitled to fair compensation for their work. But a lot of the draconian copyright enforcement legislation written by lobbyists employed by big media companies will have much the same effect as the broken patent laws in protecting established monopolies. It’s not in the interest of consumers, and I don’t believe it’s ultimately in the interests of the artists either.

Posted in Music, Music Opinion, Testing & Software | 2 Comments

Two trains are going to crash. What do you do?

A member of the Twitter testing community, @JariLaakso, posted this question:

Two trains are going to crash. Brakes don’t work. What would you do?

Of course, you can’t answer this without asking a lot of questions to establish context.

  • Are you on board one of the trains? If so, are you a passenger or a member of the train crew? If train crew, are you in the driving cab, or elsewhere on the train?
  • Is your train actually moving, or is it stationary and about to be hit by the other train?
  • If you’re not on board the train, are you a bystander, or someone like a signaller?
  • How long before the collision? Just seconds, or longer than that?
  • Is there anyone on board the trains at all? Perhaps it’s a staged crash for a film?
  • Is the “crash” even an impending collision? Or is it a case of software gone blue-screen-of-death meaning the brakes can’t be released and the train needs rebooting before it can go anywhere? This may sound silly, but I’ve been on a Virgin Trains Pendolino that had to do precisely that.

The actual answer turned out to be none of those things, but that’s not really the point. It’s about asking the right questions to get the information you need to be able to answer the original question “What do you do?”.

As an aside, real-life rail (or air) accident reports can often be worthwhile reading for a tester. I’m not talking about sensationalist reports of death and destruction, but the technical stories behind the accidents and how they occurred.

I remember reading L. T. C. Rolt’s classic “Red For Danger” at a formative age. It’s a very well-written and readable account of the evolution of railway safety throughout the steam age. It starts with the development of early primitive signalling systems from the 1840s onwards, and tells of the lessons learned from each successive serious accident. As the story moves into the 20th century, increasingly sophisticated systems from signal interlocking to better and stronger rolling stock meant far fewer disastrous accidents. But even the best systems can fail, with sometimes fatal consequences, and the book explains how.

It’s essentially the story of bugs.

Posted in Testing & Software | 6 Comments

Time to log out of Facebook?

I’ve recently taken an extended break from Facebook. I’d got fed up with the drama, vapidity, over-sharing and passive-aggressiveness. I know I’m probably guilty of some of those things myself; that, and the fact that it can easily become a huge time-sink, are the reasons I felt I needed a time-out from the place. But it’s made me wonder if there is a better way.

I really detest Facebook’s walled-garden approach. The most valuable thing about any internet-based community site isn’t the site itself, it’s the relationships you build and maintain through it. I don’t want those relationships wholly owned and controlled by an increasingly creepy corporation that’s only interested in monetising our mutual personal data so they can sell it to advertisers. Facebook has sucked the life out of far too many forums and blogs, and while many forums have their own problems, that can’t be a good thing. With more and more external websites morphing into detestable Facebook ‘apps’, they’re now actively trying to eat the rest of the web.

The only reason I’ve got a Facebook account at all is because there are people who have no significant online presence outside it, and I don’t want to lose all contact with them. I’d much rather a few more people who want to contact me follow me on Twitter, or comment on my blog. Or just use old-fashioned email.

It’s been said that Facebook was created by people with Asperger’s syndrome. Whether or not this is true, it does appear to have the geek social fallacies written all over it, especially #4 in that list. That does seem to be a root cause of a lot of the site’s problems.

In an ideal world, a combination of Twitter and blogging does everything I want out of social networking. But blogging in particular is quite hard work if you want to build an audience. Facebook’s greatest strength is that it provides a ready-made audience for those who don’t have an awful lot to say. Unfortunately that’s also its greatest weakness, hence the vapidity and over-sharing. I always feel bad when I have to mute, unfollow or in the worst cases block people because they’re friends-of-friends in real life. Just because we like the same music doesn’t necessarily mean we have anything else in common.

So what to do? Should I hold my nose and use Facebook sparingly, just to keep in touch with those who are active nowhere else? Or should I try to encourage more people who actively want to interact with me online to follow me on Twitter or read my blog? Should I be spending more of my online time on existing communities like RMWeb and Dreamlyrics? Or should I put my faith in alternatives such as Google+ or even Diaspora?

You should be asking yourselves the same questions.

Posted in Social Media | Tagged , , | 10 Comments

Incantations in High Elvish?

Great blog post about exploratory testing by James Marcus Bach, and why some people Just Don’t Get It.

It’s difficult for them because Factory School people, by the force of their creed, seek to minimize the role of humanness in any technical activity. They are radical mechanizers. They are looking for algorithms instead of heuristics. They want to focus on artifacts, not thoughts or feelings or activities. They need to deny the role and value of tacit knowledge and skill. Their theory of learning was state of the art in the 18th century: memorization and mimicry. Then, when they encounter ET, they look for something to memorize or mimic, and find nothing.

Those of us who study ET, when we try to share it, talk a lot about cognitive science, epistemology, and modern learning theory. We talk about the importance of practice. This sounds to the Factory Schoolers like incomprehensible new agey incantations in High Elvish. They suspect we are being deliberately obscure just to keep our clients confused and intimidated.

As I’ve explained in previous blog posts, I’ve always taken an exploratory approach to testing, even if what I did wasn’t formally identified as such. Trying to force testing into a purely mechanical script-based approach not only sucks all the fun out of testing, risking disillusionment and burnout, but makes the actual testing less effective.

And while we’re on the subject of old-school techniques, are these guys for real? “Unlike a traditional development process, ours establishes all the system’s requirements before a line of code is written“. Seriously, folks, does anyone still try to develop software that way in 2011? Sounds like a perfect way to implement what the client thought they wanted eighteen months ago.

Remember that old cartoon of the swing hanging from the branch of the tree?

Posted in Testing & Software | 9 Comments

Beware the Unknown Unknowns

Another testing story of mine.

The two related projects were interfaces with external systems handling rent deductions and water billing respectively, both of them for a large overseas customer.

A major problem was that neither developers nor testers had any access to the third-party systems other than specifications for the file formats to be used in the interfaces. This made it impossible to perform complete end-to-end testing within the internal test environment. My biggest challenge as the tester on the project was to try to simulate the external system by creating input files based on those file specifications. The physical sending and receiving of files was beyond the scope of my own testing.

One warning flag was the way one of the sample input files didn’t quite match the file specification. This really ought to have been taken as an omen of how the project would unfold.

It was all flat text files with fixed-length fields, so I put together a suite of SQL scripts, run in TOAD (a third-party tool for accessing Oracle databases), to generate the input files containing the data our system would expect in response to the output files it had produced. These scripts covered various “Happy Path” scenarios, and I’d hack the files with a text editor to test various error conditions. This meant I could simulate end-to-end business scenarios from the perspective of our own system.
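The file-generation approach above can be sketched in a few lines. Here’s a minimal illustration in Python rather than SQL, with an entirely invented record layout; the real scripts, of course, followed the interface’s own field specifications:

```python
# Sketch of generating a fixed-width flat-file record to simulate a
# third-party system's input file. The field layout (name, width) below
# is hypothetical, not the actual interface specification.

FIELDS = [("tenant_id", 10), ("amount", 8), ("status", 2)]

def make_record(values):
    """Pad each field to its fixed width and concatenate into one record."""
    parts = []
    for (name, width), value in zip(FIELDS, values):
        text = str(value)
        if len(text) > width:
            raise ValueError(f"{name} exceeds field width {width}")
        parts.append(text.ljust(width))
    return "".join(parts)

def write_happy_path(path, rows):
    """Write one fixed-width record per line, as a flat-file interface expects."""
    with open(path, "w") as f:
        for row in rows:
            f.write(make_record(row) + "\n")
```

The error-condition cases I mention, hand-hacking files in a text editor, amount to deliberately breaking records like these: truncating a field, shifting a column, or inserting an out-of-range value.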

I laced my test reports with caveats making it clear that we hadn’t been able to test the full processes against an actual instance of the third-party system. Sure enough, as soon as the system went into acceptance testing with the client, the response was a flood of issues and defects, almost all of them relating to flawed assumptions and misunderstandings during business analysis. The fixing and retesting went on for more than a year, far longer than the initial development phase of the project.

I can’t honestly describe this as a happy and successful project, but it’s the nature of experience that you frequently learn more from something that went badly than from something that went well. While I feel I did as professional a job of testing as was possible under the circumstances, I still wonder how things could have been done better. I did ask my line manager whether the operators of the third-party systems had test instances available for developers of third-party interfaces, but I never got an answer.

This is a good example of the risk of using the Waterfall method of development for a project as full of assumptions and unknown unknowns as this one.

Posted in Testing & Software | Comments Off

What was your all-time favourite bug?

Another of my occasional profession-related blog posts. I have considered spinning these off into a separate blog, but for the time being it’s another category on my personal blog. As I’ve explained before, what I do in my day job isn’t completely unrelated to some of what I do outside of work.

So, the all-time favourite bug?

This is a question I’ve been asked recently. While it’s difficult to single out one bug from all my years of testing experience, this one does stand out, and it comes from one of the most challenging projects of my testing career so far.

The application was a work scheduling system, part of a very comprehensive housing management product. A call centre would take calls from tenants reporting broken windows or blocked drains, then the system would schedule appointments for operatives to carry out repairs. The operatives themselves could update progress on these appointments in real time using a mobile app, allowing the system to maintain the daily work plan dynamically.

It was a complex application involving two third-party products: one was the mobile app; the other, which formed the focus of my own testing, was the scheduling engine itself. This was a standalone system which took XML messages containing the call information, and sent back XML containing either the appointment, or the reason why it couldn’t create one.

One thing that became apparent early on was that the scope of my testing didn’t encompass just our own product and the XML interfaces. End-to-end testing soon revealed a lot of bugs in the scheduling engine itself. We had some complex business rules for dealing with things like multi-person jobs, multiple jobs which had to be performed in sequence, or jobs where the call centre wanted to assign a particular operative rather than let the scheduling engine choose. With a near-infinite number of potential combinations of conditions, I performed extensive exploratory testing on increasingly arcane permutations. A straightforward example might be “Remove wasps nest” followed by “Carry out repairs to guttering“, which required two different trades but had to be performed in the right order. Some of the less straightforward scenarios resulted in the scheduler returning gnomic utterances such as “The optimal schedule does not include this call“, or, if you were really unlucky, “Object reference is not set to an instance of this object“. Unfortunately I could never find a test case that would reproduce that last one consistently.
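To give a feel for that request/response shape, here’s a rough sketch in Python. The element names and structure are invented for illustration; the engine’s real schema was considerably more involved:

```python
import xml.etree.ElementTree as ET

# Hypothetical shape of the XML exchange with a scheduling engine:
# we send a call message, and get back either an appointment or a
# rejection reason. All element and attribute names here are invented.

def build_call(call_id, trade, priority):
    """Build a scheduling request message for one reported job."""
    call = ET.Element("Call", id=call_id)
    ET.SubElement(call, "Trade").text = trade
    ET.SubElement(call, "Priority").text = priority
    return ET.tostring(call, encoding="unicode")

def read_response(xml_text):
    """Return the appointment slot, or the rejection reason if none was made."""
    root = ET.fromstring(xml_text)
    appointment = root.find("Appointment")
    if appointment is not None:
        return ("appointed", appointment.get("slot"))
    return ("rejected", root.findtext("Reason"))
```

The gnomic error messages quoted above would come back in the rejection branch, which is exactly why a tester ends up cataloguing which input permutations trigger which responses.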

The most significant bug occurred when the schedule filled up. The dynamic nature of the schedule meant that sometimes appointments did get missed. For example, if a previous appointment ended up taking far longer than anticipated, and the system couldn’t find another operative available in that slot because everyone else was busy, the appointment would fall out of the schedule and need to be re-scheduled. The same would happen if an operative went sick or crashed his van during the course of the day.

Unfortunately it did the same thing when I filled up the schedule and tried to schedule further appointments. Rather than reject them with “No operatives available with correct skill” as it should have done, it decided the plan would be more ‘efficient’ if it appointed the new jobs and displaced existing ones it had already committed to. Not a situation which would lead to happy tenants, who would not be impressed if they’d taken an afternoon off work for a plumber who never turned up.

Given the severity, that bug was reported to the third-party supplier and fixed before the system went live on any customer site. But that particular scenario ended up in the suite of regression tests run each time we received a new release of the third-party scheduling engine.

Posted in Testing & Software | 1 Comment

What are the parallels between music criticism and software testing?

Regular readers of this blog don’t really need to be told that I’m a very keen music fan and amateur rock critic. Writing about a small club-based scene, I’ve come to know quite a few band members over the years. I’ve even had people suggest I should quit working in the IT industry and become a full-time music writer. But while being on the fringes of the music scene can be a great experience, I’m not convinced I want to jump ship and join the circus.

But I can see a lot of parallels between music criticism and my professional career as a software tester.

Not that I’m suggesting testing and reviewing are exactly the same. To start with, music is inherently more subjective than software. But just as it can be a judgement call whether or not a piece of software is fit for purpose, it’s never completely subjective whether a record or performance is good, bad or indifferent. There are those who claim all opinions are equally valid when it comes to reviews, and that there is no such thing as an objectively good or bad record. If you believe that, you clearly haven’t heard Lou Reed’s appalling collaboration with Metallica. It seems to me that both testing and reviewing are things many people can attempt, and just about anyone can do badly, but which take skill and experience to do well. You only have to look at the reviews on websites where anyone can post without moderation to realise there are bad reviewers out there, just as there are bad testers.

To review a record or concert requires both an understanding of what the artist is trying to achieve, and an honest assessment of how well they’ve succeeded in achieving it. That in turn requires the equivalent of domain knowledge. Just as a lot of indie-pop reviewers come horribly unstuck attempting to review progressive rock or metal releases, ask me to review a dubstep or free-jazz record and I wouldn’t know where to start. But just as testers from different backgrounds will approach things from different angles and uncover different bugs, a reviewer with deep specialist knowledge of a specific genre will have a quite different perspective from one whose taste is far broader. Something that’s meant to have crossover appeal benefits from both viewpoints.

Then there is the issue of speaking truth to power, which can require both courage and diplomacy. Egos even bigger than those of developers go with the territory. When an artist has poured their heart and soul into making a record, they don’t always appreciate being told how their work could have been better. Much like the way developers don’t always appreciate being told the code they’ve slaved over is riddled with bugs that they really ought to have picked up in their own unit testing. And if you’ve ever had the misfortune to work in a dysfunctionally political environment where project managers surround themselves with yes-men and tend to shoot the messenger whenever those messengers are bearers of bad news, then you’ll recognise those over-zealous fans who sometimes try to vilify anyone who attempts constructive criticism.

It’s true that there are a lot of rock critics out there who exhibit exactly the same sort of adversarial behaviour that gives some testers a bad name. Yes, writing and reading excoriating reviews of mediocre records can occasionally be cathartic, but informed and honest constructive criticism is far more valuable in the long run. Just as software testing is a vital part of making sure software is fit for purpose, constructive criticism has a role in making music better.

Perhaps it’s my tester’s ability to see patterns, but what I hope the above goes to show is that sometimes what you do in your “day job” and an apparently unrelated activity you do in your spare time can have more in common than you think. Certainly there are transferable skills, especially those softer ones which are much in demand.

Posted in Testing & Software | Comments Off

Facebook’s New Look – A Tester’s Perspective

If you’re on any social network you’ll know that Facebook rolled out some major changes to their system over the last couple of days. To say it’s gone down like a lead balloon would be an understatement. Facebook users have always been a bit small-c conservative and don’t like change, but the rage I’m seeing this time round is a lot more intense.

Having a background in software testing gives me some insight into how and why they’ve annoyed so many people so badly this time.

What appears to have happened is that they’ve launched some potentially powerful new features without really bothering to explain to anyone how they work or how they should be used. Smart Lists are a good example: they’re similar to the circles in Google+, and almost certainly implemented as a response to them. But again, they haven’t made the implications of adding people to certain types of list clear. This probably explains why we’ve seen more than one rock band adding all their fans as employees. Once could be a mistake; twice looks like careless UI design.

As we’ve come to expect from Facebook by now, they’ve set the defaults for most things to values that aren’t the ones you’d have chosen. And it goes without saying that every new data-sharing feature is opt-out, with the relevant option hidden in a rusty filing cabinet marked “Beware of the leopard”. Likewise, I don’t think they bothered to test it properly before they rolled the changes out. Although in this case it’s not so much that the actual software is buggy, but that the design is not as intuitive to ordinary people as their designers seem to think it is.

Facebook’s problem is that a large proportion of its user base isn’t made up of tech-savvy computer nerds, but people like your mum. They’re not the least bit interested in performing unpaid exploratory testing of new and occasionally half-baked software products. They just want to share pictures of grandchildren.

Posted in Social Media, Testing & Software | Tagged | 2 Comments

Testing an Internet Radio Station

Over the past few days I’ve been helping to test some themed internet radio stations. The focus was more on the overall customer experience than on bug-hunting. But I’m a software testing professional as well as a music fan, so that’s bound to have an effect on how I approach things.

Being a huge progressive rock fan, I was naturally drawn towards their Prog channel. I listened to it for several hours while doing other work on the PC. Most of the music clearly fell into that genre, even the artists I’d never heard of, and it was a good mix of classic 70s music and more contemporary artists. So far, so good, and the feedback I gave was positive.

But the odd track sounded completely out of place, dance-pop acts or singer-songwriters whose music fell well outside even the broadest possible definition of progressive rock. On further investigation, all of them turned out to be obscure European artists who shared names with better-known prog-rock acts whose own music wasn’t in their library. It’s the same artist disambiguation issue that plagues last.fm once you get beyond household names signed to major labels.
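The disambiguation problem boils down to keying a music library on artist name alone. A toy sketch, with entirely invented library data, shows how a name collision pulls the wrong tracks onto a channel, and how selecting on something richer than the bare name avoids it:

```python
# Toy illustration of the artist name-collision problem: two different
# acts share a name, but only one belongs on a prog channel. All data
# here is invented for the example.

library = [
    {"artist": "Focus", "genre": "prog", "track": "Hocus Pocus"},
    {"artist": "Focus", "genre": "dance-pop", "track": "Some Club Hit"},
]

def picks_by_name(library, genre):
    """The buggy behaviour: select tracks by artist name alone."""
    names = {t["artist"] for t in library if t["genre"] == genre}
    return [t for t in library if t["artist"] in names]

def picks_by_genre(library, genre):
    """Select on the track's own genre tag, so namesakes stay apart."""
    return [t for t in library if t["genre"] == genre]
```

In practice the robust fix is a unique artist identifier rather than a genre tag, but the principle is the same: the name on its own is not a key.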

It’s nice to be able to combine skills learned as a software tester with knowledge acquired as a music fan.

Posted in Testing & Software | Tagged , | Comments Off