Sissy Load Tests

In this week’s SD Times, Larry O’Brien identifies a problem that load testing didn’t uncover:

Perhaps the only thing worse than a slow uptake of your application is a smash hit. Users have a way of outfoxing everything, including load tests, and the imperative to respond to existing customers can absorb all the working hours of a team that is scheduled to move on to the next version. Worse, when a product is exposed to an order of magnitude more users than planned and when the product is used more intensely than anticipated, the defect list grows rapidly, potentially panicking the team into treating the symptoms, not the causes. The resulting chaos can easily derail a team, especially one new to agile processes, where “the customer is always right” and being responsive were the values that led to the success in the first place.

Not long ago, I witnessed this very problem. I was engaged to work on the requirements and architecture of The Next Phase, which didn’t seem to have a lot to do with The Current Deployment, whose two big features were a comprehensive audit trail for management and a Web-based “dashboard” that gave users a much better view of their own context. Following the principles of “You Ain’t Gonna’ Need It” and “Don’t Repeat Yourself,” the dashboard and the auditing facilities used the same messages to request information; the dashboard, of course, stripped out the huge blobs of auditing data and presented a much-compressed summary. What was not anticipated (note the use of the passive tense to avoid blame) was that the users found the historical perspective of the dashboard very valuable and configured their dashboards to retrieve not just a day or two of history, but often everything they did in the past month.

Listen, there’s “load testing” and then there’s LOAD TESTING, and I would wager this organization did some “load testing.”

“load testing” occurs when someone, usually in “QA,” runs a set of scripts in a nice, Bob Ross way of having happy little virtual users do happy little things, and then everyone signs off. The friendly scripts follow the anticipated user workflow and ramp up nice and slowly: here’s five users, and then ten minutes later, here’s five more. Because that mirrors real-world workflow.
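
The post doesn’t name a tool, so here is a minimal, hypothetical sketch of that kind of friendly, slow-ramp run in Python, using only the standard library plus the requests package; the endpoint URL, wave size, and think times are invented for illustration.

    import threading
    import time

    import requests  # third-party HTTP client, assumed available

    TARGET = "http://localhost:8080/dashboard"   # hypothetical endpoint
    USERS_PER_WAVE = 5                           # "here's five users..."
    WAVE_INTERVAL = 600                          # "...ten minutes later, five more"
    WAVES = 4

    def happy_little_user():
        """One friendly virtual user following the anticipated workflow."""
        for _ in range(20):
            requests.get(TARGET, timeout=30)
            time.sleep(5)  # polite think time between requests

    threads = []
    for wave in range(WAVES):
        for _ in range(USERS_PER_WAVE):
            t = threading.Thread(target=happy_little_user)
            t.start()
            threads.append(t)
        print(f"wave {wave + 1}: {USERS_PER_WAVE} more happy little users")
        time.sleep(WAVE_INTERVAL)

    for t in threads:
        t.join()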

LOAD TESTING, however, is a nice, Tom Zatar Kay way of throwing everything at it at once, having the virtual users do resource-sucking tasks and the most processor-intensive things possible. Then you launch all the virtual users at once and see how the server handles the spike. In most cases, it will bomb out, because it wasn’t designed to handle that sort of load. Suddenly, passive voice will be deployed in lessons-learned documents.
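
A minimal sketch of that “everything at once” spike, under the same assumptions as above (hypothetical endpoint, arbitrary user count); the 30-day history query is made up to echo the dashboard anecdote.

    import concurrent.futures

    import requests  # third-party HTTP client, assumed available

    # Hypothetical worst case: a dashboard request pulling a full month of history.
    TARGET = "http://localhost:8080/dashboard?history=30d"
    TOTAL_USERS = 500  # every virtual user, launched at once

    def greedy_user(_):
        """One virtual user firing the most expensive request it can."""
        try:
            return requests.get(TARGET, timeout=60).status_code
        except requests.RequestException:
            return None  # connection refused, timed out, dropped, etc.

    # No ramp-up: everyone hits the server at the same moment.
    with concurrent.futures.ThreadPoolExecutor(max_workers=TOTAL_USERS) as pool:
        results = list(pool.map(greedy_user, range(TOTAL_USERS)))

    failures = sum(1 for code in results if code is None or code >= 500)
    print(f"{TOTAL_USERS} simultaneous users, {failures} failures")

If the failure count is ugly, better to find out here than from the users.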

The difference is between using load testing to benchmark software and using it to break software. When you first roll it out, you want to see how it breaks and make sure it doesn’t break early and often.

Once it’s in production, you can run a series of basic, friendly scripts to see how it behaves at certain load levels. That way, you can retest over time and after modification to see how it’s running with the changes or optimizations.
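
For that kind of ongoing benchmarking, one approach (again just a sketch, not anything the post prescribes) is to replay the same friendly script after each change and keep the latency percentiles, so the numbers are comparable from release to release.

    import statistics
    import time

    import requests  # third-party HTTP client, assumed available

    TARGET = "http://localhost:8080/dashboard"  # hypothetical endpoint
    REQUESTS_PER_RUN = 200                      # identical script every run

    def benchmark_run():
        """Replay one friendly scripted workflow and record per-request latency."""
        latencies = []
        for _ in range(REQUESTS_PER_RUN):
            start = time.monotonic()
            requests.get(TARGET, timeout=30)
            latencies.append(time.monotonic() - start)
        return sorted(latencies)

    latencies = benchmark_run()
    print(f"median:          {statistics.median(latencies):.3f}s")
    print(f"95th percentile: {latencies[int(0.95 * len(latencies))]:.3f}s")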

But if you don’t blast the hell out of it, you’re letting the users do it for you. And then the passive voice will be used.

One Response to “Sissy Load Tests”

  1. angelweave Says:

    I laughed when I read this article (well, it was sitting on our desk) at the part where he thought everything would be fine because the users wouldn’t heavily use the app. The word “fool” came to mind. Of course they’re going to USE the app. If it can be done, it will be, in my experience.

    hln

