With Requirements, Without Requirements, It’s All The Same

I’ve encountered the question in interviews before: “How do you test without requirements?” The SD Times offers an article entitled “Testing Without Requirements—Impossible?”

As crazy as it sounds, there are actually teams of software testers that perform their job without the slightest clue of what the application under test is supposed to do. How is this possible?

Jumping Jack Sprat, how do you test with requirements? I’ve heard mysterious talk about these mythical beings upon some pantheon of BusinessAnalympus somewhere, where the gods of projects have come together and thought about how things should go, but I’ve never seen them myself.

Well, okay, I did once see a thick binder of what were allegedly requirements. A co-worker from a previous job at a financial services company with a robust product waved it around; it purportedly identified screens down to the field/control level and explicitly said what could and could not go into each control. But it might have actually been his history notes from high school for all I know.

No, in many of the projects I’ve worked on, the developers have started coding based on nothing more than some stakeholder saying, “Wouldn’t it be cool if…” and maybe a couple of inferences. Or perhaps someone has made the effort to put together specs, user flows, and detailed documentation about how the software is supposed to act, and all of that documentation lasts only until such time as the client takes a look at it and says, “Wouldn’t it be cool if…” or “That’s not what I wanted/signed off on….” At which point, it’s off to the miscommunication and misinference races.

Even if you’ve got requirements (as in the second case above), they’re going to be incomplete or completely inaccurate by the time the application is ready for testing. So you do the same basic things to an application:

  • You get a basic idea of what it’s supposed to do. Every application is a tool designed for a user to do something. So find out what that something is, and then find out if the application does it.
  • You get the basic idea of how the user achieves the goal. That is, the application has a set of controls and a set of pages or screens into which the user enters information for the computer to process. So you look at those steps, and say, “Hmmm.” Once you’ve sussed out the extent of the application, that is, the playing field upon which you can operate, you can go to work.
  • Do what the user isn’t supposed to do. Once you know what the user is supposed to do, you can subvert that and do what the user shouldn’t be able to do. The user shouldn’t be able to edit records he or she hasn’t created yet or delete records that he or she has already deleted. Everyone else is working to make sure that the application sort of does what the client demands; however, the absence of requirements also often means that no one has planned for the unexpected. And by unexpected, I mean “normal user negligence.” Sure, it’s a lot of infinity to try to cover, but with some experience, you’ll get a sense of what to try.
  • In the absence of standards, use common sense. If your organization does not have a standard set of limitations for data the user can enter, apply common sense. You’ll no doubt catch many of your developers unaware if you start trying to enter 30-digit zip codes, “I Can Has Cheezburger” strings for someone’s age, or gas station prices for currency values (2.999).
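To make those common-sense checks concrete, here’s a minimal sketch of the sort of negative tests the last two bullets describe. The `validate_field` function is a hypothetical stand-in for whatever validation layer the application under test exposes (the real one could be a form handler, an API endpoint, or a database constraint); the inputs are the ones named above.

```python
import re
from decimal import Decimal, InvalidOperation

def validate_field(name, value):
    """Hypothetical validation layer standing in for the application's own.

    Returns True if the value passes common-sense limits, False otherwise.
    """
    if name == "zip":
        # US zip codes: exactly 5 digits, optionally ZIP+4. A 30-digit
        # string should fail.
        return re.fullmatch(r"\d{5}(-\d{4})?", value) is not None
    if name == "age":
        # Ages are whole numbers in a plausible human range, not prose.
        return value.isdigit() and 0 <= int(value) <= 130
    if name == "price":
        # Currency values carry at most two decimal places, so a
        # gas-station 2.999 should fail.
        try:
            return Decimal(value) == Decimal(value).quantize(Decimal("0.01"))
        except InvalidOperation:
            return False
    return False

# Inputs a negligent user might plausibly enter:
assert not validate_field("zip", "9" * 30)                 # 30-digit zip code
assert not validate_field("age", "I Can Has Cheezburger")  # nonsense string for age
assert not validate_field("price", "2.999")                # gas station pricing

# And the happy path, so the tests prove something:
assert validate_field("zip", "63101")
assert validate_field("age", "42")
assert validate_field("price", "2.99")
```

The point isn’t this particular validator; it’s that each “user shouldn’t be able to” claim from the bullets above turns directly into an `assert not` line, requirements document or no.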

The buzzwords to use in your interview, gentle reader, are investigate the application goals and exploratory testing and consult stakeholders and boundary analysis and, if you’re feeling really buzzed, black box testing. Since these words mean something slightly different to everyone who hears them, you’ll get some point across.
