The Law of Diminishing ROI

Some of you might be familiar with a principle of economics called “The Law of Diminishing Returns.” Basically, it says that past some point, additional effort no longer yields enough benefit or profit to justify it.

Granted, to some organizations, any QA is diminishing returns. Fortunately, if you have a job in QA, you don’t work for an organization that believes that. However, at some point, your organization decides it has had enough QA. As far as testing goes, I argue that you cannot really guess when that point arrives. You might run hundreds of test cases through a third time, perhaps using automation for the routine stuff, and uncover no new defects. But a show-stopping defect might lie outside whatever tests you have mapped out and tried so far. With another week, you might have thought up some new wrinkle in the workflow to try, or the defect might only have surfaced after daylight saving time started the week after you stopped testing, or under some other environmental condition.

So I won’t concede that there’s a point of diminishing returns at the end of the test cycle. However, I do think the law applies at the beginning of the cycle when it comes to automated testing.

I say this after reading the following:

In my last blog I shared the “World Quality Report” from Capgemini. One of the QA challenges that the report cited was a slow adoption of automation tools early in the cycle. Why? According to the report, it’s hard to realize the ROI.

The author goes on to give an example of how automation helped realize ROI over the long term, particularly once the product reached a level of stability and regression tests were run over and over again. That is the proper time to introduce automation, and it is not early in the cycle.
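To put some rough numbers on that long-term ROI, here is a minimal back-of-the-envelope sketch. Every figure in it is a hypothetical placeholder of my own, not anything from the report:

```python
# Back-of-the-envelope break-even point for automating a regression
# suite. All figures are hypothetical placeholders, not from the report.
hours_to_automate = 80       # one-time cost to build the scripts
maintenance_per_run = 0.5    # hours spent keeping scripts in sync, per run
manual_hours_per_run = 6     # hours to execute the suite by hand

# Each automated run saves the manual effort minus the upkeep.
savings_per_run = manual_hours_per_run - maintenance_per_run

break_even_runs = hours_to_automate / savings_per_run
print(f"Automation pays for itself after about {break_even_runs:.0f} runs")
```

On a stable product where the same regression suite runs build after build, those runs accumulate quickly. On an early, churning build, maintenance_per_run climbs toward manual_hours_per_run and the break-even point recedes, which is the diminishing-returns problem in a nutshell.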

The early part of the software development cycle includes requirements gathering, discussions with users, some sort of mockups, and early, unstable builds that are not feature complete and whose GUIs will change based on feedback. In these early stages, it does not make sense to bring in an automation tester except to raise concerns and considerations about automated testing, such as how best to build the application with testable GUI practices and test harnesses in mind.

Starting to build automated scripts against mockups or against those unstable, buggy builds would cost more in script maintenance than it would return in benefits. And if you still need to select a tool, it’s probably too early to decide which tool(s) to use, since the GUI might go in a direction that makes a different tool (or tools) a better fit.
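To illustrate the maintenance trap, here is a minimal sketch of the kind of script you end up writing against an early build. It assumes Selenium WebDriver, and the URL and locator are hypothetical:

```python
# A UI script pinned to the layout of an early, unstable build.
# (Assumes Selenium WebDriver; the URL and locator are hypothetical.)
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/login")

# Brittle: this XPath encodes the exact structure of an early mockup.
# Add a fieldset, rename the button, or rework the table, and the
# locator stops matching; the script is back in maintenance.
driver.find_element(
    By.XPATH, "/html/body/div[2]/form/table/tbody/tr[3]/td/input"
).click()

driver.quit()
```

Every GUI revision during those early cycles means another pass through locators like that one.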

The law of diminishing returns definitely applies to automated testing early in the SDLC.

On the other hand, if your product/project is going to run long term with numerous build cycles, it does make sense to automate. On the back end.

Automated testing, really, is a misnomer. The testing is not automated. It requires someone to build it and to make sure that the scripts remain synched to the GUI. If the GUI or the user workflow changes, a tester must expend effort not on testing but on revising the automated scripts, which might run only once for a single build cycle before requiring further revision.
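That maintenance burden can be contained somewhat, though never eliminated. One common approach, sketched below in a page-object style, is to keep all GUI knowledge in one place so a GUI change means revising a few locators rather than every script; the class and locator names here are hypothetical:

```python
# Keeping GUI knowledge in one place so that a GUI change means
# revising a few locators, not every script. (Page-object style;
# the class and locator names are hypothetical.)
from selenium.webdriver.common.by import By

class LoginPage:
    # When the GUI changes, only these locators need revising.
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "login-button")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

Even then, someone still has to notice the change and make the revision. The scripts run themselves; they do not maintain themselves.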

It’s computer-run scripted testing. If we just called it that, people would understand it better and expect less magic from it.
