Archive for the ‘Automated testing’ Category

Debugging Automated Tests, Step One

Wednesday, July 24th, 2019 by The Director

Step one in troubleshooting any failing automated test should always be: look at the application and try to do what the automated test does.

I hate to admit how many times I’ve spent hours trying to debug an automated test only to realize at the end that the automated test was failing because the application had an error.

I guess I spent that much time on it because somehow I trust my own code less than code created by a software developer.

At any rate, Trish Khoo has written an automated test debugging cheat sheet that does not include my step one as step one, but it’s a handy bit of thinking to keep around.

Waiting for One of Two Things To Appear on a Page in Java Selenium

Tuesday, January 20th, 2015 by The Director

I’ve been working on a Java Selenium test automation project, teaching myself Java and Selenium WebDriver along the way.

I ran into a problem that I didn’t see solved elsewhere on the Internet, but my skill at searching StackOverflow is not that good.

The problem is conducting an operation that might yield one of two results, such as success or an error message. Say you’ve got a registration form or something that will succeed if you put the right data in it or fail if you put the wrong data in it. If the user successfully registers, a welcome page displays. If the user fails, an error message displays. And, given this is the Web, it might take some time for one or the other to display. And the implicit waits didn’t look like they’d handle branching logic.

So here’s what I did:


public int waitForOne(WebDriver driver, Logger log,
                      String lookFor1, String lookFor2){
  WebDriverWait wait = new WebDriverWait(driver, 1);

  for (int attempt = 0; attempt < 60; attempt++){
    try{
      wait.until(ExpectedConditions.elementToBeClickable(By.id(lookFor1)));
      return 1;
    }catch (Exception e){
      // first element not clickable yet; check the second
    }  // try 1
    try{
      wait.until(ExpectedConditions.elementToBeClickable(By.id(lookFor2)));
      return 2;
    }catch (Exception e){
      // second element not clickable yet either; go around again
    }  // try 2
  }  // for loop
  return 0;  // neither element showed up within the allotted attempts
}  // waitForOne

You could even wait for one element out of a longer list by passing in an array of strings and using a for-each loop to run through the list.

This sample looks for a Web element by its ID, but you could change it to use another By parameter, such as a CSS selector (cssSelector). Or, if you're feeling dangerous, you could pass in the By parameter as a string and parse it in the method to determine whether to use ID, CSS selector, or some mix thereof. But that's beyond the simplicity of this example.

Also note the for loop that limits the waiting to a total of sixty iterations, which in this case maxes out at 120 seconds (1 second per attempt, 2 items, a maximum of 60 passes). You could pass the max in as a parameter when calling this method if you want. That's especially important if you're using a list of possible elements to look for: if you're passing in five elements, suddenly you're at a maximum time of five minutes if it times out completely. You might not want your tests to wait that long, especially if you're running the check multiple times per test.
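
Here's a rough sketch of how those variations might fit together, assuming the usual Selenium and java.util.List imports. The method name waitForAny and its parameters are just placeholders of mine, not anything from the project above.

public int waitForAny(WebDriver driver, Logger log,
                      List<By> locators, int maxIterations){
  WebDriverWait wait = new WebDriverWait(driver, 1);

  for (int attempt = 0; attempt < maxIterations; attempt++){
    int which = 1;
    for (By locator : locators){
      try{
        wait.until(ExpectedConditions.elementToBeClickable(locator));
        return which;  // 1-based position of the locator that showed up
      }catch (Exception e){
        // this locator did not show up within a second; try the next one
      }  // try
      which = which + 1;
    }  // for-each locator
  }  // for loop
  return 0;  // nothing showed up before the iteration cap
}  // waitForAny

A call might look something like waitForAny(driver, log, Arrays.asList(By.id("welcome"), By.cssSelector(".error-message")), 60), with made-up locators, of course.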

I'm sure there are more elegant solutions for this. Let's hear them. Because, frankly, I'm not very good at searching StackOverflow, and I'd prefer if you'd just correct my foolishness here in the comments.
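
For the record, one tidier-looking possibility, assuming a Selenium release that includes ExpectedConditions.or() (which may be newer than what I was working with here), is to hand the wait both conditions at once and then check which element actually showed up:

public int waitForEither(WebDriver driver, String lookFor1, String lookFor2){
  // A single wait that passes when either condition is met; it throws a
  // TimeoutException if neither appears within the timeout.
  WebDriverWait wait = new WebDriverWait(driver, 120);
  wait.until(ExpectedConditions.or(
      ExpectedConditions.elementToBeClickable(By.id(lookFor1)),
      ExpectedConditions.elementToBeClickable(By.id(lookFor2))));

  // or() only reports that one of the conditions matched, so check which one.
  return driver.findElements(By.id(lookFor1)).isEmpty() ? 2 : 1;
}  // waitForEither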

Where I Use Loops In Automated Tests

Wednesday, September 3rd, 2014 by The Director

Jim Holmes threw down the gauntlet on Twitter:

However, “don’t” is such a challenge to QA.

I use loops in my automated tests for the following things:

When running the same sort of test with different data.

When I want to test the same operation with different sets of data, I use a loop to jam each set of strings into the same form.

incrementor = 1
until(incrementor == number_of_rows)
  item = util.get_hash_from_spreadsheet(spreadsheet, incrementor)
  form.add(browser, item, log, filename)
  incrementor = incrementor + 1
end

When adding a large number of records for testing purposes.

I know, with just the right database management toolset, knowledge of the database, and properly tuned queries, I could go directly to the database to add a bunch of records when I want to see what happens when a user is working on a posting with a large number of comments attached to it. I want to see how the application reacts when I add another comment or when I delete the posting or a comment.

So I just run a script with it:

incrementor = 0
while(incrementor < 100)
  post.go_to_post(browser, post_id, log, filename)
  # capture the comment count before and after adding, handy for a sanity check
  countbefore = comment.get_comment_count(util, browser, post_id, urlstring)
  comment_text = "Comment left at " + (Time.now.strftime("%m%d%y%H%M%S"))
  comment.add_comment_on_want_detail(browser, post_id, comment_text, log, filename)
  countafter = comment.get_comment_count(util, browser, post_id, urlstring)
  incrementor = incrementor + 1
end

Sure, we can quibble about whether this is an automated test or just a script; however, the script is testing the site's ability to handle 100 comments in a row, ainna? So it's a test and not a set-up script.

When testing data loads.

I've got a client who runs different programs for banks, and individual programs are targeted to individual bank branches. This means the client runs a spreadsheet through a data load process, and I have to test to ensure that the bank branches are exposed.

So I take a spreadsheet from the data load and run it through the automated interface tester.

incrementor = 1
until(incrementor == number_of_rows)
  branch = util.get_branch_from_spreadsheet(spreadsheet, incrementor)
  form.search(browser, branch, log, filename)
  incrementor = incrementor + 1
end

So although you might have good reasons to not use loops in certain instances, loops do prove useful in automated tests.

Just remember, there's always an awful lot of sometimes in never.

I Know It’s Like Training Wheels, But….

Thursday, June 26th, 2014 by The Director

I know this is just a simple trick that marks me as a beginner, but I like to add a comment at the end of a block to indicate what block of code is ending.

Java:

          }  // if button displayed
      }catch (NoSuchElementException e){
          buttonState = "not present";
      }  // try/catch
		
      return buttonState;
   }  // check button
}  //class

Ruby:

    end # until
  end # wait for browser

end # class end

Sure, an IDE will show me, sometimes faintly, which block a bracket closes, but I prefer this clearer indication, which is easier to see.

I Said Something Clever, Once

Wednesday, March 19th, 2014 by The Director

Talking about automated testing once, I said:

Automated testing, really, is a misnomer. The testing is not automated. It requires someone to build it and to make sure that the scripts remain synched to a GUI.

Quite so, still.

This Is Too Simple To Be A Tip, Isn’t It?

Tuesday, February 11th, 2014 by The Director

In my automated test scripts, I always put one-line comments at the end of loops and methods that show what is closing. For example, in Ruby, it looks like this:

    puts("Waiting for delete confirmation")
    sleep(1)
  end #until
end   #click_delete_yes

It makes it just a little easier to figure out where I am in the code, and it also makes sure I close the loops and functions appropriately.

That’s so simple and obvious it can’t possibly be a helpful tip, can it?

Is Your Automated Testing A Mechanical Turk?

Wednesday, August 3rd, 2011 by The Director

The Mechanical Turk was a chess-playing device constructed in the 18th century that used a complex set of gears and pistons and whatnot to play chess. Well, it used those things to convey the impression that it was playing chess. Instead, a man sat hidden inside the machine to do the actual work:

The Turk, the Mechanical Turk or Automaton Chess Player was a fake chess-playing machine constructed in the late 18th century. From 1770 until its destruction by fire in 1854, it was exhibited by various owners as an automaton, though it was exposed in the early 1820s as an elaborate hoax.[1] Constructed and unveiled in 1770 by Wolfgang von Kempelen (1734–1804) to impress the Empress Maria Theresa, the mechanism appeared to be able to play a strong game of chess against a human opponent, as well as perform the knight’s tour, a puzzle that requires the player to move a knight to occupy every square of a chessboard exactly once.

The Turk was in fact a mechanical illusion that allowed a human chess master hiding inside to operate the machine. With a skilled operator, the Turk won most of the games played during its demonstrations around Europe and the Americas for nearly 84 years, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin. Although many had suspected the hidden human operator, the hoax was initially revealed only in the 1820s by the Londoner Robert Willis.[2] The operator(s) within the mechanism during Kempelen’s original tour remains a mystery. When the device was later purchased in 1804 and exhibited by Johann Nepomuk Mälzel, the chess masters who secretly operated it included Johann Allgaier, Boncourt, Aaron Alexandre, William Lewis, Jacques Mouret, and William Schlumberger.

If your automation effort requires the active, ongoing effort of someone to keep fragile scripts built with an imperfect test tool up to date with the application, you’re not building an automated test suite; you’re building The Turk.

The Law of Diminishing ROI

Tuesday, May 24th, 2011 by The Director

Some of you might be familiar with a particular facet of the laws of economics called “The Law of Diminishing Returns.” Basically, it says that past a certain point, additional effort is not going to provide enough benefit or profit to justify it.

Granted, to some organizations, any QA is diminishing returns. Fortunately, if you have a job in QA, you don’t work for an organization that believes that. However, at some point, your organization decides it has had enough QA. As far as testing goes, I argue that you cannot really guess when that point arrives. You might run hundreds of test cases for the third time, perhaps using automation for the routine stuff, and not uncover new defects. But a show-stopping defect might still lie outside whatever tests you have mapped out and tried so far; with another week, you might have thought up some new wrinkle in the workflow to try, or the defect might only arise after daylight saving time starts the week after you stopped testing, or from some other environmental concern.

So I won’t concede that there’s a diminishing return at the end of the test cycle. However, I do think the law applies to automated testing at the beginning of the cycle.

I say this after reading this:

In my last blog I shared the “World Quality Report” from Capgemini. One of the QA challenges that the report cited was a slow adoption of automation tools early in the cycle. Why? According to the report, it’s hard to realize the ROI.

The author goes on to give an example of how automation helped realize ROI over the long term, particularly once the product reached a level of stability and regression tests were run over and over again. That is the proper time to introduce automation; early in the cycle is not.

The early part of the software development cycle includes requirements gathering, discussions with users, some sort of mockups, and early, unstable builds that are not feature complete and whose GUIs will change based on feedback and whatnot. In these early stages, it does not make sense to add an automated tester except to bring up concerns and points about automated testing, including how best to build the application with testable GUI practices, test harnesses, and whatnot.

Starting to build automated scripts against mockups or against those unstable, buggy builds would cost more effort in script maintenance than you would receive in benefits. If you need to select a tool, it's probably too early to decide which tool (or tools) to use, since the GUI might go in a direction that would make a different tool a better fit.

The law of diminishing returns definitely applies to automated testing early in the SDLC.

On the other hand, if your product/project is going to run long term with numerous build cycles, it does make sense to automate. On the back end.

Automated testing, really, is a misnomer. The testing is not automated. It requires someone to build it and to make sure that the scripts remain synched to a GUI. If the GUI changes or the user workflow changes, a tester needs to not expend effort on testing, but on revising the automated scripts–which might run once for a single build cycle before requiring further revision.

It’s computer-run scripted testing. If we just called it that, people would understand it better and expect less magic from it.

How Much Do You Trust Your Third Party Partners Now?

Tuesday, September 21st, 2010 by The Director

Your organization probably trusts its third party integrated software partners as much as J.P. Morgan used to:

JPMorgan Chase is trying to move past three days of problems on its online banking site with an apology and an explanation that seems to put the cause on a third party.

The bank’s online site went offline Monday night and remained offline Tuesday. Service appeared restored by Wednesday, although there were some reports by Twitter users of problems.

The bank, in a statement posted online, said it was “sorry for the difficulties” that customers encountered, and said “we apologize for not communicating better with you during this issue.”

At first, Chase simply cited a “technical issue” for the problem. It has since provided a little more information.

The bank, the nation’s second largest, said in a separate statement that a “third party database company’s software caused a corruption of systems information disabling our ability to process customer log-ins to chase.com.” It added that the problem “resulted in a long recovery process.”

Now, how can you try to keep this from happening to you?

  • Compel your vendors to tell you about their updates. Ideally, you would get a chance to test your software against their new versions before they promote them to production, but at the very least, they had better tell you when they plan to put things up so you can test immediately. Remember, your “trusted” partners are organizations filled with the same lying developer dogs as yours, but without the QA.
  • Don’t do business with companies that practice continuous deployment. Seriously, they can promote at will and at whim, so your mission-critical software can fail at any time, without any warning, and without any clue that it’s not your fault.
  • Run automated smoke tests against your production site as often as you can stand. Depending upon the nature of the application, this might only be daily, but the more frequently you can sanity-check your production environment, the better (a rough sketch of such a check follows this list). There’s nothing better than calling your head of development on Christmas Eve to tell him the site’s down before your users or clients even know.
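
For the curious, here is a rough sketch of that kind of production smoke check in Java with Selenium and JUnit; the URL, the element ID, and the class name are placeholders for whatever proves your site is actually alive:

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ProductionSmokeTest {
  @Test
  public void loginPageIsUp(){
    WebDriver driver = new FirefoxDriver();
    try{
      driver.get("https://www.example.com/login");  // placeholder URL
      // Wait up to 30 seconds for something meaningful, not just any response
      WebDriverWait wait = new WebDriverWait(driver, 30);
      wait.until(ExpectedConditions.elementToBeClickable(By.id("login-button")));  // placeholder ID
      assertTrue("Production login page did not load", driver.getTitle().length() > 0);
    }finally{
      driver.quit();
    }  // try/finally
  }  // loginPageIsUp
}  // class

Hang it on whatever scheduler you like and point it at production as often as you can stand.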

Remember, you have no trusted partners. You should trust them even less than you trust your own organization, if you can imagine that.

How Can You Tell An Experienced Automated Tester?

Monday, September 6th, 2010 by The Director

How can you tell an experienced automated tester from either an automated testing software solution vendor or someone who’s heard a buzzword dog whistle and is salivating on cue at the chance to work somewhere that’s included Automated Testing on a job posting?

An experienced automated tester is going to spend more time trying to tell you what automated testing isn’t instead of what it is. Because you’ve probably got a pie-in-the-sky you’re slicing to share for dessert.

Cue Trisherino:

I think people have a tendency to greatly underestimate the difficulty of writing a good GUI-automation suite. It’s not as simple as record and playback, and it’s not like building regular software.

I’ve seen experienced developers underestimate it many times. Inevitably, they end up getting very frustrated and complaining about how rubbish the automation tools are. And yes, the poor quality of automation tools is definitely part of the problem. Developers are used to using very polished tools, lovingly crafted by developers, for developers. Testers are used to using either bloated overpriced commercial tools, condescendingly oversimplified so that “anyone” can use them, or well-meaning but under-maintained free tools, struggling to keep up with the latest technologies.

What she said.

You know why developers are so hot for automated testing? They’re used to thinking they can push software around. Unlike some QA people who refuse to be intimidated by their obvious GENIUSSSS!

The Myth of the Automatic Automated Benefit

Tuesday, March 31st, 2009 by The Director

Scary Tester does a good job of putting a positive spin on when it’s best to do automated testing:

Automated tests are suitable for the following purposes:
–    Regression testing for a stable system that will be run on a regular basis
–    Fast data creation in test systems where the database must be wiped on a regular basis

Automated tests are NOT suitable for the following purposes:
–    Testing new functionality – this should be done manually before automated tests are created
–    Regression testing systems that are expected to have significant user interface changes. Large changes to the user interface require a lot of maintenance for automated tests.

You know, testers make these arguments over and over again, but I’ve gone into a number of places to talk about starting QA efforts on major product lines or to work on smaller (160-hour) projects where the principal involved wants automated testing. Usually on an evolving product, and with only one QA person. Try as I might to dissuade them, they go out and find someone willing to bill them for less fruitful hours of QA work, because that’s what the client wants. And the client/employer gets it: an automated effort of some sort and a low defect count (because the QA person spent hours selecting/writing/maintaining automated scripts instead of testing).

But Scary Tester’s and my commentaries fall on sympathetic ears. Meanwhile, Baseline magazine will run a bunch of ads from software companies selling automated testing software alongside a splashy article about how automated tests can do the work of 20 monkey testers.

I think I’m repeating myself, aren’t I?

When Automation Goes To Hell….Plus, a Pipe Dream

Tuesday, January 29th, 2008 by The Director

In the January 2008 issue of Software Test and Performance (available as a PDF), the head of Parasoft, a QA software company, explains when automation efforts can go to pieces.


But I Like My Solutions Better

Tuesday, January 15th, 2008 by The Director

This month (pdf) in Software Test & Performance, editor Edward J. Correia again takes on automated software testing. The intro paragraph led me to believe he might have become a joyous skeptic, like us:

Why, for instance, do we build software to test other software? This question has never before occurred to me, nor does it parallel such mysteries as people who are financially wealthy but short on values. But it does bear some discussion.

Does he then contemplate the possibility that trusting software to test software is something like telling criminals to police themselves? Nah, he just marvels at the beauty of it. As he should, since the automated testing software companies are the ones buying the ads in his magazine.

However, we at QAHatesYou.com disagree with his conclusion:

Software is very good at automating things. So when automated testing is the need, why not use the best tool for the job? For the practice of automating software testing, the best tool happens to be more software. Sometimes the best tool is staring you right in the face.

Here at QAHatesYou.com, we have found in our experience that the following are sometimes better solutions, especially when tailored to limited budgets:

  • Zombies. All you need is a recurring maintenance budget, i.e., brains. You can certainly find some unused brains on your development team anyway. So raise some dead and show them which keys to push, and voilà! Automated software testing using the undead.
  • Steam piston driven software appliances. All you need is a machine shop, some wrenches, and boiling water to build complex steam-driven keyboard punchers. Mouse-handling and pointing-and-clicking are less accurate, so you’ll have to work around that. Also, remember to calibrate the finger-rods correctly, or they will punch right through the keyboard instead of efficiently delivering the keyclick you want.
  • Monkeys. Just kidding. We use all our monkeys for new functionality testing.

Automated software testing is really only possible through the use of software, which comes with its own hazards that I’ll go into some other time.

