Archive for the ‘Best practices’ Category

A Hint For Video Interviews and Sales Calls

Tuesday, January 10th, 2017 by The Director

Whenever I have to take a video call, I spend a couple minutes testing out the angle of the Web cam (and the sound quality of the microphone).

Not only does this ensure that your video call looks professional and impressive, but in my case, it can be the difference between frightening someone who doesn’t know me well and impressing them with my well-read nature.

That is, I lower the Web cam to hide the bladed weapon collection and focus on the bookshelves below it.

I also remove the room’s second chair and make sure there are no comic book boxes, vacuum cleaners, or piles of rubbish within view.

But, to be honest, it’s still a fictitious shot, because the books behind me are the books I have yet to read.

The JavaScript Warning By Which All Others Are Measured

Friday, May 13th, 2016 by The Director

If you have JavaScript blocked and go to DocuSign, instead of a little bit of red text above the form, you get a page with a message that tells you how to enable JavaScript in the browser you’re using:

Your Web site probably falls far, far short of this.

However, the page still has a common bug. Anyone care to tell me what?

Not Only Wireframes, But Yes, Wireframes

Thursday, February 18th, 2016 by The Director

You know, I like to get a look at any and all artifacts as soon as possible to see if I can spot any flaws as early as I can. This includes comps, prototypes, copy, and wireframes, where I hope to catch oversights before they get into the code.

But in addition to looking for oversights, I always want to review the documents qua documents, especially if your company is providing wireframes, comps, prototypes, copy, and so on to the client for review. It gives you a chance to catch mistakes, misspellings, improper branding, and inconsistencies before your client can look at them and think, “Ew, these guys can’t spell our name right on the wireframes. What would they do to our Web site?”

Yes, I did review RFP responses and proposals as well.

The Purple One links to this article entitled Wireframes – Should They Really Be Tested? And If So, How?

New trainees came on board, and we had a training class to teach software testing concepts. After seeing those enthusiastic faces with their almost blank-slate minds (professionally speaking), I decided to take a detour from my routine training.

After a brief introduction, instead of talking about software testing like I normally do, I threw a question at the fresh minds: ‘Can anyone explain to me what a wireframe is?’

The answer was a pause, and so we decided to discuss it. And that is how it started: wireframe/prototype testing.

This should provide a good argument and overview if you need one.

The Lair of the Monotor

Thursday, December 20th, 2012 by The Director

Marlena Compton posits If you do testing, you need more monitors.

Au contraire, I say, but I mispronounce it because I do not speak French.

I use a single monitor (across multiple machines, no less, through the magic of KVM).

Ms. Compton says:

Having more monitors leads to better testing because:

  • More supported browsers are open and easy to compare.
  • More sessions are open, so it is easier to see cause-and-effect problems.
  • I can have more than one or even two or three users signed in with different permission levels.
  • Even though there are still several browsers open, I can also have some terminals open for grepping through log files and taking notes or logging bugs.

Of all these things, the only time I’ve found multiple monitors to be a worthwhile solution is while running automated tests on a number of machines. Each machine had its own monitor (14″ CRTs back in the day), and the master control monitor (17″ CRT, probably) launched the scripts and displayed a tailing log while the scripts ran. In the automated environment, where you’re watching the scripts (and maybe the GUIs), this makes sense.

But in my world of manual testing, especially exploratory and ad hockish testing, one monitor is better. A big monitor, to be sure, so you can blow everything up really big, but one monitor just the same.

The reason: Focus.

Ms. Compton says:

In the world of web application testing, this is the difference between noticing something and having it obscured behind too many screens where you will never see it.

The principle extends across too many screens as well as to open windows that don’t have focus. It’s hard enough to catch one little bit of squirrelly behavior in one little spot in an application page or window when it happens right in front of you. If you’ve got to turn your head or rely on your peripheral vision to catch it, you won’t.

Dramatic re-creation

Personally, I focus all my attention on one browser/window at a time. If I could put a photography hood over my head, I would. Come to think of it, maybe I ought to get one. Because I’m zoned in on that thing I’m looking at or testing.

If I need multiple sessions open at once to have different users interacting, I’m still focusing on looking for bugs in one of them at one time. I can do that with multiple browsers open on one machine.

Compatibility testing? Let me tell you how I did it when I was a printer: I took the print sample I was supposed to match and I put the new print sample over it, and I held them together against the light. The Web testing equivalent is to maximize the browser windows, load the pages to compare, and use ALT+TAB to switch between them quickly. Misplaced items will jump around visibly.

So more monitors isn’t necessarily better, especially if your attention has a tendency to


I Build My Walls out of Metallica

Thursday, May 24th, 2012 by The Director

Scientists and interior designers are starting to think that short cubicle walls and open floor plans are neat places to visit, but you wouldn’t want to work there:

Cubicle culture is already something of a punch line — how many ways can we find to annoy one another all day? — but lately the complaints are being heard by the right people, including managers and social scientists. Companies are redesigning offices, piping in special background noise to improve the acoustics and bringing in engineers to solve volume issues. “Sound masking” has become a buzz phrase.

Scientists, for their part, are measuring the unhappiness and the lower productivity of distracted workers. After surveying 65,000 people over the past decade in North America, Europe, Africa and Australia, researchers at the University of California, Berkeley, report that more than half of office workers are dissatisfied with the level of “speech privacy,” making it the leading complaint in offices everywhere.

“In general, people do not like the acoustics in open offices,” said John Goins, the leader of the survey conducted by Berkeley’s Center for the Built Environment. “The noisemakers aren’t so bothered by the lack of privacy, but most people are not happy, and designers are finally starting to pay attention to the problem.”

I prefer a QA lab with walls because it’s so much harder to get heavy metal to bounce off of the walls without the, you know, walls.

That, and QA requires a lot of focus, and people popping by all the time or even the fleeting glimpse out of the corner of my eye can be enough to make me fear I’ve missed something.

Sadly, This Is Not A Standard Test

Wednesday, April 18th, 2012 by The Director

A Computerworld article asks, “Time to de-Flash your site?” A mobile user laments:

“When I am accessing a website that has Flash, I usually get a blank part of the screen, or a red box where the Flash element is,” Cunha says. “Or I may just get a static image.” If the organization behind that website hasn’t developed a scaled-down mobile-friendly alternative, Cunha says he usually avoids the site totally.

Back when I was at the interactive agency, we always tested the site without Flash and provided a static image when the browser didn’t have Flash installed.

I’m sure that’s all done away with now, and most Web shops thought (if they thought at all) that Flash penetration was high enough to make that unnecessary.

And then, a couple years later, popular tablets and smartphones did not support Flash, and the lamentations began.

Here’s a bit of advice, gratis: If you’re building or testing Web sites, always check to see what happens if dependent technologies aren’t there, and handle their absence gracefully. Sure, the technologies might have a lot of market penetration now, but what’s going to happen in a couple years?

Unless you’re a fan of clients clamoring for free fixes to their suddenly broken sites, just do it. You’ll make me quieter about it, anyway.
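Part of that check can even be automated cheaply. Below is a minimal sketch, using only Python’s standard-library html.parser, of one such test: does a page that uses <script> offer any <noscript> fallback at all? The class name and the complaint message are my own illustration, not an established tool.

```python
from html.parser import HTMLParser

class FallbackChecker(HTMLParser):
    """Count <script> tags and the <noscript> fallbacks that should accompany them."""
    def __init__(self):
        super().__init__()
        self.scripts = 0
        self.noscripts = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.scripts += 1
        elif tag == "noscript":
            self.noscripts += 1

def missing_fallbacks(html: str) -> list:
    """Return a list of complaints; an empty list means the page degrades gracefully."""
    checker = FallbackChecker()
    checker.feed(html)
    if checker.scripts and not checker.noscripts:
        return ["page uses <script> but offers no <noscript> content"]
    return []
```

The same shape of test extends to plugin fallbacks: a static image inside the markup that requires Flash, for instance.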

Leap Year Reminder

Tuesday, February 7th, 2012 by The Director

I draw your attention to this post from January 2009 about another type of test case to consider during leap year.

Not only do you have to accommodate the date of February 29, 2012, but you also need to check any calculations that count days.
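A sketch of that test case in Python, using the standard library’s datetime; the specific dates are arbitrary examples:

```python
from datetime import date

def days_between(start: date, end: date) -> int:
    """Count days the way the application under test should: with real date math."""
    return (end - start).days

# February has 29 days in a leap year, 28 otherwise
assert days_between(date(2012, 2, 1), date(2012, 3, 1)) == 29
assert days_between(date(2011, 2, 1), date(2011, 3, 1)) == 28

# Any "a year is 365 days" shortcut breaks across February 29
assert days_between(date(2012, 1, 1), date(2013, 1, 1)) == 366
```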

Are You the Eight Million Dollar Project or the Twenty Dollar Fan?

Wednesday, December 14th, 2011 by The Director

I received this joke in an email from a client:

Cost Effective Engineering Solution

A toothpaste factory had a problem: they sometimes shipped empty boxes, without the tube inside. This was due to the way the production line was set up, and people with experience in designing production lines will tell you how difficult it is to have everything happen with timings so precise that every single unit coming out of it is perfect 100% of the time. Small variations in the environment (which can not be controlled in a cost-effective fashion) mean you must have quality assurance checks smartly distributed across the line so that customers all the way down to the supermarket don’t get angry and buy another product instead.

Understanding how important that was, the CEO of the toothpaste factory got the top people in the company together and they decided to start a new project, in which they would hire an external engineering company to solve their empty boxes problem, as their engineering department was already too stretched to take on any extra effort.

The project followed the usual process: budget and project sponsor allocated, RFP, third-parties selected, and six months (and $8 million) later they had a fantastic solution – on time, on budget, high quality and everyone in the project had a great time. They solved the problem by using high-tech precision scales that would sound a bell and flash lights whenever a toothpaste box would weigh less than it should. The line would stop, and someone had to walk over and yank the defective box out of it, pressing another button when done to re-start the line.

A while later, the CEO decides to have a look at the ROI of the project: amazing results! No empty boxes ever shipped out of the factory after the scales were put in place. Very few customer complaints, and they were gaining market share. “That’s some money well spent!” he says, before looking closely at the other statistics in the report.

It turns out, the number of defects picked up by the scales was 0 after three weeks of production use. It should have been picking up at least a dozen a day, so maybe there was something wrong with the report. He filed a bug against it, and after some investigation, the engineers come back saying the report was actually correct. The scales really weren’t picking up any defects, because all boxes that got to that point in the conveyor belt were good.

Puzzled, the CEO travels down to the factory, and walks up to the part of the line where the precision scales were installed. A few feet before the scale, there was a $20 desk fan, blowing the empty boxes off of the belt and into a trash bin.

“Oh, that,” says one of the workers. “One of the guys put it there ’cause he was tired of having to walk over every time the bell rang!”

I’ve received a version of that question in job interview/sales pitch situations. Well, not phrased exactly like that: the question is usually, “What is the first thing you’ll look at to improve our quality?” And the answer is always along the lines of, “It depends on what you’re doing now.”

The person who asks that question often wants a silver bullet answer, some glib response that encapsulates how to improve their quality and process with an elevator pitch. And, in many cases, they get an elevator pitch selling some particular process or methodology that might or might not deliver a significant improvement but will most certainly come at some cost.

You’ll get the biggest leaps in quality, productivity, and process by listening to the people who are doing what you do every day, and maybe you’ll be better off listening to them rather than bringing in outsiders who have a Procrustean process that your organization will fit one way or the other.

Which is just my way of covering for my fumbling answer to the question: some small thing will yield vast improvements, but I don’t know yet what that small thing is.

They Can Have Any Priority They Want As Long As It’s “Normal”

Thursday, September 15th, 2011 by The Director

Sitemeter’s support page has an incident report form with only a single priority level:

Priority Normal is all fouled up.

Why bother including it if there’s only one choice?

Either they grafted a third-party package onto the Web site and did not suppress the field, or they have a system beyond the customer-facing one with other priority levels but did not suppress the field on the customer-facing site. Either way, you can guess what I think they should have done.

Suppress a field where the user has no choice.

Bug Opens Doors In New Zealand

Friday, April 29th, 2011 by The Director

Not metaphorical doors. The real doors:

A computer glitch at a New Zealand supermarket led to its doors being opened despite being officially closed, allowing shoppers to walk away with free groceries, The (London) Times reports.

At 8am Friday, the New Zealand supermarket’s computerized system opened its doors and switched on its lights, ready for business as usual. The only problem was nobody had actually told the computer it was Good Friday, a day when supermarkets in New Zealand don’t open, and there was not a checkout person in sight.

That didn’t stop the locals in the North Island city of Hamilton, and soon the Pak ‘n Save aisles were as busy as any normal day, although shoppers were filling their carts and walking straight past the checkout to their cars.

To be honest, this sounds more like a configuration issue than an actual software bug. Presumably, the list of holidays and dates would be configurable in any case. However, we’re reading a story on an Australian Web site that recounts what was reported in a London newspaper, so everything, from the actual occurrence to the reasons behind it, is suspect.

However, it does lend itself to a lesson for QA: if your software or embedded systems are to be used around the world, how familiar are you with the processes and impacts in your target markets? You could do as Trisherino does and study a different country from a high level each week, but most importantly, you need to understand the practical considerations of your target markets, including character sets and calendars, to test effectively.

What Does Your Software Do On A Sunny Day?

Thursday, April 21st, 2011 by The Director

That is, what happens when the cloud blows away? Never happen? Well, it hasn’t so far today:

Cloud computing is all very well until someone trips over a wire and the whole thing goes dark.

Reddit, Foursquare and Quora were among the sites affected by Amazon Web Services suffering network latency and connectivity errors this morning, according to the company’s own status dashboard.

Amazon says performance issues affected instances of its Elastic Compute Cloud (EC2) service and its Relational Database Service, and it’s “continuing to work towards full resolution”. These are hosted in its North Virginia data centre.

I always include test cases that deal with instances where the server isn’t there, where the database isn’t there, and where other pieces of infrastructure are unavailable. What happens when your poor little client (or Web page in your browser) finds itself all alone?
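A hedged sketch of what “graceful” can look like from the client side, in Python with only the standard library; the URL and the fallback message are invented for illustration:

```python
import urllib.request

def fetch_with_fallback(url: str, timeout: float = 2.0) -> str:
    """Fetch a page, degrading gracefully when the server is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except OSError as exc:  # URLError, refused connections, timeouts
        reason = getattr(exc, "reason", exc)
        # The client should show something sensible, not crash or hang
        return f"Service unavailable: {reason}"

# Port 9 is almost never listening, so the connection is refused quickly
print(fetch_with_fallback("http://127.0.0.1:9/"))
```

The test case is the sad path itself: point the client at a server that isn’t there and see whether it behaves this politely.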

What We Have Here Is A Failure To Outputiate

Thursday, April 14th, 2011 by The Director

A receipt from a car wash that accepts credit cards shows a stunning amount of inaccurate data:

Receipt from the car wash, beep beep, hah!

The name, address, and the approval number are all obviously dummy data. Should your system in production be outputting this? Of course not. But do you let the users–in this case, an installer or an administrator of the kiosk–just use the dummy data?

You see this trap sometimes when applications put the labels for controls as text in the controls themselves, such as an edit box that says “First Name” until you type into it. Sometimes, you’ll find the application will check to make sure the edit box is not empty, but the application is perfectly happy with “First Name” in it. The application is happy, but is the client happy that 50% of his registrations come from First Name Last Name of Address City State 55555? I think not. Don’t let them do it. Even if they’re trusted computer professionals.
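One way to catch that trap in validation logic, sketched in Python; the placeholder list is a made-up example for this hypothetical form, not an exhaustive set:

```python
# Placeholder strings this hypothetical form uses as in-control labels
PLACEHOLDER_VALUES = {"first name", "last name", "address", "city", "state", "55555"}

def is_real_value(field_label: str, value: str) -> bool:
    """Reject empty input, input that merely echoes the label, and known placeholders."""
    cleaned = value.strip().lower()
    if not cleaned:
        return False
    if cleaned == field_label.strip().lower():
        return False
    return cleaned not in PLACEHOLDER_VALUES

assert not is_real_value("First Name", "First Name")  # label left in the box
assert not is_real_value("First Name", "   ")         # whitespace only
assert is_real_value("First Name", "Marlena")
```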

Secondly, this is another reminder to check all your application’s outputs, QA. I know, that means sometimes getting up from the faintly warm glow of your monitor and the seat that has molded itself nicely to your backside, but if your application prints anything, you’d better make sure it looks good on paper (and on A4 paper if you claim international support).

Character Sets Are What Your Application Does In The Dark

Tuesday, January 4th, 2011 by The Director

Karen Johnson has a nice post here about the most basic tests she does when confronted with an application that should handle multiple languages:

So when it was time to choose a handful of languages to test with, my reaction was to choose:

1. one or more Latin-based languages
2. one or more languages with a heavy use of diacriticals
3. a RTL [Right-to-left] language
4. a language that is more symbolic than character-based

A common problem in testing with these languages is the lack of a keyboard or other means of entering characters from different languages. Cut and paste can work if you’re careful.

You know, that’s a handy set of tests for any text-accepting control on any Web site, even if your application only expects and only accepts (allegedly) English.
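Those four categories turn naturally into a small table of test data. A sketch in Python; the sample strings are my own picks, not Karen Johnson’s:

```python
# One representative sample per category in the list above
LANGUAGE_SAMPLES = {
    "latin":        "Grüße aus München",  # Latin script
    "diacriticals": "tiếng Việt",         # heavy diacritical use
    "rtl":          "مرحبا",              # Arabic, right-to-left
    "symbolic":     "日本語テスト",        # more symbolic than character-based
}

def survives_round_trip(text: str, encoding: str) -> bool:
    """A lossy encode/decode path is a common way these inputs get mangled."""
    try:
        return text.encode(encoding).decode(encoding) == text
    except UnicodeEncodeError:
        return False

# UTF-8 should carry all four; Latin-1 cannot even represent the symbolic sample
for name, sample in LANGUAGE_SAMPLES.items():
    assert survives_round_trip(sample, "utf-8"), name
assert not survives_round_trip(LANGUAGE_SAMPLES["symbolic"], "latin-1")
```

Paste each sample into every text-accepting control and check what comes back out of the database, the confirmation e-mail, and the printed receipt.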

Some Symptoms of Crappy Surveys

Monday, January 3rd, 2011 by The Director

A user experience designer was dissatisfied with a survey that United Airlines made him take before giving him in-flight Internet:

Instead of a pricing and log-in page, I get a simple screen that says “Before you access the Internet, please take a few minutes to complete a short survey. Your responses will help us improve United in-flight Wi-Fi.”

There’s no option here to skip the survey. I must fill it out. I watched other passengers encounter this page and it’s there for everyone. I’m guessing it’ll be there for a while, so I’ll get to fill it out on every wi-fi flight I take until they stop the survey.

Of course, they want everyone’s opinion. However, do they want everyone’s opinion multiple times? How does that help them?

Given no choice, I started up the survey. That’s when it got really amusing.

You, dear QA, should keep up with good interface design procedures–or at least refresh your list of survey best practices–with this list. Because things that annoy users are things that should annoy you. And when you’re annoyed, you can spread the love more effectively than some poor sap at 32,000 feet.

Don’t Overlook Your Headings

Tuesday, December 28th, 2010 by The Director

As a reminder, when you’re reviewing a Web site (or anything, for that matter), don’t overlook your headings. It’s very easy to do so when you’re concentrating on the copy or on whether the Web page itself looks and works properly, but those poor little textual or image-based headings need some loving, as in QA abusive loving, too.

Don’t be like the people who assembled the JC Penney catalog this week:

The special transposed models also available

Remember to take just a couple seconds to make sure:

  • The words in the heading are spelled correctly.
  • The heading actually applies to the text.
  • The heading corresponds to any anchor tags associated with it.
  • The heading is in the proper font and size for headings (especially if it’s an image).
  • The heading’s structure is parallel with those of equal heading level.
  • The heading’s grammar is correct.
  • alt and title attributes for heading images match the image text.
  • Headings render in the same style across browsers.

They’re just one little aspect of each page, but you and a lot of people in your organization (and your clients and audience) might overlook them. Everyone else has an excuse to do so. You, QA, do not.
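A couple of the checklist items lend themselves to automation. Here is a sketch, using Python’s standard-library html.parser, that flags heading images missing alt text; the class and the sample markup are illustrative only:

```python
from html.parser import HTMLParser

HEADING_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

class HeadingAuditor(HTMLParser):
    """Record the src of any image inside a heading that lacks alt text."""
    def __init__(self):
        super().__init__()
        self._in_heading = False
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag in HEADING_TAGS:
            self._in_heading = True
        elif tag == "img" and self._in_heading:
            attributes = dict(attrs)
            if not attributes.get("alt"):
                self.issues.append(attributes.get("src", "unknown image"))

    def handle_endtag(self, tag):
        if tag in HEADING_TAGS:
            self._in_heading = False

auditor = HeadingAuditor()
auditor.feed('<h2><img src="sale.png"></h2><h2><img src="ok.png" alt="Big Sale"></h2>')
assert auditor.issues == ["sale.png"]
```

The spelling, branding, and parallel-structure checks still need your eyes, though.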

That’s Esoteric. And You Can Log It

Monday, December 20th, 2010 by The Director

When you’re proofreading technical copy, remember there should definitely be a space between the number and the measurement.

  • Wrong: 2TB disk. Right: 2 TB disk.
  • Wrong: 1.82GHz. Right: 1.82 GHz.

Apparently, that’s from the SI Brochure, which is the style guide for writing about measurements.
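That rule is easy to check, and even fix, mechanically. A sketch in Python; the unit list is a small illustrative subset, not the full SI vocabulary:

```python
import re

UNITS = r"(?:TB|GB|MB|kB|GHz|MHz|kHz)"
# A number glued directly to a unit abbreviation, e.g. "2TB" or "1.82GHz"
GLUED = re.compile(r"\b(\d+(?:\.\d+)?)(" + UNITS + r")\b")

def add_si_spacing(text: str) -> str:
    """Insert the space the SI Brochure requires between value and unit."""
    return GLUED.sub(r"\1 \2", text)

assert add_si_spacing("a 2TB disk at 1.82GHz") == "a 2 TB disk at 1.82 GHz"
assert add_si_spacing("already 2 TB") == "already 2 TB"
```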

(Seen on The Unofficial Apple Weblog.)

The Lesson Is Lost

Wednesday, September 29th, 2010 by The Director

I follow a number of developers on Twitter, and as the new Twitter has opened its raincoat and exposed itself to them, many of them have tweeted about the various problems with the new Web interface and how it doesn’t work for them.

Hey, count me in on their sentiments. However….

I expect that many of those complaining are doing so in between working on Web sites or applications that they’re developing according to their own large sets of preconceptions and desires, because they think their way is cool and the users will just have to be trained to do what the developers want to build.

How Much Do You Trust Your Third Party Partners Now?

Tuesday, September 21st, 2010 by The Director

Your organization probably trusts its third party integrated software partners as much as J.P. Morgan used to:

JPMorgan Chase is trying to move past three days of problems on its online banking site with an apology and an explanation that seems to put the cause on a third party.

The bank’s online site went offline Monday night and remained offline Tuesday. Service appeared restored by Wednesday, although there were some reports by Twitter users of problems.

The bank, in a statement posted online, said it was “sorry for the difficulties” that customers encountered, and said “we apologize for not communicating better with you during this issue.”

At first, Chase simply cited a “technical issue” for the problem. It has since provided a little more information.

The bank, the nation’s second largest, said in a separate statement that a “third party database company’s software caused a corruption of systems information disabling our ability to process customer log-ins to” It added that the problem “resulted in a long recovery process.”

Now, how can you try to keep this from happening to you?

  • Compel your vendors to tell you about their updates. Ideally, you would get a chance to test your software against their new versions before they promote them to production, but at the very least, they had better tell you when they plan to put things up so you can test immediately. Remember, your “trusted” partners are organizations filled with the same lying developer dogs as yours, but without the QA.
  • Don’t do business with companies that practice continuous deployment. Seriously, they can promote at will and at whim, so your mission-critical software can fail at any time, without any warning, and without any clue that it’s not your fault.
  • Run automated smoke tests against your production site as often as you can stand. Depending upon the nature of the application, this might only be daily, but the more frequently you can sanity check your production environment, the better. There’s nothing better than calling your head of development on Christmas Eve to tell him the site’s down before your users or clients even know.
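A minimal shape for such a smoke test, sketched in Python’s standard library; the endpoint handling is deliberately simple, and the opener parameter exists so the check can be exercised without a network:

```python
import urllib.request

def smoke_test(urls, opener=urllib.request.urlopen, timeout=5.0):
    """Hit each endpoint and report 'OK' or a failure reason; never raise."""
    results = {}
    for url in urls:
        try:
            with opener(url, timeout=timeout) as resp:
                status = getattr(resp, "status", None) or resp.getcode()
                results[url] = "OK" if status == 200 else f"FAIL: HTTP {status}"
        except OSError as exc:  # URLError, refused connections, timeouts
            results[url] = f"FAIL: {exc}"
    return results
```

Wire the FAIL entries to whatever wakes up your head of development on Christmas Eve.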

Remember, you have no trusted partners. You should trust them even less than you trust your own organization, if you can imagine that.

Insights into Market Penetration and Tipping Points

Friday, September 3rd, 2010 by The Director

The Five Key Myths About HTML5 contains a lot of insight into market penetration and tipping points regarding new technologies, including browser market share and Flash version penetration.

There’s a tipping point where a technology reaches enough users to be worthwhile for designers and developers to use; however, the thesis is that HTML5 isn’t there yet.

But you can apply the lessons from the article to other technologies your organization might want to use.

Spotting Security Vulnerabilities In Code

Wednesday, June 24th, 2009 by The Director

eWeek has a slideshow quiz for you to test how well you can spot security vulnerabilities in code.

It’s a bit technical for some QA people, but if you’re going to sit through a code review (I did.  Once.  And then code reviews were abandoned), these are the sorts of things you need to look for.  Because every crazy test you would perform on a text box, you should demand they perform on each and every variable passed into a method.  Werd.
