Archive for the ‘Philosophy’ Category

Software Workarounds Cause Problems With Real Users

Monday, February 25th, 2008 by The Director

A recent article shows how the workarounds put in place in software impede people. The article, entitled “Apostrophes in names stir lot o’ trouble”, alludes to a number of characters that software designers/developers mishandle:

It can stop you from voting, destroy your dental appointments, make it difficult to rent a car or book a flight, even interfere with your college exams. More than 50 years into the Information Age, computers are still getting confused by the apostrophe. It’s a problem familiar to O’Connors, D’Angelos, N’Dours and D’Artagnans across America.

To avoid SQL injection attacks, which can use the apostrophe, developers often just prevent users from entering apostrophes into the edit boxes on Web forms.

That’s not the best solution, as the people in the article attest. You shouldn’t let your developers get away with it.
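The right fix is to pass user input to the database as a bound parameter and let the driver handle the escaping, so the O’Connors of the world can keep their names. A minimal sketch using Python’s built-in sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

def find_customer(conn, last_name):
    """Look up customers by last name using a bound parameter.

    The driver escapes the value itself, so O'Connor can be stored
    and queried safely -- no need to ban the apostrophe at the form.
    """
    cur = conn.execute(
        "SELECT last_name FROM customers WHERE last_name = ?",
        (last_name,),  # bound parameter, never string concatenation
    )
    return [row[0] for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (last_name TEXT)")
conn.execute("INSERT INTO customers VALUES (?)", ("O'Connor",))
print(find_customer(conn, "O'Connor"))  # prints ["O'Connor"]
```

The same placeholder mechanism exists in every mainstream database driver; if your developers claim they must strip apostrophes, they are concatenating strings into SQL, and that is the actual defect.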

Attack the Man Behind the Curtain

Thursday, February 14th, 2008 by The Director

Ah. A Web application has stood up to your rigorous testing in the original test environment. Hey, in a strange quirk of fate, you’re living in that fantasyland where the application is then deployed to a staging server that resembles the structure of a production server so you can run through it again on a final build. The application passes with flying colors, which means it hasn’t bled to death in stage from your repeated thrusts. No, some project manager or customer lover comes in and says, “Ship it.” One of your technical co-workers deploys the application to production sometime in the middle of a night on a weekend or, more likely, 20 minutes before the client expects it.

You, tester, have just one run through it and MillerCoors time, right? Hold on, little pardner, not so fast. Particularly if your application’s production environment uses a load balancer.


When Automation Goes To Hell….Plus, a Pipe Dream

Tuesday, January 29th, 2008 by The Director

In the January 2008 issue of Software Test and Performance (available as a PDF), the head of Parasoft, a QA software company, explains when automation efforts can go to pieces.


Tips for Using Automated Link Checking Software

Tuesday, January 22nd, 2008 by The Director

As you expect, gentle reader, even when it comes to checking the links on Web sites, I prefer manual testing, particularly at the onset of a Web site development project. That is, I do want to personally, with my own index finger, click every single link on every single page, including that repeated navigational menu bar that would never, ever change across the pages (the developers and designers say) and that doesn’t tend to change except when it catastrophically fails, for no discernible reason, on a single page.

That’s not to say that automated link checking doesn’t have its place, because it does. The remainder of this piece talks about its place.
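When automation does have its place, the core of a link checker needs little more than the standard library. A sketch of the link-gathering half (pair it with urllib.request to fetch each URL and check the status code):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

page = '<p><a href="/about">About</a> <a href="https://example.com">Ex</a></p>'
print(extract_links(page))  # ['/about', 'https://example.com']
```

Note what this can’t do: it follows the markup, not the rendered page, so links built by JavaScript, and anything that only breaks when a human looks at it, still need that index finger.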



Wherein The Director Channels Benjamin Franklin

Tuesday, January 8th, 2008 by The Director

“Those who would give up Essential Quality to earn a little Temporary Profit, will see neither Quality nor Profit.”

Feel free to use that in your next status meeting, when someone is explaining why the team shouldn’t fix a large number of defects.

Sometimes, Automated Testing Is Folly

Monday, January 7th, 2008 by The Director

In the message from the editor in the November 2006 (pdf) issue of Software Test & Performance, Edward J. Correia expresses a basic belief in the magical potency of automated testing:

Repetitive tasks are bad enough—having to perform them repetitively is insane. If something can be automated, it should be. Even if it takes 10 times longer and costs a hundred times more than the original task itself, it pays in the long run to automate your tests.

I cannot tell you how many times I’ve gone into interviews and sales calls for projects where the other person across the table blurts out the benefits of automation and how the company wants to use automation to build a complex suite of automated tests to ensure 100% code coverage to run with nightly builds to ensure project success.

But the automated test evangelists, speaking in their tongues, are wrong. Sometimes automated testing is a bane.

Testing With Real Data

Friday, December 21st, 2007 by The Director

An article at Dark Reading explains Real Data in App Testing Poses Real Risks:

If you use real, live customer data in your testing and development of applications, you may want to think twice about the risks of exposing that data.

Organizations that use live data in their testing do so basically because it makes the testing more real-world and better puts the app through its paces. Trouble is, it also can expose sensitive data to engineering staff who normally wouldn’t have access to that data, as well as to consultants and other outside contractors working with your organization on the testing process.

But you don’t have to use the real thing in app testing and development: “It needs to be real enough, but it’s better if it’s not people’s confidential information,” says Gary McGraw, CTO of Cigital.

Still, it’s common practice among many organizations today. According to a new study from the Ponemon Institute, which was commissioned by Compuware, 69 percent of the over 800 IT professionals surveyed said they use live data for testing their applications, and 62 percent say they do so in their software development. Over 50 percent outsource their app testing, and of that group, 49 percent of them share live data with the outsourcing organization.

The article conflates using real data with using live data, but they are really two different things, each of which comes with its own risks.
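One way to get data that is, in McGraw’s phrase, “real enough” without handing customer records to every contractor: derive masked records from production rows, swapping identifying fields for deterministic stand-ins. A sketch (the field names and salting scheme are illustrative, not a vetted anonymization method):

```python
import hashlib

def mask_record(record, secret="test-env-salt"):
    """Replace identifying fields with deterministic stand-ins so the
    shape and distribution of the data survive, but the people don't."""
    digest = hashlib.sha256((secret + record["email"]).encode()).hexdigest()[:8]
    return {
        "name": f"Customer-{digest}",
        "email": f"user-{digest}@example.test",
        # Keep the region, drop the street-level precision.
        "zip": record["zip"][:3] + "00",
    }

prod_row = {"name": "Pat O'Connor", "email": "pat@example.com", "zip": "63101"}
masked = mask_record(prod_row)
print(masked["zip"])  # 63100
```

Because the stand-ins are deterministic, the same customer masks to the same fake identity across tables, so joins and workflows still exercise realistically without exposing anyone.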


Reminder To Our Client-Facing Co-Workers

Thursday, December 13th, 2007 by The Director

Remember, when you try to whitewash a problem, use enough actual whitewash to adequately cover the problem. Otherwise, you’re merely applying an antique finish to the problem.

Another Maxim

Wednesday, December 5th, 2007 by The Director

Sometimes, even if you have a bunch of tools, all problems still look like a nail if you really want to use a hammer.

Memo To Bottom Liners

Wednesday, November 21st, 2007 by The Director

Failure is cheap.

Counterintuitive To Whom?

Tuesday, November 20th, 2007 by The Director

Really, even the journalists of SD Times ignore what QA might have been telling them:

It seems counterintuitive to think that the biggest time-sink in the application production life cycle would receive the least regard from development managers. However, a survey published by Forrester Consulting has revealed that this conundrum is the cold hard fact for many organizations.

The objective of Forrester’s “Problem Resolution Survey Results and Analysis” was to determine where developers and testers are spending their time and to learn what is automated and what is not. The biggest time drain, according to the managers, directors and executives who responded: investigating and resolving application problems.

According to Forrester, almost half of the respondents require more than an hour to document a problem, and a problem report uses six types of media on average. “That is interesting to us, given the large number of problems,” remarked Eldad Maniv, vice president of BMC’s Identify Software Business Unit.

The respondents spend almost three out of every 10 hours (29 percent) in various stages of troubleshooting: documenting, reproducing or testing. On the average, a problem takes six days or more to resolve, and one in four of the problems reported by a QA or test group are returned as irreproducible.


Flout Convention At Your Peril

Thursday, November 15th, 2007 by The Director

A year and a half ago, Peter Coffee of eWeek wrote a column exhorting software designers to stick to convention when designing application interfaces:

The Web-browser actions of “back” and “reload,” along with a history of visited locations, have become widespread user interface metaphors. Browser-style buttons invite us to explore file systems, review online documents and access digital media—whether or not we’re on a network and whether we’re viewing resources that are local or remote. It figures that just as we reach the point where everyone knows how these things work, the content creation community is changing the rules.

Users have deep-rooted expectations about the semantics of “back” and “reload.” “Back” means “show me what I was looking at before I was looking at this.” “Reload” means “show me the present state of the content whose past state I’m seeing right now.” When designers don’t respect these conventions, users become confused and transactions are derailed.

He’s right, of course, and it goes without saying that in the absence of detailed requirements, testers will test applications against the expectation that they behave like every other application.

So keep your nonconformist genius to the metal sculptures you make in your backyards on the weekends.

Client Side Or Server Side Validation?

Monday, November 12th, 2007 by The Director

As some of you know, I am quite the proponent of client-side validation for Web applications and Web sites. The main reasons are speed and load.

Evidence-Based Scheduling in FogBugz

Wednesday, October 31st, 2007 by The Director

Joel Spolsky of Fog Creek Software explains some of the thinking behind Evidence-Based Scheduling included in the new release of FogBugz.

Over the last year or so at Fog Creek we’ve been developing a system that’s so easy even our grouchiest developers are willing to go along with it. And as far as we can tell, it produces extremely reliable schedules. It’s called Evidence-Based Scheduling, or EBS. You gather evidence, mostly from historical timesheet data, that you feed back into your schedules. What you get is not just one ship date: you get a confidence distribution curve, showing the probability that you will ship on any given date.

Honestly, that’s what you ought to be doing if you’re taking a scientific approach. However, your organization and its schedule builders aren’t scientific, preferring instead to build timelines and effort estimates to fit external constraints, deadlines, or budgets rather than reality.
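The quoted approach can be sketched as a Monte Carlo simulation over historical estimate-vs-actual ratios; this is my hedged reading of the idea, not Fog Creek’s actual algorithm:

```python
import random

def simulate_ship_dates(remaining_estimates, historical_velocities, rounds=1000):
    """For each round, divide every remaining task's estimate by a
    randomly sampled historical velocity (estimate/actual ratio) and
    total the results, yielding a distribution of possible totals."""
    totals = []
    for _ in range(rounds):
        total = sum(est / random.choice(historical_velocities)
                    for est in remaining_estimates)
        totals.append(total)
    return sorted(totals)

random.seed(1)
# Hours of estimated work left, and past estimate/actual ratios.
outcomes = simulate_ship_dates([8, 16, 4], [1.0, 0.8, 0.5, 1.2])
# The 50th and 95th percentiles give "likely" and "safe" ship dates.
p50 = outcomes[len(outcomes) // 2]
p95 = outcomes[int(len(outcomes) * 0.95)]
print(p50 <= p95)  # True -- later percentiles are never earlier
```

The point of the curve is exactly what the excerpt says: you don’t get one ship date, you get the probability of shipping by any given date, which is harder to wave away in a status meeting.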

So, carry on with those unwritten tasks of covering your rump when failures occur.

“Training Issue” Is No Parry Five

Friday, October 19th, 2007 by The Director

Some developers think the words “Training Issue” are a parry five when it comes to defect resolution. The developers will mark the issue as resolved/won’t fix or some such nonsense, brush the crumbs of QA intransigence from their hands, and go back to voting for the Nintendo Wii over the Xbox in an Internet poll.

I don’t know where the developers got the idea of this mythical training upon which they hope to rely to cover their deficient code; none of the places for which I’ve worked or contracted in this century have had actual training departments. Or much of a documentation department. So the developers who deploy the “training issue” defense really mean that a) This issue will be brought up in the 30-minute demo with the check signer where the check signer decides whether to sign that check or b) They really, really hope that a user doesn’t find it or c) “Training Issue” means “customer support issue.”


Friday, September 28th, 2007 by The Director

If you want it quick and dirty, you’ll surely get it dirty.

The Definition of QA Insanity

Wednesday, September 26th, 2007 by The Director

You might have heard that the definition of insanity is doing the same thing over and over again and expecting a different result. Software and Web applications are much the same way, in a twisted fashion of their own: you’re insane in QA if you do the same thing over and over again and expect to get the same result.

That’s why QA has to check everything, all the time, over and over again. The simplest, most slam-dunk, we’ve-done-this-a-million-times-before things. The things everyone else takes for granted. For example, let’s look at a simple state combo box/drop-down list.
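Even that humble drop-down deserves an audit every time, because its contents can quietly change between builds. A sketch of the kind of check I mean, comparing whatever the page serves today against the canonical list (how you scrape the options is up to your tooling; the sample input below is illustrative):

```python
# Canonical two-letter USPS state codes (50 states).
STATES = {
    "AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
    "IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV",
    "NH","NJ","NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN",
    "TX","UT","VT","VA","WA","WV","WI","WY",
}

def audit_state_options(options):
    """Compare the drop-down's options to the canonical list and report
    anything missing, unexpected, or duplicated."""
    seen = set(options)
    return {
        "missing": sorted(STATES - seen),
        "unexpected": sorted(seen - STATES),
        "duplicates": sorted({o for o in options if options.count(o) > 1}),
    }

# Options as scraped from the page (illustrative: a typo and 47 absentees).
report = audit_state_options(["AL", "AK", "AZ", "M0"])
print(report["unexpected"])  # ['M0'] -- a zero snuck in where an O belongs
```

Yes, it will pass a million times. The million-and-first time, someone will have regenerated the list from a broken query, and this is how you catch it.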


Garbage In, Garbage Out From Databases, Too

Tuesday, September 18th, 2007 by The Director

Whenever I bring up performing data validation on information returned from the database to the application, developers tell me it’s a waste of resources. However, I think Robin Harris might agree with me.
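To be concrete about what I’m asking for, here is a hedged sketch (field names and rules are illustrative): validate rows as they come back from the database instead of trusting that whatever got in was clean.

```python
def validate_row(row):
    """Check a row pulled from the database before the application
    trusts it; garbage can get into a table long after the insert
    path was tested -- migrations, imports, other applications."""
    problems = []
    if not row.get("email") or "@" not in row["email"]:
        problems.append("bad email")
    if not isinstance(row.get("age"), int) or not (0 < row["age"] < 150):
        problems.append("implausible age")
    return problems

# A row that passed every form validation years ago, then got
# mangled by a migration script (values are illustrative).
print(validate_row({"email": "not-an-address", "age": -3}))
```

The “waste of resources” is a few comparisons per row; the alternative is the application confidently rendering, emailing, or billing against garbage.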


Remember Your Users

Thursday, August 23rd, 2007 by The Director

Remember, computer users are not all geeks, nor are they godlike rockstar developers who live on Web logs, Twitter, Usenet, or whatever today’s cool means of intrageek chatter is. Here is your computer user:

Pensioners surfing the internet are spending more time online than their younger counterparts.

So-called “silver surfers” dedicate an average of 42 hours a month to the World Wide Web, compared with 37.9 hours among 18 to 24-year-olds.

Those are the computer users who need the bumpers and the training wheels and all the mechanisms within your Web sites/applications. If it’s merely good enough for those who quote Office Space or Hackers or Star Wars all day, it’s not good enough to ship. It has to be good enough for your grandmother, who’s one of the 12 million people still dialing up through AOL.

Pardon me for harping on this again, but I spent several hours last night trying to explain client-server technology, again, to someone who asked me how to move image files (my term, not hers) from My Documents to My Pictures.

D-E-F-E-C-T, Find Out What It Means To Me

Monday, July 30th, 2007 by The Director

If you’re in the software industry and you actually, you know, run the software (which excludes most managers, client-facing people, technical writers who produce the manuals, many developers, and some quality assurance professionals whose jobs rely upon creating pretty reports with neat statistics), you’ll probably encounter an issue, otherwise known as a defect or a bug, with your application or Web site. What should you do? If you ask a developer, he’ll probably tell you, “Don’t do that.” However, you should probably log an issue or otherwise report the problem so that your organization can earnestly act as though it’s going to fix the problem.

Many organizations create elaborate procedures to trace the defect’s accountability and to standardize defect report information. Many software packages ensure that users enter the same sorts of information for a defect report, but those expensive applications do nothing to audit the quality of the report that the users enter. Some organizations just use spreadsheets and misplaced e-mails in lieu of spending money or installing Bugzilla. Regardless of what technology your organization uses, the quality of the content within the defect report is more valuable than the most rigorous procedures in handling the defect report.
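The same point can be enforced mechanically: whatever tracker you use, audit the content of each report, not merely the presence of its fields. A sketch (the field names and the word-count heuristic are illustrative assumptions):

```python
REQUIRED_FIELDS = ("summary", "steps_to_reproduce", "expected", "actual")

def audit_defect_report(report):
    """Flag reports whose required fields are missing or too thin to
    act on -- the tracker enforces the fields, not their quality."""
    complaints = []
    for field in REQUIRED_FIELDS:
        value = report.get(field, "").strip()
        if not value:
            complaints.append(f"{field}: missing")
        elif len(value.split()) < 3:
            complaints.append(f"{field}: too terse to reproduce from")
    return complaints

# The kind of report the expensive tracker happily accepts.
report = {"summary": "it broke", "steps_to_reproduce": "",
          "expected": "works", "actual": "doesn't"}
print(audit_defect_report(report))
```

No tool can make “it broke” into a reproducible report, but a check like this at least makes the emptiness visible before a developer bounces it back as irreproducible.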

