I realize I’m a little late to the annual celebration of a maritime disaster, but back when it was timely last week, Popular Mechanics did a piece called Why We’re Still Learning the Lessons of Titanic about how even the most up-to-date engineering can fail catastrophically. A taste:
In one respect, little has changed. As the recent loss of the Italian cruise ship Costa Concordia demonstrates, bad decision making can overcome even robust engineering. Virtually all man-made disasters—including the Three Mile Island nuclear accident, the space shuttle Challenger explosion, and the BP oil spill—can be traced to the same human failings that doomed Titanic. After 100 years, we must still remember—and, too often, relearn—the grim lessons of that night.
No disaster is a single event. Complex systems rarely fail without warning. Instead, accidents are the product of decisions made over hours, days, and sometimes years. Those choices are shaped both by the culture of the organization—whether it’s NASA or the White Star Line, which owned Titanic—and by outside pressures.
When you’re mapping out, building, and testing applications, remember the human failure element. Remember the peril of the badmins.
And when you’re doing your risk analysis—deciding whether critical-but-unlikely bugs need to be fixed, or which extreme conditions you should test for—you and everyone else need to remember that unlikely is not impossible, and catastrophic is always catastrophic.
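To put some (entirely hypothetical) numbers on that point: a minimal sketch of expected-loss triage, showing how a rare-but-catastrophic failure mode can outweigh the near-certain minor ones—and why even the expected value understates it.

```python
def expected_loss(probability, cost):
    """Expected loss of a failure mode: likelihood times impact."""
    return probability * cost

# Hypothetical failure modes: (name, probability per year, cost if it happens)
bugs = [
    ("typo in error message", 0.9, 100),                     # likely, trivial
    ("slow report generation", 0.5, 2_000),                  # likely, minor
    ("data-destroying race condition", 0.001, 10_000_000),   # unlikely, catastrophic
]

for name, p, cost in bugs:
    print(f"{name}: expected loss = {expected_loss(p, cost):,.0f}")

# Even at a 0.1% likelihood, the catastrophic bug's expected loss (10,000)
# dwarfs the near-certain minor ones (90 and 1,000). And expected value
# still flatters it: you can't average your way out of the one outcome
# the organization doesn't survive.
```

The made-up numbers aren't the point; the shape of the comparison is. "Unlikely" multiplied against "catastrophic" is still a big number, and the worst case isn't something you get to amortize.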