You Would Think I’d Agree With This Thesis, But….

The Atlantic Monthly has a piece entitled “The Coming Software Apocalypse” that starts out with some examples of computer problems akin to what I post here:

There were six hours during the night of April 10, 2014, when the entire population of Washington State had no 911 service. People who called for help got a busy signal. One Seattle woman dialed 911 at least 37 times while a stranger was trying to break into her house. When he finally crawled into her living room through a window, she picked up a kitchen knife. The man fled.

The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.

Not long ago, emergency calls were handled locally. Outages were small and easily diagnosed and fixed. The rise of cellphones and the promise of new capabilities—what if you could text 911? or send videos to the dispatcher?—drove the development of a more complex system that relied on the internet. For the first time, there could be such a thing as a national 911 outage. There have now been four in as many years.

It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.
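
To be clear about the kind of failure being described, here is a minimal sketch of that class of bug. This is not Intrado’s actual code, obviously; the names and the specific ceiling are made up. But the shape is right: a running counter doing double duty as an ID generator, an arbitrary cap on it, and no alarm when the cap is hit.

    # Hypothetical, stripped-down sketch of the bug class described above:
    # a running counter doubles as the source of unique call IDs, someone
    # picked an arbitrary ceiling for it years earlier, and nothing alerts
    # anyone when that ceiling is reached.

    MAX_CALL_ID = 40_000_000  # "a number in the millions," chosen arbitrarily


    class CallRouter:
        def __init__(self):
            self.calls_routed = 0  # running count of calls routed to dispatchers

        def route_call(self, caller):
            """Assign the next call ID and route the call, or silently reject it."""
            next_id = self.calls_routed + 1
            if next_id > MAX_CALL_ID:
                # The failure mode: no alarm, no page to an operator, no log
                # anyone is watching. The caller just gets a busy signal.
                return None
            self.calls_routed = next_id
            # ... hand the call off to the appropriate 911 dispatch center ...
            return next_id


    if __name__ == "__main__":
        router = CallRouter()
        router.calls_routed = MAX_CALL_ID  # pretend months of traffic already happened
        print(router.route_call("Seattle caller"))  # None: busy signal, nobody knows why

The “fix” of changing a single number is just raising the cap; the scarier defect is the silent rejection path with nothing watching it.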

Okay, I agree with a lot of the premise of the article. But I know the author is not a computer expert of any stripe once we get to this passage:

Since the 1980s, the way programmers work and the tools they use have changed remarkably little.

Well, that’s a remarkably daft statement. I wrote a bit of code in the 1980s (for pay once, but I was young and I needed the money). What has changed since then?

  • IDEs.
  • Object-oriented programming.
  • Never mind, let’s go back to functional programming again.
  • Client-server architecture.
  • Web-based software.
  • IDEs and other scaffolding mechanisms automatically building a bunch of code you don’t understand or need.
  • Inserting open-source libraries and dependencies in your code for everything.
  • Distributed architectures where different machines handle different bits of your code.
  • Cutting and pasting from Stack Overflow.

And so on and so on.

The rest of the article seems to be a white paper for business object-based development. Which is totally a new thing that will change everything. Except that it’s not new; it’s at least as old as Versata, a company I invested in around the turn of the century and that was founded in 1989.

You know why this never takes off? Because the code that makes the pretty replacement for code is itself code, and an abstraction of exactly the type this article claims is the problem.
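
To make that concrete, here is a toy example of what these “model-driven” business-rule tools do under the hood. It is hypothetical, not Versata’s or any other vendor’s actual engine: the point is only that the “visual” rule is just data, and it does nothing until a pile of ordinary code interprets it.

    # Hypothetical toy example, not any vendor's actual product: the "rule"
    # an analyst draws in a pretty editor gets serialized to data like this...
    discount_rule = {
        "if": {"field": "order_total", "op": ">", "value": 100},
        "then": {"set": "discount", "to": 0.10},
    }

    # ...and it only does anything because ordinary code, written and
    # maintained by programmers, evaluates it. The abstraction is still code.
    OPS = {">": lambda a, b: a > b, "<": lambda a, b: a < b, "==": lambda a, b: a == b}

    def apply_rule(rule, record):
        cond = rule["if"]
        if OPS[cond["op"]](record[cond["field"]], cond["value"]):
            record[rule["then"]["set"]] = rule["then"]["to"]
        return record

    print(apply_rule(discount_rule, {"order_total": 150}))
    # {'order_total': 150, 'discount': 0.1}
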

You know what the real problem is?

Computer programming rarely, and even more rarely now, settles on a mature and proven technology. If you’ve been in the business for any number of years, you’ve seen technology stacks come and go along with the various frameworks, architectures, programming languages, and development methodologies. Every couple of years they rise and fall, and projects, products, and features get started in, kludged together with, or completely rebuilt on the new languages and frameworks. Then, a couple of years later, something else comes along and the cycle starts anew.

I know this reads a little bit like Old Man Yells At The Cloud, but there’s a lot of institutional knowledge lost when these ebbs and flows occur. Nobody’s gotten node.js right yet, but don’t worry, there’ll be something new in two years to take its place, and all of our defects can be washed clean and rebuilt in the new hotness.

The article compares software architecture to old-timey physical engineering, but it draws the wrong lessons. Instead of trying to make programming more visual, like things in the physical world, we need to ensure that the ‘best practices’ are learned and applied as universally as possible, and we need to slow down long enough to learn what those practices are and to work with them, and with mature technologies, to create things that work.

Instead, companies will continue to chase the newest technologies and languages and minimum viable products as fast as they can, with the result that computer science is less like science and more like Dungeons and Dragons Wild Magick rules.
