I came of age just as the computer did. One of the first computers I used was available only as a mail-order kit that you assembled yourself, and it stored data on cassette tape. Another weighed 30 pounds, had a three-and-a-half-inch screen with a text-only interface and five-and-a-quarter-inch floppy drives, and was considered "portable." My first video games were graphics-free affairs that ran on the university mainframe and used less computing power than a modern graphing calculator.
Early on, I was programming in BASIC, one of the first computer languages that used words rather than raw machine code. One of the very first things you learned was how easy it was to make a programming mistake that created an infinite loop, leaving your machine stuck until you powered it off.
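The classic version of the mistake is a loop whose exit condition can never become true. Here is a minimal Python sketch of the same bug (on a BASIC-era machine there was no safety net, so the loop ran until you cut the power; the step cap here is added purely so the example terminates):

```python
def count_to_ten():
    """A loop that looks like it counts to ten but never does."""
    i = 0
    steps = 0
    while i < 10:        # bug: i is never incremented,
        # so the condition i < 10 stays true forever.
        steps += 1
        if steps > 1000:  # safety cap, not present in the buggy original
            return "stuck: the loop never made progress"
    return "done"

print(count_to_ten())
```

Without the safety cap, this program spins forever, exactly the behavior that froze those early home computers.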
Not long after that, Star Trek featured an episode in which the crew cleverly and intentionally got a ship's computer into an infinite loop to prevent something horrible from happening.
At the time, I didn't buy it. Sure, we had problems like that back in the late 1970s and early 1980s, with computers that everyone knew were in their infancy. But I was sure that a problem identified so early would be stamped out in no time, never mind by the 25th century or whatever far-future era Star Trek was set in.
But forty years later, this is still a routine problem that comes up many times a week with the computers I use in my personal and professional life. It isn't uncommon for entire big businesses and governments to be brought down by it now and then, either.
Is this problem really an inherently difficult one to stamp out? Or do the multi-billion-dollar companies that make most of the software and hardware I use every day just really suck at quality control, because they think there is no money to be made in developing reliable systems that aren't so fault prone?