It seems to me that software obeys a Peter Principle-like law in that it will always lie on the border of just-tolerable functionality. Programs will mostly do what you need them to do, but there will always be frustrating glitches that never get corrected. Newer versions may add more features, but the user experience does not improve. For example, in order to read all the email I receive I need to use two programs – the Mac Mail program and Thunderbird. The reason is that some attachments are scrambled by Thunderbird, while other attachments do not show up in Mail at all, without even an indication that there was an attachment. I have no idea when this will happen, so I keep both programs open and randomly switch between them. Sometimes entire mail messages will appear in one program but not the other. I haven’t tried very hard to alleviate these problems, but a cursory examination of discussion groups indicates that the Mac Mail problem has been around for at least four years. It appears that the problem is not frequent enough to be worth correcting (although it is frequent enough to be a nuisance). Another example is that even though computers double in speed every year and a half, it still takes about the same amount of time for my computer to turn on now as it did a decade or more ago. The 20 or 30 seconds it takes to boot up is probably the maximal time a human will tolerate, so every time a computer gets faster, it just gets loaded up with more things to do until it hits this tolerability threshold. What I can do on a computer does increase with time, but these new capabilities tend to saturate at a given level of effectiveness.
This issue becomes a serious problem when your life depends on the software. My guess is that Toyota will never fully understand what causes unintended acceleration in its vehicles. The events are infrequent enough that they may never be reproduced exactly, and so corrections may be impossible. I’m also certain that similar problems will appear for other carmakers and will only increase in frequency as more and more of the car becomes automated. I remember reading a few years ago that the Navy used Windows NT to run the computers on its warships. That wasn’t exactly a comforting thought. However, the public does need to be aware that simply chastising executives at Toyota won’t solve this problem. They could have been as responsive as possible to owner complaints and still never have solved it. To really solve this problem, we would probably need a fundamental change in the philosophy of system design. We may need to give up on the notion of complex, comprehensive systems and instead rely on very simple finite state automata that can be proven to be reliable. To prove reliability, the response of the system to all possible inputs in all possible internal states must be checked, and an exhaustive check is only possible if the set of states and inputs is finite. This means that “intelligent” systems that can adapt to a multitude of contingencies and respond in kind must be scrapped, or at least always be backed up with a simple provable system, and the conditions under which the backup system takes over must also be proven reliable. I think there will always be a trade-off: if we insist on complete reliability, we must give up some features. We must decide what risk we are willing to accept.
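The exhaustive check described above can be made concrete with a small sketch. Here is a minimal, purely illustrative example in Python, assuming a hypothetical toy cruise-control automaton (the states, inputs, and transition table are invented for illustration and do not come from any real system). The verifier walks every possible (state, input) pair and confirms that the machine’s response is defined and stays within the known state set – exactly the kind of finite, checkable guarantee that an adaptive “intelligent” system cannot offer.

```python
from itertools import product

# Hypothetical toy cruise-control automaton. All names are illustrative,
# not taken from any real vehicle system.
STATES = {"idle", "engaged", "braking"}
INPUTS = {"set", "cancel", "brake_pedal"}

TRANSITIONS = {
    ("idle", "set"): "engaged",
    ("idle", "cancel"): "idle",
    ("idle", "brake_pedal"): "idle",
    ("engaged", "set"): "engaged",
    ("engaged", "cancel"): "idle",
    ("engaged", "brake_pedal"): "braking",  # braking always disengages
    ("braking", "set"): "braking",          # cannot re-engage while braking
    ("braking", "cancel"): "idle",
    ("braking", "brake_pedal"): "braking",
}

def verify(transitions, states, inputs):
    """Exhaustively check every (state, input) pair: the machine must be
    total (no undefined response) and closed (it never leaves the set of
    verified states). Feasible only because both sets are finite."""
    for state, inp in product(states, inputs):
        if (state, inp) not in transitions:
            return False  # undefined response: behavior is unprovable
        if transitions[(state, inp)] not in states:
            return False  # transition escapes the verified state set
    return True

if __name__ == "__main__":
    print(verify(TRANSITIONS, STATES, INPUTS))  # expect True for this table
```

Removing even one entry from the table makes `verify` return False, which is the point: reliability here is not a statistical claim but a property checked over the entire finite behavior of the system.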