The recent tragedies involving the 737 Max are perfect examples of just how devastating a breakdown in this critical relationship can be. Worrying about how we get along with gadgets and computers is nothing new; it predates computers themselves by decades. It used to be all about robots; indeed, some of the earliest and most influential robot stories were written way back in 1939 by famed science fiction author Isaac Asimov. Asimov’s first robot story centered on “Robbie,” the most advanced robot its fictional world had yet developed. In those very early days of automation, fear of the newly developing devices was the basis of many science fiction stories, which featured robots turning against their human programmers. Asimov disagreed; he believed this “Frankenstein complex” was an unfounded fear. The majority of his works attempted to show the help that Robbie and his fellow robots, soon to evolve into “artificial intelligence” (AI), could provide humanity. Seeing now that this hasn’t always been the case, Asimov likely would have agreed with Yogi Berra, who once wisely cautioned, “It’s tough to make predictions, especially about the future.”
Whether we’re ready for it or not, the future is here. Today, AI is all about how automation and computerization can improve efficiency and outcomes, ease daily tasks, and reduce our workload on the ground and in the air. Our lives are completely wrapped around the devices and computers that run our phones, laptops, cars, kitchens, TVs, doorbells, workouts, and every other aspect of modern life. As much as today’s technology is about having computers adapt to us and help us, getting along with all this computerized stuff – especially the instruments in our airplanes – is our task, not the computers’.

Since this relationship can break down so easily, some smart people have tried to figure out how we can get along better with our new partners. Elon Musk has a startup called Neuralink, which he says is “developing ultra-high bandwidth brain-machine interfaces to connect humans and computers.” It sounds a lot like Keanu Reeves in The Matrix. James R. Chiles, in his great book Inviting Disaster: Lessons from the Edge of Technology, addressed this exact topic, citing the grounding of the cruise ship Royal Majesty, which was steered by its autopilot, following a GPS with a disconnected antenna cable, onto a shoal off Nantucket Island. He speculated that “perhaps one distant day evolutionists will look back to our time and say that this was when homo sapiens began evolving into homo machina, ‘machine man,’ a species able to understand what it really takes to build and run complex, high-power systems in a world with forces that are still a lot more powerful than we are.”
This evolving relationship between machines and us of course has a name, its own acronym, and even a professional society. Welcome to the science of HMI, the Human-Machine Interface, whose researchers and enthusiasts gather under the I-triple-E, the Institute of Electrical and Electronics Engineers. The computers and AI in our airplanes also have a name: the “glass cockpit.” With all that glass in front of us we’re never alone in the cockpit anymore; Robbie is there as our second in command. The obvious advantage to all this computerization is that it is programmed to just fly the airplane, without getting caught up in the human expectations and biases that we have seen can be such a big problem. But all the computers in the world still only do what we tell them to do, and sometimes it is our own failures that lead to errors, and even fatalities, when pilots don’t tell the system to do the right things. FAA and NTSB accident reports are full of flights that ended in disaster because the pilots and Robbie had a falling out. Failure to make full use of Robbie’s capabilities figures in the most common type of fatal general aviation accident: VFR flight into instrument meteorological conditions (IMC). Very experienced pilots can get caught in a similar trap, and the two recent 737 Max tragedies appear to be rooted in not knowing what Robbie was up to with a new piece of software known as the Maneuvering Characteristics Augmentation System (MCAS). The system was designed to automatically command nose-down stabilizer trim if the aircraft’s angle of attack (AOA) indicated an imminent stall. Pilots transitioning to the new plane were not prepared for this and did not know how to interface with a device that was supposed to protect them from a stall but, fed erroneous data by a failed AOA sensor, instead repeatedly pushed the nose down in normal flight. The pilots of the two airplanes that crashed could not counter the MCAS trim inputs, nor disable the system and hand fly out of the dive, and everyone on board was killed. Chiles, predicting the kind of problem they faced, wrote that “all systems of any worth experience human errors and malfunctions daily.” (A toy sketch at the end of this section illustrates the single-sensor trap at the heart of the MCAS story.)

The FAA has another designation for Robbie and categorizes glass cockpit equipped airplanes as “technically advanced aircraft” (TAA). A recent NTSB study looked at the accident rates of over 8,000 TAAs manufactured between 2002 and 2006 and compared them to accidents in airplanes with conventional instruments. Again, it seems that Robbie might not be helping us as much as Isaac Asimov envisioned: glass cockpit airplanes had twice the fatal accident rate of similar aircraft with the standard six-pack of instruments (31% vs. 16%). The Safety Board determined that glass cockpit systems are so complex, and so different from standard instruments in function, design, and failure modes, that pilots just don’t understand the unique operational and functional details of the primary flight instruments in their own airplanes. In simple terms, Robbie knows what to do, but pilots don’t know how to get him to do it. The breakdown in the relationship varies from incident to incident, but it appears to be rooted in confusion: our brains expect one thing (that bias thing again) and try to solve the problem one way, while the automation is telling us another.
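To make the MCAS breakdown concrete, here is a deliberately simplified sketch of the kind of logic involved. This is emphatically not Boeing’s code; every name, threshold, and number below is invented for illustration. The point is how an automation rule that trusts a single sensor behaves once that sensor fails, and why a cross-check and a pilot cutout matter.

```python
# Toy sketch (all names and numbers hypothetical): why single-sensor
# stall protection is fragile.

STALL_AOA_DEG = 14.0   # invented AOA threshold for "approaching stall"
TRIM_STEP_DEG = 0.6    # invented nose-down trim increment per activation

def stall_protection_step(aoa_left_deg, aoa_right_deg, cutout_engaged):
    """One control cycle of a simplified stall-protection system.

    Returns the stabilizer trim command in degrees (negative = nose down).
    """
    if cutout_engaged:
        return 0.0  # pilots have disabled electric trim; system stands down

    # Fragile design: act on one vane only. A failed vane stuck at a
    # high reading triggers nose-down trim cycle after cycle, even in
    # perfectly normal flight.
    if aoa_left_deg > STALL_AOA_DEG:
        return -TRIM_STEP_DEG

    # Robust alternative: require both vanes to agree before acting.
    # if (aoa_left_deg > STALL_AOA_DEG
    #         and abs(aoa_left_deg - aoa_right_deg) < 5.0):
    #     return -TRIM_STEP_DEG
    return 0.0

# A left vane stuck at 22 degrees commands nose-down trim every cycle,
# despite a healthy right vane reading a benign 3 degrees:
for cycle in range(3):
    print(cycle, stall_protection_step(aoa_left_deg=22.0,
                                       aoa_right_deg=3.0,
                                       cutout_engaged=False))
```

With the commented-out cross-check in place, the bad left vane would be outvoted and the system would stand down instead of trimming against the pilots; with the cutout engaged, the crew could take the airplane back entirely. Neither escape is obvious to a pilot who doesn’t know Robbie is back there trimming in the first place.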
Comparing the two authors we started this topic with, Chiles is much more cautious than Asimov about the future. But then again, he benefits from Yogi’s insight and from several more decades of evidence than Asimov had, and he can see how much more complex things have gotten. Chiles wrote, “There will be opportunities for people to break the chain and stop an automation system failure before it reaches a disaster. But their chances to respond may be difficult to recognize, fleeting in time, and even more difficult to act upon.” This is exactly what the 737 Max pilots faced. Without a thorough understanding of what Robbie is doing and why, pilots tend to assume he will do the right thing. But he doesn’t always, and without active supervision Robbie can fly us into a disaster. Next month we’ll dissect the FAA recommendations and see how we can manage this crucial interface to have a happy and safe relationship with all of the computerized automation capabilities of our airplanes.