Me and Robbie (Part Two)

The paradoxes in our quest to find the holy grail of flight safety continued last month with our look into cockpit automation. Even after tracing this all the way back to the 1930s, when Isaac Asimov wrote science fiction stories about “Robbie,” the robot programmed to help its human creators, we were left with the conflict of our brains trying to get along with Robbie and integrate our combined talents into the automated systems of the modern airplane cockpit. We established the need to use automation to check our own biases, but recent events show that unfamiliarity, overreliance, or failure to monitor and supervise the automation can be just as deadly.

We started peeling apart the layers of these paradoxes a number of months ago when we analyzed the crucial importance of situational awareness (SA) in flight safety, only to be forced to confront our own expectation biases (EB) that interfere with our big-picture constructs. The next deeper layer brought us to examine the need to use the artificial intelligence (AI) of modern glass cockpit automation and instrumentation to take EB out of our flight operations.

As you might expect, there’s another shoe to drop here. We need to balance our insistence on the value of automation against the report cited last month from the NTSB, which found that technically advanced airplanes (TAA) actually have twice the fatal accident rate of traditionally configured airplanes. That report, along with the recent 737 Max tragedies, leaves us with the uncomfortable feeling that automation has a dark side too and may occasionally be harming us, not helping us. It appears that there are still more layers of this safety onion to peel back.

We can start to peel back the layers with the assumption that the NTSB findings, and maybe the recent 737 Max events as well, simply reflect the learning curve pilots face in understanding all of this new technology. This idea was advanced at a Flight Safety Foundation Aviation Safety Seminar in Italy a few years back, where it was reported that “inadequate crew knowledge of automated systems was a factor in 40% of serious incidents from 2001 to 2009.” Supporting this theory is research done by the Aerospace Crew Research Project at NASA indicating that gaining expertise on an advanced technology flight deck takes a lot longer than you might think. Their data show that pilots need approximately 700 hours of line experience in a specific advanced technology aircraft to become expert users of its automated systems, a kind of advanced CRM they refer to as “acquiring human-autonomy teaming (HAT) skills.” That is far more time than most GA pilots have in their technically advanced airplanes before launching, and more time than the 737 Max pilots had with the new MCAS automation in their airplanes. The optimistic reading is that the problems should improve the longer these devices are in use, but it’s probably more complicated than just the number of hours we train with the automation.

Dr. Linda Skitka and Dr. Kathleen Mosier are two PhD psychologists who are the gurus of automation-induced errors. I discussed their work with Dr. Mosier a few weeks ago in preparation for this article, and I appreciate the insight she has brought to understanding these incidents. She told me that she and Dr. Skitka have studied this problem for decades, and their work explores Robbie’s dark side to help us understand other factors behind the NTSB finding of such a high fatality rate in TAAs. They have coined a descriptive phrase, automation bias (yeah, I know, another bias to worry about), which they call a kind of “cognitive laziness.” They published a study using a simulated engine fire scenario in which professional airline pilots overwhelmingly shut down the engine indicated by their automated system (the wrong engine) and did not examine other indicators that would have shown the true status of each engine. Their study concluded, “When people have an automated decision aid available, they do as it directs. The presence of automated cues appears to diminish the likelihood that decision makers will put forth the cognitive effort to seek out other diagnostic information or process all available information in cognitively robust ways.” Even more concerning was their finding that, “in the absence of automated instructions, people often do nothing, regardless of what other system (traditional) indices indicate should be done.” This “automation complacency” leads pilots to develop a self-satisfaction and a failure to check automated systems sufficiently, assuming that everything is fine when, in reality, a critical event is about to occur.

David Lyell has also done work in this area and concurred with these conclusions. He said that “operators’ perceptions of the automated system’s reliability can influence the way in which the operator interacts with the system. Our human brains have a propensity to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if other information is correct.”

We don’t have to leave the ground to see the consequences of this kind of complacency and cognitive laziness. The poster child for automation deference, turning decision-making duties and even common sense over to an automated device, is a New Jersey tourist, Noel Santillan, who became an unlikely folk hero in Iceland a few years back after he let his GPS “guide” him all over the island while just trying to cover the short hop from the airport to downtown Reykjavik. I’ve driven that route and it’s not complicated; it’s a straight shot north along the coast on Route 41 with hardly any turn-offs, keeping the ocean off your left shoulder. It seems that his hotel in Reykjavik had the same name as a remote fishing village on the opposite side of the island and, blindly “trusting” the automation, he mindlessly drove for more than six hours and hundreds of miles out of his way until it dawned on him that something wasn’t quite right. How can anyone wander so far off the mark? The answer is complicated, and it can happen in the air too, with much more devastating consequences.

Take, for example, American Airlines flight 965, a Boeing 757 flying from Miami to Cali, Colombia, that crashed in the Andes late at night on December 20, 1995. With clear skies and no traffic in the pattern, Cali Center had cleared the flight from 65 miles out direct to the Cali VOR for a straight-in approach to runway 19 (the Rozo 1 arrival). The cockpit crew had over 20,000 hours of experience and was familiar with the approach, having flown into Cali on numerous occasions. They selected “R” for “Rozo,” the IAF waypoint for the approach, in the Flight Management Computer (FMC), confirming it against their approach plates, which showed “R” as the identifier letter for Rozo. What they didn’t realize was that “R” also identified another navaid in the FMC database, Romeo, roughly 150 nm east of Rozo.

With “R” entered, the FMC “chose” Romeo over Rozo as its new destination because it came first in the alphabetical listing. The pilots did not know that their computer was programmed to default to alphabetical order when selecting among inputs that started with the same letter. With the autopilot set for what the crew thought would be a straight-in approach, the “Romeo” data entry started the plane on a turn to the left (east), toward the rising peaks of the Andes. The crew couldn’t understand why, but again trusting the automation, they didn’t disengage it and hand-fly the airplane back onto the correct flight path. By now they had engaged the vertical descent mode and the airplane was configured for landing with the spoilers deployed. The ground proximity warning system (GPWS) sounded, but it was too late; in spite of the crew’s attempt to pull up, the aircraft crashed into the mountains at 8,900 feet, some 5,000 feet above Cali, killing 159 people onboard. Miraculously, four passengers survived.
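
To see how an innocent-looking default can redirect an entire flight, here is a minimal sketch, in Python, of the kind of ambiguous-identifier lookup described above. It is purely illustrative and not the actual FMC software; the database contents and function names are hypothetical, and the alphabetical tie-breaking rule is simply taken from the account in this article.

```python
# Illustrative sketch only -- not real FMC logic. It assumes the behavior
# described above: when a typed identifier matches more than one navaid in
# the database, the system silently picks the first match in alphabetical
# order instead of asking the crew which one they meant.

# Hypothetical navaid database keyed by the short identifier the crew types.
NAVAID_DATABASE = {
    "R": ["ROZO", "ROMEO"],  # two different navaids answer to "R"
}

def select_waypoint(identifier: str) -> str:
    """Return the navaid the automation will navigate toward for a typed identifier."""
    matches = NAVAID_DATABASE.get(identifier, [])
    if not matches:
        raise ValueError(f"No navaid found for identifier {identifier!r}")
    # The silent default: alphabetical order, not the crew's intention.
    return sorted(matches)[0]

if __name__ == "__main__":
    crew_intended = "ROZO"
    fmc_selected = select_waypoint("R")
    print(f"Crew intended {crew_intended}; FMC selected {fmc_selected}")
    # Prints: Crew intended ROZO; FMC selected ROMEO
```

A confirmation prompt whenever more than one navaid matches would surface the ambiguity for the crew instead of hiding it behind a default, which is exactly the kind of human-automation teaming issue this series keeps running into.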

To shed more light on these decision tendencies, Drs. Skitka and Mosier carried out further studies comparing pilot decision-making with and without an automated monitoring aid (AMA). The traditional instrument gauges were described to participants as 100% reliable; the AMA was described as highly but not perfectly reliable. They found that “the AMA failed 12% of the time by either failing to prompt participants about a high risk event or incorrectly prompting a response” (as in our examples). When challenged with a potential problem, “participants in the non-automated (traditional six-pack of instruments) condition responded with 97% accuracy, whereas participants in the automated condition responded with only a 59% accuracy rate to these same events.” Wow, that gap echoes the 2-fold fatality difference identified in the NTSB study.

The same study went on to say that “people with an AMA were therefore more likely to miss events than those without the automation when the AMA failed to notify them of the event. When automated monitoring aids operated properly, their presence led to an increase in accuracy and a reduction in errors over not having the automation. However, when the automation failed (or was not properly programmed), the presence of an automated aid led to an increase in errors relative to non-automated systems.” Maybe we can coin another phrase, “the double-edged sword of automation.” DESA? Never mind, that’s just too much.
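
Before coining any more acronyms, it’s worth doing the back-of-the-envelope arithmetic on those accuracy figures. The short Python snippet below is purely illustrative; the numbers are the ones quoted above, and the comparison to the NTSB fatality data is an analogy, not a derivation.

```python
# Back-of-the-envelope arithmetic on the figures quoted above (illustrative only).
non_automated_accuracy = 0.97  # traditional six-pack condition
automated_accuracy = 0.59      # AMA condition

# Accuracy gap between the two conditions (roughly 1.6-fold).
print(f"Accuracy ratio: {non_automated_accuracy / automated_accuracy:.2f}x")

# The same gap looks far larger expressed as error rates: 3% of events
# missed without the AMA versus 41% missed with it.
non_automated_errors = 1 - non_automated_accuracy
automated_errors = 1 - automated_accuracy
print(f"Error-rate ratio: {automated_errors / non_automated_errors:.1f}x")
```

However you slice the numbers, the direction is the same as the NTSB finding: the automated condition performed markedly worse when the automation failed.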

A growing body of research helps us understand why our brains are having trouble adapting to these new levels of automation. These studies suggest that Robbie is so good that our reliance on automated technology actually risks, quite literally, making us dumber by altering the way we process information and solve problems. The Douglas Mental Health University Institute in Montreal did a study comparing the brains of “spatial thinkers and navigators,” who work out problems and relationships by actively thinking about visual cues and landmarks, with those of “stimulus-response problem solvers and navigators,” who slip into a kind of cognitive autopilot and simply follow automated directions like those generated by a GPS. The spatial navigators showed significantly more activity in the hippocampus, a critical problem-processing part of the brain, during navigation exercises that allowed for different orientation strategies. Long-term MRI follow-up showed that these problem solvers also have more gray matter (“thinking cells”) in the hippocampal region than the stimulus-response navigators, who don’t build cognitive maps. “If you follow a GPS blindly,” the study concludes, “it could have very detrimental long-term effects on cognition.”

This was confirmed at the University of London Department of Neurology, where researchers studied the brains of London taxi and bus drivers, also with functional MRI (f-MRI). The older drivers, who had spent decades navigating the ancient alleys and twisty back roads of London by visual cues and memory, had significantly more gray matter and hippocampal density than younger drivers who turned their cognitive duties over to automation and relied only on GPS maps. The researchers also found a direct link between habitual reliance on automated navigation technology and memory loss, a precursor to age-related cognitive degeneration. Dr. Veronique Bohbot at McGill University backed up these findings with her own studies showing that a smaller, weaker hippocampus makes you more vulnerable to brain diseases like Alzheimer’s, since the hippocampus is one of the first regions to be affected.

These are important indicators that not only give us more insight into the NTSB findings and the causes of other automation-related accidents but also hint at ways to solve the problems. As we peel down to the deepest layers of our conundrum and look for solutions to automated flight safety, the answer is clearly more involved than just the stick-and-rudder skills of years past. Next month we’ll complete our look at the two sides of modern cockpit automation and discuss a number of solutions to this issue, and how we can fulfill Asimov’s prediction that automation is really here to help all of us live easier lives and fly more safely.

Kenneth Stahl, MD, FACS

Kenneth Stahl, MD, FACS is an expert in principles of aviation safety and has adapted those lessons to healthcare and industry for maximizing patient safety and minimizing human error. He also writes and teaches pilot and patient safety principles and error avoidance. He is triple board-certified in cardiac surgery, trauma surgery/surgical critical care, and general surgery. Dr. Stahl holds an active ATP certification, is a 25-year member of the AOPA, and has thousands of hours as pilot in command in multiple airframes. He serves on the AOPA Board of Aviation Medical Advisors and is a published author with numerous peer-reviewed journal and medical textbook contributions. Dr. Stahl practices surgery and is active in writing and industry consulting. He can be reached at [email protected].
