
We're Getting Lazy

A few years ago, a 28-year-old New Jersey dude named Noel Santillan decided to take a trip and check out Iceland. It's a wonderful place to visit; I've been there and have many fond memories of the awesome landscape, friendly people, and great, healthy food.

Noel rented a car when he got to the Reykjavik airport (RKV) and programmed the GPS on his phone to take him to a street in downtown Reykjavik with the hard-to-pronounce (and spell) name Laugavegur. Unfortunately, he keyed an incorrect letter into the address, adding an extra “R” and entering “Laugarvegur” instead of his intended destination. Although the spellings are close, the two streets aren’t; they’re actually 5 hours apart.

In my travels to Iceland, I also rented a car, and also programmed the GPS to take me and my family from the airport to our hotel in Reykjavik. It’s hard to get lost as the airport is right across the harbor and within sight of the city, about 20 minutes away. It’s an easy drive too, a nice wide four-lane road with great signage directing you right from the airport parking lot to downtown Reykjavik. There’s almost never any traffic and the road is kept free of snow and ice during the long winters, although not always clear of lava that spews onto the roads from one of the 150 nearby volcanos.

Our friend Noel put his head down and started driving, keeping his eyes glued to the GPS, which everyone knows is ALWAYS right, and blindly followed it on a 5-hour joy ride all the way to the other side of the country. He apparently never contemplated that he might have missed his exit and needed, perhaps, to rethink where he was going. This kind of dependence on machines is called “automation bias,” and it is a common source of errors up in the sky and right down here on the ground. Good ol’ Noel is my nominee for the first-place award in the annual automation bias competition.

It's OK to have some fun with someone who falls prey to this bias on the ground and only burns up a lot of gas and a lot of time, but it's deadly serious if it happens in the cockpit. Automation bias is blamed for the tragic crash of American Airlines Flight 965 on December 20, 1995. The aircraft was a Boeing 757 on a regularly scheduled passenger flight from Miami to Cali, Colombia, flown by a flight crew with over 20,000 hours in the airplane and multiple previous flights to Cali. With clear evening skies and no traffic in the pattern, Cali Center had cleared the flight through a long mountain valley from 65 miles out direct to the Cali VOR for a straight-in approach to runway 19, following the Rozo 1 arrival. You can follow my description on the instrument approach plate and the flight track here.

The cockpit crew selected R for Rozo, the IAF waypoint for the approach, on the flight management computer (FMC), confirming it against their approach plates, which showed R as the identifier letter for Rozo. But R also identified another navaid in the FMC database, Romeo, which is 150 miles east of Rozo, high up in the Andes Mountains. With R entered, the FMC chose Romeo, the IAF for Bogota, Colombia, over Rozo because it was higher in the alphabetical listing. With the autopilot now set direct to Romeo, the plane started a turn to the left (east). The crew flew on autopilot for a while to the east and then realized they were no longer on a straight-in approach to Cali, but they didn't disengage the FMC and hand fly the airplane on the correct flight path. Instead, they reprogrammed the autopilot to their intended IAF, Rozo, and the plane started a turn back to the west, but it was still in vertical descent mode with the airplane configured for landing and the spoilers deployed. The aircraft descended during the turn back toward Cali and, now outside the straight-in valley approach, crashed into the rising terrain of the Andes at 8,900 feet, 5,000 feet above Cali, killing 159 people onboard. The flight data recorder showed the autopilot was still fully engaged on impact.
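For readers who like to see the mechanics, here is a minimal sketch in Python of how a one-letter identifier can silently resolve to the wrong fix when a database rule such as "first match in the listing" breaks the tie. The navaid entries, coordinates, and selection rule below are illustrative assumptions, not the actual FMC database or its logic.

    # Hypothetical navaid listing: two fixes share the identifier "R".
    # Entries and coordinates are illustrative only, not real database values.
    NAVAID_DB = [
        {"ident": "R", "name": "ROMEO", "lat": 4.70, "lon": -74.13},  # near Bogota
        {"ident": "R", "name": "ROZO",  "lat": 4.40, "lon": -76.38},  # near Cali
    ]

    def resolve_ident(ident):
        """Return the first listed fix that matches the identifier."""
        matches = [fix for fix in NAVAID_DB if fix["ident"] == ident]
        if not matches:
            raise ValueError(f"No navaid with identifier {ident!r}")
        return matches[0]  # first match wins; nothing asks the pilot which one

    chosen = resolve_ident("R")
    print(f"Entered 'R'; autopilot now direct to {chosen['name']}")
    # Prints: Entered 'R'; autopilot now direct to ROMEO, not the intended ROZO

The point of the sketch is not the code itself but the silent tie-break: the entry looks accepted, the display shows a waypoint, and it takes deliberate cross-checking to notice it is the wrong one.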

A while back I wrote a three-part series in this space on automation bias and how abdicating your decision-making authority to the automation can get you into trouble in the cockpit. The risk of falling into the automation bias trap has increased hugely in the last few years due to the added risk of AI that we talked about last month, and this important topic needs to be revisited in light of our new technology. This concern has also entered the medical world, and a number of worrisome studies on just this issue have been reported in the medical literature: physicians turning their primary role of making decisions over to artificial intelligence. One recent article in The Lancet carried the ominous title “Endoscopist Deskilling Risk after Exposure to Artificial Intelligence in Colonoscopy.” One word you never want your doc described with is “deskilled.” The study looked at 1,443 patients who underwent colonoscopy before (n=795) and after (n=648) the introduction of AI. The researchers found that the cancer detection rate of the docs who had become dependent on AI was greatly diminished, a significant decrease compared with doctors who performed standard colonoscopy. What really mattered to the patients was that AI-assisted colonoscopy missed almost 10% of their cancerous and pre-cancerous tumors, and with them a chance for an early, complete cure.

A well-documented and thoughtful summary of the impact of AI on physicians' clinical capabilities was published in Medscape under the title “AI Eroding Cognitive Skills in Doctors: How Bad Is It?” The article asked some critical questions, and you could easily substitute “pilot” for “doctor”: What happens to a doctor's (pilot's) mind when there's always a recommendation engine sitting between thought and action? How quickly do habits of attention and skills fade when the machine is doing the prereading, the sorting, even the first stab at a diagnosis? The conclusions support the contentions I made at the outset of this article and are deeply concerning. As AI becomes so good, human experts defer to it so much that they become susceptible to automation bias. More specifically, AI threatens some of the most central parts of clinical practice: the hands-on skill of examining patients, the ability to communicate clearly and manage their concerns, the craft of building a differential diagnosis, and the broader judgment that ties it all together. This really bothers me, but we can learn from it: both pilots and doctors need to choose the tasks AI should own, measure when it helps and when it harms, and never leave our clinical judgment, or our pilot judgment, to unsupervised and unattended automation. When you turn your cognitive responsibilities over to automation, you can blindly fly your airplane into a mountain, drive all the way across a country, or totally erode your professionalism and professional skills.

Just how AI leads to this cognitive laziness is another frightening prospect for our future, and it was the topic of a report that came out last month from MIT looking at college and graduate students who wrote their essay assignments with AI. It was titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using AI Assistant for Essay Writing Tasks.” The research is an incredible deep dive using real-time brain wave (EEG) analysis to follow the electrical activity in the heads of these students, along with a detailed follow-up study of their knowledge and retention of what they had researched and written. The subjects were divided into three groups. The first group used an LLM (large language model) app, ChatGPT from OpenAI, to generate their homework assignments. The second group used standard web-based search engines like Google and DuckDuckGo to research their work, and the third group were the “brain-only” kids. The participants each cycled through all three groups and were independently tested on their work product in each research environment.

The conclusions were unfortunately not all that surprising, yet still very dramatic. While the participants' real-time EEG brain wave patterns were monitored during their essay work, the brain-only group showed higher neural connectivity throughout the cortical areas of the brain used for cognition and complex data processing. The LLM ChatGPT group showed very low cortical electrical activity, indicating little cognitive work during their research. In other words, they were just copying stuff, not thinking about or learning from what they were doing. In addition, only 15% of the LLM group could quote anything from their own essays, proving that they had learned almost nothing from doing their homework. Ninety percent of the active-thinking students, the search engine group and the brain-only group, could discuss and quote from their own work, indicating that a high degree of actual learning had taken place during their assignments.

Over the last few months, we've talked about the wondrous ways our brains use ingenuity and heuristic modeling to make decisions and achieve cognitive excellence. It seems that overreliance on AI bypasses all of these cerebral capabilities and erodes our thought and reasoning skills, truly a perfect definition of cognitive laziness. The takeaway lesson from all of this is that there's nothing wrong with having a laid-back, chill day. Let the grass grow a little longer, kick back and watch a good football game, forget work, ignore your phone, be lazy and let your mind relax. But never let yourself get cognitively lazy; never let AI do your work or let the GPS take you on a thoughtless joy ride. I quoted President Ronald Reagan last month and I'll let him poke fun again: “I've heard that hard work never killed anyone, but I say why take the chance?” Hard work and physical exertion are one thing, but do the hard cerebral work, never get cognitively lazy, and never put the decisions born of your mental labors at risk; that's an invitation to degraded skills and even disaster.

Kenneth Stahl, MD, FACS
Kenneth Stahl, MD, FACS, is a surgeon who is triple board-certified in cardiac surgery, trauma surgery/surgical critical care and general surgery. Dr. Stahl holds an active ATP certification and is a 25-year member of the AOPA with thousands of hours as pilot in command in multiple airframes. He serves on the AOPA Board of Aviation Medical Advisors and is a member of the Federal Aviation Administration Aeromedical Innovation and Modernization Advisory Board. He is an expert in principles of aviation safety and has adapted those lessons to healthcare and industry for maximizing patient safety and minimizing human error. He also writes and teaches pilot and patient safety principles and error avoidance and is a published author with numerous peer-reviewed medical journal and textbook contributions. Dr. Stahl practices surgery and is also active in writing and industry consulting. He can be reached at [email protected].