It’s called “heuristics.” Looking into how our brains perform this amazing process prompted questions about artificial intelligence (AI), since it’s been taking over so many domains that used to be reserved for us mere mortals. AI stands at the interface of human and machine problem solving, and its proponents believe it can match our heuristic abilities, or even surpass them, in the future. The question for us pilots is how much we should rely on this technology for critical decisions we face in our own cockpits—just maybe not on the surface of the moon. Does AI have the kind of ingenuity the human brain does, the kind that can save us in our own airborne emergencies? Or is it just the mother of all search engines, compiling and mathematically “weighing” the options it finds on the internet the way we search our own memories for unique solutions?
The history of AI goes way back, a lot further into our past than I ever knew, and is a shining example of exactly what we covered last month—man’s incredible ingenuity. The first speculations about a “machine” that could think on its own date all the way back to Gulliver’s Travels, Jonathan Swift’s 1726 fantastic tale of imaginary journeys. Swift’s imaginary machine was designed to address the issue that “everyone knows how laborious the usual method is of attaining to arts and sciences; whereas, by this machine’s contrivance, the most ignorant person, at a reasonable charge, and with a little bodily labor, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.” It took centuries to actually construct anything remotely like Swift’s idea. In 1914, Spanish engineer Leonardo Torres y Quevedo demonstrated the first chess-playing machine, “El Ajedrecista,” which used electromagnets and was fully automated to select board moves. Thirty years later, one of the unsung heroes of the Allies’ defeat of the Nazis in WWII was British mathematician Alan Turing, whose electromechanical codebreaking machines cracked Nazi secret codes at Bletchley Park. After the war he designed the Automatic Computing Engine, one of the first designs for a stored-program computer, and in 1950 he published “Computing Machinery and Intelligence,” which predicted that “by the end of the century one will be able to speak of machines thinking without expecting to be contradicted.” The century ended, and neither Jonathan Swift’s nor Dr. Turing’s answer to our question, “Can machines really think?”, was far off.
The term “artificial intelligence” was coined just five years after Turing’s prediction, in a 1955 proposal for a summer research workshop at Dartmouth that gave the developing technology of thinking computers its name. Countering the argument that computers could someday be programmed to “think,” Hubert Dreyfus published “Alchemy and Artificial Intelligence” in 1965, arguing that the human mind operates fundamentally differently from computers. Just as I’ve wondered, he predicted limits to AI progress due to the challenges of replicating human intuition and understanding, and he set off a storm of debate about AI’s practical limits. “Neural networks” were the answer to this skepticism, designed specifically to imitate the decision-making and ingenuity capacity of the human brain. A neural network is a machine learning model that makes “decisions” by mimicking the heuristics of our own thinking with artificial “neurons” called “nodes.” These nodes work together to take in a problem, weigh how well the patterns learned from their training data (much of it gathered from the internet) might fit, and then push out an answer.
Neural networks are made up of layers of these nodes—an input layer, hidden central layers, and an output layer. Each node connects to others and carries its own set of weights and an activation threshold, tuned during the network’s training. It works much like the way we heuristically weigh options to solve problems: if the weighted sum of an individual node’s inputs rises above its threshold value, that node is activated, sending data to the next layer of the network—but if it doesn’t, no data is passed along. Revisiting the details of heuristics we talked about last month, the keys to our methods of ingenuity center on mental trial and error drawn from our previous experiences and “counterfactual reasoning.” It’s our way of “weighing” the options we’ve considered. This is what AI has copied, but it’s also where AI has us beat. AI has the capacity to compare gigabytes of data gathered from the internet for patterns and usage frequency, which its trained weights translate into the “likelihood” of a match. It then “war games” all of these options for predictions and comes up with conclusions in seconds. This is also why AI eats up so much power, both computing and electrical: training and running these networks means pushing enormous amounts of data through millions of weighted nodes hunting for patterns. We mere mortals have access to neither that rich database nor the cerebral computing power to process it.
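To make that concrete, here is a minimal sketch in Python of a single node. Everything in it (the inputs, the weights, the threshold) is invented for illustration; in a real network the weights are learned from training data, not hand-set like this.

```python
# A minimal sketch of one artificial "node" (neuron).
# The inputs, weights, and threshold here are invented for illustration;
# in a real network the weights are learned from training data.

def node_fires(inputs, weights, threshold):
    """Weigh the inputs and 'fire' only if the sum clears the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

# Three hypothetical input signals and how much this node "trusts" each one.
inputs = [0.9, 0.2, 0.7]
weights = [0.6, 0.1, 0.4]

if node_fires(inputs, weights, threshold=0.5):
    print("Node activated: data passes to the next layer")
else:
    print("Below threshold: nothing is passed along")
```

Stack enough of these simple weigh-and-fire units together and surprisingly rich behavior emerges.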
The more layers of nodes that are stacked on each other, the greater capacity the network has for problem solving, and theoretically the more accurate its conclusions (a toy demonstration follows below). This ability is terrific for pattern recognition (think facial recognition), photo and word scanning, and data crunching. AI can compose music by training on vast amounts of the music on the web and blending what it finds; the same goes for writing an article or book. It grades and compresses everything written on the internet and then spits out a hybrid of all the info it has found. So, do computers running AI apps actually possess “ingenuity” as we defined it last month? Or are they just some super-juiced-up search engine that uses our heuristic methods of problem solving, only better, since they process gigabytes of information? It’s hard for our brains to compete against a machine that can scour the entirety of human knowledge, or at least the uncountable zillions of terabytes of data on the internet, in a couple of seconds to come up with an answer to our questions.
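Here is a hedged sketch of why that stacking matters, using the classic “exclusive or” (XOR) test case: no single threshold node, however you set its two weights, can compute XOR, but two stacked layers of the very same simple nodes can. The weights below are hand-picked for the demonstration, not learned.

```python
# Why depth matters: a single threshold node cannot compute XOR,
# but two stacked layers of the same simple nodes can.
# Weights and thresholds are hand-picked for illustration, not learned.

def node(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

def xor_two_layer(a, b):
    # Hidden layer: one node acts like OR, the other like NAND.
    h_or = node([a, b], [1, 1], 0.5)       # fires if either input is on
    h_nand = node([a, b], [-1, -1], -1.5)  # fires unless both inputs are on
    # Output layer: AND of the two hidden nodes gives XOR.
    return node([h_or, h_nand], [1, 1], 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_two_layer(a, b)}")
```

That is the toy version of the capacity argument: each added layer lets the network combine the judgments of the layer before it, which is how depth turns simple weigh-and-fire nodes into pattern recognizers.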
As it stands now, AI does not seem to be able to “intuit” an answer using abstract reasoning, creativity, and ingenuity, like we can do to solve problems. What it beats us at is “knowing” the way everyone else on the planet solved a similar problem and assigning a statistical probability that the same fix will work for our current need. Could AI have solved the problems we talked about last month and found the felt-tip pen in Buzz Aldrin’s pocket, or slingshot Jim Lovell and Apollo 13 around the back of the moon? Maybe someday, but there’s no doubt that we have entered an “algorithmic age” that is increasingly defining, and perhaps encroaching on, virtually everything we do. Let’s just hope that someone comes up with something for AI analogous to Isaac Asimov’s “Three Laws of Robotics.”
AI has been engineered into almost everything in our cockpits, our phones, and our lives. Use it to its fullest capacity to help find things you would never have thought of without access to terabytes of internet information. But be careful: there’s a downside to AI too, and no doubt that GIGO, “garbage in—garbage out,” infects it. Dr. Kate Crawford addresses the problem in Atlas of AI: “All classification systems are embedded in a framework of societal norms, opinions and values, even in the supposedly hyper-rational artificial intelligence programs we use today.” We’ve already seen all kinds of reports of AI producing absurd answers to simple questions, since AI ingests the dumb stuff people post on the internet just as easily as it finds correct information. This is especially true when the incorrect information is repeated frequently enough to increase the algorithmic “weight” its nodes assign to it. You can’t abdicate your decision-making responsibility to AI, especially in the cockpit, where your “muscle memory” is such a critical part of safety. As Ronald Reagan once famously quipped, “Trust, but verify.” Use AI for all its upsides, but don’t let it use you. Think for yourself, and as always, FLY SAFE!