The referenced legislation was, of course, to empower Skynet to manage the North American Aerospace Defense Command, removing humans from decision-making. Skynet became self-aware and, recognizing that humans were the single biggest threat to humans, launched a preemptive nuclear strike on Russia, leading to global devastation.
For those not familiar, the above scenario was the basis for the Terminator series of movies and inspired widespread fear of artificial intelligence (AI). As computing became more widespread there were always those who presented the downside or prophesied doom, from the now-famous IBM president who in 1943 reportedly said, “I think there is a world market for maybe five computers,” to the writings of Hubert Dreyfus, including his 1972 book, What Computers Can’t Do. Many technologies can be used for good or evil; the same breakthrough in physics led to nuclear power and the specter of nuclear decimation. For AI, the journey to the singularity, where AI surpasses human intelligence and can think independently, has people like Elon Musk, Steve Wozniak, and the late Stephen Hawking warning us that AI could lead to the end of the human race. Others feel it spells the end of humans racing down blind alleys and into an unnecessary early grave. Setting aside broader philosophical considerations, how is AI involved in medicine today, and what does the future hold?
AI simulates the human ability to solve problems, make decisions, and be autonomously creative by accumulating and understanding data; the most widely discussed application is probably the self-driving car. AI relies on machine learning, whereby an algorithm builds models by being trained on data by a human programmer; to “teach” a computer to play chess, for instance, the rules and legal moves must be programmed in and countless games played. The construct is often described as a neural network because it is modeled on the human brain: multiple nodes, each with multiple connections, capable of sifting through huge amounts of data. Beyond this, deep learning takes AI to the next level, and generative AI can create original text, images, videos, and so on.
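For readers who enjoy looking under the cowling, here is a minimal sketch in Python of the idea described above: a single artificial “neuron” whose connection weights are nudged a little each time it sees an example, until its guesses improve. The scenario, the risk labels, and every number below are invented purely for illustration and are not drawn from any medical source.

# A toy "neuron": weighted inputs squashed into a 0-to-1 score, nudged toward
# better answers on every pass. All numbers are invented for illustration.
import math

# Hypothetical training examples: (hours of exercise per week, cigarettes per day)
# paired with a 0/1 label standing in for "low risk" vs. "high risk".
examples = [
    ((5.0, 0.0), 0),
    ((4.0, 1.0), 0),
    ((1.0, 10.0), 1),
    ((0.5, 20.0), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs, squashed into a 0-to-1 "probability".
    z = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1.0 / (1.0 + math.exp(-z))

# "Training": show the examples many times, nudging each weight a little
# in the direction that reduces the error.
for _ in range(1000):
    for x, label in examples:
        error = predict(x) - label
        weights[0] -= learning_rate * error * x[0]
        weights[1] -= learning_rate * error * x[1]
        bias -= learning_rate * error

print(round(predict((4.5, 0.0)), 2))   # close to 0: the neuron guesses "low risk"
print(round(predict((0.5, 15.0)), 2))  # close to 1: the neuron guesses "high risk"

A real neural network chains millions of such weighted connections across many layers, which is essentially what “deep learning” refers to.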
Most people are familiar with the dreadful poetry and drama generated by AI, and, of course, Hollywood celebrities anxiously watch how AI-generated avatars threaten to replace them in upcoming entertainment. AI promises to automate repetitive and boring jobs, provide faster and more insightful data analysis leading to improved decision-making, reduce human error, and increase productivity by providing 24/7 availability and limiting risk exposure for humans.
We know that smoking, obesity, and lack of physical activity lead to cardiovascular disease, but what other factors may play a role, leading to the plane crash of a heart attack or stroke? This is probably the most immediately compelling use of AI: analyzing huge sets of human data. Identifying previously unrecognized dangerous behaviors may give us the tools to convince people to change and thereby prevent augering in. Additionally, this approach may help us make difficult diagnoses by identifying patterns, much like using synthetic vision in conjunction with an instrument approach at minimums. Applying AI to complex and massive libraries of chemistry and biology might allow drug developers to identify promising molecules and thereby accelerate the development of new medicines.
Heart function is routinely evaluated with an electrocardiogram (ECG), an electrical “map” of one’s ticker, and for years ECG machines have delivered an automated diagnosis; after all, the ECG strip should yield a highly predictable pattern, a perfect set-up for machine analysis. This concept is now being explored with AI, whereby images from various modalities, such as X-ray, CT, MRI, ultrasound, retinal, and other scans, are submitted to AI-driven diagnostic review, and early signs suggest accuracy is dramatically improved.
Medical research relies on comparing one thing with another in population samples, a long and complex activity fraught with problems. Using AI, we may be able to expedite the process, better identify which kinds of people to study, and possibly use surrogate humans as subjects, a fancy way of saying we will deploy data constructs rather than actual human beings, thereby keeping people from harm.
Medical recordkeeping is an arduous but critically important part of healthcare; imagine failing to note a direction from ATC correctly or misreporting a squawk and you get the picture. AI is finding its way into medical recordkeeping in many ways, starting with dictated notes that are “proofread” by the AI engine. I remember the joyous expectation when I first used a Dictaphone after operating, only to collapse with laughter at the errors the transcriber offered up. For instance, after I did a circumcision, in which one removes the prepuce, the typist had written “precipice.” I literally fell over! AI-empowered records may alert healthcare providers to potentially dangerous drug interactions, missed diagnoses, and incorrect approaches, and enable better communication among all involved in a patient’s care. And better coding of diagnoses and treatments will lead to better data for future AI analysis.
Many people now wear digital devices, and AI algorithms applied to the data from these may help us spot early signs of disease. Of course, there are issues of how to manage the information: should the individual or emergency services be informed, and who will pay for such services? After obtaining my smartwatch, I was perpetually getting alerts that I had fallen while cheering and jumping up suddenly at sporting events, and the watch was preparing to send out an SOS until I assured the device I was fine. One can imagine how, with time, the system would “know” that I was at a sporting event from GPS data, my personal behaviors, and prior experience.
On a darker note, there have been examples of people using AI to write medical journal articles, but thus far these are easily spotted and dealt with. Incidentally, I tried using the two most widely available engines to generate this article and found both to be of minimal value.
AI might function well performing various administrative activities, like scheduling screening tests based on age and risk factors, ensuring follow-up appointments are booked, and seeing that aberrant blood work and other test results are acted upon. This would free up healthcare providers and patients alike to focus on other matters.
In 1997, IBM’s Deep Blue computer beat world chess champion Garry Kasparov. In 2016, DeepMind’s AlphaGo program beat world champion Lee Sedol. In 2022, large language models (LLMs), such as OpenAI’s ChatGPT, changed the way we think about AI. And in April 2024, the New England Journal of Medicine reported that an LLM outperformed medical students in Board exams. The potential for AI to improve human health, avoid disease, and accelerate new cures is indisputable, but human oversight is still required. Blind faith is, after all, blind, and the Russian proverb Ronald Reagan delivered to Mikhail Gorbachev during nuclear disarmament talks still rings true: “doveryai, no proveryai,” trust, but verify. Concerns about data privacy, medical ethics, and problems we have yet to conceive of must be considered. As AI improves, we will doubtless see ever more astonishing uses that change the way we practice medicine.
So here’s an AI-generated joke to finish: I tried to start an airline with my AI assistant, but it kept crashing. Turns out, it wasn’t very AI-rworthy.
Fly well!
Please consider subscribing to the podcast I record with my old medical school colleague and dear friend, Dr. Nigel Guest. You can subscribe free at www.jointhedocs.com on either Spotify or Apple Podcasts and enjoy social media videos on:
YouTube: @JoinTheDocs
Instagram: @JoinTheDocs
TikTok: @JoinTheDocs
Facebook: @JoinTheDocs
Twitter: @JoinTheDocs
You can send your questions and comments to Dr. Sackier via email: [email protected]