Judging from the current commercials for AI-enabled devices, their principal utility is to camouflage our own thickheadedness by feeding us the names of people we should remember but forgot, summarizing text we should have read but didn't, or rewriting emails so inappropriate we shouldn't have written them in the first place. Presumably, someone thinks these use cases will sell cellphones, but some of us find a different set of use cases more interesting.
While the term "Artificial Intelligence" (AI) is in vogue, other terms coined around the same era are more narrowly focused on data analytics. "Machine Learning" (ML) and "Deep Learning" (the latter closely tied to neural networks) can be seen as subsets and underpinnings of AI. Where AI broadly denotes techniques that make machines mimic human intelligence, ML focuses on processing large quantities of data to yield insights and suggest actions without being explicitly preprogrammed.
It is easy to imagine the practical applications of ML in healthcare, which has both the availability of large datasets and an impetus to make treatment decisions based on those data. At the same time, a maxim often quoted in health research provides one of the most succinct cautions regarding the limits of ML: "correlation does not imply causality." While modern-day ML, powered by our massive computational assets, can spot patterns and associations hidden in petabytes of data, those patterns are just that: associations in a dataset. Even the mind-boggling capacity of our current computational processes can't derive causality.
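The maxim is easy to demonstrate. In the following sketch (an illustration of my own, not drawn from any particular study), two variables are strongly correlated only because both are driven by a hidden third factor, the classic "confounder"; no pattern-spotting on x and y alone could reveal that neither causes the other.

```python
import numpy as np

# Toy illustration: x and y share a hidden common cause z,
# so they correlate strongly even though neither causes the other.
rng = np.random.default_rng(0)

z = rng.normal(size=10_000)             # hidden confounder
x = z + 0.5 * rng.normal(size=10_000)   # driven by z, not by y
y = z + 0.5 * rng.normal(size=10_000)   # driven by z, not by x

r = np.corrcoef(x, y)[0, 1]
print(f"correlation between x and y: {r:.2f}")
```

An algorithm handed only x and y would report a strong association; only knowledge of the data-generating process (here, z) reveals that intervening on x would do nothing to y.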
Or can they?
A term coined much more recently than AI or ML is "Causal AI." The concept is closely associated with the work of Judea Pearl, as represented in his publication The Book of Why. In that book and elsewhere, he outlines statistical methodology to empower causal inference in new and powerful ways. Again, the implications for fields like epidemiology and healthcare in general are clear. A simple Google search returns a panoply of articles and applications of this concept, with compelling examples of how it is shaping healthcare decision-making today.
This is neither an endorsement nor a refutation of the claim that the Bayesian Networks and counterfactuals used to implement Causal AI actually achieve their aims; it is more an invitation to consider the concept itself. Can computers truly establish causality? Can they be made to go deeper than pure associations in data, or, as proposed in the article "Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It", does there remain "an unbridgeable gap (a “humanity gap”) between the engineered problem-solving ability of machine and the general problem-solving ability of man…"?
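To make the debate concrete, here is a minimal sketch of the kind of move Pearl's framework licenses: the "backdoor adjustment," which converts an observational probability into an interventional one when the confounder is known. All the probability numbers below are invented for illustration; the point is only that P(Y | X) and P(Y | do(X)) come apart.

```python
# Toy binary model with confounder Z -> X and Z -> Y (numbers are made up).
p_z = {0: 0.5, 1: 0.5}                        # P(Z = z)
p_x_given_z = {0: 0.2, 1: 0.8}                # P(X = 1 | Z = z)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,
                (1, 0): 0.3, (1, 1): 0.7}     # P(Y = 1 | X = x, Z = z)

# Observational: P(Y=1 | X=1), where conditioning on X=1 skews Z.
p_x1 = sum(p_x_given_z[z] * p_z[z] for z in (0, 1))
p_z_given_x1 = {z: p_x_given_z[z] * p_z[z] / p_x1 for z in (0, 1)}
obs = sum(p_y_given_xz[(1, z)] * p_z_given_x1[z] for z in (0, 1))

# Interventional: P(Y=1 | do(X=1)) via backdoor adjustment,
# which averages over Z's natural distribution instead.
do = sum(p_y_given_xz[(1, z)] * p_z[z] for z in (0, 1))

print(f"P(Y=1 | X=1)     = {obs:.3f}")
print(f"P(Y=1 | do(X=1)) = {do:.3f}")
```

The two quantities differ, which is the whole force of the approach: the machinery is just arithmetic over a model someone supplied. Whether a machine can arrive at that causal model on its own is exactly the question the "humanity gap" article presses.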
I'll admit to being fascinated by these types of questions, but I might also suggest we ought to consider the correlate question of how this technology is employed. As highlighted in a recent open forum and Project Management Symposium session, Dana-Farber already leverages the power of AI in myriad ways, from directing patient treatment to instituting our own generative AI tool. AI is helping DFCI patients in practical ways right now, and that will undoubtedly continue as we strive to create an "AI-Enabled DFCI Cancer Hospital." When you contrast that with the mimicry and deception implicit in acting like we know people we don't know, or have read something we didn't read, it may not be the intelligence of the AI that we should be concerned about, but our own intelligence in how we use it.