A patient came to me a few weeks ago with a new chief complaint. A “free” dermatology app, driven by artificial intelligence, told him that one of his moles was concerning and should be evaluated by a doctor. He’d snapped a quick picture of it and some neural network spat out that it was “80% concerning.” The app then recommended that he pay $40 for one of its physicians to review the image. Since he was an established patient with me, he came to my office rather than paying the app. In so doing, he inadvertently put me in a tough ethical situation.
The mole looked benign, and in fact just a month prior I’d evaluated it as part of his annual skin screening. It looked fine then and it still looked fine. The color was gray but uniformly and symmetrically so (reassuring). It was raised with a cobblestoned appearance (which suggests it’s been around for a while), was consistent with other moles on his body, was typical for his ethnicity, and had no other concerning or remarkable features. My diagnosis hadn’t changed, but my thinking had.
What was this algorithm? How does it work? Does it know something I don’t? Did it pick up on the gray color and overreact? What does “80% concerning” even mean? Is this just a money-making ploy? What if I’m wrong and it turns out to be a skin cancer? Am I a bad doctor, after all? Will he be okay? Will I get sued if I make the wrong choice?
These questions ran through my head and out of my mouth during my visit with this patient. A younger gentleman wearing a t-shirt from one of the big tech companies, he was eager to engage in this line of discussion. He'd come out of interest rather than concern, which made for a safe interaction. I encouraged a biopsy for due diligence, and after a long discussion about my newfound medicolegal bind and the future of dermatology, he politely declined. I photographed the lesion and asked him to come back in three months for re-evaluation.
The whole thing left a sour taste in my mouth: had the app's teledermatologist deemed this guy's mole concerning, he'd still have needed a biopsy (which can't be done over the phone). The only difference is that he would have paid $40 out of pocket to be told he needs to see a dermatologist (me). Alternatively, had the teledermatologist recognized it as a normal mole (which was obvious), he would have paid $40 for the mistake of taking a picture of his mole.
It seems these apps (this app, at least) make money on the perceived value of the neural network; the AI functions as a marketing tool to get you past a paywall to see a remote dermatologist who doesn't take insurance. The app's designers hope that already-insured patients inherently trust the neural network more than their local dermatologist and can then be frightened by a non-specific metric ("80% concerning") into parting with $40 instead of seeing me and having their insurance cover it.
The patient had just seen me, but this new tool suggested that perhaps I had missed a life-threatening health issue. It then used that sense of betrayal to lure my patient into making a purchase, despite the fact that he has low-deductible insurance. This business model doesn't just erode trust in medical professionals; it profits from that erosion and hurts patients' wallets.
Because he came to me instead of taking the bait, my patient's insurance was set to pay my office a total of about $150 for the visit and the follow-up in three months (in addition to the charge for the mole screening a month prior). None of this was necessary, and I'm sure the app's designers feel cheated out of that cash. Meanwhile, he told me he plans to download ten more derm AI apps and see what they say. Great. Are there really that many? A quick perusal of the Android App Store told me yes, definitely.
While this interaction may seem like a novelty, it's felt inevitable for the last few years. During my residency, a paper out of Stanford showed that machine-learning algorithms can rival dermatologists in diagnosing skin cancers from clinical images. It made quite a splash at the time (2017), and I remember a heated discussion in our monthly journal club that reverberated through the rest of our day treating patients. Many of the residents were excited about the possibilities of this new technology, while many attendings seemed skeptical and defensive. There was a palpable fear regarding what it would mean for our patients, our specialty, and our livelihoods.

The more accepting residents (myself included) talked about how such technology could be used in clinical settings to augment physician-level clinical decision-making, or be rolled out in rural primary care settings, where dermatologists often aren't immediately available, to triage cases. The more skeptical among us worried that AI technology would be misused by patients, drive "scope creep" (nurse practitioners and physician assistants performing increasingly doctor-like work), or boost healthcare spending by artificially inflating the frequency of routine biopsies.
Flash forward a year: I was fresh out of residency when a connection from a past life reached out to discuss a similar algorithm intended for use in derm offices. This time it was a neural network that analyzes dermoscopic images taken by a provider during office visits. He wanted my opinion on whether it would be useful, and I responded with a meandering, noncommittal answer touching on rural health clinics, clinical decision-making, and ways the technology could be misused to increase health spending. He (predictably, in retrospect) responded with great interest in its potential to drive up biopsy numbers, which would increase the profitability of the system. Biopsies mean revenue for hospitals and doctors, and AI means those decisions could be blamed on a computer instead of a trigger-happy doctor. Validate the neural network, and you tap untold riches.
It should have come as no surprise, then, that my first interaction with a derm AI came from a patient who'd been asked to pay out of pocket to discuss an "80% concerning" mole, deepening my growing fear that someday we'll squander the miracle of artificial intelligence. Of course we would use this technology to squeeze a few more pennies out of Millennials conditioned to trust computers over people. Look what we did within a few short years of learning to split the atom.
Flash forward to last week: that patient came back. He decided he wanted the biopsy after all. Part of me wondered whether, deep down, he trusted the machine more than me. Given everything we'd discussed, I felt compelled to do the procedure. After a few days of waiting, I had confirmation that I do know what I'm talking about, after all. The verdict? "Intradermal melanocytic nevus with superficial congenital features," AKA a normal mole, likely present since birth. The patient traded a scar for reassurance, and his insurance will pay my office about $250 (and UCSF dermatopathology another $200) for all our trouble.
So now I'm left wondering what'll come next. Perhaps this "80% concerning" mole just happened to fall into the other 20%. Maybe this is the first iteration of a technology destined to become far more powerful than my mind; perhaps it will start picking up on morphological features I don't notice and making rock-star diagnoses. Someday the AI could be more than a marketing ploy, and the paywall might be worth scaling. Something tells me, though, that this patient was a harbinger of things to come: an industry tailor-made to increase costs and erode trust in the human mind. I hear atoms splitting in the distance.
At the end of the day, AI is a tool. And no matter its marvel, no matter its potential, every tool operates as an extension of the mind that wields it. And our minds, helming these mortal meat-sacks we call bodies, are beholden to the primitive parts of our brains that helped us navigate the wilds before we had fire, let alone wifi. In tools, we see the potential to escape the limitations of our biology, the ability to build a better world, and the chance to outcompete all the other talking apes on the block. Faced with a world of possibility, we're left with the choice of how to use the tools we create.
That’s hardly a critique; tools propelled us out of Eden and into a deeper understanding of ourselves and of nature. But they also potentiate the darkest impulses of the psyche, many of which are rooted in the desire to accumulate resources.
And so, my first professional contact with dermatologic AI was one of profiteering. I await Second Contact.