The malpractice question with AI is no longer what the doctor did. It is whether a doctor can be sued for not using it.
That would have sounded absurd not long ago. When Reid Hoffman suggested that failing to consult AI for a second opinion could one day be malpractice, he was not making a legal claim but rather pointing to where expectations are drifting.
Medicine has always adapted to new technology, but it has done so systematically. First comes proof that something actually improves care. Then comes adoption. Only after that does it become standard practice. With AI, that sequence is being flipped: expectation is arriving before the evidence, and the pressure on doctors to adapt quickly is mounting.
READ: Sreedhar Potarazu | The AI big bet: Laying off employees in exchange for chips and tokens (April 28, 2026)
There is no question that generative AI can help. It can scan large bodies of literature, summarize guidelines and surface possibilities quickly. In a busy clinic, that kind of assistance matters. But utility is not the same as reliability, and reliability is what defines the standard of care.
Clinicians are already experimenting with tools like OpenEvidence, Doximity GPT and Glass Health in very practical ways. They are not using them to make final decisions, but to sanity-check differential diagnoses, quickly review evolving guidelines or explore edge cases that fall outside routine experience. In many settings, these tools function more like a fast literature search than a true clinical partner — useful for breadth, but still dependent on the physician to filter, interpret and ultimately decide. The advantage, of course, is the massive amount of data that is synthesized at speed.
A recent report on generative AI health platforms under regulatory review makes clear that even regulators have not settled on what these tools actually are. Are they clinical decision support? Are they medical devices? Or something else entirely? That lack of clarity goes directly to liability. If a tool is not clearly defined, it cannot be cleanly regulated, and if it is not cleanly regulated, it cannot be safely folded into what physicians are expected to do.
At the same time, the performance data does not support the idea that AI is ready to act as a dependable second opinion.
A recent study stress-tested OpenEvidence on complex subspecialty cases — not exam-style questions, but real clinical reasoning. The results were sobering. Accuracy dropped to roughly 34 percent in rapid mode and only reached about 40 percent with deeper analysis.
That is not a minor gap. It points to something fundamental.
Most of these tools work by pulling from published research and summarizing it. That approach works when the literature is clear and the patient fits neatly into known categories. Real patients rarely cooperate like that. They present with overlapping problems, incomplete histories and findings that do not line up with a single guideline.
READ: Sreedhar Potarazu | We’re not competing with AI on intelligence—it’s the blind spots (April 27, 2026)
The literature itself is not a perfect source of truth; studies can be flawed, contradictory or simply outdated. When AI treats published work as settled fact, those weaknesses do not disappear. So the idea that not using AI could be malpractice gets ahead of reality.
The legal standard has never required physicians to use every available tool. It asks whether a reasonably competent physician would have acted the same way under similar circumstances. For AI to become part of that expectation, it has to show consistent, real-world benefit.
If anything, physicians are being placed in an uncomfortable position. Use AI, and you may be relying on a tool that is still being evaluated. Do not use it, and someone may argue you ignored an available resource.
The U.S. Food and Drug Administration (FDA) has started to outline how AI-enabled tools might be evaluated, but generative AI does not fit neatly into existing categories. These tools change, update and generate outputs that are not always predictable. That makes it harder to draw clear lines around how they should be used in clinical practice.
Current federal policy has leaned toward speed — encouraging innovation and faster adoption. What it has not done is clearly define how physicians should use these tools in a way that protects both patients and clinicians.
There is also a deeper point that tends to get lost. Medicine is not just about accessing information. It is about judgment, especially when the answer is not obvious. AI can assist with that, but it does not carry responsibility for the outcome. The physician does.
Calling an AI chatbot’s response a “second opinion” suggests a level of accountability that does not exist.
A better question is not whether doctors can be sued for not using AI. It is how to use AI without weakening clinical judgment or creating new legal risks. Tools that incorporate multiple types of data, emphasize reasoning rather than just summarizing papers and keep clinicians actively involved may eventually earn a place in everyday practice. But that shift has to follow evidence.
At some point, it is possible that not using AI could fall below expectations. But that point comes after the technology proves it can consistently help in real clinical settings.
Right now, the greater risk is not that physicians are ignoring AI. It is that too much is being asked of it, too soon.

