Fast, confident, and increasingly persuasive, AI is arriving in ophthalmology as if it were a new colleague. The technology can interpret images, suggest plans for cataract surgery, track patients remotely, and summarize patterns across enormous data sets. For eye surgeons, the opportunity is obvious: greater consistency, fewer missed findings, and potentially more time for the human side of care. The risk we face is equally clear: the more seamless AI becomes, the easier it will be to let the technology quietly redefine clinical responsibility.
In cataract planning, AI could help reduce refractive surprises, refine toric decisions, and personalize IOL selection beyond what static formulas allow. Planning is also where AI can mislead us. If a model’s training data do not reflect our devices, techniques, patient mix, and real-world variability, its recommendations may be precise but inappropriate.
Adaptive systems introduce another tension: they change over time. A tool that performed well last quarter may have drifted by the time a decision is made, without anyone noticing.
Diagnostic AI offers a major upside in screening, triage, and the detection of disease progression, especially where patients’ access to care is limited. Diagnosis, however, is not pattern recognition alone; it is also context and consequence. If uncertainty is hidden behind definitive outputs, or if alerts multiply into background noise, AI could shift us toward faster rather than better decisions. Trust might also shift. Will patients believe us, the software, or whichever appears more certain?
Remote monitoring extends care beyond the clinic, enabling earlier detection and fewer unnecessary visits. It can also create a new workload, with continuous data streams that demand interpretation, escalation pathways, and medicolegal clarity. Without thoughtful design, remote monitoring risks becoming surveillance without service, and it could widen inequity if the benefits accrue mainly to digitally fluent patients.
Big data can reveal trends and averages, but it cannot capture values. It cannot measure what glare means to a night driver, what independence means to an older patient, or how risk tolerance changes when a person’s life circumstances change. If we let data replace dialogue, we gain efficiency and lose meaning.
AI will not determine ophthalmology's future on performance alone. Governance will. The goal is not simply to adopt AI but to shape it so that it strengthens accountability, makes uncertainty visible, and protects what no algorithm can replace: patients' trust in a surgeon who owns the decision.