As artificial intelligence (AI) becomes increasingly integrated into healthcare services, there are important lessons that the medical profession can learn from the aviation industry, which experienced a widespread erosion of pilots' manual flying skills after the adoption of autopilot.
While calls for medicine to learn from aviation are not new, a group of clinicians and flight safety experts has collaborated to move the conversation beyond familiar analogies, looking at where the comparison is meaningful and where it fails in order to examine how automation has reshaped pilot expertise, and what this might mean for clinicians working alongside AI.
Clinical researchers from University College London Institute of Ophthalmology and Moorfields Eye Hospital NHS Foundation Trust worked with the flight safety department of Lufthansa on recommendations for future-proofing the clinical workforce and improving patient outcomes. Published in npj Digital Medicine, the perspective piece argues that healthcare must shift from viewing AI as an "autopilot" to embracing it as a "digital copilot".
The Automation Paradox
The authors highlight the "automation paradox", in which increasing automation erodes human skills and awareness. In aviation, this led to the term "children of the magenta line", describing a generation of younger pilots who became so dependent on the magenta autopilot navigation line on their screens that they lacked the skills to fly manually.
Lead author Ariel Ong said: "Medicine risks repeating aviation's early automation mistake of placing too much faith in the machine while losing critical skills. Aviation learned that the goal was never to replace the pilot, but to enable rigorous simulation training. We argue for the need to embrace that same philosophy to ensure clinician judgement is not eroded as AI becomes increasingly embedded in healthcare."
Senior author Josef Huemer commented: “Medicine has already borrowed heavily from the aviation industry. For example, surgical checklists, safety time-outs, human factors simulation training, and a culture of incident reporting and analysis that allows healthcare workers to feel safe reporting errors without retribution, all have their origins in flight safety. With AI now poised to reshape medical workflows, we should consider how we can learn lessons on automation from the aviation industry to avoid making the same mistakes.”

Key Recommendations
Informed by lessons from flight safety, the authors make five recommendations.
1. Benchmark clinicians and monitor unaided performance
Institutions should assess real-world clinician performance without AI assistance and set minimum unaided practice requirements after AI deployment, just as pilots must maintain manual flying skills during routine flights, with ongoing monitoring to detect overreliance.
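The paper frames this as a policy recommendation rather than a technical specification, but a minimal sketch can make the idea concrete. The Python below is purely illustrative: the class name, baseline, drift threshold, and window size are all assumptions, not anything the authors propose. It tracks a clinician's rolling unaided accuracy against a pre-deployment baseline and raises a flag when performance drifts.

```python
from collections import deque

class UnaidedPerformanceMonitor:
    """Purely illustrative sketch, not from the paper: compare a
    clinician's rolling unaided accuracy against a pre-AI baseline."""

    def __init__(self, baseline_accuracy, tolerated_drop=0.05, window_size=50):
        self.baseline_accuracy = baseline_accuracy  # measured before AI deployment
        self.tolerated_drop = tolerated_drop        # assumed acceptable drift
        self.window_size = window_size
        self.results = deque(maxlen=window_size)    # most recent unaided cases

    def record_case(self, correct):
        """Log one case read without AI assistance (True = correct call)."""
        self.results.append(bool(correct))

    def current_accuracy(self):
        """Rolling unaided accuracy, or None until the window is full."""
        if len(self.results) < self.window_size:
            return None
        return sum(self.results) / len(self.results)

    def overreliance_flag(self):
        """True when unaided accuracy has drifted below the baseline."""
        acc = self.current_accuracy()
        return acc is not None and acc < self.baseline_accuracy - self.tolerated_drop

# Example: an institution schedules periodic unaided reads and checks the flag.
monitor = UnaidedPerformanceMonitor(baseline_accuracy=0.90)
for outcome in [True] * 40 + [False] * 10:   # 80% over the last 50 cases
    monitor.record_case(outcome)
if monitor.overreliance_flag():
    print("Unaided accuracy below baseline: consider refresher training")
```

In practice any such monitor would need far more care (case-mix adjustment, statistical uncertainty, privacy safeguards); the point is only that "ongoing monitoring to detect overreliance" can be an auditable process rather than an aspiration.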
2. Prioritise independent reasoning in early training
For younger clinicians trained in AI-rich environments, the risk shifts from deskilling to "never skilling" or "mis-skilling." Evidence suggests learners develop shallower knowledge with AI tools than through self-directed learning. Early training should build independent reasoning before automation is introduced, allowing AI to scaffold rather than substitute skill development.
3. Ensure clinicians understand AI limitations
Medical schools should teach AI literacy and technical competence in using AI tools. These skills should then be maintained and enhanced through professional development so clinicians are equipped to identify AI bias and other shortcomings.
4. Introduce scenario-based simulation training
Mandatory simulator training that recreates AI failure scenarios should be adopted, similar to aviation practice. This should extend beyond traditional surgical simulation to include dedicated simulation environments for non-surgical, end-to-end clinical workflows where AI is or might be used. In addition, in routine AI-assisted settings, unannounced "surprise breaks" from AI can assess a clinician's readiness to operate safely without automation.
5. Cultivate operational understanding
Clinicians should have a fundamental grasp of how an AI tool arrives at a decision and know when to override it. This mirrors aviation's "golden rule": understand the automated system at all times. When that understanding is lost, reduce automation step by step until situational awareness is restored.
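The "golden rule" here is a principle rather than an algorithm, but a toy sketch may help show what stepping down one level at a time could look like in software. Everything below, including the tier names and the workflow they describe, is a hypothetical illustration, not a scheme from the paper.

```python
from enum import IntEnum

class AssistLevel(IntEnum):
    """Hypothetical tiers of AI assistance, from most to least automated."""
    AUTONOMOUS_TRIAGE = 3      # AI orders the worklist and pre-fills reports
    SUGGESTED_FINDINGS = 2     # AI highlights findings; clinician confirms
    ON_DEMAND_SECOND_READ = 1  # AI runs only when the clinician asks
    UNAIDED = 0                # fully manual workflow

def step_down(level):
    """Aviation-style rule: when you no longer understand what the
    automation is doing, drop one level of assistance at a time
    until situational awareness is restored."""
    return AssistLevel(max(level - 1, AssistLevel.UNAIDED))

# Example: a clinician confused by an unexplained AI suggestion steps down
# one tier rather than abandoning assistance entirely.
level = AssistLevel.SUGGESTED_FINDINGS
level = step_down(level)
assert level is AssistLevel.ON_DEMAND_SECOND_READ
```

The design choice mirrors the aviation rule quoted above: automation is reduced incrementally, preserving as much useful assistance as can be safely understood, rather than switched off wholesale.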
The authors conclude that regulation should evolve beyond certifying AI as a medical device to address competence, accountability, and safety within the human-AI partnership. Humans and AI should ideally function as “co-intelligent” partners, combining human contextual reasoning with algorithmic speed and pattern recognition. Patients, too, favour this approach, consistently responding in surveys that they prefer clinicians to lead in decision-making, with AI assisting.
Read the recommendations in full: Flight rules for clinical AI: lessons from aviation for human-AI collaboration in medicine | npj Digital Medicine

