A new publication in Nature Medicine discusses 'Building a code of conduct for AI-driven clinical consultations'.

The diversity of AI applications in medicine is growing at an accelerating rate, a trend set to continue as the technology develops. This diversity poses a significant challenge to developers, validators, and regulators, who work best with reproducible and systematic methods: how can the standards used for one application translate to another?

For applications with some level of autonomy, we suggest looking outside medicine to draw on lessons that have led to successful implementation. Specifically, the autonomous driving industry has benefited from agreed definitions of core behaviours, levels of autonomy, and safety standards. These allow applications at different levels of autonomy to be compared fairly, on tasks grounded in real-world activity.

Many educational and observational frameworks for clinical work have been developed, and most emphasise both technical and interpersonal skills: 'cure' and 'care' behaviours. Drawing on these themes may help stakeholders define what medical AI could and should do in terms of a finite number of core behaviours, modified by clinical context and patient factors. This can pave the way to consistent evaluation, implementation, and governance, the lack of which is currently a significant barrier.

To read the full article on this topic, click here to be redirected to Nature Medicine.