
Artificial intelligence (AI) models show great promise for glaucoma detection using retinal images, but their ‘black box’ nature is a major barrier to clinical adoption. Clinicians are often hesitant to trust a diagnosis without understanding the reasoning behind it. This paper provides a comprehensive systematic review of explainable artificial intelligence (XAI), a field dedicated to making machine learning models transparent and interpretable. The authors analysed 56 selected studies to map the current landscape of XAI in glaucoma assessment. The review details the XAI techniques, such as LIME and Grad-CAM, being applied to fundus photographs and optical coherence tomography (OCT) scans. These methods support clinicians with visual aids, such as heatmaps, that show which regions of an image the model relied on to reach its conclusion, thereby increasing trust. However, the authors also highlight significant gaps in the current research: a persistent trade-off between model accuracy and interpretability, a lack of robust clinical validation, and a failure to incorporate longitudinal data for progression analysis. The paper is a valuable resource for clinicians and researchers, offering a clear overview of both the potential of AI in glaucoma care and the critical challenges that must be overcome before these tools can be safely and widely implemented.
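
To illustrate the kind of heatmap explanation described above, the sketch below shows a minimal Grad-CAM implementation in PyTorch. It is an assumption-laden illustration, not code from any of the reviewed studies: the ResNet-18 backbone, the two-class (glaucoma / no glaucoma) head, the grad_cam function name, and the random tensor standing in for a preprocessed fundus image are all hypothetical choices made for the example.

import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical classifier: ResNet-18 with a two-class head (glaucoma / no glaucoma).
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Store the feature maps of the hooked layer during the forward pass.
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    # Store the gradient of the class score with respect to those feature maps.
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; its feature maps drive the heatmap.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a heatmap in [0, 1] with the same spatial size as `image` (1, 3, H, W)."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]                      # (1, C, h, w)
    grads = gradients["value"]                       # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]                                 # (H, W) heatmap

# A random tensor stands in for a preprocessed fundus image in this sketch.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))

In practice, the resulting heatmap is overlaid on the original fundus photograph or OCT slice so that clinicians can check whether the model attended to clinically meaningful structures, such as the optic disc and cup, rather than to artefacts.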

