Contemporary medical AI systems exhibit a critical vulnerability: they deliver confident predictions without mechanisms to express uncertainty or acknowledge limitations, leading to dangerous overreliance in clinical settings. This paper introduces the BODHI (Bridging, Open, Discerning, Humble, Inquiring) framework, a dual-reflective architecture grounded in two essential epistemic virtues, curiosity and humility, as foundational design principles for healthcare AI. Curiosity drives systems to actively explore diagnostic uncertainty, seek additional information when faced with ambiguous presentations, and recognize when training distributions fail to match clinical reality. Humility provides complementary restraint, enabling uncertainty quantification, boundary recognition, and appropriate deference to human expertise. We demonstrate how these virtues function synergistically in a dynamic feedback loop, preventing both reckless exploration and excessive caution while supporting collaborative clinical decision-making. Drawing on psychological theories of curiosity and cross-species evidence of epistemic humility, we argue that these capacities represent fundamental biological design principles essential for systems operating in high-stakes, uncertain environments. The BODHI framework addresses systemic failures in medical AI deployment, from biased training data to institutional workflow pressures, by embedding uncertainty awareness and collaborative restraint into foundational system architecture. Key implementation features include calibrated confidence measures, out-of-distribution detection, curiosity-driven escalation protocols, and transparency mechanisms that adapt to clinical context. Rather than pursuing algorithmic perfection through pure optimization, we advocate for human-AI partnerships that enhance clinical reasoning through mutual accountability and calibrated trust. This approach represents a paradigm shift from overconfident automation toward collaborative systems that embody the wisdom to pause, reflect, and defer when appropriate.
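
In its simplest form, the curiosity-driven escalation the abstract describes could combine a calibrated confidence score with an out-of-distribution score to decide whether the system reports a prediction, requests more information, or defers to a clinician. The sketch below is a hypothetical illustration, not code from the paper; the thresholds, function names, and action labels are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    prediction: str
    confidence: float  # calibrated probability in [0, 1]
    ood_score: float   # higher means further from the training distribution
    action: str        # "report", "ask_for_more_data", or "defer_to_clinician"


# Illustrative thresholds; in practice these would come from calibration studies.
CONFIDENCE_FLOOR = 0.85
OOD_CEILING = 0.30


def escalate(prediction: str, confidence: float, ood_score: float) -> Assessment:
    """Decide whether to report a prediction, request more information
    (curiosity), or defer to a human expert (humility)."""
    if ood_score > OOD_CEILING:
        # Input looks unlike the training data: defer rather than guess.
        return Assessment(prediction, confidence, ood_score, "defer_to_clinician")
    if confidence < CONFIDENCE_FLOOR:
        # Ambiguous but in-distribution case: actively seek more information.
        return Assessment(prediction, confidence, ood_score, "ask_for_more_data")
    # Confident, in-distribution case: report, with the confidence attached.
    return Assessment(prediction, confidence, ood_score, "report")


if __name__ == "__main__":
    print(escalate("pneumonia", confidence=0.92, ood_score=0.10))  # report
    print(escalate("pneumonia", confidence=0.70, ood_score=0.12))  # ask for more data
    print(escalate("pneumonia", confidence=0.95, ood_score=0.55))  # defer to clinician
```

The point of such a gate is that high confidence alone never suffices: an out-of-distribution input routes to a clinician even when the model is numerically certain.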

Original publication

DOI

10.1371/journal.pdig.0001013

Type

Journal article

Publication Date

1 January 2026

Volume

5