Abstract
Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components to achieve trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to foster AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It focuses on real-world challenges in medicine that are often underrepresented in theoretical discussions to propose a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected. 
Trustworthiness was shown to be not a feature but a relational process involving humans, their expertise, and the broader social and institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated on reliability and credibility, yet trust fundamentally rests on human connections. The article underscores the need to develop AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI life cycle, from conception and technical development to clinical validation and real-world deployment.
| Original language | English |
|---|---|
| Article number | e71236 |
| Pages | 1-13 |
| Number of pages | 13 |
| Journal | Journal of Medical Internet Research |
| Volume | 27 |
| DOI (permanent links) | |
| Status | Published - 2025 |
| OKM publication type | A2 Review article in a scientific journal |
Funding
This study was supported by the European Union’s Horizon 2020 projects EuCanImage (grant agreement No 952103) and INTERVENE (grant agreement No 101016775), as well as by the Innovative Medicines Initiative 2 Joint Undertaking project Bigpicture (grant agreement No 945358) and BIOMAP, which has received funding from the Innovative Medicines Initiative 2 Joint Undertaking (JU; grant agreement No 821511). The JU receives support from the European Union’s Horizon 2020 research and innovation program and EFPIA. This publication reflects only the author’s view and the JU is not responsible for any use that may be made of the information it contains. PM received funding from the Research Council of Finland (Finnish Center for Artificial Intelligence FCAI, and grants 352986, 358246, NextGenerationEU).
Projects
- 3 Finished
- CLISHEAT/Marttinen: Green and digital healthcare
  Marttinen, P. (Principal investigator), Gao, Y. (Project member), Moen, H. (Project member) & John, T. (Project member)
  EU The Recovery and Resilience Facility (RRF)
  01/01/2023 → 31/12/2025
  Project: RCF Academy Project targeted call
- INTERVENE: International consortium for integrative genomics prediction
  Kaski, S. (Principal investigator), Moen, H. (Project member), Cui, T. (Project member), Raj, V. (Project member), Safinianaini, N. (Project member) & Wharrie, S. (Project member)
  01/01/2021 → 31/12/2025
  Project: EU H2020 Framework program
- Finnish Center for Artificial Intelligence
  Kaski, S. (Principal investigator)
  01/01/2019 → 31/12/2022
  Project: Academy of Finland: Other research funding