Improving Trustworthiness of AI Solutions: A Qualitative Approach to Support Ethically-Grounded AI Design

Andrea Vianello, Sami Laine*, Elsa Tuomi

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

12 Citations (Scopus)
154 Downloads (Pure)

Abstract

Despite recent efforts to make AI systems more transparent, a general lack of trust in such systems still discourages people and organizations from using or adopting them. In this article, we first present our approach for evaluating the trustworthiness of AI solutions from the perspectives of end-user explainability and normative ethics. Then, we illustrate its adoption through a case study involving an AI recommendation system used in a real business setting. The results show that our proposed approach allows for the identification of a wide range of practical issues related to AI systems and further supports the formulation of improvement opportunities and generalized design principles. By linking these identified opportunities to ethical considerations, the overall results show that our approach can support the design and development of trustworthy AI solutions and ethically-aligned business improvement.

Original language: English
Pages (from-to): 1405-1422
Number of pages: 18
Journal: International Journal of Human-Computer Interaction
Volume: 39
Issue number: 7
Early online date: 13 Jul 2022
DOIs
Publication status: Published - 2023
MoE publication type: A1 Journal article-refereed
