Abstract
Problems caused by inscrutable AI systems are increasingly common, creating a growing need to explain how such systems produce their outputs. Drawing on a case study at the Danish Business Authority, we provide a framework and recommendations for addressing the many challenges of explaining the behavior of black-box AI systems. Our findings will help organizations develop and deploy AI systems without creating legal or ethical problems.
| Original language | English |
|---|---|
| Article number | 7 |
| Pages (from-to) | 259-278 |
| Number of pages | 20 |
| Journal | MIS Quarterly Executive |
| Volume | 19 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Dec 2020 |
| MoE publication type | B1 Non-refereed journal articles |