Design of trustworthy and inclusive AI services in the public sector

Research output: Contribution to conference › Poster › Scientific



In May 2021, the European Commission published its proposal for new AI regulation (the AI Act) [1]. This brought a sense of urgency for developers of AI services in industry, government and the public sector. In recent discussions with companies, the AIGA project [2] and the Finnish government's public ICT representatives, I have recognized that close collaboration between policy, technology and societal experts is needed more than ever. Furthermore, it is important to include civil society in the dialogue and deliberations [3]; as shown in my recent paper [4], citizens have their own set of requirements for trustworthy public AI services.

My proposed research focuses on how trustworthy and inclusive AI can be effectively implemented in the public sector: understanding the technological needs, challenges and regulations involved in devising trustworthy AI services, while developing tools and methods and critically assessing outcomes. It builds on research in AI ethics, including transparency, inclusion, accountability, auditability and explainable AI, while engaging with aspects of HCI and human-AI interaction.

I am using a multidisciplinary and participatory approach in this research. On one side, I am collaborating with key stakeholders, such as public sector representatives (e.g., from the City of Helsinki), public service designers and developers, and policy and legal experts. I plan to conduct qualitative interviews, focus group sessions and, if possible, ethnographic and case studies with this group. This work has already started with a study on the AI Act's implications for educational and public services. On the other side, I am including civil society in the study: regular citizens and representatives of vulnerable communities that might be affected by public AI services. With this group, I have been conducting interviews, workshops and focus groups. Finally, I intend to bring both groups together for co-design sessions. As a result, I intend to publish 1) a framework of successful participatory methods for public AI services and 2) an exemplar trustworthy and inclusive public AI service interface.

In summary, I aim to provide realistic solutions and recommendations for making public AI services trustworthy and beneficial to society, particularly to under-represented groups. This could contribute, first, to the empowerment of citizens by acknowledging their experiential expertise and providing them with ways to participate in developing public AI services; second, to the successful implementation of AI services by public service providers; and third, to understanding the implications of, and preparations for, the AI Act.

References:
[1] European Commission, "Proposal for a Regulation laying down harmonised rules on artificial intelligence", accessed 28.05.2021.
[2] Artificial Intelligence Governance and Auditing (AIGA) project, accessed 28.05.2021.
[3] Young, M., Magassa, L. & Friedman, B. Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics Inf Technol 21, 89–103 (2019).
[4] Drobotowicz, K., Kauppinen, M. & Kujala, S. Trustworthy AI Services in the Public Sector: What Are Citizens Saying About It? In: Requirements Engineering: Foundation for Software Quality, REFSQ'21 (2021).
Original language: English
Publication status: Published - 1 Nov 2021
MoE publication type: Not Eligible
Event: Nordic Conference for Young AI Researchers - Oslo, Norway
Duration: 1 Nov 2021 - 2 Nov 2021
Conference number: 1




  • Artificial Intelligence
  • Public Sector
  • Transparency
  • Participatory Design
  • Human-AI interaction
  • Human-Computer Interaction
  • Trustworthy AI

