New research project explores how government AI can go wrong and how to fix it
Government agencies increasingly use AI to improve public services, from processing benefit applications to automating decisions. While these systems promise efficiency and cost savings, they can also cause serious problems.
Errors or unfair decisions have caused citizens to lose benefits they were entitled to, sparking protests and lawsuits. In turn, governments worldwide have faced financial losses and reputational damage.
"Our project aims to understand why these failures happen and how they can be prevented. To this end, we will examine eight real-life cases of government AI systems from Europe and Australia. These case studies will provide insights into what went wrong, why it happened, and what can be learned", says Tapani Rinta-Kahila, associate professor at Hanken’s Department of Management and Organisation.
Ultimately, the goal is to help governments implement AI responsibly and manage risks better. By shedding light on the anatomy of harmful AI systems, the research will contribute to safer, fairer public services in the age of automation.
The four-year project is funded by the Research Council of Finland with over 700 000 EUR.
Dr. Tapani Rinta-Kahila is an associate professor and academy research fellow at Hanken as of 1 April 2026. He is also an ARC DECRA Fellow at The University of Queensland Business School. Dr. Rinta-Kahila’s research focuses on implementing and managing artificial intelligence systems in organisations, the harmful consequences of technologies, and the changing nature of work, among other topics. His work has appeared in leading journals, including MIS Quarterly and Journal of the Association for Information Systems, and he has received recognition such as the Stafford Beer Medal, the AIS Senior Scholar Best IS Publication Award, and the AIS Early Career Award.
Text: Marlene Günsberg
Photo: University of Queensland
