The project Automating Public Services: Learning from Cancelled Systems investigates why government departments and agencies in different countries are deciding to pause or cancel their use of algorithmic and automated decision support systems.
Research
The project involves: 1) a scoping exercise to identify and list paused or cancelled systems in Europe, Australia, Canada, New Zealand and the United States, and 2) more detailed case study investigations of cancelled systems in the areas of fraud detection, policing and child welfare, to better understand the rationales and factors leading officials to pause or cancel these systems. The research methods include desk-based scoping research, document analysis and interviews.
Outputs
The project report was published in September 2022 and can be found here.
A summary of project findings was published in The Conversation and can be found here.
Context
Previous research has documented increasing government interest in, and use of, predictive analytics and automated systems across public services in Canada, the United Kingdom, New Zealand, and Australia. These applications range from algorithmic and machine learning systems to predictive systems that risk-score populations, families and individuals. As government bodies in the UK and elsewhere advance their uses of data-driven systems in an effort to enhance efficiency and planning, researchers have raised concerns about privacy, security, transparency and accountability, as well as the potential for discriminatory sorting, exclusion and exploitation. Further, our research and the work of others demonstrate that new computer information systems can shift operations, understanding, working practices and relations with colleagues and ‘clients’ in ways that have significant implications for governance.
We know from investigations across Data Justice Lab projects, and those done by others, that government agencies have responded in mixed ways to the use of algorithmic and automated systems in public services. Some departments and agencies have implemented these programs, some are piloting them, and others have cancelled them after trialling them. Often these systems are introduced without adequate public engagement or debate.
Investigating paused or cancelled programs provides a means to learn from those with direct experience of trialling predictive and automated decision support systems. Through this project we aim to gain greater insight into the kinds of contextual forces and rationales that may influence decisions to pause or stop the use of these systems. Such investigations are particularly relevant in a policy context where there are widespread calls for greater transparency and accountability surrounding the use of automated systems.
Researchers
For this project we are partnering with the Carnegie UK Trust.
The project team includes:
Data Justice Lab – Joanna Redden, Jessica Brand, Ina Sander and Harry Warne.
Carnegie UK Trust – Anna Grant and Douglas White.