Civic Innovations

Technology, Government Innovation, and Open Data


AI Can’t Save Us By Itself

This horrific story about algorithmically-assisted domestic violence interventions in Spain should be required reading for anyone pushing for the adoption of AI into critical government services, specifically those impacting or supporting decision making or the allocation of scarce, vital resources.

In a nutshell, Spain uses an algorithm to provide a risk assessment for those potentially at risk for gender violence. Depending on this assessment, police may take steps to intervene and provide protection for those identified by the algorithm as being at elevated levels of risk. But it turns out that sometimes people with assessments identifying them as low risk are the victims of further violence. This is horrible and tragic, but of even more concern is the degree to which this approach has been embedded into the processes of police work:

Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins.

There are at least three important things to take away from this very sad story that are at the top of my mind.

First, government services like domestic violence intervention (or intervention to protect at-risk children, or to approve or deny applications for public benefits) can involve life or death decisions. When mistakes happen, people can lose their lives. Arguing that criticisms of algorithms or AI in these scenarios are misplaced because “humans sometimes make mistakes too” misses a critical part of the responsibility for managing these important services – accountability. Who do we hold accountable for decisions that get made or actions that are taken by government when some part of the decision making has been ceded to an algorithm or an AI model?

Second, prior academic research on algorithmic and AI tools meant to assist or support decisions on government interventions shows a troubling dynamic at work. In her groundbreaking book “Automating Inequality,” author Virginia Eubanks observed this phenomenon when evaluating the Allegheny Family Screening Tool (AFST) – an algorithmic tool used in Allegheny County, Pennsylvania to identify potentially at-risk children. While this tool was meant to assist screeners in deciding whether to intervene and remove a child from a potentially dangerous situation, over time the behavior of screeners using this tool seemed to change:

“[t]he AFST is supposed to support, not supplant, human decision-making in the call center. And yet, in practice, the algorithm seems to be training the intake workers.”

Intake screeners have asked for the ability to go back and change their risk assessments after they see the AFST score, suggesting that they believe the model is less fallible than human screeners.

We see echoes of this in the recent story from Spain where police simply “accepted the software’s judgment” on whether to allocate resources to protect someone from violence.

Most importantly, this all suggests that the “human-in-the-loop” approach to implementing new AI solutions in government may not be enough of a check on the tendency for humans to allow algorithms and AI models to supplant their judgment. New AI tools are being adopted by government at an accelerated pace, often in places where there are resource constraints – which also happen to be the places where governments make decisions about whether to protect a domestic violence victim, remove an at-risk child, or send a police car to an emergency.

The adoption of new algorithms and AI models into government service delivery must be informed by what we know about the very human tendency to cede authority for making critical judgments to software. It must also be accompanied by proper safeguards, training for staff, and rigorous reviews to ensure that the ultimate authority and accountability for making life or death decisions remains with humans.

People’s lives depend on it.


About Me

I am the former Chief Data Officer for the City of Philadelphia. I also served as Director of Government Relations at Code for America, and as Director of the State of Delaware’s Government Information Center. For about six years, I served in the General Services Administration’s Technology Transformation Services (TTS), and helped pioneer their work with state and local governments. I also led platform evangelism efforts for TTS’ cloud platform, which supports over 30 critical federal agency systems.