Public services and government functions are increasingly datafied and automated. That means they are provided, enforced or withheld based on analysing data about people.
Data is collected about us when we apply for benefits, through cameras when we walk down a street, when we register our children for school, or when we do our shopping online, to name just a few examples. We are categorised, profiled and scored with this data, which affects the services we receive. But this often happens without our knowledge and with few possibilities to object.
The growing reliance on data and automation leads to a transfer of power away from citizens. This raises concerns for democracy. How, then, can people intervene and have a say? How can we democratise the datafied society?
Automated decision systems (ADSs) are used by companies, local authorities and other state agencies to analyse this data, make predictions about citizens’ needs and actions, and inform how public services are delivered. They are also used to assess citizen behaviour and initiate state responses.
Governments are collecting and analysing a wide variety of data about citizens, and using that data to make decisions about public services—such as social security, health and housing—and state interventions—such as policing and criminal justice. This data may include everything from socio-demographic data such as nationality, living conditions and income, to behavioural data and social media use.
Below are two examples of the use of data systems in the public sector.
Live facial recognition technology (LFRT) is increasingly deployed in public spaces. It captures people’s biometric data, such as facial features, identifies individuals in real time, and thus enables the permanent tracking of citizens’ movements. It has been used to target particular communities and in ways that limit free expression and association.
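To make the mechanism concrete, here is a minimal sketch of the core matching step behind such systems. It is not drawn from any deployed or vendor system: the embedding model, the watchlist structure and the threshold value are all assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(live_embedding, watchlist, threshold=0.6):
    """Compare one face captured from a camera feed against a watchlist.

    `live_embedding` is the vector a face-recognition model produces for
    a detected face; `watchlist` maps identities to stored embeddings.
    The threshold is an operational choice: lowering it flags more
    people and produces more false matches.
    """
    best_id, best_score = None, -1.0
    for identity, stored in watchlist.items():
        score = cosine_similarity(live_embedding, stored)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= threshold:
        return best_id, best_score  # flagged for police attention
    return None, best_score        # below threshold: not flagged
```

Every face that passes the camera is run through a comparison like this, which is what makes the tracking live and continuous; the threshold directly trades missed matches against wrongful flags of the kind described below.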
LFRT changes the relationship between state and citizens by transferring power to the state and leaving citizens exposed and subject to police action. Yet these technologies continue to be deployed without our consent and without our understanding of how they will affect our lives and our rights.
LFRT has also been shown to be inaccurate and biased. In 2019, the London Metropolitan Police’s (Met) LFRT misidentified 96% of people flagged as potential criminals. Researchers have documented racial and gender biases embedded in facial recognition systems. The consequences are disproportionately felt by people who are already marginalised—as in the case of a 14-year-old Black child who was wrongfully identified and detained using LFRT in Romford, UK in 2019.
In 2017, the UK government rolled out Universal Credit, a single point of contact for all welfare, as part of a larger strategy to transform government through digitising services and implementing ADSs.
Universal Credit replaced six benefits, including Housing Benefit, Child Tax Credit and Income Support. The algorithm that determines eligibility and the amount of benefit has been shown to systematically disadvantage recipients.
The system calculates awards over fixed monthly assessment periods, but is ill-equipped to understand the reality of those who actually need Universal Credit. Many workers who rely on it, such as part-time and service workers, are paid at irregular intervals: weekly, fortnightly or every four weeks. When two paydays happen to fall within a single assessment period, the system treats steady pay as a doubled income. A common error is therefore overestimating these workers’ earnings and withholding benefits they are entitled to.
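To see why fixed monthly assessment periods penalise irregular paydays, consider a simplified sketch of the arithmetic. The allowance, taper rate and pay figures below are illustrative assumptions, not the actual Universal Credit parameters.

```python
def monthly_award(earnings, standard_allowance=1000.0,
                  work_allowance=300.0, taper_rate=0.55):
    """Hypothetical award rule in the style of a means-tested benefit:
    earnings above a work allowance reduce the award at a taper rate.
    All figures are illustrative, not the real Universal Credit rates."""
    countable = max(0.0, earnings - work_allowance)
    return max(0.0, standard_allowance - taper_rate * countable)

# A worker paid every four weeks earns a steady 1000 per pay cycle,
# but the system totals whatever lands inside each calendar-month
# assessment period. Roughly once a year, two paydays fall in the
# same period and the following period contains none.
print(monthly_award(1000.0))  # typical month: 615.0
print(monthly_award(2000.0))  # two paydays counted: 65.0
print(monthly_award(0.0))     # no payday counted: 1000.0

# Across the two distorted months the worker receives 65 + 1000 = 1065,
# against 615 + 615 = 1230 for two typical months: the work allowance
# is applied once instead of twice, so steady pay yields a smaller
# total award, and the shortfall lands in the month the bills are due.
```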
Automating this system has caused severe income losses. People have said they were forced to skip meals and go hungry, while others borrowed money to cover the gap and have since fallen into debt. Uncertainty around whether and when the next payment will arrive has caused psychological distress.
By automating welfare, Universal Credit has replaced consideration of citizens’ specific lives and circumstances with a flawed algorithm, and has made it harder for claimants to challenge and correct wrongful decisions.
Automated decision systems are all around us. Yet the gravity of their impact is unmatched by our awareness of them and our voice in their deployment.
ADSs make judgements about us according to criteria that are not transparent, with consequences that may harm us, and without offering us a clear avenue of redress.
All this raises fundamental questions regarding the state of democracy in the datafied society. A democracy relies on people being active in their own governance. Yet the public has little opportunity to understand these systems, to have a say in which systems should be in place, or to decide whether important state functions should be datafied and automated at all.
What avenues exist for people to participate in decisions about the use of ADSs by public institutions?
By exploring civil society strategies, bringing more participatory methods into government, and challenging the status quo on how we think about data, we can find inspiration for change.