The algorithmic administration: automated decision-making in the public sector




Source: — whom we thank

Author: Kilian Vieth-Ditlmann


Automating administrative processes promises efficiency. Yet such procedures often put vulnerable people at a disadvantage, as a number of examples from across Europe show. We explain why automation systems are especially problematic in public administration and how their risks can be detected at an early stage.

In its coalition agreement, Germany’s Federal Government has committed to modernizing governmental structures and processes. To this end, it plans to deploy AI systems for automated decision-making (ADM) in public institutions. These are intended to automatically process tax returns and social welfare applications, detect fraud attempts, create job placement profiles for unemployed clients, support policing, and answer citizens’ enquiries via chatbots. At best, AI can relieve authorities of routine work and improve their services.

So far, so good. Some European countries are already far ahead of Germany in digitalizing their administrations. By looking at these countries, we can observe the problems automation can cause in this sector.

System errors

Soizic Pénicaud organized trainings for staff of the French welfare administration who work closely with beneficiaries. One caseworker told her about people having trouble with the request-handling system; the only advice she could give them was: “there’s nothing you can do, it’s the algorithm’s fault.” Pénicaud, a digital rights researcher, was taken aback. Not only did she know that algorithmic systems can be held accountable, she also knew how to do it.

Together with several civil society organizations, she filed freedom of information requests with the welfare agency. In November 2023, she finally obtained the details of the algorithm. Statistical analyses showed that the system was deeply flawed. In particular, it treated people living with a disability, single mothers, and poorer people in general as potential fraud risks, in many cases leading to an automated suspension of benefits and to the creation of “robo-debts,” indebtedness caused by automated disadvantage. The algorithm affected 13 million households.
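To make the kind of statistical analysis described above concrete, here is a minimal, hypothetical sketch of a disparate-impact check on a risk-scoring system’s output. The records, group labels, and reference group below are invented for illustration; they do not reproduce the actual French algorithm or its data.

```python
# Hypothetical illustration: checking whether a risk-scoring system flags
# some groups disproportionately often. All data here is invented.
from collections import defaultdict

records = [
    # (group, flagged_as_fraud_risk)
    ("single_parent", True), ("single_parent", True), ("single_parent", False),
    ("disability", True), ("disability", False),
    ("other", False), ("other", False), ("other", True), ("other", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

base_rate = counts["other"][0] / counts["other"][1]  # "other" as reference group
for group, (flagged, total) in counts.items():
    rate = flagged / total
    # A ratio well above 1.0 means the group is flagged disproportionately
    # often relative to the reference group.
    print(f"{group}: flag rate {rate:.2f}, ratio vs. reference {rate / base_rate:.2f}")
```

This is exactly the sort of analysis that only becomes possible once the algorithm’s details are released, which is why the freedom of information requests mattered.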

Automated injustice in Europe

Across European countries, unjust and poorly designed systems have repeatedly been found to target certain groups of people.


In Denmark, social welfare beneficiaries only remain fully eligible if they have worked at least 225 hours in the previous year. A private consulting agency developed a tool to enforce this requirement automatically. When many beneficiaries found their benefits cut due to a software error, municipalities had to check every single case manually, with each check taking between 30 minutes and three hours.

In Italy, an algorithm was deployed to efficiently assign teachers on short-term contracts to schools with staffing needs. Code and design errors severely disrupted teachers’ lives: some were assigned posts requiring hundreds of hours of commuting even though there were open positions much closer to their places of residence.

In Austria, the Public Employment Service (AMS) introduced a ChatGPT-based chatbot, costing 300,000 euros, to help jobseekers with questions about job opportunities and career choices. It turned out that the bot propagated rather conservative gender roles: it advised women to take up gender studies but men to go into IT.

In the Netherlands, the authority responsible for paying out unemployment benefits used an algorithm to track visitors to its website, collecting their IP addresses for geolocation. The practice was ruled unlawful, and the algorithm may no longer be used.

The Dutch Institute for Human Rights ruled that the Vrije Universiteit Amsterdam discriminated against students. The university used facial recognition software to prevent cheating in examinations; the system flagged people with darker skin disproportionately often.

There have also been numerous cases in Germany in which authorities exceeded the limits of legality when introducing AI systems. In February 2023, the Federal Constitutional Court ruled that Palantir software used in Hesse and Hamburg should not have been deployed. This ruling did not prevent the Bavarian state police from testing a Palantir system, probably not very different from the banned ones, only a few weeks later. In November 2023, it was revealed that during this test phase the Bavarian police had automatically analyzed and exploited personal data from their own databases.

People on the move: defenseless test subjects

Germany’s Federal Office for Migration and Refugees (BAMF) provides a prominent example of the premature use of problematic automation systems. The BAMF uses software that is supposed to recognize dialects in order to determine the identity and origin of asylum seekers. Scientists consider this method pseudoscientific. Nevertheless, it is to be feared that the software’s assessment will influence whether asylum is granted.

People on the move – migrants, refugees, and travelers – are especially likely to be subjected to AI and automated decision-making systems that haven’t been sufficiently tested. Furthermore, deploying such systems requires a democratic debate that has not yet taken place. A growing number of investigations show how opaque these systems are and how insufficiently their use is legitimized and monitored.

Human rights are frequently violated in areas like migration because there is a huge power imbalance between those who use AI and those affected by it. When AI is used for border protection, for example, existing patterns of discrimination are reproduced: people who already face discrimination are victimized again. They usually lack the means to defend themselves and face additional practical hurdles. If the EU is serious about putting its values and principles into practice, it must protect all people’s fundamental rights – including the rights of people who are not (yet) EU citizens.

Risk control: public administration’s weak spot

The European case studies show that introducing automated systems entails major risks, especially if they are not used with due caution: they can curtail people’s fundamental rights or deny them access to public goods and services. Special conditions apply in the public sector: we cannot choose between different providers but are inevitably subject to the decisions of the administration responsible for us. In addition, public authorities handle sensitive personal data, and their decisions can have severe consequences for the people concerned; cuts in welfare benefits, for example, threaten people’s livelihoods.

When administrative processes are automated, it must be ensured that they provide a general benefit, cause no harm, and are fair. They must also not restrict the freedom of action of those affected. Algorithmic systems should therefore only be used in the public sector if they fulfil strict requirements and are effectively monitored. This is difficult, however, because algorithmic systems are often opaque: to the authorities and their staff, to those affected, and to society as a whole. This opacity prevents any critical examination of the systems. Assessing their impact must therefore start with transparency measures, if only to enable those affected to defend themselves against automated decisions. Often, we don’t even know whether authorities leave decisions to algorithms.

What can we do if automation leads to administrative errors?

People who are or have been affected by the use of algorithmic systems must be given access to all relevant information concerning them so that they can object to it. They must also have easily accessible, inexpensive, and effective legal remedies at their disposal. If their rights have been violated, they must be compensated.

The risks of automated decision-making systems arise not only from the technical model they are based on, but also from how, for what purpose, and in what context they are used. When authorities deploy particularly risky AI systems, the AI Act requires a fundamental rights impact assessment. In some cases, these assessments must be made publicly accessible. How effective this measure turns out to be will depend on its implementation.

A publicly accessible online register could be introduced as a publication platform for the results. An AI register would provide companies and public authorities with an overview of systems that are already in use. People would be better able to understand automated decisions that affect them and to exercise their rights to protect themselves. Last but not least, civil society and scientists would learn which automation systems are being used. With this knowledge, we could start a discussion about which innovations and which uses of these systems we aspire to as a society. A transparency register would also help administrations learn from the mistakes and successes of other projects.

Read more on our policy & advocacy work on ADM in the public sector.
