A court ruling in the Netherlands has banned the use of a system that scored citizens on how likely they were to commit certain kinds of fraud. The decision has been hailed as setting “an important precedent for protecting the rights of the poor in the age of automation” by human rights campaigners.
The System Risk Indication system, known as SyRI, was used by the Dutch government to profile citizens, analyzing their personal data to predict whether they were likely to commit tax or benefit fraud. The system used an algorithm to assign risk scores, but citizens were never told how those scores were calculated.
However, a Dutch court ruled that the government must stop using the system immediately because it violated human rights. Specifically, the court found that SyRI breached Article 8 of the European Convention on Human Rights, which guarantees the right to respect for private life.
The decision has the potential to influence future decisions about how automated systems and artificial intelligence are employed in government decision-making. Amos Toh, a senior researcher in artificial intelligence and human rights for Human Rights Watch, hailed the decision:
“By stopping SyRI, the Court has set an important precedent for protecting the rights of the poor in the age of automation. Governments that have relied on data analytics to police access to social security – such as those in the US, the U.K., and Australia – should heed the Court’s warning about the human rights risks involved in treating social security beneficiaries as perpetual suspects.”
Toh noted that one of the key issues with the system was its opaque operation. Even during the court case, the government did not provide a clear explanation for how the system uses data to arrive at conclusions. This meant people essentially could not challenge their scores, even though the government stored the results for two years. SyRI was also employed entirely in what were termed “problem” neighborhoods.
Notably, the court did not rely on Article 22 of the General Data Protection Regulation, which protects against automated decisions that produce legal effects. TechCrunch notes that it is unclear whether Article 22 applies when a human is involved in the decision-making process.