Inside a Misfiring Government Data Machine


Last week, WIRED published a series of in-depth, data-driven stories about a problematic algorithm the Dutch city of Rotterdam deployed with the aim of rooting out benefits fraud.

In partnership with Lighthouse Reports, a European organization that specializes in investigative journalism, WIRED gained access to the inner workings of the algorithm under freedom-of-information laws and explored how it evaluates who is most likely to commit fraud. 

We found that the algorithm discriminates based on ethnicity and gender—unfairly giving women and minorities higher risk scores, which can lead to investigations that cause significant damage to claimants’ personal lives. An interactive article digs into the guts of the algorithm, taking you through two hypothetical examples to show that while race and gender are not among the factors fed into the algorithm, other data, such as a person’s Dutch language proficiency, can act as a proxy that enables discrimination.
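To make the proxy mechanism concrete, here is a minimal sketch of how it can arise in any risk-scoring model. Nothing below comes from Rotterdam's actual system: the feature names, coefficients, and data are invented for illustration, and the point is only that a model trained without a protected attribute can still reproduce a score gap when a correlated proxy feature stands in for it.

```python
# Minimal sketch (not Rotterdam's actual model): how a proxy feature can
# reintroduce discrimination even when the protected attribute is excluded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never shown to the model).
is_minority = rng.random(n) < 0.3

# Hypothetical proxy: a "low Dutch proficiency" flag, strongly correlated
# with the protected attribute but also present in the majority group.
low_dutch_proficiency = np.where(is_minority,
                                 rng.random(n) < 0.7,
                                 rng.random(n) < 0.1)

# An unrelated feature, just to give the model something else to look at.
years_on_benefits = rng.integers(0, 10, n)

# Invented historical "fraud" labels that are themselves biased: past
# investigators flagged people with the proxy trait more often.
label_rate = 0.05 + 0.10 * low_dutch_proficiency
labels = rng.random(n) < label_rate

# Train only on the proxy and the neutral feature -- no protected attribute.
X = np.column_stack([low_dutch_proficiency, years_on_benefits])
model = LogisticRegression().fit(X, labels)
scores = model.predict_proba(X)[:, 1]

# The score gap between groups persists even though the model never saw
# the protected attribute: the proxy carries that information for it.
print(f"mean risk score, minority group: {scores[is_minority].mean():.3f}")
print(f"mean risk score, majority group: {scores[~is_minority].mean():.3f}")
```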

The project shows how algorithms designed to make governments more efficient—and which are often heralded as fairer and more data-driven—can covertly amplify societal biases. The WIRED and Lighthouse investigation also found that other countries are testing similarly flawed approaches to finding fraudsters.

“Governments have been embedding algorithms in their systems for years, whether it’s a spreadsheet or some fancy machine learning,” says Dhruv Mehrotra, an investigative data reporter at WIRED who worked on the project. “But when an algorithm like this is applied to any type of punitive and predictive law enforcement, it becomes high-impact and quite scary.”

The impact of an investigation prompted by Rotterdam’s algorithm could be harrowing, as seen in the case of a mother of three who faced interrogation.

But Mehrotra says the project was only able to highlight such injustices because WIRED and Lighthouse had a chance to inspect how the algorithm works—countless other systems operate with impunity under cover of bureaucratic darkness. He says it is also important to recognize that algorithms such as the one used in Rotterdam are often built on top of inherently unfair systems.

“Oftentimes, algorithms are just optimizing an already punitive technology for welfare, fraud, or policing,” he says. “You don’t want to say that if the algorithm was fair it would be OK.”

It is also critical to recognize that algorithms are becoming increasingly widespread in all levels of government, and yet their workings are often entirely hidden from those who are most affected.

Another investigation that Mehrotra carried out in 2021, before he joined WIRED, shows how the crime prediction software used by some police departments unfairly targeted Black and Latinx communities. In 2016, ProPublica revealed shocking biases in the algorithms used by some courts in the US to predict which criminal defendants are at greatest risk of reoffending. Other problematic algorithms determine which schools children attend, recommend whom companies should hire, and decide which families’ mortgage applications are approved.

Many companies use algorithms to make important decisions too, of course, and these are often even less transparent than those in government. There is a growing movement to hold companies accountable for algorithmic decision-making, and a push for legislation that requires greater visibility. But the issue is complex—and making algorithms fairer may sometimes, perversely, make things worse.
