The battle for transparency in AI decision-making
19/04/2021
[Image: Using the Uber app on a smartphone]

“This will be an entire field of law and, with the European Commission’s new proposal for a European AI Regulation, we will have a regulatory framework that is more focused and sector-specific,” says Dutch privacy lawyer Anton Ekker, who last month represented a group of drivers in the Netherlands seeking access to their personal data and greater transparency of algorithmic management systems at ride-hailing firms Uber and Ola.

Ekker is no stranger to high-profile cases in the esoteric and nascent legal field of artificial intelligence (AI). The Amsterdam-based lawyer successfully challenged the Dutch government’s introduction of a risk-scoring algorithm to detect welfare fraud in 2020. Now, Ekker is taking on some of the world’s best-known technology platforms.

The Amsterdam District Court’s decisions in March – two related to transparency in general and one about automated dismissal specifically – marked the first time a court had found that workers were subject to automated decision-making as defined under Article 22 of the EU’s General Data Protection Regulation (GDPR). The decisions, a mixed bag of results for the parties involved, drew widespread media attention, and in the days that followed both the drivers – represented by Ekker – and Uber claimed victory. Ola, the biggest loser in the case, stayed quiet.

Uber was largely successful in fending off the wide-ranging data requests brought by the App Drivers and Couriers Union (ADCU) and the Worker Info Exchange (WIE). Although the Dutch court did order Uber to reveal data used to dismiss two drivers accused of fraudulent activity, it also found that the tech giant’s automated decision-making processes maintain human involvement – a finding the workers plan to appeal as the court did not assess whether this involvement was meaningful, says Ekker. No penalties or damages were awarded against the tech company.

“There have been drivers dismissed by algorithm,” Ekker tells IEL following the court’s ruling. “There are a host of issues with algorithms on the Uber platform that still need to be brought before the courts. All the data categories, for example – the profiling, the ratings, locations – and also the issue of data portability. On all these issues, we still need guidance from the courts, so it will take a long time; this is just the first step.”

On the subject of transparency, the court decided that the burden of proof rests with the workers, who must demonstrate they have been subject to automated decision-making before demanding transparency of such data. They must also provide greater specificity about the data sought, rather than placing the burden on platforms to explain what data is held and how it is processed, according to the court’s reasoning.

“This creates an impossible situation,” says Ekker. “How can you demonstrate a decision is automated if you don’t have access? If this is how the law works, then my conclusion is that Article 22 of the GDPR is worthless and transparency doesn’t mean anything.”

The majority of drivers, and the public, are unaware of how intrusive AI technology has become, says Ekker, who points to Uber’s real-time ID verification system as a prime example of algorithmic failings. To combat drivers faking their identities – and resolve its long licensing battle with Transport for London – the Uber app requires drivers to regularly submit a selfie which is then compared against a photograph on Uber’s database, either by a human worker or the platform’s facial recognition technology.

Unfortunately, numerous reports suggest the software fails to recognise drivers and couriers from minority ethnic backgrounds, or those who have shaved their beards or heads. And although Uber says drivers and couriers can choose between AI and human verification, users say that even when they opt for the latter, mistakes made by the software are not overridden.
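The pattern described here – a face-match similarity score gating a driver’s session, with a nominal human-review path – can be sketched in a few lines. The sketch below is purely illustrative: the names, thresholds and opt-in handling (`verify_driver`, `AUTO_APPROVE`, `human_opt_in`) are assumptions for the example and do not reflect Uber’s actual system.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    APPROVED = "approved"          # driver may go online
    HUMAN_REVIEW = "human_review"  # routed to a human checker
    BLOCKED = "blocked"            # session refused pending review

@dataclass
class VerificationResult:
    outcome: Outcome
    score: float

# Hypothetical thresholds. Real systems tune these on validation data,
# and miscalibration across demographic groups is exactly the failure
# mode the reports above describe.
AUTO_APPROVE = 0.90
AUTO_BLOCK = 0.40

def verify_driver(match_score: float, human_opt_in: bool) -> VerificationResult:
    """Gate a driver's session on a face-match similarity score in [0, 1].

    `match_score` is assumed to come from comparing the submitted selfie
    against the photograph on file (the matching model is not shown).
    """
    if human_opt_in:
        # If the driver chose human verification, the model's score alone
        # should not be able to block them -- the override that drivers
        # say does not happen in practice.
        return VerificationResult(Outcome.HUMAN_REVIEW, match_score)
    if match_score >= AUTO_APPROVE:
        return VerificationResult(Outcome.APPROVED, match_score)
    if match_score < AUTO_BLOCK:
        # A fully automated adverse decision -- the kind of step
        # Article 22 of the GDPR is meant to constrain.
        return VerificationResult(Outcome.BLOCKED, match_score)
    # Ambiguous band: defer to a human rather than deciding automatically.
    return VerificationResult(Outcome.HUMAN_REVIEW, match_score)
```

Even in this toy version, the question at the heart of the litigation is visible: whether the human-review branches amount to meaningful involvement or merely rubber-stamp the score.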

“So many different parts of a driver’s behaviour are monitored,” says Ekker. “Not only your ratings and driving behaviour, but your location and your selfies. This is all part of the Uber business model and its philosophy that it is an AI-first company; that they are going to solve any problem with algorithms so there is no human involvement anymore.”

Ekker continues: “Algorithms jeopardise the worker. The number of people who have been subjected to AI surveillance has grown rapidly as platforms expand. Now, it is mostly people who do low-paid jobs, but others might be impacted in the future. Other employers might start using such software and analytics, so this is a huge issue.”

Ekker is confident that platforms will, eventually, be forced to provide greater transparency of AI tools than they currently offer. “In the end, these workers will prevail because there is a strong legal principle within the GDPR. We are still struggling to get there, but eventually the courts will grant access more easily. For now, we managed to get some more information and we need to build on that.”

If the courts will not oblige, however, then it will be up to Europe’s policymakers to protect workers’ rights. The European Parliament’s Culture and Education Committee, for example, recently called for all AI technologies to be regulated and trained to protect non-discrimination, gender equality and pluralism, as well as cultural and linguistic diversity, within Europe.

Since then, leaked Commission proposals show plans to prohibit commercial mass-surveillance systems and China-style social-scoring systems that could lead to discrimination, as well as to introduce compliance requirements for as-yet-undefined “high-risk” AI applications. Businesses could be fined up to €20m or 4% of their global turnover if found to be in breach of the regulations. The EU’s executive branch is expected to officially unveil its proposals on 21 April.
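For scale, the penalty ceiling mirrors the GDPR’s two-pronged formula. Assuming – the leaked text does not settle this – that, as under the GDPR, the higher of the two figures applies, the cap works out as below; the function name and example figures are illustrative only.

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Ceiling under the leaked draft: EUR 20m or 4% of global turnover,
    assumed (as in the GDPR) to mean whichever is higher."""
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# A firm with EUR 10bn in global turnover faces a cap of EUR 400m;
# the flat EUR 20m figure binds only below EUR 500m in turnover.
print(max_fine_eur(10_000_000_000))  # 400000000.0
```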

Meanwhile, in its much-criticised recent report, the UK government’s Commission on Race and Ethnic Disparities recommended improving the transparency and use of AI, calling on the Equality and Human Rights Commission to clarify how to apply the Equality Act to algorithmic decision-making, including guidance on the collection of data to measure bias, and the lawfulness of bias-mitigation techniques.

The fight for AI transparency looks set to continue, with the ADCU and WIE claiming victory again this past week after the same Amsterdam court issued a default judgment – entered when Uber failed to respond to the claim – ordering compensation for, and reinstatement of, six drivers allegedly dismissed by the ride-hailing app’s algorithm.

Uber says it will contest the judgment, claiming it only recently became aware of the claim and that the Dutch code of civil procedure was not followed. For his part, Ekker, who also represented the ADCU and WIE in this case, says the writ of summons and the judgment were properly served on Uber, so he is “surprised” and “puzzled” by Uber’s response. He is certain of one thing, however: arguments over data transparency will continue for some time to come.

“Transparency of algorithms is a very hot topic and will not go away in the next few years,” he says. “I hope it will create a better power balance for platform workers.”