Artificial intelligence conjures up many competing images. From the loveable droid hero R2-D2 in Star Wars and Iron Man’s trusty cyber-butler Jarvis, to the killer HAL 9000 from 2001: A Space Odyssey and the genocidal Skynet and its army of cyborg terminators, popular culture portrays AI technology as both the saviour and destroyer of humanity. In the real world, however, we are some way off self-aware machines tasked with making our lives infinitely easier – or ending them entirely.
Nevertheless, AI is all around us: in ride-hailing apps, smartphone face recognition, autocorrect tools, smart speakers, and more besides. While these present incarnations of AI are unlikely to go all murderbot on us, their human overlords, the dangers such algorithms pose are no less real – especially when they are used in the workplace.
Stories demonstrating the limitation of current-generation AI are numerous: Amazon was forced to scrap a recruiting tool that showed bias against women; HireVue dropped facial expression monitoring from its hiring algorithm; IBM withdrew its facial recognition software over fears its technology was racially “biased”; Twitter users taught a Microsoft chatbot to be racist in less than a day; Google Photos was found to label black faces as gorillas, while some digital cameras misidentify Asian faces as blinking; automation software used by the UK government is reportedly displaying racial bias; and there are suggestions that algorithms are already making economic inequality worse.
Given these obvious flaws, it is understandable that many lawyers and unions are concerned that algorithms could become responsible for workers being hired, promoted, and terminated. It is in this context that the European Commission finally published its long-awaited draft regulations on the use of AI applications in the bloc.
If enacted, the rules would require providers of “high risk” AI systems to give clear information about how they work and to ensure there is human oversight. The EU executive rightly classifies AI systems used in employment, worker management, and access to self-employment as high risk – specifically those related to recruitment and selection processes, decisions on promotion and termination, and task allocation, monitoring, or evaluation of workers. But do these draft rules go far enough, or are they as flawed as the algorithms they intend to regulate?
“The Commission seems to take for granted that AI systems to surveil people at work should be allowed if certain procedural requirements and safeguards are met, but this is actually what should be up for discussion,” argues Professor Valerio De Stefano of the Institute for Labour Law at the University of Leuven. “First, we should discuss whether we should allow AI systems to monitor people in the workplace and what the impact on their lives may be. Only then should we decide what procedural requirements should be put in place.”
As with the EU’s General Data Protection Regulation (GDPR), the draft rules on algorithmic decision-making will be extraterritorial and harmonising in nature, applying to any company selling AI products into the bloc and not just to EU-based businesses. “With the GDPR, we’ve seen this lighthouse effect where other countries have had to follow suit to some extent,” says Aislinn Kelly-Lyth, a research assistant on algorithmic management at the University of Oxford. “Something quite interesting about this proposal is that it regulates along the value chain so if a UK or US company was providing AI for use in the EU they would fall into this.”
However, unlike the GDPR and its article 88, which allows member states to adopt more protective rules for data protection in the context of employment, there is no corresponding rule in the EU’s draft AI regulations. As a result, De Stefano believes there is a risk member states will be prohibited from introducing more stringent rules on surveillance, such as those already introduced by France and Germany.
“This regulation starts from the assumption that AI should be fostered and should be incentivised in the European market,” he says. “The risk here is that existing legislation that prevents AI in workplaces could be deemed incompatible with EU law. National legislation would provide unions and works councils the right to veto excessive practices that extend from technology, but the problem is that these additional protections might be considered excessive. The proposal mentions there could be additional safeguards at the state level, but the overall aim of these regulations is to harmonise the internal market. We won’t know in advance whether national courts or the CJEU will allow the involvement of unions to stop these practices, which is quite worrying.
“What people sometimes don’t realise is that workplaces are environments where workers are subject to extensive powers and prerogatives from their employers. Technology can put these powers on steroids and the power of legislation should be to limit excessive use of these powers and I don’t think these proposals go in this direction.”
If AI excesses are not limited, De Stefano fears an adverse impact on employee mental health at a time when there are increasing fears about workplace wellbeing. “If you know you are constantly monitored by technology you might develop stress, which impacts on your psychological risk. This is not taken into account in the proposals and why I think it would be much better if the elements on employment were [the] subject of a separate instrument after a much more meaningful discussion on what kinds of systems we want or don’t want in our workplaces.”
With a recent study showing that facial recognition technology is unable to interpret the emotions of black faces, the fact the EU rules seemingly allow for tech companies to produce AI to detect emotions raises ethical questions. “Why would you allow this in the first place?” asks De Stefano. “Why would you allow a system that allows employers to detect people’s mental states? This is a discussion we didn’t have, but we should have now. It is not about what safeguards you should put around this practice, it is why should you put this practice in place in the first place.”
“There are definitely huge concerns about using something that doesn’t have real predictive or proven predictive value, but does have proven bias risk,” adds Kelly-Lyth, who also warns of a “systemic harm” that might result if a select few AI producers dominate the European market. “A worst-case scenario would be if the same vendor sold the same technology, with an algorithm harmful to certain employees, to various employers. The employee will fall foul of the algorithm at every employer they work for.”
Kelly-Lyth also warns of AI wiping out a layer of management that could act as a human safeguard against algorithmic bias. “You could have a scenario where not only decisions are being made by algorithms harmful to you, but you now have no one to complain to. The GDPR goes a long way to preventing that sort of dystopian future as it prohibits significant decisions from being made about you in an automated way. However, the recent Uber decision in the Netherlands sets the bar for that human decision-making very low.
“Also, although the GDPR contains a number of transparency provisions, they don’t help someone who has been discriminated against by an algorithm. A lot of the transparency obligations in the draft regulations rest on providers giving transparency to the users, and the users, in this case, are the employers.”
To limit the risk of unintended consequences, employers need to ask the right questions when procuring AI for their organisations, says Kelly-Lyth. “Vendors often have quite positive marketing materials. We have seen European employers buy US software to score employees on productivity or screen CVs. This software often comes with a GDPR-compliant ‘tick’, which means nothing. Employers might think they are complying with the law, but they are open to litigation – the liability falls with them. What this draft regulation does well is put the burden on the AI provider instead of the user and that is an important step.”
Beyond that, Kelly-Lyth advises employers to engage with their workers, unions, or works councils, especially as data controllers are already required to undertake a data protection impact assessment under the GDPR. “The most important thing is transparency. When deciding which software to buy, it would be a good idea if employers actually consulted with their workers; you’ll have your employees on board and they can ensure you are not just buying something for the sake of it, but something that will actually help your business.”