Ghost in the regulations: are the EU’s new AI rules flawed?
John van der Luit-Drummond is editor-in-chief of International Employment Lawyer

Artificial intelligence conjures up many competing images. From the loveable droid hero R2-D2 in Star Wars and Iron Man’s trusty cyber-butler Jarvis, to the killer HAL 9000 from 2001: A Space Odyssey and the genocidal Skynet and its army of cyborg terminators, popular culture portrays AI technology as both the saviour and destroyer of humanity. In the real world, however, we are some way off self-aware machines tasked with making our lives infinitely easier – or ending them entirely.

Nevertheless, AI is all around us; it’s in ride-hailing apps, smartphone face recognition, autocorrect tools, smart speakers, and more besides. While these present-day incarnations of AI are unlikely to go full murderbot on us, their human overlords, the dangers such algorithms pose are no less real, especially when used in the workplace.

Stories demonstrating the limitations of current-generation AI are numerous: Amazon was forced to scrap a recruiting tool that showed bias against women; HireVue dropped facial expression monitoring from its hiring algorithm; IBM withdrew its facial recognition software over fears its technology was racially “biased”; Twitter users taught a Microsoft chatbot to be racist in less than a day; Google Photos was found to label black faces as gorillas, while some digital cameras misidentify Asian faces as blinking; automation software used by the UK government is reportedly displaying racial bias; and there are suggestions that algorithms are already making economic inequality worse.

Given these obvious flaws, it is understandable that many lawyers and unions are concerned that algorithms could become responsible for workers being hired, promoted, and terminated. It is in this context that the European Commission finally published its long-awaited draft regulations on the use of AI applications in the bloc.

If enacted, the rules require providers of “high risk” AI systems to give clear information about how they work and ensure there is human oversight. The EU executive rightly classifies AI systems used in employment, worker management, and access to self-employment as high risk, specifically those related to recruitment and selection processes, decisions on promotion and termination, and for task allocation, monitoring, or evaluation of workers. But do these draft rules go far enough or are they as flawed as the algorithms they intend to regulate?

“The Commission seems to take for granted that AI systems to surveil people at work should be allowed if certain procedural requirements and safeguards are met, but this is actually what should be up for discussion,” argues Professor Valerio De Stefano of the Institute for Labour Law at the University of Leuven. “First, we should discuss whether we should allow AI systems to monitor people in the workplace and what the impact on their lives may be. Only then should we decide what procedural requirements should be put in place.”

As with the EU’s General Data Protection Regulation (GDPR), the draft rules on algorithmic decision-making will be extraterritorial and harmonising in nature, applying to any company selling AI products into the bloc and not just to EU-based businesses. “With the GDPR, we’ve seen this lighthouse effect where other countries have had to follow suit to some extent,” says Aislinn Kelly-Lyth, a research assistant on algorithmic management at the University of Oxford. “Something quite interesting about this proposal is that it regulates along the value chain so if a UK or US company was providing AI for use in the EU they would fall into this.”

However, unlike the GDPR and its article 88, which allows member states to adopt more protective rules for data protection in the context of employment, there is no corresponding rule in the EU’s draft AI regulations. As a result, De Stefano believes there is a risk member states will be prohibited from introducing more stringent rules on surveillance, such as those already introduced by France and Germany.

“This regulation starts from the assumption that AI should be fostered and should be incentivised in the European market,” he says. “The risk here is that existing legislation that prevents AI in workplaces could be deemed incompatible with EU law. National legislation would provide unions and works councils the right to veto excessive practices that extend from technology, but the problem is that these additional protections might be considered excessive. The proposal mentions there could be additional safeguards at the state level, but the overall aim of these regulations is to harmonise the internal market. We won’t know in advance whether national courts or the CJEU will allow the involvement of unions to stop these practices, which is quite worrying.

“What people sometimes don’t realise is that workplaces are environments where workers are subject to extensive powers and prerogatives from their employers. Technology can put these powers on steroids, and the power of legislation should be to limit excessive use of these powers. I don’t think these proposals go in this direction.”

AI regulation must strike the correct balance
Jan-Ove Becker is a partner at vangard | Littler in Germany
The EU’s AI regulation is certainly a very important development for employers. We can see that AI technology will increasingly be used in hiring processes, employee performance ratings and – by some employers – even surveillance of workers. Regulation must find a balance between reasonable commercial and operational interests of companies and privacy and anti-discrimination rights of employees.
The German government has recently published a new law modernising the Works Council Act. This new law includes explicit rights for employee representatives over the introduction and use of AI in the workplace, which shows the relevance of this topic. The impact of the new AI regulation will also depend on its final terms and wording, to be developed over the next couple of months or years.
While AI is a technology that raises ethical and legal questions, the EU must also recognise that employers are increasingly global organisations, and overly strict rules may impose significant administrative, legal, and financial burdens on EU employers and slow future development. Hearing all stakeholders in the ongoing review process will therefore be crucial.

If AI excesses are not limited, De Stefano fears an adverse impact on employee mental health at a time when there are increasing fears about workplace wellbeing. “If you know you are constantly monitored by technology you might develop stress, which impacts on your psychological risk. This is not taken into account in the proposals and why I think it would be much better if the elements on employment were [the] subject of a separate instrument after a much more meaningful discussion on what kinds of systems we want or don’t want in our workplaces.”

With a recent study showing that facial recognition technology is unable to interpret the emotions of black faces, the fact the EU rules seemingly allow for tech companies to produce AI to detect emotions raises ethical questions. “Why would you allow this in the first place?” asks De Stefano. “Why would you allow a system that allows employers to detect people’s mental states? This is a discussion we didn’t have, but we should have now. It is not about what safeguards you should put around this practice, it is why should you put this practice in place in the first place.”

“There are definitely huge concerns about using something that doesn’t have real predictive or proven predictive value, but does have proven bias risk,” adds Kelly-Lyth, who also warns of a “systemic harm” that might result if a select few AI producers dominate the European market. “A worst-case scenario would be if the same vendor sold the same technology, with an algorithm harmful to certain employees, to various employers. The employee will fall foul of the algorithm at every employer they work for.”

Kelly-Lyth also warns of AI wiping out a layer of management that could act as a human safeguard against algorithmic bias. “You could have a scenario where not only decisions are being made by algorithms harmful to you, but you now have no one to complain to. The GDPR goes a long way to preventing that sort of dystopian future as it prohibits significant decisions from being made about you in an automated way. However, the recent Uber decision in the Netherlands sets the bar for that human decision-making very low.

“Also, although the GDPR contains a number of transparency provisions, they don’t help someone who has been discriminated against by an algorithm. A lot of the transparency obligations in the draft regulations rest on providers giving transparency to the users, and the users, in this case, are the employers.”

Employers must keep pace with AI risks
Louise Skinner and Lee Harding are partners at Morgan Lewis & Bockius
Harnessing the power of AI has the potential to make many positive impacts on society, including the creation of highly skilled jobs, increased productivity, and bringing new technologies to market to improve people’s lives. However, employers must keep pace with the risks presented by the use of AI in the workplace, and ensure steps are taken to understand and address such risks. 
Many employers have started to use AI tools to assist in recruitment, greatly cutting down on the time taken to review and assess candidates. While there are benefits in reducing the time that recruiting staff would otherwise spend on these tasks, there is a risk of inherent bias in the algorithms used in such technologies, which could lead to unfair decisions being made and, in turn, discrimination claims from unsuccessful candidates or disgruntled employees. Similarly, some employers are using AI to assist in performance management, including tracking breaks in activity and productivity, and issuing automated warnings where a drop in activity is identified.
While, again, there are efficiencies that can be achieved by an employer in implementing technology in this way, they must be mindful of privacy concerns and ensure that they operate within the applicable data privacy framework. Further, they must ensure that in relying on an algorithm to make decisions concerning employees’ productivity and the use of their time, they do not discriminate against employees with particular protected characteristics, such as disabled employees whose roles are subject to reasonable adjustments. 
Employers seeking to implement AI in the workplace should ensure that they take steps to identify and address such risks in advance of using this technology. They should also continuously monitor their practices to identify any new or emerging potential risk, and to ensure compliance with increasing regulation in this area.
Those employers who harness AI effectively, and take active steps to mitigate the risk of harm to employees, will have the potential to achieve great efficiencies and be recognised as innovative employers of choice. As a matter of best practice, employers should also consider consulting with employees about how AI is being used in the workplace to monitor their activities. At a minimum, employers should be open and transparent with employees about the use of technologies and the safeguards that have been put in place.

To limit the risk of unintended consequences, employers need to ask the right questions when procuring AI for their organisations, says Kelly-Lyth. “Vendors often have quite positive marketing materials. We have seen European employers buy US software to score employees on productivity or screen CVs. This software often comes with a GDPR-compliant ‘tick’, which means nothing. Employers might think they are complying with the law, but they are open to litigation – the liability falls with them. What this draft regulation does well is put the burden on the AI provider instead of the user and that is an important step.”

Beyond that, Kelly-Lyth advises employers to engage with their workers, unions, or works councils, especially as data controllers are already required to undertake a data protection impact assessment under the GDPR. “The most important thing is transparency. When deciding which software to buy, it would be a good idea if employers actually consulted with their workers; you’ll have your employees on board and they can ensure you are not just buying something for the sake of it, but something that will actually help your business.”