
How can employers prepare for an emerging AI regulatory landscape?
13/05/2021
Main image: Camera monitoring
Authors
Sara Ibrahim
Sara Ibrahim is a barrister specialising in employment and discrimination law at 3 Hare Court Chambers.
Katharine Bailey
Katharine Bailey is a pupil barrister at 3 Hare Court Chambers.

 

Before the Covid-19 pandemic, artificial intelligence (AI) and algorithmic management systems were primarily deployed by “sharing” or gig economy businesses. The pandemic has accelerated this kind of digitisation across all sectors, as businesses rushed to implement paperless systems and enable mass-scale remote working. As employers increasingly “monitor” their employees with AI, they need to take steps to minimise their legal exposure, particularly in light of emerging European and US regulatory developments.

AI in the workplace

According to a 2019 EU report, about 40% of HR functions in international companies based in the US used AI applications, with European and Asian organisations also beginning to adopt the technology. AI systems used by businesses in recruitment and task allocation include CV scanners, psychometric testing, and facial recognition scanning during video interviews.

However, in the wake of the pandemic, we are now seeing more AI that “monitors” employees in the workplace, particularly in the “remote workplace”. To this end, businesses increasingly deploy “people analytics” – statistical tools, big data, and AI that measure, report on, and understand employee performance. Using these analytical tools often means implementing intrusive technical infrastructure, such as technology that tracks workers’ emails, screen-time, and mouse-clicks. More intrusive measures in circulation include GPS tracking, audio recording, and video monitoring via webcam.

While using AI is an exciting commercial and technological opportunity, employers need to be aware that it can expose them to new legal risks. In particular, AI’s opacity, complexity, dependency on data, and autonomous nature mean it can adversely affect the rights of those it monitors. Additionally, with the publication of the EU’s proposed Artificial Intelligence Act (AIA), and the possibility of more regulation and/or enforcement in the US, businesses need to be aware of their obligations in an emerging regulatory landscape. This applies even to businesses physically located outside the EU and US. For example, the AIA applies to EU companies, to providers in third countries that place AI systems or services on the EU market, and to providers whose AI systems produce outputs used in the EU. This extraterritorial scope is similar to the EU’s GDPR and has invited speculation that the AIA will have the “Brussels Effect” – setting the global standard.

International regulatory landscape

On 21 April 2021, the EU Commission published the proposed text of the AIA. The AIA is extraterritorial in scope and includes a broad definition of AI. It applies a four-tiered framework to differentiate AI systems that are: prohibited (those that threaten safety, livelihoods, and rights), high-risk (those that put fundamental rights at risk), limited-risk (for example, chatbots), and low-risk (for example, spam filters). There are eye-watering penalties for those who fall foul of the AIA: up to €30m or, if the offender is a company, up to 6% of total worldwide annual turnover (whichever is higher), for users of prohibited systems or high-risk AI systems with inadequate data governance; up to €20m or 4% of annual turnover for other breaches; and up to €10m or 2% of annual turnover for supplying incorrect, incomplete, or misleading information.

AI regulation has been on the agenda in the US for some time. Two bills were introduced in the House of Representatives last Congress – the Algorithmic Accountability Act and the No Biometric Barriers Act – but neither passed in that session, with the Senate then under Republican control. Commentators and stakeholders believe President Biden is poised to regulate Big Tech: one of his advisors, Tim Wu, is a leading expert in regulation and a proponent of “net neutrality”, and Biden has also nominated Lina Khan, an antitrust expert, to the Federal Trade Commission (FTC).

On 19 April 2021, a staff attorney at the FTC published a strongly worded blog post, “Aiming for truth, fairness, and equity in your company’s use of AI”, which reminds businesses of the existing legal framework through which the FTC regulates AI. The blog also carries the blunt warning: “if you don’t hold yourself accountable, the FTC may do it for you”. US lawyers and other stakeholders have inferred from this that US AI regulation is imminent.

Employers’ vulnerabilities

It may come as something of a relief to employers that the AIA allocates liability to the technology company that is the “provider” of the AI, rather than to the employer business that is the “user”. It remains to be seen where liability would fall under any US regulation. However, employers would be wise to keep track of the AIA’s development and regulatory changes elsewhere: if enforcement action is taken against a user business’s AI provider, it may cause serious operational disruption and financial damage.

Furthermore, it may seem obvious, but employers must bear in mind that the AIA does not mean user businesses are “off the hook” when it comes to deploying AI. In using AI, employers take on legal risks relating to data protection, anti-discrimination, and employment law. So employers need to think critically about their AI interventions and how they interfere with employees’ rights.

For example, the TUC has recently reported on how Uber has misused monitoring algorithms: customers input “feedback” data about drivers, but human perceptions are susceptible to bias and prejudice. A scenario like this might leave an employer in the jurisdiction of England and Wales vulnerable to an indirect discrimination challenge under the Equality Act 2010, whereby a provision, criterion, or practice applied by the employer (the AI monitoring intervention) places people with a particular protected characteristic at a disadvantage because the input data is inherently biased and inadequately screened.

For employers deploying AI to monitor remote workers, Article 8 of the European Convention on Human Rights (the right to respect for private and family life) should also be considered. If employers are considering AI that monitors via webcam, it is worth bearing in mind that the European Court of Human Rights has found that filming a person in their home without consent amounted to “a serious, flagrant and extraordinarily intense invasion of her private life”.

Voice monitoring software is another attractive AI intervention – it can report on employee engagement or monitor customers’ voices and “nudge” employees to adjust their communication style. This intervention potentially engages both anti-discrimination rights and the right to privacy. Certain groups of people with protected characteristics (for example, those with particular accents or speech impairments) may be disadvantaged by voice monitoring more than others. Further, voice monitoring may be a disproportionate intrusion into someone’s private life. Additionally, AI like this often relies on being “covert”, which increases an employer’s legal exposure: covert AI may be classified as “prohibited” under the proposed AIA if it deploys “subliminal techniques beyond a person’s consciousness”.

Employers deploying algorithmic monitoring technology can protect themselves legally through a range of measures. Transparency (with employees and customers alike) is important. So too is regular consultation with employees and reporting on outcomes – this will enable employers to identify discriminatory or disproportionately intrusive measures and make the necessary changes. Businesses should consider seeking legal advice and other expert input to draw up an AI policy and training programme. A good AI policy will set out the business’s view on the ethics of AI, aim to tackle bias, ensure fairness, and balance surveillance against accuracy. Those who use AI without taking these important steps may face legal challenges on the grounds of their failure to consider these domestic and international safeguards for privacy and anti-discrimination.