Workplace algorithms and how to use your data ethically
01/07/2021

The idea that robots are coming to replace human workers is not a new one, but as the workplace evolves, companies are increasingly using artificial intelligence tools throughout the employment life cycle, from recruiting to performance management and terminations. This trend has led to growing concern among policymakers and trade unionists over the role algorithmic decision-making plays in the monitoring, management, and disciplining of human workers.

A recent IEL webinar dove into this topic with the help of employment and technology experts from Littler Mendelson’s offices in the US, the UK, and Germany. The following is a selection of key takeaways from the session, which can be viewed in full and on-demand here.

US companies are at the forefront of AI in the workplace, explained Littler shareholder Charlotte Main, with employers using these innovative tools to reduce the cost and time required to hire, improve worker performance, eliminate bias, and improve workplace diversity, often doing at scale what humans, however diligent, cannot.

Culturally, Germany is more sensitive to potential surveillance and data privacy issues, meaning AI is not yet a common part of German companies’ HR function, explained Jan-Ove Becker, a partner at vangard | Littler, who highlighted recent EU and national efforts – including in Germany and Spain – aimed at regulating the use of AI in the workplace.

Although AI has its obvious benefits, it also presents dangers for employers, with GQ | Littler partner Raoul Parekh identifying direct and indirect discrimination as key risks for UK companies. For Parekh, checking the AI code and data set is key to ensuring discrimination doesn’t creep into machine learning over time; he added that employers should consider being transparent with employees about how their AI software works.
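By way of illustration only, the kind of periodic data-set check Parekh describes might start with something as simple as comparing selection rates across groups, for instance using the widely cited “four-fifths rule” heuristic. The sketch below is a minimal, hypothetical example; the column names and data are invented, and a real audit would be considerably more rigorous and legally informed.

```python
# Minimal sketch (hypothetical data and column names): a simple adverse-impact
# check on an AI tool's shortlisting decisions, using the four-fifths rule heuristic.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of candidates in each group who received a positive outcome."""
    return df.groupby(group_col)[outcome_col].mean()

def adverse_impact_ratio(rates: pd.Series) -> float:
    """Lowest selection rate divided by the highest; below 0.8 is a common warning sign."""
    return rates.min() / rates.max()

# Hypothetical screening decisions produced by an AI recruiting tool.
decisions = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "F"],
    "shortlisted": [1, 0, 0, 1, 1, 1, 0, 1],
})

rates = selection_rates(decisions, "gender", "shortlisted")
print(rates)
print(f"Adverse impact ratio: {adverse_impact_ratio(rates):.2f}")  # < 0.8 merits closer review
```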

The use of AI in disciplinary and dismissal decisions is an obvious hot-button topic for employers. Becker warned that, in the EU, a termination carried out by an algorithm alone would violate the GDPR, and that any use of AI in performance management and improvement procedures must be subject to human oversight.

The interplay between AI regulation and data protection law is another big concern for employers, according to the panel. If data is shared between different companies in a group, the panel recommended that the data transfer be secured by specific confidentiality agreements, certifications, and audits. With AI systems rarely hosted on an employer’s own premises, consideration should also be given to how a company’s data is handled by third-party service providers.

Asked how employers can ensure AI is used ethically, the panel agreed that keeping human decision-making in the hiring, monitoring, and disciplining of staff remained of paramount importance. Main also suggested that, utilised correctly, AI could be a key tool in the detection and elimination of employment discrimination, for example through proactive enforcement – such as equity audits – or by detecting the unintended bias of recruiters reviewing resumes.

Finally, asked to provide in-house counsel with one tip on utilising AI in the workplace, Parekh stressed the importance of knowing what you are buying, from the data set through to liability in the contract of purchase, and ensuring you regularly audit your data. Watch the webinar in full on our on-demand page here, and look out for more IEL webinars coming soon.