On 11 July 2021, football fans across Europe came together to witness England take on Italy in the final of Euro 2020 at Wembley Stadium. An engaging match swayed back and forth between the two sides before eventually going to penalties, where the Italians emerged victorious after three England players missed their spot kicks.
In the aftermath of the match, England fans displayed equal parts pride in their “young lions” and understandable disappointment at another failed bid at international footballing glory. Yet among the plaudits and “what ifs”, three of England’s young, black players – those who had missed the decisive penalties – were subjected to shocking racist abuse across various social media platforms.
In the days that followed, fingers were pointed at politicians and tech giants for allowing such racist abuse to be stoked, inflamed, and ultimately permitted on social media. But what about employers? What, if any, responsibility or duty of care does an organisation bear for protecting its workers from hate-filled Twitter rants and other vile, illegal content, especially when the staff in question work in social media? And what about those employees – like modern-day footballers – who are quasi-spokespeople for their brands?
“Social media staff and corporate spokespeople face a very particular type of pressure in their working environments. Every word and phrase has to read correctly; confused messaging, undue delay, or even spelling mistakes can cause the wrong type of headlines. What’s more, added wit and flair can earn a company increased respect and reach. No pressure, right?” comments Natalie McEvoy, counsel at Slateford.
The more difficult the content, the more critical the support offered to employees will be, adds McEvoy, who points to a 2017 claim brought against Microsoft by two former moderators alleging the content they had been exposed to had resulted in severe post-traumatic stress disorder.
In that case, the employees were required to view and report material flagged by automated software as being potentially illegal, such as images of child abuse. Similarly, Facebook recently agreed to pay $52m in compensation to content moderators who were also responsible for reviewing violent and graphic images, including rape and suicide, posted on the social media giant’s platform.
Now more acutely aware of the risks, several companies, including Microsoft and Facebook, offer their content moderators greater mental health and wellbeing resources, such as periods of rotation out of the most harrowing work, mandatory meetings with a psychologist trained to recognise trauma, and even spousal wellness programmes.
Whether facing the pressure of catching and reporting illegal activity, or responding to irate customers on Twitter unhappy with a company’s service or product, McEvoy says employers need to ensure social media workers are given time to decompress: “Employers may need to be more watchful in this particular role when it comes to the right to disconnect from technology; social media is a 24/7 marketplace, but restful time away is essential to a job well done.”
“Social media moves very quickly and around the clock, and staff should not be expected to be constantly monitoring and reacting to [it],” agrees James Storke, a partner at Lewis Silkin. “Responsibilities need to be shared across a team to avoid potential working time health and safety issues, and clear expectations set as to when staff should be checking and responding to social media posts.
“If an employee is engaging with social media through the company’s corporate profile, then any abuse they suffer may feel a little less personal. If, however, an individual is engaging personally, as a quasi-spokesperson for the company online, then abuse is likely to be much more personal,” he adds – a situation that could open the door to litigation if it is not handled correctly.
Since 2013, UK employers have not been liable for third-party harassment, although they can be liable if their subsequent action or inaction is found to be discriminatory. This may change, however, as the UK government is currently consulting on whether third-party harassment liability should be reintroduced and, if so, the extent of the employer’s liability in such circumstances.
Other claims employers need to be aware of include a failure to provide a safe place or system of work, or potentially a claim under the Equality Act 2010 for discrimination. And, in circumstances where a complaint has been raised but not acted on, or the employee suffers a detriment in response to raising a concern, a whistleblowing claim may also be possible.
“To avoid potential claims of constructive unfair dismissal, discrimination, or personal injury, employers should take any concerns of stress or burnout raised by those working on social media accounts seriously and ensure adequate support is provided, including reporting abusive content to the platform provider,” advises Storke.
Aside from the risks of legal action, there are commercial risks, too, explains Poulter. “An employer’s failure to provide due care of its workforce can lead to low morale, a high turnover of employees, increased sickness absence, and the enhanced costs of covering such absence.”
The impact of this may be especially noticeable when the disaffected employees are acting as the representatives and social media mouthpieces of the company, with a direct line of communication with colleagues, customers, and the public at large, Poulter adds.
“The potential reputational damage arising from this can be reduced by having a proper system of account management in place, ensuring that ownership and control of social media accounts sits at a senior level and that access to accounts can be separately controlled and updated regularly.”
As IEL has previously highlighted, employers must act swiftly when workers are found to be engaging in behaviour online that is either illegal or risks damaging an organisation’s reputation. Indeed, a Savills employee has already been suspended by the estate agency after he was accused of tweeting racial abuse at England players Bukayo Saka, Jadon Sancho, and Marcus Rashford.
But while it is undoubtedly important to train – and occasionally discipline – employees on their social media (mis)conduct, in a world where interactions are increasingly digital, it is equally if not more important to protect those whose role is to support and speak for the business online.
For their part, the Football Association and the players’ domestic clubs – Arsenal, Borussia Dortmund, and Manchester United – have condemned the racist abuse directed at England’s footballers. How they support the players in the weeks and months ahead may well act as a blueprint for other employers keen to safeguard their employees from online harm.