Protecting staff from social media abuse as important as protecting your brand
15/07/2021
Main image: English fans chanting on the street in Moscow 2018
John van der Luit-Drummond is editor of International Employment Lawyer

On 11 July 2021, football fans across Europe came together to witness England take on Italy in the final of Euro 2020 at Wembley Stadium. An engaging match swayed back and forth between the two sides before eventually going to penalties, where the Italians emerged victorious after three England players missed their spot kicks.

In the aftermath of the match, England fans displayed equal parts pride in their “young lions” and understandable disappointment at another failed bid for international footballing glory. Yet among the plaudits and “what ifs”, three of England’s young, black players – those who had missed the decisive penalties – were subjected to shocking racist abuse across various social media platforms.

In the days that followed, fingers were pointed at politicians and tech giants for allowing such racist abuse to be stoked, inflamed, and ultimately permitted on social media. But what about employers? What, if any, responsibility or duty of care does an organisation bear for protecting its workers from hate-filled Twitter rants and other vile, illegal content, especially when the staff in question are employed to work on social media? And what about those employees – like modern-day footballers – who are quasi-spokespeople for their brands?

“Social media staff and corporate spokespeople face a very particular type of pressure in their working environments. Every word and phrase has to read correctly; confused messaging, undue delay, or even spelling mistakes can cause the wrong type of headlines, and what’s more, added wit and flair can earn a company increased respect and reach. No pressure, right?” comments Natalie McEvoy, counsel at Slateford.

The more difficult the content, the more critical the support offered to employees will be, adds McEvoy, who points to a 2017 claim brought against Microsoft by two former moderators alleging that the content they had been exposed to had resulted in severe post-traumatic stress disorder.

In that case, the employees were required to view and report material flagged by automated software as potentially illegal, such as images of child abuse. Similarly, Facebook recently agreed to pay $52m in compensation to content moderators who were responsible for reviewing violent and graphic images, including rape and suicide, posted on the social media giant’s platform.

Now more acutely aware of the risks, several companies, including Microsoft and Facebook, offer their content moderators more mental health and wellbeing resources, such as periods of rotation out of the most harrowing work, mandatory meetings with a psychologist trained to recognise trauma, and even spousal wellness programmes.

“Those who are employed by social media platforms to monitor users and content, usually for behaviour which is criminal or breaches the terms of use, are often dealing with the most vile and hateful posts,” explains Kevin Poulter, an employment partner at Freeths in London and an expert in social media issues in the workplace.

“Not only do these employees have to deal with such deplorable content all day in their work, but they also have to make judgement calls on what is or is not acceptable within the terms of use policy. There is also political, social, and inevitably, from time to time, some personal pressure on those employees too.”

In the heat of the moment
Hannah Price, legal director at Lewis Silkin
“Social media staff need to be able to react quickly and be very attuned to the culture and values of the company they represent; they need to hit the right tone, knowing when it’s right to engage and when it’s more appropriate to remain silent. 
“Training and a well-written policy are essential in this regard. There also need to be clear reporting lines and/or checking processes in place – capable of reacting very swiftly if posts relating to the company go viral – so staff know they have the support of the company with the approach that they choose to take.
“Equally, the reporting line should also reduce the risk of employees responding publicly ‘in the heat of the moment’ and potentially inflaming the situation or attracting negative PR.”

Whether facing the pressure of catching and reporting illegal activity, or responding to irate customers on Twitter unhappy with a company’s service or product, McEvoy says employers need to ensure social media workers are given time to decompress: “Employers may need to be more watchful in this particular role when it comes to the right to disconnect from technology; social media is a 24/7 marketplace, but restful time away is essential to a job well done.”

“Social media moves very quickly and around the clock, and staff should not be expected to be constantly monitoring and reacting to [it],” agrees James Storke, a partner at Lewis Silkin. “Responsibilities need to be shared across a team to avoid potential working time health and safety issues, and clear expectations set as to when staff should be checking and responding to social media posts. 

“If an employee is engaging with social media through the company’s corporate profile, then any abuse they suffer may feel a little less personal. If, however, an individual is engaging personally, as a quasi-spokesperson for the company online, then abuse is likely to be much more personal,” he adds. That, in turn, may open the door to litigation if the situation is not handled correctly.

Since 2013, UK employers have not been liable for third-party harassment, although they can still be liable if their own subsequent action or inaction is found to be discriminatory. This may change, however: the UK government is currently consulting on whether third-party harassment liability should be reintroduced and, if so, how far an employer’s liability should extend in such circumstances.

Other claims employers need to be aware of include failure to provide a safe place or system of work and, potentially, a discrimination claim under the Equality Act 2010. And in circumstances where a complaint has been raised but not acted on, or where an employee suffers a detriment for raising a concern, a whistleblowing claim may also be possible.

“To avoid potential claims of constructive unfair dismissal, discrimination, or personal injury, employers should take any concerns of stress or burnout raised by those working on social media accounts seriously and ensure adequate support is provided, including reporting abusive content to the platform provider,” advises Storke.

Aside from the risks of legal action, there are commercial risks, too, explains Poulter. “An employer’s failure to provide due care of its workforce can lead to low morale, a high turnover of employees, increased sickness absence, and the enhanced costs of covering such absence.”

The impact of this may be especially noticeable when the disaffected employees are acting as the representatives and social media mouthpieces of the company, with a direct line of communication with colleagues, customers, and the public at large, Poulter adds.

“The potential reputational damage arising from this can be reduced by having a proper system of account management in place, ensuring that ownership and control of social media accounts sit at a senior level and that access to accounts can be separately controlled and updated regularly.”

As IEL has previously highlighted, employers must act swiftly when workers are found to have engaged in behaviour online that is either illegal or risks damaging an organisation’s reputation. Indeed, a Savills employee has already been suspended by the estate agency after he was accused of tweeting racial abuse at England players Bukayo Saka, Jadon Sancho, and Marcus Rashford.

But while it is undoubtedly important to train – and occasionally discipline – employees on their social media (mis)conduct, in a world where interactions are increasingly digital it is equally, if not more, important to protect those whose role is to support and speak for the business online.

For their part, the Football Association and the players’ domestic clubs – Arsenal, Borussia Dortmund, and Manchester United – have condemned the racist abuse directed at England’s footballers. How they support the players in the weeks and months ahead may well act as a blueprint for other employers keen to safeguard their employees from online harm.

Crisis training your social media team
Natalie McEvoy, counsel at Slateford
“Staff may need to develop an eye to critical path conflict triage to detect the early rumblings of a communications issue. There is, for example, a lesser-admitted truism that complaints from journalists or ‘influencers’ probably need to be resolved more speedily than those from ‘Joe Public’.
“Accidents happen when comms staff feel the pressure to be across multiple platforms simultaneously and constantly. Regular breaks should be encouraged, ideally away from the screen. If appropriate, a second pair of eyes could review content before posting, or a composed post could be ‘held’ in a halfway house for a short review period to allow the author to reflect before going live.
“Primary and secondary comms might look and feel very different if a reader does become engaged; this should be planned for and, possibly in the case of big organisations, be handled by different teams taking the detail ‘offline’ or aside from the main brand messaging.
“The level of pressure and exposure your employees are subject to needs to be considered and supported appropriately. Over-legislating may result in inauthentic comms or clunky delays, but failing to protect a stressed employee from burnout can be a ticking time bomb for reputational damage. Voice your communications clearly, stick to your corporate values, and look after your employees to best avoid a crisis.”