In 2019, Julie Sweet, the newly appointed chief executive of global consulting firm Accenture, held a meeting with top managers. She had a question: Should Accenture get out of some of the work it was doing for a leading client, Facebook?
For years, tensions had mounted within Accenture over a certain task that it performed for the social network. In eight-hour shifts, thousands of its full-time employees and contractors were sorting through Facebook’s most noxious posts, including images, videos and messages about suicides, beheadings and sexual acts, trying to prevent them from spreading online.
Some of those Accenture workers, who reviewed hundreds of Facebook posts in a shift, said they had started experiencing depression, anxiety and paranoia. In the United States, one worker had joined a class-action lawsuit to protest the working conditions. News coverage linked Accenture to the grisly work. So Sweet had ordered a review to discuss the growing ethical, legal and reputational risks.
At the meeting in Accenture’s Washington office, she and Ellyn Shook, head of human resources, voiced concerns about the psychological toll of the work for Facebook and the damage to the firm’s reputation, attendees said. Some executives who oversaw the Facebook account argued that the problems were manageable. They said the social network was too lucrative a client to lose.
The meeting ended with no resolution.
Facebook and Accenture have rarely talked about their arrangement or even acknowledged that they work with each other. But their secretive relationship lies at the heart of an effort by the world’s largest social media company to distance itself from the most toxic part of its business.
Toxic content
For years, Facebook has been under scrutiny for the violent and hateful content that flows through its site. Chief executive Mark Zuckerberg has repeatedly pledged to clean up the platform. He has promoted the use of artificial intelligence to weed out toxic posts and touted efforts to hire thousands of workers to remove the messages that the AI misses.
Behind the scenes, Facebook has quietly paid others to take on much of the responsibility. Since 2012, the company has hired at least 10 consulting and staffing firms globally to sift through its posts, along with a wider web of subcontractors, according to interviews and public records.
No company has been more crucial to that endeavour than Accenture. The Fortune 500 firm, better known for providing high-end tech, accounting and consulting services to multinational companies and governments, has become Facebook’s single biggest partner in moderating content, according to an examination by The New York Times.
Accenture has taken on the work – and given it a veneer of respectability – because Facebook has signed contracts with it for content moderation and other services worth at least $500 million (€422 million) a year, according to the New York Times examination. Accenture employs more than a third of the 15,000 people whom Facebook has said it has hired to inspect its posts. And while the agreements provide only a small fraction of Accenture’s annual revenue, they give it an important lifeline into Silicon Valley. Within Accenture, Facebook is known as a “diamond client”.
Their contracts, which have not previously been reported, have redefined the traditional boundaries of an outsourcing relationship. Accenture has absorbed the worst facets of moderating content and made Facebook’s content issues its own. As a cost of doing business, it has dealt with workers’ mental-health issues from reviewing the posts. It has grappled with labour activism when those workers pushed for more pay and benefits. And it has silently borne public scrutiny when they have spoken out against the work.
Those issues have been compounded by Facebook’s demanding hiring targets and performance goals and so many shifts in its content policies that Accenture struggled to keep up, 15 current and former employees said. And when faced with legal action from moderators about the work, Accenture stayed quiet as Facebook argued that it was not liable because the workers belonged to Accenture and others.
“You couldn’t have Facebook as we know it today without Accenture,” said Cori Crider, a co-founder of Foxglove, a law firm that represents content moderators. “Enablers like Accenture, for eye-watering fees, have let Facebook hold the core human problem of its business at arm’s length.”
The New York Times interviewed more than 40 current and former Accenture and Facebook employees, labour lawyers and others about the companies’ relationship, which also includes accounting and advertising work. Most spoke anonymously because of non-disclosure agreements and fear of reprisal. The New York Times also reviewed Facebook and Accenture documents, legal records and regulatory filings.
Facebook and Accenture declined to make executives available for comment. Drew Pusateri, a Facebook spokesperson, said the company was aware that content moderation “jobs can be difficult, which is why we work closely with our partners to constantly evaluate how to best support these teams”.
Stacey Jones, an Accenture spokesperson, said the work was a public service that was “essential to protecting our society by keeping the internet safe”.
Neither company mentioned the other by name.
Pornographic posts
Much of Facebook’s work with Accenture traces back to a nudity problem.
In 2007, millions of users joined the social network every month – and many posted naked photos. A settlement that Facebook reached that year with Andrew Cuomo, who was New York’s attorney general, required the company to take down pornographic posts flagged by users within 24 hours.
Facebook employees who policed content were soon overwhelmed by the volume of work, members of the team said. Sheryl Sandberg, the company’s chief operating officer, and other executives pushed the team to find automated solutions for combing through the content, three of them said.
Facebook also began looking at outsourcing, they said. Outsourcing was cheaper than hiring people and provided tax and regulatory benefits, along with the flexibility to grow or shrink quickly in regions where the company did not have offices or language expertise. Sandberg helped champion the outsourcing idea, they said, and midlevel managers worked out the details.
In 2010, Accenture scored an accounting contract with Facebook. By 2012, that had expanded to include a deal for moderating content, particularly outside the United States.
That year, Facebook sent employees to Manila in the Philippines, and Warsaw, Poland, to train Accenture workers to sort through posts, two former Facebook employees involved with the trip said. Accenture’s workers were taught to use a Facebook software system and the platform’s guidelines for leaving content up, taking it down or escalating it for review.
What started as a few dozen Accenture moderators grew rapidly.
By 2015, Accenture’s office in the San Francisco Bay Area had set up a team, codenamed Honey Badger, just for Facebook’s needs, former employees said. Accenture went from providing about 300 workers in 2015 to about 3,000 in 2016. They are a mix of full-time employees and contractors, depending on the location and task.
Facebook also spread the content work to other firms, such as Cognizant and TaskUs. Facebook now provides a third of TaskUs’s business, or $150 million a year, according to regulatory filings.
The work was challenging. While more than 90 per cent of objectionable material that comes across Facebook and Instagram is removed by AI, outsourced workers must decide whether to leave up the posts that the AI doesn’t catch.
They receive a performance score that is based on correctly reviewing posts against Facebook’s policies. If they make mistakes more than 5 per cent of the time, they can be fired, Accenture employees said.
Psychological costs
Within Accenture, workers began questioning the effects of viewing so many hateful posts.
In Dublin, one Accenture moderator who sifted through Facebook content left a suicide note on his desk in 2018, said a mental health counsellor who was involved in the episode. The worker was found safe.
Joshua Sklar, a moderator in Austin who quit in April, said he had reviewed 500 to 700 posts a shift, including images of dead bodies after car crashes and videos of animals being tortured.
If workers went around Accenture’s chain of command and directly communicated with Facebook about content issues, they risked being reprimanded, he added. That made Facebook slower to learn about and react to problems, he said.
Facebook said anyone filtering content could escalate concerns.
Another former moderator in Austin, Spencer Darr, said in a legal hearing in June that the job had required him to make unimaginable decisions, such as whether to delete a video of a dog being skinned alive or simply mark it as disturbing. “Content moderators’ job is an impossible one,” he said.
In 2018, Accenture introduced WeCare – policies that mental-health counsellors said limited their ability to treat workers. Their titles were changed to “wellness coaches” and they were instructed not to give psychological assessments or diagnoses, but to provide “short-term support” like taking walks or listening to calming music. The goal, according to a 2018 Accenture guidebook, was to teach moderators “how to respond to difficult situations and content”.
Accenture’s Jones said the company was “committed to helping our people who do this important work succeed both professionally and personally”. Workers can see outside psychologists.
Scrutiny
By 2019, scrutiny of the industry was growing. That year, Cognizant said it was exiting content moderation after the tech site The Verge described the low pay and mental-health effects on workers at an Arizona office. Cognizant said the decision would cost it at least $240 million in revenue and lead to 6,000 job cuts.
More than one Accenture chief executive debated doing business with Facebook.
In 2017, Pierre Nanterme, Accenture’s chief at the time, questioned the ethics of the work and whether it fitted the firm’s long-term strategy of providing services with high profit margins and technical expertise, three executives involved in the discussions said.
No actions were taken. Nanterme died of cancer in January 2019.
Five months later, Sweet, a longtime Accenture lawyer and executive, was named chief executive. She soon ordered the review of the moderation business, three former colleagues said.
Last year, a worker in Austin was one of two from Accenture who joined a class-action suit against Facebook filed by US moderators. Facebook argued that it was not liable because the workers were employed by firms such as Accenture, according to court records. After the judge in the case ruled against Facebook, the company reached a $52 million settlement with the workers in May 2020.
For Sweet, the debate over the Facebook contracts stretched out over several meetings, former executives said. She subsequently made several changes.
In December 2019, Accenture created a two-page legal disclosure to inform moderators about the risks of the job. The work had “the potential to negatively impact your emotional or mental health”, the document said.
Last October, Accenture went further. It listed content moderation for the first time as a risk factor in its annual report, saying it could leave the firm vulnerable to media scrutiny and legal trouble. Accenture also restricted new moderation clients, two people with knowledge of the policy shift said. Any new contracts required approval from senior management.
But Sweet also left some things untouched, they said.
Among them: the contracts with Facebook. Ultimately, the people said, the client was too valuable to walk away from. – This article first appeared in the New York Times