Why Facebook and Twitter Aren’t Stopping the Flood of False and Toxic Content

Tech insiders and experts expose the dark side of the wildly lucrative social media business.



As billions in earnings continue to flow for the corporate titans of social media, so does a tide of false and malicious content across their global networks. This dual phenomenon is no coincidence: The growth of the platforms and the spread of misinformation and propaganda are entangled at the core of the business models for Facebook, Twitter, and other leading tech companies, according to a range of experts. Now, a reckoning may be looming as repercussions continue to come to light, from the use of Facebook in ethnic cleansing campaigns in Myanmar to the exploitation of Twitter and Facebook to distort the politics of US elections.

Experts in tech, advertising, and information security say real progress in combating misinformation remains essentially impossible without addressing hard truths about the highly lucrative business model powering social media companies. The same audience-targeting engine that reaps billions of dollars in revenue also spreads bogus content like wildfire. It’s a machine built fundamentally on users’ attention and engagement, which the companies exploit through algorithmically tuned news feeds that favor sensational content and serve up ads tailored to users based on a wealth of data captured about their interests and demographics.

Despite fierce criticism, the companies have failed to make substantial change, according to some industry insiders. Since the 2016 election fiasco, Facebook and Twitter have taken steps toward transparency around political advertising—though a ProPublica investigation found multiple examples in which Facebook’s “paid for by” label was manipulated. And though transparency with political ads is a worthwhile goal, experts say, that only addresses a small part of the overall problems. “What they’ve done so far is really a sham,” says Dipayan Ghosh, a former public policy adviser for Facebook who now researches digital privacy issues at the Harvard Kennedy School’s Shorenstein Center. “It’s not in their commercial interest to do more.”

Facebook’s stock has slumped since its leadership was recently exposed for burying warning signs of Russian influence campaigns and for targeting critics of the company, but the torrent of bad publicity that began two years ago hasn’t significantly disrupted the company’s mammoth ad-driven revenues. During the third quarter of 2018, Facebook revealed that 290,000 users had been targeted by fake pages aimed at stoking American political divisions, and that 30 million users’ private information had been compromised. The company was also caught using private phone numbers consumers had provided for account security to target them with ads. During that same quarter, Facebook hauled in $13.7 billion, 33 percent more than in the same quarter a year earlier. Days after announcing those earnings, Facebook took down more than 100 Kremlin-linked accounts that had targeted the 2018 midterms.

As for Twitter, accounts that spread disinformation during the 2016 election were still pumping out more than a million tweets a day this September. During the third quarter of 2018, Twitter pulled in $758 million, up almost 30 percent from the prior year. As Twitter stock surged on the news, independent researchers were busy tracking bots that drove most of the inflammatory, and often false, pre-election debate about the migrant caravan hyped by President Donald Trump. (Meanwhile, CEO Jack Dorsey was bantering in September about his company’s name originally deriving from “short inconsequential bursts of information.”)

The social media giants have worked aggressively to subvert rising government interest in reining them in. The New York Times recently revealed how Facebook pressured members of Congress, even getting allies such as Democratic Sen. Chuck Schumer to intervene on the company’s behalf. Other top tech companies are also actively working to fend off regulation, according to industry experts who spoke to Mother Jones.

Calls for regulation have left civil libertarians struggling with some strong crosscurrents—keen on protecting individual privacy, but also warning that some proposed actions could threaten free expression. “People who want Facebook to regulate toxic speech would think differently if Donald Trump ran Facebook,” says Jay Stanley, senior policy analyst for the American Civil Liberties Union. “Those who want Facebook to filter out poisonous content are implicitly counting on the goodwill and judgment of those doing the filtering.” 

But Ghosh points to the immense reach of these platforms in urging lawmakers to treat regulation as a critical matter of national security. Kremlin-linked content reached 126 million users around the presidential election, according to Facebook’s own analysis of the influence campaign.

Says Ghosh: “We have to characterize this as a threat to democracy.”

The engagement machine driving ad sales and toxic content

Tech companies have built their businesses around three things, Ghosh explains: “Tremendously compelling, borderline addictive services; collection of data on individuals; and highly opaque algorithms to curate content and target ads.” This potent mix is leveraged both by legitimate advertisers selling products and malicious actors pushing divisive content.

This played out in the 3,519 Facebook ads placed by the Kremlin-linked Internet Research Agency around the 2016 election, targeting conservatives and liberals alike. The most successful IRA ad was aimed at Americans who support law enforcement, encouraging them to “like” a Facebook page called “Back the Badge.” The ad campaign earned 73,000 clicks and a clickthrough rate of 5.6 percent—much higher than the average Facebook ad CTR of 0.9 percent. Meanwhile, the troll factory was targeting African American groups with ads promoting surveys, events, and content like the “Blacktivist” page. At the climax of the election in November, that page ran an ad targeting people interested in the civil rights movement: “Watch this heart-piercing story about a racial bias that might cause law enforcement officers to shoot innocent and unarmed black people.” The ad generated 11,360 clicks, with a CTR of about 6 percent. Altogether, IRA ads appeared 37 million times in news feeds, peaking just before and after the election, and drew 3.7 million clicks, according to a Washington Post analysis.
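A quick sanity check on those figures: given clicks and a click-through rate, the implied number of times an ad was shown is simply clicks divided by CTR. A minimal sketch using only the numbers reported above:

```python
# Back-of-the-envelope check of the engagement figures cited above.
# All numbers come from the article's reporting; the math is just
# impressions = clicks / click-through rate.

def implied_impressions(clicks: int, ctr: float) -> int:
    """Estimate how many times an ad was shown, given clicks and CTR."""
    return round(clicks / ctr)

# "Back the Badge": 73,000 clicks at a 5.6% CTR
print(implied_impressions(73_000, 0.056))   # ~1.3 million impressions

# "Blacktivist" ad: 11,360 clicks at roughly 6% CTR
print(implied_impressions(11_360, 0.06))    # ~189,000 impressions

# Overall IRA campaign, per the Washington Post analysis:
# 3.7 million clicks on 37 million impressions
print(3_700_000 / 37_000_000)               # 0.10 -> a 10% blended CTR,
                                            # vs. the ~0.9% Facebook average
```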

The exceptional engagement rate mattered far more than the $100,000 the IRA spent on ads. (Facebook took in almost $27 billion in advertising revenue in 2016.) As a rule, the more engaged users are with content, the more time they spend on a network, allowing more ads to be shown in their feeds. Users who saw the IRA ads and “liked” the pages potentially engaged with, and helped spread, foreign propaganda for months or even years.

“If this weren’t a Russian intelligence operation, we’d be saying the social media manager should get a raise,” says Renee DiResta, research director at cybersecurity company New Knowledge. “They were running a very effective marketing campaign.”


But most importantly for Facebook’s core business: Every click gives the company a bit more information about that user, which it uses to fine-tune its algorithms to serve up increasingly customized content and ads. The cycle propels Facebook’s growth as an ever more appealing platform for legitimate advertisers and propaganda pushers alike.
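Facebook’s actual ranking system is proprietary, so no outside sketch can reproduce it. But the feedback loop the experts describe can be illustrated in a few lines. The toy model below, with invented names and weights, shows how engagement-ranked feeds and click-driven profiling reinforce each other:

```python
from collections import defaultdict

# Hypothetical illustration of an engagement-driven feed loop, as the
# experts quoted here describe it. This is NOT Facebook's algorithm;
# the names and weights are invented for the sketch.

user_interest = defaultdict(float)  # topic -> inferred affinity

def score(post: dict) -> float:
    """Rank a post by prior engagement on the post plus how well its
    topic matches what this user has clicked before."""
    return post["engagement"] + user_interest[post["topic"]]

def record_click(post: dict) -> None:
    """Each click sharpens the profile used for the next ranking pass."""
    user_interest[post["topic"]] += 1.0

feed = [
    {"topic": "gardening", "engagement": 2.0},
    {"topic": "outrage",   "engagement": 9.0},  # provocative content
]

for _ in range(3):
    feed.sort(key=score, reverse=True)
    record_click(feed[0])      # user clicks the top-ranked item...
# ...and "outrage" now dominates both the ranking and the profile.
print(dict(user_interest))     # {'outrage': 3.0}
```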

“Facebook’s customers—the advertisers—are not being hurt” by propaganda on the platform, says Danny Rogers, co-founder of the Global Disinformation Index, a UK-based nonprofit focused on independently tracking online disinformation. Advertisers might even see disinformation campaigns as a positive indicator, he says: “They’re thinking, ‘If they can manipulate an election, they can sure sell my toothpaste.’”

The IRA’s relatively puny ad buys opened the door to a vastly larger audience for its propaganda. About 11.4 million people were directly exposed to the ads, but the pages those ads encouraged users to “like” created organic content that ultimately may have reached up to 126 million people, according to a report from Democrats on the House Intelligence Committee. “They promote a lot at the beginning to suck you in,” says Rogers, who also teaches about disinformation at New York University. “The goal is to feed people the information equivalent of salt and fat.” Once users are hooked, there’s no need to spend more: “It becomes a free smorgasbord with people sharing the content for you.”

Facebook and Google together control almost 60 percent of digital advertising, and outside research affirms that provocative content commands the most attention, says Michael Posner, director of the NYU Stern Center for Business and Human Rights, which recently released recommendations on “Combating Russian Disinformation.” “Things like fear and anger—things that get negative emotional reactions—get people to click and stay,” he says. “This is the most important question related to the companies’ business model, and in my experience, they’re not eager to talk about it.”

“We don’t want false news on Facebook, and we don’t want to profit from it,” a Facebook spokesperson told Mother Jones, pointing to research showing the company has made some progress on the issue. But the company declined to comment on how inflammatory content drives its users.

Democratic Sen. Kamala Harris pressed Facebook COO Sheryl Sandberg on this point during a hearing on Capitol Hill in September: “A concern that many have is how you can reconcile an incentive to create and increase your user engagement when the content that generates a lot of engagement is often inflammatory and hateful,” Harris said.

“Hate speech is against our policies, and we take strong measures to take it down,” Sandberg replied. But she did not address what bearing it has on the company’s business incentives.

Twitter’s most visible action so far against malicious trolls has been to hide, but not remove, abusive tweets. The company also says it is creating a policy against “dehumanizing speech.” A Twitter spokesperson told Mother Jones the company has seen fewer reports of abuse since making policy changes this year, but furnished no specifics.

Political warfare—on behalf of the social media companies

In October 2017, Democratic Sens. Amy Klobuchar and Mark Warner, along with the late Republican Sen. John McCain, introduced the Honest Ads Act, which would require labeling and financial disclosures for political advertising on social media and websites, in line with what’s long been required in broadcast and print. The bill has been stuck in committee ever since. The recent New York Times exposé showed how Sandberg lobbied Klobuchar, and how Schumer pressured Warner to back off. Even though tech companies have been on the record as supporting the bill since April, “privately, they’ve made it clear they are not for it,” Posner says.

Lawmakers from both parties also get pushback from political strategists, Posner says. “They like it the way it is. Some of the people running PACs and fundraising don’t want to shine a light on this issue.”

“Politicians [in Washington] have benefited from these platforms,” Ghosh agrees. “Especially President Trump.” As the Honest Ads Act was emerging, the Trump campaign’s digital director was lauding Facebook as having been a key to Trump’s victory. “Facebook now lets you get to…places possibly that you would never go with TV ads,” Brad Parscale told CBS’s 60 Minutes in October 2017. “Now I can find, you know, 15 people in the Florida Panhandle that I would never buy a TV commercial for.”

Alphabet, Google’s parent company, and Facebook are among the top spenders on federal lobbying in 2018, according to the Center for Responsive Politics—and they’re fighting more than advertising regulation, Rogers says. “A big part of the lobbying effort is to weaken the stronger state privacy laws” like California’s bill passed in June, because companies don’t want to lose access to data they glean, he says. “Facebook and Google have extracted a lot of value from us.”

Warner declined to comment on tech companies’ lobbying efforts, but he reiterated that the Honest Ads Act would be a good start to addressing disinformation on social media. “We need to pass it into law so we can have a level playing field for digital political and issue ads,” he told Mother Jones. This past summer, Warner published a white paper exploring additional options to counter misinformation and protect user data. He said tech companies have taken too long to acknowledge that manipulation by bad actors is “a fundamental challenge for their business models.” 

“Congress can’t simply trust them to address these issues on their own,” Warner added, “and I suspect we will have bipartisan support to move forward.”

Facebook and Twitter representatives told Mother Jones their companies support the Honest Ads Act, but declined to discuss lobbying efforts. Google didn’t respond to requests for comment.

DiResta, the cybersecurity expert, agrees advertising disclosure is important, but she says it won’t stop disinformation operators, who are always upping their game. It’s truly a global problem: National security analysts have recently observed Iranian operatives deploying a playbook similar to the Kremlin’s, exploiting divisions to engage with users and get them to spread content organically. DiResta notes malicious actors also can use methods such as infiltrating private Facebook groups. “The ads are just a means to an end,” she says. “Ad disclosures are not the silver bullet to stopping disinformation.”

Solutions as a “social contract”

Ghosh, who was a policy adviser to President Barack Obama prior to his work at Facebook, advocates a multi-pronged regulatory approach. He recently co-authored a report calling for increased transparency for social networks’ ads and algorithms, improved privacy protections, and checks on companies’ monopolies on data and services.

This is critical as the United States heads into what promises to be another bitterly contentious election cycle in 2020. “We don’t want to have a media environment that subjects the individual to hate, to abuse, to disinformation,” Ghosh says. “This is about crafting a new social contract.”

Rogers, of the Global Disinformation Index, contends that existing laws could be leveraged to battle phony content, including financial regulations. “Twitter bots are a form of fraud,” he argues, noting that outside researchers have found that up to 15 percent of Twitter’s users are automated accounts. “That means shareholders aren’t getting accurate information” about Twitter’s user base and growth. “Where’s the Securities and Exchange Commission in this?”
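The researchers behind estimates like that 15 percent figure rely on machine-learned classifiers trained on many account features, nothing so simple as the toy heuristic below, which is offered only to illustrate the idea; its thresholds and field names are invented:

```python
from dataclasses import dataclass

# Toy heuristic for flagging likely automated accounts. Real studies
# (e.g., those behind the ~15% estimate) use ML classifiers over many
# features; this sketch, with invented thresholds, just shows the idea.

@dataclass
class Account:
    handle: str
    tweets_per_day: float      # sustained average
    active_hours_per_day: int  # hours of the day with activity

def looks_automated(acct: Account) -> bool:
    # Hypothetical thresholds: a sustained firehose of posts, around
    # the clock, is hard for a human to produce.
    return acct.tweets_per_day > 100 or acct.active_hours_per_day >= 22

accounts = [
    Account("journalist_42", tweets_per_day=12, active_hours_per_day=10),
    Account("patriot_eagle_bot", tweets_per_day=480, active_hours_per_day=24),
]
print([a.handle for a in accounts if looks_automated(a)])
# ['patriot_eagle_bot']
```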

Ghosh and co-author Ben Scott, a former State Department adviser, report that political candidates spent $1.4 billion on digital ads in 2016; spending is projected to be around $1.8 billion for 2018. “We need to change the infrastructure so voters can see where an ad is coming from, what political party has paid for it, and who has seen the ad and who has been targeted by it,” Ghosh says. They also recommend labeling bots and allowing auditors to monitor algorithms, like the one powering Google search, for disinformation and bias.
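One way to picture the infrastructure Ghosh describes is as a machine-readable disclosure record attached to every political ad. The schema below is purely hypothetical, with field names inferred from his description; no platform publishes exactly this record today:

```python
from dataclasses import dataclass

# Hypothetical schema for the ad-disclosure infrastructure Ghosh
# describes: who paid, who was targeted, who actually saw the ad.
# Field names are invented for illustration.

@dataclass
class PoliticalAdDisclosure:
    sponsor: str                   # entity that placed the ad
    paid_for_by: str               # verified funding source / party
    spend_usd: float
    targeting_criteria: list[str]  # e.g., interests, geography, age
    impressions: int               # aggregate count of who saw the ad

ad = PoliticalAdDisclosure(
    sponsor="Example PAC",
    paid_for_by="Example Party Committee",
    spend_usd=5_000.0,
    targeting_criteria=["Florida Panhandle", "ages 45-65"],
    impressions=15_000,
)
print(ad.paid_for_by)
```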

They urge lawmakers to give consumers control over their personal datasets, including the ability to take them from one service to another. Privacy laws passed in California and the European Union are good starts, Ghosh says, but neither offers broad protection across the United States.

And, they say, social media companies should be regulated as monopolies, with Facebook’s purchase of Instagram and WhatsApp treated with the same scrutiny as, say, the AT&T-Time Warner merger. “You could characterize this as a monopoly in two forms: in a market in social-media services, and in a market in consumer information,” Ghosh says. “That kind of concentration spells danger for the consumer.”

Several companies have reached out to Ghosh about his proposals, he says. “They don’t like it,” he notes, “but they know regulation is coming.”

*Correction: A previous version of this story misidentified the Facebook platform used by Myanmar’s military to promote ethnic cleansing. The story has been updated.
