As Russia continues its ruthless war in Ukraine, pundits are speculating about what social-media platforms might have done years ago to undermine propaganda well before the attack. Amid accusations that social media fuels political violence, and even genocide, it is easy to forget that Facebook evolved from a site for university students to rate each other’s physical attractiveness. Instagram was founded to facilitate alcohol-based gatherings. TikTok and YouTube were built to share funny videos.
The world’s social-media platforms are now among the most important forums for discussing urgent social problems, such as Russia’s invasion of Ukraine, COVID-19 and climate change. Techno-idealists continue to promise that these platforms will bring the world together — despite mounting evidence that they are pulling us apart.
Efforts to regulate social media have largely stalled, perhaps because no one knows what something better would look like. If we could hit ‘reset’ and redesign our platforms from scratch, could we make them strengthen civil society?
Researchers have a hard time studying such questions. Most corporations want to ensure studies serve their business model and avoid controversy. They don’t share much data. And getting answers requires not just making observations, but doing experiments.
In 2017, I co-founded the Polarization Lab at Duke University in Durham, North Carolina. We have created a social-media platform for scientific research. On it, we can turn features on and off, and introduce new ones, to identify those that improve social cohesion. We have recruited thousands of people to interact with each other on this platform, alongside bots that can simulate social-media users.
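The platform’s internals are not described here, but the core idea of switching features on and off for different experimental conditions can be sketched as a simple feature-flag table. The condition names and features below are hypothetical illustrations, not the lab’s actual configuration.

```python
# Illustrative sketch of per-condition feature toggles for a research
# platform of the kind described. All names here are hypothetical.
CONDITIONS = {
    "control":   {"anonymity": False, "engagement_ranking": True},
    "treatment": {"anonymity": True,  "engagement_ranking": False},
}

def feature_enabled(condition: str, feature: str) -> bool:
    # Features not listed for a condition default to off, so a new
    # feature can be introduced and enabled condition by condition.
    return CONDITIONS.get(condition, {}).get(feature, False)
```

A design like this lets researchers hold everything constant between groups except the one feature under study.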
We hope our effort will help to evaluate some of the most basic premises of social media. For example, tech leaders have long measured success by the number of connections people have. Anthropologist Robin Dunbar has suggested that humans struggle to maintain meaningful relationships with more than 150 people. Experiments could encourage some social-media users to create deeper connections with a small group of users while allowing others to connect with anyone. Researchers could investigate the optimal number of connections in different situations, to work out how to optimize breadth of relationships without sacrificing depth.
A related question is whether social-media platforms should be customized for different societies or groups. Although today’s platforms seem to have largely negative effects on politics in the United States and Western Europe, the opposite might be true in emerging democracies (P. Lorenz-Spreen et al. Preprint at https://doi.org/hmq2; 2021). One study suggested that Facebook could reduce ethnic tensions in Bosnia–Herzegovina (N. Asimovic et al. Proc. Natl Acad. Sci. USA 118, e2022819118; 2021), and social media has helped Ukraine to rally support around the world for its resistance.
The next question is what types of algorithm could encourage consensus and discourage hate, abuse and division. Most platforms order posts according to engagement. But this incentivizes extreme statements that generate controversy and create vicious cycles of incivility and outrage. Our laboratory’s Bipartisanship Leaderboard ranks Twitter users by the number of likes their posts receive from both Republicans and Democrats. What would happen if news feeds were filled with content that diverse groups appreciate, instead of people preaching to the choir?
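The exact formula behind the Bipartisanship Leaderboard is not given here, but the contrast between engagement ranking and cross-partisan ranking can be sketched with a hypothetical scoring rule: count only the appeal a post has on both sides, so that one-sided outrage no longer rises to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes_rep: int  # likes from self-identified Republicans
    likes_dem: int  # likes from self-identified Democrats

def engagement_score(p: Post) -> int:
    # Conventional feeds: rank by total engagement,
    # regardless of who it comes from.
    return p.likes_rep + p.likes_dem

def bipartisan_score(p: Post) -> int:
    # Hypothetical alternative: a post scores only as high as its
    # support on the side that likes it LESS, so content that appeals
    # to one group alone ranks low. (Illustrative metric only; not
    # the Bipartisanship Leaderboard's actual formula.)
    return min(p.likes_rep, p.likes_dem)

posts = [
    Post("one-sided outrage", likes_rep=900, likes_dem=5),
    Post("cross-partisan appeal", likes_rep=300, likes_dem=280),
]

by_engagement = sorted(posts, key=engagement_score, reverse=True)
by_bipartisan = sorted(posts, key=bipartisan_score, reverse=True)
```

Under total engagement the outrage post wins (905 likes to 580); under the cross-partisan rule it drops to the bottom, because its minimum cross-party support is only 5.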
Yet another line of research concerns what would happen if social-media users had to be identifiable in real life. Anonymity can help people to avoid censorship in authoritarian regimes or to explore alternative viewpoints without peer pressure. How can we give people the freedom to explore ideas anonymously without enabling trolls or imposters?
Answers to such questions could inform regulators, entrepreneurs and any effort to make social media healthy for society. We cannot leave social-media companies to find these answers themselves. These companies will always struggle to balance profits with improving human life, but they will respond to legislation, public opinion and individual behaviour. Facebook, which once seemed invincible, just announced its most dismal earnings report in years and admitted it is struggling to compete with TikTok for users.
In the wake of Frances Haugen’s testimony to Congress, in which she blew the whistle on Facebook for failing to curb misinformation, mental-health harms, crime and more, public trust in tech companies is low. A social-media platform for scientific research could help them. Facebook can’t suddenly make some users anonymous, but it could test this feature in a carefully controlled setting. (Disclosure: we have received funding from social-media companies, but without restrictions on what we study or publish.)
Building a broader platform for scientific research on social media requires an unprecedented interdisciplinary effort that involves social scientists, computer scientists and many others in and outside industry. It also needs extensive funding — ideally from governments or specialized philanthropies. Perhaps a government agency to regulate social media should be funded by tech corporations themselves, based on the model of the US Food and Drug Administration.
It’s a tall order, but regulators and tech leaders are currently flying blind. Without evidence-based insights about how to innovate, pundits and policymakers will continue to speculate about how to fix a system that is beyond repair — while the rest of us suffer the consequences.
C.B. has served as an academic consultant for Twitter’s Incentives Team, which is exploring new ways to increase positive behaviour on its platform. In this capacity, he has been paid $2,675. The Polarization Lab, which he co-directs, has also received unrestricted gifts to support our research from Twitter ($50,000), Facebook ($150,000) and Google ($200,000).