The Applied Social Media Lab is a two-year project (June 2024 to June 2026) at Harvard’s Berkman Klein Center for Internet & Society. We are bringing industry-trained technologists into academia to build social media solutions in the public interest and establish a long-lasting community of practice. This document explains the strategy we will take to accomplish our mission.
We define “social media in the public interest” broadly.
Any system that enables human connection and exchange online is a form of social media. Conventional platforms like Facebook are only a small part of the story. Social media today includes video creator platforms like YouTube, TikTok, and Twitch; partially-private communities like WhatsApp groups and Discord servers; and app/game features, such as reviews on Google Maps and live chat on Roblox. Social media also includes digital environments for teaching and learning. We believe that it’s ultimately better to be inclusive than precise, to capture the full diversity of online social experiences.
The public interest is best served when social media fosters healthy and thriving human interactions. Social media is almost everywhere today, enabling and shaping the future of speech, making its impact on the well-being of society a pressing public concern. At its best, social media is positive and nurturing, but it doesn’t feel that way right now. Serious problems like fraud, hate, harassment, and child endangerment are pervasive, and online conflicts are spilling into real-life spaces. Culturally and linguistically homogeneous systems do a poor job of protecting the distinct needs of different communities worldwide. Ubiquitous scanning and monitoring threaten our privacy. Despite companies announcing huge investments in safety, things don’t seem to be getting better. Why?
Today’s dominant approach to social media fails the public interest.
These issues persist because too many of today’s platforms use the same narrow strategy to confront problems: promote and police. First, our thoughts, feelings, and creative ideas are standardized as measurable, interchangeable units of “content.” Then, recommendation systems elevate “good” content, defined as “whatever causes people and advertisers to use our product more,” while moderation tools and algorithms partially weed out “bad” content under the banner of “trust and safety” or “integrity.” Because algorithmic moderation at global scale is very difficult, especially across language and cultural barriers, some amount of bad stuff always gets through to cause harm. Sometimes the system fails entirely, when the quest for usage and profits ends up actually promoting the bad stuff.
The people who build promote-and-police systems often mean well, and do their best to foster healthy communities, but the public is rarely given more than token input into their decisions. We are presented with a fixed bundle of rules and features that we cannot meaningfully modify or opt out of. If we don’t like the bundle, our only option is to try another platform, but transferring our friends and history is difficult or impossible. This lock-in makes it too hard for newer or healthier modes of human interaction (such as small, self-governing communities) to survive and grow, which creates the false impression that most people prefer the status quo.
Social media in the public interest must be open to variety.
It is too early in the history of the Internet to allow promote-and-police or other narrow practices to become social media’s only framework and final equilibrium. The public interest demands choice. We should be able to organize communities, protect ourselves, and move our data between platforms as we see fit. Just as there are many paths to human thriving, social media should make it easy for people to explore many different approaches to healthy online interactions.
Importantly, this does not mean abandoning a platform’s responsibility to treat people with a baseline level of dignity. A completely ungoverned online space can too easily become a swamp (because bad actors take over) or a desert (because everyone flees). Problems start when platform governance is not transparent, responsive, or accountable. We deserve understandable content policies and enforcement practices based on a participatory rules-making process. Society needs access to data about on-platform discourse, including popular themes, trends, and harms, to ensure that platforms are fulfilling their obligations. Inspired by the famous aphorism of the free software movement (“free as in speech, not free as in beer”), we want social media to be “open as in ‘open book,’ not open as in ‘open season.’”
An academic lab can open the path to a healthier future.
A lab of technologists working within the academic tradition of teaching and learning can do things that would not be possible in any other way. Corporations are typically expected to hoard knowledge for competitive advantage, or in fear of a public relations backlash. We will collaborate openly with peer researchers and institutions, soliciting input and sharing discoveries widely. Large social media platforms move cautiously because of the burden of maintaining their installed base. We will focus on new solutions that point to what social media could be in ten years or beyond. Start-up businesses are often at the mercy of the business and API choices of incumbent platforms. We will look for technology approaches that are resilient in the face of change. And we will not be doctrinaire: we know there are many ways to assess and intervene for the public good, and we welcome many different approaches.
With this freedom and opportunity, our team will build, test, and share software in four focus areas:
- Spaces for civil discourse and collaboration that offer new ways for people to exchange views and deliberate constructively online, and that can be adopted by other platforms.
- Transparency tools that make detailed information about social media platforms available and understandable to researchers, journalists, civil society organizations, and the general public.
- Personalized safety applications that give individuals, families, and communities the ability to customize their experiences and protect themselves from online threats and harms across devices and social platforms.
- Interoperable software infrastructure to make it easier for new companies to build social media platforms that support healthy and varied human experiences.
Our approach is independent, but not adversarial. We hope that today’s large platforms will find our software interesting, and even inspiring. We will connect with them as we see opportunities to collaborate in the public interest.
Our success depends on our people and our methodology.
Our work will be experimental and iterative, focused on imaginative freedom and rapid prototyping of new solutions in search of major breakthroughs that can drive substantial change. As such, our most important resource is people – the staff, students, fellows, affiliates, workshop participants, network members, and advisors who will collaborate to produce ideas and code. We will provide them with the support and technical infrastructure they need to build useful software and practices, and the environment to try ambitious things without fear of failure.
Our commitment to people goes beyond our full-time staff. We will directly support a community of research fellows and students to explore new options and grow as leaders in the space. We will also leverage BKC’s unique convening power to conduct a regular series of talks, workshops, and conferences to connect and expand our group of collaborators and supporters. We will also work with peers in policy and civil society who will contribute to and share our ideas. We ultimately aim to provide self-sustaining community tools that will keep this expanded group together to continue the work of the lab far into the future.
We will ensure learning, safety, and stewardship of resources by giving the work of the lab a clear operating structure. When an idea progresses beyond prototyping, a written charter will document the project’s alignment with our focus areas, theory of change, predicted milestones, and any potential new harms that the work might cause. We will use consistent software development and go-to-market processes to track and evaluate progress. If a project shows promise as a potential “home run” breakthrough that can change the future trajectory of social media, we will be quick to shift people to accelerate and publicize the work. When a project ends, we will write and share a retrospective of the work done and lessons learned.
We will measure our progress in both software and community.
At the end of two years, we will have succeeded if we:
- Land one or more “home run” software breakthroughs that significantly move the future of social media toward the public interest;
- Establish a durable community of current and future technology leaders who will continue building social media software in the public interest.
These are not easy tasks, and success is not guaranteed, but the history of the Internet has been written by dedicated teams working toward a shared vision of a better future. We walk in their footsteps and hope that in the next two years (and beyond) we will make a meaningful contribution to a healthier future for social media.