Innovations and collaborations are helping protect people and advertisers from potentially harmful content.
Traditionally, enforcing safety measures on social media has meant taking down harmful content after it was posted and disabling the accounts that created it. Today, though, technological advancements are allowing services like Twitter to take a much more proactive and proportional approach to creating a safe and rewarding environment for the people and brands who use them.
“We want to move away from a reactive and punitive approach and use technology to focus on education, prevention and providing our users and advertisers with far more control over their experience,” says Oscar Rodriguez, Twitter’s director of product health.
While a small portion of the technology delivering platform safety is visible to people, much more of it works behind the scenes and involves recent investments in infrastructure and machine-learning models, as well as partnerships with third-party organizations, Rodriguez says. “We have this iceberg dynamic, where 5% of what we’re doing to keep Twitter safe is happening above the waterline,” he says. “What creates a safer experience for our customers is not always visible to them.”
Enriching the Social Experience
For people using Twitter around the world, we focus on using technology in ways that both prevent the sharing of potentially harmful content and allow people to control their own potential exposure to it.
For example, we recently unveiled Reply Prompts, which identify potentially harmful Tweets before they are sent and ask the poster to pause and think twice before sending. In 2020, we rolled out new conversation settings that let people determine who can respond to each of their Tweets—whether that’s anyone, only their followers, or only people mentioned in that specific Tweet.
Birdwatch, which is currently in beta testing, allows people in the pilot to add context to content that they believe is potentially misleading. Algorithms, such as reputation and consensus systems, are being developed to eventually surface notes deemed helpful and appropriate by a diverse set of contributors directly on Tweets.
Birdwatch is just one tool that can address misleading content and help people on Twitter make more informed decisions about what content they want to engage with, building on the Twitter Rules that explicitly prohibit the most harmful misinformation. “Our policies around misinformation and the enforcement of those policies work to identify content that is likely to lead to real-world harm or society-wide impact, as opposed to, say, unverified rumors that might impact a particular individual,” Rodriguez adds. “Birdwatch enables a broader approach to providing context to people so that they can understand a variety of viewpoints and then encourages them to look up more information on a particular topic.”
Moreover, we recently launched a pilot version of Safety Mode that temporarily autoblocks accounts that have been identified as harassing someone on the platform—before, someone would have to manually block or mute another account.
“It’s really all of these technologies that, when combined, mark a much larger shift toward making Twitter a safer place where we can have a healthier conversation,” Rodriguez says.
Building Advertiser Confidence
Beyond the platform safety measures Twitter provides to all people, we’ve also undertaken ongoing efforts to offer safety features specific to advertisers’ needs.
We have been introducing new practices and tools aimed at minimizing the occurrence of ads appearing next to objectionable content. “Our top areas of focus when it comes to using technology for brand safety are transparency and enhanced measures for content adjacency,” says Kate Fauth, Twitter’s product manager for brand safety.
For example, among the several ways we are reducing the odds that ads will appear next to objectionable content, Fauth points to two measures: machine learning and conversation monitoring. While advancements in machine learning are helping us identify and act on potentially problematic content, humans remain a critical part of the equation, she says, particularly when it comes to sensitive trending topics.
We are also currently building integrations with third-party media quality and verification providers such as DoubleVerify and Integral Ad Science that will allow advertisers to better understand the types of content that appear next to their ads in an in-feed, user-generated environment. “Transparency is a deeply held value at Twitter, and we believe that advertisers should have as much information as possible about where their ads appear,” Fauth says.
While we build proprietary solutions to enhance safety, we also turn to external technology providers and organizations to supplement our efforts and provide greater overall transparency and credibility. We partner with a vast array of organizations around the world to enhance safety in a variety of ways, says Nick Pickles, Twitter’s senior director of global public policy strategy, development and partnerships.
“Building an effective trust and safety effort relies on different expertise, different perspectives, different lived experiences and different technical inputs,” Pickles says. “And without collaboration with academics, civil society and governments around the globe, we wouldn’t have access to the wealth of expertise and information that we do.”
For example, we consult with the Trust & Safety Council—a group of independent expert organizations from around the world—ahead of product and policy launches. While consultation is key, the emphasis is on members highlighting potential risks they are observing in their markets. The Council ensures we are aware of ongoing, priority trends and policy issues to address globally.
From preventing child sexual exploitation to protecting the integrity of conversations around elections, we work with many different organizations to tap their expertise and help navigate a range of platform safety concerns. In the child protection space, for example, we partner with other organizations around the world to digitally identify offending content across all platforms so it can be removed faster. This allows us to be more proactive about sharing information and removing harmful content at speed and scale.
Adds Pickles: “Every partnership is different in the way that it impacts the product. I think that the value of the kind of partners that Twitter works with is what allows us to take a holistic approach to keeping our community safe.”
This article, in its original form, was previously published on partners.wsj.com on November 15, 2021. Wall Street Journal Custom Content is a unit of The Wall Street Journal advertising department. The Wall Street Journal news organization was not involved in the creation of this content.