GLAAD: Social Media Platforms Are Failing to Keep LGBTQ+ Users Safe
GLAAD's second annual Social Media Safety Index finds five major platforms wanting.
July 13, 2022, 1:58 PM EST
GLAAD has released its second annual Social Media Safety Index, and it has found that major social media platforms remain lacking when it comes to safety for LGBTQ+ users.
The organization evaluated five platforms: Instagram, Facebook, Twitter, YouTube, and TikTok. All of them scored under 50 percent on the index, with Instagram at 48 percent, Facebook 46 percent, Twitter 45 percent, YouTube 45 percent, and TikTok 43 percent.
The platforms were evaluated on 12 indicators. These include whether the company discloses a policy commitment to protect LGBTQ+ users from harm, discrimination, harassment, and hate on the platform; whether users have an option to add pronouns to their profiles; whether the company has a policy that expressly prohibits targeted deadnaming and misgendering of other users; what options users have to control the company's collection, inference, and use of information related to their sexual orientation and gender identity; and whether content moderators, including those employed by contractors, receive training on the needs of vulnerable users, including LGBTQ+ users.
"Today's political and cultural landscapes demonstrate the real-life harmful effects of anti-LGBTQ rhetoric and misinformation online," GLAAD President and CEO Sarah Kate Ellis said in a press release. "The hate and harassment, as well as misinformation and flat-out lies about LGBTQ people, that go viral on social media are creating real-world dangers, from legislation that harms our community to the recent threats of violence at Pride gatherings. Social media platforms are active participants in the rise of anti-LGBTQ cultural climate and their only response should be to urgently create safer products and policies, and then enforce those policies."
Problems identified in the report, as stated in its executive summary, include "inadequate content moderation and enforcement (including issues with both anti-LGBTQ hateful content and over-moderation/censorship of LGBTQ users); harmful and polarizing algorithms; and an overall lack of transparency and accountability across the industry, among many other issues -- all of which disproportionately impact LGBTQ users and other marginalized communities who are uniquely vulnerable to hate, harassment, and discrimination. These problems are even more exacerbated for folks who are members of multiple communities (BIPOC, women, immigrants, people with disabilities, people of historically marginalized faiths, etc.). Social media platforms should be safe for everyone, in all of who they are."
The report found that severe harassment increased in the past year; that platforms failed to act against viral misinformation, which increases support for anti-LGBTQ+ legislation, or to enforce their existing policies against misinformation; that only select platforms prohibit actions like targeted misgendering and the promotion of conversion therapy; and that companies did not use the tools they already have to curb hateful content.
GLAAD did find improvements on some platforms. For instance, this year TikTok stepped up to protect transgender and nonbinary people by adopting a prohibition on targeted misgendering and deadnaming. Twitter is the only other platform that has such a policy, and it has been in place since 2018.
"All platforms should follow the lead of TikTok and Twitter and should immediately incorporate an explicit prohibition against targeted misgendering and deadnaming of transgender and non-binary people into hateful conduct policies," GLAAD's senior director of social media safety, Jenni Olson, said in the release. "This recommendation remains an especially high priority in our current landscape where anti-trans rhetoric and attacks are so prevalent, vicious, and harmful. We also urge these companies to effectively moderate such content and to enforce these policies."
GLAAD's other recommendations include improving the design of algorithms that currently circulate and amplify harmful content, extremism, and hate; training moderators to understand the needs of LGBTQ+ users and to moderate across all languages, cultural contexts, and regions; transparency with regard to content moderation, community guidelines and terms of service policy implementation, and algorithm designs; strengthening and enforcing existing community guidelines and terms of service that protect LGBTQ+ people and others; and respecting data privacy, especially where LGBTQ+ people are vulnerable to serious harms and violence, including ceasing the practice of targeted surveillance advertising, in which companies use algorithms to recommend content to users in order to maximize profit.
GLAAD also released new data from a May 2022 study conducted with Community Marketing & Insights. It found that 84 percent of LGBTQ+ adults agree there are not enough protections on social media to prevent discrimination, harassment, or disinformation, and that 40 percent of all LGBTQ+ adults and 49 percent of transgender and nonbinary people do not feel welcomed and safe on social media.
In addition, the Anti-Defamation League's newly released Online Hate and Harassment report found that 66 percent of LGBTQ+ users experienced harassment online, with 54 percent of LGBTQ+ users reporting severe harassment including sustained harassment, stalking, or doxxing, GLAAD notes.
GLAAD's index was created with support from Craig Newmark Philanthropies, the Gill Foundation, and Logitech. GLAAD convened an advisory committee of thought leaders to advise on industry and platform-specific recommendations in the index. Committee members include Alok, author, performer, and media personality; Lucy Bernholz, director of the Digital Civil Society Lab at Stanford University; Alejandra Caraballo, a clinical instructor at the Cyberlaw Clinic, Berkman Klein Center for Internet & Society at Harvard Law School; Jelani Drew-Davi, director of campaigns at Kairos; Liz Fong-Jones, principal developer advocate for SRE and observability at Honeycomb; Evan Greer, director of Fight for the Future; Leigh Honeywell, CEO and cofounder of Tall Poppy; Maria Ressa, journalist and CEO of Rappler; Tom Rielly, founder of the TED Fellows program, Digital Queers, and PlanetOut.com; Brennan Suen, deputy director of external affairs at Media Matters for America; and Kara Swisher, contributing writer and host of the Sway podcast at The New York Times.
Before releasing the index, GLAAD held briefings with each platform named in it to review the issues that LGBTQ+ users face and the recommendations described in the report. GLAAD plans to continue the dialogue with tech industry leaders about LGBTQ+ safety and to spotlight new and existing safety issues facing LGBTQ+ users.