The Trevor Project leaves X as anti-LGBTQ hate escalates

One year out from the next presidential election, LGBTQ youth organization the Trevor Project announces it is leaving X (formerly Twitter) amid escalating hate and online vitriol.

With the 2024 presidential election just a year away, advocates are doing everything they can to bring attention to the country’s most pressing social justice issues. Today, national LGBTQ youth organization the Trevor Project announced it is leaving X (formerly Twitter) amid growing anti-LGBTQ sentiment, both online and off.

“LGBTQ young people — and in particular, trans and nonbinary young people — have been unfairly targeted in recent years, and that can negatively impact their mental health. In 2023, hundreds of anti-LGBTQ bills have been introduced in states across the country, which can send the message that LGBTQ people are not deserving of love or respect. We have seen this rhetoric transcend politics and appear on social media platforms,” the Trevor Project said in a statement.

On Nov. 9, the organization posted the following message on its X page:

The Trevor Project has made the decision to close its account on X given the increasing hate & vitriol on the platform targeting the LGBTQ community — the group we exist to serve. LGBTQ young people are regularly victimized at the expense of their mental health, and X’s removal of certain moderation functions makes it more difficult for us to create a welcoming space for them on this platform. This decision was made with input from dozens of internal and external perspectives; in particular, we questioned whether leaving the platform would allow harmful narratives and rhetoric to prevail with one less voice to challenge them. Upon deep analysis, we’ve concluded that suspending our account is the right thing to do.

A 2023 survey of LGBTQ teens conducted by the Trevor Project found that discrimination and online hate contribute to the higher rates of suicide risk reported by LGBTQ young people.

In June, GLAAD named X the least safe social media platform for LGBTQ users in its annual analysis of online safety, the Social Media Safety Index. The report cites continued regressive policies, including the removal of protections for transgender users, and remarks by X owner Elon Musk as factors in creating a “dangerous environment” for LGBTQ Americans.

In April, a coalition of LGBTQ resource centers nationwide formally left the platform in response to the removal of hateful conduct protections for both LGBTQ and BIPOC users, saying in a joint statement: “2023 is on pace to be a record-setting year for state legislation targeting LGBTQ adults and youth. Now is a time to lift up the voices of those who are most vulnerable and most marginalized, and to take a stand against those whose actions are quite the opposite.”

Protections for the LGBTQ community and reproductive health access are expected to be flashpoints in the upcoming election cycle, especially among Republican candidates. At the same time, social media platforms, and the online spaces they create, face growing calls to address the spread of hate-filled content and misinformation that disproportionately affects marginalized communities, a problem now exacerbated by the astonishing rise of generative AI tools.

The Trevor Project directs any LGBTQ young people looking for a safe space online to its social networking site TrevorSpace.org or its Instagram, TikTok, LinkedIn, and Facebook accounts: “No online space is perfect, but having access to sufficient moderation capabilities is essential to maintaining a safer space for our community.”

Zuckerberg shot down multiple initiatives to address youth mental health online, claims a new lawsuit

Newly unsealed documents in a lawsuit against Meta outline a history of rejecting opportunities to address youth mental wellbeing.

Still embroiled in lawsuits over the company’s slow response to its platforms’ effects on young users, Meta CEO Mark Zuckerberg is now under fire for reportedly blocking attempts to address Meta’s role in a worsening youth mental health crisis.

According to newly unsealed court documents in a Massachusetts case against Meta, Zuckerberg was made aware of ongoing concerns about user mental wellbeing in the years prior to the Wall Street Journal investigation and subsequent congressional hearing. The CEO repeatedly ignored or shut down actions suggested by Meta’s top executives, including Instagram head Adam Mosseri and Meta’s president of global affairs, Nick Clegg.

Specifically, Zuckerberg passed on a 2019 proposal to remove popular beauty filters from Instagram, which many experts link to worsening self-image, unrealistic beauty standards, and the perpetuation of discrimination against people of color. Despite support for the proposal among other Instagram executives, the 102-page court document alleges, Zuckerberg vetoed the suggestion in 2020, saying he saw high demand for the filters and “no data” that they were harmful to users. A meeting with mental health experts was allegedly cancelled a day before a meeting on the proposal was scheduled to take place.

The documents also include a 2021 exchange between Clegg and Zuckerberg, in which Clegg forwarded a request from Instagram’s wellbeing team asking for an investment of staff and resources for teen wellbeing, including a team to address areas of “problematic use, bullying+harassment, connections, [and Suicide and Self-Injury (SSI)],” Insider reports.

While Clegg reportedly told Zuckerberg that the request was “increasingly urgent,” Zuckerberg ignored his message.

The Massachusetts case is yet another legal hit for Meta, which has been lambasted by state governments, parent coalitions, mental health experts, and federal officials for ignoring internal research and remaining complicit in social media’s negative effects on young users.

On Oct. 25, a group of 41 states and the District of Columbia sued Meta for intentionally targeting young people with features like “infinite scroll” and algorithmic recommendations, and for pushing them toward harmful content on platforms like Instagram, WhatsApp, Facebook, and Messenger.

In 2022, Meta faced eight simultaneous lawsuits across various states accusing the company of “exploiting young people for profit” and purposefully making its platforms psychologically addictive while failing to protect its users.

Meta’s not the only tech or social media giant facing potential legal repercussions for its role in catalyzing harmful digital behavior. The state of Utah’s Division of Consumer Protection (UDCP) filed a lawsuit against TikTok in October, claiming the app’s “manipulative design features” negatively affect young people’s mental health, physical development, and personal lives. Following a similar case from a Seattle public school district, a Maryland school district filed a lawsuit against nearly all popular social platforms in June, accusing the apps’ addictive design of “triggering crises that lead young people to skip school, abuse alcohol or drugs, and overall act out” in ways that are harmful to their education and wellbeing.

Since the 2021 congressional hearing that put Meta’s youth mental health concerns on public display, the company has launched a series of new parental control and teen safety measures, including supervision tools on Messenger and Instagram intended to protect young users from unwanted interactions and reduce their screen time.

Meta faces pressure from human rights organizations for its role in Ethiopian conflict

A new report from Amnesty International accuses Meta of playing an inciting role in an ongoing conflict in Ethiopia’s Tigray region, and presses the company to compensate victims and reform its content moderation.

Meta and its platform Facebook are facing continued calls for accountability and reparations following accusations that the company’s platforms can exacerbate violent global conflicts.

The latest push comes in the form of a new report by human rights organization Amnesty International, which examines Meta’s content moderation policies in the early stages of an ongoing conflict in Ethiopia’s Tigray region, as well as the company’s failure to respond to civil society actors calling for action before and during the conflict.

Released on Oct. 30, the report — titled “A Death Sentence For My Father”: Meta’s Contribution To Human Rights Abuses in Northern Ethiopia — zeroes in on the social media mechanisms behind the Ethiopian armed civil conflict and ethnic cleansing that broke out in the northern part of the country in Nov. 2020. More than 600,000 civilians were killed by warring forces aligned with Ethiopia’s federal government and those aligned with regional governments. The civil war later spread to the neighboring Amhara and Afar regions, during which time Amnesty International and other organizations documented war crimes, crimes against humanity, and the displacement of thousands of Ethiopians.

“During the conflict, Facebook (owned by Meta) in Ethiopia became awash with content inciting violence and advocating hatred,” writes Amnesty International. “Content targeting the Tigrayan community was particularly pronounced, with the Prime Minister of Ethiopia, Abiy Ahmed, pro-government activists, as well as government-aligned news pages posting content advocating hate that incited violence and discrimination against the Tigrayan community.”

The organization argues that Meta’s “surveillance-based business model” and algorithm, which “privileges ‘engagement’ at all costs” and relies on harvesting, analyzing, and profiting from people’s data, led to the rapid dissemination of hate-filled posts. A recent report by the UN-appointed International Commission of Human Rights Experts on Ethiopia (ICHREE) also noted the prevalence of online hate speech that stoked tension and violence.

Amnesty International has leveled similar accusations against the company over its role in the targeted attacks, murder, and displacement of Myanmar’s Rohingya community, and argues that corporate entities like Meta have a legal obligation to protect human rights and exercise due diligence under international law.

In 2022, victims of the Ethiopian war filed a lawsuit against Meta over its role in allowing inflammatory posts to remain on its social platform during the active conflict, based on an investigation by the Bureau of Investigative Journalism and the Observer. The petitioners allege that Facebook’s recommendation systems amplified hateful and violent posts, and that the company allowed users to post content inciting violence despite being aware that it was fueling regional tensions. Some also allege that such posts directly led to the targeting and deaths of individuals.

Filed in Kenya, where Meta’s sub-Saharan African operations are based, the lawsuit is supported by Amnesty International and six other organizations, and calls on the company to establish a $1.3 billion fund (or 200 billion Kenyan shillings) to compensate victims of hate and violence on Facebook.

In addition to the reparations fund, Amnesty International is calling on Meta to expand its content moderation and language capabilities in Ethiopia and to publicly acknowledge and apologize for contributing to human rights abuses during the war, as outlined in its recent report.

The organization’s broader recommendations also include the incorporation of human rights impact assessments in the development of new AI and algorithms, an investment in local language resources for global communities at risk, and the introduction of more “friction measures” — or site design that makes the sharing of content more difficult, like limits on resharing, message forwarding, and group sizes.

Meta has previously faced criticism for allowing unchecked hate speech, misinformation, and disinformation to spread on its algorithm-based platforms, most notably during the 2016 and 2020 U.S. presidential elections. In 2022, the company established a Special Operations Center to combat the spread of misinformation, remove hate speech, and block content that incited violence on its platforms during the Russian invasion of Ukraine. It’s deployed other privacy and security tools in regions of conflict before, including a profile lockdown tool for users in Afghanistan launched in 2021.

Additionally, the company has recently come under fire for excessive moderation, or “shadow-banning,” of accounts sharing information during the humanitarian crisis in Gaza, as well as for fostering harmful stereotypes of Palestinians through inaccurate translations.

Amid ongoing conflicts around the world, including continued violence in Ethiopia, human rights advocates want to see tech companies doing more to address the quick dissemination of hate-filled posts and misinformation.

“The unregulated development of Big Tech has resulted in grave human rights consequences around the world,” Amnesty International writes. “There can be no doubt that Meta’s algorithms are capable of harming societies across the world by promoting content that advocates hatred and which incites violence and discrimination, which disproportionately impacts already marginalized communities.”