The Trevor Project leaves X as anti-LGBTQ hate escalates

One year ahead of the next presidential election, LGBTQ organization the Trevor Project announces it is leaving X/Twitter amid escalating hate and online vitriol.

With the 2024 presidential election just a year away, advocates are doing everything they can to bring attention to the country’s most pressing social justice issues. Today, national LGBTQ youth organization the Trevor Project announced it is leaving X (formerly Twitter) amid growing anti-LGBTQ sentiment, both online and off.

“LGBTQ young people — and in particular, trans and nonbinary young people — have been unfairly targeted in recent years, and that can negatively impact their mental health. In 2023, hundreds of anti-LGBTQ bills have been introduced in states across the country, which can send the message that LGBTQ people are not deserving of love or respect. We have seen this rhetoric transcend politics and appear on social media platforms,” the Trevor Project said in a statement.

On Nov. 9, the organization posted the following message on its page:

The Trevor Project has made the decision to close its account on X given the increasing hate & vitriol on the platform targeting the LGBTQ community — the group we exist to serve. LGBTQ young people are regularly victimized at the expense of their mental health, and X’s removal of certain moderation functions makes it more difficult for us to create a welcoming space for them on this platform. This decision was made with input from dozens of internal and external perspectives; in particular, we questioned whether leaving the platform would allow harmful narratives and rhetoric to prevail with one less voice to challenge them. Upon deep analysis, we’ve concluded that suspending our account is the right thing to do.

A 2023 survey of LGBTQ teens conducted by the Trevor Project found that discrimination and online hate contribute to the higher rates of suicide risk reported by LGBTQ young people.

In June, GLAAD ranked X as the least safe social media platform for LGBTQ users in its annual analysis of online safety, known as the Social Media Safety Index. The report cites continued regressive policies, including the removal of protections for transgender users, and remarks by X owner Elon Musk as factors in creating a “dangerous environment” for LGBTQ Americans.

In April, a coalition of LGBTQ resource centers nationwide formally left the platform in response to the removal of hateful conduct protections for both LGBTQ and BIPOC users, saying in a joint statement: “2023 is on pace to be a record-setting year for state legislation targeting LGBTQ adults and youth. Now is a time to lift up the voices of those who are most vulnerable and most marginalized, and to take a stand against those whose actions are quite the opposite.”

Protections for the LGBTQ community and reproductive health access are expected to be flashpoints in the upcoming election cycle, especially among Republican candidates. At the same time, social media platforms, and the online spaces they create, are facing growing calls to address the rise of hate-filled content and misinformation — now exacerbated by the astonishing rise of generative AI tools — that disproportionately affect marginalized communities.

The Trevor Project directs any LGBTQ young people looking for a safe space online to its social networking site TrevorSpace.org or its Instagram, TikTok, LinkedIn, and Facebook accounts: “No online space is perfect, but having access to sufficient moderation capabilities is essential to maintaining a safer space for our community.”

Political ads on Facebook, Instagram required to disclose use of AI

Meta has introduced a set of AI rules for political advertisers on Facebook and Instagram.

OpenAI just announced its latest large language model (LLM), GPT-4 Turbo. Elon Musk’s xAI recently unveiled its own AI chatbot, Grok. And Samsung is jumping on the bandwagon, too, with its LLM, Gauss. On top of all this, AI-powered video and image generators are continuing to evolve. With artificial intelligence creating more and more content on the web, some social media platforms want users to know when media is created or altered by AI.

The latest company to step in with a set of policies around AI-created content is Meta, the parent company of Facebook and Instagram. And its new set of rules sets a standard on its platforms, specifically for political advertisers.

In a blog post on Wednesday, Meta stated it has a new policy that will force political advertisers to disclose when a Facebook or Instagram ad has been “digitally created or altered, including through the use of AI.” This includes “any photorealistic image or video, or realistic sounding audio, that was digitally created or altered.” The policy pertains to all social issue, electoral, and political advertisements.

According to Meta’s new policy, a disclosure on an ad will be required when the advert depicts a real, existing person “saying or doing something they did not say or do.” Furthermore, if an ad contains a fictional yet realistic-looking person, it too must include a disclosure. The same goes for any political ad that uses manufactured footage of a realistic event or manipulates footage of a real event that happened.

Some uses of AI or digital manipulation will not require disclosures, Meta says, but only those that are “inconsequential or immaterial to the claim, assertion, or issue raised in the ad.” Meta’s examples of these exceptions include image size adjustments, cropping, color correction, and image sharpening. The company also reiterates that even these techniques would need to be disclosed if they do change the claims or issues in the ad.

And, of course, these AI-created or altered ads are all still subject to Facebook and Instagram’s rules around deceptive or dangerous content. The company’s fact-checking partners can still rate these ads for misinformation or deceptive content.

AI will be a more prominent factor in next year’s elections, including the 2024 U.S. presidential election, than it has ever been before. Even Meta now has its own large language model as well as an AI chatbot product. As these technologies continue to evolve, expect more online companies to produce corporate or platform standards of their own. With those elections on the horizon, Meta seems to be setting the ground rules for political ads now.

Meta’s new AI policy will roll out officially in 2024 and will pertain to advertisers around the globe.

Meta faces pressure from human rights organizations for its role in Ethiopian conflict

A new report from Amnesty International accuses Meta of playing an inciting role in the ongoing conflict in Ethiopia’s Tigray region, pressuring the company to compensate victims and reform its content moderation.

Meta and its platform Facebook are facing continued calls for accountability and reparations following accusations that its platforms can exacerbate violent global conflicts.

The latest push comes in the form of a new report by human rights organization Amnesty International, which looked into Meta’s content moderation policies during the beginnings of an ongoing conflict in Ethiopia’s Tigray region and the company’s failure to respond to civil society actors calling for action before and during the conflict.

Released on Oct. 30, the report — titled “A Death Sentence For My Father”: Meta’s Contribution To Human Rights Abuses in Northern Ethiopia — zeroes in on the social media mechanisms behind the Ethiopian armed civil conflict and ethnic cleansing that broke out in the northern part of the country in Nov. 2020. More than 600,000 civilians were killed by battling forces aligned with Ethiopia’s federal government and those aligned with regional governments. The civil war later spread to the neighboring Amhara and Afar regions, during which time Amnesty International and other organizations documented war crimes, crimes against humanity, and the displacement of thousands of Ethiopians.

“During the conflict, Facebook (owned by Meta) in Ethiopia became awash with content inciting violence and advocating hatred,” writes Amnesty International. “Content targeting the Tigrayan community was particularly pronounced, with the Prime Minister of Ethiopia, Abiy Ahmed, pro-government activists, as well as government-aligned news pages posting content advocating hate that incited violence and discrimination against the Tigrayan community.”

The organization argues that Meta’s “surveillance-based business model” and algorithm, which “privileges ‘engagement’ at all costs” and relies on harvesting, analyzing, and profiting from people’s data, led to the rapid dissemination of hate-filled posts. A recent report by the UN-appointed International Commission of Human Rights Experts on Ethiopia (ICHREE) also noted the prevalence of online hate speech that stoked tension and violence.

Amnesty International has made similar accusations against the company for its role in the targeted attacks, murder, and displacement of Myanmar’s Rohingya community, and claims that corporate entities like Meta have a legal obligation to protect human rights and exercise due diligence under international law.

In 2022, victims of the Ethiopian war filed a lawsuit against Meta for its role in allowing inflammatory posts to remain on its social platform during the active conflict, based on an investigation by the Bureau of Investigative Journalism and the Observer. The petitioners allege that Facebook’s recommendation systems amplified hateful and violent posts and allowed users to post content inciting violence, despite the company being aware that it was fueling regional tensions. Some also allege that such posts directly led to the targeting and deaths of individuals.

Filed in Kenya, where Meta’s sub-Saharan African operations are based, the lawsuit is supported by Amnesty International and six other organizations, and calls on the company to establish a $1.3 billion fund (or 200 billion Kenyan shillings) to compensate victims of hate and violence on Facebook.

In addition to the reparations-based fund, Amnesty International is calling on Meta to expand its content moderation and language capabilities in Ethiopia, and to publicly acknowledge and apologize for contributing to human rights abuses during the war, as outlined in its recent report.

The organization’s broader recommendations also include the incorporation of human rights impact assessments in the development of new AI and algorithms, an investment in local language resources for global communities at risk, and the introduction of more “friction measures” — or site design that makes the sharing of content more difficult, like limits on resharing, message forwarding, and group sizes.

Meta has previously faced criticism for allowing unchecked hate speech, misinformation, and disinformation to spread on its algorithm-based platforms, most notably during the 2016 and 2020 U.S. presidential elections. In 2022, the company established a Special Operations Center to combat the spread of misinformation, remove hate speech, and block content that incited violence on its platforms during the Russian invasion of Ukraine. It’s deployed other privacy and security tools in regions of conflict before, including a profile lockdown tool for users in Afghanistan launched in 2021.

Additionally, the company has recently come under fire for excessive moderation, or “shadow-banning”, of accounts sharing information during the humanitarian crisis in Gaza, as well as fostering harmful stereotypes of Palestinians through inaccurate translations.

Amid ongoing conflicts around the world, including continued violence in Ethiopia, human rights advocates want to see tech companies doing more to address the quick dissemination of hate-filled posts and misinformation.

“The unregulated development of Big Tech has resulted in grave human rights consequences around the world,” Amnesty International writes. “There can be no doubt that Meta’s algorithms are capable of harming societies across the world by promoting content that advocates hatred and which incites violence and discrimination, which disproportionately impacts already marginalized communities.”

4 Google Pixel safety features that help keep users safe

Google Pixel users have more personal safety tools at their fingertips than they might realize.

Google’s diverse range of technological offerings — from the upcoming AI-enhanced homepage and Maps tools to its lineup of tablets, computers, and phones — is constantly evolving in a crowded tech market, where companies pitch new devices, tools, and software designed to upend the industry and change lives.

But for every new paid marvel, free and easily accessible technology — like a lot of the Google services you probably use every day — serves everyday purposes that are just as important to those in need.

New Google upgrades include Maps tools that enhance access for people with disabilities, flood-predicting capabilities to address the growing impact of climate change, and new features that help individuals build their digital and news literacy in an age of growing tech-based misinformation. For parents and caregivers, Google can automatically blur explicit images in Search. The company has even released steps for people to remove personal images and information from its results to combat doxxing and the spread of revenge porn.

Google’s safety offerings go even further — the company’s new guide to personal safety features on Pixel devices, for example, outlines additional ways individual users can put their personal tech and Google’s readily available resources to work keeping themselves and their communities safe.

Here are a few noteworthy personal device features that utilize Google’s Personal Safety app:

Emergency SOS

Like Apple’s Emergency SOS tool, Google Pixel’s Emergency SOS will dispatch emergency responders or alert a user’s emergency contacts that they need help.

How to use Emergency SOS:

  1. Make sure you have an emergency contact saved to your phone or Personal Safety app.

  2. Go to Settings.

  3. Choose “Safety & emergency.” Users can add an emergency contact here, or scroll directly to “Emergency SOS.”

  4. Use the slider to turn Emergency SOS on.

  5. Adjust alert settings to your liking. Emergency SOS defaults to a soundless, haptic alert, but users can turn on an audio alarm as well. Users can also choose between pressing and holding a button on the screen for three seconds or starting a five-second countdown to issue the actual alert.

  6. To issue an Emergency SOS in the future, just quickly press the power button five times or more, and follow the chosen prompt on screen.

Scheduled safety checks

Safety Checks are Google’s option for individuals wanting some peace of mind while out and about. Released alongside the Pixel Watch 2, the feature lets users set up a timed safety check, tied to a specific activity, with appointed emergency contacts; those contacts are alerted if the user fails to respond at the end of the safety check or if the check is cancelled. Users can also mark themselves safe during a Safety Check.

If a Safety Check goes unmarked, a user’s emergency contacts will get a text that shares their current location on Google Maps. If the phone turns off or loses signal, the Safety Check will remain active.

How to schedule safety checks:

  1. Make sure you have the Personal Safety app downloaded.

  2. Open the app and tap Safety Check.

  3. Select your activity (or “reason”) and the check’s duration. Checks can last from 15 minutes to 8 hours.

  4. Select the contacts you’d like to be notified. When the Safety Check begins, the contact will get a text with your name, the duration of the check, and the provided reason.

  5. Tap Start.

Car crash detection

Car crash detection can sense a severe car crash and alert emergency responders. If a user’s phone detects a crash, it will vibrate, ring loudly, and ask aloud and on the phone screen whether you need help. Users can then have their phone call emergency services by saying “Emergency” or tapping the emergency button on the screen twice. If the phone doesn’t get a response, it will automatically call emergency services, tell the responder that a car crash occurred, and share the device’s last known location.

Users can cancel the call by saying “Cancel” or tapping “I am ok.”

Car crash detection only works on Google Pixel 3, 4, and later. It won’t activate if a device is in Airplane mode or when Battery Saver is on.

How to turn on Crash Detection:

  1. Open the Personal Safety app.

  2. Make sure you have a SIM added to your phone.

  3. Tap “Features.”

  4. Scroll to “Car crash detection” and tap “Manage settings.”

  5. Turn on Car crash detection.

  6. Follow the on-screen prompts to enable device permissions, including access to your physical activity, and to record your voice for emergency detection.

Google Pixel Watch users can also turn on Fall Detection, which detects when a user has fallen and hasn’t moved for 30 seconds. After 30 seconds, the watch will vibrate, sound an alarm, and send a notification asking if you’re okay, Google explains. If the watch hasn’t detected movement or received a response after one minute, it will automatically call emergency services and share your location.

Crisis alerts

Google’s crisis alerts feature, again via the Personal Safety app, notifies users when a public emergency or local crisis, like a flood or other natural disaster, is happening in their area. When users tap on a crisis alert, they are taken to additional information from local governments and organizations about the event and how to prepare for it.

How to turn on crisis alerts:

  1. Go to the Personal Safety app.

  2. Tap “Features.”

  3. Scroll to “Crisis alerts.” Tap “Manage settings.”

  4. Turn on crisis alerts.

  5. Make sure location access is enabled; alerts won’t work without it.
