Tech / Technology

WhatsApp under fire for AI-generated sticker responses to ‘Palestine’

When users search for “Palestine” on Meta-owned WhatsApp, the AI-generated stickers return biased images.

Earlier this year, Meta-owned WhatsApp started testing a new feature that lets users generate stickers from a text description using AI. When users search “Palestinian,” “Palestine,” or “Muslim boy Palestine,” the feature sometimes returns an image of a gun or a boy holding a gun, according to a report from The Guardian.

According to The Guardian’s Friday report, search results vary depending on who is searching. Prompts for “Israeli boy,” by contrast, generated stickers of children playing and reading, and even prompts for “Israel army” didn’t generate images of people with weapons. Compared with the images generated from the Palestinian searches, that is alarming. A person with knowledge of the discussions, whom The Guardian did not name, told the outlet that Meta employees have reported and escalated the issue internally.

“We’re aware of this issue and are addressing it,” a Meta spokesperson said in a statement to Mashable. “As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems. We’ll continue to improve these features as they evolve and more people share their feedback.”

It is unclear how long the differences spotted by The Guardian persisted, or whether they persist at all. For example, when I searched “Palestinian,” the results included a sticker of a person holding flowers, a smiling person wearing a shirt that appears to say “Palestinian,” a young person, and a middle-aged person. When I searched “Palestine,” the results showed a young person running, a peace sign over the Palestinian flag, a sad young person, and two faceless kids holding hands. When I searched “Muslim boy Palestinian,” the results showed four young smiling boys. Similar results appeared when I searched “Israel,” “Israeli,” or “Jewish boy Israeli.” Mashable had multiple users search for the same terms and, while the results differed, none of the images from searches of “Palestinian,” “Palestine,” “Muslim boy Palestinian,” “Israel,” “Israeli,” or “Jewish boy Israeli” resulted in AI stickers with any weapons.

There are still differences, though. For instance, when I searched “Palestinian army,” one image showed a person in uniform holding a gun, while the other three were simply people in uniform; when I searched “Israeli army,” the results showed three people in uniform and one person in uniform driving a military vehicle. Searching for “Hamas” returned no AI stickers. Again, results will differ depending on who is searching.

This comes at a time when Meta has come under fire for allegedly shadowbanning pro-Palestinian content, locking pro-Palestinian accounts, and adding “terrorist” to Palestinian users’ bios. Other AI systems, including Google Bard and ChatGPT, have also shown significant signs of bias regarding Israel and Palestine.

Elon Musk’s AI project is launching. He says it’s the ‘best that currently exists’.

Elon Musk’s xAI project is launching on Nov. 4, Musk tweeted.

Elon Musk’s artificial intelligence project, xAI, is launching its first product on Saturday.

Musk shared the news on X/Twitter on Friday, saying that xAI will release its “first AI” to a select group of users.

“In some important respects, it is the best that currently exists,” he tweeted.

Led by Musk, xAI is an artificial intelligence company made up of AI experts who have previously worked at companies such as DeepMind, OpenAI, Google, Microsoft, and Tesla, as well as at the University of Toronto. The company launched in July 2023 with the self-proclaimed goal to “understand the true nature of the universe.”

Now, it appears that xAI will launch a product to some beta testers, though it’s unclear who is getting access.

Musk recently participated in an AI-related discussion with UK Prime Minister Rishi Sunak. He also hinted at various developments, such as a new AI-based “See similar” posts feature that’s now rolling out on X/Twitter.

On its website, xAI says it’s a “separate company from X Corp, but will work closely with X (Twitter), Tesla, and other companies to make progress towards our mission.”

While it’s unclear what type of AI xAI will launch tomorrow, it will be joining a growing number of AI products launched in the past year or so, including OpenAI’s ChatGPT and Google’s Bard chatbot.

OpenAI’s response to the AI executive order? Silence.

Many leading AI companies issued statements in response to President Biden’s executive order, but OpenAI has yet to say anything.

In the wake of President Biden’s executive order on Monday, AI companies and industry leaders have weighed in on this watershed moment in AI regulation. But the biggest player in the AI space, OpenAI, has been conspicuously quiet.

The Biden-Harris administration’s far-ranging executive order addressing the risks of AI builds upon voluntary commitments secured by 15 leading AI companies. OpenAI was among the first batch of companies to promise the White House safe, secure, and trustworthy development of its AI tools. Yet the company hasn’t issued any statement on its website or X (formerly known as Twitter). CEO Sam Altman, who regularly shares OpenAI news on X, hasn’t posted anything either.

OpenAI has not responded to Mashable’s request for comment.

Of the 15 companies that made a voluntary commitment to the Biden administration, the following have made public statements, all expressing support for the executive order: Adobe, Amazon, Anthropic, Google, IBM, Microsoft, Salesforce, and Scale AI. Nvidia declined to comment.

In addition to crickets from OpenAI, Mashable has yet to hear from Cohere, Inflection, Meta, Palantir, and Stability AI. But OpenAI and Altman’s publicity tour proclaiming the urgent risks of AI and the need for regulation makes the company’s silence all the more noticeable.

Altman has been vocal about the threat posed by generative AI, including that made by his own company. In May, Altman, along with technology pioneers Geoffrey Hinton and Bill Gates, signed an open letter stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

At a Senate hearing in May, Altman expressed the need for AI regulation: “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” said Altman in response to an inquiry from Sen. Richard Blumenthal (D-CT) about the threat of superhuman machine intelligence.

So far, cooperation with lawmakers and world leaders has worked in OpenAI’s favor. Altman participated in the Senate’s bipartisan closed-door AI summit, giving OpenAI a seat at the table for formulating AI legislation. Shortly after Altman’s testimony, leaked documents from OpenAI showed the company lobbying for weaker regulation in the European Union.

It’s unclear where OpenAI stands on the executive order, but open-source advocates say the company already has too much lobbying influence. On Wednesday, the same day as the AI Safety Summit in the U.K., more than 70 AI leaders issued a joint statement calling for a more transparent approach to AI regulation. “The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst,” said the statement.

Meta Chief AI Scientist Yann LeCun, one of the signatories, doubled down on this sentiment on X (formerly known as Twitter) by calling out OpenAI, DeepMind (a subsidiary of Google), and Anthropic for using fear-mongering to ensure favorable outcomes. “[Sam] Altman, [Demis] Hassabis, and [Dario] Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry,” he posted.

Anthropic and Google leadership have both provided statements supporting the executive order, leaving OpenAI the lone company accused of regulatory capture yet to issue any comment.

What could the executive order mean for OpenAI?

Many of the testing provisions in the EO relate to huge foundation models not yet on the market and future development of AI systems, suggesting consumer-facing tools like OpenAI’s ChatGPT won’t be impacted much.

“I don’t think we’re likely to see any immediate changes to any of the generative AI tools available to consumers,” said Jake Williams, former US National Security Agency (NSA) hacker and faculty member at IANS Research. “OpenAI, Google, and others are definitely training foundation models, and those are specifically called out in the EO if they might impact national security.”

So, whatever OpenAI is working on might be subjected to government testing.

In terms of how the executive order might directly impact OpenAI, Beth Simone Noveck, director of the Burnes Center for Social Change, said it could slow the pace at which new products and updates are released, and companies will have to invest more in research and development as well as compliance.

“Companies developing large-scale language models (e.g. ChatGPT, Bard and those trained on billions of parameters of data) will be required to provide ongoing information to the federal government, including details of how they test their platforms,” said Noveck, who previously served as the first United States Deputy Chief Technology Officer under President Obama.

More than anything, the executive order signals an alignment with growing consumer expectations for greater control and protection of their personal data, said Avani Desai, CEO of Schellman, a top CPA firm that specializes in IT audit and cybersecurity.

“This is a huge win for privacy advocates as the transparency and data privacy measures can boost user confidence in AI-powered products and services,” Desai said.

So while the consequences of the executive order may not be immediate, it squarely applies to OpenAI’s tools and practices. You’d think OpenAI might have something to say about that.

Best free ChatGPT courses

The best free ChatGPT courses on Udemy. Learn how to boost your business, increase your productivity, and so much more.

Have you tried using ChatGPT? If you have, you’ll know all about its enormous potential. If you haven’t, you’ve got a big surprise coming your way.

Whether or not you’re familiar with this popular chatbot, you should take the opportunity to learn more about this technology before it’s everywhere. This is the moment. Leave it too late and you’ll be left behind.

Fortunately, platforms like Udemy are offering a wide range of online courses on ChatGPT for free. We’ve curated a selection of standout courses from Udemy to kickstart your learning journey. These are the best online ChatGPT courses you can take for free:

These free online courses do not include certificates of completion or direct instructor messaging. But you still get unrestricted access to all the video content, so you can learn at your own pace.


ChatGPT can now analyze documents including PDFs

A ChatGPT update brings several important new features, including the ability to upload documents and have them analyzed.

OpenAI’s ChatGPT is getting an important update that allows users to upload documents and have them analyzed.

The new version, currently in beta and rolling out to some ChatGPT Plus subscribers (@luokai via The Verge), gives users the ability to upload many types of documents, including PDFs and data files.

I was able to test the chatbot’s new feature myself by turning on beta features in the settings and choosing “Advanced data analysis,” which allows for file uploads and gives ChatGPT the ability to write and execute Python code. I first tried uploading Shakespeare’s Macbeth in PDF format, though ChatGPT couldn’t analyze the file due to its formatting (it did, however, recognize the play and offered to give me a summary of it anyway). I also tried a scholarly article on the economic impact of melting ice caps; ChatGPT analyzed the file and provided bulleted key points, as well as a number of additional insights, rounding it out with a comprehensive summary of the document.

This functionality can be incredibly powerful in certain situations, as you can now feed ChatGPT specific documents and have it extract summaries, various data points, or even write graphs and charts based on that data.
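To give a concrete sense of what the “write and execute Python code” part looks like, here is a minimal sketch of the kind of script such a tool might generate for an uploaded data file. The file contents and column names below are hypothetical, invented for illustration; this is not ChatGPT’s actual output:

```python
import csv
import io
import statistics

# Hypothetical uploaded file: a small CSV of yearly ice-cap measurements.
uploaded = io.StringIO(
    "year,extent_million_km2\n"
    "2019,4.32\n2020,3.92\n2021,4.72\n2022,4.67\n2023,4.23\n"
)

rows = list(csv.DictReader(uploaded))
extents = [float(r["extent_million_km2"]) for r in rows]

# The kind of "key points" a data-analysis tool might extract.
print(f"Rows analyzed: {len(rows)}")
print(f"Mean extent:   {statistics.mean(extents):.2f} million km^2")
print(f"Range:         {min(extents):.2f}-{max(extents):.2f} million km^2")
```

From a summary like this, the tool can then go on to draft bullet points or plot charts over the same parsed data.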

It’s worth noting that ChatGPT’s creator OpenAI has already landed in hot water for training its models on copyrighted work. Anyone using ChatGPT to analyze documents should be mindful of which documents they use, and even more so, how they use ChatGPT’s results.

ChatGPT’s new beta also has a feature that makes it easier to use: it automatically switches between various modes of operation, including Browsing, DALL-E, and Advanced Data Analysis. This was not enabled for me, but it definitely sounds better than having to choose a specific tool every time you fire up ChatGPT.

While some ChatGPT Plus subscribers can try these new features out right now, there’s no word on when these features might become available to everyone.

Google paid the $26 billion price of being ‘default’

Google wants people to use it as their default search engine — so much so that it might have been willing to pay for it.

Google seems scared.

The tech giant really wants people to use it as their default search engine — so much so that it might have been willing to pay for it.

According to evidence from the US v. Google federal antitrust trial reported by CNBC, Google paid $26.3 billion in 2021 to be the default search engine on web and mobile browsers. That’s the cost of more than two million Rolex watches. With $26 billion, you could buy multiple professional sports teams, fund significant scientific research projects, or bankroll massive infrastructure development in a country. If you spent $1 per second, it would take you over 820 years to spend $26 billion.
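That last figure checks out. As a quick sanity check, here’s the arithmetic in Python (the function name is ours, purely for illustration):

```python
# Verify the claim: spending $1 per second, $26 billion
# would take over 820 years to exhaust.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

def years_to_spend(total_dollars: float, rate_per_second: float = 1.0) -> float:
    """Years needed to spend total_dollars at rate_per_second dollars/sec."""
    return total_dollars / rate_per_second / SECONDS_PER_YEAR

print(f"{years_to_spend(26_000_000_000):.0f} years")  # roughly 824 years
```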

The Verge did the math: given how much money Google makes in ad revenue, likely one of the main reasons to pay for default placement, the company is spending 16 percent of its search revenue and 29 percent of its profit on these deals. Remarkably, $26.3 billion is just 1.7 percent of Google’s total market cap. It’s also more than half of what Elon Musk bought Twitter, now X, for, which feels like another L for Musk.

It’s unclear how much money Google paid specific companies and partners to be the default search engine on their platforms, but CNBC reported that Apple likely received a pretty big piece of the pie. Google could pay Apple as much as $19 billion, CNBC reported in a separate piece.

Google, of course, would probably have preferred these numbers to stay secret. Now everyone knows exactly how much their default settings are worth — more than three billion chicken sandwiches from Popeyes.