ChatGPT can now analyze documents including PDFs

A ChatGPT update brings several important new features, including the ability to upload documents and have them analyzed.

OpenAI’s ChatGPT is getting an important update that allows users to upload documents and have them analyzed.

The new version, currently in beta and rolling out to some ChatGPT Plus subscribers (@luokai, via The Verge), gives users the ability to upload many types of documents, including PDFs and data files.

I was able to test the chatbot’s new feature myself by turning on beta features in the settings and then choosing “Advanced data analysis,” which allows for file uploads and gives ChatGPT the ability to write and execute Python code. I first tried uploading Shakespeare’s Macbeth in PDF format, though ChatGPT couldn’t analyze the file due to its formatting (it did, however, recognize the play, and it offered to give me a summary of it anyway). I also tried with a scholarly article on the economic impact of melting ice caps; ChatGPT analyzed the file and provided bulleted key points, as well as a number of additional insights, rounding things out with a comprehensive summary of the document.
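
For a rough sense of what that looks like under the hood, here is a minimal sketch of the kind of Python the Advanced Data Analysis mode might generate for an uploaded PDF. The filename and the choice of the pypdf library are assumptions for illustration; the tool writes its own code on the fly, so what it actually runs will differ.

    # A sketch of the sort of Python ChatGPT's Advanced Data Analysis might
    # write for an uploaded PDF; the filename and the pypdf library are
    # illustrative assumptions, not what the tool actually ran.
    from collections import Counter

    from pypdf import PdfReader

    reader = PdfReader("uploaded_document.pdf")  # hypothetical uploaded file

    # Concatenate the extractable text from every page.
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Basic stats a summary could start from: page count, word count,
    # and the most common longer words.
    words = [w.strip(".,;:!?()").lower() for w in text.split()]
    words = [w for w in words if len(w) > 4]
    print(f"Pages: {len(reader.pages)}")
    print(f"Words: {len(words)}")
    print("Most frequent terms:", Counter(words).most_common(10))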

This functionality can be incredibly powerful in certain situations, as you can now feed ChatGPT specific documents and have it extract summaries and various data points, or even produce graphs and charts based on that data.
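
To make the graphs-and-charts part concrete, here is a minimal sketch of that workflow, assuming a hypothetical CSV with “year” and “revenue” columns has been uploaded; the code ChatGPT actually produces depends entirely on the file and the prompt.

    # Minimal sketch of summarizing an uploaded data file and charting it,
    # assuming hypothetical "year" and "revenue" columns.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("uploaded_data.csv")  # hypothetical uploaded file

    # A quick numeric summary of every column: the kind of "data points"
    # the chatbot can report back.
    print(df.describe())

    # A simple line chart of one column against another.
    df.plot(x="year", y="revenue", kind="line", marker="o",
            title="Revenue by year")
    plt.tight_layout()
    plt.savefig("revenue_by_year.png")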

It’s worth noting that ChatGPT’s creator OpenAI has already landed in hot water for training its models on copyrighted work. Anyone using ChatGPT to analyze documents should be mindful of which documents they use, and even more so, how they use ChatGPT’s results.

ChatGPT’s new beta also has a feature that makes it easier to use, as it automatically switches between various modes of operation, including Browsing, DALL-E, and Advanced Data Analysis. This was not enabled for me, but it definitely sounds better than having to choose the specific tool you want to use every time you fire up ChatGPT.

While some ChatGPT Plus subscribers can try these new features out right now, there’s no word on when these features might become available to everyone.

The White House announces an executive order on AI regulation — how ChatGPT and its ilk are affected

The White House just announced an executive order on AI regulation, which means major players like OpenAI, Google, and Microsoft must abide by its new rules.

The White House just announced a thunderous executive order tackling AI regulation. These directives are the “strongest set of actions any government in the world has ever taken” to protect American citizens from the risks posed by AI, according to White House Deputy Chief of Staff Bruce Reed.

The Biden administration has been working on plans to regulate the untethered AI industry. The order builds on the Biden-Harris blueprint for an AI Bill of Rights as well as voluntary commitments from 15 leading tech companies to work with the government for safe and responsible AI development.

Instead of waiting for Congress to pass its own legislation, the White House is storming ahead with an executive order to mitigate AI risks while capitalizing on its potential. With the widespread use of generative AI like ChatGPT, the urgency to harness AI is real.

White House AI executive order: 10 key provisions you need to know

What does the executive order look like? And how will it affect AI companies? Here’s what you need to know.

1. Developers of powerful AI systems (e.g., OpenAI, Google and Microsoft) must share the results of their safety tests with the federal government

In other words, while a prominent AI company is training its model, it is required to share the results of red-team safety tests with the federal government before the model is released to the public. (A red team is a group of people who test the security and safety of a system by posing as malicious actors.)

According to a senior administration official, the order focuses on future generations of AI models, not current consumer-facing tools like ChatGPT. Furthermore, companies that would be required to share safety results are those that meet the highest threshold of computing performance. “[The threshold] is not going to catch AI systems trained by graduate students or even professors. This is really catching the most powerful systems in the world,” said the official.

2. Red-team testing will be held to high standards set by the National Institute of Standards and Technology

The Department of Homeland Security and the Department of Energy will also work together to determine whether AI systems pose risks to cybersecurity as well as to the nation’s chemical, biological, radiological, and nuclear infrastructure.

3. Address the risks of AI models used for science- and biology-related projects

New standards for “biosynthesis screening” are in the works to protect against “dangerous biological materials” engineered by AI.

4. AI-generated content must be watermarked

The Department of Commerce will roll out guidance for ensuring all AI-generated content — audio, imagery, video, and text — is labeled as such. This will allow Americans to determine which content is created by a non-human entity, making it easier to identify deceptive deepfakes.

5. Continue building upon the ‘AI Cyber Challenge’

For the uninitiated, the AI Cyber Challenge is a Biden administration initiative that seeks to establish an advanced cybersecurity program that uses AI tools to find and fix vulnerabilities in critical software.

6. Lean on Congress to pass “bipartisan data privacy legislation”

The executive order is a message to Congress to speed things up. Biden is calling on lawmakers to ensure that Americans’ privacy is protected while prominent AI players train their models. Children’s privacy will be a primary focus.

7. Dig into companies’ data policies

The White House says that it will evaluate how agencies and third-party data brokers collect and use “commercially available” information, meaning data that can be purchased rather than data that is truly public. Some “personally identifiable” data is available to the public, but that doesn’t mean AI players have free rein to use this information.

8. Tamp down discrimination exacerbated by AI

Guidance will be rolled out to landlords, federal contractors, and more to reduce the possibility of bias. On top of that, the government will introduce best practices to address discrimination in AI algorithms. Plus, the Biden administration will address the use of AI in sentencing within the criminal justice system.

9. Attract top global talent

As of today, the ai.gov site has a portal for applicants seeking AI fellowships and job opportunities in the U.S. government. The order also seeks to update visa criteria for immigrants with AI expertise.

10. Support workers vulnerable to AI developments

The Biden administration will support workers’ collective bargaining power by developing principles and best practices to protect workers against potential harms like surveillance, job replacement, and discrimination. The order also calls for a report on AI’s potential to disrupt labor markets.

Mashable will be down in D.C. to get more information about how the new AI executive order will affect major players like OpenAI, Google, and Microsoft, as well as the average American citizen. Stay tuned for our coverage.

Google paid the $26 billion price of being ‘default’

Google wants people to use it as their default search engine — so much so that it might have been willing to pay for it.

Google seems scared.

The tech giant really wants people to use it as their default search engine — so much so that it has been willing to pay dearly for it.

According to evidence presented in the US v. Google federal antitrust trial, as reported by CNBC, Google paid $26.3 billion in 2021 to be the default search engine on web and mobile browsers. That’s the cost of more than two million Rolex watches. With $26 billion, you could purchase multiple professional sports teams, fund significant scientific research projects, or support massive infrastructure development in a country. If you spent $1 per second, it would take you over 820 years to spend $26 billion.

The Verge did the math: measured against the money Google makes in ad revenue, which is likely the main reason it pays to be the default search engine, the company is spending 16 percent of its search revenue and 29 percent of its profit to get this done. Remarkably, $26.3 billion is just 1.7 percent of Google’s total market cap. It’s also more than half of what Elon Musk paid for Twitter, now X, which feels like another L for Musk.
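
For a quick sanity check, the back-of-the-envelope script below works backward from only the figures cited above; the implied totals are rough derivations from those percentages, not reported numbers.

    # Back-of-the-envelope derivation using only the figures cited above;
    # the implied totals are rough estimates, not reported numbers.
    payment = 26.3e9  # reported default-placement spend for 2021

    implied_search_revenue = payment / 0.16  # "16 percent of its search revenue"
    implied_profit = payment / 0.29          # "29 percent of its profit"
    implied_market_cap = payment / 0.017     # "1.7 percent of ... market cap"

    print(f"Implied search revenue: ${implied_search_revenue / 1e9:.0f}B")
    print(f"Implied profit:         ${implied_profit / 1e9:.0f}B")
    print(f"Implied market cap:     ${implied_market_cap / 1e9:.0f}B")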

It’s unclear exactly how much Google paid specific companies and partners to make its search engine the default on their platforms, but CNBC reported that Apple likely received a pretty big piece of the pie. Google could pay Apple as much as $19 billion, CNBC reported in a separate piece.

Google, of course, would probably have preferred these numbers to stay secret. Now everyone knows exactly how much those default settings are worth — more than three billion chicken sandwiches from Popeyes.