
‘I Expect Some Really Bad Stuff To Happen,’ Says the CEO of ChatGPT’s Parent Company—Here’s What He’s Talking About

Key Takeaways

  • OpenAI CEO Sam Altman recently warned that “some really bad stuff” is coming with AI—especially deepfakes and other “really strange or scary moments.”
  • OpenAI’s new video app Sora 2 claimed the top spot on Apple’s App Store within days of its launch late last month, a sign of how fast deepfake-style tech is going mainstream.
  • Altman said he hopes society learns how to build guardrails before the tech gets even more powerful.

Sam Altman, CEO of OpenAI, the company behind ChatGPT, is issuing unexpectedly stark warnings about the effects of his own products and those like them.

“I expect some really bad stuff to happen because of the technology,” he said in a recent interview on the a16z podcast from the venture capital firm Andreessen Horowitz.

The warning isn’t hypothetical. Videos made with the new version of OpenAI’s Sora, released late last month by invitation only, spread across social media as the app climbed to number one on Apple’s (AAPL) U.S. App Store. Among them were Sora-authored deepfakes of Martin Luther King Jr. and other public figures, including Altman himself, who was depicted committing various crimes. (OpenAI later blocked users from making Martin Luther King Jr. videos on Sora.)

But if Altman truly expects “really bad stuff to happen,” why is his company seemingly helping to accelerate its arrival?

Why This Matters To You

AI-generated deepfakes can be nearly indistinguishable from real videos, making it harder to trust what you see on social media. Confidence in news footage or financial-advice videos may erode faster than tech companies and regulators can build safeguards, and scammers are already using similar tools to produce fake videos for fraud. Be cautious, and question content on social media before taking it as truth.

Altman: We Must ‘Co-Evolve’ With AI

Altman’s rationale for going ahead with this public release is that society needs a test drive.

“Very soon the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want,” he said on the podcast.

Rather than perfecting the technology behind closed doors, he argues society and AI must “co-evolve”—that “you can’t just drop the thing at the end.” His theory: Give people early exposure so communities can build norms and guardrails before these tools become even more powerful.

The stakes include losing what we’ve long taken as evidence of the truth—videos of events have helped change world history. But the upside, according to Altman, is that we’ll be better prepared when even more sophisticated tools arrive.

Warning

Holocaust-denial videos created with Sora 2 collected hundreds of thousands of likes on Instagram within days of the app’s launch, according to the Global Coalition Against Hate and Extremism. The organization argues that OpenAI’s usage policies—which lack specific prohibitions against hate speech—have helped enable extremist content to flourish online.

What Altman Says Comes Next

Altman’s warning wasn’t just about fake videos. It was about what happens when so many of us outsource our decisions to algorithms few people understand.

“I do still think there are going to be some really strange or scary moments,” he said, stressing that just because AI hasn’t yet caused a catastrophic event “doesn’t mean it never will.”

“Billions of people talking to the same brain” could create “weird, societal-scale things,” he said. Put another way, it could lead to unexpected chain reactions, causing consequential shifts in information, politics, and communal trust that ripple faster than anyone can control.

Despite these society-wide stakes, Altman argued against regulating the technology.

“Most regulation probably has a lot of downside,” he said, though he added that he supports “very careful safety testing” for what he called “extremely superhuman” models.

“I think we’ll develop some guardrails around it as a society,” he said.

