© 2026 WNMU-FM
Upper Great Lakes News, Music, and Arts & Culture
OpenAI's Sora app may be going away, but its legacy will be the spread of AI video slop

SCOTT DETROW, HOST:

OpenAI has announced it is shutting down an app that could make AI-generated videos with the click of a button. NPR's Geoff Brumfiel reports on how it already changed the internet forever.

GEOFF BRUMFIEL, BYLINE: The app was called Sora, and it worked like this. Users typed in a brief description of what they wanted to see, and Sora would generate a 10-second video that looked completely real, stuff like police body cam footage of a dog driving a car.

(SOUNDBITE OF ARCHIVED RECORDING)

AUTOMATED VOICE: (As Police Officer) All right. Paws off the wheel, buddy.

(SOUNDBITE OF TIRES SCREECHING)

AUTOMATED VOICE: (As Police Officer) Hey, don't. Hey, stop. Stop the car. Dispatch, driver just fled. It's a dog. Repeat.

BRUMFIEL: The app offered an infinite scroll of AI slop like that. But Sora also let users download videos and share them elsewhere. And share they did. Sora quickly became a go-to app for generating fake news, like videos of ICE agents arresting priests.

(SOUNDBITE OF ARCHIVED RECORDING)

AUTOMATED VOICE: (As ICE Agent #1) Bishop, stop resisting. Hands behind your back.

AUTOMATED VOICE: (As Bishop) You have no right. This is a church.

AUTOMATED VOICE: (As ICE Agent #2) Sir...

BRUMFIEL: That video was completely fake, as were other Sora videos of bombs falling in the Mideast, Venezuelans celebrating the U.S. capture of Nicolás Maduro and much, much more. Sora caused problems for OpenAI. There were accusations of copyright violations. And it gobbled up computing power generating all those videos. The company did not say why it stopped the app. But AI video slop will probably be Sora's greatest legacy, says Hany Farid, a professor at UC Berkeley who specializes in studying digital images. The app only lasted six months. But now?

HANY FARID: Every single video, every image, everything is now in doubt.

BRUMFIEL: Farid and his colleagues recently conducted research where they asked people to try and tell the difference between real and AI-generated videos, including videos made by Sora.

FARID: People were pretty bad at it. On average, their accuracy was about 70%, where chance is 50%.

BRUMFIEL: But people were actually worse at identifying real videos than fake ones. That's because they no longer trusted their eyes.

FARID: In fact, authenticating real content is harder because you only have the lack of evidence, which people don't feel comforted in.

BRUMFIEL: In the end, Farid says, liars will prosper because people will no longer be as likely to believe real videos of abuse and corruption.

FARID: It fuels that liar's dividend, which, of course, is incredibly dangerous.

BRUMFIEL: The problem isn't going away. There are many other tools that can generate fake AI content, and they're better than ever. Meanwhile, OpenAI says it's redirecting the team that worked on Sora to help build AI-powered robots instead. Geoff Brumfiel, NPR News.

(SOUNDBITE OF LOLA YOUNG SONG, "CONCEITED")

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Geoff Brumfiel works as a senior editor and correspondent on NPR's science desk. His editing duties include science and space, while his reporting focuses on the intersection of science and national security.