Tech

OpenAI’s Sora 2 AI Video Maker Is Already Generating Chaos

Who could’ve predicted such a thing? Who, I ask, who?

Looks pretty real, right? Sora 2 generated this. – Credit: OpenAI

How can anybody be sure that anything they see these days is real?

The New York Times reported on Friday, October 3, that in the three days since OpenAI’s Sora 2 became available, people have already managed to create potentially inflammatory videos that look so much more realistic than past AI-generated videos that they could spread misinformation on an entirely new scale.


Yep, that’s a Sora-generated video of OpenAI CEO Sam Altman shoplifting in a Target. – Credit: @GabrielPeterss4 at x.com

Yep. We saw this coming.

Put aside the amusing video of OpenAI’s Sam Altman getting caught shoplifting from a Target, and consider the ramifications when people use Sora’s incredible realism to spread misinformation and inflame public tensions.

One Sora-generated video showed what appeared to be security camera footage of a masked man stuffing ballots into a mailbox. Another showed the immediate aftermath of an explosion in Israel. Others showed robberies in convenience stores and home break-ins, as The Times reports.

Sora has guardrails that keep it from generating videos of violence and of famous, living public figures, such as Trump, but people have found ways around them. For one Sora video, a person merely asked for footage of rallygoers at a political rally listening to a president; the president wasn’t shown, but the voice certainly sounded like Obama’s.

For its part, OpenAI includes a moving watermark that hops around the screen during playback. If you’re watching a Sora video, you’ll see a small, anthropomorphic puff of smoke with two eyes, sort of like if Kirby had a child with a cloud.

It’s transparent, as a watermark should be, but its opacity is turned up higher than on most watermarks I see, so it’s not easy to miss.

How long will it be before people find a way to remove it, though? It’s already scarily easy to goad people into vigilante lynchings, insurrections, and the like with just a few words in a Facebook post.

Give them a photo-realistic, AI-generated video showing something that never happened or existed? We knew the moment was coming. I just didn’t think we’d get here this fast.
