SpongeBob SquarePants Cooking Meth and Fake JFK Speeches: How the Sora 2 Launch Went Sideways

The tech company's latest bet is on more advanced animations, but it quickly had to backtrack due to glaring copyright issues

At the end of 2024, OpenAI unveiled Sora, a text-to-video AI model that could generate moving images based on user prompts, ranging from stylistic animations to photorealistic “footage.” Although these snippets often included visual errors that clearly marked them as products of AI processing, the ramifications of such technology were felt far and wide: As early adopters proclaimed new creative opportunities, political commentators fretted over the potential for disinformation, while the debate in Hollywood about whether and how to leverage AI took on added urgency. (In March, OpenAI even held a Sora event at an L.A. movie theater to woo industry insiders, screening 11 short “films” made with the model.)

Almost a year after Sora’s debut, the AI boom (which many have argued is a bubble) continues apace, and OpenAI has now unveiled Sora 2. The AI firm describes the updated model’s outputs as “more physically accurate, realistic, and more controllable than prior systems,” with “synchronized dialogue and sound effects.” OpenAI CEO Sam Altman, meanwhile, called it “a tremendous research achievement,” and said that using it was “the most fun I’ve had with a new product in a long time.” For now, Sora 2 is only accessible by exclusive invite — an OpenAI spokesperson tells Rolling Stone that they have a waiting list and are “unable to provide a code at this time” — but all the hype and the tightly controlled release don’t mean the rollout has been entirely smooth sailing.

Disinformation and Extremist Content

An important distinction between Sora and Sora 2 is that the latter is now the basis for a new app — simply called “Sora” — that functions as a social media network. It’s essentially a version of TikTok with nothing but artificially generated content. As such, videos appear in a user’s feed, and can be liked and remixed by others on the platform. Last week, on the day Sora 2 officially launched, an OpenAI employee who works on the product claimed to have posted the first viral video there: a deepfake of security camera footage showing Altman shoplifting graphics processing units, or GPUs, hardware essential for the computing power to run AI systems such as Sora itself.
The implications were obvious. Not only did other people generate similar bogus footage of Altman and post it as if it were authentic, but tech reporters at The Washington Post and elsewhere soon demonstrated that Sora 2 could depict real people dressed as Nazis, fabricate false archival footage of John F. Kennedy and Martin Luther King Jr. saying things they never really did, insert other users into historical events such as the Jan. 6 Capitol riot, and generate “ragebait” scenes of confrontations between individuals of different races. While plenty of the early videos were patently unrealistic — a segment in which the late rapper Tupac Shakur appears on Mister Rogers’ Neighborhood, for example, or a 1990s-era commercial for a toy version of Jeffrey Epstein‘s private island — it’s clear that the updated model can be abused to extremist ideological ends.

Copyright Infringement

Pikachu, Ronald McDonald, the kids of South Park, and Peter Griffin from Family Guy were among the many pieces of protected intellectual property to show up on the Sora app shortly after it launched. Copyright considerations aside, some of it was harmless, yet it doesn’t take a corporate lawyer to understand that images of SpongeBob SquarePants cooking meth or sporting a Hitler mustache are going to cause legal headaches down the line. “The only conclusion I can draw is OpenAI is trying to get sued,” quipped one early user on X, sharing screenshots of Sora videos featuring well-known cartoon characters.

Sure enough, just three days after the launch of Sora 2, OpenAI had to crack down on this legally hazardous content with a revised copyright policy. Whereas the company had first announced that any material was fair game unless rightsholders opted out of the platform — potentially a sneaky way of permitting the appropriation of almost any branded content — Altman announced in a blog post on Friday that they were switching to an “opt-in” arrangement that would give rightsholders “more granular control” over how their IP does or doesn’t appear on Sora. The CEO noted that some “edge cases” might get through the added guardrails, though users did start receiving error notices on prompts that indicated a possible “similarity to third-party content.”

Mounting Energy Usage

Altman’s Friday update also acknowledged that Sora users “are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.” This explosion of video generation places significant strain on OpenAI’s data centers. By one estimate from researchers writing in MIT Technology Review earlier this year, even a short, non-high-definition video clip may require more than 700 times the energy it takes to produce a high-quality still image.
