A new AI software that generates realistic videos is raising concerns around misinformation applications
Credit: NurPhoto/Getty Images
OpenAI’s new text-to-video AI tool, Sora, produces realistic video from text or photo prompts. Announced on February 15, 2024, Sora has not yet been released to the public but is already raising concerns about potential misuse.
Journalist Lauren Leffer covers the story in Scientific American and consults with research experts, including Irene Pasquetto, assistant professor at the University of Maryland College of Information Studies (INFO). In the article, Leffer dives into the technology behind Sora as well as its possibilities and perils, such as spreading misinformation.
Pasquetto cautions that “overstating Sora’s risks or possible harms can easily contribute to the cloud of hype around AI,” and that tackling disinformation is ultimately a social question, not a technical one. It’s important, she says, to keep the harms in context and to focus on root causes: although Sora makes it easier and quicker to produce short videos, currently the dominant content on social media, it doesn’t, in itself, pose a new problem. There are already numerous ways to manipulate online videos.
Read the full article here.
The original article was written by Lauren Leffer and published by Scientific American on March 4, 2024.