If you thought it was already hard to spot deepfakes on TikTok, the confusion and misinformation are only going to get worse. OpenAI CEO Sam Altman has confirmed that ChatGPT will get video creation tools within the next two years.
On Bill Gates’ Unconfuse Me podcast (via Tom’s Guide), Altman discussed the future of ChatGPT. When asked about the company’s plans for the next two years, Altman said that “multimodality will definitely be important… speech in, speech out, images, eventually video. Clearly people want that.”
How it works
Altman didn’t go into specifics on how video creation would function in ChatGPT, or even when it would launch, but we can make a guess based on the tool’s existing image-generation format. Users would simply type a description of what they want to see, and ChatGPT would return an AI-generated video.
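To make that concrete, here is a speculative sketch of what a prompt-to-video request might look like if OpenAI followed the pattern of its existing DALL-E 3 image endpoint in the official Python SDK. The image call below exists today; the video endpoint is purely hypothetical, since OpenAI has not announced its name or parameters.

```python
# Speculative sketch: prompt in, media out, following the existing image API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Today: text in, image out (this call exists in the current SDK).
image = client.images.generate(
    model="dall-e-3",
    prompt="A golden retriever surfing a wave at sunset",
    n=1,
    size="1024x1024",
)
print(image.data[0].url)

# Tomorrow (hypothetical): a video counterpart might follow the same shape,
# e.g. something like client.videos.generate(model=..., prompt=..., duration=...),
# but no such endpoint has been announced.
```

If video generation does arrive in ChatGPT itself, it would presumably be even simpler: a plain-language prompt in the chat window, with the generated clip returned in the conversation.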
There are obvious benefits to making video easier to create. As video becomes the backbone of social media, more and more people want to produce video content, and simplifying that process would help creators of all skill levels.
The deepfake problem
However, while such a tool isn’t inherently bad, it is open to misuse. As we’ve already seen with image generators like DALL-E and Midjourney, deepfakes of celebrities and political figures are simple to create. Midjourney even halted free trials over viral deepfakes just last year.
Deepfakes are already a major problem on social media platforms from TikTok to Facebook, ranging in severity from fake celebrity dance videos to fake political ads. Making video generation easier than ever will only make the misinformation problem worse, since a tool like ChatGPT would significantly lower the barrier to entry for creating them.
As we’ve learned in recent years, misinformation spreads quickly online and becomes harder to disprove the more it gets shared. While we’ve written about how to spot AI content online, these tools will only improve, and their output will become harder to detect over time, which means people will need to be even more vigilant about the content they consume.