The Latest Updates from the Frontier of AI, GenAI Video, and Immersive Experiences
Video generation models aren’t new. Artificial intelligence has been able to create realistic video content for a while, and consumer tools are readily available. Google and OpenAI have each previously released text-to-video and image-to-video generation models: Veo and Sora, respectively. So, why is the technology only blowing up now with the launch of OpenAI’s Sora app?
For those unfamiliar, Sora is OpenAI’s AI video-generation app, combining a generator and a social platform in one. It launched for iOS and on the web in September and recently debuted on Android. Even if you haven’t heard of Sora, chances are you’ve seen a video from it shared on another platform, like X or TikTok.
Sora is on its way to becoming a household name for OpenAI, just as ChatGPT did two years ago. It’s catching on and infiltrating pop culture in a way Google’s Veo video-generation models haven’t. After trying both Veo 3 and Sora 2, it’s clear that OpenAI is winning the generative video race by going all-in on presentation — and removing as many limitations as possible.
Source: AndroidCentral
Destinate creates professionally produced cinematic AI videos for major openings, launches, and pre-debut campaigns. Using a hybrid approach that blends GenAI, real-world assets, and creative direction, we help brands bring destinations, developments, and experiences to life before they open.
Subscribe to Our Ready, Player, Travel Newsletter Today