OpenAI is expanding the reach of its AI-powered video engine: users can now generate Sora 2 videos on the web, not only through the iOS app. The update opens up new access points and arrives alongside an increase in permitted video duration, making the Sora 2 experience more flexible and accessible.
Prior to today’s update, Sora 2 video generation was restricted to the invite-only iOS app, locking out users without iPhones. Web support opens the model to a much broader audience, though an invite code is still required to access Sora 2 itself. ChatGPT Plus and Pro subscribers can continue to use the original Sora model without an invite; the invite remains essential only for Sora 2’s enhanced features.
In the same announcement, OpenAI revealed that base users of Sora 2 can now generate video clips up to 15 seconds long, an upgrade over the prior 10-second cap. Pro-tier subscribers gain even more flexibility, with support for generating AI videos up to 25 seconds. This expansion empowers creators to produce richer, more expressive content.
For ChatGPT Pro users, OpenAI has also enabled the Sora storyboard on the web. This feature provides an editing canvas to insert reference images, adjust resolution, set aspect ratios, and manipulate duration. A video timeline lets creators prompt individual frames or segments, giving granular control over the generated content.
Users will see the video timeline interface and can author different text prompts for separate frames or segments, guiding the AI to shape varying actions or transitions within a single clip.
Sora 2 is designed to deliver more reliable physics, improved object permanence, and synchronized audio. These enhancements allow more realistic compositions, such as a basketball bouncing rather than teleporting when a shot misses. OpenAI also embeds visible and invisible provenance signals (e.g., watermarks and C2PA metadata) to mark content as AI-generated.
To meet the increased compute demands, OpenAI has ramped up infrastructure, enabling the web rollout and duration expansion. Meanwhile, partnerships such as Mattel’s collaboration to test product-concept visualization via Sora 2 are underway.
The broader launch also raises ethical and operational challenges. Some users have criticized Sora 2’s social-style feed as encouraging superficial content consumption. Others question moderation and safety, especially given the model’s capacity for realistic renderings of people and events. OpenAI states that early-access features are restricted: uploads of images of real people, especially minors, are limited, and watermarking along with reverse audio/image tracking help enforce accountability.
Some content creators see Sora 2’s app-driven format as limiting, relegating powerful generative tech to short scrollable clips. Balancing easy access with creative integrity and responsible use is clearly a pressing task for OpenAI’s roadmap.
The web access expansion means creators on non-iOS platforms can finally test and adopt Sora 2 workflows. Longer durations unlock more expressive storytelling for marketing, social content, or narrative shorts. The storyboard and timeline tools make iterative control easier, reducing the need for external editing tools.
That said, early adopters will still contend with invite-only gatekeeping, moderation constraints, and feature rollouts. As OpenAI refines policies and infrastructure, access may broaden further.
OpenAI now lets users generate Sora 2 videos on the web and has raised duration limits. This dual upgrade marks a key phase in making advanced AI video tools more accessible, customizable, and powerful across digital content communities.