- Sora | OpenAI
Turn your ideas into videos with hyperreal motion and sound
- Sora: Creating video from text | OpenAI
Sora builds on past research in DALL·E and GPT models. It uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. As a result, the model is able to follow the user’s text instructions in the generated video more faithfully.
- Sora 2 is here - OpenAI
Our latest video generation model is more physically accurate, realistic, and controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app.
- Sora is here - OpenAI
Our video generation model, Sora, is now available to use at sora.com. Users can generate videos up to 1080p resolution, up to 20 seconds long, and in widescreen, vertical, or square aspect ratios. You can bring your own assets to extend, remix, and blend, or generate entirely new content from text.
- Sora 2 Model | OpenAI API
Flagship video generation with synced audio.
- Getting started with the Sora app - OpenAI Help Center
Sora is a new OpenAI app for creating short videos with synchronized audio. It’s powered by our next-generation model, Sora 2, which improves realism, physics, and instruction-following. The app is designed for low-friction, collaborative creation: make something from text or a photo, remix what you love, and share with friends.
- Sora System Card - OpenAI
Sora is OpenAI’s video generation model, designed to take text, image, and video inputs and generate a new video as output. Sora builds on learnings from DALL·E and GPT models, and is designed to give people expanded tools for storytelling and creative expression.
- Launching Sora responsibly - OpenAI
To address the novel safety challenges posed by a state-of-the-art video model as well as a new social creation platform, we’ve built Sora 2 and the Sora app with safety at the foundation. Our approach is anchored in concrete protections.