Google Omni AI Video Generator
Open a chat. Describe a scene. Remix it until it's yours. The chat-edit video workflow Gemini calls 'Omni' — live in your browser today. No waitlist, no API keys, no installs. Powered by the Gemini video stack (Veo 3.1, the model Omni extends); auto-upgrades to Omni the day Google ships it.
Powered by the Gemini video stack — describe, remix, render
Verified leaks from 9to5Google, TestingCatalog, Chrome Unboxed, and r/GeminiAI
Real Google Omni Video Demos — Generated From the Leaked Workflow
These are the same prompts Reddit tester @Zacatac_391 ran inside the leaked Gemini Omni interface on May 11, 2026 — generated through the Gemini video stack the Omni model extends. Describe a scene in plain language, render, then remix camera, audio, or dialogue without leaving the chat.
Spaghetti Seaside — Two-Person Dialogue | Google Omni Video Demo
More Omni Demos — Generate Your Own in the Workspace
Publish Everywhere
Reported by the publications that broke the Gemini Omni leak
What Is Google Omni — The New Gemini Video Model
Google Omni (also referred to as Gemini Omni) is the new AI video generation model surfacing inside Google's Gemini app. In May 2026, a UI string reading 'Create with Gemini Omni — Meet our new video model. Remix your videos, edit directly in chat, try a template, and more.' was spotted by X user @Thomas16937378 and propagated through 9to5Google, TestingCatalog, and Chrome Unboxed. Metadata suggests Omni extends the existing Gemini video stack (internal codename Toucan, currently powered by Veo 3.1). aigeminiomni.org lets you practice the same chat-edit workflow today; <a href="/showcases" class="underline decoration-primary/40 underline-offset-4 hover:text-primary">see what creators made in the workspace</a> or <a href="#workspace" class="underline decoration-primary/40 underline-offset-4 hover:text-primary">open the workspace</a> to generate your first clip.
Chat-Native Video Editing
Don't open a timeline. Describe the change. Google Omni's defining UI string — 'edit directly in chat' — turns a video into a living document. Want a tighter shot, warmer lighting, a different line of dialogue? Type it. The Gemini video stack regenerates only what changed.
One-Click Remix of Any Clip
Every Google Omni generation becomes a remix seed. 'Remix your videos' was the second pillar Google teased in the Gemini app UI. Swap the protagonist, the camera angle, the time of day, the entire setting — in a single click. The original stays. The variant is yours.
Templates That Don't Feel Like Templates
'Try a template' was the third Gemini Omni pillar — but templates here are starting points, not finishing lines. Start from cinematic dialogue, anime opening, top-down ASMR, or product unboxing. Then bend it through chat until nobody can tell which template was the seed.
Native Audio, Multi-Camera, Ambient Music
Reddit tester @Zacatac_391 said the voice quality was 'much better than Veo by a large margin' and that Omni 'even added some light background music' during a restaurant scene. The seamless multi-camera transitions are Omni's standout signature — and they happen inside a single shot, no edit needed.
Use Google Omni Online — No Install, No Waitlist
Open a chat in your browser and use the Omni-style video workflow today. Free generations to start, with plans from $9.90/month for high-volume creators.
Why the Google Omni Workflow Matters
Most video AI today asks you to write a perfect prompt, hit generate, and pray. Google Omni's chat-edit pattern flips that loop: you start rough, refine in conversation, and never leave the canvas. Here's what changes when the model lives inside the chat.
Workflow Advantage
No Timeline. No After Effects. No Re-Renders.
Traditional video AI gives you a clip and dumps you back into a timeline. Google Omni keeps every refinement inside the same chat — change a face, swap a line, tighten the shot — and the model regenerates only the differential. Reddit testers reported camera angle changes 'frequently and with good coherence.'
How the Google Omni Workflow Works
Four steps. One canvas. The Omni workflow Gemini teased — described, demoed, and running in your browser today.
1. Describe a Scene
Open the workspace. Pick a template or type a scene from scratch — 'A professor writes a trigonometric proof on a chalkboard,' or 'Two men eating spaghetti at a seaside restaurant.' Plain language only. No prompt engineering required.
2. Generate the First Clip
The Gemini video stack renders a ~10-second clip with native audio and ambient music. Reddit testers called it 'one of the best video models I have seen — maybe not the best, but a really strong performance,' praising its prompt adherence in particular.
3. Remix in Chat
Don't open an editor. Type 'tighter close-up on the actor's eyes,' 'swap the centerpiece for a candle,' or 'add light background piano.' Google Omni regenerates only the parts that changed, preserving the rest of the shot.
4. Render & Share
Export your final clip. Share the chat thread as a public template so others can fork your prompt. When Google officially ships Omni at I/O 2026, every clip in your workspace re-renders at higher fidelity automatically.
Google Omni AI Video Generator — Key Features
Every capability Gemini Omni teased in the leaked UI, available in the workspace today.
Chat-Edit Video
Type changes in plain language. Google Omni regenerates only the differential, preserving everything you didn't change. The defining Omni interaction pattern, working today.
One-Click Remix
Every generated clip becomes a seed. Spin variants in seconds — same scene, different lighting, different protagonist, different season. 'Remix your videos' is the second leaked Omni pillar.
Leaked Omni Templates
Start from the same six prompts Reddit testers used in the leaked Gemini Omni interface — chalkboard math, seaside dialogue, product unboxing, and three more — then bend them through chat.
Native Audio + Music
Synthesized speech, ambient room tone, and contextual background music — all rendered in one pass. Reddit testers called the audio quality 'much better than Veo by a large margin.'
Seamless Multi-Camera
Multiple camera angles inside a single shot, with coherent action across cuts. The standout feature Phemex News and Reddit testers highlighted as Omni's signature visual move.
Auto-Upgrade to Omni
Powered by the Gemini video stack (Veo 3.1) today. The moment Google ships the public Omni model at Google I/O 2026 on May 19-20, your workspace switches automatically.
Google Omni vs Veo 3.1 vs Sora 2 — Which One Can You Use Today?
Where Google Omni sits in the May 2026 video AI landscape — alongside the Gemini Veo 3.1 stack it extends, and against OpenAI's Sora 2, whose consumer app was shut down on April 29, 2026.
Google Omni — Chat-Edit Defining Pattern
Status: leaked May 2, 2026; expected official launch at Google I/O 2026 (May 19-20). Chat-edit video, one-click remix, native audio with ambient music, and seamless multi-camera. Use via aigeminiomni.org today on the Gemini video stack; auto-switches to the official Omni model the day Google ships it.
Veo 3.1 (Toucan) — Production Gemini Video Today
Internal codename Toucan. Currently powers Google's production Gemini video generation with 4K output and natively generated audio. Gated, region-locked, and limited to Gemini Advanced subscribers. Same model Omni extends — meaning the chat-edit workflow you practice today already runs on Omni's foundation.
Sora 2 — Shut Down April 29, 2026
OpenAI shut down the Sora 2 consumer app on April 29, 2026. Google responded publicly with 'video's here to stay' and accelerated the Omni rollout for I/O 2026. The Sora 2 era ended; the Omni era is the next chapter — and you can start practicing in it now.
Who Uses the Google Omni Workspace
The Google Omni workflow isn't just a faster Veo — it's a different relationship between you and a video model. Here's who benefits most from generating-then-remixing through chat instead of writing perfect prompts.
Short-Form Creators on TikTok, Reels, Shorts
Generate the first cut from a chat description. Remix in chat for vertical, horizontal, and square. Push BPM or change palette without leaving the canvas. Google Omni's one-click remix maps perfectly to a publishing cadence that demands variants, not perfection.
Indie Filmmakers and Spec Directors
Block a scene in 10 seconds before you scout a location. Test camera language, lighting, and dialogue beats through chat. The Omni workspace becomes a previz tool that costs nothing to iterate inside.
Performance Marketers and Brand Studios
Spin 30 ad variants from one chat thread. Each remix preserves the brand frame and swaps only the offer, headline, or hero shot. Native audio means no separate VO budget for early testing.
Educators and Course Creators
Generate an explainer with on-screen math, diagrams, or chalkboard text — the kind of content Reddit testers proved Google Omni handles well. Remix the language or pace through chat without re-recording.
Google Omni — The Numbers That Matter
Verified data points from the May 2026 Gemini Omni leak window and the broader video AI landscape Omni is launching into.
May 2: Day the Gemini Omni UI was first spotted in 2026
80.6K: Views on TestingCatalog's leak thread
May 19: Google I/O 2026, the expected Omni launch date
What Early Testers Are Saying About Google Omni
Verified leaks and first-hand reactions to the Gemini Omni video model. Every quote below links back to its original source — Reddit, X, or a named publication.
I won't lie, this is one of the best video models I have seen — maybe not *the* best, but a really strong performance. The voice quality is much better than Veo by quite a large margin. It even added some light background music.
@Zacatac_391, Reddit r/GeminiAI · Early access tester · May 11, 2026
A new video generation model is apparently coming to Gemini, with 'Omni' producing some pretty impressive initial results. The video does a great job of handling text while putting out a fairly realistic video.
Ben Schoon, Senior Editor, 9to5Google · May 11, 2026
GOOGLE I/O: New evidence of the upcoming Gemini Omni video model has been spotted. Based on the description, we might be really talking about the true 'Omni' model based on Gemini, rather than Veo.
@testingcatalog, AI News on X · 80.6K views · May 11, 2026
An impressive new Gemini 'Omni' video model just leaked ahead of Google I/O. The remix-your-videos, edit-directly-in-chat workflow is what makes this different from every other video AI we've seen this year.
Robby Payne, Chrome Unboxed · May 11, 2026
The new Omni model isn't just good at video — it's also reasoning across these mediums. You got early access to the big new model that supposedly combines it all: video, audio, text, images.
u/Street_Celebration_3, Reddit r/GeminiAI commenter · May 11, 2026
Google appears to be testing a new video-generation model called Omni inside Gemini, surfaced via a UI string spotted ahead of Google I/O 2026: 'Start with an idea or try a template. Powered by Omni.'
WaveSpeed AI Research, Video infrastructure blog · May 3, 2026
Frequently Asked Questions about Google Omni AI Video Generator
Common questions about the Gemini Omni video model leak, how to use the workspace today, and how the auto-upgrade works at Google I/O 2026.
Be ready before Google I/O 2026.
Open your workspace, generate your first Omni-style video, and have it rendered when Google flips the switch on May 19. The workflow is the same. The engine upgrades itself.
