Three Tools That Used to Be Separate Are Now One
In late February 2026, Google shipped a major redesign of Flow, its AI video creation platform, merging three previously separate products into a single interface:
- Flow — the Veo-powered video generation tool
- Whisk — the visual collage and mood board tool for combining reference images
- ImageFX — the text-to-image generator
The result is a pipeline that goes from initial concept to generated image to animated, audio-synced video inside one workspace. For creators who were previously bouncing between tabs and exporting assets between tools, this is a meaningful quality-of-life change.
What Veo 3.1 Adds in This Update
The video generation in Flow runs on Veo 3.1, which Google updated alongside the Flow redesign. The practical additions worth knowing:
Native audio generation. Veo 3.1 generates synchronized audio alongside video: natural dialogue, ambient sound, sound effects. It's the same pattern we're seeing across the industry right now with Sora 2 and Runway Gen-4.5. The days of generating silent AI video and manually dubbing audio are ending.
Native 9:16 support. The "Ingredients to Video" feature now generates natively in vertical format. For anyone producing content for YouTube Shorts, Instagram Reels, or TikTok, this matters. Previously you were cropping horizontal output and accepting the quality loss. Now you generate vertical and it looks right.
Cinematic style control. The model has improved understanding of cinematic language in prompts — references to shot types, lighting styles, and pacing that translate more accurately into output. Still imperfect, but noticeably better than earlier versions.
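The quality-loss point about cropping is easy to quantify: center-cropping a 16:9 source down to 9:16 keeps only (9/16) ÷ (16/9) = 81/256 of the pixels, roughly 32%. A minimal sketch of that arithmetic (generic geometry, not tied to any Flow or Veo API; the function name is mine):

```python
from fractions import Fraction

def crop_retention(src_w: int, src_h: int, target_ar: Fraction) -> Fraction:
    """Fraction of source pixels kept when center-cropping to a target aspect ratio."""
    src_ar = Fraction(src_w, src_h)
    # Crop whichever dimension overshoots the target ratio.
    if target_ar < src_ar:
        return target_ar / src_ar  # narrower target: crop width
    return src_ar / target_ar      # wider target: crop height

kept = crop_retention(1920, 1080, Fraction(9, 16))
print(kept, float(kept))  # 81/256, i.e. about 31.6% of the pixels survive
```

In other words, cropping a 1080p horizontal frame to vertical throws away more than two-thirds of the image, which is why native 9:16 generation is the right answer rather than a nice-to-have.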
The Whisk Integration Is the Most Interesting Part
Whisk was always an underrated tool. The ability to combine multiple reference images into a visual direction — character appearance, environment, style — and generate from that combination is genuinely useful for pre-production work.
Now that Whisk feeds directly into Flow, you can build a visual reference board, generate still frames that match your direction, then animate those frames into video. That's a coherent pipeline for concept development that didn't exist cleanly before.
For pitching a visual direction to a client or collaborator before committing to production, this is practical. You can show a near-complete visual language at the concept stage without involving a designer or motion artist.
Pricing: Where It Gets Complicated
Flow is free to access with usage limits. The paid tiers:
- Google AI Pro: $19.99/month — higher generation quotas
- Google AI Ultra: $249.99/month — highest quotas, priority access
The free tier is genuinely useful for exploration and low-volume work. The jump from $20 to $250 is steep. For professional use that requires consistent volume, the $19.99 Pro tier is the right entry point. The Ultra tier is for teams or heavy production pipelines.
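To put the "steep jump" in concrete terms, here is the gap annualized, using only the prices listed above (quotas aren't public enough to compute a per-generation cost):

```python
PRO_MONTHLY = 19.99     # Google AI Pro
ULTRA_MONTHLY = 249.99  # Google AI Ultra

annual_pro = PRO_MONTHLY * 12
annual_ultra = ULTRA_MONTHLY * 12
multiple = ULTRA_MONTHLY / PRO_MONTHLY

print(f"Pro:   ${annual_pro:,.2f}/yr")
print(f"Ultra: ${annual_ultra:,.2f}/yr ({multiple:.1f}x Pro)")
```

Annualized, Ultra runs roughly 12.5x the cost of Pro, so the upgrade only makes sense once generation volume, not convenience, is the bottleneck.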
How It Compares to Runway Right Now
Runway Gen-4.5 remains the standard I'd use for professional-quality output where creative control matters most. The model responds to directorial prompts with more nuance, and its multi-shot editing, which builds one-minute videos with consistent characters, is still ahead of what Flow produces.
Google Flow has the advantage of the integrated pipeline and the free entry point. For creators who are building a workflow from scratch and want everything in one place, it's a solid starting point. For productions where output quality is the primary concern, Runway is still the benchmark.
They solve slightly different problems. Flow is a creative workspace. Runway is a production tool. The distinction is real in practice.
What I Would Actually Use This For
The specific use case where Google Flow makes sense: early-stage concept development with clients who aren't technically sophisticated. The integrated Whisk-to-video pipeline lets you walk through a visual concept in a single session — mood board to generated frames to animated preview — without explaining three different tools.
That's a real workflow improvement for client-facing work. The output quality at that stage doesn't need to be final. It needs to communicate direction. Flow handles that well.
If you haven't explored it since the redesign, it's worth an afternoon. The integration is cleaner than it was six months ago and the Veo 3.1 output at native 9:16 is noticeably better for short-form content.
The Pattern I Keep Noticing Across These Unified Launches
Google Flow is the third "unified workspace" product I have used this year. Adobe shipped a similar consolidation. Runway has been quietly merging its image and video editors. The arc is clear: the era of switching between five separate AI tools to make one piece is ending.
That is good for working creators. The cognitive overhead of context-switching between Whisk, ImageFX, Veo, Runway, and a non-AI editor was real. Hours per week, not minutes.
The tradeoff to keep an eye on is concentration. When the workspace is unified inside a single platform, the platform decides the defaults. The defaults shape the output. The output shapes the visual culture. Convenience always comes with that quiet cost. Worth using these tools, worth knowing they are not neutral.
Sources: Google Developers Blog — Introducing Veo 3.1 (Jan 2026) | Google Flow March 2026 redesign