Get to Great Faster With AI: A Creative Director's Workflow for Avoiding AI Slop
- Steven Townsend
- Apr 7
- 5 min read
If you're impressed by what AI can do today, consider this: today is the worst AI will ever be. The tools are only getting better. The question isn't whether creative teams will use AI, it's whether they'll use it well.
I recently gave a webinar for Onward Search on how I use tools like Claude Cowork, Midjourney, Runway, and Figma to accelerate creative ideation without producing the generic, soulless output that's flooding the internet. The short version: AI doesn't replace creative direction. It demands more of it.
The Problem: Fast, Cheap, Safe
Most AI-generated creative work follows the same pattern. Someone types a basic prompt, accepts the first output, and ships it. The results are Fast, Cheap, Safe: technically competent images and videos that look like everything else. For some stakeholders, clients, and brands, that's good enough. The demand for content is so great right now that some brands are settling for that default workflow, and it's turning advertising into visual noise.
The legendary creative director Lee Clow once said that when advertising is done well, it becomes part of our culture rather than pollution. I met Lee when I was a new Creative Director at Chiat/Day, the iconic agency he helped put on the map. I can only imagine he would call AI slop pollution at scale.

The Alternative: Fast, Artful, Impact
The framework I use flips that equation. AI should be Fast, Artful, Impact: using speed not to cut corners, but to iterate quickly toward more powerful ideas. The "fast" part isn't about skipping the creative process. It's about compressing the distance between a concept and seeing it realized, so you can evaluate it, refine it, and push further than a traditional timeline would allow.
Great advertising, the kind that Nike, Old Spice, and Levi's have produced, starts with a disruptive idea, the kind AI has difficulty generating. AI can generate a thousand images, but it takes a human with high creative standards, a creative director, to pick an artful one. From there, AI accelerates the path from idea to visual proof of concept, which lets teams explore more territory in less time.
When advertising is at its best, it generates impact for brands. Good creatives want to help their clients influence culture, and even define it.
Case Study: Genesis GV80 Campaign Concept
To demonstrate this workflow live in the webinar, I worked from a real brief. As a freelancer, I was tasked to tell the world about the 2026 Genesis GV80, a family SUV with a disappearing third-row seat. The car is stunningly designed. It's beautiful. A "Fast, Cheap, Safe" approach would generate familiar lifestyle shots of the vehicle and call it done. Instead, I started where every good campaign starts: with an insight.
From Brief to Concept: "Room for Imagination"
The disappearing third row creates extra space. Who benefits most from extra space in a family vehicle? Kids. And what do kids do with space? They fill it with imagination. Child-sized usually means small, but not when it comes to imagination. You can fill a city with what kids can think up.
That insight became the campaign concept: Room for Imagination. Give people space and they'll fill it with incredible ideas. A larger vehicle opens up a world of possibilities.
Step 1: Visualize the Idea With Sora (Text-to-Video)
Before generating a single image, I used OpenAI's Sora (RIP) to test the emotional territory. The prompt described a girl and her father discovering a wild horse caught in the bushes in a city. When she frees it, she discovers it's a unicorn. This wasn't meant to be a final asset. I wanted to see whether the "child's imagination meets the real world" direction had emotional weight. It did.

Step 2: Iterate Rapidly With Midjourney (Text-to-Image)
With the emotional direction validated, the team and I moved into Midjourney to explore visual treatments for the campaign. This is where the iterative speed of AI becomes a genuine creative advantage. Traditional workflows take hours as Art Directors produce comps. With Midjourney, we generated dozens of variations quickly, exploring rainbow and weather-inspired imagery: storms, light refraction, spectrum colors playing across the GV80's surfaces.
The key here is curation, not generation. Most outputs were discarded. The ones that worked captured something specific: the sense of wonder and color that a child's imagination might project onto the world around them, including the family car.

Step 3: Bring Stills to Life With Runway (Image-to-Video)
The strongest Midjourney outputs became inputs for Runway's image-to-video capabilities. A still image of the GV80 surrounded by rainbow-spectrum fog became a cinematic clip with atmospheric movement. Interior shots with prismatic light gained depth and motion. This step transforms a mood board into something a client, or a production team, can immediately react to.

The Result: "Pull In Imagination"
The final campaign concept, "Pull In Imagination," landed on a cinematic visual of the GV80 in a forest setting with a child pulling streams of rainbow light from the sky. It's evocative, brand-appropriate for Genesis's luxury positioning, and distinctly different from anything a default AI prompt would produce. The entire exploration, from brief to visual concept, took a fraction of the time a traditional process would require.
What This Means for Creative Teams
This workflow isn't about replacing photographers, directors, or designers. It's about giving creative directors a way to test and refine ideas at the speed of thought, then bringing proven concepts to production teams with a clear visual direction rather than a verbal brief and a mood board of stock photography.
Scale With Runway Workflows and Figma Weave
For production-scale work, I use Runway Workflows and Figma Weave to systematize the process. Runway Workflows lets you chain generation steps (prompt refinement, image generation, and video generation) into repeatable pipelines. Figma Weave brings AI-generated assets into design workflows where they can be composed into actual campaign layouts.

The tools I used in this case study (Midjourney for text-to-image iteration, Runway for image-to-video and production workflows, and Figma Weave for future large-scale creative integration) are the current state of the art. They'll be improved, replaced, or, like Sora, discontinued in a month. The competitive advantage isn't in knowing which buttons to press or which tools to use. It's having the creative judgment to direct the output toward something that is artful and drives impact.
AI doesn't get you to great. Humans, with high creative standards, get you to great. AI can get them there faster.
Steven Townsend is a Senior Creative Director & Technologist with experience at agencies including 72andSunny, Saatchi & Saatchi, and Deutsch. He specializes in integrating AI tools like Claude, Midjourney, Runway, and Figma into creative workflows for brands including The North Face, Toyota, Genesis, and DoorDash. Learn more at townsendsteven.com.