From Zero to Promo Video in 30 Minutes with AI
How to retain creative control and direction over AI when you aren't writing much of the code
I created this 74-second promo video draft in about 30 minutes. An AI wrote every line of the Remotion code, but it wasn't the creative force behind it. I was.
This walk-through is about how you retain control and direct the final product when AI is heavily used. To show this, I created a video for Air, a new Python web framework that's too recent to be in the AI's training data - a perfect test case.
AI can't feel a musical phrase change, easily notice when text is displayed too quickly, or know what’s interesting to me.
This post breaks down the 30-minute session that got me to a first draft and shows how to provide the context, taste, and critical feedback needed to stay in control.
This was inspired by Greg Ceccarelli creating his company's product video with AI.
Getting Started: The Blank Canvas
If a tool has a standard project structure and a command to generate it, use it. I grabbed the command for the hello-world starter template from the Remotion docs to start the project.
npx create-video@latest --hello-world
Key Tip: Even when I am doing lots of agentic coding I read docs, learn, and see how I can give better context and directions. I want to learn from every project.
Step 1: Providing Good Context
You can't get good output from bad input. I gave the AI tools for fetching the Air source code and documentation.
Prompt Snippet:
You have tools for both Air as well as Air Docs, so explore those thoroughly, look through everything, understand how they work. It needs to be factually correct because this is a very technical audience.
The AI could use these tools to pull the core concepts directly from the source of truth.
AI Chat History:
Tool use: mcp__air__fetch_air_documentation
This meant I could easily direct the AI to look at specific pieces of the library or docs when I wanted to bring in context.
Step 2: The initial prompt
I voice-transcribed a prompt covering the important Air features, my vision, what I wanted people to feel when they watched it, who the target audience was, and more, to get that context into the conversation.
Key tip: You want to get as much of your vision, opinions, intuition, feelings, and goals into context as possible.
Step 3: Adding Taste
Time for the iteration loop: review, critique, command.
Feedback: Readability and Credit
My first reactions:
Everything was too small and cramped
I wanted to credit Air's creators, Daniel and Audrey Roy Greenfeld, more. I found their website and downloaded some images of them.
I thought the Air logo should be used, so I grabbed the SVG of that as well.
I voice-transcribed more instructions, continuing to add context at each step.
My Prompt Snippet:
"Everything is so hard to read. Bump it up and make things easy to see. I've also added the real logo... as well as images of the two authors (danny as well as audrey) so you can credit both."
Feedback: Highlighting Key Features
The initial draft missed two of Air's most important features: its excellent documentation and how well tested it is. I prompted the AI to add them to the "Core Features" section.
It misunderstood and invented an air.test module that doesn't actually exist. My domain knowledge allowed me to catch and correct the AI's output.
My Prompt Snippet:
"Remove the built for testing piece, there isn't actually and air.test module so change that. This is about the maintainers testing the code thoroughly for quality"
Feedback: Better Code Examples
Just showing code isn't enough. You have to explain why it's interesting. I gave instructions on animated highlighting and annotation callouts to explain concepts. I also gave instructions to add a slide showing how Air + FastAPI can be used in the same application.
My Prompt Snippet:
"We can we do more with the code by highlighting and annotating. We need to callout to why something is interesting. Here’s some ideas:….."
This resulted in much clearer and more complete code demonstrations.
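The animated highlighting I asked for boils down to simple frame interpolation. Here's a minimal sketch in plain TypeScript, hand-rolling the clamped linear interpolation that Remotion's `interpolate()` helper provides; the specific frame ranges are illustrative, not taken from my actual video:

```typescript
// Clamped linear interpolation, the same math Remotion's interpolate()
// performs when extrapolation is clamped at both ends.
function lerpClamped(
  frame: number,
  [inStart, inEnd]: [number, number],
  [outStart, outEnd]: [number, number],
): number {
  if (frame <= inStart) return outStart;
  if (frame >= inEnd) return outEnd;
  const t = (frame - inStart) / (inEnd - inStart);
  return outStart + t * (outEnd - outStart);
}

// Fade the highlight box in over frames 30-45 (illustrative range)...
function highlightOpacity(frame: number): number {
  return lerpClamped(frame, [30, 45], [0, 1]);
}

// ...then bring the annotation callout in slightly later, so the
// viewer's eye is drawn to the code before the explanation.
function calloutOpacity(frame: number): number {
  return lerpClamped(frame, [50, 65], [0, 1]);
}
```

Staggering the two ranges is what makes the annotation feel directed rather than everything popping in at once.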
Step 4: Syncing with Audio
A video needs music. I found a royalty-free track on Pixabay and gave it to the AI. I listened to a few to find the one I wanted, looking for:
A song close to the length of my video
An impactful phrase change about 30 seconds in
A light, airy beginning
This is a level of creative direction AI can't do on its own. It can't "feel" the music. It needs a human to provide the keyframe.
My Prompt Snippet:
"I want 'Simple Air Application' to come in right at 35 seconds to match the major phrase change in the song. Figure out how to adjust the beginning to do that and feel natural."
I could be vague about how to adjust the timing because I was careful to pick a song that didn't require many modifications. If significant modifications had been needed, I would have edited the song to fit instead.
I also had it match the video's total duration to the song's length (1:14) by holding the final "Call to Action" slide, so the music could fade out naturally instead of being cut off with an artificial fade.
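Remotion measures everything in frames, so syncing slides to the music is just arithmetic. A sketch of the frame math, assuming a 30 fps composition; the 65-second end point for the last animated slide is a hypothetical number for illustration:

```typescript
// Convert a timestamp in the song to a Remotion frame number,
// assuming the composition runs at 30 fps.
const FPS = 30;

const secondsToFrames = (seconds: number, fps: number = FPS): number =>
  Math.round(seconds * fps);

// The phrase change lands at 35s, so the "Simple Air Application"
// sequence must start at this frame:
const phraseChangeFrame = secondsToFrames(35); // 1050

// The track is 1:14 (74s), so the composition's total duration is:
const totalFrames = secondsToFrames(74); // 2220

// Whatever remains after the last animated slide ends (65s here,
// a made-up example value) becomes the hold on the final
// call-to-action while the music fades out.
const lastSlideEnds = secondsToFrames(65);
const ctaHoldFrames = totalFrames - lastSlideEnds; // 270 frames = 9s
```

Doing this arithmetic explicitly is also a quick way to sanity-check the AI's timing changes instead of re-watching the whole render.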
The Result: A Powerful First Draft
This took about 30 minutes of interaction.
Is it perfect? No. The pacing could still be improved, some slides could use more time to breathe, and the copy could be tightened. But as a first draft, it saves me hours of tedious work.
The key takeaway is that AI is a powerful tool, but it's most effective in the hands of someone with a clear vision and domain expertise. The quality of the final product depends entirely on the quality of the human feedback and the context they provide. You still need taste and expertise. You still need to be the director.
If you're an engineer or team lead looking to build these reliable, customized AI workflows, this is what we teach in our Elite AI Assisted Coding course. We'll show you how to build a universal AI setup and design agentic workflows to ship faster and more effectively, and how to build deliberate software where every line of code is well understood, so you can switch between paradigms based on what’s appropriate for the task.
The next cohort, led by Eleanor Berger (ex-Microsoft, ex-Google) and Isaac Flath (ex-Answer.ai), runs from Oct 6–24, 2025.