This page is an example of how I approach AI-assisted filmmaking — not as a shortcut, but as a production process.
Case Study:
A Single Shot, Built Properly
This case study demonstrates how I approach complex, cinematic camera movement using a filmmaker-first mindset — designing physically plausible motion, locking continuity, and building the shot so it survives editorial, VFX, and delivery.

AI is used as a controlled execution layer, not a shortcut. Every decision starts with lens choice, camera constraints, and narrative intent.
The Challenge
The Shot: Camera starts tight on a smartphone screen (clean plate for later UI insert), then slowly pulls back and booms down to reveal a mechanic on a creeper under a car. The mechanic slides out, sits up, grabs the phone, and looks at it.

The Ask: Explain, step-by-step and in as much detail as possible, exactly how you would generate this full shot today with current tools to achieve maximum consistency, cinematic quality, and precise camera control.
Why This Shot Matters

This isn’t a flashy idea.
It’s a control problem.

A shot like this tests:
  • camera consistency
  • spatial logic
  • character stability
  • lighting continuity
  • emotional clarity

Most AI video falls apart somewhere in that list.

So instead of “prompting harder,”
I approached this like a real shoot.

Philosophy and Concept

"I don’t believe AI replaces filmmaking.
I believe it removes friction from it.

The job is still the same:
plan the shot, control the frame, respect the audience.

Tools change. Taste doesn’t."

My Process
Plan the shot before generating anything

Camera type. Lens feel. Lighting direction.

This opening frame is the most important image in the sequence.

If that’s wrong, everything downstream is noise.

Separate elements that need control

The phone screen is treated as a clean plate so the UI can be handled later — the same way real commercials are made.

That separation keeps flexibility and avoids locking mistakes into the video.

Lock the environment before moving the camera

The car, the creeper, the space, the lighting — all of that needs to feel real and consistent.

If a mechanic watches this, it should feel correct.

Treat the character like a real casting decision

The mechanic isn’t a one-off face. He’s a repeatable character.

That consistency makes the rest of the work possible.

Design the camera move intentionally

This isn’t random AI motion.

It’s a controlled pullback and jib-style move that communicates scale and realism.
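A move like this can be planned on paper before any generation happens. Below is a minimal Python sketch of that planning step: the pullback-and-boom expressed as two keyframes (start tight on the screen, end wide and low) with linear interpolation between them. Every value here — distances, heights, focal lengths — is a hypothetical placeholder, not a measurement from the actual shot.

```python
from dataclasses import dataclass

@dataclass
class CameraKey:
    """One keyframe of the move: time (s), distance from subject (m),
    camera height (m), focal length (mm). All values hypothetical."""
    t: float
    distance: float
    height: float
    focal_mm: float

def lerp(a: float, b: float, u: float) -> float:
    return a + (b - a) * u

def camera_at(keys: list[CameraKey], t: float) -> CameraKey:
    """Linearly interpolate the camera state at time t between keyframes."""
    keys = sorted(keys, key=lambda k: k.t)
    if t <= keys[0].t:
        return keys[0]
    for k0, k1 in zip(keys, keys[1:]):
        if k0.t <= t <= k1.t:
            u = (t - k0.t) / (k1.t - k0.t)
            return CameraKey(t,
                             lerp(k0.distance, k1.distance, u),
                             lerp(k0.height, k1.height, u),
                             lerp(k0.focal_mm, k1.focal_mm, u))
    return keys[-1]

# Hypothetical plan: tight on the phone screen, then pull back and boom down.
move = [
    CameraKey(t=0.0, distance=0.3, height=1.2, focal_mm=85.0),  # macro-tight on screen
    CameraKey(t=4.0, distance=2.5, height=0.6, focal_mm=35.0),  # wide reveal, low angle
]

mid = camera_at(move, 2.0)  # camera state halfway through the pull-back
```

Writing the move down this way — even as toy numbers — forces the physical questions (how far, how low, how wide) to be answered before a single frame is generated.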

Focus on the reaction, not the trick

The moment the mechanic looks at the phone is where the shot actually lives.

Everything before it is setup.

Executive / Technical Breakdown
For those who want the full technical breakdown, below is the step-by-step construction of the shot.
When I read “camera starts tight on a smartphone screen (clean plate)”, my first consideration is cinematography, not AI.
I immediately define:
  • Camera type and sensor feel
  • Lens choice (macro vs standard close-focus)
  • Lighting direction, intensity, and falloff
  • Emotional intention of the opening frame
This is the most important image in the sequence — it establishes tone, realism, and trust.

For the clean plate, I would not assume a green screen by default. I would research best practices for screen replacement in AI-assisted pipelines. Ideally, the phone screen is generated as a neutral, reflection-accurate blank surface, with UI handled as a separate, isolated asset.

My preferred approach:
  • Generate or ingest a still UI mockup in a separate project
  • Version and approve UI independently
  • Composite or insert the UI later for maximum control and flexibility
I would also define the type of phone. Practically, it should read as an iPhone due to market familiarity, but without using protected branding unless the project includes that partnership. The goal is instant audience recognition without legal exposure.
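The "composite the UI later" idea above can be sketched in a few lines. This is a toy illustration, not a production compositor: frames are plain 2D lists of grayscale values, the phone screen is a known rectangle in the clean plate, and the function name and values are hypothetical. The point it demonstrates is the separation — the plate is never modified, and the approved UI asset can be swapped or re-versioned at any time.

```python
def insert_ui(plate, ui, top, left, opacity=1.0):
    """Return a copy of the clean plate with the UI asset blended
    into the screen region whose top-left corner is (top, left)."""
    out = [row[:] for row in plate]  # never touch the original plate
    for y, ui_row in enumerate(ui):
        for x, ui_px in enumerate(ui_row):
            base = out[top + y][left + x]
            out[top + y][left + x] = round(base * (1 - opacity) + ui_px * opacity)
    return out

# 4x4 "clean plate" with a neutral blank screen, and a 2x2 approved
# UI mockup inserted into the screen region at full opacity.
plate = [[20] * 4 for _ in range(4)]
ui = [[200, 210], [220, 230]]
final = insert_ui(plate, ui, top=1, left=1)
```

In a real pipeline this step would be a corner-pin or planar track in a compositing tool, but the workflow logic is the same: the UI lives outside the video until it is approved.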
Results
This case study demonstrates how I approach AI-assisted filmmaking as a production process, not a prompt experiment.

The goal was to design a single, technically demanding shot and execute it using current consumer-access AI tools while maintaining cinematic logic, spatial realism, and narrative intent.

The process successfully established and stabilized all required production components:
  • A repeatable, grounded mechanic character suitable for reuse
  • A physically plausible auto-repair environment with correct spatial logic
  • Separate, controllable assets (vehicle, creeper, phone)
  • Cinematic camera intent defined using real-world filmmaking language
  • A step-by-step workflow that mirrors traditional commercial production
Where this process intentionally stops is at final composite execution.

Current publicly available AI image and video models still struggle with multi-object spatial consistency, scale accuracy, and occlusion across iterations. These limitations cannot be solved through prompting alone; they require paid access, internal tooling, or compositing pipelines.

Rather than forcing an artificial “final result,” this case study stops at the point where professional judgment would intervene in a real production.
What This Example Represents

This isn’t about one shot.

It’s about:

  • planning shots before generating anything
  • designing for control and consistency
  • understanding where AI works — and where it breaks
  • applying traditional filmmaking logic to modern tools



AI is just another tool in the chain.


The job is still the same: plan the shot, control the frame, respect the audience.
Versioning Example
Final Words
Kenny
Creative Director / Filmmaker / AI Curator

"Thank you for visiting my Case Study on AI Filmmaking! If you want the full technical breakdown (step-by-step), you can download it below."
Download the Executive Challenge Response (PDF)