Heather Elder Represents
Reps Journal

Dan Saelinger Integrates Photography and AI for Allbirds

Dan Saelinger is quick to embrace new technologies, noting that they are an extra tool in his toolbox, not something that will replace his love of the craft. His collaboration with Allbirds for its Spring/Summer 2026 release aligned with that philosophy. The project called for both grounded, traditional still-life imagery and a short-form motion piece to use across social and digital platforms.

Combining his strengths in practical photography and AI technologies, Dan advised the creative team to rely on in-camera photography for the majority of the content. The still imagery was all shot in studio with a simple lighting setup and tasteful styling, while the animated piece used AI to give it a dynamic look. The body of work is tactile and sharp-looking while still delivering the energy Allbirds wanted.

What was the core visual idea behind the Allbirds imagery?

Allbirds wanted the imagery to feel authentic, with real visual weight. Their shoes are built around materials and sustainability, so the goal was to highlight that tactile quality rather than make something overly slick or polished.

I built the lighting around a single source setup with some carefully placed fill. That created a directional wash that emphasized the texture and material of the product. I wanted the light to feel simple and intentional, almost like the scene could exist naturally rather than feeling like a heavily produced studio image.

The idea was to let the product feel dimensional and honest, something you could almost reach out and touch.

You leaned heavily on traditional photography during the shoot. Why was that important for this project?

The shoot focused on the craft and materials used in making the shoes, so it felt right to lean into the traditional craft of photography. I approached it as simple visualization through thoughtful lighting.

For each shot I meticulously sculpted the product and scene with that single light source, making minute adjustments to the fill and placement. I wanted the integrity of the image to come from what we achieved in camera rather than relying heavily on post-production.

There is a certain authenticity that comes from nailing the image on set. When the lighting, shadows, and form are working together in real space, the result has a physical presence that is difficult to replicate later.

Where did AI come into the process?

AI came into play specifically for the motion component. The concept called for a short animated piece where the material ingredients float around the shoe. Achieving that practically would have been difficult within the time and budget of the project.

We captured the still foundation on set through a series of plates and then assembled the image in post. Once that still composition was established, I moved into Runway to animate the scene.

Because the base image was already fully realized photographically, the AI portion became focused purely on creating the motion.

Why did AI make sense for that portion of the project?

Traditionally, you would approach something like that through CGI. You would send the assets to a CGI studio, rebuild the objects, and animate them. That process can easily run into tens of thousands of dollars.

For something like a social media motion piece, that kind of budget usually isn’t realistic.

Using AI allowed me to take a finished still image and animate it relatively efficiently. The animation itself probably took about half a day to a day once the image was built.

It’s a practical solution when the project needs movement but doesn’t have the time or resources for full CGI production.

You’ve talked about the balance between old-school craft and new technology. How do you decide which approach to use?

I’ve always thought of my process as assembling the right tools to serve the idea. Every project has different needs, and the job is to evaluate what will create the strongest image.

For the Allbirds still imagery, traditional photography was absolutely the right answer. The lighting, the shadows, the physical presence of the product were all essential to the visual language of the campaign.

For the motion piece, AI was the right tool. It allowed us to create something dynamic and visually engaging without completely changing the production model.