Luma Launches AI Agents Powered by ‘Unified Intelligence’ Models for Creative Workflows


AI video-generation startup Luma AI has unveiled a new platform called Luma Agents, designed to handle complex creative workflows across text, images, video, and audio. The agents are powered by the company’s newly introduced Unified Intelligence models, marking a major step toward fully integrated multimodal AI systems.

The new tools aim to help marketing teams, advertising agencies, and enterprises automate end-to-end creative production while maintaining control over strategy and output.

Luma Agents Aim to Transform Creative Production

According to the company, Luma Agents are capable of planning, generating, and refining creative assets across multiple formats. This includes written content, images, videos, and audio, enabling teams to move from concept to final production within a single AI-powered workflow.

The agents can also coordinate with several other AI systems, including models from Google, ByteDance, and ElevenLabs, expanding the platform’s capabilities for creative production and content generation.

By integrating multiple tools and models into one system, Luma aims to simplify how organizations manage complex creative processes.
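Coordinating several external models behind one interface is a common task-routing pattern in agent systems. The sketch below is purely illustrative: the task types and handler stubs are hypothetical stand-ins, not Luma's actual integrations with Google, ByteDance, or ElevenLabs.

```python
# Minimal task-routing sketch: an orchestrator dispatches each creative
# task to a registered backend. Handler names are hypothetical stubs;
# a real system would call external model APIs inside each handler.

from typing import Callable, Dict


def make_orchestrator() -> Callable[[str, str], str]:
    # Registry mapping task types to handler functions.
    handlers: Dict[str, Callable[[str], str]] = {
        "text":  lambda brief: f"copy drafted for: {brief}",
        "image": lambda brief: f"image generated for: {brief}",
        "audio": lambda brief: f"voiceover produced for: {brief}",
    }

    def run(task_type: str, brief: str) -> str:
        handler = handlers.get(task_type)
        if handler is None:
            raise ValueError(f"no backend registered for {task_type!r}")
        return handler(brief)

    return run


orchestrate = make_orchestrator()
print(orchestrate("image", "spring sneaker campaign"))
# prints: image generated for: spring sneaker campaign
```

The point of the registry is that adding a new backend (say, a video model) only means registering one more handler, without touching the dispatch logic.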

Powered by the Uni-1 Multimodal AI Model

At the core of the platform is Uni-1, the first model in Luma’s Unified Intelligence family. The model is designed to process and generate multiple types of data—including language, images, audio, and spatial reasoning—within a single architecture.

Amit Jain, co-founder and CEO of Luma AI, explained that Uni-1 can “think in language and render in images,” describing the approach as “intelligence in pixels.”

Future versions of the model are expected to expand capabilities further, including deeper audio and video generation features.

Built for Marketing and Enterprise Teams

Luma says its agent-based platform is designed specifically for professional creative environments such as advertising agencies, design studios, and large brands.

The company has already started deploying the technology with organizations including Publicis Groupe and Serviceplan Group, as well as global brands like Adidas and Mazda.

These early deployments aim to demonstrate how AI agents can accelerate creative campaigns while maintaining brand consistency.

AI That Learns and Improves Its Own Output

One of the standout features of Luma Agents is their ability to evaluate and refine their own work through an iterative feedback process.

Instead of requiring constant prompting from users, the system can generate multiple variations of creative outputs and improve them through internal self-critique. This approach mirrors techniques used in advanced coding agents that repeatedly test and refine solutions.

By maintaining persistent context across assets, collaborators, and project iterations, the agents can continuously optimize their results.
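The generate-critique-refine loop described above can be sketched generically. Nothing here reflects Luma's actual implementation; the variant generator and critic below are toy placeholders where a real agent would call generative and evaluation models.

```python
# Generic self-refinement loop: generate candidate outputs, score them
# with an internal critic, and carry the best candidate into the next
# round. The generator and critic are placeholders for model calls.

import random


def generate_variants(draft: str, n: int = 3) -> list[str]:
    # Placeholder: a real agent would call a generative model here.
    return [f"{draft} (variant {i})" for i in range(n)]


def critique(candidate: str) -> float:
    # Placeholder critic: a real agent would score brand fit,
    # visual quality, adherence to the brief, etc.
    return len(candidate) + random.random()


def refine(draft: str, rounds: int = 3) -> str:
    best = draft
    for _ in range(rounds):
        candidates = generate_variants(best)
        # Keep whichever candidate the internal critic scores highest.
        best = max(candidates, key=critique)
    return best


print(refine("hero image concept"))
```

Each round feeds the previous winner back in as the new draft, which is what lets the loop improve without fresh user prompts.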

Faster Campaign Production at Lower Cost

Luma claims its platform can significantly reduce both production time and costs for large-scale marketing campaigns.

In one demonstration, the system used a short creative brief and a product image to automatically generate multiple advertising concepts, including location ideas, model selections, and visual styles.

In another example, the platform transformed a large global advertising campaign into localized ads for different markets in just 40 hours, dramatically reducing production expenses while meeting internal quality standards.

Gradual Rollout Through API Access

Luma Agents are currently available through an API, allowing developers and enterprises to integrate the technology into their existing workflows.

Broader access will be rolled out gradually, the company says, to ensure stability and reliability for early adopters.

As generative AI continues to evolve, Luma’s Unified Intelligence approach highlights a growing shift toward AI systems that can manage entire workflows rather than individual tasks.