Category Definition

The AI-Native Design Stack

Not a design tool. Not a design system. The infrastructure that coding agents actually need.

Every software stack has a design layer. For most of software history, that layer was populated by tools built for humans. The AI-native era needs a design layer built for agents.

The stack beneath every interface

Every interface that gets shipped has a design layer, whether it was intentionally designed or not. That layer is where decisions about color, typography, spacing, hierarchy, and rhythm get made. In the human-designed era, those decisions were made by designers using tools like Figma, Sketch, or Adobe XD. The tools were excellent. They produced remarkable work. They required a skilled human operator to produce it.

In the AI-native era, the interface layer is increasingly built by coding agents – Cursor, Claude Code, GitHub Copilot, and others. These agents are not designers. They do not use design tools. They build from code, driven by intent, without a Figma file to reference and without a designer in the loop at the moment of each decision.

What fills the design layer for them? In most workflows today, nothing. The agent makes its best guess. The result is functionally correct and aesthetically unmemorable.

Why design tools don't fill it

Design tools were built for the human→interface workflow: a designer opens Figma, makes decisions, exports specifications, and a developer implements them. The workflow is sequential. The designer acts before the developer builds. When the developer is replaced by a coding agent, the sequence breaks. The agent is not waiting for a Figma spec. It is building now.

A design tool that requires a human to operate it upstream cannot be part of an agentic workflow downstream. The gap is architectural, not a matter of feature completeness. Figma is an excellent design tool – excellent at what it does. Design tools require human operators. The AI-native design stack does not.

Why design systems don't fill it either

Design systems – whether built in Tokens Studio, distributed via Style Dictionary, or documented in Storybook – get agents partway there. An agent with access to a published token set can use the correct color values. An agent with access to a component library can reach for canonical components. But token values and component options are a vocabulary, not a judgment. The design system says what exists. It does not say what to reach for. It encodes consistency but not composition.
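The distinction between vocabulary and judgment can be made concrete. A token set in roughly the shape Style Dictionary popularized (the token names and values below are invented for illustration) lets an agent resolve any value it is told to use, but contains no answer to which value a given component should reach for:

```python
# Illustrative design-token set, loosely in Style Dictionary's shape.
# All names and values here are invented for illustration.
tokens = {
    "color": {
        "brand-primary": {"value": "#0A5FFF"},
        "brand-accent": {"value": "#FF6A3D"},
        "neutral-900": {"value": "#111418"},
    },
    "spacing": {
        "sm": {"value": "8px"},
        "md": {"value": "16px"},
        "lg": {"value": "24px"},
    },
}

def lookup(category: str, name: str) -> str:
    """Resolve a token to its raw value: the vocabulary a design system provides."""
    return tokens[category][name]["value"]

# An agent can resolve any token it is told to use...
print(lookup("color", "brand-primary"))  # -> #0A5FFF

# ...but nothing in the set answers the compositional question:
# which of these should a call-to-action button actually use?
```

The lookup succeeds for every token; the question of which token belongs in which composition is exactly what the set does not encode.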

For the use cases a design system already serves well – ensuring engineers use the right color values, enforcing spacing tokens – it remains invaluable. The AI-native design stack does not replace it. It extends above it.

See also: why design systems are not expression infrastructure.

What the AI-native design stack provides

The AI-native design stack is the layer of infrastructure that gives agents what they cannot get from design tools or design systems alone: design judgment available at runtime, without human intervention. This includes the means to encode brand expression – the intent behind the values, not just the values themselves – and the means to deliver that expression to agents at the moment they build.

It is infrastructure in the strict sense: it operates beneath the agent's workflow rather than alongside it, providing a reliable service that the agent can invoke rather than a resource the agent must manually consult. When this layer is in place, a coding agent building a new component does not produce a component that happens to use the right colors. It produces a component that feels like it belongs – because the judgment that makes a composition feel considered was already present at the moment of building.
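As a sketch of the kind of interface this implies (the service, function names, and return shape below are all hypothetical, not an existing API), a runtime judgment layer looks less like a token lookup and more like a query about intent answered with a composition and its rationale:

```python
from dataclasses import dataclass

# Hypothetical sketch of a runtime design-judgment service.
# Nothing here is a real API; it only illustrates the shape of the idea:
# the agent states an intent, the service returns a composition with rationale.

@dataclass
class Composition:
    tokens: dict[str, str]   # resolved design decisions, by role
    rationale: str           # the intent behind the values

# Stand-in for encoded brand judgment; a real service would derive this
# from expression infrastructure rather than a hard-coded table.
_JUDGMENT = {
    "primary-action": Composition(
        tokens={"background": "color.brand-primary", "padding": "spacing.md"},
        rationale="Primary actions carry the brand color and generous spacing.",
    ),
    "secondary-action": Composition(
        tokens={"background": "color.neutral-100", "padding": "spacing.sm"},
        rationale="Secondary actions recede: neutral fill, tighter spacing.",
    ),
}

def compose(intent: str) -> Composition:
    """Answer 'what should this be?' rather than 'what values exist?'."""
    return _JUDGMENT[intent]

decision = compose("primary-action")
print(decision.tokens["background"])  # -> color.brand-primary
```

The design choice worth noting is that the service is invoked at the moment of building, inside the agent's loop, rather than consulted as documentation upstream of it.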

Why this category is new

The AI-native design stack did not need to exist before coding agents built production interfaces. When humans composed every interface, the designer's judgment was always present – embedded in the person doing the composing, not in infrastructure the system could call. The emergence of coding agents as primary interface builders created a demand for something that had never needed to exist: a way to make design judgment available as a service, callable at runtime, without a human in the loop.

That demand is what defines the category. Tools like Figma define one category. Systems like Tokens Studio and Storybook define another. The AI-native design stack is the third category – and it is the one the current moment actually needs.

See also: what is expression infrastructure for the foundational definition this category builds on.