If It Doesn't Exist in Code, It's Not a Design System
A code-first design system with an AI builder, interactive catalog, schema-driven rendering, and automated visual testing — built by one designer in 4 weeks.
Everyone's Looking at Something Different
I've been part of a few 0-to-1 design system builds. The same problems come up every time. Engineers build a codified system and educate other engineers on it, but if it's not in lockstep with design, information drifts. Designers think things are available in code that haven't been built yet. Engineers think components are current when they've been redesigned. Everyone's looking at something different and convinced theirs is correct.
This is the same class of problem that AI products face: misalignment between what's designed, what's built, and what ships. The tools are different but the failure mode is identical: knowledge lives in people's heads instead of systems, and it drifts the moment those people aren't in the room.
Having a documented design system in Figma is great. But if engineering isn't leveraging it to build a codified system, it's documentation that a design team can pour months into maintaining, evangelising, and policing — and still have zero guarantee it reflects what's actually shipping. The real source of truth is what's in the customer's hands. What they're seeing and interacting with. That's code. That's the product. Figma is not the product.
But you can't ignore Figma either. It's a knowledge transfer problem. I wanted to see if there was a better way. A single system that keeps both sides honest.
So I started where the truth actually lives — in code.
Start Where the Truth Lives
SystemIQ is a Swift package that is the design system. Importable into any project, refined independently, testable in isolation. I chose SwiftUI because I've experimented with Swift since 2015 and I love iOS as a platform. Products built natively just perform better. I also wanted to learn iOS 26. It was a polarising redesign, and building for it was the best way to understand how flexible it actually is.
But a design system that nobody can explore is just code in a repo.
A Playground, Not Documentation


The catalog is the front door. Every token, component, and view template in the system, browsable in one place. Tap into any component and you get a page with toggle-able settings for every property. Change states, enter text, turn features on and off. This isn't documentation. It's a testing environment. A PM can see what's available. A designer can see how a component actually behaves. An engineer can see the API surface. Everyone's finally looking at the same thing.
Starting at the foundation: tokens. The goal was to remove decisions, not add them. If the platform already solved it, don't reinvent it. Only add structure where it earns its complexity. Anything the platform already handles well — backgrounds, fills, labels — follows what iOS gives you out of the box. Where the system needs to express meaning, I built a semantic layer around intents: positive, negative, neutral, warning, accent, info. You don't pick a colour. You pick what you're trying to say.
DSColor.intent(.positive, .solid) // Green — confirmations, success
DSColor.intent(.negative, .solid) // Red — errors, destructive actions
DSColor.intent(.warning, .surface) // Amber surface — caution states

Spacing borrows from Tailwind's scale but simplified: space/16 maps to 16pt. No mental arithmetic with REMs. Typography stays native to SwiftUI.
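To make the shape of that semantic layer concrete, here's a minimal sketch of how the intent/emphasis pairing could be modelled. The intent and emphasis names come from the system described above; the concrete colour values and the `String` return type are illustrative stand-ins (the real tokens would resolve to SwiftUI `Color`s that adapt to light and dark mode):

```swift
// Hypothetical sketch of the semantic token layer.
// Intent and emphasis names match the article; values are illustrative.
enum Intent: String, CaseIterable {
    case positive, negative, neutral, warning, accent, info
}

enum Emphasis {
    case solid    // full-strength fill: buttons, badges
    case surface  // tinted background: banners, cards
}

struct DSColor {
    /// Resolve an intent + emphasis pair to a concrete colour.
    /// You don't pick a colour; you pick what you're trying to say.
    static func intent(_ intent: Intent, _ emphasis: Emphasis) -> String {
        let base: String
        switch intent {
        case .positive: base = "green"
        case .negative: base = "red"
        case .neutral:  base = "gray"
        case .warning:  base = "amber"
        case .accent:   base = "blue"
        case .info:     base = "teal"
        }
        return emphasis == .solid ? base : "\(base)-surface"
    }
}
```

The point of the structure is that call sites read as meaning, not as colour choices: `DSColor.intent(.positive, .solid)` says "confirmation", and the palette can change underneath it.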

Tokens are the foundation. But the catalog isn't just for browsing values.
I wanted to play with component parameters while the component was in view. The best example I found in iOS was the Notes app — a custom toolbar where styling controls are right there, one tap away, results visible instantly.

The floating toolbar uses iOS 26's glass material system and surfaces property overrides with nested controls — toggle "show icon" and an SF Symbol picker appears inline. Getting this working was technically hard. SwiftUI fights you when you want floating overlays that respond to dynamic content. I have a newfound appreciation for developers doing hard, boring things that make UX a hundred times better. It's thankless work.
But showing a list of token values isn't enough. You need to show where those tokens are actually applied — otherwise you're listing values with no spatial context, and the user is left thinking: right, where does this actually go?

The measurement system makes that relationship explicit. Pan through a component to see annotated overlays: exact dimensions, token bindings, spatial relationships. In the default state, there's a gyroscope-driven parallax that gives the component physical presence. I added it mostly because it was delightful; it happens to be functional too.
Figma solves measurements in dev mode on desktop, where you go looking for things. I didn't see a mobile solution. I wanted it to be easy. Tab through details, see something move around with nice, intentional animation. Playground, spec, and measurement as one continuous experience.
Four Templates, Millions of Screens
Once tokens and components existed, the question became: how far can composition take you? Atomic components compose into layout components. Layout components compose into view templates. Tokens → Components → Views.



The answer: surprisingly far. Four view templates cover the vast majority of screens: Context, Instruction, List, Form. That sounds reductionist — until you start counting. Most apps don't have that many distinct screens. They have permutations of the same structures wearing different clothes. The schema defines what's possible. The configuration defines what shows up.


Those two screens are the same view. Same template, same code, same schema. Different configuration. One's an onboarding checklist with step statuses and a skip option. The other's a security gate with toggles and a warning. The template doesn't know the difference. It just renders what the configuration tells it to. Every property you add to the schema (a new action type, a footer variant, an icon style) multiplies what that single template can become.
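The one-template, many-configurations idea can be sketched in miniature. The type and field names below are invented for illustration, and the "renderer" emits text rather than SwiftUI; the real SystemIQ schema is far richer. The shape is what matters: the template is one function, and everything screen-specific lives in data.

```swift
// Hypothetical, stripped-down version of a schema-driven list template.
struct ListItemConfig {
    let title: String
    let status: String?   // e.g. "done", "pending"
    let toggle: Bool?     // render a switch instead of a status
}

struct ListTemplateConfig {
    let header: String
    let items: [ListItemConfig]
    let footerAction: String?   // e.g. "Skip", or nil for none
}

/// The template doesn't know whether it's an onboarding checklist or a
/// security gate. It just renders what the configuration tells it to.
func render(_ config: ListTemplateConfig) -> String {
    var lines = [config.header]
    for item in config.items {
        if let toggle = item.toggle {
            lines.append("[\(toggle ? "x" : " ")] \(item.title)")
        } else {
            lines.append("- \(item.title) (\(item.status ?? "unknown"))")
        }
    }
    if let action = config.footerAction {
        lines.append("-> \(action)")
    }
    return lines.joined(separator: "\n")
}

// Two very different screens, one template.
let onboarding = ListTemplateConfig(
    header: "Get started",
    items: [ListItemConfig(title: "Create profile", status: "done", toggle: nil),
            ListItemConfig(title: "Add a payment method", status: "pending", toggle: nil)],
    footerAction: "Skip")

let securityGate = ListTemplateConfig(
    header: "Security settings",
    items: [ListItemConfig(title: "Face ID", status: nil, toggle: true),
            ListItemConfig(title: "Two-factor auth", status: nil, toggle: false)],
    footerAction: nil)
```

Adding a property to `ListItemConfig` (a new action type, a footer variant) instantly widens what every configuration of the template can express, without touching the renderer's call sites.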
A system this composable, changing this frequently, needs a safety net. Especially when you're the only one catching regressions.
Can I Give You Eyeballs?
I built this entire system with Claude Code, an AI agent that writes Swift, runs tests, and iterates on components alongside me. It's genuinely powerful. But it has a blind spot: it can't see what it's building. Claude would make a change, tell me it looked right, and I'd check — it would be wrong. Not severely, but not what we agreed. A padding off here, a colour token swapped there. Small things that erode trust over time.
So I thought: can I give you eyeballs?
The idea was simple: render a component, screenshot it, hand the image back to Claude so it could verify its own work. It evolved into a full visual regression system. 50+ automated tests across three types: variant matrices (every combination of style × state in a grid), demo page snapshots (full catalog pages at device size), and edge-case captures (long text, accessibility sizes, narrow widths). Light and dark mode for everything.
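At the heart of any snapshot check is a pixel comparison between a baseline image and a fresh render. Here's a simplified, hypothetical sketch of just that step, leaving out the SwiftUI rendering and image decoding; the tolerance and pass threshold are illustrative numbers, not the system's real values:

```swift
// Compare two RGBA pixel buffers and report the fraction of pixels
// that differ beyond a per-channel tolerance. In a real pipeline the
// buffers would come from rendering a view and loading a baseline PNG.
func diffRatio(_ a: [UInt8], _ b: [UInt8],
               bytesPerPixel: Int = 4, tolerance: Int = 2) -> Double {
    precondition(a.count == b.count && a.count % bytesPerPixel == 0)
    let pixelCount = a.count / bytesPerPixel
    var changed = 0
    for p in 0..<pixelCount {
        let offset = p * bytesPerPixel
        for c in 0..<bytesPerPixel {
            // Count the pixel as changed if any channel moves past tolerance.
            if abs(Int(a[offset + c]) - Int(b[offset + c])) > tolerance {
                changed += 1
                break
            }
        }
    }
    return Double(changed) / Double(pixelCount)
}

// A snapshot "passes" if fewer than 0.1% of pixels changed (illustrative).
func snapshotMatches(_ baseline: [UInt8], _ candidate: [UInt8]) -> Bool {
    diffRatio(baseline, candidate) < 0.001
}
```

The tolerance matters: anti-aliasing and GPU differences produce off-by-one channel values that would otherwise fail every run, so the check ignores sub-threshold noise and only flags real drift.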



If you're building with AI, this is the single highest-leverage thing you can do. Without visual verification, you're reviewing every change manually — which means you're the bottleneck, and the AI's speed advantage disappears. With it, the agent makes a change, runs the snapshot, reads the image, and self-corrects before you ever see it. The feedback loop closes inside the system instead of inside your head. Errors that used to take three rounds of "no, that's still wrong" now resolve in one pass. The AI gets faster because it can trust its own output. You get faster because you can trust the AI.
I guess there's a reason this kind of testing has been used in engineering forever. I just got there on my own, out of frustration, as a designer.
What This Changed
It's easier to test a component in this system than it is in Figma. I can experiment with real data, feel the interactions, the haptics, the scroll behaviour. I can see exactly what will make it into a customer's hands — not an approximation of it. When something drifts, I notice immediately, because I'm looking at the same thing the user will be.
That's the shift. A Figma design system is documentation. A code-first design system is the product. Good scaffolding in a design system package lets people tangibly feel and observe how a product will behave and how the pieces fit together. Not a static representation. The real thing.
And if every component is schema-driven and composable from structured data, you've built something an AI can work with. Not approximate. Precisely. The schema becomes the contract between human intent and machine output.
This system is powerful if you understand it. But the whole point was that people shouldn't have to.
What if you could describe what you wanted — in plain language — and the system built it for you?