Design Lead · Design team of 6 · 3 months

Systematise the Expertise, Not the Headcount

How systematising content design rules into tooling absorbed 85% of content review, reallocated two planned hires, and eventually became a Figma plugin.

Content Design · Leadership · AI · Figma Plugin · Governance

The Knowledge Lived in People, Not the Process

It was early 2025 at MoonPay. The Consumer side of the org was going from one team to four, and the Platform side (identity, payments, fraud) had started owning complete front-end experiences for the first time. Eight-plus teams were actively building things that users would see and interact with.

Content design coverage was thin. Two teams had dedicated support. The rest were on their own — designers guessing at copy, engineers shipping whatever was in the mock, PMs not thinking about content at all.

Two content design hires had been approved before I'd taken the role. On paper, the solution was obvious: fill the headcount, give more teams content support. But I wasn't convinced that brute-force hiring was the right answer. Not yet. Because the staffing gap wasn't the only problem. It was masking a deeper one.

Diagram showing 8 delivery teams with their composition — only 2 have content design support, while the rules exist but are buried in Notion
Eight teams shipping UI. Two had content support. The rules existed — nobody could find them.

Having dedicated content support creates a false reality. Designers focus on the UI and leave the words to someone else. PMs don't think about content. Engineers ship whatever's in the mock. But half the experience is the words. Understanding what you're doing on a screen comes from language more than layout. Every team without content support ships subpar work.

You can solve that with headcount. But headcount doesn't scale in a startup, and it doesn't fix the underlying problem.


The Rules Were Already Written

This is the part that made the whole thing possible. Excellent work had been done about eighteen months earlier documenting a comprehensive set of content design rules. Voice and tone. Button label conventions. Error message patterns. Heading hierarchy. Accessibility text. All of it. The rules were specific, well-structured, and right.

The problem was where they lived. Notion. A documentation repo that people had to go looking for. Nobody does that. Designers don't stop mid-flow to look up whether a button should say "Submit" or "Save changes." They guess. They're not lazy, they're busy. The path of least resistance is whatever sounds right in the moment.

The rules existed. The knowledge was codified. It just wasn't accessible in the moment it was needed.

The systematised content design rules — voice and tone, button labels, error messages, headings, forms, empty states, accessibility, and more
Eighteen months of excellent documentation. Locked in Notion and Google Docs nobody opened mid-flow.

So the question became: how do you take rules that already exist and put them in people's hands at the exact moment they need them?


A Weekend and a Custom GPT

I'd been experimenting with custom GPTs for a while at that point. This felt like the most obvious use case imaginable. We had a complete writing system, codified and well-documented. We just needed to make it usable.

I spent a weekend structuring the rules into a custom GPT with expansive project guidelines. Voice and tone. Button labels: verb-first, max three words, specific over generic. Error messages: constructive, blame-free, always "what happened" then "what to do." Heading rules, body text, empty states, form labels, accessibility guidance. All of it.
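
For illustration, here is one way a ruleset like this can be encoded so a model can cite rules back by name. The shape, rule IDs, and examples below are mine, not the actual MoonPay guidelines; the constraints themselves come straight from the rules above.

```ts
// Illustrative sketch: each rule pairs a checkable constraint with its
// rationale and examples, so reviews can teach rather than just flag.
interface ContentRule {
  id: string;          // stable ID the model can cite in feedback
  constraint: string;  // the checkable rule
  rationale: string;   // why the rule exists
  good: string[];      // examples to pattern-match against
  bad: string[];
}

const buttonLabels: ContentRule[] = [{
  id: "buttons.verb-first",
  constraint: "Verb-first, max three words, specific over generic.",
  rationale: "Users scan buttons for the action, not the object.",
  good: ["Save changes", "Verify identity"],
  bad: ["Submit", "OK"],
}];

const errorMessages: ContentRule[] = [{
  id: "errors.what-then-what-to-do",
  constraint: "Constructive and blame-free: what happened, then what to do.",
  rationale: "Blame-free errors reduce abandonment and support contacts.",
  good: ["We couldn't verify that card. Try another payment method."],
  bad: ["Invalid input", "You entered the wrong details"],
}];
```

Pairing each constraint with its rationale and examples is what lets the tool explain the why, not just the what: the "teaching while reviewing" behaviour described below.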

The MVP was ready by Monday. I started testing — uploading screenshots of production screens, pasting Figma designs, asking for audits. One of the first screens I threw at it was a fiat balance education modal that had been a point of contention internally. Too copy-heavy, concepts that were hard for users to grasp (approval rates, funding methods, balance mechanics), and legal ambiguity around fee claims that nobody had flagged. I didn't love the screen. I wanted to see if the GPT would catch what I was seeing.

A MoonPay fiat balance education screen — the screen that was reviewed by the custom GPT
The screen that kicked it off. Too much copy, unclear hierarchy, legal grey areas. 'How is this reading against our content guidelines?'

The response wasn't a vague "looks good" or a list of nitpicks. It was a structured, ten-point audit — grounded in the exact rules I'd given it.

Prompt

How is this reading against our content guidelines?

It wasn't just checking words against rules. It was doing six distinct types of work simultaneously (sketched as a data shape after this list):

Anatomy of a single GPT review — rules audit, structural critique, concrete rewrites, UX escalation, risk flagging, and prioritised actions
One screenshot. Six categories of structured feedback. Instant, on demand, for any team.

Rules-based auditing

Measuring reading level, tone compliance, redundancy against specific codified standards. Perfect recall of every guideline.

Structural critique

Questioning information hierarchy, reframing around user motivation rather than product features.

Concrete rewrites

Not just "change this" but why, with Grade 7-appropriate alternatives. Teaching while reviewing.

UX escalation

Flagging that the problem isn't just copy but screen structure.

Risk flagging

Legal and compliance callouts marked as dependencies, not afterthoughts.

Prioritised actions

A scorecard and "fix these four things first," so designers know where to start.
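
For readers who think in data shapes, here is a hypothetical sketch of that structured output. The field names map one-to-one onto the six categories above, but they are my inference, not the GPT's actual format.

```ts
// Hypothetical shape of a single review, one field per category above.
interface ContentAudit {
  rulesAudit: { ruleId: string; finding: string; severity: "pass" | "warn" | "fail" }[];
  structuralCritique: string[];   // hierarchy and framing issues
  rewrites: { original: string; suggested: string; why: string }[];
  uxEscalations: string[];        // "this isn't a copy problem"
  risks: { area: "legal" | "compliance"; note: string; blocking: boolean }[];
  priorities: string[];           // "fix these four things first"
  score: number;                  // the scorecard, e.g. out of 10
}
```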

The traditional model: a content designer picks up work as an IC, but only for the team they're embedded with. The other six teams either got no feedback at all, or got it just before release, causing bottlenecks, rework, or confused customers. This gave every team a structured audit, instantly, on demand. The volume work (the 85% that's pattern-matching against known rules) was handled. The remaining 15% still needed a human: brand intuition, political context, genuine taste calls. But that ratio changed everything.


The Hard Conversation

The tool was producing work as good as, and in some cases better than, the previous process. Not because AI is smarter than a content designer. Because it removed a sequential bottleneck: every designer got access to the same standard, in the moment, without waiting for someone else's availability.

The two approved content design hires looked different now. The volume of repetitive review that justified those roles had been absorbed by the tool. The remaining work was the genuinely hard stuff: brand voice development, narrative design, content systems thinking. Higher-leverage work, but not enough to justify two additional headcount.

So I reallocated the budget. The plan wasn't to abandon content design. It was to democratise access first, set a solid foundation through tools, and then hire intentionally for people who would think about content systems rather than brute-forcing IC review work.

Before and after team structure — two planned content design hires reallocated to product design, repetitive review absorbed by the system
Not replacing people. Reallocating from work a system can do to work only people can do.

The hard part wasn't the decision. The hard part was how it landed. No matter how well articulated, it was perceived as "we don't care about the words anymore, we're just using AI." The opposite was true. I cared about content so much that I wanted everyone to have access to a superior standard of it, not just the two teams lucky enough to have a dedicated person.

This is the uncomfortable reality of systematising expertise. The people closest to the knowledge can feel threatened by making it more accessible, even when the goal is to amplify their impact. People who establish the rules for how these tools are used become more valuable, not less. But that requires a shift in identity: from "the person who does the work" to "the person who defines how the work gets done."


What Actually Changed

I piloted the tool with a few high-agency ICs who were already using ChatGPT to improve their writing. When I gave them a standardised GPT grounded in our actual rules, they welcomed it immediately.

It didn't take long before people stopped saying "this isn't finished yet, the copy is still lorem ipsum, this needs a content pass." Designs came to review with well-written, consistent content. Patterns solidified across teams.

The teams that had never had content design support became consistent too. Small, fast-moving platform teams with a history of inconsistent output were now hitting the same standard as everyone else. The quality floor rose across the entire org.

A simple governance layer kept things honest. Before anything shipped, an async content check. The GPT got people 85% of the way there. The last 15% was human judgement, applied quickly, at the end.

Quality went up. Speed went up. The dependency on a single person's bandwidth went to zero.

But the tool had real limits.


What It Couldn't See

The custom GPT was text-in, text-out. You'd paste copy or upload a screenshot, and it would review in isolation. A button label that says "Continue" might be fine on a checkout screen and wrong on a destructive action. The GPT couldn't know the difference. It had no awareness of the component being used, the hierarchy of the screen, or the sibling text around the element it was reviewing.

And it couldn't act on its own suggestions. It could suggest "change 'Submit' to 'Save changes'," but someone still had to find that text node in Figma and update it manually.

What if the system could see the canvas? What if it understood the component, not just the words?

That question led to a Figma plugin. The same content design ruleset, but embedded in the designer's actual workflow.

The Figma content co-pilot plugin reviewing a selected text node — identifying it as a body element, critiquing the tone, and offering a one-click rewrite
Canvas-aware. It knows 'Confirm your order' is a heading and the callout is body text. It rewrites in place. One click to apply.
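
To make "canvas-aware" concrete, here is a minimal sketch of that core loop in the Figma plugin API. The selection, font-loading, and text-editing calls are the real API; the rewrite() backend (the same ruleset served behind an API) and the font-size heuristic for inferring a node's role are illustrative assumptions, not the plugin's actual implementation.

```ts
// Sketch only: rewrite() is a hypothetical call into the same ruleset.
declare function rewrite(text: string, role: "heading" | "body"): Promise<string>;

async function reviewSelection(): Promise<void> {
  for (const node of figma.currentPage.selection) {
    if (node.type !== "TEXT") continue;
    if (node.fontName === figma.mixed) continue; // skip mixed-font nodes for brevity

    // Canvas context the text-in, text-out GPT never had:
    // a crude role heuristic based on font size.
    const size = node.fontSize === figma.mixed ? 16 : node.fontSize;
    const role = size >= 20 ? "heading" : "body";

    const suggestion = await rewrite(node.characters, role);

    // Figma requires the font to be loaded before editing a text node.
    await figma.loadFontAsync(node.fontName);
    node.characters = suggestion; // the one-click apply, in place
  }
}
```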

The rules never changed at any point in this story. They were right from the start. The delivery mechanism evolved: from a Notion doc nobody read, to a custom GPT that elevated individual workflows, to a plugin embedded in the canvas itself.

The evolution arc: Notion documentation, custom GPT, Figma plugin — each step removing friction
The rules never changed. The delivery mechanism evolved three times.

The Principle

Same expertise. Zero bottleneck. Take knowledge that lives in someone's head (or a Notion doc nobody opens), systematise it, and put it where people actually work. The team gets leaner. The quality goes up. The people who defined the rules become more valuable, because now they're shaping a system that scales, not reviewing strings one at a time.

This worked for the words. But words are just one layer of design. What about layout decisions? Component choices? Spacing hierarchies? Platform conventions?

What if we did the same thing — for all of design?