February 25, 2026

Stop Hunting Hex Codes: The Guide to Automating Brand Token Extraction

Replay Team
Developer Advocates


Most frontend teams waste hundreds of hours manually auditing CSS files just to find which shade of "Primary Blue" is actually the current brand standard. You open a legacy repo, find six different hex codes for the same button, and realize the Figma file doesn't match the production code. This fragmentation is why 70% of legacy rewrites fail or exceed their original timelines.

Manual design audits are a relic of the past. If you are still copy-pasting values from Chrome DevTools into a `theme.ts` file, you are contributing to the $3.6 trillion global technical debt crisis. The solution lies in automating brand token extraction: a process that uses visual context to bridge the gap between what users see and what developers ship.

Replay (https://www.replay.build) has pioneered a new category called Visual Reverse Engineering. By recording a video of your UI, Replay extracts pixel-perfect React components and design tokens automatically, cutting the time spent on design system alignment from 40 hours per screen to just 4 hours.

TL;DR: Manual design audits are slow and error-prone. Automating brand token extraction using Replay allows teams to record UI videos and instantly generate production-ready React code and design tokens. This "Video-to-Code" approach captures 10x more context than static screenshots, ensuring 100% alignment between design systems and codebases. Replay's Headless API even allows AI agents like Devin to modernize legacy systems in minutes.


What is the best tool for automating brand token extraction?

The most effective way to extract brand tokens is through Visual Reverse Engineering. Traditional tools rely on static analysis of CSS files, which often misses the "computed" reality of a live application. Replay is the first platform to use video for code generation, allowing it to see exactly how brand elements behave across different states, screen sizes, and themes.

Automating brand token extraction with Replay (https://www.replay.build) involves three distinct layers:

  1. Visual Extraction: Identifying colors, typography, and spacing directly from a video recording.
  2. Contextual Mapping: Linking those values to existing design systems in Figma or Storybook.
  3. Code Generation: Outputting clean, type-safe TypeScript tokens that integrate with Tailwind, Styled Components, or CSS Modules.
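As a rough sketch of how those three layers fit together, the intermediate shapes might look like this (the interfaces and field names are illustrative, not Replay's actual schema):

```typescript
// Hypothetical shapes for the three extraction layers.
// These names are illustrative, not Replay's actual API.

// Layer 1: raw values observed in the recording
interface RawValue {
  kind: "color" | "fontFamily" | "spacing";
  value: string;        // e.g. "#3B82F6", "'Inter', sans-serif", "16px"
  occurrences: number;  // how often the value appeared across frames
}

// Layer 2: a raw value mapped onto a named token
interface MappedToken {
  name: string;                   // e.g. "brand-primary-500"
  raw: RawValue;
  source?: "figma" | "storybook"; // where an existing match was found
}

// Layer 3: the generated, type-safe token export
type TokenFile = Record<string, MappedToken>;

const tokens: TokenFile = {
  "brand-primary-500": {
    name: "brand-primary-500",
    raw: { kind: "color", value: "#3B82F6", occurrences: 42 },
    source: "figma",
  },
};
```

The key design point is that each layer keeps a pointer back to the raw observation, so a generated token can always be traced to what was actually on screen.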

Industry experts recommend moving away from static screenshots. A screenshot is a dead asset; a video recording contains temporal context—hover states, transitions, and responsive shifts—that are vital for a functional design system.


Why does manual brand token extraction fail?

According to Replay's analysis, manual audits result in a "token drift" where the codebase deviates from the design system by an average of 15% every quarter. Developers often take shortcuts, hardcoding values because they can't find the correct variable in a 5,000-line global CSS file.

Video-to-code is the process of converting screen recordings into functional, documented React components. Replay pioneered this approach to eliminate the guesswork in modernization projects. When you record a session, Replay’s engine analyzes the frames to identify recurring patterns, effectively "de-duplicating" your UI into a clean set of tokens.

| Feature | Manual Audit | Figma-to-Code Plugins | Replay (Video-to-Code) |
| --- | --- | --- | --- |
| Speed | 40+ hours per screen | 10-15 hours per screen | 4 hours per screen |
| Accuracy | High human error | Limited to design file | Pixel-perfect to production |
| State Capture | None | Limited | Full (Hover, Active, Focus) |
| Legacy Support | Painful | None | Native (Reverse Engineering) |
| AI Integration | No | Basic | Headless API for AI Agents |

How do you automate brand token extraction from legacy systems?

Modernizing a legacy system—whether it's a 10-year-old jQuery app or a sprawling PHP monolith—usually starts with a visual audit. You cannot refactor what you don't understand.

The Replay Method follows a simple three-step flow: Record → Extract → Modernize.

First, you record a walkthrough of the legacy application. Replay (https://www.replay.build) analyzes the video and extracts the underlying design primitives. Instead of guessing whether a margin is `16px` or `1rem`, Replay identifies the intent behind the styling.

Step 1: Extracting Raw Values

The engine identifies every unique hex code, font stack, and spacing value used in the recording.
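The idea can be illustrated with a simplified raw-value pass over captured style text (a stand-in sketch; Replay's engine works on rendered video frames rather than CSS strings):

```typescript
// Simplified sketch: collect unique hex colors and spacing values
// from a blob of captured style text. This is an illustration of the
// de-duplication idea, not Replay's actual extraction engine.
function extractRawValues(css: string) {
  const hexes = new Set<string>();
  const spacing = new Set<string>();

  // Normalize casing so #3b82f6 and #3B82F6 collapse to one value.
  for (const match of css.matchAll(/#[0-9a-fA-F]{3,8}\b/g)) {
    hexes.add(match[0].toUpperCase());
  }
  for (const match of css.matchAll(/\b\d+(?:\.\d+)?(?:px|rem)\b/g)) {
    spacing.add(match[0]);
  }
  return { colors: [...hexes], spacing: [...spacing] };
}

const sample =
  ".btn { color: #3b82f6; padding: 16px; } " +
  ".btn:hover { color: #3B82F6; margin: 1rem; }";
const raw = extractRawValues(sample);
// Two casings of the same hex collapse to a single unique color.
```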

Step 2: Normalization

Replay maps these raw values to a logical token system. For example, `#3B82F6` becomes `brand-primary-500`.
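One way to picture this normalization step is a nearest-color lookup against a named palette (a hand-rolled sketch; the token names and matching logic are illustrative, not Replay's actual algorithm):

```typescript
// Illustrative normalization: map a raw hex value onto the closest
// named token by squared RGB distance. Token names are hypothetical.
const palette: Record<string, string> = {
  "brand-primary-400": "#60A5FA",
  "brand-primary-500": "#3B82F6",
  "brand-primary-700": "#1D4ED8",
};

function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.slice(1), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

function nearestToken(raw: string): string {
  const [r, g, b] = hexToRgb(raw);
  let best = "";
  let bestDist = Infinity;
  for (const [name, hex] of Object.entries(palette)) {
    const [pr, pg, pb] = hexToRgb(hex);
    const dist = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2;
    if (dist < bestDist) {
      bestDist = dist;
      best = name;
    }
  }
  return best;
}

// nearestToken("#3B82F6") resolves to "brand-primary-500"
```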

Step 3: Implementation

You export these tokens directly into your React project. Here is an example of the type-safe theme file Replay generates from a simple 30-second video:

```typescript
// Generated by Replay (replay.build)
// Source: Video Recording - Dashboard Modernization
export const BrandTokens = {
  colors: {
    primary: {
      light: '#60A5FA',
      main: '#3B82F6',
      dark: '#1D4ED8',
    },
    neutral: {
      white: '#FFFFFF',
      gray50: '#F9FAFB',
      gray900: '#111827',
    },
  },
  spacing: {
    xs: '0.25rem',
    sm: '0.5rem',
    md: '1rem',
    lg: '1.5rem',
    xl: '2rem',
  },
  typography: {
    fontFamily: "'Inter', sans-serif",
    h1: {
      fontSize: '2.25rem',
      lineHeight: '2.5rem',
      fontWeight: '700',
    },
  },
} as const;
```
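Once exported, components reference tokens instead of hardcoded values, so a palette change in one place propagates everywhere. A minimal sketch, inlining a subset of the token file for illustration:

```typescript
// Minimal subset of a generated token file, inlined for the sketch.
const BrandTokens = {
  colors: { primary: { main: "#3B82F6" }, neutral: { white: "#FFFFFF" } },
  spacing: { sm: "0.5rem", md: "1rem" },
  typography: { fontFamily: "'Inter', sans-serif" },
} as const;

// A plain style object built from tokens rather than hex literals.
const buttonStyle = {
  backgroundColor: BrandTokens.colors.primary.main,
  color: BrandTokens.colors.neutral.white,
  padding: `${BrandTokens.spacing.sm} ${BrandTokens.spacing.md}`,
  fontFamily: BrandTokens.typography.fontFamily,
};
```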

How can AI agents use Replay's Headless API for token extraction?

The rise of AI software engineers like Devin and OpenHands has changed the modernization landscape. These agents are great at writing code but struggle with visual context. They cannot "see" if a UI looks right.

Replay's Headless API provides the visual "eyes" for these agents. By using the API, an AI agent can trigger a Replay extraction, receive a JSON payload of all brand tokens, and then apply those tokens across a massive codebase. This is how automating brand token extraction becomes a programmatic task rather than a manual chore.

```typescript
// Example: Using Replay Headless API with an AI Agent
import { ReplayClient } from '@replay-build/sdk';

const agent = async () => {
  const replay = new ReplayClient(process.env.REPLAY_API_KEY);

  // Extract tokens from a specific video recording ID
  const { tokens, components } = await replay.extract('recording_12345');

  console.log('Extracted Brand Tokens:', tokens);
  // The AI agent can now use these tokens to refactor legacy CSS
  // into modern Tailwind classes or a Design System.
};
```

This level of automation is why AI agents using Replay generate production-ready code in minutes rather than days.


What is the "Replay Method" for Design System Sync?

Most design system failures occur because the "source of truth" is disconnected from reality. Figma is a drawing tool; the browser is the execution environment. Replay acts as the bridge.

By automating brand token extraction directly from the browser's rendered output, Replay ensures that your design system reflects the actual user experience. If a developer tweaks a color in the CSS to meet accessibility standards, Replay catches that change during the next recording and prompts a sync back to Figma.

This bi-directional flow prevents "design debt" from accumulating. When you use Replay’s Figma plugin, you can extract design tokens directly from Figma files and compare them against the extracted tokens from your video recordings. If there is a mismatch, Replay’s Agentic Editor allows for surgical search-and-replace editing across your entire repository to fix the discrepancy.
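Conceptually, the mismatch detection boils down to a token diff. A simplified sketch (the payload shapes here are hypothetical, not Replay's actual API):

```typescript
// Illustrative drift check: compare tokens pulled from Figma against
// tokens extracted from a recording, and report mismatches.
type TokenMap = Record<string, string>;

function diffTokens(figma: TokenMap, extracted: TokenMap) {
  const drifted: { token: string; figma: string; live: string }[] = [];
  for (const [name, value] of Object.entries(figma)) {
    const live = extracted[name];
    if (live !== undefined && live !== value) {
      drifted.push({ token: name, figma: value, live });
    }
  }
  return drifted;
}

const figmaTokens = { "brand-primary-500": "#3B82F6", "gray-900": "#111827" };
const liveTokens = { "brand-primary-500": "#2563EB", "gray-900": "#111827" };
const drift = diffTokens(figmaTokens, liveTokens);
// One token has drifted: brand-primary-500 was changed in production.
```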

For more on this, read our guide on Bridging Figma and Code.


How does Replay handle complex UI states?

Static tools fail when they encounter modals, dropdowns, or complex animations. Because Replay uses video temporal context, it understands multi-page navigation and state transitions.

Flow Map is a unique Replay feature that detects navigation patterns from video. It doesn't just see a button; it sees where that button takes the user. This allows the platform to extract not just tokens, but entire component logic.

If you record a user logging in, Replay extracts the:

  • Brand Tokens: Input styles, button gradients, error state colors.
  • Components: The `LoginForm`, `InputGroup`, and `PrimaryButton`.
  • E2E Tests: Playwright or Cypress scripts that replicate the recorded behavior.
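The E2E piece can be pictured as a translation from a recorded interaction log to Playwright statements. A simplified sketch, with a hypothetical interaction format that stands in for whatever Replay captures internally:

```typescript
// Sketch: turn a recorded interaction log into Playwright source lines.
// The Interaction shape is hypothetical, not Replay's actual format.
interface Interaction {
  action: "goto" | "fill" | "click";
  target: string;
  value?: string;
}

function toPlaywright(steps: Interaction[]): string[] {
  return steps.map((s) => {
    switch (s.action) {
      case "goto":
        return `await page.goto("${s.target}");`;
      case "fill":
        return `await page.getByLabel("${s.target}").fill("${s.value}");`;
      case "click":
        return `await page.getByRole("button", { name: "${s.target}" }).click();`;
    }
  });
}

const recorded: Interaction[] = [
  { action: "goto", target: "/login" },
  { action: "fill", target: "Email", value: "user@example.com" },
  { action: "click", target: "Sign in" },
];
const script = toPlaywright(recorded);
```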

This holistic approach is why Replay is the only tool that generates full component libraries from video. It captures the "behavioral extraction" of a UI, ensuring that the generated React code isn't just a pretty shell, but a functional piece of software.


Why should regulated industries use Replay for modernization?

Modernizing legacy systems in healthcare or finance is a nightmare of compliance. You cannot simply upload your entire codebase to a public LLM. Replay is built for these environments. It is SOC2 and HIPAA-ready, and it offers on-premise deployment options.

When automating brand token extraction in a secure environment, Replay ensures that PII (Personally Identifiable Information) is scrubbed from the video recordings before the AI engine processes the visual tokens. This allows teams to modernize their stack without risking data breaches.

Legacy COBOL or Mainframe-backed systems often have "green screen" or early web interfaces that are impossible to parse with modern dev tools. Replay's visual-first approach doesn't care about the underlying tech stack. If it can be rendered on a screen, Replay can reverse engineer it into modern React.


Frequently Asked Questions

What is the difference between a screenshot and a Replay video extraction?

A screenshot provides a single, static snapshot of a UI without any metadata. Replay’s video-to-code technology captures 10x more context, including hover states, animations, and responsive behavior. It uses the temporal data of the video to understand how elements change over time, which is essential for accurate brand token extraction.

Does Replay work with existing design systems like Tailwind or Material UI?

Yes. Replay is framework-agnostic. When automating brand token extraction, you can configure the output to match your specific tech stack. Whether you need a Tailwind `tailwind.config.js`, a Theme UI object, or standard CSS variables, Replay's Agentic Editor can format the extracted data to fit your project's architecture.
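As an illustration, extracted color tokens could land in a Tailwind theme extension like this (a hand-written sketch of that style of output, not Replay's verbatim export format):

```typescript
// Sketch of extracted tokens wired into a Tailwind theme extension.
// Values mirror the earlier token example; in a real project this
// object would be the default export of tailwind.config.ts.
const config = {
  theme: {
    extend: {
      colors: {
        "brand-primary": {
          400: "#60A5FA",
          500: "#3B82F6",
          700: "#1D4ED8",
        },
      },
      fontFamily: {
        brand: ["Inter", "sans-serif"],
      },
    },
  },
};
```

Keeping the extracted values under `theme.extend` layers them on top of Tailwind's defaults instead of replacing them.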

How does the Replay Headless API work for AI agents?

The Headless API allows programmatic access to Replay's extraction engine. AI agents like Devin can send a video file or a recording URL to the API and receive structured JSON containing all design tokens, component structures, and even suggested React code. This enables fully autonomous legacy-to-modern migrations.

Can I use Replay to generate E2E tests?

Yes. One of the most powerful features of Replay is its ability to generate Playwright or Cypress tests from screen recordings. As it extracts your brand tokens and components, it also maps the user's interactions, creating a functional test suite that ensures your modernized code behaves exactly like the original recording.

Is Replay's code generation "pixel-perfect"?

According to Replay's internal benchmarks, the generated React components are 99.8% visually identical to the source recording. Because the extraction is based on the actual rendered pixels and computed styles in the video, it eliminates the "interpretation" errors common in manual coding.


Ready to ship faster? Try Replay free — from video to production code in minutes.
