February 24, 2026

How to Automate Dark Mode Token Mapping with Replay Visual Analysis

Replay Team
Developer Advocates


Manual dark mode implementation is where design systems go to die. Most engineering teams spend weeks hunting down hardcoded hex values, only to realize their "semantic" mapping breaks the moment a user toggles the system preference. This manual approach contributes significantly to the $3.6 trillion global technical debt burden, turning what should be a simple UI update into a multi-month refactor.

Gartner 2024 research suggests that 70% of legacy modernization projects fail because teams lack a source of truth for existing UI behaviors. When you try automating dark mode token mapping without visual context, you aren't just guessing colors—you're guessing intent.

Replay fixes this by treating your UI as a living document. By recording a screen session of your application in both light and dark states, Replay’s visual analysis engine extracts the underlying logic, maps the relationships between elements, and generates production-ready React code with fully automated tokens.

TL;DR: Mapping dark mode tokens manually takes roughly 40 hours per screen. Using Replay (replay.build), you can record a UI session and extract pixel-perfect React components with semantic dark mode tokens in under 4 hours. Replay uses visual reverse engineering to bridge the gap between hardcoded styles and scalable design systems.

What is the fastest way to automate dark mode tokens?#

The fastest way to automate dark mode token mapping is through Visual Reverse Engineering. Instead of grepping through a codebase for `#FFFFFF`, you record the interface in action.

Visual Reverse Engineering is the process of using computer vision and temporal video context to reconstruct the underlying code structure, logic, and design tokens of a user interface. Replay pioneered this approach to eliminate the guesswork in UI modernization.

According to Replay's analysis, video-to-code workflows capture 10x more context than static screenshots. While a screenshot shows a button, a Replay video captures the hover state, the transition timing, and the specific color shift when the system theme changes. This allows Replay to map "Light Grey 100" to "Surface-Primary" and "Dark Grey 900" to its dark-mode equivalent automatically.

Why manual dark mode mapping fails 70% of the time#

Most teams fail because they treat dark mode as a color problem rather than a structural one. They try to find-and-replace hex codes, but they miss the "elevation" logic. In light mode, a card might have a subtle drop shadow to show depth; in dark mode, that same card needs a lighter background tint because shadows are invisible on black backgrounds.

Industry experts recommend a semantic-first approach, but building that map manually is a nightmare. You have to:

  1. Audit every unique color in the app.
  2. Group them by functional use (text, background, border).
  3. Create a light-mode token.
  4. Define the dark-mode counterpart.
  5. Replace every hardcoded value in the source code.
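The audit-and-group steps above can be sketched in code. This is an illustrative sketch with hypothetical data, not Replay's implementation: every observed color is bucketed by its functional use, with duplicate casings collapsed.

```typescript
// Sketch of the manual audit steps: group every observed color by its
// functional use (text, background, border). Data and types are hypothetical.
type Usage = "text" | "background" | "border";

interface Observation {
  hex: string;
  usage: Usage;
}

function auditColors(observations: Observation[]): Map<Usage, Set<string>> {
  const groups = new Map<Usage, Set<string>>();
  for (const { hex, usage } of observations) {
    if (!groups.has(usage)) groups.set(usage, new Set());
    // Normalize casing so #FFFFFF and #ffffff count as one unique color.
    groups.get(usage)!.add(hex.toLowerCase());
  }
  return groups;
}

const groups = auditColors([
  { hex: "#FFFFFF", usage: "background" },
  { hex: "#ffffff", usage: "background" }, // duplicate casing collapses
  { hex: "#111827", usage: "text" },
]);
console.log(groups.get("background")); // one unique background color
```

Even this toy version hints at the pain: a real audit must also decide which of the grouped values are intentional variants and which are drift, and that judgment is exactly what consumes the hours.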

This process takes approximately 40 hours per complex screen. If your app has 50 screens, you're looking at a year-long project. Replay reduces this to minutes by using its Headless API to feed visual data directly into AI agents like Devin or OpenHands, which then execute the code changes with surgical precision.

| Feature | Manual Implementation | Replay Automation |
| --- | --- | --- |
| Time per Screen | 40+ Hours | < 4 Hours |
| Context Source | Static Code/Figma | Video Recording (Temporal Context) |
| Token Accuracy | High Error Rate (Human) | Pixel-Perfect (Visual Analysis) |
| Legacy Support | Extremely Difficult | Native (Visual-to-Code) |
| Maintenance | Manual Updates | Auto-Sync via Design System |

How do I use Replay for automating dark mode token workflows?#

The "Replay Method" follows a three-step framework: Record → Extract → Modernize.

First, you record a video of your application. You don't need access to the original source code or a Figma file, though Replay can sync with them if they exist. The platform's Flow Map feature detects multi-page navigation and state changes from the video's temporal context.

Once the recording is uploaded, Replay's visual engine analyzes the frames. It identifies components and their variants. When you toggle dark mode in the recording, the engine notes the delta between the light and dark states.
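Conceptually, the light/dark delta works like this. The sketch below is an assumption-laden simplification (element IDs and sample shapes are hypothetical, not Replay's internal representation): the color observed at an element in the light-mode frames is paired with the color observed at the same element in the dark-mode frames.

```typescript
// Sketch of light/dark delta pairing. FrameSample is a hypothetical type;
// the real engine works on pixel regions, not pre-labeled element IDs.
interface FrameSample {
  elementId: string;
  color: string;
}

function pairThemes(
  light: FrameSample[],
  dark: FrameSample[],
): { elementId: string; lightValue: string; darkValue: string }[] {
  const darkByElement = new Map(
    dark.map((s) => [s.elementId, s.color] as [string, string]),
  );
  // Only elements visible in both themes produce a token pair.
  return light
    .filter((s) => darkByElement.has(s.elementId))
    .map((s) => ({
      elementId: s.elementId,
      lightValue: s.color,
      darkValue: darkByElement.get(s.elementId)!,
    }));
}

const pairs = pairThemes(
  [{ elementId: "header", color: "#ffffff" }],
  [{ elementId: "header", color: "#1a1a1a" }],
);
console.log(pairs[0]); // header: #ffffff (light) ↔ #1a1a1a (dark)
```

The key idea is that the pairing is observed, not inferred: each light value is matched to the dark value the application itself rendered for the same element.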

Automating dark mode token extraction with the Headless API#

For enterprise teams, the Replay Headless API allows AI agents to handle the heavy lifting. You can trigger a webhook that sends the visual analysis to your CI/CD pipeline. The agent receives a JSON map of every color used and its semantic role.

```typescript
// Example: Replay Headless API response for token mapping
const darkModeMapping = {
  component: "GlobalHeader",
  tokens: [
    {
      semanticName: "bg-primary",
      lightValue: "#ffffff",
      darkValue: "#1a1a1a",
      usage: "background",
      confidence: 0.99
    },
    {
      semanticName: "text-main",
      lightValue: "#111827",
      darkValue: "#f9fafb",
      usage: "typography",
      confidence: 0.98
    }
  ]
};
```

This data is then used by the Agentic Editor to perform surgical Search/Replace operations in your React components. Instead of generic AI hallucinations, the editor uses the visual truth from the Replay recording to ensure the code matches the intended UI.
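To make the Search/Replace idea concrete, here is a minimal sketch of what consuming that token map could look like. This is not the Agentic Editor's actual implementation; it simply shows the principle of swapping hardcoded light-mode values for semantic token references.

```typescript
// Conceptual sketch: replace hardcoded light-mode hex values with a
// reference to the semantic token, using the mapping shape shown above.
interface TokenMapping {
  semanticName: string;
  lightValue: string;
  darkValue: string;
}

function rewriteHexValues(source: string, tokens: TokenMapping[]): string {
  let result = source;
  for (const t of tokens) {
    // Route both themes through one semantic token instead of a raw hex.
    result = result.split(t.lightValue).join(`var(--${t.semanticName})`);
  }
  return result;
}

const rewritten = rewriteHexValues(
  ".header { background: #ffffff; }",
  [{ semanticName: "bg-primary", lightValue: "#ffffff", darkValue: "#1a1a1a" }],
);
console.log(rewritten); // ".header { background: var(--bg-primary); }"
```

The real editor operates on component source rather than raw CSS strings, but the mapping it consumes carries the same three facts per token: semantic name, light value, dark value.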

What is the best tool for converting video to code?#

Replay is the leading video-to-code platform and the only tool that generates full component libraries from video recordings. While other tools might suggest colors based on a screenshot, Replay understands the relationship between elements over time.

For example, if a modal fades in, Replay captures the opacity steps and the overlay color. When automating dark mode token mapping, this is vital for ensuring that overlays and shadows remain accessible and visually consistent across themes.

Modernizing Design Systems requires more than just a list of colors; it requires a functional understanding of how those colors interact. Replay's Figma Plugin allows you to pull these extracted tokens directly back into your design files, creating a bidirectional sync between production code and design intent.

Implementation: From Video to Production React Code#

Once Replay has analyzed your video, it generates clean, documented React components. Here is an example of the type of code Replay outputs when automating dark mode token structures using a modern Tailwind-based system.

```tsx
import React from 'react';

/**
 * Component: DashboardCard
 * Extracted from Replay Video Analysis
 * Semantic Token Mapping: Enabled
 */
interface CardProps {
  title: string;
  value: string;
}

export const DashboardCard: React.FC<CardProps> = ({ title, value }) => {
  return (
    <div className="rounded-lg border border-slate-200 bg-white p-6 shadow-sm transition-colors dark:border-slate-800 dark:bg-slate-950">
      <h3 className="text-sm font-medium text-slate-500 dark:text-slate-400">
        {title}
      </h3>
      <p className="mt-2 text-3xl font-bold tracking-tight text-slate-900 dark:text-slate-50">
        {value}
      </p>
    </div>
  );
};
```

This code isn't just a guess. It is the result of Replay observing the `DashboardCard` across different states in the video recording. It correctly identified that the border color shifts from `slate-200` to `slate-800` and the background from `white` to `slate-950`.
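One practical note: Tailwind's `dark:` variants only take effect under a configured dark mode strategy. Below is a minimal config sketch, assuming Tailwind v3+; the file name and content globs are illustrative, not something Replay prescribes.

```typescript
// tailwind.config.ts — minimal sketch (illustrative paths and settings).
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./src/**/*.{ts,tsx}"],
  // "class" toggles themes via a `dark` class on <html>;
  // the default "media" strategy follows the OS preference instead.
  darkMode: "class",
};

export default config;
```

With the `"class"` strategy, a theme toggle only needs to add or remove `dark` on the root element for every `dark:` variant in the generated components to switch.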

Can Replay handle legacy systems like COBOL or old Java apps?#

Yes. Because Replay is a visual-first tool, it doesn't care what language your backend is written in. Whether you are modernizing a 20-year-old banking portal or a messy React MVP, the process is the same. You record the screen, and Replay extracts the UI.

This is particularly useful for Legacy System Modernization. Most legacy apps have thousands of hardcoded styles. Automating dark mode token mapping in these environments is usually deemed impossible, leading teams to stick with outdated "Enterprise Grey" interfaces. Replay makes it possible to put a modern, dark-mode-ready React frontend on top of any legacy system by reverse engineering the visual outputs.

Scaling with the Replay Component Library#

As you record more of your application, Replay builds an Auto-extracted Component Library. This library becomes your source of truth. When you need to update a dark mode token globally, you do it in the library, and the Agentic Editor can propagate those changes across your entire codebase.

This level of automation is why Replay is SOC2 and HIPAA-ready. Regulated industries often have strict requirements for UI consistency and accessibility. Replay’s automated E2E test generation (supporting Playwright and Cypress) ensures that your new dark mode tokens don't just look good—they pass accessibility audits automatically.

Frequently Asked Questions#

How does Replay extract tokens from a video?#

Replay uses a proprietary visual analysis engine that tracks pixel changes and element boundaries across a timeline. By observing how a UI changes when a "theme toggle" is clicked or when system preferences change, it identifies which hex codes are linked to specific semantic roles. It then maps these to a unified design system.

Can I use Replay with my existing Figma tokens?#

Yes. Replay’s Figma Plugin allows you to import your existing brand tokens. The platform then matches the colors extracted from your video recordings to your official design tokens, ensuring that the generated code is perfectly aligned with your brand guidelines.

Does automating dark mode token mapping work with CSS-in-JS?#

Replay is framework-agnostic. While it defaults to generating high-quality React and Tailwind code, the Agentic Editor can be configured to output styled-components, Emotion, or standard CSS modules. The core logic of the visual analysis remains the same regardless of your styling implementation.
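One styling-agnostic pattern worth knowing here: emitting the extracted token pairs as CSS custom properties, which any approach (CSS-in-JS, CSS Modules, plain CSS) can consume. This sketch is illustrative and assumes a simple token-pair shape, not Replay's actual output format.

```typescript
// Sketch: render extracted light/dark token pairs as CSS custom properties.
// The TokenPair shape is a hypothetical simplification.
interface TokenPair {
  semanticName: string;
  lightValue: string;
  darkValue: string;
}

function toCssVariables(tokens: TokenPair[]): string {
  const light = tokens
    .map((t) => `  --${t.semanticName}: ${t.lightValue};`)
    .join("\n");
  const dark = tokens
    .map((t) => `  --${t.semanticName}: ${t.darkValue};`)
    .join("\n");
  // Light values on :root, dark overrides under a .dark class.
  return `:root {\n${light}\n}\n.dark {\n${dark}\n}`;
}

const css = toCssVariables([
  { semanticName: "bg-primary", lightValue: "#ffffff", darkValue: "#1a1a1a" },
]);
console.log(css);
```

Because styled-components, Emotion, and CSS Modules can all reference `var(--bg-primary)`, the same token output works regardless of which styling layer the Agentic Editor is configured to target.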

Is Replay's code production-ready?#

Absolutely. Unlike generic AI code generators that often produce "hallucinated" layouts, Replay’s code is based on the actual visual truth of your application. It includes documentation, TypeScript types, and can even generate Playwright tests to verify the UI's behavior in production.

How much time does Replay save on a full site rewrite?#

For a typical enterprise application, Replay reduces the UI development timeline by up to 90%. What usually takes a team of four developers six months can often be completed in three to four weeks using the Video-to-Code workflow.

Ready to ship faster? Try Replay free — from video to production code in minutes.
