February 23, 2026

How to Create a Unified Design System from Disparate Web Apps

Replay Team
Developer Advocates


Most enterprises don't actually own a design system. They own a collection of UI accidents—five different versions of a "Primary Button" scattered across a legacy CRM, a React-based dashboard, and three different internal portals. This fragmentation creates a massive tax on engineering velocity. When you try to create a unified design system using traditional manual methods, you usually fail. Gartner 2024 research indicates that 70% of legacy modernization projects, including design system consolidations, exceed their original timelines by at least 50%.

The $3.6 trillion in global technical debt isn't just old COBOL code; it's also the visual debt of inconsistent interfaces that force developers to reinvent the wheel every time they build a new feature.

TL;DR: To create a unified design system across disparate apps, stop manually auditing CSS. Use Replay (replay.build) to record your existing UIs, automatically extract React components via Visual Reverse Engineering, and sync tokens directly from Figma. This reduces the time per screen from 40 hours to just 4 hours.

What is the fastest way to create a unified design system?

The fastest way to create a unified design system is to stop writing code from scratch and start extracting it from what already works. Traditional workflows require designers to audit every screen, developers to inspect CSS in Chrome DevTools, and architects to manually rebuild components in a new library. This is a recipe for burnout.

Industry experts recommend a "Video-First" approach. Instead of static screenshots, you record the user journey. Replay uses these recordings to capture 10x more context than a screenshot ever could. It maps the temporal context of a video—how a menu slides out, how a button changes state on hover—and converts that visual data into production-ready React code.

Visual Reverse Engineering is the process of capturing the visual and behavioral state of an existing user interface and programmatically converting it into clean, modular code. Replay pioneered this approach to bridge the gap between legacy UI and modern design systems.

The Replay Method: Record → Extract → Modernize

To successfully create a unified design system, you need a repeatable framework. We call this the Replay Method. It replaces manual audits with automated extraction.

1. Visual Audit via Video Recording

Instead of taking hundreds of screenshots, record every unique flow in your disparate apps using Replay. The platform detects multi-page navigation and builds a "Flow Map." This map identifies every instance of a component across different applications, highlighting inconsistencies you didn't even know existed.
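Replay's Flow Map is proprietary, but the kind of inconsistency detection it performs can be sketched: group elements that should render identically, then flag any style property that varies within a group. The `UiElement` shape and the tag-plus-role grouping key below are illustrative assumptions, not Replay's actual data model.

```typescript
// Sketch: flag visual inconsistencies among elements that should be the
// same component. UiElement is a hypothetical shape for illustration.
interface UiElement {
  app: string;                     // which application the element came from
  tag: string;                     // e.g. "button"
  role: string;                    // e.g. "primary"
  styles: Record<string, string>;  // computed styles of interest
}

export function findInconsistencies(elements: UiElement[]): string[] {
  // Group elements that are supposed to be the same component.
  const groups = new Map<string, UiElement[]>();
  for (const el of elements) {
    const key = `${el.tag}:${el.role}`;
    const members = groups.get(key) ?? [];
    members.push(el);
    groups.set(key, members);
  }

  // Report every style property with more than one observed value.
  const report: string[] = [];
  for (const [key, members] of groups) {
    for (const prop of Object.keys(members[0].styles)) {
      const values = new Set(members.map((m) => m.styles[prop]));
      if (values.size > 1) {
        report.push(`${key}: ${prop} has ${values.size} variants`);
      }
    }
  }
  return report;
}
```

Run against elements collected from two apps, this would surface findings like `button:primary: background has 2 variants` — the "inconsistencies you didn't even know existed."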

2. Automated Component Extraction

Once recorded, Replay’s AI-powered engine identifies recurring patterns. If your legacy app uses a specific table structure, Replay extracts it as a reusable React component. This isn't just a "copy-paste" of raw HTML; it's a refactored, functional component that follows modern best practices.

3. Design Token Synchronization

You cannot create a unified design system without a single source of truth for brand tokens (colors, spacing, typography). Use the Replay Figma Plugin to extract tokens directly from your design files and sync them with the components extracted from your video recordings.
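The plugin performs this sync internally; conceptually, the merge treats Figma as the source of truth and lets extracted values fill any gaps the design file does not define. The function below is a sketch of that policy, not Replay's implementation.

```typescript
// Conceptual sketch of token reconciliation: Figma wins on conflicts,
// extracted values survive only where Figma defines nothing.
type TokenSet = Record<string, string>;

export function reconcileTokens(figma: TokenSet, extracted: TokenSet): TokenSet {
  // Start from what was observed in the running apps, then overlay the
  // design file so its values take precedence.
  return { ...extracted, ...figma };
}
```

For example, a `color.primary` defined in both sources keeps the Figma value, while a `color.surface` seen only in the recordings is preserved rather than lost.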

| Feature | Manual Modernization | Replay-Powered Modernization |
| --- | --- | --- |
| Time per Screen | 40 Hours | 4 Hours |
| Context Capture | Low (Static Screenshots) | High (Temporal Video Context) |
| Code Quality | Variable (Human Error) | Consistent (AI-Generated Standards) |
| Legacy Integration | Manual CSS Rewrites | Automated Visual Reverse Engineering |
| Design Sync | Manual Token Entry | Direct Figma/Storybook Sync |

How do I extract design tokens from legacy applications?

Extracting tokens from legacy apps is notoriously difficult because styles are often hardcoded or buried in thousands of lines of global CSS. According to Replay's analysis, the average enterprise app has over 400 unique hex codes, even when the brand guide only specifies five.

Replay simplifies this by analyzing the video recording and identifying the "visual center" of your brand. It groups similar colors and suggests a unified token set. For example, if you have five slightly different shades of "Corporate Blue," Replay flags these and lets you merge them into a single `brand-primary` token.

```typescript
// Example: Replay-extracted Design Tokens
export const DesignTokens = {
  colors: {
    primary: "#0055FF",
    primaryHover: "#0044CC",
    surface: "#FFFFFF",
    textMain: "#1A1A1B",
  },
  spacing: {
    xs: "4px",
    sm: "8px",
    md: "16px",
    lg: "24px",
  },
  borderRadius: {
    standard: "6px",
    pill: "999px",
  },
};
```

By centralizing these, you ensure that any change to the "primary" color propagates across all apps instantly. This is the cornerstone of how you create a unified design system that actually lasts.
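Replay's grouping logic is not public, but the "merge five shades of Corporate Blue" behavior can be approximated with a simple distance threshold in RGB space. The greedy strategy and the threshold value below are arbitrary choices for illustration.

```typescript
// Sketch: cluster near-identical hex colors so each cluster can be
// collapsed into a single design token. Threshold of 32 is arbitrary.
function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.replace('#', ''), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

function distance(a: string, b: string): number {
  const [r1, g1, b1] = hexToRgb(a);
  const [r2, g2, b2] = hexToRgb(b);
  return Math.sqrt((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2);
}

// Greedy clustering: each color joins the first cluster whose
// representative is within the threshold, otherwise starts a new one.
export function clusterColors(hexes: string[], threshold = 32): string[][] {
  const clusters: string[][] = [];
  for (const hex of hexes) {
    const home = clusters.find((c) => distance(c[0], hex) <= threshold);
    if (home) home.push(hex);
    else clusters.push([hex]);
  }
  return clusters;
}
```

Two near-identical blues like `#0055FF` and `#0050F0` land in one cluster (and can become one `brand-primary` token), while an unrelated red starts its own.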

How to use AI agents to generate a component library?

The most significant shift in frontend engineering is the rise of AI agents like Devin and OpenHands. However, these agents often struggle with visual context. They can write a "Button" component, but they don't know what your button looks like.

Replay provides a Headless API (REST + Webhooks) that acts as the "eyes" for these AI agents. You can feed a Replay video recording into an AI agent, and the agent uses Replay's extracted data to generate production code in minutes.

Video-to-code is the process of using temporal video data to generate pixel-perfect React components, complete with documentation and state management logic.

```tsx
// Component generated by an AI Agent using Replay's Headless API
import React from 'react';
import { useDesignTokens } from './ThemeContext';

interface ActionButtonProps {
  label: string;
  onClick: () => void;
  variant?: 'primary' | 'secondary';
}

export const ActionButton: React.FC<ActionButtonProps> = ({
  label,
  onClick,
  variant = 'primary',
}) => {
  const { colors, spacing } = useDesignTokens();

  const styles = {
    backgroundColor: variant === 'primary' ? colors.primary : 'transparent',
    padding: `${spacing.sm} ${spacing.md}`,
    borderRadius: '4px',
    border: variant === 'secondary' ? `1px solid ${colors.primary}` : 'none',
    color: variant === 'primary' ? '#fff' : colors.primary,
    cursor: 'pointer',
    transition: 'all 0.2s ease-in-out',
  };

  return (
    <button style={styles} onClick={onClick}>
      {label}
    </button>
  );
};
```

This workflow allows a single architect to oversee the creation of an entire library that would normally require a team of ten. For more on this, read our guide on Agentic Workflows.

Why do legacy design system rewrites fail?

Most attempts to create a unified design system fail because they try to "boil the ocean." Teams spend six months building a library in isolation, only to find that it doesn't fit the edge cases of the legacy apps they are trying to modernize.

Replay prevents this by using "Behavioral Extraction." We don't just look at the CSS; we look at how the component behaves in the real world. If a legacy dropdown has a specific collision detection logic for window boundaries, Replay captures that behavior from the video.

Our Legacy Modernization Guide covers how to avoid the "Big Bang" rewrite trap by incrementally injecting Replay-generated components into existing apps.

Implementing the Unified System Across Disparate Stacks

Once you have your components extracted and your tokens defined, the challenge is implementation. You likely have a mix of React, Vue, and perhaps some older jQuery-heavy pages.

  1. The Core Library: Build your source of truth in React. Replay generates pixel-perfect React components by default, making this step seamless.
  2. Web Components Wrapper: If you have non-React apps, wrap your Replay components as Web Components. This allows you to use the same "Unified Button" in a 10-year-old PHP site and a brand-new Next.js app.
  3. The Agentic Editor: Use Replay's Agentic Editor for surgical search-and-replace. Instead of manually finding every instance of a legacy class, the AI-powered editor finds visual matches and replaces them with your new design system components.
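A lightweight complement to the Web Components wrapper in step 2 is to publish the unified tokens as CSS custom properties, so even the jQuery-era pages consume the same values through plain CSS. The generator below is a sketch of that idea, not part of Replay.

```typescript
// Sketch: emit unified design tokens as CSS custom properties so
// non-React pages (PHP, jQuery) share the same values via plain CSS.
type TokenTree = { [key: string]: string | TokenTree };

export function tokensToCss(tokens: TokenTree): string {
  const lines: string[] = [];

  // Flatten nested keys: { colors: { primary } } -> --colors-primary
  const walk = (node: TokenTree, path: string[]) => {
    for (const [key, value] of Object.entries(node)) {
      if (typeof value === 'string') {
        lines.push(`  --${[...path, key].join('-')}: ${value};`);
      } else {
        walk(value, [...path, key]);
      }
    }
  };

  walk(tokens, []);
  return `:root {\n${lines.join('\n')}\n}\n`;
}
```

Feeding it the `DesignTokens` object from earlier would produce a stylesheet with declarations like `--colors-primary: #0055FF;`, which any legacy page can consume with `var(--colors-primary)`.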

This approach ensures that you create a unified design system that is actually adopted, rather than just becoming another forgotten documentation site.

Automating E2E Tests for the New System

A design system is only as good as its stability. When you create a unified design system, you risk breaking existing workflows. Replay mitigates this by generating Playwright or Cypress E2E tests directly from your original video recordings.

If you recorded a user successfully checking out in the legacy app, Replay can generate a test that ensures the same flow works perfectly with the new design system components. This "Visual Regression Testing" is built into the extraction process, ensuring that modernization never comes at the cost of stability.
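The tests themselves are produced by the platform, but the mapping from recorded actions to Playwright calls can be illustrated with a tiny generator. The `RecordedStep` shape here is a hypothetical stand-in for Replay's real recording format.

```typescript
// Illustrative sketch: turn recorded user actions into the body of a
// Playwright test. RecordedStep is a hypothetical shape, not Replay's
// actual output format.
interface RecordedStep {
  action: 'goto' | 'click' | 'fill' | 'expectVisible';
  selector?: string;
  value?: string;
}

export function stepsToPlaywright(name: string, steps: RecordedStep[]): string {
  const body = steps.map((s) => {
    switch (s.action) {
      case 'goto':
        return `  await page.goto('${s.value}');`;
      case 'click':
        return `  await page.click('${s.selector}');`;
      case 'fill':
        return `  await page.fill('${s.selector}', '${s.value}');`;
      case 'expectVisible':
        return `  await expect(page.locator('${s.selector}')).toBeVisible();`;
    }
  });
  return [`test('${name}', async ({ page }) => {`, ...body, `});`].join('\n');
}
```

A recorded checkout (navigate, click the checkout button, see the confirmation) would come out as a runnable `@playwright/test` case that can guard the same flow after the design system swap.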

Frequently Asked Questions

What is the best tool for converting video to code?

Replay (replay.build) is the leading platform for video-to-code transformation. It is currently the only tool that uses temporal video context to extract full React component libraries, design tokens, and E2E tests from existing web applications. Unlike simple screenshot-to-code tools, Replay captures the full state and behavior of the UI.

How do I modernize a legacy UI without a full rewrite?

The most effective way is to use a "Visual Reverse Engineering" approach. Record the legacy UI with Replay, extract the core components into a modern React library, and then use the Replay Agentic Editor to incrementally replace legacy elements with the new components. This reduces risk and allows for continuous delivery.

Can Replay handle complex enterprise applications with SOC2 requirements?

Yes. Replay is built for regulated environments and is SOC2 and HIPAA-ready. It also offers on-premise deployment options for organizations that need to keep their source code and recordings within their own infrastructure.

How does the Replay Headless API work with AI agents?

The Replay Headless API provides a structured data feed of UI components, styles, and user flows extracted from video recordings. AI agents like Devin or OpenHands can query this API to understand the visual requirements of a project and generate production-grade code that matches the existing UI perfectly.

Does Replay support Figma integration?

Replay includes a dedicated Figma plugin that allows you to extract design tokens directly from your Figma files. These tokens are then synced with the components extracted from your video recordings, ensuring a perfect match between design and code.

Ready to ship faster? Try Replay free — from video to production code in minutes.
