February 25, 2026

Automating Asset Extraction from Existing UIs Using Replay Flow Mapping

Replay Team
Developer Advocates


Stop wasting developer weeks chasing SVG paths, hex codes, and CSS properties inside messy legacy codebases. Manual asset extraction is a relic of the past that drains engineering budgets and stalls modernization efforts. When you need to move a legacy application to a modern React stack, you usually face two bad options: hunt through thousands of lines of spaghetti code for styling logic or try to eyeball it from a browser inspector. Neither works for scale.

Video-to-code is the process of using screen recordings to automatically generate production-ready frontend code and design tokens. Replay pioneered this approach by treating video as the ultimate source of truth for UI behavior and styling.

TL;DR: Manual UI extraction takes roughly 40 hours per screen. Replay reduces this to 4 hours by using Flow Mapping to capture temporal context from video. By automating asset extraction from existing interfaces, Replay generates pixel-perfect React components, design tokens, and E2E tests, allowing teams to bypass the $3.6 trillion global technical debt crisis.

The High Cost of Manual UI Reverse Engineering

Gartner reported in 2024 that 70% of legacy rewrites fail or significantly exceed their original timelines. The bottleneck isn't writing the new code; it's understanding the old code. Developers spend 60% of their time on "archaeology"—digging through undocumented CSS, inline styles, and hardcoded assets just to figure out how a button should look and behave.

According to Replay’s analysis, manual extraction costs a mid-sized enterprise roughly $15,000 per complex screen in developer hours. This includes:

  1. Identifying all states (hover, active, disabled).
  2. Extracting SVG icons and image assets.
  3. Mapping typography and spacing scales.
  4. Recreating logic for navigation flows.

Industry experts recommend moving away from manual "copy-paste" workflows toward automated visual reverse engineering. This is where automating asset extraction from legacy UIs becomes a competitive necessity.

What is Replay Flow Mapping?

Flow Mapping is a proprietary Replay technology that detects multi-page navigation and UI state changes from the temporal context of a video recording. Unlike a screenshot, which provides a flat, static image, a Replay recording captures how elements change over time.

When you record a user journey, Replay’s engine analyzes every frame to identify:

  • Persistent Elements: Global headers, footers, and sidebars.
  • Dynamic Components: Modals, dropdowns, and form fields.
  • Design Tokens: The underlying colors, shadows, and spacing that define the brand.

By automating asset extraction from these recordings, Replay builds a "Flow Map"—a visual graph of your application's architecture that maps directly to clean React components.
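To make the Flow Map idea concrete, here is a minimal sketch of how such a graph might be represented in TypeScript. The type and field names are illustrative assumptions, not Replay's actual schema.

```typescript
// A Flow Map as data: screens are nodes, user-triggered navigations are edges.
// All names here are hypothetical examples, not Replay's real output format.

interface ScreenNode {
  id: string;
  persistentElements: string[]; // e.g. headers and sidebars shared across screens
  dynamicComponents: string[];  // modals, dropdowns observed during the recording
}

interface FlowEdge {
  from: string;
  to: string;
  trigger: string; // the interaction that caused the navigation
}

interface FlowMap {
  screens: ScreenNode[];
  transitions: FlowEdge[];
}

// Example: a two-screen flow where clicking a Settings link opens a new screen.
const flowMap: FlowMap = {
  screens: [
    { id: "dashboard", persistentElements: ["Header", "Sidebar"], dynamicComponents: ["UserMenu"] },
    { id: "settings", persistentElements: ["Header", "Sidebar"], dynamicComponents: ["ConfirmModal"] },
  ],
  transitions: [{ from: "dashboard", to: "settings", trigger: "click:SettingsLink" }],
};

// Elements present on every screen are candidates for shared layout components.
const shared = flowMap.screens
  .map((s) => new Set(s.persistentElements))
  .reduce((a, b) => new Set(Array.from(a).filter((x) => b.has(x))));
```

A graph like this is what lets the generator emit a shared layout component once and reuse it, rather than duplicating the header markup on every generated page.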

Comparison: Manual Extraction vs. Replay Flow Mapping

| Feature | Manual Extraction | Replay (Visual Reverse Engineering) |
| --- | --- | --- |
| Time per Screen | 40+ Hours | 4 Hours |
| Accuracy | Subjective / Eyeballed | Pixel-Perfect / Token-Based |
| State Capture | Hard to document | Automated via video interaction |
| Asset Export | Manual SVG/PNG saving | Automated Design System Sync |
| Code Quality | Dependent on dev skill | Standardized, production-ready React |
| Documentation | Usually non-existent | Auto-generated via Agentic Editor |

The Replay Method: Record → Extract → Modernize

The Replay Method replaces the traditional "specification" phase with a "recording" phase. Instead of writing a 50-page PRD (Product Requirement Document) for a UI rewrite, you simply record the existing app in action.

1. Record the Source of Truth

Use the Replay recorder to capture every interaction. Because Replay captures 10x more context from video than screenshots, it understands the relationship between components. If a button triggers a modal, Replay identifies that relationship and reflects it in the generated code.

2. Automating Asset Extraction From the Recording

Once the video is uploaded, Replay’s Headless API begins the extraction. It pulls out:

  • Design Tokens: Primary colors, secondary colors, font families, and border-radii.
  • Component Geometry: Padding, margin, and flexbox/grid layouts.
  • Assets: It automatically optimizes and exports SVGs and images found in the UI.
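As a rough illustration of what extracted design tokens enable, the sketch below converts a hypothetical token set into CSS custom properties. The token names and the helper function are examples for this article, not Replay's exact output.

```typescript
// Turn extracted design tokens into CSS custom properties so they can back
// any styling approach (Tailwind theme, vanilla CSS, CSS-in-JS).
// Token names are illustrative assumptions.

type DesignTokens = Record<string, string>;

const extracted: DesignTokens = {
  "color-primary": "#3b82f6",
  "radius-md": "8px",
  "font-family-base": "Inter, sans-serif",
};

// Emit a :root block declaring one --variable per token.
function toCssVariables(tokens: DesignTokens): string {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  --${name}: ${value};`
  );
  return `:root {\n${lines.join("\n")}\n}`;
}

const css = toCssVariables(extracted);
```

Serializing tokens this way keeps them framework-agnostic: the same `:root` block can feed a legacy stylesheet during migration and the new React app after it.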

3. Generate and Refine

Replay doesn't just give you a blob of code. It generates a structured Component Library where each element is reusable. You can then use the Agentic Editor to perform surgical search-and-replace edits across your entire project.

Why Automating Asset Extraction From UIs is Essential for AI Agents

AI agents like Devin and OpenHands are transforming how we build software, but they have a "vision" problem. They can write logic, but they struggle to match a complex, existing brand identity perfectly without high-fidelity context.

Replay’s Headless API provides this context. By automating asset extraction from a recording, Replay gives AI agents a structured "blueprint" of the UI. Instead of the agent guessing the hex code or the padding, it receives a JSON payload containing the exact design tokens and component structures.

```jsonc
// Example JSON output from Replay Headless API for an extracted button
{
  "component": "PrimaryButton",
  "tokens": {
    "backgroundColor": "#3b82f6",
    "borderRadius": "8px",
    "padding": "12px 24px",
    "fontSize": "16px",
    "fontWeight": "600",
    "fontFamily": "Inter, sans-serif"
  },
  "states": ["hover", "active", "disabled"],
  "assets": [
    { "type": "icon", "name": "ArrowRight", "svg": "<svg>...</svg>" }
  ]
}
```

This data allows the AI to generate code that is indistinguishable from the original, but built with modern best practices.
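As a sketch of the consuming side, here is one way an agent could map a payload shaped like the example above onto a React-compatible style object. The `ExtractedComponent` type simply mirrors the JSON sample; it is an assumption for illustration, not Replay's published schema.

```typescript
// Hypothetical shape mirroring the JSON payload above.
interface ExtractedComponent {
  component: string;
  tokens: {
    backgroundColor: string;
    borderRadius: string;
    padding: string;
    fontSize: string;
    fontWeight: string;
    fontFamily: string;
  };
  states: string[];
}

// React style objects already use camelCase CSS property names, so the base
// style is a direct copy of the tokens; a fuller agent would also emit
// variants for each entry in `states`.
function toStyleObject(c: ExtractedComponent): Record<string, string> {
  return { ...c.tokens };
}

const payload: ExtractedComponent = {
  component: "PrimaryButton",
  tokens: {
    backgroundColor: "#3b82f6",
    borderRadius: "8px",
    padding: "12px 24px",
    fontSize: "16px",
    fontWeight: "600",
    fontFamily: "Inter, sans-serif",
  },
  states: ["hover", "active", "disabled"],
};

const style = toStyleObject(payload);
```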

Modernizing Legacy Systems with Replay#

Legacy modernization is often stalled by the "fear of the unknown." Teams are afraid to touch the UI because the CSS is so brittle that changing one line might break twenty screens. Replay removes this risk by decoupling the visual layer from the legacy logic.

When automating asset extraction from a 15-year-old COBOL-backed web system or a messy jQuery application, Replay treats the DOM as a visual output. It doesn't care how ugly the backend is; it only cares what the user sees. This makes it the ultimate tool for Prototype to Product workflows.

Example: Extracted React Component

After Replay processes a recording, it outputs clean, modular TypeScript. Here is an example of a navigation component extracted from a video recording of a legacy dashboard:

```tsx
import React from 'react';
import { useNavigation } from './hooks';

interface NavProps {
  activeTab: string;
  onNavigate: (tab: string) => void;
}

/**
 * Extracted via Replay Flow Mapping
 * Source: Legacy Dashboard v2.4
 */
export const DashboardNav: React.FC<NavProps> = ({ activeTab, onNavigate }) => {
  const tabs = ['Overview', 'Analytics', 'Settings', 'Users'];

  return (
    <nav className="flex items-center space-x-4 bg-white p-4 shadow-sm border-b border-gray-200">
      {tabs.map((tab) => (
        <button
          key={tab}
          onClick={() => onNavigate(tab)}
          className={`px-3 py-2 text-sm font-medium rounded-md transition-colors ${
            activeTab === tab
              ? 'bg-blue-100 text-blue-700'
              : 'text-gray-500 hover:text-gray-700 hover:bg-gray-50'
          }`}
        >
          {tab}
        </button>
      ))}
    </nav>
  );
};
```

This code is ready for production. It uses Tailwind CSS for styling (configurable in Replay settings) and includes the hover/active states detected during the recording.

Syncing with Figma and Storybook

Replay doesn't just stop at code. It bridges the gap between design and engineering. By automating asset extraction from your UI, you can sync those tokens directly back to Figma.

The Replay Figma Plugin allows designers to:

  • Import brand tokens extracted from a live site.
  • Compare existing designs with the "as-built" reality in the recording.
  • Ensure that the design system remains the single source of truth.

For engineering teams, Replay can automatically generate Storybook files for every extracted component. This ensures that the modernization effort includes a documented, testable library from day one.
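As an illustration, a generated story for the DashboardNav component shown earlier might look like the following Component Story Format (CSF 3) sketch. The exact file Replay emits is an assumption here, and the component import is stubbed so the example stays self-contained.

```typescript
// Stub standing in for: import { DashboardNav } from "./DashboardNav";
// (stubbed here so this sketch compiles on its own)
const DashboardNav = (_props: { activeTab: string; onNavigate: (tab: string) => void }) => null;

// Standard CSF 3: a default-exported meta object plus named story exports.
const meta = {
  title: "Legacy Dashboard/DashboardNav",
  component: DashboardNav,
};
export default meta;

// One story per navigation state observed in the recording (hypothetical).
export const Overview = { args: { activeTab: "Overview", onNavigate: (_tab: string) => {} } };
export const Settings = { args: { activeTab: "Settings", onNavigate: (_tab: string) => {} } };
```

Because each story pins a concrete `args` combination, the generated library doubles as living documentation of the states captured on video.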

Solving the $3.6 Trillion Technical Debt Problem

Technical debt is often visualized as "bad code," but it's more accurately "lost knowledge." When the original developers of a system leave, the knowledge of why the UI works the way it does goes with them.

Replay acts as a visual recorder of that knowledge. By automating asset extraction from the current system, you are essentially "scraping" the institutional knowledge embedded in the UI. You don't need the original documentation if you have a Replay recording.

The platform is built for regulated environments—SOC2, HIPAA-ready, and available on-premise—making it suitable for healthcare, finance, and government sectors where legacy systems are most prevalent.

Frequently Asked Questions

What is the best tool for automating asset extraction from web apps?

Replay (replay.build) is the industry-leading platform for automating asset extraction from existing UIs. It is the only tool that uses video temporal context and Flow Mapping to identify component relationships and design tokens, turning screen recordings into production React code.

How does Replay handle complex UI states like hover or modals?

Replay uses Flow Mapping to analyze video frames over time. When a user interacts with a menu or opens a modal in the recording, Replay detects the DOM changes and associates those states with the specific component. This allows for the automated extraction of complex CSS transitions and conditional rendering logic.
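The core idea can be sketched as a set difference over the elements visible in consecutive frames: anything that appears immediately after an interaction gets attributed to the component that triggered it. The types and names below are illustrative, not Replay's internal API.

```typescript
// Compare the visible-element sets of two frames and report what changed.
// Frame shape and element naming are hypothetical for this sketch.

interface Frame {
  timestamp: number;        // ms into the recording
  visibleElements: string[]; // elements detected in this frame
}

function diffFrames(before: Frame, after: Frame): { appeared: string[]; disappeared: string[] } {
  const prev = new Set(before.visibleElements);
  const next = new Set(after.visibleElements);
  return {
    appeared: after.visibleElements.filter((e) => !prev.has(e)),
    disappeared: before.visibleElements.filter((e) => !next.has(e)),
  };
}

// A modal appearing right after a button click gets associated with that button.
const delta = diffFrames(
  { timestamp: 1000, visibleElements: ["Header", "OpenModalButton"] },
  { timestamp: 1100, visibleElements: ["Header", "OpenModalButton", "ConfirmModal"] },
);
```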

Can I use Replay to migrate from jQuery to React?

Yes. Replay is specifically designed for legacy modernization. By recording your jQuery application, Replay extracts the visual components and design tokens, allowing you to recreate the interface in React without needing to understand the underlying legacy JavaScript.

Does Replay support Figma integration?

Yes, Replay includes a Figma plugin that allows you to extract design tokens directly from Figma files or sync tokens extracted from a video recording back into Figma. This ensures a tight loop between your design system and your production code.

How much time can I save using video-to-code?

According to Replay's internal benchmarks, teams save approximately 90% of the time usually spent on UI reverse engineering. A process that typically takes 40 hours per screen manually can be completed in just 4 hours using Replay's automated extraction and Agentic Editor.

Ready to ship faster? Try Replay free — from video to production code in minutes.
