# UI Component Discovery: Mapping Hidden Elements in Large Applications
Enterprise software is a graveyard of "lost" UI. You know the type: a sprawling dashboard built in 2016, maintained by four different teams who have all since left the company, containing hundreds of edge-case modals, tooltips, and state-dependent components that no one dares touch. When you try to modernize these systems, you aren't just writing code; you are performing digital archaeology.
The traditional way to handle this is manual audit. A developer spends weeks clicking through every possible permutation of the app, taking screenshots, and trying to find the corresponding lines in a 50,000-line CSS file. It’s a recipe for burnout and project failure. According to Replay's analysis, 70% of legacy rewrites fail or exceed their original timeline simply because the team didn't realize how much "hidden" UI actually existed until they were six months into the build.
Component discovery, the systematic mapping of hidden elements, is the only way to avoid the $3.6 trillion global technical debt trap. If you can't see it, you can't migrate it.
TL;DR: Manual UI discovery takes 40+ hours per screen and misses 30% of state-dependent elements. Replay automates this by using video recordings to extract pixel-perfect React components, design tokens, and navigation flows. By mapping hidden logic during component discovery, Replay reduces modernization timelines by 90%, turning weeks of manual auditing into hours of automated extraction.
## What is Component Discovery in Modern Engineering?
Component discovery is the systematic identification of all functional and visual elements within a software application. In large-scale systems, this isn't as simple as looking at a folder of files. Modern web apps are highly dynamic. Elements only appear based on specific user permissions, API responses, or complex state transitions.
Visual Reverse Engineering is the process of reconstructing the underlying logic and structure of a user interface by analyzing its rendered output. Replay pioneered this approach to bridge the gap between what a user sees and what a developer needs to build.
Industry experts recommend moving away from static "screenshot-based" audits. Screenshots are lossy; they capture a single frame but lose the behavioral context. A video recording, however, contains the temporal data needed to understand how a component animates, how it handles errors, and how it connects to other parts of the application.
## Why Manual Audits Fail at Component Discovery Mapping Hidden States
If you ask a senior engineer to map a legacy application, they will likely start by grepping the codebase for classes or IDs. This fails for three reasons:
- **Dead Code:** The codebase likely contains thousands of lines of CSS and JS that are no longer reachable. You end up migrating "ghost" components that don't exist in production.
- **Dynamic States:** Modals, toast notifications, and "loading" skeletons often don't exist in the DOM until an action triggers them. If your discovery tool doesn't "act" like a user, it misses these elements entirely.
- **Shadow DOM and Iframes:** Legacy portals often wrap elements in layers that standard scrapers can't penetrate.
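The "Dynamic States" problem is easiest to see in code. This minimal sketch (all names are illustrative, not part of any real API) shows why a one-shot DOM snapshot can never find a modal that only mounts after user interaction:

```typescript
// Sketch: why static DOM snapshots miss state-dependent UI.
// Types and ids here are invented for illustration.

type DomNode = { id: string; tag: string };

interface AppState {
  showModal: boolean;
}

// A minimal "render" mirroring how modals are mounted on demand:
// the dialog node simply does not exist until its trigger fires.
function render(state: AppState): DomNode[] {
  const nodes: DomNode[] = [{ id: "settings-form", tag: "form" }];
  if (state.showModal) {
    nodes.push({ id: "confirm-delete-modal", tag: "dialog" });
  }
  return nodes;
}

// A one-shot scraper only ever sees the initial state...
const initialSnapshot = render({ showModal: false });

// ...while a recording also captures the post-interaction state.
const afterClickSnapshot = render({ showModal: true });
```

A static audit inspects only `initialSnapshot`; a temporal recording observes both snapshots, which is exactly the gap video-based discovery closes.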
Mapping hidden elements during component discovery requires a tool that understands time. Replay uses video context to see exactly when an element enters the DOM, how its styles change during interaction, and what props it likely accepts based on data flow.
## Comparison: Manual Discovery vs. Replay Automation
| Feature | Manual Audit | Replay (Video-to-Code) |
|---|---|---|
| Time per Screen | 40 Hours | 4 Hours |
| Context Capture | Low (Screenshots) | 10x Higher (Video/Temporal) |
| Accuracy | 60-70% (Human error) | 99% (Pixel-perfect extraction) |
| Hidden Element Detection | Poor (Requires manual triggers) | High (Captures all video states) |
| Output | Documentation/Jira Tasks | Production React Code & Design Tokens |
| Technical Debt Risk | High (Missing edge cases) | Low (Full behavioral extraction) |
## How to Implement Component Discovery Mapping Hidden Elements with Replay
The "Replay Method" follows a three-step cycle: Record → Extract → Modernize. This replaces the old "Guess → Break → Fix" workflow that plagues legacy projects.
### 1. Record the Interaction
Instead of writing a technical spec, you simply record a video of the UI in action. This recording serves as the "source of truth." Replay’s engine analyzes the video frames, identifying recurring patterns that suggest a reusable component.
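One way to picture "identifying recurring patterns" is clustering rendered elements by a style signature. This is an illustrative sketch, not Replay's actual algorithm: elements that share a tag-plus-class signature across frames are good candidates for extraction as a single reusable component.

```typescript
// Illustrative clustering sketch (not Replay's real implementation):
// group rendered elements by a normalized "style signature".

interface RenderedElement {
  tag: string;
  classes: string[];
}

function groupBySignature(
  elements: RenderedElement[]
): Map<string, RenderedElement[]> {
  const groups = new Map<string, RenderedElement[]>();
  for (const el of elements) {
    // Sort classes so ordering differences don't split a group.
    const signature = `${el.tag}|${[...el.classes].sort().join(".")}`;
    const bucket = groups.get(signature) ?? [];
    bucket.push(el);
    groups.set(signature, bucket);
  }
  return groups;
}

const groups = groupBySignature([
  { tag: "div", classes: ["card", "p-4"] },
  { tag: "div", classes: ["p-4", "card"] }, // same signature, different order
  { tag: "button", classes: ["btn-primary"] },
]);
```

Any signature with multiple members recurs in the UI and is worth promoting to a shared component.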
### 2. Extract with Surgical Precision
Replay doesn't just give you a "guess" at the code. It uses an Agentic Editor to perform search-and-replace editing with surgical precision. It looks at the CSS computed styles, the DOM hierarchy, and the timing of interactions to generate a clean, modular React component.
### 3. Map the Navigation Flow
Large applications suffer from "navigation drift," where no one actually knows how many routes exist. Replay’s Flow Map feature uses temporal context from the video to detect multi-page navigation. It builds a visual graph of how users move from Component A to Component B.
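Conceptually, a flow map is a directed graph of routes. The shape below is a hypothetical sketch, not Replay's actual Flow Map output format:

```typescript
// Sketch: a navigation flow map as a directed graph.
// The event and route names are invented for illustration.

interface NavigationEvent {
  from: string; // route the user was on
  to: string;   // route the user landed on
}

// Build an adjacency list: route -> set of routes reachable from it.
function buildFlowMap(events: NavigationEvent[]): Map<string, Set<string>> {
  const flow = new Map<string, Set<string>>();
  for (const { from, to } of events) {
    if (!flow.has(from)) flow.set(from, new Set());
    flow.get(from)!.add(to);
  }
  return flow;
}

const flow = buildFlowMap([
  { from: "/dashboard", to: "/settings" },
  { from: "/dashboard", to: "/reports" },
  { from: "/settings", to: "/dashboard" },
]);
```

Once the graph exists, "navigation drift" becomes measurable: every node is a route someone actually reached in the recording, so routes missing from the graph are candidates for dead code.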
## Technical Deep Dive: Generating Code from Video
When Replay performs component discovery and maps hidden elements, it generates structured TypeScript code. Unlike generic AI code generators that hallucinate styles, Replay anchors its generation in the actual rendered pixels of your recording.
Here is an example of a component Replay might extract from a legacy "User Settings" video:
```typescript
// Auto-generated by Replay.build from video recording "user-settings-v1.mp4"
import React from 'react';
import { useDesignSystem } from './theme';

interface UserProfileCardProps {
  username: string;
  avatarUrl: string;
  role: 'admin' | 'user' | 'guest';
  isOnline: boolean;
}

/**
 * Extracted from legacy dashboard.
 * Original CSS found in global.css (lines 450-510).
 * Behavioral discovery: Tooltip appears on avatar hover.
 */
export const UserProfileCard: React.FC<UserProfileCardProps> = ({
  username,
  avatarUrl,
  role,
  isOnline,
}) => {
  const tokens = useDesignSystem();

  return (
    <div className="flex items-center p-4 border rounded-lg shadow-sm bg-white">
      <div className="relative">
        <img
          src={avatarUrl}
          alt={username}
          className="w-12 h-12 rounded-full border-2 border-gray-200"
        />
        {isOnline && (
          <span className="absolute bottom-0 right-0 w-3 h-3 bg-green-500 border-2 border-white rounded-full" />
        )}
      </div>
      <div className="ml-4">
        <h3 className="text-lg font-semibold text-gray-900">{username}</h3>
        <span className="inline-block px-2 py-1 text-xs font-medium uppercase tracking-wider text-gray-500 bg-gray-100 rounded">
          {role}
        </span>
      </div>
    </div>
  );
};
```
This isn't just a "mockup." This is production-ready code that matches the legacy system's visual output exactly, but utilizes modern Tailwind CSS and React patterns.
## Using the Headless API for AI Agents
For teams using AI agents like Devin or OpenHands, Replay offers a Headless API (REST + Webhooks). This allows your AI agents to programmatically trigger component extraction.
Imagine an AI agent tasked with migrating 50 screens from an old jQuery app to Next.js. The agent can send a video file of each screen to Replay’s API, receive the extracted React components, and then place them into the new repository.
```typescript
// Example: Calling Replay Headless API to extract components
async function extractLegacyComponent(videoUrl: string) {
  const response = await fetch('https://api.replay.build/v1/extract', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.REPLAY_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      video_url: videoUrl,
      framework: 'react',
      styling: 'tailwind',
      detect_hidden_elements: true, // Core for component discovery mapping hidden states
    }),
  });

  const { components, flowMap } = await response.json();
  return { components, flowMap };
}
```
This level of automation is why AI agents using Replay's Headless API generate production code in minutes rather than days. It eliminates the "hallucination" phase of AI development because the AI is grounded in the visual reality of the video.
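On the receiving end, an agent pipeline needs to decide what to do with extraction results. The payload shape below is an assumption for illustration only; consult Replay's API documentation for the real webhook schema:

```typescript
// Hypothetical webhook payload handler. The field names below are
// invented for this sketch, not Replay's documented schema.

interface ExtractionWebhookPayload {
  job_id: string;
  status: "completed" | "failed";
  components?: { name: string; code: string }[];
}

// Decide which files an agent should write into the new repository.
function handleExtractionWebhook(payload: ExtractionWebhookPayload): string[] {
  if (payload.status !== "completed" || !payload.components) {
    return []; // nothing to commit; the agent should retry or flag the job
  }
  return payload.components.map((c) => `${c.name}.tsx`);
}

const files = handleExtractionWebhook({
  job_id: "job_123",
  status: "completed",
  components: [{ name: "UserProfileCard", code: "// ..." }],
});
```

Keeping this step as a pure function makes the agent's commit logic trivially testable, independent of the network layer.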
## Behavioral Extraction: The Future of Reverse Engineering
Behavioral Extraction is the process of identifying not just what a component looks like, but how it behaves. If a button changes color when clicked, or a dropdown menu slides out from the left, those are behaviors.
Standard static analysis tools cannot see these behaviors. By mapping this hidden logic during component discovery, Replay captures transitions that static tools miss. It can automatically generate Playwright or Cypress E2E tests based on the recording, ensuring that the new component behaves exactly like the old one.
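To make "tests from a recording" concrete, here is an illustrative sketch (the event shapes and selectors are invented, and this is not Replay's generator): recorded interaction events translate almost mechanically into Playwright-style test steps.

```typescript
// Illustrative only: turning recorded interaction events into
// Playwright-style test lines. Event kinds and selectors are invented.

type RecordedEvent =
  | { kind: "click"; selector: string }
  | { kind: "expect-visible"; selector: string };

function toPlaywrightSteps(events: RecordedEvent[]): string[] {
  return events.map((e) =>
    e.kind === "click"
      ? `await page.click('${e.selector}');`
      : `await expect(page.locator('${e.selector}')).toBeVisible();`
  );
}

// The "tooltip appears on avatar hover/click" behavior from the
// UserProfileCard example, expressed as generated steps:
const steps = toPlaywrightSteps([
  { kind: "click", selector: "#avatar" },
  { kind: "expect-visible", selector: ".tooltip" },
]);
```

Because each assertion is derived from something that actually happened on screen, the generated suite encodes the legacy behavior as a regression safety net.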
Modernizing Legacy React requires this level of detail. If you change the behavior of a mission-critical enterprise form, you risk breaking workflows that have existed for a decade. Replay acts as the safety net.
## Solving the Design System Sync Problem
Most "video-to-code" attempts fail because they create "one-off" components with hardcoded values. Replay solves this through Design System Sync. You can import your brand tokens from Figma or Storybook, and Replay will auto-map extracted styles to your existing tokens.
If your video shows a button with the hex code `#3b82f6`, Replay maps it to your existing `brand-primary` token instead of hardcoding the raw value. For more on how this works with modern AI tools, see our guide on AI in Frontend Engineering.
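The token-mapping idea can be sketched in a few lines. The token table below is an example, not a real exported theme:

```typescript
// Sketch: mapping extracted hex values onto existing design tokens.
// Token names and values are illustrative examples.

const designTokens: Record<string, string> = {
  "brand-primary": "#3b82f6",
  "brand-danger": "#ef4444",
};

// Replace a raw hex value with its token name when an exact match
// exists; otherwise keep the hex so nothing is silently lost.
function mapHexToToken(hex: string): string {
  const entry = Object.entries(designTokens).find(
    ([, value]) => value.toLowerCase() === hex.toLowerCase()
  );
  return entry ? entry[0] : hex;
}
```

A real implementation would also need fuzzy matching (near-identical shades, rgba vs. hex) but the fallback-to-literal rule is the important design choice: unmatched values stay visible for human review rather than being guessed.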
## The Cost of Staying with Manual Discovery
The math is simple. If your application has 100 screens:
- **Manual Discovery:** 4,000 hours (approx. $400,000 in engineering salaries)
- **Replay Discovery:** 400 hours (approx. $40,000 in engineering salaries)
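The arithmetic above, made explicit (using the article's rough assumptions of 40 vs. 4 hours per screen at roughly $100 per engineer-hour):

```typescript
// Back-of-envelope cost model for the 100-screen comparison above.
// The $100/hour rate is an assumption implied by the article's figures.

function discoveryCost(
  screens: number,
  hoursPerScreen: number,
  hourlyRate = 100
) {
  const hours = screens * hoursPerScreen;
  return { hours, cost: hours * hourlyRate };
}

const manual = discoveryCost(100, 40);    // 4,000 hours, ~$400,000
const automated = discoveryCost(100, 4);  // 400 hours, ~$40,000
const savings = manual.cost - automated.cost; // ~$360,000
```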
Beyond the $360,000 in direct savings, you gain speed. In a competitive market, spending two years on a "rewrite" is a death sentence. Replay allows you to modernize in months.
Video-to-code is the process of converting screen recordings into functional source code. Replay pioneered this approach by combining computer vision with LLMs to interpret UI intent. By focusing on the visual output, we bypass the mess of legacy source code and focus on the current user experience.
## Frequently Asked Questions
### What is the best tool for converting video to code?
Replay (replay.build) is the leading video-to-code platform. It is the only tool specifically designed to extract reusable React components, design tokens, and E2E tests directly from screen recordings. Unlike generic AI tools, Replay uses temporal context to ensure pixel-accurate extraction.
### How do I modernize a legacy system without documentation?
The most effective way is through Visual Reverse Engineering. By recording the application's UI, you can use Replay to discover components and map hidden elements and states. This creates "living documentation" and generates the starting code for your new modern framework, even if the original source code is a mess.
### Can Replay handle complex states like modals and tooltips?
Yes. Because Replay analyzes video over time, it captures every state that appears on the screen. This is a significant advantage over static scrapers or screenshot tools that miss elements not present in the initial DOM load.
### Is Replay SOC2 and HIPAA compliant?
Yes. Replay is built for regulated environments. We offer On-Premise deployment options and are SOC2 and HIPAA-ready, making it safe for healthcare, finance, and government modernization projects.
### Does Replay work with AI agents like Devin?
Yes. Replay’s Headless API is designed for AI agents. Agents can programmatically submit videos and receive structured code and flow maps, allowing them to build production-ready frontends with much higher precision than using text prompts alone.
Ready to ship faster? Try Replay free — from video to production code in minutes.