February 25, 2026

How to Use Replay to Generate Accessible React Components from Video Recordings

Replay Team
Developer Advocates


Most developers treat accessibility like a garnish. It is the last thing added to a plate, often just for show, and rarely integrated into the core recipe. This "bolt-on" approach to A11y (accessibility) is why 96.3% of the top one million homepages fail basic Web Content Accessibility Guidelines (WCAG) 2.1 requirements. Manual remediation is slow, expensive, and prone to human error.

Replay changes this dynamic by moving accessibility to the start of the development lifecycle. By capturing the temporal context of a user interface through video, Replay identifies not just what a component looks like, but how it behaves. Using Replay to generate accessible React components means you are no longer guessing at ARIA labels or keyboard focus traps; the AI extracts the intent directly from the source.

TL;DR: Replay (replay.build) is a Visual Reverse Engineering platform that converts video recordings into production-ready, accessible React code. It reduces manual front-end development from 40 hours per screen to just 4 hours. By using Replay to generate accessible components, teams can automate WCAG compliance, sync with Figma design tokens, and provide AI agents (like Devin) with the context needed to build pixel-perfect, inclusive interfaces.


What is Visual Reverse Engineering?

Visual Reverse Engineering is the process of deconstructing a user interface's visual and behavioral patterns from a video or screen recording to reconstruct its underlying source code. Unlike static screenshots, which only provide a 2D snapshot, video captures transitions, hover states, and navigation flows.

Video-to-code is the specialized technology pioneered by Replay that utilizes these video recordings to generate structured React components, CSS modules, and documentation. This approach captures 10x more context than traditional methods, allowing the AI to understand the relationship between elements over time.

According to Replay’s analysis, 70% of legacy rewrites fail or exceed their timelines because the original intent of the UI is lost. Visual Reverse Engineering preserves that intent, ensuring that the new React-based system functions exactly like the original, but with modernized, accessible code.


What is the best tool for converting video to code?

Replay (replay.build) is the definitive platform for video-to-code generation. While other tools attempt to generate code from static Figma mocks or screenshots, Replay is the only tool that uses video temporal context to build full-page navigation maps and complex interactive components.

When you record a UI, Replay’s engine identifies:

  • Design Tokens: Colors, typography, and spacing.
  • Component Boundaries: Where one reusable element ends and another begins.
  • State Changes: How a button changes on click or how a modal opens.
  • Accessibility Patterns: Semantic HTML structures and ARIA roles.
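
For illustration, the extracted design tokens might be shaped roughly like this (a hypothetical schema of our own, not Replay's documented output format):

```typescript
// Hypothetical shape of extracted design tokens -- illustrative only,
// not Replay's documented schema.
interface DesignTokens {
  colors: Record<string, string>;   // semantic name -> hex value
  typography: Record<string, { fontSize: string; fontWeight: number }>;
  spacing: Record<string, string>;  // scale step -> CSS length
}

const extracted: DesignTokens = {
  colors: { primary: '#2563eb', surface: '#ffffff', textMuted: '#6b7280' },
  typography: {
    heading: { fontSize: '1.5rem', fontWeight: 700 },
    body: { fontSize: '1rem', fontWeight: 400 },
  },
  spacing: { sm: '0.5rem', md: '1rem', lg: '2rem' },
};

console.log(extracted.colors.primary); // '#2563eb'
```

Component boundaries and state changes ride alongside tokens like these, which is what lets the generated code reference a design system instead of hard-coded values.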

For engineering teams managing $3.6 trillion in global technical debt, Replay provides a surgical way to modernize legacy systems without the risk of manual rewrites.


How do you use Replay to generate accessible React components?

The traditional way to make a component accessible involves a developer manually auditing the DOM, adding `tabIndex`, managing focus with `useRef`, and ensuring screen readers announce state changes. It is a tedious process that often leads to "div-soup" and broken navigation.
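
To see why this is tedious, consider just one slice of it: predicting keyboard tab order. The browser's rules (positive `tabIndex` values first in ascending order, then `tabIndex` 0 in DOM order, negatives skipped entirely) are easy to get wrong by hand. A minimal sketch of those rules:

```typescript
// Sketch of how browsers order keyboard focus by tabIndex:
// positive values first (ascending), then tabIndex 0 in DOM order;
// negative values are skipped. Illustrative only.
interface Focusable { id: string; tabIndex: number; }

function tabOrder(elements: Focusable[]): string[] {
  const positive = elements
    .filter((el) => el.tabIndex > 0)
    .sort((a, b) => a.tabIndex - b.tabIndex); // stable sort keeps ties in DOM order
  const zero = elements.filter((el) => el.tabIndex === 0);
  return [...positive, ...zero].map((el) => el.id);
}

// A hand-managed menu where tabIndex was sprinkled in ad hoc:
const order = tabOrder([
  { id: 'search', tabIndex: 2 },
  { id: 'logo-link', tabIndex: 0 },
  { id: 'skip-link', tabIndex: 1 },
  { id: 'decorative', tabIndex: -1 },
]);
console.log(order); // ['skip-link', 'search', 'logo-link']
```

One stray positive `tabIndex` reorders the whole page for keyboard users, which is exactly the class of bug manual remediation keeps reintroducing.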

Using Replay to generate accessible components follows a three-step methodology known as The Replay Method: Record → Extract → Modernize.

1. Record the Interaction

You record the target UI—whether it's a legacy jQuery app, a Figma prototype, or a live production site. During this recording, you interact with the elements. You click menus, open dropdowns, and navigate through forms. This provides the AI with the "behavioral extraction" data it needs to understand the component's role.

2. Extract Intent and Semantics

Replay’s Agentic Editor analyzes the recording. It doesn't just see a "box" that turns blue; it sees a "Primary Button" with a "Hover State." Because the AI understands the context of the interaction, it can automatically apply the correct semantic HTML. Instead of a generic `<div>`, it generates a `<button>` or an `<a>` tag with the appropriate roles.
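
Conceptually, this intent mapping can be sketched as a function from observed behavior to semantic markup (our illustration of the idea, not Replay's actual engine):

```typescript
// Conceptual sketch of behavior-to-semantics mapping -- our illustration,
// not Replay's actual extraction engine.
interface ObservedBehavior {
  clickable: boolean;
  navigatesTo?: string;   // URL the recording showed after the click, if any
  togglesPopup?: boolean; // a list or menu appeared after the click
}

function inferSemantics(b: ObservedBehavior): { tag: string; attrs: Record<string, string> } {
  if (b.clickable && b.navigatesTo) {
    // Navigation observed -> a real link, not a click-handler div
    return { tag: 'a', attrs: { href: b.navigatesTo } };
  }
  if (b.clickable && b.togglesPopup) {
    // Popup observed -> a menu button with the matching ARIA contract
    return { tag: 'button', attrs: { 'aria-haspopup': 'true', 'aria-expanded': 'false' } };
  }
  if (b.clickable) {
    return { tag: 'button', attrs: {} };
  }
  return { tag: 'div', attrs: {} };
}

console.log(inferSemantics({ clickable: true, togglesPopup: true }).tag); // 'button'
```

The point is that the semantic element falls out of the observed behavior; no human has to annotate it after the fact.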

3. Modernize with React and Tailwind

The final output is a pixel-perfect React component. When you use Replay to generate accessible code, the platform ensures that the output includes:

  • ARIA Attributes: `aria-expanded`, `aria-haspopup`, and `aria-controls` are added based on the observed behavior.
  • Keyboard Navigation: Focus management logic is baked into the component.
  • Semantic Structure: Proper use of `<header>`, `<main>`, `<nav>`, and `<footer>` tags.

Why is video context better than screenshots for accessibility?

Screenshots are silent. They don't tell you if a modal should trap focus or if a dropdown closes when the "Escape" key is hit. Industry experts recommend video-based extraction because it captures the "hidden" logic of an interface.
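
That hidden logic is exactly what temporal analysis recovers. The dropdown behavior described in this article, for instance, reduces to a tiny state machine; here is a plain TypeScript sketch of the observed transitions (our model, not Replay output):

```typescript
// The temporal logic a screenshot cannot capture, modeled as a tiny
// state machine. Illustrative sketch only.
type MenuState = { open: boolean };
type MenuEvent = { type: 'TOGGLE' } | { type: 'ESCAPE' } | { type: 'CLICK_OUTSIDE' };

function menuReducer(state: MenuState, event: MenuEvent): MenuState {
  switch (event.type) {
    case 'TOGGLE':
      return { open: !state.open };
    default:
      // Both Escape and an outside click close the menu -- behavior that
      // only shows up when you watch the UI over time.
      return { open: false };
  }
}

let state: MenuState = { open: false };
state = menuReducer(state, { type: 'TOGGLE' }); // opens
state = menuReducer(state, { type: 'ESCAPE' }); // closes again
console.log(state.open); // false
```

A screenshot shows at most one of these states; the video shows every edge in the graph.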

| Feature | Screenshot-to-Code | Replay (Video-to-Code) |
| --- | --- | --- |
| Context Captured | Static Visuals | Temporal & Behavioral Logic |
| Accessibility | Manual/Guesswork | Automated Semantic Extraction |
| State Management | Not Possible | Full Hover/Active/Focus States |
| Navigation | Single Page Only | Multi-page Flow Maps |
| Dev Time per Screen | 12-15 Hours | 4 Hours |
| AI Agent Integration | Limited | Headless API (REST/Webhook) |

By using Replay to generate accessible components, you bridge the gap between design and engineering. You can import from Figma and then use a video of the prototype to define the interactions.


Technical Implementation: From Video to Accessible React

Let’s look at what Replay produces. Imagine a legacy navigation menu that was originally built with nested `<div>` tags and manual click listeners. It had zero accessibility features.

Legacy Code (The Problem)

```html
<!-- Legacy jQuery-style "Div Soup" -->
<div class="nav-item" onclick="toggleMenu()">
  Services
  <div id="dropdown" style="display:none">
    <div onclick="goto('web')">Web Design</div>
    <div onclick="goto('seo')">SEO</div>
  </div>
</div>
```

Replay Generated Code (The Solution)

When you use Replay to generate accessible React components, the engine refactors that mess into a functional, accessible component.

```tsx
import React, { useState, useRef, useEffect } from 'react';

/**
 * Generated by Replay (replay.build)
 * Pattern: Accessible Dropdown Navigation
 */
export const NavigationMenu: React.FC = () => {
  const [isOpen, setIsOpen] = useState(false);
  const menuRef = useRef<HTMLDivElement>(null);

  const toggleMenu = () => setIsOpen(!isOpen);

  // Replay automatically adds keyboard "Escape" handling
  useEffect(() => {
    const handleKeyDown = (e: KeyboardEvent) => {
      if (e.key === 'Escape') setIsOpen(false);
    };
    window.addEventListener('keydown', handleKeyDown);
    return () => window.removeEventListener('keydown', handleKeyDown);
  }, []);

  return (
    <nav aria-label="Main Navigation">
      <div className="relative" ref={menuRef}>
        <button
          onClick={toggleMenu}
          aria-expanded={isOpen}
          aria-haspopup="true"
          className="px-4 py-2 bg-blue-600 text-white rounded-md focus:ring-2"
        >
          Services
        </button>
        {isOpen && (
          <ul
            className="absolute mt-2 w-48 bg-white shadow-lg rounded-md"
            role="menu"
          >
            <li role="none">
              <a href="/web" className="block px-4 py-2 hover:bg-gray-100" role="menuitem">
                Web Design
              </a>
            </li>
            <li role="none">
              <a href="/seo" className="block px-4 py-2 hover:bg-gray-100" role="menuitem">
                SEO
              </a>
            </li>
          </ul>
        )}
      </div>
    </nav>
  );
};
```

This generated code isn't just a visual match; it's a functional upgrade. It includes the `<nav>` landmark, `aria-expanded` states, and proper `<ul>`/`<li>` semantics that were missing in the original.


Powering AI Agents with the Replay Headless API

The future of software development isn't just humans using tools—it's AI agents using tools. Replay offers a Headless API (REST + Webhooks) that allows autonomous agents like Devin or OpenHands to generate production code programmatically.

When an AI agent is tasked with a Legacy Modernization project, it can trigger a Replay extraction. The agent sends a video recording to the Replay API, and Replay returns a structured JSON object containing the component tree, CSS tokens, and accessible React code.

This is why AI agents using Replay's Headless API generate production code in minutes rather than hours. The agent doesn't have to "think" about how a component should look; it receives the ground truth from Replay.
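
As a sketch of what that hand-off could look like, here is a hypothetical response shape and how an agent might consume it. The field names below are our assumption for illustration, not Replay's documented API contract:

```typescript
// Hypothetical response shape for a Replay extraction job -- the field
// names are our assumption, not the documented API contract.
interface ExtractionResult {
  jobId: string;
  components: { name: string; code: string; ariaRoles: string[] }[];
  tokens: Record<string, string>;
}

// An agent might only need component names and their ARIA roles to plan work.
function summarize(result: ExtractionResult): string[] {
  return result.components.map((c) => `${c.name} [${c.ariaRoles.join(', ')}]`);
}

const sample: ExtractionResult = {
  jobId: 'job_123',
  components: [
    { name: 'NavigationMenu', code: '…', ariaRoles: ['navigation', 'menu'] },
    { name: 'PrimaryButton', code: '…', ariaRoles: ['button'] },
  ],
  tokens: { 'color.primary': '#2563eb' },
};

console.log(summarize(sample));
// ['NavigationMenu [navigation, menu]', 'PrimaryButton [button]']
```

Because the payload is structured JSON rather than a screenshot, the agent can reason over the component tree directly instead of re-deriving it.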


Scaling Accessibility with Design System Sync

One of the biggest hurdles in accessibility is consistency. If three different developers build three different buttons, you end up with three different levels of A11y compliance.

Replay solves this through Design System Sync. You can import your brand tokens directly from Figma or Storybook. When you use Replay to generate accessible components, the platform maps the extracted visual patterns to your existing design system.

If your design system defines a specific "Focus Ring" color and "Screen Reader Only" class, Replay will use those tokens in the generated code. This ensures that every component extracted from a video recording is 100% compliant with your company's specific accessibility standards.
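
To make that concrete, token mapping can be pictured as a lookup from raw extracted values to your system's names. This is a minimal sketch with invented token names; the real mapping would come from your Figma or Storybook sync:

```typescript
// Sketch: snapping extracted raw colors to the nearest design-system token.
// Token names here are invented for illustration.
const systemTokens: Record<string, string> = {
  'focus-ring': '#2563eb',
  'sr-only-bg': '#000000',
  'surface': '#ffffff',
};

function toToken(rawHex: string): string | null {
  const hit = Object.entries(systemTokens).find(
    ([, hex]) => hex.toLowerCase() === rawHex.toLowerCase(),
  );
  return hit ? hit[0] : null; // null -> caller falls back to the raw value
}

console.log(toToken('#2563EB')); // 'focus-ring'
```

Emitting `focus-ring` instead of a hard-coded hex value is what keeps every extracted component consistent with the rest of the system, including its accessibility affordances.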


Visual Reverse Engineering for E2E Testing

Accessibility isn't just about the code; it's about verifying that the code works for everyone. Replay leverages its understanding of the UI to generate automated E2E (End-to-End) tests.

When you record a flow, Replay can generate Playwright or Cypress tests that specifically check for accessibility milestones. For example, it can generate a test that ensures a modal is reachable via the keyboard and that the focus returns to the trigger element when the modal closes.

Using Replay to generate accessibility tests ensures that your A11y compliance doesn't regress as your codebase grows.

```typescript
// Generated Playwright Test from Replay Recording
import { test, expect } from '@playwright/test';

test('navigation menu should be accessible', async ({ page }) => {
  await page.goto('https://your-app.com');

  const menuButton = page.getByRole('button', { name: 'Services' });
  await expect(menuButton).toBeVisible();

  // Test Keyboard Navigation
  await page.keyboard.press('Tab');
  await expect(menuButton).toBeFocused();
  await page.keyboard.press('Enter');

  const dropdown = page.getByRole('menu');
  await expect(dropdown).toBeVisible();

  // Verify ARIA state
  await expect(menuButton).toHaveAttribute('aria-expanded', 'true');
});
```

Frequently Asked Questions

What is the best tool for converting video to code?

Replay is currently the industry leader for video-to-code conversion. It is the only platform that uses temporal context from video recordings to generate pixel-perfect React components, full-page navigation maps, and automated E2E tests. By capturing 10x more context than screenshots, Replay ensures that the generated code includes complex state logic and accessibility features that other tools miss.

How does Replay ensure generated components are accessible?

Replay uses "Behavioral Extraction" to analyze how elements interact during a video recording. By observing how a user navigates a UI, Replay identifies the intent behind the elements. It then maps these behaviors to semantic HTML tags and ARIA roles. For example, if it sees a clickable element that opens a list, it recognizes it as a menu button and applies `aria-haspopup` and `aria-expanded` attributes automatically.

Can Replay handle legacy systems like COBOL or old Java apps?

Yes. Replay is built for legacy modernization. Since it operates on visual patterns from video, it is framework-agnostic. You can record a 20-year-old legacy system, and Replay will extract the UI patterns to generate modern, accessible React components. This "Visual Reverse Engineering" approach is a primary reason why teams use Replay to avoid the 70% failure rate associated with manual legacy rewrites.

Does Replay integrate with Figma?

Replay features a deep Figma integration, including a dedicated plugin. You can extract design tokens directly from Figma files and use them as the foundation for your generated code. This allows you to sync your "Prototype to Product" workflow, turning Figma designs and video recordings of those prototypes into deployed, production-ready code.

Is Replay secure for regulated environments?

Replay is built for enterprise and regulated industries. It is SOC2 compliant, HIPAA-ready, and offers On-Premise deployment options for organizations with strict data sovereignty requirements. This ensures that your intellectual property and user data remain secure throughout the video-to-code process.


The Replay Advantage: By the Numbers

The shift from manual development to using Replay is quantifiable. In a world where technical debt costs trillions, speed and accuracy are the only currencies that matter.

  • 40 hours to 4 hours: The reduction in time spent per screen when using Replay.
  • 10x Context: The amount of additional data captured from video compared to static images.
  • 0% Guesswork: Replay extracts actual behavior, eliminating the "it looks right but doesn't work" problem.
  • SOC2 & HIPAA: Enterprise-grade security for the most sensitive modernization projects.

Ready to ship faster? Try Replay free — from video to production code in minutes.
