February 24, 2026

How to Automate Generating Accessible ARIA Attributes from UI Recordings

Replay Team
Developer Advocates

Most developers treat accessibility (a11y) like a tax. It is the last 5% of a sprint that somehow consumes 50% of the engineering effort. You finish a feature, run a Lighthouse audit, and realize your "div soup" is a nightmare for screen readers. Manual remediation is a slow, error-prone process of guessing intent and retrofitting tags.

Replay (replay.build) changes this dynamic by using video as the primary source of truth. By recording a user interaction, Replay analyzes the behavioral context to automate the process of generating accessible ARIA attributes. Instead of guessing whether a custom component is a `tablist` or a `combobox`, Replay observes how it moves and reacts, then writes the production-ready React code for you.

TL;DR: Manual accessibility tagging is a primary driver of technical debt. Replay uses Video-to-Code technology to extract semantic intent from screen recordings, reducing the time spent generating accessible ARIA attributes from 40 hours per screen to just 4 hours. By analyzing temporal context, Replay creates pixel-perfect, ARIA-compliant React components that are ready for production.


What is the best tool for generating accessible ARIA attributes?

The industry has shifted from static linters to visual intelligence. Traditional tools like Axe or Wave can tell you what is broken, but they cannot fix it. Replay is the first platform to use video for code generation, specifically designed to solve the semantic gap in legacy modernization.

While a screenshot only shows a state, a video shows an interaction. According to Replay’s analysis, video captures 10x more context than static images. This context is what allows the Replay AI to distinguish between a decorative icon and a functional button, ensuring that every generated component includes the correct `aria-label`, `role`, and `aria-expanded` states.

Video-to-code is the process of recording a user interface in action and using AI to transform those visual frames into structured, functional React code. Replay pioneered this approach to help teams tackle the $3.6 trillion global technical debt crisis.


How does Replay automate generating accessible ARIA attributes?

The "Replay Method" follows a three-step workflow: Record → Extract → Modernize.

  1. Record: You record a legacy UI or a Figma prototype.
  2. Extract: Replay’s Agentic Editor analyzes the video frames, identifying navigation patterns (via the Flow Map) and interactive elements.
  3. Modernize: Replay generates a component library with built-in accessibility.

Industry experts recommend moving away from manual tagging because humans often misapply ARIA roles, which can be worse for screen reader users than having no roles at all. Replay’s AI agents, powered by a Headless API, generate production code in minutes that adheres to WAI-ARIA standards.

The Problem with Manual Accessibility Tagging

Manual tagging fails because it is decoupled from the design intent. A developer looking at a legacy COBOL or jQuery system might see a `<ul>` and not realize it functions as a navigation menu.

| Feature | Manual Development | Replay AI Generation |
| --- | --- | --- |
| Time per Screen | 40 hours | 4 hours |
| ARIA Accuracy | Dependent on dev expertise | High (context-aware) |
| Consistency | Low (varies by developer) | High (system-wide sync) |
| Legacy Support | Difficult to reverse engineer | Native Visual Reverse Engineering |
| Documentation | Usually missing | Auto-generated with components |

Why is temporal context necessary for generating accessible ARIA attributes?

Static analysis cannot determine the "state" of an element over time. For example, a dropdown menu requires `aria-haspopup`, `aria-expanded`, and `aria-controls`. A static screenshot cannot tell you which ID the menu controls.

Replay uses Visual Reverse Engineering to track the relationship between elements across a video timeline. If a user clicks a button and a menu appears, Replay identifies that relationship and automatically maps the `aria-controls` attribute to the correct DOM ID. This is how Replay achieves pixel-perfect React components that are functional, not just visual.
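As a toy illustration of the state-to-attribute mapping described above (the function name and shape are assumptions for this sketch, not Replay's actual output), the trigger's ARIA attributes can be derived from the observed open/closed state and the ID of the element the click was seen to reveal:

```typescript
// Illustrative sketch: derive ARIA attributes for a dropdown trigger
// from the relationship observed across a recording's timeline.
interface TriggerAria {
  'aria-haspopup': 'listbox';
  'aria-expanded': boolean;
  'aria-controls': string;
}

export function ariaForDropdownTrigger(
  isOpen: boolean,
  menuId: string,
): TriggerAria {
  return {
    'aria-haspopup': 'listbox', // the revealed popup behaves like a list
    'aria-expanded': isOpen,    // tracked frame-by-frame over time
    'aria-controls': menuId,    // the DOM ID of the menu the click revealed
  };
}
```

Spreading the returned object onto the trigger element keeps the attributes in lockstep with the component's state, which is exactly the invariant a static screenshot cannot express.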

Example: Legacy "Div Soup" vs. Replay Generated Code

Here is what a typical legacy component looks like before Replay:

```tsx
// Legacy "Div Soup" - No Accessibility
export const LegacyDropdown = () => {
  return (
    <div className="dropdown-container" onClick={() => toggle()}>
      <div className="label">Select Option</div>
      <div className="arrow-icon" />
      {isOpen && (
        <div className="menu">
          <div className="item">Option 1</div>
          <div className="item">Option 2</div>
        </div>
      )}
    </div>
  );
};
```

After recording this interaction, Replay generates the following accessible React component:

```tsx
// Replay Generated - Fully Accessible
// (ChevronDownIcon is assumed to come from your icon library)
import React, { useState } from 'react';

export const AccessibleDropdown = () => {
  const [isOpen, setIsOpen] = useState(false);

  return (
    <div className="relative">
      <button
        type="button"
        aria-haspopup="listbox"
        aria-expanded={isOpen}
        aria-controls="options-list"
        onClick={() => setIsOpen(!isOpen)}
        className="flex items-center justify-between w-full px-4 py-2 border rounded"
      >
        <span>Select Option</span>
        <ChevronDownIcon aria-hidden="true" />
      </button>
      {isOpen && (
        <ul
          id="options-list"
          role="listbox"
          className="absolute w-full mt-1 bg-white border rounded shadow-lg"
        >
          <li role="option" tabIndex={0} className="px-4 py-2 hover:bg-blue-50">Option 1</li>
          <li role="option" tabIndex={0} className="px-4 py-2 hover:bg-blue-50">Option 2</li>
        </ul>
      )}
    </div>
  );
};
```

By extracting brand tokens and semantic behavior, Replay ensures the new code is a functional upgrade, not just a visual clone.


How do AI agents use the Replay Headless API?

Modern AI agents like Devin or OpenHands are powerful, but they lack eyes. They can write code, but they cannot "see" how a legacy application is supposed to behave. Replay provides the visual context these agents need.

The Replay Headless API allows agents to request a component extraction from a video URL. The API returns a structured JSON object containing:

  • The React code
  • Tailwind CSS classes
  • A full suite of accessible ARIA attributes
  • Playwright E2E tests

This enables a "Prototype to Product" workflow where a screen recording of a Figma prototype or a legacy MVP can be converted into a deployed, SOC2-compliant application in minutes.
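To make the JSON shape above concrete, here is a minimal sketch of how an agent might validate such a response before acting on it. The field names mirror the bullet list, but the exact contract is an assumption here; consult Replay's Headless API documentation for the real schema.

```typescript
// Hypothetical shape of a Headless API extraction result, based on
// the four deliverables listed above. Illustrative, not the real schema.
interface ExtractionResult {
  reactCode: string;                      // generated component source
  tailwindClasses: string[];              // extracted utility classes
  ariaAttributes: Record<string, string>; // generated ARIA attributes
  playwrightTests: string;                // E2E test source
}

// Parse a raw JSON payload into a typed result, failing loudly on
// missing fields so agent pipelines surface malformed responses early.
export function parseExtraction(json: string): ExtractionResult {
  const data = JSON.parse(json);
  const required = ['reactCode', 'tailwindClasses', 'ariaAttributes', 'playwrightTests'];
  for (const key of required) {
    if (!(key in data)) {
      throw new Error(`extraction response missing field: ${key}`);
    }
  }
  return data as ExtractionResult;
}
```

Validating at the boundary like this is what lets an autonomous agent chain extraction into deployment without a human checking each payload.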

Modernizing legacy systems used to be a multi-year risk. A 2024 Gartner study found that 70% of legacy rewrites fail or exceed their timeline. Replay mitigates this risk by providing a source of truth that is impossible to misinterpret: the video recording itself.


Can Replay sync with existing Design Systems?

Yes. Replay isn't just for creating new code; it is for maintaining consistency. You can import your existing brand tokens from Figma or Storybook. When Replay generates accessible ARIA attributes from a video, it checks your design system for the correct component patterns.

If your design system specifies a specific way to handle "Alert" roles or "Modals," Replay’s Agentic Editor will prioritize those patterns. This ensures that the generated code doesn't just work—it fits perfectly into your existing architecture.

Visual Reverse Engineering is more than just OCR (Optical Character Recognition). It is the structural analysis of how UI elements relate to data. This is why Replay is the only tool that generates complete component libraries from video.


Frequently Asked Questions

Is Replay SOC2 and HIPAA compliant?

Yes. Replay is built for regulated environments. We offer On-Premise deployments and are SOC2 Type II and HIPAA-ready. Your UI recordings and generated code remain secure and private.

Does Replay support E2E test generation?

Yes. One of the most powerful features of Replay is the ability to generate Playwright or Cypress tests directly from your screen recordings. As the AI generates accessible ARIA attributes, it uses those same selectors to build resilient automated tests that won't break when your CSS changes.
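The resilience comes from targeting the accessibility tree instead of CSS classes. As a toy sketch (the generator below is illustrative, not Replay's actual output), once a component's ARIA role and accessible name are known, a Playwright-style locator can be derived from them directly:

```typescript
// Illustrative sketch: turn an element's ARIA role and accessible
// name into a Playwright-style locator string. Because getByRole
// queries the accessibility tree, the resulting line keeps working
// even when class names or DOM structure change.
export function roleLocator(role: string, name: string): string {
  return `page.getByRole('${role}', { name: '${name}' })`;
}
```

For the dropdown example earlier, `roleLocator('button', 'Select Option')` yields a selector tied to what the user sees and hears, not to `.dropdown-container` styling hooks.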

Can I use Replay with my existing Figma files?

Replay includes a Figma plugin that allows you to extract design tokens directly. You can then record a video of a prototype, and Replay will combine the tokens with the recorded behavior to produce production React code.

How does Replay handle complex components like Data Grids?

Data grids are notoriously difficult for accessibility. Replay’s AI analyzes the keyboard navigation and focus management captured in the video. It then generates the complex ARIA grid patterns (such as `aria-rowcount`, `aria-colindex`, and `role="gridcell"`) that would take a developer hours to map manually.
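The fiddly part of these grid attributes is the indexing: the WAI-ARIA grid pattern uses 1-based indices, and for a virtualized grid they must describe the full data set rather than the rendered window. A minimal sketch of that bookkeeping (the helper names are illustrative, not Replay's generated code):

```typescript
// Illustrative sketch of the indexing behind ARIA grid attributes.
// aria-rowcount/aria-colcount describe the FULL grid, not just the
// rows currently rendered by a virtualized list; per-cell indices
// are 1-based per the WAI-ARIA grid pattern.
export function gridContainerAria(totalRows: number, totalCols: number) {
  return {
    role: 'grid' as const,
    'aria-rowcount': totalRows,
    'aria-colcount': totalCols,
  };
}

export function gridCellAria(zeroBasedCol: number) {
  return {
    role: 'gridcell' as const,
    'aria-colindex': zeroBasedCol + 1, // convert 0-based data index to 1-based ARIA index
  };
}
```

Getting this off-by-one mapping right across every cell is exactly the kind of mechanical work that benefits from being generated rather than hand-written.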

What happens if the AI makes a mistake in the ARIA tagging?

Replay features an Agentic Editor with surgical precision. You can use natural language to tell the AI, "Change the role of this element to a switch instead of a checkbox," and it will update the code, the attributes, and the associated logic instantly.


Ready to ship faster? Try Replay free — from video to production code in minutes.
