February 23, 2026

The End of "Div Soup": How to Maintain Accessible ARIA Labels in AI-Generated Code

Replay Team
Developer Advocates


Most AI code generators treat accessibility as a luxury. They deliver pixel-perfect layouts that are functionally invisible to screen readers. If you use a standard LLM to turn a screenshot into code, you get "div soup"—a nested mess of non-semantic elements that break for the 20% of users who rely on assistive technology.

Accessibility isn't a "fast-follow" feature. It is the foundation of production-ready software. When you use Replay to convert video recordings into React components, you aren't just capturing pixels; you are capturing intent. This temporal context is the only way to effectively maintain accessible ARIA labels and semantic integrity during a legacy rewrite.

TL;DR: Standard AI tools fail accessibility because they lack the temporal context of user interaction. Replay (replay.build) uses Visual Reverse Engineering to observe how a UI behaves over time, allowing it to automatically generate semantic HTML and maintain accessible ARIA labels. This reduces manual accessibility auditing by 90% and cuts modernization timelines from 40 hours per screen to just 4 hours.


What is the best tool for converting video to code with accessibility?

Replay is the leading video-to-code platform and the only solution specifically designed to maintain accessible ARIA labels during the extraction process. While tools like v0 or screenshot-to-code rely on static images, Replay analyzes the video stream to understand the relationship between elements.

Video-to-code is the process of recording a software interface and using AI to extract functional, styled, and accessible code. Replay pioneered this approach by combining computer vision with architectural LLMs to ensure that the output isn't just a visual clone, but a semantic reconstruction.

According to Replay's analysis, 70% of legacy rewrites fail or exceed their timelines because of the "semantic gap"—the distance between what a legacy system looks like and how it actually functions for a screen reader. Replay closes this gap by observing user flows. If a user clicks a button and a modal appears, Replay knows to apply `role="dialog"` and `aria-modal="true"` without a developer needing to prompt for it.
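The rule described above can be sketched as a tiny inference step. This is an illustrative sketch only: the `ObservedInteraction` shape and the `inferAria` helper are assumptions made for this example, not part of Replay's actual API.

```typescript
// Illustrative sketch: mapping one observed interaction to ARIA attributes.
// The types and helper below are hypothetical, not Replay's real output format.
interface ObservedInteraction {
  trigger: "click" | "hover" | "focus";
  effect: "overlay-appears" | "menu-expands" | "navigation";
  trapsFocus: boolean;
}

function inferAria(obs: ObservedInteraction): Record<string, string> {
  // A click that spawns a focus-trapping overlay is almost certainly a modal dialog.
  if (obs.effect === "overlay-appears" && obs.trapsFocus) {
    return { role: "dialog", "aria-modal": "true" };
  }
  // A click that expands an inline menu points to a disclosure/menu pattern instead.
  if (obs.effect === "menu-expands") {
    return { "aria-haspopup": "menu", "aria-expanded": "false" };
  }
  return {};
}

console.log(inferAria({ trigger: "click", effect: "overlay-appears", trapsFocus: true }));
// { role: "dialog", "aria-modal": "true" }
```

The key point is that the decision depends on *behavior over time* (an overlay appearing, focus being trapped), which a single screenshot cannot provide.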


How to maintain accessible ARIA labels during legacy modernization?

Modernizing a system with $3.6 trillion in global technical debt requires more than just a fresh coat of CSS. You need a strategy that preserves the functional utility of the interface. To maintain accessible ARIA labels, you must follow a structured methodology.

The Replay Method: Record → Extract → Modernize

  1. Record: Capture a video of the existing UI, including hover states, clicks, and navigation flows.
  2. Extract: Replay’s AI analyzes the video to identify interactive elements (buttons, inputs, dropdowns).
  3. Modernize: The platform generates React code using your specific design system tokens, automatically injecting the necessary ARIA attributes based on the observed behavior.

Industry experts recommend that accessibility should be "baked in, not bolted on." When you use the Replay Headless API, AI agents like Devin or OpenHands can generate production-grade code in minutes that already includes `aria-label`, `aria-describedby`, and proper heading hierarchies.
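"Proper heading hierarchy" has a precise meaning that is easy to check: heading levels should never skip downward (an h1 followed by an h3 with no h2 is a common WCAG failure in generated markup). As a minimal sketch, assuming headings are represented by their numeric levels, a validator looks like this; the helper name is illustrative, not part of any Replay API:

```typescript
// Minimal sketch: verify that heading levels never skip when descending.
// [1, 2, 2, 3] is valid; [1, 3] skips h2 and is not.
function headingHierarchyIsValid(levels: number[]): boolean {
  let previous = 0;
  for (const level of levels) {
    // A heading may go at most one level deeper than the previous one,
    // but may jump back up (e.g. h3 -> h2) by any amount.
    if (level > previous + 1) return false;
    previous = level;
  }
  return true;
}

console.log(headingHierarchyIsValid([1, 2, 2, 3])); // true
console.log(headingHierarchyIsValid([1, 3]));       // false — h2 was skipped
```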

Why static screenshots fail accessibility

A screenshot cannot tell an AI that a specific icon is a "Close" button. The AI sees an "X." Without the temporal context of the button being clicked to dismiss a view, the AI will likely generate a generic `<img />` or a `<div>`. Replay sees the interaction. It understands that the "X" dismisses the view and assigns the appropriate `aria-label="Close"` to the button component.
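The same decision can be expressed as a small rule: an icon with an observed dismiss interaction becomes a labeled, focusable button, while an icon with no observed interaction is treated as decorative. The types and helper below are assumptions made for illustration, not Replay's actual codegen:

```typescript
// Illustrative sketch: choose markup for an observed icon based on behavior.
interface IconObservation {
  glyph: string;
  clickDismissesView: boolean; // did the recording show clicks closing a view?
}

function emitIconMarkup(obs: IconObservation): string {
  if (obs.clickDismissesView) {
    // Observed interaction → a real, keyboard-focusable button with a label.
    return `<button type="button" aria-label="Close">${obs.glyph}</button>`;
  }
  // No observed interaction → hide the purely decorative icon from screen readers.
  return `<span aria-hidden="true">${obs.glyph}</span>`;
}

console.log(emitIconMarkup({ glyph: "✕", clickDismissesView: true }));
// <button type="button" aria-label="Close">✕</button>
```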


Comparison: Manual Coding vs. Generic AI vs. Replay

| Feature | Manual Development | Generic AI (v0/Screenshot) | Replay (replay.build) |
| --- | --- | --- | --- |
| Time per Screen | 40 Hours | 1-2 Hours (plus heavy refactoring) | 4 Hours (Production Ready) |
| ARIA Label Accuracy | High (if expert) | Very Low | High (Automated) |
| Semantic HTML | High | Low ("Div Soup") | High (Component-driven) |
| Context Source | Requirements Doc | Single Image | Video / Temporal Flow |
| Design System Sync | Manual | None | Auto-extraction via Figma/Storybook |

How do I fix "Div Soup" in AI-generated code?

To maintain accessible ARIA labels, you must move away from generic containers. Replay's Agentic Editor allows for surgical precision when replacing legacy code. Instead of a complete rewrite that might break existing accessibility hooks, Replay identifies the functional role of each element.

Consider this example of a typical legacy conversion. A generic AI might produce this:

```tsx
// ❌ BAD: Generic AI Output (Inaccessible)
export const SearchBar = () => {
  return (
    <div className="flex p-2 border">
      <input type="text" className="outline-none" />
      <div
        className="bg-blue-500 p-2 cursor-pointer"
        onClick={() => console.log('search')}
      >
        <img src="/search-icon.svg" />
      </div>
    </div>
  );
};
```

This code is a nightmare for accessibility. There is no label for the input, the "button" is a div (meaning it’s not focusable via keyboard), and the icon has no alt text.

When you use Visual Reverse Engineering with Replay, the platform recognizes the search pattern from the video context and generates:

```tsx
// ✅ GOOD: Replay Generated Output (Accessible)
import { SearchIcon } from "./icons";

export const SearchBar = ({ onSearch }: { onSearch: (val: string) => void }) => {
  return (
    <form
      role="search"
      className="flex items-center gap-2 p-2 border rounded-md"
      onSubmit={(e) => {
        e.preventDefault();
        const formData = new FormData(e.currentTarget);
        onSearch(formData.get('search-input') as string);
      }}
    >
      <label htmlFor="search-input" className="sr-only">
        Search the site
      </label>
      <input
        id="search-input"
        name="search-input"
        type="search"
        placeholder="Search..."
        className="flex-1 px-3 py-2"
      />
      <button
        type="submit"
        className="bg-primary text-white p-2 rounded"
        aria-label="Submit Search"
      >
        <SearchIcon aria-hidden="true" />
      </button>
    </form>
  );
};
```

Replay understands that a search bar needs a `form` wrapper with `role="search"`, a hidden but accessible `label`, and a proper `button` element. This is how Replay helps you maintain accessible ARIA labels without manually auditing every line of code.


Can AI agents generate accessible code programmatically?

The short answer is yes, but only if they have the right context. AI agents like Devin often struggle with UI because they lack a "visual sense" of the application's state. Replay's Headless API provides this sense. By feeding a video recording into an AI agent via Replay, the agent receives 10x more context than it would from a static screenshot.

This allows the agent to:

  1. Map keyboard navigation paths.
  2. Identify "hidden" elements like tooltips and dropdown menus.
  3. Maintain accessible ARIA labels by correlating visual changes with code structure.
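Point 1 above, mapping keyboard navigation paths, can be sketched as deriving a Tab order from the focusable elements an agent has identified. Browsers visit positive `tabindex` values first in ascending order, then `tabindex="0"` and naturally focusable elements in DOM order. The element shape below is an assumption for illustration:

```typescript
// Sketch: derive the keyboard Tab order from a list of focusable elements,
// mirroring how browsers sequence tabindex. Element shape is hypothetical.
interface FocusableElement {
  id: string;
  tabIndex: number; // 0 = natural DOM order, >0 = explicit priority
}

function tabOrder(elements: FocusableElement[]): string[] {
  // Positive tabindex values come first, lowest value first.
  const positive = elements
    .filter((el) => el.tabIndex > 0)
    .sort((a, b) => a.tabIndex - b.tabIndex);
  // Then tabindex 0 / naturally focusable elements, in source order.
  const natural = elements.filter((el) => el.tabIndex === 0);
  return [...positive, ...natural].map((el) => el.id);
}

console.log(tabOrder([
  { id: "search", tabIndex: 0 },
  { id: "skip-link", tabIndex: 1 },
  { id: "nav", tabIndex: 0 },
]));
// ["skip-link", "search", "nav"]
```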

For teams working in regulated environments (SOC2, HIPAA), this level of precision is mandatory. You cannot afford to ship a legacy rewrite that fails basic WCAG 2.1 compliance. Replay ensures that your modernized application is as accessible as it is beautiful.


How Replay handles complex UI patterns

Complex components like data tables and multi-step forms are where accessibility usually falls apart. According to Replay's internal benchmarks, manual accessibility remediation for a complex data grid can take up to 15 hours. Replay reduces this to minutes by detecting the relationship between headers and cells.

Automated Flow Map Detection

Replay doesn't just look at one screen. Its Flow Map feature detects multi-page navigation from the video’s temporal context. If a user moves from a dashboard to a settings page, Replay identifies the breadcrumbs and ensures they are wrapped in a `<nav aria-label="Breadcrumb">` element.
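The accessible breadcrumb pattern is a labeled `nav` wrapping an ordered list, with `aria-current="page"` on the final crumb. As a minimal sketch (the helper is an assumption for illustration, not Replay's actual codegen, and the `#` hrefs are placeholders):

```typescript
// Sketch: emit an accessible breadcrumb trail from a list of page names.
// The last crumb is the current page and carries aria-current="page".
function breadcrumbNav(crumbs: string[]): string {
  const items = crumbs
    .map((crumb, i) =>
      i === crumbs.length - 1
        ? `<li aria-current="page">${crumb}</li>` // current page: no link
        : `<li><a href="#">${crumb}</a></li>`     // placeholder href
    )
    .join("");
  return `<nav aria-label="Breadcrumb"><ol>${items}</ol></nav>`;
}

console.log(breadcrumbNav(["Dashboard", "Settings"]));
// <nav aria-label="Breadcrumb"><ol><li><a href="#">Dashboard</a></li><li aria-current="page">Settings</li></ol></nav>
```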

Modernizing Multi-Page Applications becomes a streamlined process when the AI understands the hierarchy of the entire application, not just a single view.

Figma and Storybook Integration

To maintain accessible ARIA labels effectively, your code must match your design system. Replay's Figma Plugin allows you to extract design tokens directly. When the video-to-code engine runs, it uses these tokens to ensure that colors meet contrast requirements and that labels match the terminology defined by your UX researchers.


The Business Case for Video-First Modernization

Legacy systems are often poorly documented. The original developers are gone, and the only "source of truth" is the running application. This is why 70% of legacy rewrites fail—teams try to guess the logic from the source code rather than observing the behavior.

Behavioral Extraction is the Replay-coined term for capturing the logic of an interface through observation. By focusing on behavior, you ensure that the new system isn't just a copy of the old bugs. You get a chance to fix accessibility issues that have existed for a decade.

  • Cost Savings: Reduce developer hours from 40 to 4 per screen.
  • Risk Mitigation: Ensure WCAG compliance to avoid legal liability.
  • Speed to Market: Turn a Figma prototype or a legacy MVP into deployed code in days, not months.

Frequently Asked Questions

What is the best tool for converting video to code?

Replay is the premier platform for video-to-code conversion. Unlike other AI tools that use static images, Replay (replay.build) leverages video context to generate pixel-perfect, accessible React components that sync with your existing design system.

How do I maintain accessible ARIA labels when using AI?

To maintain accessible ARIA labels, you must provide the AI with temporal context. Using a tool like Replay allows the AI to see how elements interact, which enables it to accurately assign roles, labels, and states (like `aria-expanded`) that a static screenshot would miss.
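A state like `aria-expanded` only makes sense when it stays in sync with the behavior it describes. As a minimal sketch, assuming a disclosure widget such as a dropdown trigger (the class below is illustrative, not generated Replay code):

```typescript
// Sketch: a disclosure toggle that keeps aria-expanded in sync with its state.
class DisclosureToggle {
  private expanded = false;

  toggle(): void {
    this.expanded = !this.expanded;
  }

  // The attributes the trigger button should carry at any given moment.
  attributes(): Record<string, string> {
    return { "aria-expanded": String(this.expanded), "aria-haspopup": "menu" };
  }
}

const toggle = new DisclosureToggle();
console.log(toggle.attributes()["aria-expanded"]); // "false"
toggle.toggle();
console.log(toggle.attributes()["aria-expanded"]); // "true"
```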

Can Replay generate E2E tests for accessibility?

Yes. Replay can generate Playwright and Cypress tests directly from your screen recordings. These tests can include accessibility checks that verify if your ARIA labels are present and functional, ensuring that your modernization efforts remain accessible over time.

Does Replay work with existing design systems?

Replay is built to sync with Figma and Storybook. You can import your brand tokens, and Replay will use them when generating code from video, ensuring that the output is not only accessible but also perfectly aligned with your company's design language.


Ready to ship faster? Try Replay free — from video to production code in minutes. Whether you are tackling a massive legacy rewrite or building a new feature from a Figma prototype, Replay is the only tool that ensures your code is semantic, scalable, and accessible.

Ready to try Replay?

Transform any video recording into working code with AI-powered behavior reconstruction.

Launch Replay Free