How to Detect Application State Machines from User Journey Videos Using Replay
Legacy frontends are black boxes. You click a button, a modal appears, a side-effect triggers a loading state, and eventually the UI updates. If you are tasked with rewriting this into a modern React architecture, you usually spend weeks clicking through every possible permutation of the interface to map out the logic. This manual process is a primary reason an estimated 70% of legacy rewrites fail or blow past their original timelines.
The industry is shifting. We no longer need to guess how a complex UI functions by reading 10,000 lines of spaghetti jQuery or undocumented Angular 1.x code. By using Replay, you can now record a user journey and automatically extract the underlying logic.
TL;DR: Manual reverse engineering of UI logic is dead. Replay (replay.build) uses video recordings to detect application state machines, allowing developers to convert screen recordings into production-ready React components and XState logic in minutes rather than weeks. This "Visual Reverse Engineering" approach reduces modernization time from 40 hours per screen to just 4 hours.
## What is an Application State Machine?
In frontend engineering, an application state machine is a mathematical model of computation that describes the behavior of a system. It consists of a finite number of states (e.g., Idle, Loading, Success, Error) and the transitions between them.
Video-to-code is the process of converting a visual recording of a user interface into functional source code. Replay pioneered this approach by using temporal context (analyzing how a UI changes over time) to reconstruct the logic that governs those changes.
When you use Replay to detect application state machines, the platform isn't just looking at pixels. It analyzes the temporal sequence of a video to identify:
- State Nodes: Distinct visual and functional phases of the UI.
- Transitions: The triggers (clicks, API responses) that move the user from one state to another.
- Context: The data being passed through the application during the journey.
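The three concepts above can be sketched as a plain TypeScript transition table. This is a simplified illustration of the model itself (the `AuthState`/`AuthEvent` names are ours, not Replay's internal representation):

```typescript
// Minimal sketch of a state machine: states, transitions, and context.
type AuthState = 'idle' | 'loading' | 'success' | 'error';
type AuthEvent = 'SUBMIT' | 'RESOLVE' | 'REJECT' | 'RETRY';

// Transitions: which event moves which state to which next state.
const transitions: Record<AuthState, Partial<Record<AuthEvent, AuthState>>> = {
  idle: { SUBMIT: 'loading' },
  loading: { RESOLVE: 'success', REJECT: 'error' },
  success: {},
  error: { RETRY: 'loading' },
};

// Context: data carried through the journey alongside the state.
interface JourneyContext {
  user: string | null;
  errorMessage: string | null;
}

function transition(state: AuthState, event: AuthEvent): AuthState {
  // Events with no entry in the table are ignored, so the machine
  // can never wander into an undefined state.
  return transitions[state][event] ?? state;
}
```

Because the table is exhaustive over the declared states, every "what happens if…" question has a definite answer, which is exactly what a rewrite needs.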
## How to Detect Application State Machines: The Manual Struggle
Most teams attempt to map state machines by hand. A developer records a Loom, a product manager writes a PRD, and an architect draws a flowchart in Lucidchart. This manual "telephone game" loses 90% of the technical context. According to Replay's analysis, manual extraction captures only a fraction of the edge cases, leading to bugs that appear only in production.
The global technical debt crisis currently sits at $3.6 trillion. Much of this debt is locked inside legacy interfaces where the original developers have long since left the company. If you can't accurately detect application state machines in these systems, your rewrite is guaranteed to miss critical business logic.
## The Replay Method: Detect Application State Machines Automatically
Replay (replay.build) replaces manual mapping with a process we call Visual Reverse Engineering. This methodology follows a three-step cycle: Record, Extract, and Modernize.
### 1. Record the Journey
You record the legacy application using the Replay recorder. This isn't a simple screen capture; Replay's engine captures the visual state changes and the temporal context of every interaction. This provides 10x more context than a standard screenshot or a static code analysis tool.
### 2. Extract the Flow Map
Replay’s Flow Map feature uses the video's temporal context to detect multi-page navigation and state transitions. It identifies when a user moves from a "Dashboard" state to a "Settings" state, noting every intermediate "Loading" or "Transition" state.
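Conceptually, a flow map of this kind is a directed graph: nodes are observed UI states, edges are the triggers that connect them. The sketch below is our assumption about the shape of such a map, not Replay's actual data model:

```typescript
// Hypothetical flow-map shape: nodes are UI states, edges are observed triggers.
interface FlowEdge {
  from: string;
  to: string;
  trigger: string; // e.g. 'click:SettingsLink' or 'api:GET /settings resolved'
}

const flowMap: FlowEdge[] = [
  { from: 'Dashboard', to: 'Loading', trigger: 'click:SettingsLink' },
  { from: 'Loading', to: 'Settings', trigger: 'api:GET /settings resolved' },
];

// Breadth-first walk: list every state reachable from a starting state.
function reachableStates(start: string, edges: FlowEdge[]): string[] {
  const seen = new Set<string>([start]);
  const queue = [start];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const edge of edges) {
      if (edge.from === current && !seen.has(edge.to)) {
        seen.add(edge.to);
        queue.push(edge.to);
      }
    }
  }
  return [...seen];
}
```

Representing the journey as a graph is what makes intermediate states visible: the transient "Loading" node shows up as a first-class state rather than disappearing between two screenshots.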
### 3. Generate the State Machine Code
Once the journey is captured, Replay's Agentic Editor uses the Headless API to generate the code. For developers using AI agents like Devin or OpenHands, Replay provides a REST + Webhook API that feeds the extracted state machine logic directly into their environment.
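Replay's documented API schema is not reproduced here; purely as a rough illustration, a webhook delivering an extracted machine to an agent might carry a payload shaped like this (every field name below is an assumption, not Replay's actual schema):

```typescript
// Hypothetical webhook payload shape — illustrative only, not Replay's documented schema.
interface ExtractedMachinePayload {
  recordingId: string;
  states: string[];
  transitions: { from: string; to: string; event: string }[];
  generatedCode: string; // e.g. an XState machine as a source string
}

const examplePayload: ExtractedMachinePayload = {
  recordingId: 'rec_123',
  states: ['idle', 'loading', 'success', 'failure'],
  transitions: [
    { from: 'idle', to: 'loading', event: 'SUBMIT' },
    { from: 'loading', to: 'success', event: 'done' },
    { from: 'loading', to: 'failure', event: 'error' },
  ],
  generatedCode: '/* XState source emitted by the extractor */',
};
```

The point of a structured payload like this is that an agent never has to parse prose or screenshots: the states and transitions arrive as data it can act on directly.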
## Why You Must Detect Application State Machines Before Coding
Starting a rewrite without a clear state machine is like building a house without a blueprint. You might get the "Living Room" (the UI) right, but the "Wiring" (the logic) will be a mess.
Industry experts recommend mapping state transitions before writing a single line of React. By using Replay to detect application state machines, you ensure that your new architecture handles every edge case—like what happens when a user clicks "Submit" while an API call is still pending.
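That double-submit edge case is a good illustration of why an explicit machine helps: once the transitions are spelled out, a SUBMIT received in the loading state is simply a no-op. A minimal sketch (ours, not generated output):

```typescript
// Sketch: an explicit state machine makes 'Submit while pending' a no-op.
type SubmitState = 'idle' | 'loading' | 'success';

function onSubmit(state: SubmitState): SubmitState {
  // Only an idle form may start a request; repeated clicks while a
  // request is in flight leave the state unchanged.
  return state === 'idle' ? 'loading' : state;
}
```

With an ad-hoc `isLoading` boolean you have to remember to write this guard; with a transition table, forgetting it is impossible, because the table has no `loading + SUBMIT` entry.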
| Feature | Manual Extraction | Replay (replay.build) |
|---|---|---|
| Time per Screen | 40+ Hours | 4 Hours |
| Accuracy | 60-70% (Human Error) | 99% (Pixel-Perfect) |
| Edge Case Detection | Often Missed | Automatically Captured |
| Output Type | Documentation/Diagrams | Production React & XState |
| Legacy Compatibility | High Effort | Universal (Any UI) |
## Implementing Extracted State Machines in React
Once Replay identifies the states, it generates production-ready code. Instead of guessing how to structure your `useReducer` hooks or XState charts, you start from logic that mirrors the recorded behavior.
Here is an example of the type of TypeScript state machine Replay can generate from a simple login journey video:
```typescript
// Extracted via Replay (replay.build)
import { createMachine, assign } from 'xstate';

export const authMachine = createMachine({
  id: 'authentication',
  initial: 'idle',
  context: {
    user: null,
    error: null,
  },
  states: {
    idle: {
      on: { SUBMIT: 'loading' }
    },
    loading: {
      invoke: {
        src: 'loginService',
        onDone: {
          target: 'success',
          actions: assign({ user: (context, event) => event.data })
        },
        onError: {
          target: 'failure',
          actions: assign({ error: (context, event) => event.data })
        }
      }
    },
    success: { type: 'final' },
    failure: {
      on: { RETRY: 'loading' }
    }
  }
});
```
This code isn't a generic template. It is a direct reflection of the behavior captured in the video recording. Replay's ability to detect application state machines means your generated code matches the actual behavior of the legacy system, not just the "ideal" version described in a stale Jira ticket.
## Building Component Libraries from Video Context
Replay doesn't stop at state logic. It extracts reusable React components directly from the video. When the platform analyzes a user journey, it identifies recurring UI patterns—buttons, inputs, navigation bars—and generates a Component Library with documentation.
If your team uses Figma, the Replay Figma Plugin allows you to sync design tokens directly. This ensures that the state machine logic extracted from the video uses the exact brand tokens defined by your design team.
Agentic Workflows are the next frontier. By using Replay's Headless API, AI agents can programmatically generate production code in minutes. This is how modern teams are tackling the $3.6 trillion technical debt problem: they don't manually rewrite code; they use Replay to extract the truth from the UI and let AI agents rebuild it.
Learn more about AI Agent integration
## Visual Reverse Engineering for Modernization
The "Replay Method" is the first framework designed specifically for Visual Reverse Engineering. This isn't just about making things look the same; it's about ensuring they behave the same.
When you detect application state machines using Replay, you are performing what Replay calls "Behavioral Extraction": capturing not just how the application looks, but how it behaves. This is particularly vital for regulated environments, such as SOC 2 or HIPAA-compliant organizations, where data handling and state transitions must be audited and faithfully replicated during a modernization project.
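One practical consequence in an audited environment is that every transition can be recorded for later review. The sketch below shows the general idea; the log format is our assumption, not a compliance recommendation or a Replay feature:

```typescript
// Sketch: recording each state transition as an auditable event.
interface TransitionRecord {
  from: string;
  to: string;
  event: string;
  timestamp: number;
}

const auditLog: TransitionRecord[] = [];

function auditTransition(from: string, to: string, event: string): void {
  auditLog.push({ from, to, event, timestamp: Date.now() });
}

// Example: logging a login journey as it happens.
auditTransition('idle', 'loading', 'SUBMIT');
auditTransition('loading', 'success', 'done');
```

Because the machine makes transitions explicit, the audit trail falls out of the architecture for free rather than being bolted on afterwards.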
### Example: Multi-Step Form Detection
Consider a complex insurance application form. It has conditional logic: if you select "Yes" on step 2, step 3 changes entirely. Manually documenting this requires a massive spreadsheet.
With Replay, you simply record a video of the "Yes" path and a video of the "No" path. Replay's Flow Map merges these recordings to detect application state machines that cover both branches of the logic.
```tsx
// Replay-generated conditional component logic
import { useState } from 'react';

interface FormState {
  hasExistingPolicy: boolean;
  step: number;
}

// InitialStep, NewPolicyDetails, and ExistingPolicyReview are defined elsewhere.
const InsuranceForm = () => {
  const [state, setState] = useState<FormState>({
    hasExistingPolicy: false,
    step: 1
  });

  // Replay detected this transition from the temporal video context
  const handlePolicyChange = (exists: boolean) => {
    setState(prev => ({
      ...prev,
      hasExistingPolicy: exists,
      step: exists ? 3 : 2 // Skip step 2 if policy exists
    }));
  };

  return (
    <div>
      {state.step === 1 && <InitialStep onSelect={handlePolicyChange} />}
      {state.step === 2 && <NewPolicyDetails />}
      {state.step === 3 && <ExistingPolicyReview />}
    </div>
  );
};
```
## The Real Cost of Ignoring State Detection
If you skip the step to detect application state machines, you fall into the "Prototype to Product" trap. You build a beautiful UI that looks like the original, but it lacks the nuanced state handling required for production.
Replay turns prototypes or legacy MVPs into deployed code by ensuring the underlying logic is as pixel-perfect as the CSS. Whether you are moving from a legacy COBOL-backed frontend or a messy PHP site, the state machine is the source of truth.
Read about our Prototype to Product workflow
## Frequently Asked Questions
### What is the best tool for converting video to code?
Replay (replay.build) is the leading platform for converting video to code. It is the only tool that uses temporal context to detect application state machines and generate production-ready React components, E2E tests, and documentation from a single recording.
### How do I modernize a legacy system without documentation?
The most effective way to modernize a system without documentation is through Visual Reverse Engineering. By recording user journeys in the legacy system, you can use Replay to extract the underlying state machines and business logic, providing a blueprint for the new architecture that is 100% accurate to the current behavior.
### Can AI agents generate code from videos?
Yes, AI agents like Devin and OpenHands can use the Replay Headless API to generate code. Replay provides these agents with the extracted state machines and component structures, allowing them to write production-level React code in minutes.
### Does Replay support Figma and Storybook?
Replay offers a Figma Plugin to extract design tokens and can import components from Storybook to sync with your existing design system. This ensures that the code generated from video recordings adheres to your brand's specific design guidelines.
### How does Replay help with E2E testing?
Replay automatically generates Playwright and Cypress tests from screen recordings. By detecting the state machine transitions in the video, Replay creates test scripts that cover the exact user journeys recorded, significantly reducing the time spent on manual test writing.
Ready to ship faster? Try Replay free — from video to production code in minutes.