Case Study · 04

Scrively · AI Workflow Rendering Engine

Designed and built the rendering engine architecture behind a product where every screen is composed at runtime by an AI. Structured interactions, explicit state transitions, and adaptive UX that doesn't fall apart mid-stream.

Role: Senior Full-Stack Engineer & Frontend Architect
Focus areas: Rendering engine, state machines, AI workflow design, streaming UI
Engagement: 6 months · architecture & build
Stack: TypeScript · React · XState · OpenAI · Anthropic · Node
01

Context

Scrively is a product where the UI is composed dynamically by an AI model at runtime. Every screen, every form field, every button is generated from a model response — not authored by a designer in advance.

The founding premise was powerful and the prototype was impressive. It was also brittle. The AI could generate anything, which meant the front end had to render anything — and had no guarantees about what it would receive.

The brief was to take the prototype from a compelling demo to an architecture that a product team could build on: constrained enough to be reliable, flexible enough to be useful.

02

Problem

An AI that can generate any UI component is only useful if the renderer can handle what it generates — consistently, mid-stream, without visual breakage.

The prototype rendered model output directly as JSX. It worked most of the time. When the model was slightly off, the UI broke silently. There was no way to validate output before rendering, no way to recover from a partial stream, and no concept of workflow state.

The product also needed to support long-running interactions — multi-step forms, branching flows, conditional steps — where the model's decisions at each step depend on what the user did in the last one. There was no framework for that.

Why it needed to be done

A rendering engine without constraints isn't an engine, it's a guess.

Risk surface


The prototype demonstrated the concept. The architecture had to make it shippable.


Silent rendering failures on model variation

Without a schema boundary, any deviation in model output — a missing field, a renamed component — would silently break the UI. The team couldn't ship that to users.


No state machine meant no recoverable workflows

Multi-step interactions require knowing where you are, what's valid next, and how to recover. Without explicit state management, the product couldn't support any workflow longer than a single screen.


Provider lock-in without an abstraction layer

Direct OpenAI calls in the rendering path meant the team couldn't A/B test models, add a fallback, or swap providers without a full refactor.

Solution

What was built and how it fits together.

01 · Schema-bounded component vocabulary
A TypeScript schema defines the set of renderable components and their valid props. The model is prompted to produce output that conforms to the schema. Output is validated before it reaches the renderer.

02 · XState workflow runtime
A state machine owns the workflow: current step, valid transitions, guard conditions, and recovery paths. The AI operates within the machine's constraints — it can suggest a next step, but the machine decides whether it's valid.

03 · In-place streaming renderer
Components render as tokens arrive. Partial state is valid — the renderer handles incomplete output gracefully, with stable component IDs so React doesn't remount on each update.

04 · Structured tool-call protocol
The model uses structured tool calls to produce renderable output, rather than free-form JSON. Tool schemas are generated from the component vocabulary, closing the loop between what the model can generate and what the renderer can handle.

05 · Provider abstraction layer
A thin provider interface wraps OpenAI, Anthropic, and any other LLM. The workflow runtime calls the interface; the model underneath is a configuration value, not a hardcoded dependency.

06 · Workflow author SDK
A TypeScript SDK for defining new workflow types: component vocabulary, transition guards, context shape, and recovery logic. Product and engineering can author new workflows without touching the engine.
Key technical work

The pieces of the build that mattered most.

01

Component vocabulary and schema

Zod schemas for each renderable component. The model is prompted with a condensed schema representation; output is parsed and validated before rendering. Schema violations are caught at the boundary, not at render time.

Zod · TypeScript · Schema validation
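A minimal, self-contained sketch of the boundary check. The production system uses Zod; here the component names, prop shapes, and the `validate()` helper are all illustrative stand-ins for the real vocabulary:

```typescript
// Discriminated union of renderable nodes — the vocabulary the renderer accepts.
// (Illustrative: the real vocabulary is defined with Zod schemas.)
type RenderNode =
  | { component: "TextInput"; props: { id: string; label: string } }
  | { component: "Button"; props: { id: string; label: string; action: string } };

// Required string props per component, used to check raw model output.
const KNOWN: Record<string, string[]> = {
  TextInput: ["id", "label"],
  Button: ["id", "label", "action"],
};

// Validate raw model output before it reaches the renderer.
// Any deviation — unknown component, missing prop — is caught here, not at render time.
function validate(raw: unknown): RenderNode | null {
  if (typeof raw !== "object" || raw === null) return null;
  const { component, props } = raw as { component?: unknown; props?: unknown };
  if (typeof component !== "string" || !(component in KNOWN)) return null;
  if (typeof props !== "object" || props === null) return null;
  for (const key of KNOWN[component]) {
    if (typeof (props as Record<string, unknown>)[key] !== "string") return null;
  }
  return raw as RenderNode;
}

// A conforming node passes; a renamed component is rejected at the boundary.
const ok = validate({ component: "Button", props: { id: "b1", label: "Next", action: "submit" } });
const bad = validate({ component: "Btn", props: { id: "b1" } }); // null
```

The point of the pattern is that a schema violation produces `null` (a recoverable signal) instead of a silent render failure.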
02

XState workflow state machine

State machines per workflow type, with explicit states, guarded transitions, and parallel regions for multi-panel layouts. The machine's context is the source of truth for what's been collected and what's valid next.

XState · State machines · Context
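A hand-rolled sketch of the core idea — the model may suggest a transition, but a guard decides whether it is valid. The production engine uses XState; the states, events, and guards below are illustrative:

```typescript
// Illustrative two-step workflow: collect a name, then an email.
type State = "collectName" | "collectEmail" | "done";
type Event = { type: "NEXT"; value: string };

interface Ctx { name?: string; email?: string }

// Each transition declares a target, a guard, and a context update.
// The machine — not the model — decides whether a suggested step is valid.
const transitions: Record<State, { target: State; guard: (e: Event) => boolean; update: (c: Ctx, e: Event) => Ctx } | undefined> = {
  collectName: { target: "collectEmail", guard: (e) => e.value.length > 0, update: (c, e) => ({ ...c, name: e.value }) },
  collectEmail: { target: "done", guard: (e) => e.value.includes("@"), update: (c, e) => ({ ...c, email: e.value }) },
  done: undefined,
};

// Pure step function: invalid suggestions leave state and context untouched.
function step(state: State, ctx: Ctx, event: Event): { state: State; ctx: Ctx } {
  const t = transitions[state];
  if (!t || !t.guard(event)) return { state, ctx };
  return { state: t.target, ctx: t.update(ctx, event) };
}

let s = step("collectName", {}, { type: "NEXT", value: "Ada" });
s = step(s.state, s.ctx, { type: "NEXT", value: "not-an-email" }); // guard rejects, state unchanged
s = step(s.state, s.ctx, { type: "NEXT", value: "ada@example.com" });
// s.state === "done"; s.ctx holds both collected values
```

Because the machine's context is the source of truth, recovery is a matter of replaying or resuming from known-good state rather than re-deriving it from the UI.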
03

Streaming renderer with stable IDs

Components are assigned stable IDs from the first token of each output block. React keys are derived from these IDs, so streaming updates patch in-place rather than remounting.

Streaming · React reconciliation · Stable keys
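A sketch of the ID-stability mechanism, stripped of React. The token shape and `applyToken` reducer are illustrative; the point is that a block's ID is fixed at its first token, so a React key derived from it never changes across streaming updates:

```typescript
// A streamed output block: id assigned at the first token, text grows after.
interface Block { id: string; text: string }

// Reducer applied per streamed token. An unseen id appends a new block
// (one mount); a known id patches the existing entry in place, so a React
// key derived from the id stays constant and the component never remounts.
function applyToken(blocks: Block[], token: { id: string; delta: string }): Block[] {
  const existing = blocks.find((b) => b.id === token.id);
  if (!existing) return [...blocks, { id: token.id, text: token.delta }];
  return blocks.map((b) => (b.id === token.id ? { ...b, text: b.text + token.delta } : b));
}

let blocks: Block[] = [];
blocks = applyToken(blocks, { id: "p1", delta: "Hel" });
blocks = applyToken(blocks, { id: "p1", delta: "lo" });
// blocks is still a single entry — same id, accumulated text "Hello"
```

In the renderer, `key={block.id}` is what lets React patch text in place instead of tearing the component down on every token.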
04

Structured tool-call protocol

Tool definitions generated from component schema. The model calls tools to produce renderable output. Partial tool calls are buffered and applied on completion, keeping the stream coherent.

Tool calls · OpenAI · Anthropic
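A sketch of generating a JSON-Schema-style tool definition from the component vocabulary, so the model can only emit what the renderer accepts. The vocabulary shape, the `render_` naming, and the flattened string-only props are assumptions for illustration, not the real schema:

```typescript
// Illustrative vocabulary: component name -> required string props.
const vocabulary = {
  TextInput: { id: "string", label: "string" },
  Button: { id: "string", label: "string", action: "string" },
} as const;

// Generate an OpenAI/Anthropic-style tool definition per component.
// Because tools are derived from the vocabulary, the set of things the
// model can call and the set of things the renderer accepts stay in sync.
function toTool(name: keyof typeof vocabulary) {
  const props = vocabulary[name];
  return {
    name: `render_${name}`,
    parameters: {
      type: "object",
      properties: Object.fromEntries(Object.keys(props).map((k) => [k, { type: "string" }])),
      required: Object.keys(props),
    },
  };
}

const tool = toTool("Button");
// tool.name is "render_Button"; tool.parameters.required lists all Button props
```

Adding a component to the vocabulary automatically adds its tool; there is no second place to forget to update.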
05

Provider abstraction

A provider interface with implementations for OpenAI and Anthropic. Streaming, tool call handling, and retry logic are normalized at the interface layer. Switching providers is a one-line config change.

Provider pattern · Streaming normalization
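A minimal sketch of the provider interface. The method name, the stub implementations, and the config shape are illustrative; the real layer also normalizes streaming, tool calls, and retries behind the same boundary:

```typescript
// The only surface the workflow runtime depends on.
interface LLMProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stubs standing in for real OpenAI / Anthropic clients.
const openai: LLMProvider = { name: "openai", complete: async (p) => `[openai] ${p}` };
const anthropic: LLMProvider = { name: "anthropic", complete: async (p) => `[anthropic] ${p}` };

const providers: Record<string, LLMProvider> = { openai, anthropic };

// The provider is a configuration value, not a hardcoded dependency —
// swapping models is a config change, not a refactor.
async function run(config: { provider: string }, prompt: string): Promise<string> {
  const provider = providers[config.provider];
  if (!provider) throw new Error(`unknown provider: ${config.provider}`);
  return provider.complete(prompt);
}
```

Canary-testing a provider swap is then just deploying the same code with a different `config.provider` value.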
06

Workflow author SDK

A typed SDK for authoring new workflow definitions. Vocabulary declaration, transition guards, context shape, and custom recovery logic — without touching engine internals.

SDK design · TypeScript generics · DX
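A hypothetical sketch of what authoring looks like through such an SDK. `defineWorkflow`, the field names, and the guard signature are assumptions about the API surface, not the real SDK — the point is that a generic factory type-checks guards against the declared context shape:

```typescript
// Shape of a workflow definition, generic over its context.
// (Field names are illustrative, not the real SDK surface.)
interface WorkflowDef<Ctx> {
  id: string;
  vocabulary: string[]; // component names this workflow may render
  initialContext: Ctx;
  guards: Record<string, (ctx: Ctx) => boolean>;
}

// Identity factory: its only job is to infer Ctx from initialContext
// so every guard is type-checked against the context shape.
function defineWorkflow<Ctx>(def: WorkflowDef<Ctx>): WorkflowDef<Ctx> {
  return def;
}

const onboarding = defineWorkflow({
  id: "onboarding",
  vocabulary: ["TextInput", "Button"],
  initialContext: { email: "" },
  guards: {
    // ctx is typed as { email: string } — a typo here fails to compile.
    canSubmit: (ctx) => ctx.email.includes("@"),
  },
});
```

Because definitions are plain typed values, authors never touch engine internals — the engine consumes `WorkflowDef` objects and nothing else.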
Business impact

What came out of it.

Rendering reliability · ~99% (placeholder)
Schema validation catches model output deviations before they reach the renderer. Near-zero silent UI failures since the schema boundary was added.

Workflow authoring time · 1 wk (placeholder)
Approximate time to author a new workflow type using the SDK, down from an engine change that took weeks and broke existing flows.

Stream stability · 0 remounts (placeholder)
Stable component IDs mean streaming updates patch in-place. No visible flicker or remount on any workflow step.

Provider switch time · < 1 hr (placeholder)
Time to swap the underlying LLM provider in production. Tested by switching between OpenAI and Anthropic in a canary deploy.

Values marked placeholder are representative — replace with measured numbers from the live system once available.

Final result

An AI rendering engine the product team builds on, instead of fighting.

Scrively's engine turns a fragile idea into something operable: a constrained component vocabulary, an explicit state machine, a streaming renderer that doesn't blink, and a provider layer that's a swap away from any model. The product team can author new workflows without touching the engine, and the engine handles model variation without breaking the UI.

Schema-bounded rendering with zero silent failures
XState machine owning all workflow transitions
In-place streaming with stable component IDs
Provider-agnostic LLM layer swappable in under an hour
SDK enabling new workflow authoring in days, not weeks
Next engagement

Have a similar system to build or optimize?

If you're building an AI-powered product and need a rendering or workflow layer that holds up in production, send a few sentences. I'll respond directly within one business day.

Book a call · bilalasharf@gmail.com