Vibe Coding as
UX Infrastructure
PM, Designer, and Engineer speak three different languages. Vibe coding — using AI-assisted local development with Claude Code and Figma MCP — became the operating layer that translated between them in real time. I set up the environment, built live AI demos, engineered custom skills tuned to the company design system, and shared the entire workflow with the team.
- The structural gap: Figma designs, PM requirements, and engineering implementation existed in three separate contexts with no shared runtime. Designers couldn't validate live behavior, PMs couldn't see working prototypes early, and engineers had to interpret rather than receive ready-to-run UI code.
- What I built: A local vibe coding environment using Claude Code + Figma MCP that let me go from design intent to running SAPUI5 code in a single session. I then built live AI assistant demos (OC Joule, ASN Joule) that product stakeholders could experience — not just view in Figma.
- The team leverage play: I engineered custom Claude Code skills matched to SAP Fiori Horizon design tokens, component rules, floorplan patterns, and the company's AI assistant design language — then shared them with the UX team so every designer could reproduce the same workflow without starting from scratch.
- The self-learning loop: Each working session generated feedback through my command interactions. I tuned the skills based on that feedback, creating a compounding improvement cycle across every demo and design handoff I produced.
The Gap Vibe Coding Closes
The handoff between design and engineering is where intent dies. A Figma frame is a still image. A PM brief is a narrative. Neither is a running product. By the time a component reaches a browser, it has passed through four people, three tools, and two interpretive translations. Every translation is a fidelity loss.
Vibe coding — iterating live in a local environment with an AI pair that understands both design tokens and code — compresses that chain. The designer becomes the first engineer. The Figma frame becomes a testable artifact in the same session it was created. The PM sees a live interaction rather than a click-through prototype. The gap doesn't get patched: it gets structurally removed.
Root condition: None of these gaps are people problems. They are workflow infrastructure problems. Vibe coding doesn't ask people to communicate better — it changes the artifact type they communicate through. A running prototype changes the conversation at the planning layer before the engineering layer ever begins.
Local Environment Setup
Vibe coding requires a specific local stack. I documented and tested this setup for the UX team — the goal was zero ambiguity for designers with no prior terminal experience and minimal friction for those who had it.
The Core Stack
- Claude Code CLI: the AI pair, launched from the terminal inside the project folder.
- Figma MCP server: gives Claude direct read access to the open Figma file's design context.
- Node.js (v18 or higher) and VS Code: runtime and editor prerequisites.
- A local SAPUI5 project: where generated code runs live in the browser.
- Shared skill files: the team's encoded design system knowledge, loaded with slash commands.
Setup Steps for UX Team
1. Install prerequisite tools
   Install Node.js (v18 or higher), VS Code, and the Claude Code CLI via npm. Verify the Claude CLI is authenticated with your Anthropic API key.

   ```shell
   npm install -g @anthropic-ai/claude-code
   ```

2. Configure Figma MCP in Claude Code
   Add the Figma MCP server to your Claude Code settings file. This lets Claude read design context directly from your open Figma file during a coding session.

   ```jsonc
   // In .claude/settings.json → mcpServers section
   "figma": {
     "command": "npx",
     "args": ["-y", "@figma/mcp"]
   }
   ```

3. Install custom UX team skills
   Copy the shared skill files into your Claude Code skills directory. Skills encode SAP Fiori design tokens, component rules, Joule AI panel patterns, and demo architecture — so Claude produces company-standard output from the first message.

   ```
   ~/.claude/skills/
   ├── design-guide.md
   ├── SAP-Joule.md
   ├── joule-demo.md
   └── code-to-figma.md
   ```

4. Start a project and launch Claude Code
   Create or open a SAPUI5 project folder, then launch the Claude Code session from your terminal in that directory. Open Figma on the design you want to implement.

   ```shell
   cd my-project
   claude
   ```

5. Invoke a skill and start building
   Use a slash command to load the relevant skill, then describe what you want to build. Claude reads the Figma frame, applies the design system rules from the skill, and generates running SAPUI5 code.

   ```
   /design-guide
   Build this Joule AI panel from the Figma frame I have open. Use SAP Horizon Light tokens.
   ```
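For designers editing the settings file for the first time, the `"figma"` fragment in step 2 lives inside the `mcpServers` object of a complete settings file. The surrounding shape below is a sketch of one plausible layout; the exact schema can vary across Claude Code versions:

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "@figma/mcp"]
    }
  }
}
```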
For non-technical designers: The setup requires about 30 minutes once. After that, the workflow is conversational — describe what you want in plain language, Claude generates the code, and a local server shows the result live in the browser. No prior coding experience required to run and evaluate; some familiarity with terminal helps for the initial setup.
Custom Skills Engineering
Claude Code skills are markdown files that encode domain knowledge — component rules, design token names, layout patterns, interaction conventions — so the AI doesn't need to infer them from first principles on every session. I engineered a skill library matched to SAP's design system and shared it with the team as a reusable knowledge asset.
A skill is institutional knowledge made executable. Instead of repeating the same design system context in every prompt, you encode it once — and every team member inherits it.
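As a concrete illustration, a skill file is ordinary markdown that Claude reads at session start. A minimal sketch of what one section of a design-system skill might contain (the rules and token names here are illustrative placeholders, not the team's actual file):

```markdown
# design-guide (excerpt, illustrative)

## Tokens
- Never emit raw hex values; use the named SAP Horizon theming token.
- Spacing comes from standard SAP Fiori margin classes, not ad-hoc pixel values.

## Component rules
- Top-level navigation uses the ShellBar with the SAP brand color token.
- Page structure follows the documented floorplan patterns; do not invent layouts.
```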
Skills I Built for the UX Team
- design-guide: SAP Fiori Horizon design tokens, component rules, and floorplan patterns, plus the self-update protocol that captures corrections from each session.
- SAP-Joule: the company's AI assistant design language, including the Joule panel's visual treatment.
- joule-demo: the demo architecture scaffold with conversation engine primitives, Joule panel layout rules, state mutation patterns, and a demo-marking test banner.
- code-to-figma: the return path, carrying implemented UI back into Figma-reviewable form.
The Self-Learning Mechanism
The design-guide skill includes a self-update protocol. When I give corrective feedback during a session — "that spacing is wrong," "use the Horizon token not the hex value," "the ShellBar needs the SAP brand color" — the skill captures those corrections and I commit them back into the skill file. The next session starts with those corrections baked in.
This is not just a personal productivity improvement. Because the skills are shared files, every correction I make becomes available to every UX team member who uses the skill. The team's collective feedback compounds into a progressively more accurate design system encoding. The skill improves with use, not despite it.
Why this matters for governance: Skills function as a knowledge governance layer. Instead of each designer carrying personal tribal knowledge about SAP Fiori conventions, that knowledge lives in a versioned, shared file. Onboarding a new designer to the vibe coding workflow also onboards them to the team's accumulated design system expertise — instantly.
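To make the self-update protocol concrete: a captured correction can live in the skill file as a short, reviewable entry that the next session reads before generating anything. The format below is illustrative; the source doesn't specify the exact structure:

```markdown
## Corrections log (read before generating)
- Use the Horizon token, not the raw hex value, for the ShellBar brand color.
- Joule panel spacing follows the standard Horizon spacing token, not eyeballed padding.
```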
Live Demos: OC Joule & ASN Joule
The strongest argument for an AI feature is letting stakeholders experience it. Static designs of conversational AI are nearly useless for earning organizational alignment — a product leader who can't feel the latency, follow the conversation, or test an edge case cannot give useful feedback or make a confident investment decision.
I built two fully functional live demos using the vibe coding stack. Each was completed in a single working session and shared as a running artifact that stakeholders could interact with directly.
OC Joule — Order Collaboration AI Assistant
What it demonstrated: A scripted Joule AI assistant for the Order Collaboration module on SAP Business Network. Stakeholders could follow a live conversation between a supplier and the Joule assistant resolving a purchase order discrepancy — complete with auto-type simulation, context-aware responses, and the full SAP Horizon Light visual treatment.
ASN Joule — Advanced Shipping Notification AI Assistant
What it demonstrated: A Joule AI assistant embedded in the ASN (Advanced Shipping Notification) workflow. The demo showed AI-assisted data validation, exception surfacing, and resolution guidance — framed in the full Joule panel UI with SAP Business Network context and real-looking shipping data.
Both demos were built using the joule-demo skill as the architectural scaffold. The skill encoded the conversation engine primitives, Joule panel layout rules, state mutation patterns, and a test banner to clearly mark the artifact as a demo in stakeholder review sessions. This meant each demo started from a validated structure rather than a blank file — reducing session time and ensuring design consistency across both artifacts.
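One of the conversation engine primitives such a scaffold needs is auto-type simulation. A minimal sketch of how it could work, in plain JavaScript (the function name and chunking approach are hypothetical, not an SAP API): a scripted assistant reply is turned into the sequence of partial strings the Joule panel would render on a timer to simulate live typing.

```javascript
// Split a scripted reply into cumulative "typing" frames.
// chunkSize controls how many characters appear per frame.
function typingFrames(text, chunkSize = 3) {
  const frames = [];
  for (let i = chunkSize; i < text.length; i += chunkSize) {
    frames.push(text.slice(0, i));
  }
  frames.push(text); // final frame is the complete reply
  return frames;
}

// In a demo, each frame would be bound to the message control
// on a short interval (e.g. one frame every 30 ms).
const frames = typingFrames("PO 4512 quantity mismatch resolved.", 8);
console.log(frames[0]);                 // first partial chunk
console.log(frames[frames.length - 1]); // complete message
```

Because the script is precomputed, the demo stays deterministic in stakeholder reviews while still feeling live.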
What Live Demos Changed in Stakeholder Conversations
| Conversation Type | Before (Static Figma) | After (Live Demo) |
|---|---|---|
| Feature Scope | Negotiated from a static screen; scope gaps discovered in engineering sprint | Stakeholders find edge cases in the running demo before engineering begins |
| AI Interaction Quality | Evaluated from a static conversation screenshot; response quality unverifiable | Stakeholders experience latency, flow, and response character directly |
| Investment Decision | Decision made on concept confidence; real behavior unknown | Decision made on observed behavior; confidence grounded in working artifact |
| Design Feedback | Feedback on visual spec; interaction behavior must be imagined | Feedback on running interaction; visual and behavior reviewed simultaneously |
| Engineering Handoff | Annotated Figma frames; engineer interprets and approximates | Running reference implementation; engineer understands exact target behavior |
The Feedback Loop & Skill Evolution
Each working session is a calibration event. The vibe coding workflow generates a specific kind of feedback that most design tools don't produce: the gap between what I described and what Claude generated, visible immediately in a running browser. That gap is design system knowledge — it tells you exactly what the AI doesn't yet know about your company's conventions.
How the Learning Loop Works
1. Describe intent in plain language with the target Figma frame open.
2. Claude generates running SAPUI5 code, applying the loaded skill's design system rules.
3. The live browser result exposes the gap between what was described and what was generated.
4. Corrective feedback is given in the session: wrong spacing, a hex value where a Horizon token belongs, a missing brand color.
5. The corrections are committed back into the skill file, so the next session starts with them baked in.
What the Loop Produced Over Time
Every session I ran made the next designer's session better. The feedback loop doesn't just improve my output — it compounds across the team.
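Mechanically, the compounding happens because corrections are committed into the shared, versioned skill files. The sketch below assumes the skills directory is tracked with git (the source says the files are versioned and shared but doesn't name the tool); a temp directory stands in for `~/.claude/skills`:

```shell
# Stand-in for the shared skills directory.
SKILLS_DIR=$(mktemp -d)
cd "$SKILLS_DIR"
git init -q
git config user.email "ux@example.com"   # placeholder identity
git config user.name "UX Team"

printf '# design-guide\n' > design-guide.md
git add design-guide.md
git commit -qm "baseline skill"

# A session correction is appended, then committed, so every
# designer who pulls the skills inherits the fix.
cat >> design-guide.md <<'EOF'

## Corrections
- Use the Horizon token, not the raw hex value, for the ShellBar brand color.
EOF
git add design-guide.md
git commit -qm "skill: capture ShellBar token correction"
git log --oneline
```

Each commit is one calibration event made durable: the next session, on any machine, starts from the team's latest corrections.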
What This Demonstrates
Vibe coding is not a productivity hack. At organizational scale, it is a governance instrument. It changes which artifacts move between functions, at what stage of development, and with what fidelity. Those are infrastructure decisions — the same class of decisions as the planning fields and workflow separations in the governance architecture this work sits alongside.
Design as the first engineering step
When a designer can generate running UI code in a single session, the traditional handoff boundary dissolves. The designer is no longer handing off intent — they are handing off a working reference. Engineering receives a tested interaction pattern, not an annotated Figma frame. The scope of ambiguity in the handoff drops to near zero. This is not a collaboration improvement. It is a structural change in what "design done" means.
Skills as institutional knowledge infrastructure
The skill library I built is organizational knowledge made executable. Every SAP Fiori convention, every Joule panel rule, every design token correction I've made across dozens of sessions is now encoded in files that any team member can load in a single command. This is the same class of problem as the planning fields in the governance architecture — making implicit knowledge explicit, making invisible structure visible, making it impossible for individuals to carry critical context alone.
Closing the gap between PM, Designer, and Engineer
The three-function gap at the center of most product quality problems is not a communication problem. It is an artifact problem. PM works from narratives, designers from static frames, engineers from both plus code. None of these artifact types is natively legible to the other two functions. A running demo is the only artifact type that is legible to all three simultaneously — and vibe coding is the only workflow that makes running demos achievable within a single design session rather than a multi-week engineering sprint.
Connection to governance architecture: The UX planning infrastructure this report sits alongside solves the visibility problem at the roadmap layer. Vibe coding solves the fidelity problem at the delivery layer. Together they form a complete picture: UX is structurally present in planning decisions, and UX output is structurally legible to all functions during execution. The gap closes at both ends.
Vibe coding doesn't just speed up design. It changes the artifact type that moves between PM, designer, and engineer — and artifact type determines organizational alignment.
A skill file is a governance document. It encodes what the team knows, makes it shareable, and ensures the next person starts from the team's highest watermark — not their own blank slate.