Figma Design to Code, Code to Design: Clearly Explained
Turning a design into working code is one of the most common tasks in frontend development, and one of the hardest to automate. The design lives in Figma. The code lives in a repository. Bridging the two has traditionally required a developer to manually interpret layouts, colors, spacing, and component structure from a visual reference. AI coding agents promise to close that gap, but the naive approaches fall short in important ways.

Figma launched its MCP server in June 2025 to bring design context into code. This year, the team released two new workflows: the ability to generate designs from coding tools like Claude Code and Codex, and the ability for agents to write directly to Figma designs. We spoke with Emil Sjölander, Aditya Muttur, and Shannon Toliver from the Figma team behind these releases to understand the details and the engineering challenges.

This article covers how Figma’s design-to-code and code-to-design workflows actually work: why the obvious approaches fail, how MCP solves those problems, and the engineering challenges that remain.

Before diving into how Figma’s MCP server works, it helps to understand the approaches that came before it, and why each one hits a wall. There are two natural ways to give an LLM access to a design: show it a picture, or hand it the raw data. Both have fundamental limitations that motivated a different approach.
The most obvious way to turn a design into code with an LLM is to take a screenshot of your Figma file and paste it into a coding agent. The LLM sees the image, interprets the layout, and generates code. This works for simple UIs, but it breaks down for anything complex. The LLM has to guess values from pixels: it doesn’t know the exact color, or that the spacing between cards is 24px rather than 20px. The output may look close, but not identical. Screenshots give the LLM a visual reference but no precise values.

The next natural step is to go in the opposite direction and give it all the data. Figma exposes a REST API that returns a file’s entire structure as JSON: every node, property, and style. Now the LLM has real data instead of pixels. But having all the data introduces its own problem: there is far too much of it. A single Figma page can produce thousands of lines of JSON, filled with pixel coordinates, visual effects, internal layout rules, and other metadata that are not useful for code generation. Dumping all of this into a prompt can exceed the context window. Even when it fits, the LLM has to wade through pixel coordinates, blend modes, export settings, and other visual metadata that have nothing to do with building a UI, which degrades the output quality.

Neither approach works on its own. Screenshots lack precision. Raw API data has precision but drowns the LLM in noise. What you actually need is something in between: structured design data that preserves exact values like colors, spacing, and component names, but strips out the noise that is not needed for code generation.

That is what Figma’s MCP server does. MCP stands for Model Context Protocol, a standard that defines how AI agents discover and call external tools. Figma’s MCP server takes the raw design data from Figma’s REST API, filters out the noise, and transforms what remains into a clean, structured representation.
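To make the filtering idea concrete, here is a minimal sketch of the kind of pruning such a server could do. The field names (`blendMode`, `absoluteBoundingBox`, `exportSettings`) follow the shape of Figma’s public REST API, but the keep-list and the `compact` helper are illustrative assumptions, not Figma’s actual implementation:

```typescript
// Hypothetical sketch: pruning a raw Figma REST API node down to the
// fields a code generator actually needs. Everything else is noise.
type RawNode = Record<string, unknown> & { children?: RawNode[] };

// Assumed keep-list for illustration; the real server's rules are richer.
const KEEP = new Set([
  "name", "type", "fills", "layoutMode", "itemSpacing", "characters", "children",
]);

function compact(node: RawNode): RawNode {
  const out: RawNode = {};
  for (const [key, value] of Object.entries(node)) {
    if (!KEEP.has(key)) continue; // drops blendMode, absoluteBoundingBox, exportSettings, ...
    out[key] = key === "children"
      ? (value as RawNode[]).map(compact) // recurse into the node tree
      : value;
  }
  return out;
}

// A raw node as the REST API might return it, trimmed for brevity.
const raw: RawNode = {
  name: "Card",
  type: "FRAME",
  layoutMode: "VERTICAL",
  itemSpacing: 24,
  blendMode: "PASS_THROUGH",
  absoluteBoundingBox: { x: 312, y: 88, width: 340, height: 180 },
  exportSettings: [],
  children: [
    { name: "Title", type: "TEXT", characters: "Hello", blendMode: "NORMAL" },
  ],
};

console.log(JSON.stringify(compact(raw), null, 2));
```

The compacted tree keeps the values a developer would reach for (layout mode, 24px spacing, text content) and discards the rendering metadata that only matters to Figma’s canvas.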
Pixel positions become layout relationships like “centered inside its parent.” Raw hex colors become design token references. Deeply nested layers get flattened to match what a developer would actually build. The result is a compact, token-efficient context that an LLM can act on directly.

With that context, let’s look at how the two main workflows, design to code and code to design, actually work under the hood.

The design-to-code workflow starts when a developer selects a frame in Figma, copies its URL, and pastes it into a coding agent like Claude Code or Codex with a prompt like “Implement this design.” The agent then produces working code that matches the design. Here is what happens behind the scenes: the coding agent and Figma’s MCP server work together through four steps. The first two are generic MCP mechanics, tool discovery and tool calling. The last two are where Figma’s engineering makes the difference.

Step 1. The agent discovers available tools

When you first connect the Figma MCP server, the agent receives a list of available tools. These include get_design_context, get_screenshot, get_metadata, and more. Each tool comes with a name, description, and parameter schema. The agent does not know how Figma works internally. It reads these descriptions the same way a developer reads API documentation, then decides which tool to call based on the user’s prompt.

Step 2. The agent prepares the arguments and calls the tool

The agent prepares arguments for the selected tool. In this case the tool is get_design_context, which needs a file key and a node ID, so the agent parses both from the Figma URL you pasted and calls the tool.

Step 3. The request hits Figma’s backend

The tool call is sent over Streamable HTTP to Figma’s MCP server at mcp.figma.com/mcp. The MCP server handles authentication, then calls Figma’s internal services to read the design data: node trees, component properties, styles, and variable definitions.
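Step 2 can be sketched in a few lines. The URL shape below matches Figma’s public share links, where node IDs are dash-separated in the URL but colon-separated in the API; the file key and node ID values are made up, and the exact parsing logic inside any given agent is an assumption:

```typescript
// Sketch of Step 2: extracting the file key and node ID for a tool call
// from a pasted Figma URL. Illustrative, not any agent's actual code.
function parseFigmaUrl(url: string): { fileKey: string; nodeId: string } {
  const u = new URL(url);
  // Share links look like /design/<fileKey>/<file-name>
  const parts = u.pathname.split("/").filter(Boolean);
  const fileKey = parts[1];
  // node-id is dash-separated in URLs ("42-7") but colon-separated
  // ("42:7") in the REST API, so normalize it.
  const nodeId = (u.searchParams.get("node-id") ?? "").replace("-", ":");
  return { fileKey, nodeId };
}

// Hypothetical URL; the agent would receive this from the user's prompt.
const args = parseFigmaUrl(
  "https://www.figma.com/design/AbC123xyz/Checkout-Flow?node-id=42-7"
);
console.log(args); // fileKey "AbC123xyz", nodeId "42:7"
```

With these two values in hand, the agent has everything get_design_context’s parameter schema asks for and can issue the call.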
Step 4. Transform raw design data into LLM-friendly context

This is where the most important engineering happens. The MCP server transforms the raw JSON from Figma’s REST API into a structured representation that maps to how a developer thinks about building a UI. Pixel positions become layout relationships like “this element is centered inside its parent.” Color values become references to design tokens like brand-blue instead of raw color codes. Deeply nested layers get simplified to reflect what the user actually sees. And components get enriched with code mappings. For example, when a Figma button component is mapped to src/components/ui/Button.tsx through Code Connect, that reference appears in the output, and the LLM reuses the existing component instead of recreating it from scratch.

The output defaults to a React + Tailwind framing because that is the most common frontend stack. But it is a structured representation of the design, not generated code. The LLM takes this representation and generates actual code in whatever framework the developer specifies.

Design to code is only half the story. In practice, the code often evolves faster than the design files. A developer ships a feature, tweaks the layout based on user feedback, adds a new section, and now the Figma file no longer matches what is actually running in production. Code to design closes that gap. A developer opens Claude Code, types “send this to Figma,” and a few seconds later the live UI appears in Figma as fully editable layers. Not a flat screenshot, but real frames with auto-layout, editable text, and separate components. This is powered by one key tool in the MCP server: generate_figma_design. Here is what happens under the hood.

Step 1. The Figma tool launches the capture tool

When the developer prompts “send this to Figma,” the agent calls the MCP server’s generate_figma_design tool. The tool opens the target URL in a browser and injects a JavaScript capture script.
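One small piece of Step 4, resolving raw colors to design-token names, can be sketched as a lookup. The token names and hex values here are invented for illustration; the real server resolves tokens from the file’s variable definitions rather than a hard-coded map:

```typescript
// Illustrative sketch of one Step 4 transform: replacing raw hex colors
// with design-token references so the LLM sees "brand-blue", not "#0D5FFF".
// Token names and values are assumptions, not a real design system.
const tokens: Record<string, string> = {
  "#0D5FFF": "brand-blue",
  "#111827": "text-primary",
};

function resolveColor(hex: string): string {
  // Normalize case, then fall back to the raw hex when no token matches.
  return tokens[hex.toUpperCase()] ?? hex;
}

console.log(resolveColor("#0d5fff")); // "brand-blue"
console.log(resolveColor("#ABCDEF")); // no matching token: "#ABCDEF"
```

A token reference is more useful to the LLM than a hex value for the same reason it is more useful to a developer: it names intent, survives theme changes, and maps directly to the variables already defined in the codebase.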
For a local dev server, it connects directly. For production or staging URLs, it uses a browser automation too…