I remember scrolling through Twitter on April 17 when Anthropic dropped Claude Design. It was one of those moments where you stop mid-scroll and just stare. Type a sentence, get back a real HTML page—not a wireframe, not a low-fidelity mockup, but something you could hand off. Powered by Opus 4.7. The demos looked slick, sure, but the catch was obvious: you’re tied to Anthropic’s model, their pricing, their API.
Eleven days later, a team called nexu-io pushed Open Design to GitHub. As of today, it’s sitting at nearly 18,000 stars. Not bad for something that’s basically a weekend experiment that went viral.
What makes Open Design interesting isn’t that it’s a clone. It’s more honest than that. The project doesn’t ship its own AI model. Instead, it acts as a bridge, connecting whatever coding agent you already have on your machine (Claude Code, Codex, Cursor, OpenCode, take your pick) to a structured design workflow. You type something like “build me a magazine-style homepage,” and instead of spitting out a half-baked result, it first pops up a form asking about your target platform, audience, tone, and brand assets. Thirty seconds of checkboxes, then the agent picks from five visual directions, drafts a live to-do list, and scaffolds a real project directory on your machine: it reads template files, writes CSS, generates HTML, and renders everything inside a sandboxed iframe. You can interrupt at any point.
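That intake step is easy to picture as a small data shape. This TypeScript sketch is purely illustrative; the type and function names (`DesignBrief`, `briefToPrompt`) are mine, not the project's, and the real form surely carries more fields:

```typescript
// Hypothetical shape of the pre-generation questionnaire.
// Field names are illustrative guesses based on the description above.
interface DesignBrief {
  platform: "web" | "mobile" | "print";
  audience: string;
  tone: string;
  brandAssets: string[]; // paths or URLs the user supplies, if any
}

// Fold the answers into a prompt fragment the agent sees before generating.
function briefToPrompt(brief: DesignBrief): string {
  return [
    `Target platform: ${brief.platform}`,
    `Audience: ${brief.audience}`,
    `Tone: ${brief.tone}`,
    brief.brandAssets.length
      ? `Brand assets: ${brief.brandAssets.join(", ")}`
      : "Brand assets: none provided",
  ].join("\n");
}
```

The point of the shape is that thirty seconds of checkboxes turn into concrete constraints the agent can't ignore.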
The output isn’t a screenshot or a sketch. It’s a single, self-contained HTML file you can export as HTML, PDF, PPTX, or ZIP. It treats design as something you can iterate on, not something you prompt once and pray.
Here’s the architecture: a Next.js web interface running in your browser, and a Node daemon running locally. When you submit a request, the daemon assembles a prompt stack from two key files—SKILL.md (design capability descriptions) and DESIGN.md (brand design guidelines)—then calls your coding agent’s CLI via stdio. The agent operates with real file system access. It reads templates, greps hex color values in CSS, writes brand-spec.md, generates actual HTML and images. No sandboxed simulation. No memory-based mock. After each round, the daemon pushes the output into the sandbox iframe for live preview. You can edit files directly in the UI or export.
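In outline, that assembly-and-dispatch step looks something like the sketch below. It's a hedged reconstruction from the prose, not the daemon's actual code: `assemblePromptStack` and `runAgent` are hypothetical names, and the real implementation surely adds error handling, streaming, and multi-turn state:

```typescript
import { spawn } from "node:child_process";

// Concatenate the prompt stack: skill description, brand guidelines,
// then the user's request. SKILL.md / DESIGN.md are the project's files;
// the separator and ordering here are assumptions.
function assemblePromptStack(
  skillMd: string,   // contents of SKILL.md
  designMd: string,  // contents of DESIGN.md
  userRequest: string
): string {
  return [skillMd, designMd, `User request: ${userRequest}`].join("\n\n---\n\n");
}

// Hand the assembled stack to whatever agent CLI is configured, over stdio.
// The agent process itself has real file system access; nothing is mocked.
function runAgent(cli: string, prompt: string): void {
  const child = spawn(cli, { stdio: ["pipe", "inherit", "inherit"] });
  child.stdin?.write(prompt);
  child.stdin?.end();
}
```

The detail that matters is the transport: plain stdio to a local CLI, which is why any agent that reads a prompt from stdin can slot in.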
The daemon auto-scans your PATH on startup, detects whatever CLI tools you’ve installed. No vendor lock-in. Every layer is BYOK (bring your own key). Claude Design forces you to use Opus 4.7. Open Design lets you throw your best agent at the problem—maybe a cheaper model for early drafts, maybe a smarter one for final polish.
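A minimal sketch of that detection pass, assuming POSIX-style paths and a fixed list of candidate binaries. The binary names in `KNOWN_AGENTS` are my guesses, not confirmed by the project, and the real scan likely also handles Windows extensions and version checks:

```typescript
import { delimiter, join } from "node:path";
import { existsSync } from "node:fs";

// Candidate agent CLI binaries; these names are illustrative guesses.
const KNOWN_AGENTS = ["claude", "codex", "cursor-agent", "opencode"];

// Walk every directory on PATH and report which agent binaries exist.
// The `exists` predicate is injectable so the scan can be tested without
// touching the real file system.
function detectAgents(
  pathEnv: string,
  exists: (p: string) => boolean = existsSync
): string[] {
  const dirs = pathEnv.split(delimiter).filter(Boolean);
  return KNOWN_AGENTS.filter((agent) =>
    dirs.some((dir) => exists(join(dir, agent)))
  );
}
```

Whatever the daemon finds becomes the agent dropdown; nothing is hard-wired to a single vendor.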
But the part that caught my attention is the prompt engineering designed to avoid that “AI-generated” look. Before generation, there’s that initialization questionnaire. Then, before the final output, the AI runs a five-dimensional self-review—scoring itself on each dimension. Anything below 3 gets redone. There’s also a “slop blocklist” that explicitly bans gradient purples, generic emoji icons, hand-drawn SVG faces, and using Inter as a display font. Real data? If there’s no actual number, it writes a dash instead of fabricating one. Small touches, but they make a difference when you’ve seen enough AI designs that scream “I was made by a bot.”
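The gate logic itself is simple to sketch. The five dimension names below are invented (the source only says there are five), and the blocklist patterns are illustrative stand-ins for whatever the real prompt encodes:

```typescript
// Hypothetical five review dimensions; the real names are not documented.
type ReviewScores = {
  hierarchy: number;
  typography: number;
  color: number;
  spacing: number;
  originality: number;
};

const PASS_THRESHOLD = 3;

// Any dimension scoring below 3 sends the draft back for another round.
// Returns the failing dimensions so the redo prompt can target them.
function needsRedo(scores: ReviewScores): string[] {
  return Object.entries(scores)
    .filter(([, score]) => score < PASS_THRESHOLD)
    .map(([dimension]) => dimension);
}

// A tiny slice of the "slop blocklist", checked against generated CSS.
// These two patterns are my own illustrations of the banned items.
const SLOP_PATTERNS = [
  /font-family:\s*["']?Inter["']?/i, // Inter as a display font
  /linear-gradient\([^)]*purple/i,   // gradient purples
];

function violatesBlocklist(css: string): boolean {
  return SLOP_PATTERNS.some((pattern) => pattern.test(css));
}
```

Mechanically it's just a threshold and a regex list, but putting the check after generation rather than hoping the prompt sticks is what keeps the output from drifting back toward defaults.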
Out of the box, Open Design ships with 71 brand design systems—Apple, Stripe, Vercel, Airbnb, Tesla, Notion, Cursor, Figma, you name it. Pick one from a dropdown, and the next render uses that token set. Plus 19 composable Skills covering web prototypes, magazine-style decks, dashboards, mobile prototypes, pricing pages, email marketing, social media carousels, and more.
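A bundled token set might look roughly like this. The structure and every value below are guesses for illustration; the project's actual brand files will be far richer:

```typescript
// Hypothetical shape of a bundled brand design system: a named token set
// injected into the agent's context for the next render.
interface BrandTokens {
  name: string;
  colors: Record<string, string>; // role -> hex
  fontStack: string;
  radius: string;
}

// Two illustrative entries; colors and fonts here are approximations.
const DESIGN_SYSTEMS: Record<string, BrandTokens> = {
  stripe: {
    name: "Stripe",
    colors: { primary: "#635bff", ink: "#0a2540" },
    fontStack: "sohne-var, Helvetica Neue, sans-serif",
    radius: "8px",
  },
  vercel: {
    name: "Vercel",
    colors: { primary: "#000000", ink: "#171717" },
    fontStack: "Geist, Helvetica Neue, sans-serif",
    radius: "6px",
  },
};

// Turn a dropdown selection into a prompt fragment for the agent.
function tokensToPrompt(key: string): string {
  const t = DESIGN_SYSTEMS[key];
  if (!t) throw new Error(`unknown design system: ${key}`);
  const colors = Object.entries(t.colors)
    .map(([role, hex]) => `${role}=${hex}`)
    .join(", ");
  return `Use the ${t.name} token set: colors ${colors}; fonts ${t.fontStack}; radius ${t.radius}.`;
}
```

Because the tokens travel as plain context rather than a locked theme engine, swapping design systems is just swapping which fragment gets injected.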
Let’s put this in perspective. Before Claude Design, tools like v0.dev and Galileo AI offered similar “prompt-to-design” workflows, but they were all cloud-hosted and subscription-gated. When Anthropic launched Claude Design, it felt like a step forward—but also a step toward more centralization. Open Design’s bet is that the future isn’t a single model with a single brand. It’s a modular setup where you choose your agent, your design system, and your constraints.
Getting it running is straightforward:
git clone https://github.com/nexu-io/open-design.git
cd open-design
pnpm install && pnpm dev:all
Or just drop the repo into your agent and let it handle setup. Open localhost:3000, pick a Skill, pick a Design System, type your request, hit enter. The form fires, the agent works, live to-do cards stream into the UI, and eventually you see the result in the sandbox. Export as HTML, PDF, PPTX, or ZIP.
There’s something liberating about a tool that doesn’t demand you learn its way of doing things. You bring your own agent, your own design tokens, your own tastes. The project is still young—only a couple of weeks old—but the velocity is real. Over 600 issues on GitHub, a flurry of forks, people experimenting with different agents and models. I expect we’ll see variations that optimize for different agents, maybe even a version that bundles a lightweight local model for people who want everything offline.
For now, if you’re tired of AI design tools that give you the same glossy, generic output, Open Design is worth a look. It reminds me that the best tools are the ones that get out of your way.