Popmelt Taste™ transforms the way new-gen developers and designers build products by bringing design intelligence directly into agentic IDEs like Cursor. We use bleeding-edge technology to encode and transmit aesthetic knowledge, instantly granting human-grade design capabilities to AI agents.
Everyone who works in tech has experienced the pain of a design-dev handoff gone wrong, and the frustrating failure of existing tools and processes to fix it. The rise of AI coding assistants like Cursor and Windsurf has turbocharged product development, but it has also made this problem even harder to avoid.
These tools can generate entire apps in record time, but they're almost completely design-blind, and their uninspired output makes that obvious. Getting a vibe-coded project from "functional prototype" to "market-ready product" remains a super manual process that takes strong design intuition and a lot of back-and-forth with the agent.
Older solutions to these problems exist, and they're similarly manual. Design systems, tokens, and specs all work, but they're time-consuming to build and fragile to maintain. Plus, they're all designed for humans, not AI agents, missing a huge opportunity to tap into LLMs' associative superpowers.
We looked at all this and asked ourselves: what if AI agents could design as flexibly and elegantly as they code?
Anthropic's Model Context Protocol gave us the perfect foundation. Instead of trying to build yet another design documentation tool or, even worse, another prompt directory, we could give anyone using agentic dev tools plug-and-play access to design systems. We started in an obvious place, mapping out the kind of knowledge we figured agents would need: color systems, typography scales, spacing tokens, component patterns, accessibility guidelines.
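To make that concrete, here's a rough sketch in TypeScript of the kind of structured design knowledge an MCP server could hand to an agent. The field names and values are purely illustrative, not our production schema:

```typescript
// Hypothetical shape of the design knowledge an MCP server might expose.
// None of these names come from Popmelt's actual schema; they're illustrative.

interface DesignSystemSnapshot {
  colors: Record<string, string>;                              // semantic name -> hex value
  typeScale: { step: string; px: number; lineHeight: number }[];
  spacing: number[];                                           // spacing tokens, in px
  componentPatterns: string[];                                 // named patterns the agent may reuse
  accessibility: string[];                                     // guidelines stated as plain rules
}

const exampleSystem: DesignSystemSnapshot = {
  colors: {
    "surface": "#FFFFFF",
    "surface-muted": "#F5F5F4",
    "text-primary": "#1C1917",
    "accent": "#6D28D9",
  },
  typeScale: [
    { step: "body", px: 16, lineHeight: 1.5 },
    { step: "heading-2", px: 24, lineHeight: 1.3 },
    { step: "heading-1", px: 32, lineHeight: 1.2 },
  ],
  spacing: [4, 8, 12, 16, 24, 32, 48],
  componentPatterns: ["card", "inline-form", "empty-state"],
  accessibility: [
    "Body text must meet WCAG AA contrast (4.5:1) against its background.",
    "Interactive targets should be at least 44x44px.",
  ],
};

// A server tool or resource handler would serialize something like this for the agent.
console.log(JSON.stringify(exampleSystem, null, 2));
```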
Early experiments were promising, but we kept bumping into the same problems we'd spent years solving for humans, and we started wondering whether our hyper-specific, made-for-humans solutions were part of the problem. When those solutions worked, they worked well, but they often didn't. Worse, they seemed to fail exactly where agentic tools usually excel: coming up with clever solutions to tricky or ambiguous asks.
The breakthrough came when I realized we had to think beyond design tokens. CSS interpreters demand hex codes and font-size definitions; agents thrive on a conceptual framework. Systems of hard values were helpful up to a point, but what agents really needed were structured principles and a philosophy: thoughtful opinions about which aesthetics to favor and which to avoid.
In other words, they needed an approximation of taste.
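To illustrate the difference, here's a rough sketch of a hard-value token next to the kind of principle-level directive we mean. The wording and structure are illustrative, not our actual taste-model format:

```typescript
// Hypothetical contrast between a hard-value token and a principle-level
// "taste" directive; the real taste-model format isn't shown here.

// Hard value: exact and machine-checkable, but brittle outside the context it was made for.
const hardToken = { name: "accent", value: "#6D28D9" };

// Principle: an opinion the agent can apply to situations the tokens never anticipated.
interface TastePrinciple {
  principle: string;   // the opinion itself
  favor: string[];     // aesthetics to lean toward
  avoid: string[];     // aesthetics to steer away from
  rationale: string;   // why, so the agent can generalize rather than copy
}

const restraint: TastePrinciple = {
  principle: "Use color to direct attention, not to decorate.",
  favor: ["one saturated accent per view", "neutral surfaces", "generous whitespace"],
  avoid: ["gradients on body surfaces", "more than two accent hues", "decorative borders"],
  rationale:
    "A single point of emphasis tells the user where to look; competing accents dilute hierarchy.",
};

console.log(hardToken, restraint);
```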
So we set about designing a model of the thing that separates great design from bad, fine-tuned for use by agentic dev tools. It began to feel more like coaching a fast-learning student than programming or building a design system, and that's when things started to click:
Our agents were coming up with coherent, product-ready UI in one shot, no zhuzhing required.
We're in the final stretch of our private beta now, tuning the things every new product needs to get right: positioning, pricing, onboarding, infrastructure. Launch is just around the corner: we'll be opening up our five Gen One taste models to the public in July.
We've already got our eye on what comes next. Generating polished UI is an exciting start, but we want to set agents loose on redesigns, brand distillation, and Figma file analysis. And even that is only the beginning.
Once you figure out how to teach AI one kind of taste (say, design), other forms of taste follow.