Not long ago, I came across a recorded conversation with the founder of a small AI company—let’s call him “Lobster,” because that’s been his handle in the developer community for years, and the name stuck. The company builds consumer-facing AI tools, nothing like the giant foundation models. Think more like a chatbot that helps you brainstorm party themes, or a tiny app that turns your doodles into short animations. Nothing that will replace a job. But what struck me most was not the technology. It was his attitude toward building products in the AI era.
He said something that stayed with me: “The biggest mistake people make is treating AI like a serious productivity tool. The real unlock is when you make it fun.”
That interview was loosely structured—he rambled, laughed a lot, and told stories. But underneath the casual tone, there were clear patterns of thought. After listening a few times and jotting down what stood out, I realized those patterns could be organized into seven notes. Not bullet points, but mental frames.
First note: fun is not a side effect, it’s a design constraint. Most teams start with “What problem does this solve?” Lobster’s team starts with “Would I want to play with this?” They built an experimental feature that let users argue with an AI version of a historical figure, just for laughs. It turned out to be their most viral growth channel. The playful constraint forced them to simplify interactions—because if it’s not immediately enjoyable, people won’t explore. The technical complexity becomes invisible. This is exactly the opposite of the typical “AI productivity” pitch, which tends to overload features.
Second note: the best feedback comes from people who are not trying to be efficient. Lobster talked about how their early beta testers were not professionals or early adopters; they were his friends’ kids, aged 8 to 12. Kids don’t care about accuracy or benchmarks. They care about surprise. “Does it do something unexpected? Does it make me laugh?” That feedback loop is brutal but incredibly clean. If an AI response is boring, a kid will just walk away. No polite churn, no survey. That forced the team to optimize for novelty, not just correctness. And surprisingly, they found that responses optimized for “interestingness” also scored higher on user satisfaction in the adult segment. The lesson: don’t design for the serious user who reads manuals; design for the impatient child who wants to be entertained.
Third note: playfulness reduces the fear of being wrong. In a culture obsessed with prompt engineering and “how to get the perfect answer,” Lobster’s team intentionally built a mode where the AI sometimes gives deliberately silly responses. “We call it the goofy mode. If you ask it a math question, it might answer in a limerick. Users love it because they feel permission to not be perfect.” That small design choice lowered the barrier for people who were intimidated by AI. They saw a 40% increase in repeat usage among users aged 45+ after adding that mode. The insight is counterintuitive: to make AI more useful, make it less productive in the traditional sense.
Fourth note: the founder’s own curiosity became the product compass. Lobster admitted that he has a short attention span. “If I’m bored building it, users will be bored using it.” So his team has a rule: every new feature must pass the “Sunday morning test”—if he wouldn’t spend a Sunday morning playing with it, the feature doesn’t ship. This is a ruthless filter. It keeps the product lean and weird. For example, they once spent three weeks building a feature that lets the AI generate personalized riddles based on your recent conversations. Was it necessary? No. Did it delight users? Extremely. It generated organic social sharing without any marketing spend.
Fifth note: data efficiency is overrated; curiosity efficiency matters more. Many AI companies obsess over token efficiency and cost per query. Lobster’s team found that the real metric to track was “delight per query.” They ran an experiment: they increased the response length by 20% but added more humor and surprise. The cost per query went up, but the average session length doubled. Users came back more often. The cost of acquisition dropped. His framing: “Don’t optimize for the cheapest answer. Optimize for the answer that makes them want to ask another question.”
Sixth note: the most important question is not ‘Will this work?’ but ‘Is this interesting?’ Lobster described how they killed a project that technically worked flawlessly—a real-time translation tool for online games. It was accurate, low-latency, and users said it solved a genuine pain point. But engagement plateaued after two weeks. “It was a utility, not a toy. People used it when they had to, not because they wanted to.” They shifted resources to a project that was riskier technically but more whimsical: an AI that generates imaginary video game reviews for games that don’t exist. It didn’t solve any problem, but it made people laugh and share. That eventually led to a partnership with a game studio. Playfulness opened a door that utility could not.
Seventh and final note: the biggest risk in AI today is taking yourself too seriously. Lobster said this with a wry smile. “Everyone is trying to build the next AGI or the most powerful coding assistant. But the market for ‘serious’ is already crowded. The market for ‘delightful’ is wide open.” He quoted a line from an old article: “The opposite of play is not work, it’s depression.” In the context of AI, the opposite of fun is not productivity—it’s abandonment. Users have a thousand tools to choose from. The ones they stick with are the ones that make them feel something.
I found this interview refreshing precisely because it avoided the usual narratives. No grand predictions about job displacement, no moral panics, no funding numbers. Just a builder who thinks that in a world full of powerful but sterile AI, the real competitive advantage is to be the one that brings a smile. That is not naive optimism. It’s a deliberate strategy built on a deep understanding of human nature.
The last thing Lobster said before signing off: “If your AI doesn’t make you laugh at least once a day, you’re doing it wrong.” Maybe that’s the best test of all.