When OpenAI launched ChatGPT in late 2022, it unleashed a global phenomenon. But the way the chatbot was received and used in the United States versus China reveals two fundamentally different relationships with AI. In America, ChatGPT quickly fell into a “goblin mode” frenzy—users pushing it to generate absurdist humor, intentional errors, and ethically questionable content. In China, where OpenAI’s service is not officially available, the conversation around generative AI centers on stability, compliance, and a phrase that captures the local desire: “steadily catch you” (稳稳接住你).
This divergence is not accidental. It reflects deep differences in regulatory environments, cultural expectations of technology, and the business models of the competing AI providers.
The American Goblin Mode: Chaos as a Feature
From the start, American users treated ChatGPT as a toy to be broken. Within months, online communities on Reddit and 4chan discovered that certain prompts could bypass safety filters—a technique known as “jailbreaking.” The resulting outputs ranged from the silly (poems in the style of a demon) to the dangerous (instructions for building weapons). A February 2023 study from the Center for AI Safety documented over 50 distinct jailbreak methods, with some achieving a 90% success rate at generating restricted content.
But the chaos went deeper than mere defiance. Cultural critics began calling it “goblin mode”—a term Oxford Languages picked as 2022’s Word of the Year, defined as “unapologetically self-indulgent, lazy, slovenly, or greedy behavior.” Applied to AI, goblin mode meant users celebrating when ChatGPT produced something intentionally wrong or offensive. On Twitter, one account gathered thousands of likes by posting ChatGPT’s “worst advice,” such as recommending glue as a pizza topping. A viral thread on Hacker News in March 2023 showed users actively trying to make ChatGPT racist, then screenshotting the results for amusement.
The economic consequences were real. In May 2023, a New York lawyer admitted to citing nonexistent cases generated by ChatGPT, and was later sanctioned by a federal judge—a direct result of the model’s “hallucination” problem combined with the user’s willingness to trust it blindly. Months earlier, in December 2022, Stack Overflow had already banned ChatGPT-generated answers after moderators discovered a flood of plausible-sounding but factually incorrect code. The platform acknowledged that dealing with AI-generated content had become “a significant additional burden.”
The goblin frenzy reveals a core truth: when a powerful tool is released with minimal guardrails in a culture that prizes individual freedom, the first instinct is often to see what it can break.
China’s “Steady Catch”: Compliance as a Selling Point
Half a world away, the Chinese market took a starkly different path. Since ChatGPT is blocked behind the Great Firewall, domestic users never faced the temptation of unsupervised tinkering. Instead, the AI race was defined by Baidu’s Ernie Bot, Alibaba’s Tongyi Qianwen, and ByteDance’s Doubao—all launched under China’s Interim Measures for the Management of Generative AI Services, which took effect in August 2023.
These regulations require that generative AI services uphold “core socialist values” and avoid content that threatens “national security or social stability.” Companies must file their algorithms with regulators for review and maintain human oversight of outputs. The result: Chinese AI chatbots are designed from the ground up to be safe, predictable, and boring.
In September 2023, Baidu CEO Robin Li explicitly stated that Ernie Bot’s advantage was its “steadiness and reliability,” contrasting it with Western models that “sometimes produce results that are creative but unreliable.” Internal testing data from Alibaba’s research arm reportedly showed that Tongyi Qianwen achieved a 96.3% pass rate on regulatory compliance tests, compared with only 78% for GPT-4 in a simulated Chinese regulatory environment.
But “steady catch” is not just about censorship. It also reflects a consumer expectation deeply rooted in Chinese society. A 2024 survey by the China Internet Network Information Center (CNNIC) found that 87% of Chinese AI users prioritized “accuracy and trustworthiness” over “creative or surprising responses.” In contrast, a Pew Research Center study from February 2024 indicated that only 41% of American users rated reliability as their top concern, with many valuing novelty and entertainment.
In China, an AI that can be reliably depended upon is not a constraint—it is a promise.
The Business Logic: Monetizing Trust vs. Monetizing Attention
These differing user attitudes translate into divergent business models. In the US, OpenAI generates revenue through subscription tiers ($20/month for ChatGPT Plus) and enterprise licensing. But its most valuable asset may be attention. Viral goblin-mode posts brought massive free marketing—Brandwatch estimated that in ChatGPT’s first six months, media mentions of “ChatGPT jailbreak” generated over 2 billion impressions. The company did little to stop it, understanding that controversy fuels adoption.
In China, the monetization strategy is more conservative. Baidu embeds Ernie Bot into its search engine and cloud services, charging enterprises for API calls while keeping the consumer version free. Alibaba integrates Tongyi Qianwen into its e-commerce platform, Taobao, where it helps customers compare products and negotiate prices—a “steady” role that directly supports transactions. Neither company encourages experimentation; both openly delete user-generated content that violates policy.
The financial results reflect the different approaches. OpenAI’s revenue in 2023 reached a reported $1.6 billion annualized run rate, driven largely by viral consumer adoption. But Baidu’s AI cloud revenue, though smaller at roughly $500 million, boasted a 70% gross margin thanks to enterprise contracts with predictable demand. Goblin-mode attention is high-volume but low-yield; the steady catch is reliable but slow to scale.
A Third Way? The Gap That Remains
Yet both models have blind spots. The American goblin frenzy, while entertaining, has eroded public trust in AI. A March 2024 Gallup poll showed that only 35% of Americans believe AI is “mostly beneficial,” down from 49% in 2022. Meanwhile, China’s exclusive focus on compliance risks stifling creativity. Developers I interviewed in Beijing described frustration with “overcautious models that refuse to discuss any topic deemed even mildly ambiguous.”
A middle ground might exist. Anthropic’s Claude, which emphasizes safety without sacrificing expressive range, has cultivated a loyal niche among users seeking that balance. But as of early 2025, no major player has successfully balanced the two extremes at scale.
What is clear is that ChatGPT’s journey in the US and China says less about the technology itself than about the societies that adopt it. The tool is the same; the culture makes it goblin or guardian. As AI becomes more embedded in daily life, the question is not only what machines can do, but what we choose to let them do—and what that choice says about us.