Attempting to Alleviate Some of the Anxiety Brought by LLMs (DeepSeek Translated Version)

As we all know, LLMs (Large Language Models) have lately caused immense panic, a wave of reflection, and a pervasive sense of anxiety. These phenomena are most visible in the flood of ChatGPT news, LLM tutorials, and industry-trend pieces pushed out by media and self-media outlets alike, whether or not communication is their primary business. At the same time, hundreds of large, medium, and small legal entities in China (including some not yet registered) have begun pivoting toward conversational LLMs or positioning themselves as competitors to OpenAI. Meanwhile, a municipal administrative body released a White Paper on the Development of the Artificial Intelligence Industry, proposing to “support leading enterprises in building large models that rival ChatGPT.” Worst of all, of course, is the open letter titled Pause Giant AI Experiments, which fully conveys a sense of resentment: “If you won’t let me on board, I’ll just tear the road apart.”

In fact, as the title of this article suggests, I am only willing (and only able) to try to alleviate some of the anxiety, while I am actually pleased to see the panic and reflection. Or rather, I have ample reasons to argue that panic and reflection are rational responses, whereas anxiety is not. The reasons why LLMs—or more accurately, OpenAI’s GPT series—have brought us such immense panic and necessitated disruptive reflection can be summarized in three main points:

  1. OpenAI successfully deployed ultra-large models that had previously been considered impractical (especially by domestic tech companies), even though each invocation runs at a loss.

  2. OpenAI achieved an incredibly pure form of labor exploitation, amassing an immeasurable amount of training data, and through a preemptive (and loss-incurring) strategy, obtained even more invaluable user feedback data.

  3. OpenAI proved that if something is made well enough, it can afford to be “bad” in other respects (e.g., not open-sourcing its datasets, or points 1 and 2 above), without needing a specific product methodology.

Of course, this list could easily extend to hundreds of points, much like how Operation Desert Storm shocked a certain large East Asian administrative entity. But that’s not today’s topic, so readers are free to share their thoughts in a friendly manner. Returning to the main theme, I will discuss in three parts why anxiety is unnecessary (or why I cannot provide arguments for its necessity).

  1. Whether You’re Anxious or Not, There’s Little You Can Do About It Right Now

On a personal level, a certain influencer’s Dos and Don’ts in the Era of LLMs has been widely circulated on social media. I won’t elaborate further, but my take is that it represents positive thinking within a negative framework: ultimately, there’s little you can do, so focusing on your own work is the right approach.

What I really want to talk about, however, is the organizational level. Even if you do your best as an individual, genuinely addressing this issue still requires an organization. Unfortunately, I hold an extremely pessimistic view of domestic organizations at the moment: while everyone seems to be working hard, it’s unlikely that anyone will produce something truly comparable to Western achievements. In my understanding, for domestic organizations to catch up with, let alone rival, GPT, only two models are viable: not the “Yanhong model” (bad money driving out good) or the “Huiwen model” (rebellious upstarts), but the MBS model and the Stalin model.

The MBS (Mohammed bin Salman) model entails bottomless investment. On the hardware side, start by buying land and building nuclear power plants; on the software side, poach OpenAI’s core members outright with money, and if that fails, poach batches of domestic experts, with everyone, top Ph.D. holders included, working on the front lines. Finally, build a “data factory Faust” that runs nonstop year-round to generate data. Then multiply all of the above by three, keeping multiple parallel efforts à la the Manhattan Project, and mobilize the entire conglomerate behind the endeavor. Don’t even bother with mere millions; Microsoft’s $10 billion is the benchmark to start from. Otherwise, what’s the alternative?

The Stalin (Ио́сиф Виссарио́нович Ста́лин) model, by contrast, is for those without infinite budgets. Gather a group of young talents regardless of background, build a city in the Siberian wilderness (fine, Lop Nur), take care of all their living needs, and let them work freely. Slowly but steadily cultivate a cohort of truly capable people, purge the frivolity of today’s academic world, and spend 5–10 years building a complete knowledge system from fundamentals to engineering before competing with the West.

  2. LLMs Themselves May Not Be a Viable Path to Strong AGI

We won’t rehash the definition of strong AI here, though I can’t resist noting that if “engaging in conversation like an average human” is the standard, then GPT-4 isn’t just strong AI—it’s practically a deity. Ask it to draft a white paper on promoting the large-model industry in a certain Chinese administrative region, and the result would likely outperform many humans in our administrative bodies. My main argument here revolves around the nature of language, summarized in three layers:

As a representation of the real world, language is inherently ambiguous: the same utterance can evoke different thoughts, and the same thought can produce different utterances. For example, if your child scores 60 on an exam and you calmly say, “Great job,” the child might take it literally and cheerfully ask for in-game purchases as a reward, only to be met with a verbal lashing, because the “Great job” was sarcasm all along.

Due to this ambiguity, humans are born with biases in language use, and these biases ensure individual consistency in associating language with thought. LLMs trained on massive data may lose this bias. Continuing the example: if “Great job” reads as sarcasm at 60 points, at what score does it stop being sarcastic? There is no fixed standard; it depends on each person’s bias toward the world. In other words, someone who often says “Great job” might simply enjoy using it sarcastically, regardless of the actual score. Big data naturally erases such biases: an LLM might learn only a score-conditioned distribution and sample whether the phrase is encouragement or sarcasm.
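To make the bias-erasure point concrete, here is a minimal Python sketch (entirely my own illustration; the threshold values, the normality assumption, and every function name are hypothetical): an individual speaker maps a score to an intent through a fixed personal threshold, so their usage is deterministic and self-consistent, whereas a model fit to the whole population retains only P(sarcasm | score) and samples from it.

```python
import math
import random

# Toy sketch of the bias-erasure argument. All names, thresholds, and the
# normality assumption are hypothetical, chosen purely for illustration.

def p_sarcasm(score, mean=75.0, std=10.0):
    """P(sarcasm | score): the fraction of speakers whose personal sarcasm
    threshold exceeds the score, assuming thresholds are normally
    distributed across the population."""
    cdf = 0.5 * (1.0 + math.erf((score - mean) / (std * math.sqrt(2.0))))
    return 1.0 - cdf  # P(threshold > score)

def individual_speaker(score, threshold=80.0):
    """One person with a fixed bias: the same score always maps to the
    same intent, so the speaker is self-consistent."""
    return "sarcasm" if score < threshold else "encouragement"

def population_model(score, rng=random):
    """A model fit to everyone at once keeps only the marginal
    P(sarcasm | score) and samples from it, washing out any single
    individual's consistent bias."""
    return "sarcasm" if rng.random() < p_sarcasm(score) else "encouragement"

if __name__ == "__main__":
    for score in (60, 75, 90):
        person = individual_speaker(score)
        samples = [population_model(score) for _ in range(5)]
        print(f"score={score}: the individual always says {person!r}; "
              f"the model samples {samples}")
```

On the same input, the individual never wavers, while the sampled model can flip between readings from run to run; that flip is exactly the loss of individual bias described above.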

Without bias, an LLM cannot create new knowledge or concepts based on a given thought, which deviates from the definition of strong AI. New linguistic knowledge or concepts are essentially products of old knowledge or concepts combined with bias. For instance, the character “翔” (xiáng) originally meant “to soar,” but due to an individual’s linguistic bias and its comedic effect, it acquired its current slang meaning.

  3. Anxiety Is the Very Proof of Your Existence

Anxiety itself is not a negative concept; its devaluation stems mostly from worldly prejudice. Physiologically, anxiety is the emotional turbulence and unpleasant feeling caused by uncertainty about expected outcomes (often negative ones), a perfectly normal human reaction. Unlike anxiety disorders, moderate anxiety is a driver of self-improvement. So why not drink this bowl of anxiety soup and pull all-nighters battling NaNs for another year?

From Heidegger’s existentialist perspective, what LLMs bring me is less “anxiety” and more a concrete manifestation of Angst (dread)—the very being of Dasein (existence). While LLMs haven’t brought me unavoidable death (though they have socially “killed” many), they evoke a primal pursuit of wisdom. If I’m not on the path to seeking wisdom, then I’m already dead.

That’s all. I hope you enjoyed it.



