Scoffing at a Certain Company 0.3 Parsecs Away (Gemini 2.5 Pro Translated Version)
First, three clarifications:
0.3 is an approximation. (A bit of trivia: the precise definition of a parsec is 648,000/π astronomical units.)
Depression is no joke. As someone who has struggled with it myself: if the illness in question is real, I wish it were fake; if it is fake, I wish it were real.
There are no particular stakes here. I want to scoff purely because this company once judged me "not outstanding enough," and by the principle of reciprocity I owe them a similar evaluation.
Anyway, I suppose everyone expected this company to see this day; I just didn't expect it so soon. Burning through 50 million dollars, even by literally setting the bills on fire one at a time, should take at least two years. Of course, I wouldn't dare publicly claim that the company has completely collapsed and vanished. Still, in the spirit of the Chinese saying that one should neither be reckless in valor nor aim merely to survive, after a setback like this, restoring a reputation that was perhaps never very good in the first place will likely be exceedingly difficult. It's just a pity for the algorithm folks who had already joined.
Honestly, my urge to write this article wasn't strong at first. But after browsing the comments on a certain community where the average annual income is supposedly in the millions, I found that no one had clearly explained what went wrong at this company. So, out of a noble desire to save the domestic general-purpose artificial intelligence startup scene, I'll share my view.
The problem is actually very simple: they benchmarked themselves against OpenAI. Not the OpenAI of a few years ago, struggling on the brink and scoffed at by others while persistently producing work of high academic value, but the OpenAI already on the cusp of success, about to monetize commercially. They even reduced the latter to a formula: Success = Large Language Model = Hardware Infrastructure + Software Architecture + Top-tier Algorithms + Massive Data. Then they built a team and executed according to it: frantically buying "Old Huang's" (Jensen Huang of Nvidia) prime industrial waste, i.e., GPUs; recruiting architecture teams widely considered to contribute little; setting unrealistically high hiring bars (I even suspect that, to satisfy the boss's tastes, the team would rather hire no one than hire too many); and leaving the hired algorithm engineers to wrangle the data themselves (my guess; please correct me if you know better).
Is there anything wrong with this? From a capital perspective, probably not. But to a skeptic, this success formula is a very naive induction from a sample of one, OpenAI; it is no certainty. If you ask me, perhaps the most important term in the formula, Augustine's divine illumination, was simply overlooked. To put it colloquially: this company did everything right, except they never invited a high priest to consecrate the place. I know you are all materialists, and "divine illumination" sounds unreliable. Yet through its long years of struggle and academic output, OpenAI may genuinely have formed such a culture internally, something akin to divine illumination, under whose radiance every newly arrived "Cai Kun" (a celebrity's name, used here playfully to mean "rookie") algorithm hire could generate wisdom. That accumulated wisdom ultimately produced what is perhaps the thing closest to general artificial intelligence today (and the product closest to money; nothing to be ashamed of).
Finally, one more clarification: the cover image, with its "light," its "festive New Year atmosphere," and its "out-of-this-world" (Shangri-La) feel, was generated by a large model "two months apart from the one with the same logic as the 'donkey meat burger' model" (a very specific inside joke, left as-is).