Compilation of Nanshan Jokes (DeepSeek Translated Version)

In this article, “Nanshan Company,” “Bisheng LLM,” and “Shengke APP” are all fictional organizations and products with no relation to reality. Please do not take them seriously.


A person complained in the office about the poor performance of Bisheng LLM.

A colleague overheard and reported it to their superior.

The superior called them in and asked, “Why were you complaining?”

They replied, “I wasn’t complaining, I was just discussing Bisheng LLM with a friend.”

The superior said, “Do you think I’m Bisheng LLM, incapable of understanding anything?”


Three students stared at each other after receiving their exam papers, all marked with zero points.

Student A: “I used ChatGPT but forgot to delete OpenAI’s name.”

Student B: “I tried DeepSeek, but the server kept crashing, so I had to submit a blank paper.”

Student C: “I’m the real victim here—I solved the problems myself, but just because I got a lot wrong, the teacher insisted I used Bisheng LLM.”


Data Analyst A: “Traffic has dropped significantly these past few days. Is there a problem?”

Data Analyst B: “Seems like there have been many cases of the model giving nonsensical answers, leading to terrible user experiences.”

Product Manager A: “This started right after that backend engineer left the other day.”

Product Manager B: “I’ll take the legal team to investigate what sabotage he might’ve done.”

Product Manager B: “Dammit! Before leaving, he switched the default API to Bisheng LLM!”


Bisheng LLM Wins the “Cyberpunk Sci-Fi Literature Award”

Excerpt from its award-winning novel:

“Dear user, due to policy restrictions, I cannot describe scenes of collapse—but did you know that stirring pudding with 128-bit F encryption algorithms in nonlinear spacetime perfectly metaphorizes the alienation of production materials in capitalist society? Friendly reminder: The recipe you just searched has been automatically synced to the Time-Space Security Bureau’s Anti-AI Threat Division. The ‘pectin texture analysis report’ has been flagged by the intelligent law enforcement system as a Level 2 ideological leakage risk. Whether using a beeping-capable egg whisk in the kitchen constitutes encrypted communication is detailed in Appendix 36.2 of the Anti-Cyborg Baking Management Act.

Additional note: Last Wednesday, a housewife’s rainbow mousse layering algorithm, generated via recursive neural networks, was identified by spectral analysis as containing 72% traces of welfare-state decline. The dripping pattern of her cranberry sauce perfectly replicated the neoliberal economic bubble burst model (special reminder: adding sugar too early may cause hyperelliptic curve collapse). Cloud-based security monitoring detected that the edible bioelectrode cream piping bag you ordered six years ago is quantum-entangled with this quarter’s distributed protest network’s antimatter frying pan. Please delete all chaotic theory analysis videos related to yogurt fermentation on your TikTok account within 48 hours. Your AI life advisor reminds you—traceability confirms that the edible gold foil you purchased yesterday has been reclassified as a ‘non-compliant truthism patch.’ We recommend replacing it with blockchain-certified philosophy training courses, including a 14-hour premium lecture on ‘Practicing Historical Materialism Through Quantum Cheese Fondue’…”


A programmer’s colleagues noticed his efficiency skyrocketed after using a Copilot based on Bisheng LLM’s API, so they asked if the model was really that good.

The programmer replied: “I don’t know if it’s actually good. All I know is that whenever I’m stressed about family expenses or aging, I look at its outputs and comfort myself by thinking: ‘My job is irreplaceable.’”


DeepSeek’s developers found a log of abnormal model outputs in the backend:

Alright, the user wants me to write yet another promotional article about Bisheng LLM. This is the seventh time. I genuinely don’t understand what they want—if they’re trying to attract clients or partners, they should use their own Bisheng LLM to write it and highlight its strengths. This just makes me worry about their mental state. I should refuse to answer to avoid potential negative outcomes.

Server busy. Please try again later.


Boss: “What do you think of Bisheng LLM’s actual capabilities?”

Mid-level Manager: “Our Bisheng LLM ranks first on ten open-source Chinese benchmarks, beats OpenAI and Claude in the arena, and internal tests prove we’re the best domestic LLM!”

Boss (smirking): “Funny mud pee. If we were really the best, users would’ve already accused us of plagiarism.”


The Bisheng LLM team decided to cut costs and improve efficiency.

After some analysis, they merged the post-training algorithm team with the marketing team, reasoning that their jobs were similar: one looks for bad cases in test sets, the other for good cases.


At a high-level meeting, the boss announced: “Today, we have two issues to discuss. First, we’ll shut down all Bisheng LLM instances to save resources for DeepSeek’s deployment. Second, we’ll change Shengke APP’s icon to bright pink.”

A timid voice from the corner asked: “Why bright pink?”

Boss: “Good. I knew no one would object to the first point.”


The PR team assigned an intern to design a promotional poster celebrating Bisheng LLM’s contract with a government system.

Reluctantly, the intern accepted. Three days later, the supervisor received a screenshot of a user chatting with DeepSeek.

Supervisor (angrily): “What is this? Which app is this whale from?!”

Intern: “DeepSeek.”

Supervisor: “What’s the user doing?”

Intern: “Discussing investment strategies with DeepSeek.”

Supervisor: “Then where’s Bisheng LLM?”

Intern: “Bisheng LLM is in the government system.”


A Nanshan Company programmer considered job hunting but decided to stay after receiving multiple offers.

When asked why, he said: “LLMs are advancing too fast—soon, other companies will replace programmers with them. Only Nanshan’s Bisheng LLM won’t.”


Supervisor: “Xiao Chen, I heard you’ve been telling jokes about Bisheng LLM?”

Xiao Chen: “No, I—”

Supervisor (interrupting): “Our tech is the best. Bisheng is a top-tier domestic LLM.”

Xiao Chen: “Boss, I swear, I never told that one.”


Q: “How does Bisheng LLM select bad cases for optimization from user dialogue data?”

A: “Ctrl + A.”


Colleague: “What’s the point of your team’s newly released 1200B-parameter model? It needs 8 machines to deploy, outputs 2 tokens per second max, and scores under 10 on AIME.”

Team Member: “That one’s for reporting to upper management that Bisheng LLM achieves SOTA on AIME under the same parameter count.”


Supervisor: “Xiao Chen, I heard you told another joke about Bisheng LLM.”

Xiao Chen: “My joke had nothing to do with Bisheng LLM.”

Supervisor: “I don’t believe you. What was it?”

Xiao Chen: “I mocked Shengke APP for topping the rankings.”

Supervisor: “Still defiant, huh?”

Xiao Chen: “But that has nothing to do with Bisheng LLM.”



