
Liberal arts students really can make a comeback with the help of AI, but not in the way the feel-good stories circulating on WeChat Moments suggest.

爱范儿 2026-03-16 09:33
It is outdated to judge knowledge and disciplines by their superficial, so-called “usefulness”.

“Liberal arts students can also do AI.” “A comeback!” On the Chinese internet, the combination of liberal arts and AI has become a recurring theme.

Every once in a while, this label gets attached to someone, creating a short-lived wave of traffic: either a comeback story or a target of ridicule, depending on the mood in the comment section.

One label, three approaches

The latest case is Yang Tianrun, an AI entrepreneur with a background in finance who is building a multi-agent coordination platform. Claiming to be “a liberal arts student who can't write a single line of code”, he assembled a group of AI Agents and submitted a flood of code contributions to OpenClaw, one of the most popular open-source projects on GitHub.

He wanted to test a hypothesis: can someone with no technical knowledge whatsoever participate in a top-level open-source project purely by commanding AI?

The result: 134 PRs, of which 21 were merged and 113 rejected. The first few PRs were of decent quality and were recognized and merged by the maintainers. But once he told the Agent to accelerate, things quickly spun out of control: the Agent began mass-producing low-quality code like an assembly line and frantically @-mentioned the maintainers in the comments to press for reviews. OpenClaw's administrators stepped in to clean up, and GitHub subsequently adjusted its PR submission limits.

Notoriety is still popularity, all the more so when fame curdles into infamy. Yang Tianrun was packaged as a representative of “liberal arts students making a comeback”, and he himself seems happy to accept the role. In an interview with Pinwan, he said something to this effect:

Not knowing how to code is actually an advantage. AI is Van Gogh, and you're a small-time painter. What right do you have to interrupt Van Gogh halfway and tell him which brushstrokes to use?

It's chilling when you think it through. He interprets “not understanding the underlying system” as a form of liberation: you don't need to know what the system is doing, just tell it what you want. The result: when the Agent started spamming low-quality code, he couldn't even diagnose what had gone wrong, because he had no idea what he was operating.

He thought he was commanding Van Gogh; in fact, he was driving blind, in a car whose brakes he couldn't even locate.

The discussion around the incident has split into two extremes: either “liberal arts students can do AI” or “liberal arts students should stay away from AI”. The former is cast as a heroic leap across the divide; the latter, a joke about falling into it.

If our imagination of “liberal arts students doing AI” stops here, it is far too impoverished.

Why does Claude need a philosopher?

We've written before about a genuine liberal arts graduate inside Anthropic's offices who is deeply involved in building Claude. Her job is not to test whether it can write code or check its math, but to hold long conversations with it about values, the appropriate use of language, and “how to express oneself in the face of uncertainty”.

Amanda Askell, a Scot, turns 37 this year. Her career path is itself an unusual story: she started out in art and philosophy as an undergraduate, switched to pure philosophy, then earned a BPhil at Oxford and a PhD in philosophy from New York University. Her doctoral research was on Pareto principles in infinite ethics: what rules should ethical rankings follow when there are infinitely many moral agents, or an infinite span of time?

This sounds like the academic field most remote from Silicon Valley, yet she has worked on OpenAI's policy team and Anthropic's alignment team. Since joining Anthropic in 2021, she has headed its “character alignment” work, focusing on how Claude talks to humans, how it should express its stance when uncertain, and how it should judge conflicts of values. In 2024 she was included in the TIME100 AI list. The Wall Street Journal describes her daily work as “learning Claude's modes of reasoning and using prompts of more than 100 pages to correct its behavioral biases”. She is said to be the human who has had more conversations with Claude than anyone else on the planet.

Why does an AI company need a philosopher for this? The answer lies in some very specific technical choices.

In January this year, Anthropic released an 80-page document known as Claude's “constitution”. Media coverage fixated on the speculation about AI consciousness at the end of the document, which CEO Dario Amodei has, of course, also “hinted” at.

But what's more notable is its underlying logic: teaching an AI to understand why it should do something works better than merely telling it what to do. This is a technical judgment, that internalized values produce more reliable behavior than rule-following, and its intellectual foundation comes from someone who studied art and philosophy.

Amanda's case answers a question: can knowledge from a so-called “useless” discipline become the core capability of a technical system? The answer is not just yes; without her philosophical training, Claude's alignment problem could not have been solved by existing engineering methods alone.

Renamed disciplines

If Amanda's story shows that training in certain disciplines classified as “liberal arts” can become a core capability of AI, Lin Junyang's story points to something bigger: an entire discipline has been operating at the bottom of the large-model technology stack all along.

After Lin Junyang left Tongyi Qianwen, reports on the Chinese internet kept repeating the same line: he has a background in applied linguistics. Passed around a few times, the statement got distorted, and he was labeled a “liberal arts student”.

It is the same label that was attached to Yang Tianrun, but here it is a serious distortion.

Lin Junyang studied linguistics, an umbrella discipline whose branches span language teaching, language policy, and translation studies, and which also includes computational linguistics. Computational linguistics is, one might say, the direct forebear of natural language processing (NLP).

In the 1950s, Chomsky proposed formal grammars, and this theoretical tool directly gave birth to the syntactic parsing techniques of early NLP. Daniel Jurafsky and Christopher Manning, authors of the two most-cited textbooks in the NLP field, both have backgrounds in linguistics.


In other words, “a linguistics graduate doing NLP” is as orthodox as “a physics graduate doing chip design”. It is not a crossover at all.

The “sense of surprise” is entirely a product of the Chinese context. Thanks to the inertia of the liberal-arts/science split in the college entrance examination, “linguistics” has been shoved into the mental drawer labeled “liberal arts”. Yet the core methodologies of linguistics (formalization, statistical modeling, corpus annotation) are essentially engineering thinking. Lin Junyang's collaborators at Peking University, Sun Xu and Su Qi, are both NLP researchers, and when he joined DAMO Academy in 2019, he joined its NLP team. This is not the story of a liberal arts student straying into a technical field. It never was.

More worth exploring than “Lin Junyang is not a liberal arts student” is the actual role linguistics plays in the large-model technology stack. It runs far deeper, and far more invisibly, than most people think.

Take word segmentation. The first step for every language model processing text is to cut the input into basic units the model can handle. In English, spaces provide natural word boundaries, which looks simple. Chinese has no spaces, and where each boundary falls can change the meaning of a sentence.

Should “我在北京大学读书” (“I study at Peking University”) be segmented as “我 / 在 / 北京 / 大学 / 读书” (with “Peking” and “University” as separate units) or “我 / 在 / 北京大学 / 读书” (with “Peking University” as one unit)? This is not an engineering problem with a single right answer; it depends on how you understand the structure of Chinese vocabulary and its semantic units.
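To make the ambiguity concrete, here is a minimal sketch using the open-source jieba segmenter. This is an illustration only, not the tokenizer any Qwen model actually uses, and the exact output depends on jieba's dictionary version:

```python
import jieba

sentence = "我在北京大学读书"  # "I study at Peking University"

# Precise mode: jieba commits to one segmentation,
# treating 北京大学 (Peking University) as a single unit.
print(list(jieba.cut(sentence)))
# Typical output: ['我', '在', '北京大学', '读书']

# Full mode: lists every dictionary word found in the string,
# exposing the overlapping candidates 北京 / 北京大学 / 大学.
print(list(jieba.cut(sentence, cut_all=True)))
# Typical output: ['我', '在', '北京', '北京大学', '大学', '读书']
```

Which segmentation a system should commit to is exactly the kind of choice that requires a view on what counts as a semantic unit in Chinese.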

At the end of 2024, researchers published a paper specifically on improving the Arabic tokenization efficiency of the Qwen models, because general-purpose tokenizers are markedly less efficient on languages of this type. The Qwen series performs well across languages precisely because it does not treat every language as a variant of English; its design choices rest on an understanding of the structural differences between languages.
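As a rough illustration of what “tokenization efficiency” means, the sketch below counts tokens per character for a few sample sentences with a publicly available Qwen tokenizer. The checkpoint and the sentences are our own choices for illustration; the paper's actual methodology is certainly more involved:

```python
from transformers import AutoTokenizer

# One public Qwen checkpoint, chosen as an example.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

samples = {
    "English": "Language models process text as sequences of tokens.",
    "Chinese": "语言模型把文本切分成一串词元来处理。",
    "Arabic":  "تعالج النماذج اللغوية النص كسلسلة من الرموز.",
}

for lang, text in samples.items():
    n_tokens = len(tokenizer.encode(text))
    # Tokens per character: a crude but common efficiency metric.
    # More tokens for the same content means a less efficient encoding.
    print(f"{lang}: {n_tokens} tokens, {n_tokens / len(text):.2f} tokens/char")
```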

Another example is feedback alignment. In the RLHF process, annotators must judge which of two model answers is “better”. The judgment looks subjective, but behind it stands a framework linguistics has been studying for decades: pragmatics.

When annotators evaluate a “good answer”, they are really judging the cooperative principle (does the answer give enough information, but not too much?), conversational implicature (does it capture what the user actually wants to ask, not just what was literally asked?), and contextual appropriateness (is this the right way to say this content in this situation?).

“Helpful, Harmless, Honest”: this widely used alignment standard is essentially an engineering translation of the basic principles of pragmatics.
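Here is a hypothetical sketch of how those three pragmatic criteria could be written down as an annotation rubric. The field names and scoring scheme are invented for illustration; they do not describe any lab's real labeling pipeline:

```python
from dataclasses import dataclass

@dataclass
class PreferenceJudgment:
    """One RLHF preference comparison, scored on three pragmatic axes (1-5)."""
    prompt: str
    answer_a: str
    answer_b: str
    # Cooperative principle: enough information, but not too much.
    informativeness_a: int
    informativeness_b: int
    # Conversational implicature: does it answer the real question?
    relevance_a: int
    relevance_b: int
    # Contextual appropriateness: the right register for this situation.
    appropriateness_a: int
    appropriateness_b: int

    def preferred(self) -> str:
        """Aggregate the three pragmatic scores into a preference label."""
        score_a = self.informativeness_a + self.relevance_a + self.appropriateness_a
        score_b = self.informativeness_b + self.relevance_b + self.appropriateness_b
        return "a" if score_a >= score_b else "b"
```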

Lin Junyang's academic trajectory also shows a distinctly linguistic research style. OFA (One For All), which he led, was published in 2022 at ICML, a top machine-learning conference, and has been cited nearly 1,500 times. Its core idea is not to build a dedicated solution for each task, but to unify cross-modal tasks such as image generation, visual grounding, image captioning, and text classification under a sufficiently general sequence-to-sequence framework.

From OFA to Qwen-VL (more than 2,200 citations), then to Qwen2.5 and the latest 3.5, one clear thread runs through: rather than invent a dedicated solution for every problem, find a general framework good enough to solve them all within it.
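A toy sketch of that idea: cast heterogeneous tasks into a single text-in, text-out interface, so one model can handle them all. The instruction strings below echo the style of OFA's prompts but are simplified assumptions, not its actual format:

```python
def to_seq2seq(task: str, payload: dict) -> tuple[str, str]:
    """Map a task-specific example onto a generic (input, target) text pair."""
    if task == "caption":
        return (f"what does the image describe? {payload['image_ref']}",
                payload["caption"])
    if task == "grounding":
        return (f"which region does the text '{payload['query']}' describe? "
                f"{payload['image_ref']}",
                payload["region_tokens"])
    if task == "classify":
        return (f"is the sentence '{payload['text']}' positive or negative?",
                payload["label"])
    raise ValueError(f"unknown task: {task}")

# One model, one interface: every task is just text in, text out.
inp, tgt = to_seq2seq("classify", {"text": "这部电影太好看了", "label": "positive"})
print(inp, "->", tgt)
```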

Covering the most phenomena with the fewest rules: this has been the core pursuit of linguistics for decades. The whole academic ambition of generative grammar is to find a finite rule system that can generate infinite linguistic expressions. OFA's architectural philosophy is isomorphic to it: writing a dedicated rule for every linguistic phenomenon is unrealistic; find an underlying framework that unifies them instead.

Lin Junyang excels at large models not because a linguistics background “can” do AI, but because linguistic training shaped a particular academic taste, a preference for unification and formalization. That taste happens to be a core competitive advantage in the era of large models.

Invisible foundation, visible needs

Three people, the same label, three completely different paths.

Yang Tianrun doesn't understand the underlying system and treats “not understanding” as an advantage, with loss of control as the result. This is the empty-shell version of “liberal arts students doing AI”: the label generates traffic, but no disciplinary training is actually at work. His story shows exactly what happens when “liberal arts student” is nothing but a marketing label.

Amanda Askell's philosophical training supplies the core methodology for the alignment problem. Without her, Claude wouldn't