
From the dry cleaner to the Queen Elizabeth Prize for Engineering, Fei-Fei Li defies the Silicon Valley tech myth and focuses on the dehumanizing risks of AI.

HyperAI超神经 · 2025-11-21 18:14
Ethical considerations behind Silicon Valley technology

In the spring of 2025, Fei-Fei Li won the "Queen Elizabeth Prize for Engineering" in recognition of her foundational contributions in the fields of computer vision and deep learning. As the core promoter of the ImageNet project, she pioneered a data-driven approach to visual recognition and proposed the concept of "human-centered" AI. Amid the commercialization wave in Silicon Valley, she has always remained vigilant about AI ethics, social values, and the risk of dehumanization. However, her minority status places her in a delicate position between scientific achievements and industrial practice, sparking ongoing discussions.

In the spring of 2025, Professor Fei-Fei Li, who holds a bachelor's degree in physics from Princeton University and a doctorate in computational neuroscience from the California Institute of Technology, won the Queen Elizabeth Prize for Engineering, widely regarded as the "Nobel Prize of engineering." The jury commended her foundational work in computer vision and deep learning, stating that her research "enabled machines, for the first time, to see the world in a way close to how humans do."

"Engineering is not just about computing power and algorithms; it's about responsibility and empathy," Fei-Fei Li emphasized in her acceptance speech. She pointed out that technological breakthroughs do not necessarily mean progress in understanding. In an era of accelerated AI development, she has always maintained a sense of vigilance: while algorithms are reconstructing language, images, and knowledge systems, they are also reshaping the power structure of society and human self-perception. The greatest risk of AI lies in "dehumanization," she wrote in the preface of her memoir The Worlds I See. "If artificial intelligence forgets human values, it will lose its meaning."

In the industrial narrative of Silicon Valley, Fei-Fei Li's dissenting voice is particularly rare. Instead of emphasizing scale and speed, she is more concerned about the social structure and ethical foundation behind intelligence: As machines increasingly understand humans, do humans still truly understand themselves? Fei-Fei Li's story is not just about scientific achievements but also about the humanistic discourse of a non-mainstream minority. How to bring AI technology back to a human-centered track is the question she really wants to answer beyond awards, honors, and praises.

Photo of Fei-Fei Li receiving the award

As an "outsider," she chose to break away from the grand narrative

In 1976, Fei-Fei Li was born in Beijing. Her father is a physicist, and her mother is an engineer. At the age of 16, she immigrated with her parents to New Jersey in the United States, speaking hardly any English. Life in the early years of immigration was hard: her parents made a living working in a dry cleaner and a restaurant, while she studied English and worked part-time in the restaurant and the family dry cleaner in her free hours to help support the household. "Life was really tough as an immigrant, or in an immigrant family," Fei-Fei Li recalled in an interview. This experience became the basis of her "immigrant consciousness" and sense of marginality: living in the West as an "other," she witnessed the prosperity of the American scientific and technological system while also experiencing the inequalities of its social structure. "Other" here draws on the concept of the Other/Otherness in Western philosophy, widely used in gender and cultural studies: those who are placed outside the mainstream by power structures, social narratives, and cultural constructions, and who are gazed at, defined, and marginalized through identities such as "immigrant" or "woman."

In 2000, Fei-Fei Li began a doctorate in computational neuroscience at the California Institute of Technology, focusing on the intersection of visual cognition and artificial intelligence (visual object recognition and the brain). This interdisciplinary training led her to realize that "vision" is not merely a matter of perception but of understanding: can machines, like humans, understand the world through experience, context, and memory? This question became the intellectual basis for the ImageNet project she later proposed.

Fei-Fei Li's doctoral dissertation

In 2007, while teaching at Princeton University, Fei-Fei Li and her research team launched the influential ImageNet project. In her 2009 paper "ImageNet: A Large-Scale Hierarchical Image Database," she noted that most computer vision algorithms at the time relied heavily on handcrafted features and small datasets, and the idea of "data-driven deep learning" was quite controversial. Events, however, proved her persistence right: as the technological paradigm of AI quietly shifted, the large-scale, data-driven approach once regarded in academia as a "risky bet" eventually became the mainstream consensus.

As VentureBeat pointed out in a report, the "data-driven paradigm" Fei-Fei Li championed changed the development path of computer vision and of the entire field of AI: "After the ImageNet competition in 2012, the media quickly noticed the trend of deep learning. By 2013, almost all computer vision research had shifted to neural networks."

Report on the development of deep learning by VentureBeat

Thus, when the wave of AI arrived, this scientist who had long struggled on the margins as an immigrant was finally pushed to the center of the era.

However, despite her research laying the foundation for the era of deep learning, Fei-Fei Li has never fully integrated into the technology narrative dominated by Silicon Valley. Her marginal identity gives her a unique perspective, allowing her to maintain a cool distance from the global AI frenzy.

In the mainstream narrative of Silicon Valley, AI is portrayed as the core issue of technological competition, capital games, and national strategies. However, Fei-Fei Li chose to re-examine this system from a humanistic and ethical perspective. She pointed out on many public occasions that the development of AI is being overly commercialized and militarized. Research resources and social imagination are concentrated on "larger models" and "stronger computing power," while the social consequences of technology are ignored.

In 2019, back at Stanford, Fei-Fei Li co-founded the Stanford Institute for Human-Centered Artificial Intelligence (HAI) with Marc Tessier-Lavigne, John Etchemendy, and others. The institute brings ethics, the public sector, and vulnerable groups into the design of AI systems, and states a core principle in its mission: AI must serve the broadest well-being of humanity.

In an interview published by Stanford HAI, Fei-Fei Li said bluntly: "I'm not a typical tech elite. I'm an immigrant, a woman, an Asian, and a scholar. These identities give me a unique perspective. The far-reaching impact of artificial intelligence on our future means we must maintain our autonomy. We must choose how to build and use this technology. If we give up that autonomy, we will fall into a free fall."

Interview with Fei-Fei Li by Stanford HAI

Opposing the Silicon Valley technology myth, Fei-Fei Li warns of the risk of "AI dehumanization"

Departing from the mainstream Silicon Valley narrative, Fei-Fei Li has continually advocated the idea of "AI4Humanity," bringing social values and ethics into technological development. She warns of the risk of "dehumanization" that technological progress may carry, and insists that AI must be human-centered and that technology must align with human needs and values.

In 2018, when Project Maven, a military drone image-recognition project jointly developed by Google and the US Department of Defense, came to light, Fei-Fei Li stated her opposition to the militarization of AI plainly in an email: "AI should benefit humanity. Google can't let the public think that we are developing weapons."

Report on Fei-Fei Li's AI4Humanity by Wired

In an interview with Issues, Fei-Fei Li was also outspoken about the potential risks of AI. "The impact of AI technology is two-sided. For society, this technology can cure diseases, discover drugs, find new materials, and create climate solutions. At the same time, it may also bring risks, such as the spread of false information and drastic changes in the labor market."

Interview report on Fei-Fei Li by Issues

In fact, to further contain the risks of AI, Fei-Fei Li has repeatedly stressed in public the need to establish an AI ethics oversight mechanism. In an interview with McKinsey & Company, she said that building an oversight mechanism grounded in the legal system is urgent: "Rationally speaking, this is necessary for humanity whenever it obtains new inventions and discoveries. This mechanism will be partly achieved through education. We need to let the public, policymakers, and decision-makers understand the power, limitations, and facts of this technology, and then integrate norms into it. The oversight framework will be implemented and enforced through legal guarantees."

Interview with Fei-Fei Li by McKinsey & Company

Meanwhile, to promote education's driving role in AI ethics oversight, Fei-Fei Li called on the Trump administration, at the Semafor Tech event held in San Francisco in May 2025, to reduce its intervention in university finances. As part of its immigration crackdown, the administration had cut billions of dollars in university research grants and revoked thousands of student visas. In response, Fei-Fei Li said that as global technological competition intensifies, sanctioning research institutions poses potential risks to the ethical development of AI.

"The public sector, especially higher education, has always been a key part of the US innovation ecosystem and an important part of our economic growth. Almost all the classic knowledge of artificial intelligence we know comes from academic research, whether it's algorithms, data-driven methods, or early microprocessor research," Fei-Fei Li said. "The government should continue to provide sufficient resources for higher education and the public sector to conduct this kind of innovative, unrestricted, and curiosity-driven research, which is crucial for the healthy development of our ecosystem and the cultivation of the next generation."

In addition, Fei-Fei Li noted bluntly that US visa quotas for citizens of certain countries have long made it difficult for many talented people to stay. "To be fair, I hope my students can get work visas and find a way to immigrate."

Report on the Semafor Tech event by Semafor

In short, in the face of the fanatical technological optimism in Silicon Valley, Fei-Fei Li always maintains a reflective stance, vigilant about the risk of "dehumanization" in AI. "Many people, especially in Silicon Valley, are talking about increasing productivity, but an increase in productivity does not mean that everyone can share in the prosperity. We need to recognize that AI is just a tool. The tool itself has no value. The value of the tool fundamentally comes from human values."

She firmly believes that a human-centered approach to artificial intelligence is necessary: "We need a human-centered framework with concentric responsibilities at the individual, community, and societal levels, to ensure the shared commitment that AI should improve human well-being."

Reading the opportunities and burdens of a complex identity through marginal experience

Facing her multiple marginal identities as a woman, an immigrant, an Asian, and a scholar, Fei-Fei Li has acknowledged that these experiences have deeply shaped her research and positions. In an interview with HAI, she said it is precisely these marginal experiences that make her see new technologies differently from children who grew up in more stable environments and started using computers at age five, and that allow her to keep recognizing the structural biases within the technological system.

"Scientific exploration is like an immigrant's exploration of the unknown. Both are on a journey full of uncertainties, and you must find your own guiding light. In fact, I think this is exactly why I want to engage in human-centered artificial intelligence. The immigrant experience, the dry cleaner business, and my parents' health - everything I've experienced is deeply rooted in humanity. This gives me a unique perspective and view." Fei-Fei Li said bluntly.

However, the insight her marginal identity affords comes with misunderstanding, controversy, and pressure. As one of the most influential women in the global technology field, Fei-Fei Li is often portrayed by the media as the "godmother of AI." Yet she has expressed her discomfort with this symbolization on many public occasions and is tired of being called a "female role model."

"I don't really like being called the godmother of AI," Fei-Fei Li said in a report by Axios. The technology industry's expectations of women are overly symbolic, leaving female scientists to carry a "role-based imagination": women are frequently invited to tell "inspirational stories" and asked to represent diversity, breakthroughs, and hope, but are not treated as ordinary scientists, researchers, or decision-makers who participate equally in core technology and strategy discussions.

"But I do want to affirm the contributions of women because they are often overlooked in the history of science. I hope there