Catch a glimpse of China's AI future through 1,500 projects.
Right now, pick up your phone, open an e-commerce shopping website, and search for "chargers". Most likely, the first recommended brand that pops up will be Anker Innovations.
If you take a fancy to a product and want to compare prices or ask about its specs, you'll click through to customer service for an online inquiry.
What you may not know is how much AI large model capabilities are involved in just these two short operations.
At the 2025 Amazon Web Services (AWS) China Summit on June 19th, Gong Yin, the Chief Information Officer of the well-known intelligent hardware technology brand Anker Innovations, shared how, with the help of AWS technology, they use AI to innovate intelligent products and improve the company's operational efficiency.
Anker Innovations and AWS have established a high-quality real-time knowledge base large language model system and built over 50 AI Agents; they've built a multi-modal AIGC content production platform named Vela; they've built an intelligent advertising system integrated with the Amazon SageMaker platform, with in-station advertising coverage exceeding 90%; they've carried out product development and upgrades through deep learning algorithms and AI large models...
Advertising placement, creative generation, customer-service responses, product upgrades... Is any of this exotic, cutting-edge technology? Not at all.
But are they useful? Extremely useful.
Currently, Anker Innovations' content production platform Vela has produced over 1.2 million images; the AI resolution rate for customer service tickets exceeds 70%; over 20% of in-station advertisements are fully automated and managed by AI. On AIME, Anker Innovations' internal company-level AI capabilities platform, over 300 active AI Agents have accumulated, and AI applications built through AIME have been used tens of millions of times.
It's neither the so-called "singularity has arrived" nor an "Aha Moment". It's simply about truly transforming daily needs into intelligent business operations and making AI do real work: not making empty promises, but solving problems.
This is not an isolated case.
At the 2025 AWS China Summit, a large number of rich scenarios where generative AI is implemented in enterprises were showcased. Besides Anker Innovations, there were also TCL, WPS, Huolala, Kingdee, Hehe Information, Fosun Pharma, and so on.
Chu Ruisong, President of AWS Greater China Region, mentioned: "In my communication with many customers, I've observed that more and more enterprises want to embrace AI."
(Chu Ruisong, President of AWS Greater China Region; Image source: Amazon Web Services)
Across the more than 1,500 generative AI projects AWS has helped customers bring to mass production, the rate of projects successfully moving from Proof of Concept (PoC) to mass production is as high as 82%, exactly double the industry average of 41% reported in Gartner's 2024 Enterprise AI Development Task Survey.
Today, no one still asks whether generative AI is just a laboratory proof of concept or a market gimmick.
According to the "2024 AI Index Report" of the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI), the global AI large model industry is currently in a stage of accelerated technological innovation and commercialization. Enterprises such as OpenAI, Google, Microsoft, and AWS hold leading positions, dominating industry adoption of generative AI, natural language processing, and related fields worldwide. Frontier companies such as Meta and DeepSeek are extending their ecosystem influence by continuing to explore open-source models.
What everyone is asking is: how do we implement AI and large models? How do we commercialize them? How can they create value for us?
What opportunities are there for AI in 2025?
Where do unsuccessful AI projects "die"?
In June 2023, two years ago, AWS established a department called the "Generative AI Innovation Center" to help customers implement and deploy generative AI applications in production.
Bear in mind that at that time, the first generation of ChatGPT had been public for only half a year, and the whole world, still dazzled by large model technology, was busy training one foundation model after another.
In the more than two years since then, the Generative AI Innovation Center of AWS has gradually gathered over 350 application scientists, data scientists, developers, industry experts, strategic consultants... from around the world.
They rush to the front line of the "battlefield" every day, communicating face-to-face with customers from the gaming, office, finance, logistics, and pharmaceutical industries to work out what the most suitable AI solution is right now, where the problems lie, how to solve them, and what strategies are available.
Where do failed projects "die"? Where are the problems?
The most common problem is unclear requirements for the project's final outcome, or a poor choice of scenario. For example, enterprises sometimes discover that their problems don't need generative AI at all.
Another common problem is that sometimes after an initial attempt at generative AI, enterprises find that the actual implementation cost is much higher than expected.
A third common problem is that enterprises, out of curiosity, treat generative AI projects as exploratory attempts, with a mindset of "let's see whether this trendy technology is feasible", without defining at a strategic level that the technology should become a channel for differentiated competition and breakthrough innovation.
Ultimately, it's very likely that after completing the PoC, the enterprise will shift its resource focus to other "strategic priorities".
(Shaown Nandi, Global Technology General Manager of Amazon Web Services)
At the China Summit, Shaown Nandi, Global Technology General Manager of AWS, also identified three points: generative AI is suited to improving employee productivity, optimizing business operations, and enabling innovation in products, services, and even business models. This means that enterprises need to determine from the very beginning what purpose they have for riding the super wave of generative AI.
Discover the methodology for AI implementation from 1,500 practices
After seeing the failures, how do successful generative AI projects get implemented?
In summary, it's actually quite simple: scenario → technology → mass production → feedback. But just as "the devil is in the details", there's a possibility of making mistakes in each link.
Scenario assessment
First and most importantly, it's the enterprise's assessment of AI application scenarios.
Undoubtedly, today no enterprise CEO is indifferent to the concepts of "AI" and "large models". But although the need for intelligent transformation, cost reduction, and efficiency gains is urgent, when pressed, few can immediately answer: Why do we use AI? What practical problems can it solve? How much profit can it actually bring?
Not all problems need to be solved with generative AI. For example, technologies such as face recognition and OCR already have mature deep neural network algorithms that are simpler, cheaper, and more mature than large models, and that already fit those scenario requirements.
On the contrary, the essence of AI large model technology is Next Token Prediction, which is good at creative and intelligent content generation and human - machine interaction.
AWS proposes that before launching a generative AI project, enterprises need to conduct a comprehensive assessment from seven key dimensions (team, timeline, risk, data, ROI, budget, feasibility) to ensure the project's implementation.
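As an illustration only (the scoring scale, weights, and go/no-go threshold below are hypothetical, not AWS's published methodology), such a seven-dimension assessment can be sketched as a simple scorecard:

```python
# Hypothetical scorecard for the seven assessment dimensions.
# The 1-5 scale and the pass threshold are illustrative assumptions.

DIMENSIONS = ["team", "timeline", "risk", "data", "roi", "budget", "feasibility"]

def assess_scenario(scores: dict[str, int], threshold: float = 3.5) -> tuple[float, bool]:
    """Average 1-5 scores across all seven dimensions; pass if at or above threshold."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return avg, avg >= threshold

# A candidate project scored by its stakeholders (invented numbers).
candidate = {"team": 4, "timeline": 3, "risk": 4, "data": 5,
             "roi": 4, "budget": 3, "feasibility": 5}
score, go = assess_scenario(candidate)  # score = 4.0, go = True
```

The point of forcing a number onto every dimension is that a project weak on even one axis (say, data readiness) gets surfaced before any budget is committed.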
For example, although they are all AWS customers, each company purchases different services and applies them in different scenarios.
For instance, Fosun Pharma mainly uses AWS's generative AI technology and the intelligent medical content generation center solution; TCL uses AWS to achieve product innovation and iteration and utilizes AWS's global infrastructure for global development; Hehe Information has built an open - source AI Agent terminal management tool called Chaterm.AI with the help of AWS to assist developers in efficient innovation.
Only with clear scenario requirements and ROI modeling can an AI project avoid becoming an abandoned, half-finished effort.
Technology selection
Technology selection is closely related to scenario assessment. Simply put, enterprises need to find the AI model that is most suitable for the current application scenario, not the most popular one at present.
Around the 2025 Spring Festival, DeepSeek shot to popularity almost overnight, nearly eclipsing the "battle of a hundred models". But DeepSeek is not omnipotent, just as GPT didn't conquer the world even at its peak.
In real industrial application scenarios, what enterprise decision - makers ultimately care about is the business growth and commercial value brought by generative AI. Whether it's DeepSeek, Claude, Nova, or Gemini, they are all parts of helping enterprises achieve commercial value.
On June 19th, Base44, an Israeli enterprise-application AI startup, was sold for $80 million in cash. Founded in December 2024 with only 9 employees and no external financing, the company had reached 250,000 users and a net profit of $189,000 by its sixth month: an AI startup miracle.
(Base44 official website; Image source: Base44)
In an interview, Base44 founder Maor Shlomo revealed that when building the product's underlying AI capabilities, the team judged the cost of OpenAI's models too high after multiple evaluations, and ultimately chose to access the Claude models through the AWS platform.
Indeed, most enterprise customers don't really care which large model company their "black cat or white cat" comes from, as long as the models are usable, good, cost-effective, and state of the art. This aligns with AWS's long-standing "Choice Matters" strategy in the large model field.
According to the data in the Jefferies & Company report, currently, only 3% of enterprises use only one language model provider, 34% use two, 41% use three, and 22% use four. And according to Gartner's forecast data, by 2027, 80% of Chinese enterprises will choose a multi - model strategy.
After all, the next disruptive breakthrough in large model technology may occur in DeepSeek, Manus, or other unexpected places.
Mass production optimization
In the process of a generative AI project moving from PPT to large-scale implementation, mass production optimization is a crucial stage and also the stage with relatively the most "pitfalls".
Scenario assessment and model selection directly shape the project's cost structure, which in turn determines whether the project can truly land and create value. And during the mass production optimization stage, the model customization and tuning strategy directly affects the project's cost, performance, and overall results.
Frankly speaking, this stage is "dirty, hard, and tiring" work, but it's also an unavoidable step on the road to implementation.
Early-stage data storage, annotation, and cleaning; mid-stage model quantization, deployment, and prompt engineering; late-stage cloud reserved instances, caching mechanisms, and provisioned throughput: all are typical multi-link engineering problems that require experienced data engineers and AI engineers to balance cost, performance, and efficiency.
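One of the cost levers in this stage, caching, can be sketched minimally: hash the normalized prompt and reuse a previous completion when the same request recurs. This is a toy stand-in for managed prompt-caching features, not any vendor's actual implementation.

```python
import hashlib

class PromptCache:
    """Toy response cache: identical prompts skip the (expensive) model call."""

    def __init__(self):
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prompt: str) -> str:
        # Normalize whitespace so trivially different prompts share a key.
        return hashlib.sha256(" ".join(prompt.split()).encode()).hexdigest()

    def complete(self, prompt: str, model_call) -> str:
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = model_call(prompt)  # only pay for the model on a miss
        self._store[key] = result
        return result

cache = PromptCache()
fake_model = lambda p: p.upper()  # stand-in for a real model endpoint
cache.complete("summarize this ticket", fake_model)
cache.complete("summarize  this ticket", fake_model)  # extra space: still a cache hit
```

In production the same idea shows up at several layers (semantic caches, KV-cache reuse, CDN-style response caches), but the cost logic is identical: every hit is a model invocation you didn't pay for.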
For example, with the support of AWS experts, engineers of a domestic cultural industry Internet group combined the prompt optimization function of Amazon Bedrock with Claude 3.5 Sonnet, simplifying the prompt engineering process. While saving Token consumption, they increased the accuracy rate of character dialogue attribution from 70% to 90%, greatly optimizing the mass production performance of the AI project.
In addition, in the data processing stage, Huolala used AI models including Amazon Nova to process its existing customer service dialogue data, storing the unstructured data as graph-structured data keyed to dialogue intents. This lets Huolala layer emerging frameworks such as CID-GraphRAG on top of Retrieval Augmented Generation (RAG), significantly improving the performance of its AI features.
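A much-simplified sketch of the idea behind intent-keyed retrieval (the dialogue snippets and intent labels are invented, and real CID-GraphRAG is considerably more involved): index each historical dialogue under an intent node, then retrieve grounding context by the query's intent rather than by raw text similarity alone.

```python
from collections import defaultdict

# Toy intent graph: intent node -> past dialogue snippets (invented data).
intent_graph: defaultdict = defaultdict(list)

def index_dialogue(intent: str, snippet: str) -> None:
    intent_graph[intent].append(snippet)

def classify_intent(query: str) -> str:
    # Stand-in for a model-based intent classifier: crude keyword matching.
    if "refund" in query.lower():
        return "refund_request"
    if "driver" in query.lower():
        return "driver_eta"
    return "other"

def retrieve(query: str) -> list:
    """Fetch prior dialogues sharing the query's intent, to ground the LLM's answer."""
    return intent_graph[classify_intent(query)]

index_dialogue("refund_request", "User asked for a refund after a cancelled move.")
index_dialogue("driver_eta", "User asked when the driver would arrive.")
context = retrieve("How do I get a refund?")  # retrieves the refund snippet
```

The benefit over plain similarity search is that dialogues with very different surface wording but the same intent still land in the same retrieval bucket.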
Result monitoring
This is the easiest-to-understand link in the entire generative AI project implementation process, but it's also the most easily overlooked one.
What a generative AI project fears is not that the model isn't large enough or the computing power isn't strong enough, but that suddenly, in the middle of the process, it's found that "the investment is flowing like water, but the output is a mystery".
Especially given that a single training run for a large AI model can easily cost hundreds of thousands of dollars, a mature result monitoring system is like an enterprise's intelligent dashboard. Its "traffic lights" tell decision-makers whether the system is operating normally, and its "real-time navigation" confirms the AI project is on the right track and flags when adjustments or redirections are needed.
Specifically, the result monitoring of the project needs to include evaluation indicators in three major dimensions: quality, performance, and application layer. Real - time "navigation" of the project is carried out through data such as system latency, throughput, hallucination degree, user feedback, and dialogue length.
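As a minimal sketch (the log fields and metric choices are illustrative, not a specific vendor's schema), the monitoring described above boils down to aggregating per-request logs into dashboard numbers:

```python
def monitor(requests: list) -> dict:
    """Aggregate per-request logs into dashboard metrics (illustrative only)."""
    n = len(requests)
    latencies = sorted(r["latency_ms"] for r in requests)
    p95_idx = max(0, int(0.95 * n) - 1)  # simple nearest-rank p95
    return {
        "requests": n,
        "p95_latency_ms": latencies[p95_idx],        # performance dimension
        "hallucination_rate": sum(r["hallucinated"] for r in requests) / n,  # quality
        "avg_user_rating": sum(r["rating"] for r in requests) / n,           # application
    }

# Invented per-request logs: latency, a hallucination flag, a 1-5 user rating.
logs = [
    {"latency_ms": 120, "hallucinated": False, "rating": 5},
    {"latency_ms": 300, "hallucinated": True,  "rating": 2},
    {"latency_ms": 150, "hallucinated": False, "rating": 4},
    {"latency_ms": 180, "hallucinated": False, "rating": 5},
]
metrics = monitor(logs)
```

Each returned field maps onto one of the three dimensions the text names (quality, performance, application layer); wiring alert thresholds to these numbers is what turns the dashboard into the "traffic lights" described above.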
It's hard to imagine how many "pitfalls" the AWS team has stepped into to summarize this valuable experience.
Generative AI in 2025: Say goodbye to PPT and embrace the productivity revolution
Since the second half of 2024, "implementation" has become the mainstream term in the AI industry.
According to a survey by The Information of 50 global leading enterprises, 38 large companies have adopted OpenAI models, 17 have adopted Gemini models, and 11 have adopted Claude. Together, these three dominate the field.
But the world of foundation models is changing.