
How can overseas-oriented enterprises build a four-dimensional moat of "computing power + data + ecosystem + compliance"?

即氪出海 (36Kr Going Global) · 2025-04-29 15:15
When AI companies go global, the real innovation lies beyond the mainstream view.

In the context of an accelerating global AI competition, taking AI businesses overseas is becoming the choice of a growing number of enterprises. In terms of technological capability, product, and commercialization path, many Chinese enterprises are already internationally competitive. However, going global also brings challenges such as legal and cultural barriers.

On April 18th, 36Kr, TiDB, and GMI Cloud jointly hosted a high-end closed-door meeting titled "New Landscape of AI Going Global: From Reaching the World to Integrating into the World". The event invited overseas-expanding enterprises from different fields to share industry insights and experience on AI going global, with in-depth discussion of the current AI global expansion landscape, opportunities and challenges, infrastructure, and how to acquire overseas users.

AI Global Expansion Landscape and Opportunities

Guest: Luo Wei, Partner of the Cross-border Fund at Yingdong Capital

Keywords: Technological Innovation, Localization, Language-based Arbitrage

The current global AI expansion landscape can be roughly divided into North America, Europe, and Southeast Asia. Luo Wei believes that combining technological innovation with localization is both an opportunity and a challenge that every overseas-expanding enterprise must face.

Today, access to the underlying technology has been largely equalized. Meanwhile, with the democratization of computing power, cloud service providers have set up local operations around the world, lowering both the technical threshold and costs. Localization, by contrast, remains one of the biggest challenges: too many enterprises have been copied and squeezed out during global expansion because they neglected it. Luo Wei suggests that enterprises should not hesitate to invest in localization; it is not an option but a necessity.

The opportunity for global expansion lies in "arbitrage between different language systems". Users in Japan and South Korea have payment ability similar to users in Europe and the United States, but competition there is relatively lower. AI applications already validated in the European and American markets can be quickly localized for Japan and South Korea, a good entry point for entrepreneurs who want to generate revenue quickly and build a "cash cow".

Currently, overseas-expanding enterprises mainly include technology solution providers, vertical scenario service providers, and infrastructure providers; the profile of end users has not yet taken shape. Luo Wei reminds entrepreneurs not to be intimidated by the so-called "overseas experience" of large companies. In many cases, entrepreneurs and large companies are on the same starting line, and entrepreneurs may even understand local users better. This is a real opportunity for entrepreneurs and small and medium-sized teams.

Ensuring Inference Computing Power under the Global AI Application Boom

Guest Speaker: King Cui, President of GMI Cloud, Asia-Pacific

Keywords: AI Infrastructure, GPU Cloud, Inference Engine, Elastic Scaling

King observed that AI applications have entered a stage of rapid growth since last year, driven by foundation models iterating every 3-6 months and by multimodal models' markedly better understanding of the physical world and content controllability. At the same time, demand is gradually shifting from training to inference, and the operating cost of inference models keeps falling, by more than 90% a year. King believes 2025 is set to become the real "Year of AI".

Currently, Chinese AI applications are accelerating their global expansion and gradually reaching scale. As of the end of 2024, there were 1,890 large-scale AI applications globally, 356 of them from China, of which 143 were overseas-oriented products, accounting for over 40%. Almost all of these overseas-oriented applications obtain large-model inference through API calls, with little pre-training; from an ROI perspective, this approach is more cost-effective.

The explosive global growth of AI applications brings four main challenges:

First, it is necessary to provide GPU services in many regions around the world.

Second, it is necessary to provide elastic computing power that matches user growth and business growth plans.

Third, AI application enterprises need cost-effective inference APIs to meet inference demand.

Fourth, it is necessary to keep AI application services stable in the face of large-scale online user traffic.

Therefore, when expanding overseas, AI applications must choose GPU services that can deploy data centers globally, support elastic scaling, remain stable when large numbers of users flood in, and rest on cost-effective underlying infrastructure. To this end, beyond high-performance GPU cloud services, GMI Cloud applies two self-developed engines to improve the stability of its GPU cloud services and optimize model inference performance.

The two engines are the Cluster Engine and the Inference Engine. The Cluster Engine is a private cloud platform that helps enterprises train and customize models; the Inference Engine is an inference platform that helps enterprises expand globally, with four advantages: flexible global scheduling; zero-code visual deployment; higher cost-effectiveness based on the latest GPUs; and end-to-end full-process monitoring and service guarantees.

It is worth noting that on cost-effectiveness, King emphasizes cost per unit of performance. Taking the operation of DeepSeek-FP4 on NVIDIA's official H100, H200, and B200 as an example, the throughput of the optimized H200 is more than six times that of the H100, and the throughput of the B200 is 25 times that of the H100. The higher-end the chip, the lower the overall inference cost, which means that purchasing advanced GPU services can genuinely reduce costs and increase efficiency.
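As a back-of-the-envelope illustration of "cost per unit of performance", the sketch below divides an hourly price by relative throughput. Only the throughput multiples (roughly 6x for the H200 and 25x for the B200 over the H100) come from the talk; the hourly prices are hypothetical placeholders, not real GMI Cloud or NVIDIA pricing:

```python
# Back-of-the-envelope cost-per-performance comparison.
# Throughput multiples (H200 ~6x H100, B200 ~25x H100) are from the talk;
# hourly prices are hypothetical placeholders for illustration only.
gpus = {
    "H100": {"hourly_usd": 2.0, "relative_throughput": 1.0},
    "H200": {"hourly_usd": 3.5, "relative_throughput": 6.0},
    "B200": {"hourly_usd": 8.0, "relative_throughput": 25.0},
}

def cost_per_unit(spec: dict) -> float:
    """Hourly price divided by relative throughput: lower means cheaper inference."""
    return spec["hourly_usd"] / spec["relative_throughput"]

for name, spec in gpus.items():
    print(f"{name}: {cost_per_unit(spec):.3f} hypothetical $/hour per unit of throughput")
```

Even with a much higher hourly price, the newer chip comes out cheaper per unit of throughput, which is the sense in which "the higher-end the chip, the lower the overall inference cost".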

The 'Three Pillars' of AI - Native Applications

Guests: Huang Dongxu, Co-founder and CTO of TiDB; Alex Fan, Vice President of TiDB, Asia-Pacific

Keywords: AI-Native Applications, Databases, Marketplace-style Collaboration, Agents

We are now entering the era of AI Agents. Huang Dongxu focused on sharing his thinking about AI-native applications. He believes RAG is outdated, and that an Agent with memory is a more AI-native product form than a RAG-based one. Truly AI-native applications need three key pillars:

Large models: Whether closed-source or open-source, current large models are capable enough to complete most daily tasks;

MCP (Model Context Protocol): A standard protocol for connecting large models with external capabilities;

Databases: An often-overlooked but crucial part. The current problem is that traditional databases, data lakes, and similar systems are designed for humans rather than for LLMs.

If a database were designed from scratch from the perspective of large models, Huang Dongxu believes it should: accept raw data as input so that large models can provide the most personalized service; store diverse data for different individuals and interact with large models efficiently; and offer better access interfaces.

In this process, the value of SQL is magnified once again. Unlike natural language, he argues, SQL is free of hallucination, standardized, and logically clear, making it the most stable bridge between large models and the real world. Full-text search, vector search, and structured queries can all be completed under a single SQL interface.
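As a minimal sketch of the "single SQL interface" idea, the snippet below runs one SQL statement that combines a structured filter, a keyword match, and vector-similarity ordering. It uses SQLite with a user-defined cosine-distance function purely for illustration; engines such as TiDB expose native vector functions instead, and the table name, schema, and data here are hypothetical:

```python
import json
import math
import sqlite3

def cosine_distance(a_json: str, b_json: str) -> float:
    """Cosine distance between two JSON-encoded vectors (illustrative UDF)."""
    a, b = json.loads(a_json), json.loads(b_json)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

conn = sqlite3.connect(":memory:")
conn.create_function("cosine_distance", 2, cosine_distance)
conn.execute("CREATE TABLE memories (user_id INT, note TEXT, embedding TEXT)")
conn.executemany(
    "INSERT INTO memories VALUES (?, ?, ?)",
    [
        (1, "likes hiking in the mountains", "[0.9, 0.1]"),
        (1, "prefers dark-mode UI in every app", "[0.1, 0.9]"),
        (2, "likes hiking too", "[0.8, 0.2]"),
    ],
)

# Structured predicate (user_id), keyword predicate (LIKE), and vector-similarity
# ordering all live in one SQL statement.
query_vec = "[0.95, 0.05]"
rows = conn.execute(
    """
    SELECT note
    FROM memories
    WHERE user_id = ? AND note LIKE '%hiking%'
    ORDER BY cosine_distance(embedding, ?)
    LIMIT 1
    """,
    (1, query_vec),
).fetchall()
print(rows[0][0])  # "likes hiking in the mountains"
```

The point is not the toy distance function but the shape of the query: a large model emitting this one statement gets filtering, search, and ranking from a single, hallucination-resistant interface.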

Currently, Agents interact through the A2A protocol. Huang Dongxu believes that in the future, communication between Agents should happen not through inefficient natural language but through shared context memory.
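A toy sketch of that shared-context idea (the SharedContext class and all key names are hypothetical, not part of A2A or any real protocol): two agents coordinate by reading and writing structured facts in a shared store instead of parsing each other's prose:

```python
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """A shared memory both agents can read and write; keys are structured facts."""
    facts: dict = field(default_factory=dict)

    def write(self, key: str, value) -> None:
        self.facts[key] = value

    def read(self, key: str):
        return self.facts.get(key)

ctx = SharedContext()

# Agent A records a structured result instead of phrasing it as a sentence.
ctx.write("flight.booking_id", "AB1234")
ctx.write("flight.arrival_city", "Tokyo")

# Agent B consumes the structured fact directly; no natural-language parsing.
hotel_city = ctx.read("flight.arrival_city")
print(f"Booking hotel in {hotel_city}")
```

The efficiency claim in the talk maps onto this shape: the second agent reads exactly the field it needs rather than re-interpreting a message.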

Going Global Is Not About Copying US Experience

Can Chinese enterprises copy the global expansion path of US enterprises? Alex gave a conclusion based on TiDB's overseas expansion experience: No.

In the past, US enterprises organized their global markets by geography and industry: a China region, a US region, a Japan region, each further divided into finance, gaming, e-commerce, and so on. This path no longer applies to Chinese enterprises today, for many reasons, including differences in culture, organization, supply chain, and technology models. More importantly, the global market structure has changed profoundly, especially in the ToB sector. Chinese enterprises must find a new path.

In Alex's view, the key mindset for Chinese enterprises going global is marketplace-style collaboration. He emphasizes entering the "global collaboration" marketplace and becoming part of the global architecture.

How to Ensure Compliance

Guest Speaker: Liu Tianfeng, International Partner at the Herbert Smith Freehills joint operation office

Keywords: Compliance Strategy, Data, Intellectual Property

In the current context, how to deal with the complex and ever - changing legal regulatory systems in different countries and regions is a key challenge that enterprises going global must face directly.

Liu Tianfeng pointed out that, from a legal and compliance perspective, AI business can be divided into five layers: hardware foundations such as chips and data storage; infrastructure such as computing power platforms; core technologies such as foundation-model algorithms; the various actual AI applications; and the end users who directly use AI products. Compliance requirements and risk points vary across positions in the industry chain.

Legal areas to watch during global expansion include, but are not limited to: personal information protection, consumer rights protection, network and data security, labor law, industry-specific regulation (such as medical, financial, and autonomous driving), technology ethics, anti-monopoly regulation, and intellectual property compliance. On data use in particular, enterprises must clarify whether legal authorization has been obtained and how personal data is handled; on intellectual property, they must consider whether the data used for model training is protected by copyright.

Facing the legal challenges of cross-border operations, Liu Tianfeng proposed a systematic compliance management strategy: establish an AI supervision team with clear resources and responsibilities, strengthen internal communication to identify potential risks, identify the regulations applicable in each market, formulate internal AI policies to form unified operating norms, integrate the AI compliance strategy with ESG and data strategies, draw up standardized clause templates, conduct AI impact assessments, and organize employee training.

Different Strategies for Different Industries Going Global

In this wave of AI global expansion, some enterprises have already achieved remarkable results. Using AI products as an entry point, they expand deep into the market through traffic operations, open-source communities, and more, and have successfully seized this opportunity.

AI Office: Occupying the Traffic Entrance

Guest: Zhang Lei, Co-founder of PixelBloom (AiPPT.com)

Keywords: AI Office, Traffic

PixelBloom was founded in 2018. Its core product, AiPPT.com, reached 10 million registered users within 12 months on the promise of "one sentence, one click, a PPT in one minute", and now has over 20 million users.

The company's core competitiveness lies in re-engineering the traditional office workflow, targeting a global market of 1.5 billion white-collar workers worth 400 billion RMB.

For example, the main product AiPPT.cn turns PPT production, which once required professional design skills, into an AI-driven one-stop service for the 95% of users who are not professional designers. Through a "disruptive innovation" strategy, the team has compressed the traditional production process into intelligent dialogue-based generation, deeply integrating large-model technology to deliver functions such as automatic layout and theme switching. The product currently ranks second on the global AI tools list and ninth on the domestic overall list, and is the only product in the top ten with a mature commercialization model.

In market expansion, PixelBloom integrates resources into a complete commercial closed loop, from occupying the traffic entrance to penetrating vertical scenarios: on the ToC side it builds a tool for the general public; on the ToB side it cooperates with leading platforms such as Doubao and Zhipu, occupying the traffic market with back-end capabilities; on the partner side it is deeply embedded in terminal scenarios such as the government and enterprise edition of DingTalk and Honor mobile phones, covering 80 million employees of state-owned and central enterprises.

Global layout is the company's core strategy for 2025. AiPPT.cn (domestic) / AiPPT.com (overseas) has launched nearly 20 language versions, and overseas market share is approaching that of the domestic market. Going forward, the company will accelerate global expansion through local operations and cooperation with ecosystem partners.

3D Large Model Platform: How Tripo Builds a Creator Ecosystem

Guest: Sienna, CMO of VAST

Keywords: 3D Large Models, Vertical Creator Community, Influence of Open-Source Technology

VAST was founded in March 2023 and is an AI company dedicated to the research and development of general-purpose 3D large models. Its goal is to build a 3D UGC content platform on top of a mass-market 3D content creation tool.

VAST's 3D large model platform, Tripo, delivers its core functions through multimodal generation technology: modeling from text or images, vertical style generation, skeleton rigging for arbitrary objects, scene generation, and more.

The company builds a developer ecosystem through open source. Its TripoSR model became a benchmark project of the year for StabilityAI. Subsequently, projects such as TripoSG/TripoSF have