Chen Chao, CEO of Optolink Technology: Optical Interconnect Is the Only Way to AGI | WISE 2025 King of Business
From November 27 to 28, the 36Kr WISE 2025 King of Business Conference, hailed as the annual "trendsetter of technology and business", was held at Conduction Space in Beijing's 798 Art District.
This year's WISE is no longer a traditional industry summit but an immersive experience built around short "hit drama" episodes: from AI reshaping the boundaries of hardware to embodied intelligence opening the door to the physical world; from brands globalizing in the wave of going overseas to traditional industries fitting themselves with "cyber prosthetics". What we restore is not only trends, but also the true knowledge honed through countless business practices.
In the following content, we will dissect the real logic behind these "hit dramas" frame by frame and witness the unique business "scenery" of 2025.
Chen Chao, CEO of Optolink Technology
The following is a transcript of the speech by Chen Chao, Partner at True Knowledge Venture Capital and CEO of Optolink Technology, edited by 36Kr:
Good afternoon, everyone! I'm Chen Chao from Optolink Technology. I'm very glad to have the opportunity to share with you. The theme of my speech is "Computing Power · Boundless: Optical Interconnect is the Only Way to AGI".
Before we formally begin, I'd like to invite you to look at a group of pictures. These three images were generated by OpenAI's Sora multimodal large model at different computing power scales: from left to right, 300 GPUs, 1,250 GPUs, and 10,000 GPUs were used, and the image quality improves from left to right. So can we make a basic assumption that the amount of computing power used determines an AI's level of intelligence?
In 2012, AlexNet emerged, a major breakthrough for deep learning in the field of computer vision. In 2015, DeepMind published a paper in Nature combining deep learning with reinforcement learning, giving artificial intelligence the ability to learn complex tasks autonomously for the first time. In 2016, AlphaGo appeared, and AI defeated the world's top human players at Go for the first time. In 2022, ChatGPT arrived, bringing AI from the professional field into public view for the first time. And this year, three years after ChatGPT, GPT-5 has emerged; I believe everyone here, and friends watching online, already uses AI tools to some extent to improve efficiency.
Looking back at the whole arc of AI's development: if we take the first Dartmouth Conference in 1956 as the beginning of AI, the field is nearly 70 years old. Why has AI advanced so rapidly in the past 10 years rather than in the previous 60? Because the evolution of AI is driven by computing power. In the past decade, computing power has grown by more than a billion times, 10 to the power of 9, which works out to nearly a tenfold increase every year.
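As a back-of-the-envelope check of the decade figure above (the 10^9 total is the speaker's claim, not a measured value):

```python
# If total computing power grew by a factor of 10**9 over 10 years,
# the implied average year-on-year growth factor is 10**(9/10),
# which is roughly 8x, i.e. "nearly 10 times" per year as claimed.
total_growth = 10 ** 9   # claimed growth over the past decade
years = 10
annual_factor = total_growth ** (1 / years)
print(f"average annual growth: {annual_factor:.1f}x")
```

The compounding works in reverse too: even a "mere" 8x per year accumulates to a billionfold over a decade.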
Training AlexNet took only two NVIDIA GTX 580s. AlphaGo used 1,920 CPUs plus 280 GPUs. GPT-3 used about 10,000 NVIDIA V100s, reaching the ten-thousand-unit scale. GPT-5 used an estimated 200,000 to 300,000 NVIDIA H100s, a very large quantity. If we are heading toward Artificial General Intelligence (AGI), how much computing power will a future GPT-6 or GPT-7 need to get there?
According to predictions by frontier-technology analysts in Silicon Valley, reaching AGI requires an equivalent computing power of about 10 to the power of 41 FLOPs. Our current level is around 10 to the power of 25, leaving a gap of about 10 to the power of 16. If we assume software and algorithms can close 10 to the power of 8 of that gap, the remaining factor of 10 to the power of 8 must come from hardware. In other words, achieving AGI requires scaling today's computing power by 100 million times, an enormous number.
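The gap arithmetic above can be laid out explicitly; all exponents are the speaker's figures, not measured values:

```python
# Back-of-the-envelope version of the AGI compute-gap argument.
agi_flops = 10 ** 41       # analysts' estimated compute equivalent for AGI
current_flops = 10 ** 25   # rough current level of computing power
gap = agi_flops // current_flops      # overall gap: 10**16
software_gain = 10 ** 8               # assumed algorithmic/software improvement
hardware_gain = gap // software_gain  # remainder that must come from hardware
print(f"hardware must scale by {hardware_gain:,}x")  # 100,000,000x
```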
Can we reach AGI according to the current development trajectory of the computing power industry?
No. Why? Because the computing power industry is facing two major challenges: bandwidth bottlenecks and energy consumption issues.
Let's first look at bandwidth. Interconnect bandwidth severely limits the development of computing power, because it grows far more slowly than compute does. Over the past 20 years, single-chip computing power has increased 60,000-fold. Storage bandwidth has increased only 100-fold, 600 times less. Interconnect bandwidth has increased only 30-fold, a mere one two-thousandth of the growth in computing power. The performance of an entire computing cluster is therefore limited by bandwidth, not by compute.
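The growth-rate mismatch can be checked directly from the speaker's 20-year figures:

```python
# The speaker's claimed 20-year growth factors, and the gaps they imply.
compute_growth = 60_000    # single-chip computing power: 60,000x
memory_bw_growth = 100     # storage (memory) bandwidth: 100x
interconnect_growth = 30   # interconnect bandwidth: 30x
print(compute_growth // memory_bw_growth)    # compute outpaced memory bandwidth 600x
print(compute_growth // interconnect_growth) # interconnect grew 1/2000 as fast as compute
```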
Let's look at a specific case. Musk's Grok 3 large model was trained on the Colossus computing cluster, using about 200,000 NVIDIA H100 chips in total. How does interconnect bandwidth behave at the different levels of such a large cluster? Inside the chip, the video memory bandwidth is 4TB per second. Moving outward, two GPUs are connected by NVLink, with a bandwidth of 0.9TB per second, about five times lower. Further out, servers are interconnected over the InfiniBand network at 0.05TB per second, nearly 20 times lower than the NVLink bandwidth.
From inside the chip out to the server, every level is constrained; interconnect bandwidth is the bottleneck at every step and limits the performance of the whole cluster. That's why the utilization of many computing clusters and chips sits at only 20%-30%, far from an 80%-90% full load.
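The step-downs in the bandwidth hierarchy described above can be computed from the quoted figures (the speaker's numbers, in TB/s):

```python
# Bandwidth at each interconnect level of the H100 cluster described above,
# and how much narrower each step outward becomes.
hbm = 4.0      # on-chip video (HBM) memory bandwidth, TB/s
nvlink = 0.9   # GPU-to-GPU NVLink bandwidth, TB/s
ib = 0.05      # server-to-server InfiniBand bandwidth, TB/s
print(f"HBM -> NVLink: {hbm / nvlink:.1f}x narrower")  # ~4.4x
print(f"NVLink -> IB:  {nvlink / ib:.1f}x narrower")   # ~18x
```

Each factor compounds: data crossing from one server's HBM to another traverses every level, so the slowest link governs effective throughput.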
Now let's look at energy consumption, with two comparisons. On the right are the total installed capacities of the Three Gorges Hydropower Station and the Daya Bay Nuclear Power Station: 22.5GW and 6GW respectively. A gigawatt is a very large unit.
On the left is the planned scale of data centers under construction worldwide. Stargate, the supercomputing center OpenAI is building with Microsoft at a cost of $100 billion, has a planned capacity of 5GW, almost the total installed capacity of Daya Bay. Musk's xAI, together with a Saudi artificial intelligence company and NVIDIA, is building a supercomputing center at a scale of 6.6GW, exceeding Daya Bay's total installed capacity and roughly one third of the Three Gorges'. A single computing center now matches the output of a nuclear power station. To reach AGI, computing power must scale up another 100 million times, and the world's electricity would not be enough. This path will not work. So where is the problem?
These are front and back photos of NVIDIA's NVL72 super node. Enlarging the backplane on the back, you can see a great many copper cables, more than two miles of them in total. Copper as a medium has a problem: as the transmission rate keeps rising, the skin effect sets in, meaning current flows only near the surface of the conductor, not through its interior. Electrical interconnect based on copper has thus reached its physical limits in bandwidth and power consumption. As a result, as much as 90% of a computing center's energy consumption goes to moving data rather than computing, a huge waste.
What if copper can't do it? We use light.
Optical interconnect is the optimal solution for breaking through the computing power bottleneck. Since the world's first submarine optical cable serving the Internet was laid in 1998, optical communication has covered distances from thousands to tens of thousands of kilometers. Over the past ten to twenty years, with the boom in data centers, coherent optical modules, datacom optical modules, and the like have covered the range from one kilometer to hundreds of kilometers. Within one kilometer, from hundreds of meters down to a few centimeters, can light solve the connection problem too? The answer is yes: chips can emit light directly, covering optical connections from hundreds of meters down to a few centimeters. Compute with electricity, interconnect with light: we believe optical interconnect is essentially the further extension and penetration of optical communication into the field of data communication.
Optolink Technology is a provider of next-generation optical interconnect solutions for AI computing clusters. It enables chips to emit light directly, offering computing power companies optical interconnect with high bandwidth, low power consumption, and low latency.
In the future, data centers will be interconnected by optical fiber and powered by green energy, replicated row by row, with very low energy consumption and very high bandwidth. We aim to improve the bandwidth-energy-efficiency product by more than four orders of magnitude, that is, by more than 10,000 times.
This is our product display. On the right is a silicon photonics wafer fabricated by a domestic foundry, shown on a wafer-level system test platform. On the left is our first-generation high-speed OIO (optical I/O) optical engine evaluation board. The optical fiber on the left can be connected to a GPU to achieve optical interconnection between GPUs.
Optolink Technology's corporate vision is to help the domestic computing power industry build its own parallel semiconductor ecosystem. I often use an analogy: in fuel-powered vehicles, it was difficult to catch up with BBA (Mercedes-Benz, BMW, Audi) head-on, but we found an alternative path with electric vehicles and have now overtaken them. Can we find an alternative path in semiconductors as well? If we liken giants such as NVIDIA and TSMC to the fuel-vehicle track, can we switch tracks to overtake? With domestic computing power, domestic interconnect, and domestic wafer fabs, even though single-chip compute still lags foreign chips to a degree, we can surpass them in the performance and power consumption of the overall computing cluster, and find a distinctive route for China's semiconductor breakthrough.
Looking ahead to 2030, what will computing centers and the computing power world look like? I believe that by then many people will use low-orbit satellite Internet, with satellites communicating with the ground by laser, connected by light. The backbone network will be linked by optical fiber, and data centers will be connected by optical modules, optical interconnects, and optical switches. We look forward to a new era of all-optical interconnection.
Thank you all! That's all for my sharing today.