Larry Ellison, founder of Oracle: AI is more powerful than the Industrial Revolution and will change everything.
October 15, 2025, Las Vegas, USA.
At the Oracle CloudWorld 2025 conference, Larry Ellison took the stage and dropped a bombshell:
"AI changes everything." That's quite a bold statement. "Everything." And yet I think it's getting pretty close to the truth.
He didn't say "change search" or "change productivity." He said "everything."
During his 90-minute speech, he didn't showcase any AI demos, nor did he harp on how powerful the models were. What he wanted to talk about was why AI would rewrite the way the whole world operates.
He said: From the 20-watt human brain to the 1.2-billion-watt AI brain, social and corporate systems must be redefined.
This redefinition involves fundamental questions:
- What is "infrastructure"?
- What is "usable data"?
- Who really has the right to distribute the AI dividends?
These insights don't come from researchers or AI entrepreneurs, but from a tech-business leader who built the world's largest database empire with his own hands and is now steering it toward an AI inference platform.
And his warning to businesses: Stop just training models. The real opportunity lies with whoever can use AI to understand their private data.
If AI is really more powerful than the Industrial Revolution, are you ready?
Section 1
The model shouldn't just talk; it should understand you
Larry Ellison's first point: The rules of the AI game have changed.
He said: "In the past, we only talked about training. Now, we're talking about 'inference.' And this isn't the old - fashioned inference where 'the model makes a judgment.' Now, they're really thinking."
✅ The multimodal AI model is actually like an electronic brain
During the speech, Ellison used an analogy: Today's AI models are made up of multiple neural networks, just like different regions in our brains.
For example:
- Looking at a picture is one network;
- Understanding what's in the picture is another network;
- Judging whether the picture depicts "danger" or "requires action" is a third network.
Each sub-network has its own job: some handle text, some recognize images, some analyze sounds, and some are responsible for inference. Just as in our brains, the visual cortex processes colors and movement while the language areas handle logic.
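To make the analogy a bit more concrete, here is a minimal, purely illustrative sketch in PyTorch (nothing Ellison showed; the layer sizes and names are invented): separate sub-networks handle separate modalities, and a shared head turns their combined output into one judgment.

```python
import torch
import torch.nn as nn

class MultimodalBrain(nn.Module):
    """Toy illustration: one sub-network per modality, one shared judgment head."""
    def __init__(self, dim: int = 64):
        super().__init__()
        # Each encoder stands in for a full modality-specific network (a "brain region").
        self.vision = nn.Sequential(nn.Linear(3 * 32 * 32, dim), nn.ReLU())  # looks at the picture
        self.text = nn.Sequential(nn.Linear(128, dim), nn.ReLU())            # reads the words
        self.audio = nn.Sequential(nn.Linear(256, dim), nn.ReLU())           # hears the sound
        # The shared head plays the role of the part that judges "danger / act / ignore".
        self.judge = nn.Linear(3 * dim, 3)

    def forward(self, image, text, audio):
        features = torch.cat([self.vision(image), self.text(text), self.audio(audio)], dim=-1)
        return self.judge(features)  # one judgment, produced from all modalities together

# Example: three fake inputs, one combined judgment.
model = MultimodalBrain()
scores = model(torch.rand(1, 3 * 32 * 32), torch.rand(1, 128), torch.rand(1, 256))
```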
In the past, we thought AI was "learning to talk." In fact, what really matters is that it's learning to "understand."
✅ After "language generation," we've entered the "language understanding" stage
Larry Ellison called the arrival of ChatGPT "the real turning point":
That was the moment when AI first started to talk like a human.
But he said the value doesn't lie in generating seemingly human-like responses. Instead, it's about: Does it really understand your question and then know where to find the answer and how to reason?
This is the transformation of AI from "imitating human language" to "learning human thinking."
ChatGPT, Anthropic's Claude, Grok, Gemini: these mainstream models are already very powerful. But Ellison pointed out a problem: They're all trained on public data, which is far from enough.
"The model isn't omnipotent. It doesn't know your company's accounts, what your customers have bought, or what prescriptions you've written in the past. None of these are on the Internet."
The truly valuable data is your private data.
The real potential of the model is to understand your world and then think about what's in front of you.
✅ What's the difference between "generating answers" and "making judgments"?
He gave an example: AI can now observe road videos and determine whether a car will hit you. It can decide whether to brake or turn in a matter of milliseconds.
This isn't based on preset rules. Instead, after watching thousands of videos, the model has learned to judge danger on its own.
Inference is no longer about judging the correct answer, but about making action suggestions under complex conditions. In other words, AI doesn't just help you "find information," but helps you decide "what to do."
When AI truly has the ability to understand, associate, and judge, and when this "electronic brain" can process complex problems at high speed, the real bottleneck is no longer technology.
The bottleneck becomes us: Can we pose questions worthy of its inference?
So stop worrying about whether the model is strong enough.
The key is whether you have questions truly worth its understanding.
Section 2
The 1.2-billion-watt AI brain is being built
If AI is learning to "understand," what's the cost of supporting this understanding?
Larry Ellison presented a vivid contrast: The human brain runs on just 20 watts of power, while an AI brain requires 1.2 billion watts, a factor of 60 million.
He said that a 20-watt bulb doesn't shine very brightly, yet that same 20 watts drives human language, imagination, balance, and reasoning.
And now?
Oracle is building the world's largest AI cluster for OpenAI in Texas. The power supply can support 1 million four-bedroom homes, equivalent to a medium-sized city.
✅ It's not just about buying GPUs; it's about building a whole set of "AI infrastructure"
Ellison said: You think we're just buying GPUs? No, just buying GPUs is far from enough.
He specifically explained what it takes to "train an AI model":
- There must be a power plant for energy supply (using gas turbines to generate electricity);
- There must be a power grid to precisely deliver electricity to each GPU array;
- There must be a cooling system to maintain a stable temperature;
- There must be a network architecture to make 500,000 GPUs work like "one brain";
- There must be people to build all these, with 3,500 workers on-site every day.
This is completely different from when he wrote code in his college dorm.
✅ Enterprises don't just need to use AI models; they need the infrastructure to carry them
He used an analogy: An AI model is like an F1 racing car, but you need to have a race track first.
Most companies aren't even ready with the "gas stations" and "pit stops," yet they're eager to "let AI start the race."
He emphasized: "We're not just building software; we're building power plants."
He was warning enterprises: AI capabilities don't depend on which model is used, but on whether the basic capabilities are in place. Specifically:
Do you have your own data structure that can be understood by the model?
Do you have a pipeline to quickly call AI results?
Do you have an execution environment that can support low - latency inference?
If you don't have these, it's like buying an F1 and parking it on a rural dirt road.
✅ Why is "1.2 billion watts" a turning point?
Because it not only represents the energy consumption of model training but also represents an era where "national - level resources" are being mobilized to build AI.
- It's not just about buying servers, but about laying out energy, communication, basic software, and storage;
- It's not just about demonstrating model demos, but about delivering reliable and continuous AI productivity;
- It's not just about adjusting model parameters, but about knowing what to solve with this set of capabilities.
Ellison summarized:
"We're building AI industrial capabilities, the infrastructure of the whole new world."
This isn't just talk. The first version of Musk's Grok was trained almost entirely on Oracle Cloud.
The 1.2-billion-watt AI brain has started to operate, and the rules of the game are changing.
Section 3
For AI to enter the enterprise, start with private data
On - site, Larry Ellison pointed out an overlooked truth:
"These models are trained with public data. They know what's happening in the world, but they don't know how your company's accounts are calculated."
This isn't a complaint; it's an opportunity.
The real opportunity isn't to train another model. Instead, it's to let these models start to understand the materials in your hands.
✅ Why is public data not enough?
Today's large models, such as ChatGPT, Grok, and Gemini, are all trained on public data: content that can be found on the Internet, such as papers, web pages, encyclopedias, and code repositories.
But what enterprises rely on for daily decision - making isn't these things.
"ChatGPT hasn't seen the quotes you've given to customers. It also doesn't know which medical insurance bills you're processing or which suppliers owe you money."
Your database, reports, transaction records, and customer service conversations - these real business data are all hidden inside the company and have never participated in model training.
Ellison pointed out the pain point:
"People hope AI can help them solve problems, but the clues to these problems are hidden in their own data."
✅ I don't want to make my data public, but I want AI to analyze it
This is one of the biggest contradictions when AI enters enterprises.
Nobody wants to upload their customer lists, contract contents, and financial records to external models. But they hope AI can "understand these materials like a knowledgeable colleague" and give feedback.
This is like wanting to protect your privacy while also having the smartest person help you with analysis.
Ellison said: This isn't a dilemma; it can be done.
He revealed that Oracle has designed a whole new set of methods called "AI database" and "AI data platform." The core logic is just one sentence:
Let the model understand your data without taking it away.
How is it done? They used a method called RAG (Retrieval-Augmented Generation).
This is a method that allows AI to temporarily read relevant materials without "learning" your data in advance. Simply put:
"Your data doesn't need to be trained into the model. Before answering a question, the model will 'take a look' like a search engine and then come back to generate an answer."
This is like inviting an expert to your office to look up materials instead of mailing the materials out.
Oracle has embedded this method into its own database and object storage, and it can even connect to data sitting in AWS. No matter where your data is stored, they can help you create a "window that the model can understand."
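As a rough sketch of that retrieve-then-generate flow (a generic illustration, not Oracle's implementation; the embed function and the ask_llm call are placeholders for whatever embedding model and LLM you actually use):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; in practice this would be a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever chat/completions API you actually call."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

class PrivateStore:
    """Your documents stay here; they are never baked into the model's weights."""
    def __init__(self, docs):
        self.docs = docs
        self.vecs = [embed(d) for d in docs]

    def retrieve(self, question: str, k: int = 3):
        q = embed(question)
        sims = [float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q))) for v in self.vecs]
        top = sorted(range(len(self.docs)), key=lambda i: sims[i], reverse=True)[:k]
        return [self.docs[i] for i in top]

def answer(question: str, store: PrivateStore) -> str:
    # The model "takes a look" at the retrieved snippets, then generates an answer.
    context = "\n".join(store.retrieve(question))
    return ask_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

store = PrivateStore(["Invoice 1042: Acme Corp, $12,000, net 30", "Invoice 1043: Beta LLC, $4,500, overdue"])
print(answer("Which customers still owe us money?", store))
```

The point is the shape of the flow: the private documents live in your own store, and the model only sees the few snippets retrieved for each question.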
✅ Let AI "understand" your data
This step is called "vectorization."
Ordinary people don't need to understand the mathematical details. You just need to know:
- The model doesn't natively recognize the "invoice number" column in your Excel sheet;
- What it can understand is "what this thing means";
- So a transformation is needed to let AI perceive the associations, similarities, and time sequences between pieces of data.
You don't need to change your data. You just need to tell Oracle which parts you want the model to understand, and they'll help you convert it into a language that the model can understand.
This is like hiring a translator to "translate" a whole set of business materials for AI so that it knows how to help you with analysis.
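A minimal sketch of what that "translation" step looks like, with the same kind of placeholder standing in for a real embedding model (the record and field names are invented):

```python
import numpy as np

def embed(sentence: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g. a sentence encoder)."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.random(384)

# A raw row from your system: on its own, the model has no idea what these fields mean.
row = {"invoice_no": "1042", "customer": "Acme Corp", "amount": 12000,
       "status": "unpaid", "due": "2025-11-30"}

# Step 1: describe the row in plain language, so its meaning is visible to the model.
sentence = (f"Invoice {row['invoice_no']} for customer {row['customer']} "
            f"of {row['amount']} dollars is {row['status']} and due on {row['due']}.")

# Step 2: turn that description into a vector. With a real embedding model,
# related records land near each other, so they can be compared and retrieved.
vector = embed(sentence)
print(vector.shape)  # (384,): one point in the space the model can reason over
```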
Ellison used the example of a clinic to illustrate the practical value of this method:
"Some small clinics in the US are worried about medical insurance reimbursement every month. If the reimbursement isn't approved, the clinic's cash flow will dry up, and they won't be able to accept patients."
The AI application he designed isn't about "predicting cancer." Instead, it:
- Helps the clinic scan hundreds of bills;
- Checks whether each item complies with the policy;
- Estimates the probability of this batch of reimbursements being received;
- Automatically generates a reliable report the clinic can take to the bank to secure a loan.
Ellison's view is very practical: AI isn't here to do something lofty. Instead, it first helps you solve those tedious daily tasks.
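A hypothetical sketch of that kind of claims-review pipeline (the payers, amounts, and the stand-in model_review function are invented for illustration; a real system would call a model that has read the actual policy documents, for example via the RAG approach above):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    payer: str
    amount: float
    description: str

def model_review(claim: Claim) -> float:
    """Placeholder for the AI step: ask a model how likely this claim is to be
    reimbursed under the relevant policy. Returns a probability between 0 and 1."""
    return 0.9 if claim.amount < 500 else 0.4  # invented stand-in logic

def cash_flow_report(claims: list[Claim]) -> str:
    expected = sum(c.amount * model_review(c) for c in claims)
    risky = [c.claim_id for c in claims if model_review(c) < 0.5]
    # A summary the clinic could take to its bank when asking for a loan.
    return (f"Expected reimbursement across {len(claims)} claims: ${expected:,.0f}; "
            f"claims needing attention: {risky or 'none'}")

claims = [
    Claim("A-101", "Medicare", 240.0, "office visit, established patient"),
    Claim("A-102", "Medicaid", 980.0, "imaging, chest x-ray plus consult"),
]
print(cash_flow_report(claims))
```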
If training a model is "building a brain," then letting it understand your company's data is "putting in eyes."
Public models are tools, and your data is the key.
AI knows what's happening in the world, but you must teach it what you're doing.
Section 4
The real implementation of AI is in hospitals and farms
The most unexpected part of Larry Ellison's entire speech wasn't about how powerful AI is, but about which specific things AI has already started to solve.
He didn't resort to the empty phrase "AI can help you improve efficiency." Instead, he clearly stated which industries are using it, at which step they're using it, and what has changed.
✅ In medical imaging, it not only sees faster but also sees more comprehensively
He shared a personal experience from the emergency room:
"I broke a few ribs in a motorcycle accident. When having an MRI, the doctor actually used a ruler on a picture to count how many ribs I'd broken."
The image was already digital. Why were they still counting manually?
He said that today, AI can do it much faster and more accurately: it can complete all fracture identifications in one second, and from the same image it can also identify other potential problems, such as lung shadows and liver lesions, without missing anything.
He said:
"When we're looking at one or two things, AI can see a dozen."
In cancer surgery, the value of AI is even more obvious.
The best surgeons use a microscope to check whether they've cut into cancer cells. AI's vision already works at the microscopic level and can guide a precise cut along the boundary between healthy cells and cancer cells.
✅ In agriculture, AI can grow smarter wheat
Ellison said that they collaborated with a project team from the University of Oxford to do something that almost no one had thought of before:
"We modified a type of wheat to produce 20% more grain on the same area of land."
This isn't just a gimmick about genetic modification. It's using an AI model to simulate "how to design more efficient photosynthesis."
What's even more special is that this wheat absorbs more carbon dioxide, and we can direct it to convert that carbon into calcium carbonate, locking it away as a stable mineral and reducing atmospheric carbon.
In the past, carbon neutrality was achieved through accounting. Now, it can be achieved directly through farming.
✅ Pathogen identification can produce results in just a few minutes
He said that traditionally, it takes several days to culture bacteria to detect what disease a person has.
Now, they're developing an AI device. With a tube of blood, it can:
- Identify what viruses, bacteria, and fungi are present;
- Determine whether they're drug-resistant;
- Give suggestions on what medicine to use.
Even a brand-new virus, like COVID when it first appeared, can be identified by this device immediately.
This isn't just about saving one person. If every hospital in the world had this device at that time, we would have detected the epidemic weeks earlier.
✅ AI isn't just observing; it's also starting to act
Ellison also talked about another project they're working on:
"We're using drones to transport blood samples from clinics to laboratories. The whole process uses RFID for secure identification, and there are no lost items."