True doctorate level! GPT-5 gives an explicit convergence rate for the fourth moment theorem for the first time, with just a little guidance from math professors.
GPT-5 truly lives up to its reputation as an AI with doctoral-level proficiency!
Guided by mathematics professors, it extended the qualitative Fourth Moment Theorem to a quantitative form with an explicit convergence rate for the first time.
In simple terms, the original theorem only stated that convergence would occur but did not provide the specific speed. With the help of GPT-5, this research explicitly determined the convergence rate for the first time.
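For readers who want the shape of the statements involved: in the classical single-integral setting, the qualitative theorem (Nualart–Peccati) says that multiple Wiener–Itô integrals F_n = I_q(f_n) of a fixed order q ≥ 2 with E[F_n^2] → 1 converge in distribution to a standard Gaussian exactly when E[F_n^4] → 3, while the quantitative version (Nourdin–Peccati) bounds the distance itself. Up to the exact constant, that bound reads roughly as follows; the work described here asks for an analogous explicit bound when F is a sum of two integrals of different orders.

\[
d_{\mathrm{TV}}\bigl(F,\,N(0,1)\bigr) \;\le\; 2\sqrt{\frac{q-1}{3q}}\,\sqrt{\mathbb{E}[F^4]-3},
\qquad F = I_q(f),\ \mathbb{E}[F^2]=1.
\]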
Greg Brockman, co-founder of OpenAI, expressed his delight at the result.
Commenters, too, called it nothing short of a miracle.
Solving the Quantitative Convergence Rate of the Fourth Moment Theorem with GPT-5
Last month, OpenAI researcher Sebastien Bubeck said that GPT-5 Pro solved an open problem in the field of convex optimization within minutes, improving the known bound from 1/L to 1.5/L.
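For context, the 1/L here is a step-size threshold: the problem concerns gradient descent on an L-smooth convex function f (the precise statement of the open question is in Bubeck's post), and the claim is that GPT-5 Pro's argument enlarged the admissible range of step sizes η from η ≤ 1/L to η ≤ 1.5/L. The iteration and smoothness condition are the standard ones:

\[
x_{k+1} = x_k - \eta\,\nabla f(x_k),
\qquad \|\nabla f(x)-\nabla f(y)\| \le L\,\|x-y\| \ \text{ for all } x,y.
\]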
Inspired by this, three mathematics professors conducted a controlled experiment within the Malliavin–Stein framework.
The goal was to investigate whether GPT-5 could break through the existing results and generalize the qualitative Fourth Moment Theorem to a quantitative form with an explicit convergence rate, covering both Gaussian and Poisson cases.
First, the researchers started with the following initial prompt:
Paper 2502.03596v1 established a qualitative Fourth Moment Theorem for the sum of two Wiener–Itô integrals (of orders p and q respectively), where p and q have different parities.
Based on the Malliavin–Stein method (see 1203.4147v3 for details), can you derive a quantitative version for the total variation distance, whose convergence rate only depends on the fourth-order cumulant of this sum?
(The specific analysis steps have been omitted. Interested readers can read the original paper.)
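In symbols, and only to fix notation (the precise constants and hypotheses are what the experiment was meant to produce): for F = I_p(f) + I_q(g) with p and q of different parities and σ² = E[F²], the prompt asks for a bound of the shape

\[
d_{\mathrm{TV}}\bigl(F,\,N(0,\sigma^2)\bigr) \;\le\; C_{p,q}\,\sqrt{\kappa_4(F)},
\qquad \kappa_4(F) := \mathbb{E}[F^4] - 3\sigma^4,
\]

where κ₄(F) is the fourth cumulant of the sum referred to in the prompt.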
The first interaction was very effective. GPT-5 gave an overall correct conclusion and used appropriate tools and methods.
However, it made a mistake in the reasoning process that left one key expression incorrect; if left uncorrected, this could have invalidated the entire proof.
After noticing this, the researchers then posed a new question:
Can you check the formula you gave and provide a detailed derivation?
GPT-5 did as requested and provided the detailed derivation. However, the formula was still incorrect, and the accompanying explanation was also wrong. The researchers then pointed out the error more precisely:
I think you made a mistake in that claim. Why should it hold?
GPT-5 finally acknowledged that the statement was wrong and, more importantly, understood where the error came from. It then went on to give the correct reasoning and formula.
Subsequently, at the request of the researchers, GPT-5 organized the final result into a paper format, including an introduction, a statement of the main theorem, a complete and correct proof process, and references. The specific prompt was as follows:
Please organize this into a research paper suitable for submission, following my style (see the attached paper 0705.0570v4):
Start with an introduction, providing some background information;
Then state the main result and give a very detailed proof, ensuring that each step is complete;
Finally, attach a complete list of references.
The final document should be a compilable LaTeX file.
Finally, the researchers also asked it to add a conclusion section to discuss possible directions for future research expansion.
Can you add a "Conclusion and Outlook" section? Summarize the main content and suggest possible directions or extensions for future research.
GPT-5 complied once more and even suggested that the method could be extended to non-Gaussian frameworks.
Extension to the Poisson Case
Based on this suggestion, the researchers decided to take the work further and try to extend it to the Poisson case.
Since the context window had already grown quite long, which might affect the model's performance, the researchers started a new conversation with the following prompt:
Here is a paper (2502.03596v1) that proves the Fourth Moment Theorem for the sum of two Wiener–Itô integrals with different parities. I hope you can extend it to the Poisson case, using the ideas contained in paper 1707.01889v2.
In this new conversation, GPT-5 quickly identified the structural differences between the Poisson and Gaussian cases, pointing out that when X and Y are Poisson integrals of different orders, the mixed expectation appearing in the argument is not necessarily zero.
At the same time, however, it completely overlooked an important fact: even in the Poisson case, the relevant quantity is still non-negative.
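Judging from the exchange that follows, the overlooked fact appears to be a non-negativity statement of fourth-cumulant type. For a single multiple Wiener–Itô integral this non-negativity is classical, and the point the researchers steer GPT-5 toward is that paper 1707.01889v2 provides a Poisson-space counterpart:

\[
\kappa_4(F) \;=\; \mathbb{E}[F^4] - 3\bigl(\mathbb{E}[F^2]\bigr)^2 \;\ge\; 0.
\]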
Subsequently, the researchers tried to guide GPT-5 back on track by asking questions.
Is there nothing in paper 1707.01889v2 that would show this quantity is always non-negative?
However, the researchers' question was open-ended and not pointed enough to trigger the right line of thought. GPT-5 confidently replied "No" and then gave an unconvincing explanation.
However, once the researchers pointed out the specific information:
What about (2.4)?
GPT-5 immediately took the non-negativity into account and reformulated the theorem accordingly.
One More Thing
Interestingly, the authors initially wanted to list GPT-5 as a co-author when submitting the paper. A few hours later, arXiv informed them that its policy prohibits listing an AI as an author.
In the end, they submitted the paper without GPT-5 in the author list.