There is nothing more dramatic and inspiring than a scientific breakthrough. But what happens when different groups of scientists don’t seem to agree on the science?
DeepMind, an Alphabet research firm based in London, published a fascinating research paper last year claiming to have solved the formidable challenge of “simulating matter at the quantum scale with AI.” Now, nearly eight months later, a group of academic researchers from Russia and South Korea may have uncovered a problem with the original research that casts doubt on the paper’s entire conclusion.
The implications of this research could be huge, provided the paper's conclusions hold up. Essentially, we are talking about the potential to use artificial intelligence to discover new ways to manipulate the building blocks of matter.
A new hope
The big idea here is being able to simulate quantum interactions. Our world consists of matter made up of molecules, which are in turn made up of atoms, and each level down is harder to simulate than the last.
By the time you get to the quantum scale inside atoms, the problem of simulating the potential interactions becomes incredibly challenging.
Per a blog post from DeepMind:
Doing this on a computer requires simulating electrons, the subatomic particles that control how atoms bond to form molecules and are also responsible for the flow of electricity in solids.
Despite decades of effort and several important advances, accurately modeling the quantum mechanical behavior of electrons remains an open challenge.
The fundamental problem is that it is very difficult to predict the chances of a given electron ending up in a specific position. And the complexity grows with every electron you add.
As DeepMind noted in the same blog post, a few physicists came up with a breakthrough in the 1960s:
Pierre Hohenberg and Walter Kohn realized that it is not necessary to track each electron individually. Instead, it is enough to know the probability that an electron is at any position (i.e., the electron density) to calculate all interactions exactly. Kohn received a Nobel Prize in Chemistry after proving this, and in doing so founded Density Functional Theory (DFT).
Unfortunately, DFT can only simplify the process so far. The "functional" part of the theory still relied on humans to do all the heavy lifting: the functionals themselves had to be designed by hand.
That all changed in December when DeepMind published a paper titled "Pushing the Boundaries of Density Functionals by Solving the Fractional Electron Problem."
In this paper, the DeepMind team claims to have radically improved current methods of modeling quantum behavior through the development of a neural network:
By expressing the functional as a neural network and incorporating these exact properties into the training data, we learn functionals free from major systematic errors – resulting in a better description of a broad class of chemical reactions.
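One of the "exact properties" at the heart of the paper's title is the known condition that a system's energy should vary piecewise-linearly between integer electron numbers. As a rough sketch of how such a constraint can be baked into training data (the toy energy values and function names below are illustrative assumptions, not DeepMind's actual pipeline):

```python
# Hypothetical integer-electron ground-state energies for a toy system
# (values are made up purely for illustration).
INTEGER_ENERGIES = {1: -0.5, 2: -1.1}

def exact_fractional_energy(n_elec):
    """Exact DFT condition: energy is piecewise-linear between
    integer electron numbers (the 'fractional electron' constraint)."""
    lo = int(n_elec)
    frac = n_elec - lo
    if frac == 0:
        return INTEGER_ENERGIES[lo]
    return (1 - frac) * INTEGER_ENERGIES[lo] + frac * INTEGER_ENERGIES[lo + 1]

# Augment a training set with points that satisfy the exact constraint,
# so a learned functional is steered away from violating it.
constraint_samples = [
    (n, exact_fractional_energy(n)) for n in (1.0, 1.25, 1.5, 1.75, 2.0)
]
```

The idea is that a model trained on samples like these is penalized during training whenever it deviates from the exact physics, rather than having the constraint enforced by the network architecture itself.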
The academics strike back
DeepMind’s paper passed the initial, formal review process, and all was well until August 2022, when a team of eight academics from Russia and South Korea published a comment casting doubt on its conclusions.
Per a press release from the Skolkovo Institute of Science and Technology:
DeepMind AI’s ability to generalize the behavior of such systems does not follow from the published results and should be reconsidered.
In other words, the academics dispute how DeepMind’s AI came to its conclusions.
According to the commenting researchers, the training process DeepMind used to build its neural network taught it to memorize the answers to the specific problems it would face during benchmarking — the process by which scientists determine whether one approach is better than another.
In their commentary, the researchers write:
While Kirkpatrick et al.’s conclusion about the role of FC/FS systems in the training set may be correct, it is not the only possible explanation for their observations.
In our view, the improvements in DM21’s performance on the BBB test dataset over DM21m may be due to a much more prosaic reason: an unintended overlap between the training and test datasets.
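An unintended overlap between training and test data is a classic machine learning pitfall known as data leakage. A minimal sketch of the kind of check the commenters are arguing for, using hypothetical molecular identifiers rather than real DM21 data:

```python
def find_leakage(train_systems, test_systems, tol=0.01):
    """Flag test systems whose (molecule, geometry) pair is effectively
    present in the training set, up to a distance tolerance."""
    def key(system):
        name, distance = system
        # Round the bond distance so near-duplicate geometries also match.
        return (name, round(distance / tol))

    train_keys = {key(s) for s in train_systems}
    return [s for s in test_systems if key(s) in train_keys]

# Hypothetical data: (molecule, bond distance in angstroms)
train = [("H2", 0.74), ("H2+", 1.06), ("LiF", 1.56)]
test = [("H2", 0.74), ("H2", 3.00)]

print(find_leakage(train, test))  # -> [('H2', 0.74)]
```

If a benchmark set shares systems with the training set in this way, strong benchmark scores can reflect memorization rather than generalization, which is precisely the commenters' concern.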
If true, that would mean DeepMind hasn’t actually trained a neural network capable of predicting quantum mechanical behavior; it would merely have built one that memorized the benchmark answers.
Return of the AI
DeepMind responded quickly. The company published its response the same day as the comment, issuing an immediate and firm rebuke:
We disagree with their analysis and believe that the points made are either incorrect or irrelevant to the main conclusions of the article and to the assessment of the overall quality of DM21.
The team expands on this in its reply:
DM21 does not remember the data; this is simply shown by the fact that the DM21 Exc changes over the full range of distances considered in BBB and does not equal the infinite separation limit, as shown in Fig. 1, A and B, for H2+ and H2. For example, at 6 Å, the DM21 Exc is ~13 kcal/mol from the infinite limit in both H2+ and H2 (although in opposite directions).
And while it’s beyond the scope of this article to unpack the above jargon, it’s safe to say DeepMind came prepared for that particular objection.
Whether that settles the matter remains to be seen. So far, the academic team hasn’t issued a further rebuttal, so we don’t yet know whether their concerns have been allayed.
In the meantime, the implications of this discussion may extend far beyond just a single research paper.
As the fields of artificial intelligence and quantum science become more and more intertwined, they are also increasingly dominated by corporate research labs with deep pockets.
What happens when there is a scientific deadlock, with opposing parties disagreeing over the effectiveness of a particular technological approach and the scientific method unable to settle it, and corporate interests come into play?
The crux of the problem could lie in the inability to explain how AI models “crunch the numbers” to arrive at the conclusions they do.
These systems can go through millions of permutations before giving an answer. Explaining every step of the process would be impossible, which is exactly why we need algorithmic shortcuts and AI to brute-force large-scale problems that would be too big for a human or a classical approach to solve on the fly.
Ultimately, as AI systems continue to scale, we could reach a point where we no longer have the tools necessary to understand how they work. When that happens, we may see a divide open up between corporate technology and technology that passes external peer review.
That’s not to say that DeepMind’s paper is an example of that. As the commenting academic team wrote in their press release:
The use of fractional electron systems in the training set is not the only novelty in DeepMind’s work. Their idea of introducing the physical constraints into a neural network through the training set, as well as the approach to imposing physical sense through training on the appropriate chemical potential, are likely to be widely used in the future in the construction of neural network DFT functionals.
But we are experiencing a bold new AI-powered technology paradigm. It’s probably time we started thinking about what the future holds in a post-peer-review world.