Russian experts and DeepMind disagree over quantum AI research.

Observe the scientific process

Nothing is more dramatic or inspiring than a scientific discovery. But what happens when different scientific communities can’t seem to agree on the science?

In an intriguing research paper last year, DeepMind, the London-based Alphabet research organisation, claimed to have cracked the enormous challenge of “simulating matter on the quantum scale with AI.” Now, more than eight months later, a group of academic researchers from South Korea and Russia may have found a problem with the original research that calls the paper’s entire conclusion into question.

If the paper’s conclusions hold up, the repercussions for this cutting-edge research could be significant. In essence, we’re talking about the potential to use artificial intelligence to discover new ways of manipulating the building blocks of matter.

A new hope

The key concept is the ability to model quantum interactions. Our universe is made of matter; matter is made of molecules; molecules are made of atoms. The further down you go, the harder the simulation becomes.

By the time you reach the quantum scale, inside atoms, simulating all the potential interactions becomes extraordinarily difficult.

According to a blog post by DeepMind:

The simulation of electrons, the subatomic particles that control how atoms come together to form molecules and are also in charge of the flow of electricity in solids, is necessary to carry out this task on a computer.

After decades of work and a number of notable advances, it remains difficult to precisely simulate the quantum mechanical behaviour of electrons.

The fundamental problem is that it is extremely difficult to predict the likelihood that an electron will end up in a particular position. And the complexity grows with every electron you add.
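To see why, consider a back-of-the-envelope calculation. The grid size below is an arbitrary illustration, not a figure drawn from DeepMind’s work:

```python
# Illustrative only: storage cost of brute-forcing a many-electron wavefunction.
# A single electron in 3D, sampled on a modest 100-point grid per axis,
# needs 100**3 amplitudes. Each extra electron *multiplies* that cost,
# because the wavefunction depends on every electron's position jointly.

GRID = 100   # points per spatial axis (assumed, for illustration)
BYTES = 16   # one complex double-precision amplitude

def wavefunction_bytes(n_electrons: int) -> int:
    """Memory needed to tabulate an n-electron wavefunction on the grid."""
    return BYTES * (GRID ** 3) ** n_electrons

for n in range(1, 4):
    print(n, wavefunction_bytes(n))
```

One electron fits in roughly 16 MB; two already need around 16 TB; three reach exabyte scale. DFT sidesteps this blow-up by working with the three-dimensional electron density instead of the full many-electron wavefunction.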

In the same blog post, DeepMind noted that in the 1960s, two scientists made an important discovery:

Pierre Hohenberg and Walter Kohn later realised that it is not necessary to track every electron individually. Instead, knowing the electron density, the probability that any electron is present at each point, is enough to exactly calculate all interactions. Kohn won the Nobel Prize in Chemistry after demonstrating this, establishing Density Functional Theory (DFT).

Unfortunately, DFT could only streamline the process so much. The “functional” part of the approach still had to be crafted by hand, leaving humans to do the labour-intensive work.
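To make “functional” concrete, here is a toy sketch in Python, not DeepMind’s code. It evaluates the classic local-density-approximation (LDA) exchange formula, one of the hand-derived rules DFT practitioners rely on, on a made-up one-dimensional density:

```python
import numpy as np

# Toy example of a density *functional*: density in -> single energy number out.
# The LDA exchange formula E_x = C * integral of rho(r)**(4/3) dr is a classic
# hand-derived functional; DM21 replaces this kind of formula with a learned one.

C_X = -(3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)  # LDA exchange constant

def lda_exchange_energy(rho: np.ndarray, dr: float) -> float:
    """Evaluate the LDA exchange functional on a gridded density."""
    return C_X * np.sum(rho ** (4.0 / 3.0)) * dr

# A smooth, normalised toy density on a 1D grid (illustrative numbers only).
r = np.linspace(-5.0, 5.0, 1001)
dr = r[1] - r[0]
rho = np.exp(-r**2)
rho /= np.sum(rho) * dr              # normalise to one electron

print(lda_exchange_energy(rho, dr))  # a single (negative) energy value
```

The point is the shape of the interface: density in, energy out. DM21’s contribution is replacing the hand-derived formula inside that interface with a learned one.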

When DeepMind released a paper in December titled “Pushing the boundaries of density functionals by solving the fractional electron problem,” everything changed.

In this paper, the DeepMind team argues that its neural network significantly improves on existing approaches to simulating quantum behaviour:

By expressing the functional as a neural network and incorporating these exact properties into the training data, we learn functionals free from significant systematic errors. This leads to a better description of a large class of chemical interactions.
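As an illustration of what “expressing the functional as a neural network” means in practice, here is a minimal sketch. It is entirely our own toy construction, not DM21: a tiny network learns to reproduce a hand-written per-point energy rule from examples.

```python
import numpy as np

# Sketch (our illustration, not DM21): learn the per-point rule
# e(rho) = rho**(4/3) with a one-hidden-layer network, the same way a
# neural functional replaces a hand-written formula with learned weights.

rng = np.random.default_rng(0)
rho = rng.uniform(0.0, 2.0, size=(256, 1))   # toy training densities
target = rho ** (4.0 / 3.0)                  # "exact" values to learn

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(rho)
loss0 = np.mean((pred0 - target) ** 2)       # loss before training

for _ in range(2000):                        # plain full-batch gradient descent
    h, pred = forward(rho)
    g = 2 * (pred - target) / len(rho)       # dLoss/dpred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h**2)               # back-propagate through tanh
    gW1 = rho.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(rho)
print(loss0, np.mean((pred - target) ** 2))  # loss should drop
```

Real neural functionals take far richer density features as input and train on carefully curated chemistry data; the sketch only shows the basic substitution of learned weights for a formula.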

The academics respond

The initial, formal review process for DeepMind’s paper went smoothly. That changed in August 2022, when a group of eight academics from South Korea and Russia submitted a comment questioning its conclusion.

According to a statement issued by the Skolkovo Institute of Science and Technology:

The presented results may not necessarily support DeepMind AI’s capacity to generalise the behaviour of such systems, necessitating further investigation.

In other words, the academics disagree with how DeepMind’s AI reached its conclusions.

According to the commenting researchers, the training process DeepMind used taught its neural network to memorise the answers to the specific problems it would face during benchmarking, the process by which scientists determine whether one approach outperforms another.
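At its core, this is a familiar train/test contamination question. Here is a generic sketch of the simplest possible screen, using made-up system identifiers rather than the actual DM21 or BBB datasets:

```python
# Hypothetical illustration of the critics' concern: if benchmark systems
# (or near-duplicates of them) appear in the training set, high benchmark
# scores may reflect memorisation rather than generalisation.

def overlap(train: set[str], test: set[str]) -> set[str]:
    """Exact-match contamination check between two system lists."""
    return train & test

train_systems = {"H2", "LiH", "H4_chain", "H2+"}   # made-up identifiers
test_systems = {"H2+", "He2+", "H2"}               # made-up identifiers

leaked = overlap(train_systems, test_systems)
print(sorted(leaked))  # → ['H2', 'H2+']
```

Exact matching is the easy part; near-duplicates, such as the same molecule at slightly different geometries, are what make contamination claims genuinely hard to adjudicate.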

The scientists write in their comment:

Although Kirkpatrick et al.’s conclusion about the importance of FC/FS systems in the training set may be accurate, it is not the only explanation for their findings.

In our opinion, an unintentional overlap between the training and test datasets may be the cause of the increases in DM21’s performance on the BBB test dataset compared to DM21m.

If true, this would imply that DeepMind did not actually train its neural network to predict quantum physics.

The AI is back

DeepMind responded right away. The company published a firm rebuttal the same day the comment appeared:

We disagree with their analysis and think the issues stated are either unfounded or unrelated to the paper’s key conclusions and the evaluation of the overall calibre of DM21.

In its response, the team elaborates on this:

As shown in Fig. 1, A and B, for H2+ and H2, the DM21 Exc varies over the whole range of distances considered in BBB and is not equal to the infinite-separation limit, demonstrating that DM21 is not memorising the data. For instance, at a separation of 6, the DM21 Exc is approximately 13 kcal/mol away from the infinite limit in both H2+ and H2 (although in opposite directions).

Even though explaining the vocabulary above is beyond the scope of this article, we can safely say that DeepMind was prepared for this particular criticism.

It remains to be seen whether that resolves the issue. At the time of writing, the academic team had not responded to our questions about whether its concerns have been addressed.

In the interim, it’s feasible that this discussion’s effects will extend far beyond the scope of a single research paper.

The fields of artificial intelligence and quantum science are becoming more and more entwined, and increasingly dominated by well-funded corporate research organisations.

What happens when corporate interests are at stake and science reaches a standstill, with opposing factions unable to reach consensus on the viability of a particular technical approach through the scientific method?

Now what?

The root of the problem could be our inability to explain how AI models “crunch the numbers” to reach their conclusions.

Before producing a result, these systems can run through millions of possibilities. It would be impractical to explain every step of that process; that is exactly why we need algorithmic shortcuts and AI to brute-force massive problems that would be too big for a human or an ordinary computer to tackle head-on.

As these systems scale, we might eventually run out of tools to fully understand how they operate. When that happens, we could see a divergence between corporate technology and what passes external peer review.

That’s not to say DeepMind’s paper is an example of this. As the commenting academic team said in its news release:

The work of DeepMind is innovative in more ways than just the use of fractional-electron systems in the training set. Their idea of imposing physical constraints on a neural network via the training set, and their approach of enforcing physical sense through training on the appropriate chemical potential, are likely to be widely applied in the future creation of neural network DFT functionals.

However, a bold new AI-driven technology paradigm is now in play. It’s probably time we started thinking about what the world after peer review will look like.
