Proven theorems shrink the Hilbert space of quantum AI learning to just a few data points
For machine learning in artificial intelligence, this breakthrough beyond Moore's Law may prove comparable to the achievement of the Manhattan Project team, which found a way to build the atomic bomb.
Surprisingly, the two breakthroughs were made at the same place, Los Alamos National Laboratory in the United States, 80 years apart: the laboratory that was the brainchild of two great men, Vannevar Bush and Robert Oppenheimer.
But what surprised me even more is that, as with the atomic bomb, the breakthrough required only the creative genius of a few theorists: it was made not on expensive apparatus such as a collider, but with pen and paper, this time in the form of four newly proven theorems.
- Machine learning (ML) in modern artificial intelligence (AI) systems requires vast amounts of data, and training models demands enormous computing power to process it.
- Developers hope to overcome the limits that Moore's Law places on the growth of computing power by switching to quantum computers and quantum machine learning.
- In quantum machine learning, the number of parameters, or variables, is determined by the size of a mathematical structure called the Hilbert space, which becomes enormous at large numbers of qubits (a qubit is the basic unit of quantum computation, analogous to the bit in classical computing). This size of the Hilbert space makes quantum ML almost impossible to compute.
- Until now, it was assumed that since the Hilbert space of just 30 qubits contains about a billion states, a training model searching that space would need about a billion data points (see the quick check below).
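To make the scale concrete, here is a quick back-of-the-envelope check (my own Python illustration, not from the Los Alamos paper): the dimension of the Hilbert space doubles with every added qubit.

```python
# Hilbert space dimension grows as 2^n for n qubits.
n_qubits = 30
dim = 2 ** n_qubits
print(f"{n_qubits} qubits -> Hilbert space dimension {dim:,}")
# 30 qubits -> Hilbert space dimension 1,073,741,824 (about a billion states)
```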
Here is the breakthrough. From the four theorems proved by the theorists at the Los Alamos laboratory, it follows that:
With quantum ML, you do not need to traverse the entire Hilbert space; it is enough to sample roughly as many data points as there are parameters in the model.
For many models, the number of parameters is approximately equal to the number of qubits; that is, a quantum computer with 30 qubits needs only about 30 data points.
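To see what this change in scaling buys, here is a small illustrative comparison (my own sketch, assuming the old requirement grows with the Hilbert space dimension and the new one with the parameter count, taken here to be roughly the qubit count, as stated above):

```python
# Naive data requirement (one point per Hilbert-space dimension) versus the
# parameter-scaled requirement implied by the theorems.
for n_qubits in (10, 20, 30):
    naive_points = 2 ** n_qubits   # scales with the Hilbert space
    new_points = n_qubits          # scales with the parameter count
    print(f"{n_qubits:>2} qubits: ~{naive_points:>13,} points -> ~{new_points} points")
```

The gap widens exponentially: every extra qubit doubles the naive requirement but adds only about one data point under the new bound.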
The significance of this breakthrough is enormous. It guarantees efficiency even for classical algorithms that emulate quantum AI models. In that case the training data and model compilation can be processed on classical computers (which simplifies the workflow), and the trained ML model then runs on a quantum computer.
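As a rough sketch of what such a hybrid workflow can look like (hypothetical code of my own, not the Los Alamos team's: a two-qubit variational circuit simulated classically with NumPy and trained on as many data points as it has parameters; the optimized parameters could then, in principle, be loaded onto quantum hardware):

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control and qubit 1 as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def model(x, params):
    """Encode input x, apply a trainable layer, return <Z> on qubit 1.

    Analytically this circuit computes cos(x + params[0]) * cos(params[1]).
    """
    state = np.kron(ry(x) @ np.array([1.0, 0.0]),          # data encoding on qubit 0
                    np.array([1.0, 0.0]))                  # qubit 1 starts in |0>
    state = np.kron(ry(params[0]), ry(params[1])) @ state  # trainable rotations
    state = CNOT @ state                                   # entangling gate
    z1 = np.diag([1.0, -1.0, 1.0, -1.0])                   # Z observable on qubit 1
    return state @ z1 @ state

def loss(params, xs, ys):
    preds = np.array([model(x, params) for x in xs])
    return np.mean((preds - ys) ** 2)

# Training set with as many points as trainable parameters (two), echoing
# the parameter-scaled data requirement described above.
xs = np.array([0.5, 2.0])
ys = np.cos(xs)  # a target this circuit can represent exactly

rng = np.random.default_rng(0)
params = rng.uniform(-1.0, 1.0, size=2)
lr, eps = 0.2, 1e-4
for step in range(500):
    grad = np.zeros_like(params)
    for i in range(len(params)):  # central finite-difference gradient
        hi, lo = params.copy(), params.copy()
        hi[i] += eps
        lo[i] -= eps
        grad[i] = (loss(hi, xs, ys) - loss(lo, xs, ys)) / (2 * eps)
    params -= lr * grad

print("final loss:", loss(params, xs, ys))  # should be close to 0
```

Everything above runs on a classical machine; only the final, already-trained circuit would need quantum hardware, which is exactly the division of labor described in the paragraph above.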
This significantly relaxes the requirements on a quantum computer's noise and error levels when running meaningful quantum simulations.
From this it is clear that we are getting ever closer to a practical realization of quantum advantage.