Google’s goal this time is ambitious: to build the world’s most powerful quantum supercomputer, one capable of tackling significant global challenges. The Mountain View giant has given itself a deadline to reach this milestone and, above all, a new state-of-the-art laboratory.
By 2029, researchers on the new campus will be tasked with building a room-sized quantum supercomputer equipped with one million physical qubits, a result far beyond current quantum computers, which typically contain only a few dozen qubits.
Google’s new machine will use quantum error correction, reducing the inaccuracies inherent in quantum hardware. A supercomputer with this computing power could enable a new kind of computing powerful enough to help design cutting-edge medical treatments, develop sustainable solutions to climate change, and contribute to technological progress in virtually any sector.
Google: A Hi-Tech Campus For The Quantum Supercomputer
Developing the quantum supercomputers of the future, equipped with millions of physical qubits, requires a state-of-the-art laboratory and the best scientists and engineers. During its 2021 developer conference, Google announced the creation of the Quantum AI Campus, a quantum computing laboratory in Santa Barbara where hundreds of researchers will join forces to reach the 2029 goal and push new boundaries toward quantum supremacy.
One of the researchers’ main tasks will be to help Google win the quantum computing race against competitors such as IBM and Honeywell by turning these machines into practical, usable tools. To do this, the fundamental units of computation on which these machines are based must be made more reliable: qubits, the quantum equivalent of classical bits.
The biggest problem with qubits is that they are highly susceptible to interference from their environment, which introduces errors into calculations. In the Santa Barbara laboratories, researchers will work to perfect error correction technology and make quantum computers faster and more reliable at processing data.
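Classical error-correcting codes illustrate the principle: store one bit redundantly and a majority vote can undo a single flip. The sketch below is a classical analogy only — real quantum error correction uses stabilizer codes such as the surface code and cannot simply copy qubits — and the error rate and names are illustrative.

```python
import random

def encode(bit):
    """Encode one logical bit as three physical copies (3-bit repetition code)."""
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob):
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote: corrects any single bit flip."""
    return 1 if sum(bits) >= 2 else 0

# With per-bit error rate p, the encoded bit fails only when 2+ copies flip,
# so the logical error rate drops to roughly 3*p^2 for small p.
p = 0.05
trials = 100_000
random.seed(0)
raw_errors = sum(noisy_channel([0], p)[0] != 0 for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
print(raw_errors / trials)    # close to 0.05
print(coded_errors / trials)  # close to 3 * 0.05**2 ≈ 0.0073
```

The same trade-off drives quantum designs: redundancy buys reliability at the cost of many more physical units per logical one.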
Quantum Supercomputers: Why They Are So Important
Nature and technological progress confront us with increasingly complex problems that demand ever greater computing power. Quantum computing promises a solution, but despite rapid progress in recent years, it cannot yet handle all types of computation effectively.
One of the major limitations of quantum computing is correcting the processing errors introduced by the inherent fragility of qubits, the fundamental units of quantum computers. In general, several physical qubits must be combined to create a single reliable working qubit, called a logical qubit. According to Google’s research, roughly 1,000 physical qubits will be needed per logical qubit to suppress errors.
This means that to build a quantum supercomputer with 1,000 logical qubits, and thus keep errors caused by environmental interference to a minimum, Google will need a machine with at least one million physical qubits — a real challenge, considering that current quantum computers have just a few dozen physical qubits.
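The arithmetic behind these figures is straightforward. A minimal sketch, using the roughly 1,000-physical-per-logical ratio cited above and, for comparison, the 53 physical qubits of Google’s 2019 Sycamore processor:

```python
# Back-of-the-envelope overhead for error-corrected quantum computing.
PHYSICAL_PER_LOGICAL = 1_000   # physical qubits consumed by one logical qubit
TARGET_LOGICAL = 1_000         # logical qubits for a useful machine

total_physical = TARGET_LOGICAL * PHYSICAL_PER_LOGICAL
print(total_physical)          # 1000000 — one million physical qubits

# Google's 2019 Sycamore chip had 53 physical qubits.
current = 53
print(round(total_physical / current))  # ≈ 18868x scale-up required
```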
Google, Not Just Quantum Supercomputers
Google’s future, however, holds more than quantum supercomputers. The development of these machines goes hand in hand with the Mountain View giant’s advances in AI acceleration hardware. During the same 2021 developer conference, Google revealed that it was working on a new generation of its custom tensor processing units (TPUs), designed for unprecedented computing power.
Thanks to Google’s AI, designing the layout of the new processors took just a few hours instead of months, with results judged equal to or better than those produced by human engineers. This matters because a chip’s performance depends heavily on the physical arrangement of its components, and drawing the circuit layout is a complex task.
Automating this process was far from easy: the researchers trained a neural network using reinforcement learning, enabling the AI to optimize the chip plan by weighing the trade-offs between performance, power consumption, and size for each arrangement of components.
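To make the idea concrete, the toy sketch below scores candidate layouts by total wire length and improves them step by step. It uses simple hill climbing as a stand-in for the deep reinforcement learning Google actually employed, and the netlist, grid size, and component names are all invented:

```python
import random

# Toy floorplanning: place 4 connected components on a 3x3 grid so that
# total Manhattan wire length between connected pairs is minimized.
NETS = [("cpu", "cache"), ("cpu", "io"), ("cache", "mem"), ("mem", "io")]
COMPONENTS = ["cpu", "cache", "mem", "io"]
CELLS = [(x, y) for x in range(3) for y in range(3)]

def wire_length(placement):
    """Sum of Manhattan distances over all nets (lower is better)."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def hill_climb(steps=2000, seed=0):
    rng = random.Random(seed)
    placement = dict(zip(COMPONENTS, rng.sample(CELLS, len(COMPONENTS))))
    best = wire_length(placement)
    for _ in range(steps):
        comp = rng.choice(COMPONENTS)             # propose moving one component
        cell = rng.choice(CELLS)
        occupant = next((c for c, p in placement.items() if p == cell), None)
        old = placement[comp]
        placement[comp] = cell
        if occupant and occupant != comp:
            placement[occupant] = old             # cell taken: swap instead
        cost = wire_length(placement)
        if cost <= best:
            best = cost                           # keep improvements/plateaus
        else:
            placement[comp] = old                 # revert worse moves
            if occupant and occupant != comp:
                placement[occupant] = cell
    return placement, best

layout, cost = hill_climb()
print(cost)  # low total wire length (the minimum possible here is 4)
```

Google’s real system replaced this greedy search with a learned policy that generalizes across chips, but the core loop — propose an arrangement, score it, prefer better ones — is the same.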
In less than six hours, the algorithm solved this complex “puzzle”, producing a working chip with the best-performing configuration and saving thousands of hours of human work. With these capabilities, Big G is now focused on the quantum supercomputer challenge, and its 2029 target does not seem at all impossible for the hi-tech giant.