Let’s start our journey!
Quantum computing has been on the rise over the last few years, in part thanks to tech giants such as Google, IBM and Alibaba, which are pouring big money into R&D in the race to build a quantum processor capable of solving practical tasks.
Last year Google’s Quantum AI lab achieved a significant milestone by demonstrating quantum supremacy in a carefully designed experiment. In this experiment, Google’s quantum processor Sycamore solved a specific mathematical problem in a matter of minutes, a task that would be intractable for the best modern supercomputers.
Although some aspects of the experiment are still being debated by scientists (e.g. IBM proposed a way to speed up the computation on a supercomputer by utilizing additional HDD storage), the “quantum supremacy” experiment is a nice proof of principle. In other words, quantum computing is now becoming more and more of an engineering problem rather than a conceptual scientific one.
Interestingly, quantum hardware has striking similarities to dedicated chips such as FPGAs and ASICs that are used for domain-specific applications. This analogy works particularly well for superconducting quantum processors designed using integrated-circuit technology, and the Sycamore processor is one of them.
When running algorithms on quantum hardware, scientists use low-level programming languages that are very similar to assembly language. A quantum program consists of a sequence of instructions, each applying a quantum gate to one or a few individual qubits.
A quantum gate can be thought of as an elementary computational unit, by analogy with logic gates (NAND, OR, XOR, etc.); loosely speaking, it is the quantum twin of a transistor.
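To make this concrete, here is a minimal sketch in plain NumPy (not any real quantum assembly dialect) of what such a gate-level program looks like: a list of instructions, each applying a gate to specific qubits of a small state vector.

```python
import numpy as np

# Single-qubit gates as 2x2 unitary matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
I = np.eye(2)

# Two-qubit CNOT gate (control = qubit 0, target = qubit 1)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# A "program": a sequence of gate instructions, like low-level assembly
program = [
    ("H", 0),        # put qubit 0 into superposition
    ("CNOT", 0, 1),  # entangle qubit 0 with qubit 1
]

# Initial 2-qubit state |00> as a 4-dimensional vector
state = np.zeros(4)
state[0] = 1.0

for instr in program:
    if instr[0] == "H":
        state = np.kron(H, I) @ state   # H on qubit 0, identity on qubit 1
    elif instr[0] == "CNOT":
        state = CNOT @ state

print(state)  # [0.707..., 0, 0, 0.707...] -- the Bell state (|00> + |11>)/sqrt(2)
```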
Quantum computing in three minutes ;)
While there is a lot of popular literature explaining what quantum computing is, I would like to give a very short intro that summarises the main idea of quantum computation.
The core idea of quantum computation can be formulated as follows:
Classical computers perform operations using boolean logic, while quantum computers directly operate with high-dimensional complex-valued matrices and vectors.
Matrix and vector operations are built into the laws of quantum mechanics, and the quantum mechanical world speaks to us in the language of linear algebra.
What is really remarkable is that quantum operations acting on a system of many particles correspond to manipulations of exponentially large matrices: the state of n qubits is a vector of 2^n complex amplitudes, and a gate acting on that system is a 2^n × 2^n unitary matrix.
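As a rough illustration, the following NumPy sketch (the helper name gate_on_qubit is just illustrative) embeds a single-qubit gate into an n-qubit system via Kronecker products and prints how quickly the matrix dimension grows.

```python
import numpy as np
from functools import reduce

# A single-qubit gate is a 2x2 matrix; acting within an n-qubit system
# it becomes a 2^n x 2^n matrix built from Kronecker (tensor) products.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def gate_on_qubit(gate, qubit, n_qubits):
    """Embed a single-qubit gate into the full 2^n-dimensional space."""
    factors = [gate if q == qubit else I for q in range(n_qubits)]
    return reduce(np.kron, factors)

for n in [2, 5, 10]:
    full_matrix = gate_on_qubit(H, 0, n)
    print(n, "qubits ->", full_matrix.shape)  # (4, 4), (32, 32), (1024, 1024)
```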
Since linear algebra lies at the foundation of the vast majority of computer programs used in everyday life and is the lifeblood of machine learning algorithms, it is very tempting to employ quantum mechanics for these sorts of tasks.
Even though the idea sounds great, there is a catch. Although quantum mechanics inherently performs matrix operations of exponential size, we do not have direct access to the results of the computation.
The only way to extract the results of a calculation is to perform a measurement (readout) of the quantum system after the computation. According to the axioms of quantum mechanics, the results of the measurement are probabilistic, with probabilities given by the squared magnitudes of the elements of the exponentially large state vector (wave function). So the rules of the game tell us that the result of the computation can be obtained only via probabilistic sampling, which gives us only partial access to the entire exponentially large computational space. This fundamental restriction makes exploiting operations with exponentially large matrices tricky, although still possible.
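Here is a small NumPy sketch of what readout looks like from the programmer’s point of view: the probabilities are the squared magnitudes of the amplitudes, and all we ever get back are sampled bitstrings, never the full state vector.

```python
import numpy as np

# State vector of 3 qubits: 2^3 = 8 complex amplitudes.
# Here: an equal superposition of |000> and |111> (a GHZ state).
state = np.zeros(8, dtype=complex)
state[0] = state[7] = 1 / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2

# Readout: we only ever see samples (bitstrings), never the full vector.
samples = np.random.choice(8, size=10, p=probs)
print([format(int(s), "03b") for s in samples])  # e.g. ['000', '111', '111', ...]
```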
There is an alternative way to think about quantum computers. Imagine you are calculating the properties of a specific molecule, e.g. a caffeine molecule, or predicting the electronic properties of a new material. Solving these sorts of problems is incredibly important for the pharmaceutical industry and for the design of novel materials with desired properties. Existing classical computers struggle, and quite often fail miserably, at executing such in-silico tasks due to the exponential complexity of the simulations: the computational cost grows exponentially with the number of particles in the quantum system, e.g. the number of atoms in a caffeine molecule.
Moreover, each atom in a molecule may contain dozens of interacting electrons, and we need to simulate all of them, which is an intractable computational task on its own.
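A quick back-of-the-envelope calculation shows why brute-force simulation breaks down: just storing the state vector of n interacting two-level particles already requires 2^n complex amplitudes.

```python
# Memory needed just to store the state vector of n interacting
# two-level particles (complex128, 16 bytes per amplitude).
for n in [10, 30, 50, 100]:
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30
    print(f"{n} particles: 2^{n} amplitudes ~ {gib:.3g} GiB")
```

Around 30 particles we already need tens of gigabytes, and at 50 we are into millions of gigabytes, before we have performed a single operation.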
How to deal with such complexity?
We don’t know! But nature solves these problems all the time.
Why can’t we just use another quantum system, one that we can control and manipulate very well, to simulate the system of interest?
This idea was formulated by Richard Feynman, and it marked the beginning of the era of quantum computation.
Quantum hardware can be imagined as a system of elementary “atoms” of computation, qubits (the quantum counterparts of bits), that are coupled according to a specific connectivity graph; operations are performed by applying a sequence of quantum gates to subsets of these qubits.
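As a toy illustration (with a made-up 5-qubit ring layout, not any specific real processor), one can represent the connectivity graph as a set of edges and check which two-qubit gates in a circuit respect it.

```python
# A toy 5-qubit device with qubits coupled in a ring
# (a hypothetical layout, not any specific real processor).
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}
coupled = {tuple(sorted(e)) for e in edges}

# A circuit as a sequence of gate instructions.
circuit = [("H", (0,)), ("CNOT", (0, 1)), ("CNOT", (0, 3))]

for gate, qubits in circuit:
    if len(qubits) == 2 and tuple(sorted(qubits)) not in coupled:
        print(f"{gate} on {qubits}: qubits not directly coupled -> needs routing")
    else:
        print(f"{gate} on {qubits}: ok")
```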
Today, there are multiple candidate quantum hardware platforms competing with each other. The most well-known platforms are based on superconducting circuits, ion-trap devices, trapped neutral atoms, and photonic chips.
This is the second post in a series explaining quantum computing, quantum compilation, and the benchmarking of quantum algorithms, frameworks and quantum hardware “in a nutshell”.
Let’s go on our quantum journey!
To be continued. We will come back with the next episode soon!