Quantum Computing — The Basics, The Bad, and the Solution

Silen Naihin
12 min read · Oct 29, 2018

You’re scrolling the web, looking at trending topics, and you hear about these “quantum computers” that can do amazing things faster than normal computers will ever be able to. You may have even heard that a quantum computer with 300 qubits could hold more states at once than there are atoms in the universe. Did you know that Google recently released a 72-qubit quantum processor? Only a few hundred qubits away! Not quite.

The Truth Is

  • Quantum computers have many limitations such as decoherence and noise
  • Completely fault-tolerant quantum computers are still a long way off

On the bright side, solutions are being developed, and when we get there, the end result will be worth it.

Let’s Start With The Basics

Normal computers transfer data through transistors. Transistors are essentially just switches that are on when an electrical current passes through them and off when there isn’t one. Having two states like this is called binary. In a normal computer, the value 1 is assigned to the on state and 0 to the off state. 1 and 0 are just placeholders for two different states, which we refer to as “bits”. We could just as easily have “dogs” and “cats” representing the two states.

A quantum computer doesn’t exactly use binary, and it isn’t easy to explain, so bear with me. Quantum computers are built on the rules that govern quantum mechanics, the rules that apply at the subatomic scale. A quantum computer works by changing the state of a subatomic particle and using the state of that particle to reflect a data value. Confused?

Quantum computers use “qubits” instead of bits. They can range from an atom (phosphorus: 1.92 x 10^–10 m) to an electron (2.82 x 10^–15 m) in size. Qubits can be implemented with electron spins the way bits are implemented by transistors. “Spin” isn’t a perfectly accurate way of describing it, because the subatomic particle at hand isn’t actually spinning. Spin really refers to a magnetic moment: in this case, a tiny magnet of subatomic dimensions created by a more complex system of charges.

In quantum computing, we refer to 1’s and 0’s as spin-up and spin-down states, with spin down representing the point of lowest energy. While in a binary system each bit can hold one piece of information (1 or 0), in a quantum system each qubit can reflect a wide variety of states. Quantum “transistors” detect small changes in energy, so to initialize a qubit in a quantum system you set it to the lowest energy state using waves at a specific frequency, unique to each qubit.

A qubit in the |0> state

The technical term for this in-between condition is superposition. It means the qubit is both a 1 and a 0 at the same time, each to a certain degree. For every computation, fresh qubits must be used: you initialize them to the |0> state (the lowest energy state) and then place them into superposition. The mathematical representation of superposition is:

|ψ> = α|0> + β|1>

where the trident-looking symbol ψ (psi) stands for the state of the superposition, |0> and |1> are the two basis states, and α and β are the amplitudes, numbers that say how much of each basis state the qubit contains. I’ll keep the math to a minimum in this article; this is meant as an introduction to the way the notation looks so you aren’t scared away by a more math-heavy article. Don’t worry, the math doesn’t get any harder from here.

As soon as a qubit is measured, it collapses to either 0 or 1, with probabilities governed by the amplitudes α and β. The way to represent this is |α|² + |β|² = 1, where |α|² and |β|² are the probabilities of measuring 0 and 1, respectively.

Visual representation of superposition
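
To make the notation concrete, here is a minimal sketch in Python (using numpy) of a qubit state and the rule above; the amplitude values are arbitrary examples.

import numpy as np

# A qubit state |psi> = alpha|0> + beta|1>, stored as a length-2 vector.
# The amplitudes below are arbitrary example values.
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)
psi = np.array([alpha, beta], dtype=complex)

# Born rule: the chance of measuring 0 is |alpha|^2, of measuring 1 is |beta|^2.
p0, p1 = abs(psi[0])**2, abs(psi[1])**2
print(p0, p1)                    # ~0.7 ~0.3
print(np.isclose(p0 + p1, 1))    # True: |alpha|^2 + |beta|^2 = 1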

The way we manipulate qubits is through elementary gates, which are unitary transformations (transformations that preserve the inner product) chosen from a continuous set. To simplify, they are essentially just matrices that perform a certain action on a qubit. For example, the Hadamard gate takes a qubit in a definite state and puts it into an equal superposition. The gate is described by the following matrix:

Hadamard gate: H = 1/√2 [[1, 1], [1, −1]]
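
As a sketch of what this matrix does in practice, here it is in numpy, applied to a qubit initialized to |0>:

import numpy as np

# The Hadamard gate sends |0> to the equal superposition (|0> + |1>)/sqrt(2).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0])          # the |0> state
psi = H @ ket0
print(psi)                       # [0.7071 0.7071]
print(np.abs(psi)**2)            # [0.5 0.5] -> 50/50 odds on measurement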

Before I get into another type of gate, let me describe another interesting quantum phenomenon, called entanglement. Entanglement is another way that quantum computers get an edge on classical computers. I won’t be going over how here, but I will describe the basics. Quantum entanglement occurs when a system of particles interacts in such a way that the particles can no longer be described as independent systems. The distance between the qubits does not matter in entanglement.

To demonstrate, imagine two qubits, one in China and one in Canada. Let the Canadian qubit stand for α and the Chinese qubit for β. When qubit α is measured as being 1, qubit β in China instantaneously collapses in correspondence with how the qubits were put into entanglement. So if the qubits were entangled using the 1,0 states, qubit β would instantly collapse to 0. To reiterate, this is true no matter how far apart the qubits are.

Representation of the effect a gate has on a qubit

Alright, now we can get into the CNOT gate. The CNOT gate takes two inputs, a control qubit and a target qubit, and can entangle or unentangle them. It performs the following operation: if the control qubit is 0, it does nothing; if the control qubit is 1, the target qubit is flipped. The gate is described by the following matrix:

CNOT gate: [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
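
To see entanglement and the CNOT gate together, here is a small numpy sketch that builds the textbook entangled pair (a Bell state): a Hadamard on the control qubit, then a CNOT.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# CNOT: flips the target qubit only when the control qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1, 0, 0, 0])       # two qubits, both initialized to |0>
psi = CNOT @ np.kron(H, I) @ ket00   # Hadamard on the control, then CNOT

print(psi)   # [0.7071 0 0 0.7071]: the state (|00> + |11>)/sqrt(2)

The only possible measurement outcomes of this state are 00 and 11, so reading one qubit tells you the other instantly, which is exactly the China/Canada scenario above.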

The Process For Running A Program On A Quantum Computer:

  1. Initialize the qubits to their lowest energy state (|0>)
  2. Apply gates to those qubits (often referred to as the black box: we can’t look at what happens in here, because if we tried to, the qubits would collapse)
  3. Measure the qubits (a toy simulation of all three steps follows below)
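
Here is a minimal end-to-end sketch of those three steps in numpy: initialize a qubit to |0>, apply a Hadamard gate, and sample measurements. The shot count of 1,000 is an arbitrary choice.

import numpy as np

rng = np.random.default_rng()
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Step 1: initialize the qubit to its lowest energy state |0>.
psi = np.array([1, 0], dtype=complex)

# Step 2: apply gates (here, a single Hadamard puts it into superposition).
psi = H @ psi

# Step 3: measure. The wave function collapses to 0 or 1 with
# probabilities |alpha|^2 and |beta|^2, so we sample from those.
probs = np.abs(psi)**2
shots = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(shots))        # roughly [500 500]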

Previously I used spin to describe the implementation of qubits because it is the easiest to understand, but this is just one type of quantum processor. There are variations in the manufacturing of quantum computers, called platforms (trapped ions, topological qubits, diamond vacancies, etc.), but all quantum computers fundamentally work the same way. The way the qubits are measured also depends on the platform used. There is a wide variety of platforms because it is unclear which will be best going into the future. I won’t get into how every type of hardware works here, but each platform has its own set of pros and cons.

Different types of platforms

To sum it all up, superposition and entanglement make quantum computers faster than normal computers at specialized applications. What this equates to is that the state space doubles with every qubit added: n qubits are described by 2^n values. This is an exponential advantage, though in practice it is limited by several factors, including wave function collapse and noise. To put it into perspective, 100 normal bits hold just 100 pieces of information, while 100 ideal qubits (qubits we get in a computer simulation: perfect, and not influenced by the external factors that affect a physical qubit) correspond to 1,267,650,600,228,229,401,496,703,205,376 pieces of information. This is just a quick example of how quantum computers are more powerful.
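
The arithmetic behind that number is simply 2 raised to the number of qubits:

# n classical bits hold n values; n qubits are described by 2**n amplitudes.
for n in (1, 2, 10, 100, 300):
    print(n, 2**n)

# 2**100 = 1267650600228229401496703205376, the figure quoted above, and
# 2**300 (~2 x 10**90) exceeds common estimates of the number of atoms
# in the observable universe (~10**80).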

Unveiling The Problems With Quantum Computing

Now that you understand the basics of quantum computing, it’s time to jump into the problems with it. The main culprit is noise. “Noise” refers to the stability of the qubit, which may be compromised by several factors. You may also hear noise referred to as decoherence or decay; they mean roughly the same thing.

To demonstrate what decoherence is, think of qubits as cans of soda. You can carry around 6 closed cans of soda with ease, without worrying about spilling any liquid. However, once you open the cans, it gets much harder to carry the soda without spilling it. Now start exponentially increasing the number of open cans you are carrying. It gets harder and harder to carry the soda without spilling it, limiting your movement. To make matters worse, even if you don’t spill a drop, the act of opening a can means the soda inside will evaporate over time.

6 pack of soda

Decoherence occurs when the qubits (cans) lose information (liquid) over time due to the environment they are in (spilling the soda, drinking it, imperfections, etc.). The race against time doesn’t start until you do something with the qubits, like measure them or perform a computation, similar to opening the cans. How fast it happens also depends on the qubit platform used, which you can think of as how strong the can holder is and what material the cans are made of.

There’s a certain threshold for noise, called the fault-tolerance threshold: the theoretical error rate below which a quantum computer becomes reliable enough to be considered useful.

In reality, estimates suggest it will require anywhere from 50 to 500 noisy physical qubits to stand in for one ideal qubit. That means that for the ideal 300-qubit goal, we would actually need up to 150,000 noisy qubits. Even that number could be too low.
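
The overhead arithmetic is simple; here it is using the pessimistic end of that range:

# Rough overhead estimate using the pessimistic end of the 50-500 range.
physical_per_logical = 500
logical_target = 300             # the "ideal 300 qubit" goal from above
print(logical_target * physical_per_logical)   # 150000 noisy qubits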

This means that Google’s 72-qubit quantum computer, “Bristlecone”, is basically useless on its own. Every extra qubit you add to increase the computing power also raises the overall vulnerability to decoherence, in the same way that the bigger an object is, the less quantum effects play on it. This makes scalability a major problem: we need more qubits to get more power, but more qubits means more noise.

This is the quantum physics version of “Mo’ money, mo’ problems” by the rapper The Notorious B.I.G.

You could try to relate this to the research into error correction for normal computers back when they were being developed, but because of their fragile states and the difficulty of controlling them, quantum computers are much harder to work with. Many physicists, scientists, and mathematicians think, for various reasons, that quantum computing is just theoretical nonsense.

Quantum vs Classical Error Correction

Interaction With Environment — Decoherence

Quantum computers are extremely sensitive to interaction with the surrounding environment, because any interaction or measurement leads to the collapse of the wave function. Qubits can even accidentally entangle with the environment around them! The term “environment” covers effects such as radiation, photons, sound waves, vibrations, thermal energy, and magnetic fields. Any interaction with these could disrupt the computation.

A qubit may also undergo unintended and random transformations during the process. Even if we do our best to isolate the qubit (for example, cooling quantum computers close to absolute zero), we still have to manipulate the qubits, which again introduces errors.

Huge machines are used just to cool down the chip

There are 4 different matrices (unintended transformations/gates) that can act on a qubit due to the environment: the identity (no error), a bit flip (X), a phase flip (Z), and both at once (Y). What you can do is trace out your environment and attach a probability to each of these gates happening during a process. In doing that, we can develop a quantum error correction code that protects against those possible errors.
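
Here is a toy numpy sketch of such an error channel acting on a qubit; the per-step error probabilities are made up for illustration.

import numpy as np

rng = np.random.default_rng()

I = np.eye(2, dtype=complex)                    # no error
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip
Y = np.array([[0, -1j], [1j, 0]])               # bit flip + phase flip

# Per-step probabilities of each "gate" the environment might apply.
# These numbers are invented for illustration only.
errors = [I, X, Y, Z]
probs = [0.97, 0.01, 0.01, 0.01]

psi = np.array([1, 0], dtype=complex)           # start in |0>
for _ in range(100):
    E = errors[rng.choice(4, p=probs)]          # environment picks an error
    psi = E @ psi

print(np.abs(psi)**2)   # with bad luck, the qubit has flipped to |1>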

One method being researched is the use of static fields (electric, magnetic, gravitational) in the environment combined with continuous-wave (electromagnetic) driving fields. This is called the dynamical decoupling approach. The way it works is that a continuous but periodic field causes the qubit to flip its spin at minute intervals. That way, the qubit does not have time to entangle with its environment before the next transformation is applied. This is currently being tested and has been successful thus far.
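
As a rough toy model of why periodic flipping helps, consider an ensemble of qubits, each picking up phase from a slightly different static field. Flipping each qubit halfway through (a Hahn-echo-style sequence) negates the phase accumulated so far, so the second half cancels the first. This is a classical caricature with arbitrary numbers, not a full quantum simulation.

import numpy as np

rng = np.random.default_rng(0)

# An ensemble of qubits, each seeing a slightly different static field,
# so each accumulates phase at a slightly different (random) rate.
detunings = rng.normal(0.0, 1.0, size=10_000)
t = 5.0

# Free evolution: the phases spread out and average coherence decays.
free = np.exp(1j * detunings * t)
print(abs(free.mean()))          # ~0: coherence lost

# Decoupling: a flip at t/2 negates the phase gathered so far, so the
# second half of the evolution cancels the first, qubit by qubit.
echo = np.exp(1j * (detunings * (t/2) - detunings * (t/2)))
print(abs(echo.mean()))          # 1.0: coherence preserved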

Another method is the construction of a qubit in each cell of an optical superlattice (a periodic structure of two or more layers, usually at the nanoscale), so that interactions between qubits can be monitored and controlled. To create decoherence-free subspaces within the superlattice, four neutral atoms are arranged in a square called a plaquette. The plaquettes are formed into a lattice and stacked on top of each other to form a superlattice. The center of each plaquette is shielded from magnetic field fluctuations. Superlattices have been constructed before, but without encoding each cell with a qubit.

Structure of superlattice and magnetic field fluctuations

Reliable Execution Of Gates

Currently, we don’t have the means to perform these transformations with perfect accuracy, so every transformation has small imperfections. With more qubits and more transformations, these errors add up, eventually leading to a failure of the computation. Quantum algorithms can be designed to compensate for this.
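
A quick sketch of how tiny gate imperfections compound: suppose every X gate over-rotates by a small angle (the 0.01 radians used here is an arbitrary illustrative value). The overlap with the intended state drifts away from 1 as errors accumulate.

import numpy as np

def rx(theta):
    # Rotation of a single qubit about the X axis by angle theta.
    return np.array([[np.cos(theta/2), -1j*np.sin(theta/2)],
                     [-1j*np.sin(theta/2), np.cos(theta/2)]])

eps = 0.01                           # over-rotation per gate (made up)
ideal, noisy = rx(np.pi), rx(np.pi + eps)

psi_ideal = np.array([1, 0], dtype=complex)
psi_noisy = np.array([1, 0], dtype=complex)
for n in range(1, 201):
    psi_ideal = ideal @ psi_ideal
    psi_noisy = noisy @ psi_noisy
    if n % 50 == 0:
        fidelity = abs(np.vdot(psi_ideal, psi_noisy))**2
        print(n, round(fidelity, 3))   # drifts away from 1.0 as n grows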

Failures in execution can also happen because the qubits may already be entangled with the environment before the gates are even applied.

Additionally, we don’t know with 100% certainty that a qubit is in the |0> state once we initialize it, because measuring the qubit would collapse its wave function. That means the qubit could be slightly off the |0> state to begin with, and that slight error compounds with every transformation (gate). If you could clone the qubit, you could find its true state by performing as many experiments as you want on the clones. Unfortunately, the no-cloning theorem states that you can’t clone a qubit in an unknown state.

Measurement

The sole act of measuring a qubit causes it to lose information because, as stated previously, as soon as you observe a qubit, its wave function collapses. On top of that, in some cases you have microseconds to retrieve the information before the qubit “decoheres”. This depends on the platform of the quantum computer. The solutions here overlap with those in the “interaction with the environment” section.
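
To put “microseconds” in perspective, a common simple model has coherence decaying exponentially with a platform-dependent time constant T2; the 50-microsecond value below is purely illustrative.

import numpy as np

T2 = 50e-6   # hypothetical coherence time of 50 microseconds
for t in (1e-6, 10e-6, 50e-6, 100e-6):
    print(f"{t*1e6:>5.0f} us: coherence ~ {np.exp(-t/T2):.2f}")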

Imperfections In Gates

We currently do not have manufacturing processes that can build a quantum gate with atomic precision, so naturally there will be defects in the system. The approach we currently use is the top-down approach, in which bulk materials are reduced to the nanoscale (1–100 nanometers). The imperfections in such a gate are atoms in size, and while that may sound small, it matters when quantum computers operate at the subatomic scale. This could be solved by advancements in nanotechnology and the bottom-up approach, which would give us full control of the build, down to the atom. It is currently being developed but is inefficient to do right now: it is estimated that making a processor this way today would cost around a trillion dollars.

Nanotechnology bottom-up approach uses atoms as building blocks

Final Words

Many think that the math and hardware have developed enough that there is very little chance of this all turning out to be a scam. There is a common belief that there are ways of solving all of these problems, but the truth is, we don’t actually know. Only one thing is certain in the quantum realm: we are a long way away from a truly fault-tolerant computer.

Key Takeaways

  • Quantum computers have the potential to be very powerful
  • There are many difficulties in creating a fault-tolerant quantum computer
  • We are still far away from seeing impactful real-life use cases

So why should we even try? Why are people even developing these things? In my next article, I will explain the practical applications of quantum computing and argue that it is worth investing the time and resources in the development of quantum computers.

Thanks for reading!

I would really appreciate it if you could

  • Clap to show your support
  • Follow me on LinkedIn and Medium (Silen Naihin)
  • Give constructive criticism in the comments

This is my first article, and I would love some exposure. Thanks again!

Email me at silen.naihin@gmail.com for references.
