Quantum circuits (Part 1)

31st March, 2018

I am not at all a physicist, and my knowledge of quantum physics in particular comes solely from undergraduate courses that I followed years ago, and any reading I can get done when feeling mathematical but not inclined to work on my thesis. However, after scanning through some papers by Bartlett, Baez, Lauda, and Lurie, my interest in quantum physics, and quantum computing especially, has come back with a vengeance.


The following is aimed at people who have maybe read some things about quantum physics before and know some linear algebra.

Quantum computing has fascinated me ever since I first studied it at university, despite being a self-proclaimed ‘non-physicist’, swearing to do only ‘mathematical maths’ and none of this applied stuff (a shameful phase that I am really glad I outgrew). This is without a doubt almost entirely due to the incredible teaching of Artur Ekert, whose lectures were probably the best that I ever experienced as an undergrad/masters student. Really, most of what I’m going to say comes from notes I took in these lectures, so hopefully I don’t distort them too much.

One of the reasons that I’m posting this, apart from to help me to not forget it, is that quantum physics has such a strong link to $n$-category theory, but I don’t really know how quantum computing fits into this nPOV picture. As a naive guess, I would think that there is some ‘string diagram’ approach to all this, but I’d need to sit down and have a think about what exactly that would be… So if anybody has any comments or references then I’d be more than interested to hear them!

Foundational ideas

Before jumping into the jacuzzi that is quantum circuitry, let’s dip our toes in the vast ocean of quantum physics axiomatisation. First of all though, a note on quantum computers in practice. There are a bunch of companies that make ‘quantum computers’ (for example the D-Wave) but these are of a completely different species to those that we’re going to have a look at. These computers work using quantum annealing – a sort of quantum version of thermal annealing; what I’m going to talk about are universal quantum computers. Quantum annealing lets you implement so-called hill-climbing algorithms, but many of the more well-known quantum algorithms, such as Shor’s, cannot be implemented in this way. I hate to be a cynic about something in which I am far from an expert, but it seems to me that calling systems like the D-Wave ‘quantum computers’ is a way to boost sales by using not-entirely-false-but-definitely-misleading buzzwords.1

Another very important point is that quantum algorithm complexity is by no means trivial. Said in another way, there are some things that quantum computers can do so much better than classical ones (e.g. prime factorisation), but they are neither ‘better’ nor ‘faster’ overall: there are some things that they are much worse at than classical computers. Expecting a quantum computer to be able to run arbitrary programs quicker than a classical computer is somewhat akin to expecting an otter to be able to drink a pint of water quicker than a pigeon could, simply because an otter swims faster in water.2

Anyway, with that aside, let’s go get our feet wet.

Quantum interference

Here is an experimental fact that you can read about in who-knows-how-many places and find numerous videos of: if we send a particle through a barrier with two tiny (really really tiny) holes (or slits, to agree with the name double-slit experiment) and place a detector on the other side, it is not necessarily true that the particle reaches the detector with probability $p_1 + p_2$, where $p_i$ is the probability of the particle going through the $i$-th hole. In fact, the probability is given by

$$p = p_1 + p_2 + 2\sqrt{p_1 p_2}\cos(\phi_1 - \phi_2)$$

where $\phi_1$ and $\phi_2$ are numbers, called the phases, associated to the $p_i$ (we describe what exactly they are in a bit). This means that the total probability depends on the relative phase of the system: the value $\phi_1 - \phi_2$. Sometimes this will agree with $p_1 + p_2$, and sometimes not.
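To make this concrete, here is a tiny numerical sketch (the probabilities and phases are made-up values, purely for illustration) of how the relative phase pushes the detection probability above or below $p_1 + p_2$:

```python
import math

def detection_probability(p1, p2, phi1, phi2):
    """Interference formula: p = p1 + p2 + 2*sqrt(p1*p2)*cos(phi1 - phi2)."""
    return p1 + p2 + 2 * math.sqrt(p1 * p2) * math.cos(phi1 - phi2)

p1 = p2 = 0.2  # made-up single-slit probabilities

print(detection_probability(p1, p2, 0.0, 0.0))          # 0.8: constructive, bigger than p1 + p2 = 0.4
print(detection_probability(p1, p2, 0.0, math.pi))      # ~0.0: destructive, smaller than p1 + p2
print(detection_probability(p1, p2, 0.0, math.pi / 2))  # 0.4: happens to agree with p1 + p2
```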

The point is, the Kolmogorov additivity axiom (the probability of two mutually exclusive events is the sum of the probabilities of the constituent events) seems to fail – but why? As a result of decades of thinking and experiments and failed ideas, the theory that seems the ‘most likely’ is that of quantum theory, which says in particular that the particle passes through both slits simultaneously, and so the events of passing through each slit individually are not mutually exclusive at all. This does seem to explain the formula for the probability of detection that we described above: the only way that it can depend on the relative phase $\phi_1 - \phi_2$ is if the particle ‘interacts’ with both paths each time it passes through the barrier.

I’m not going to talk much more about actual quantum physics, since there are so many great references written about it already, and I am also not a physicist, and even less so an experimental quantum physicist. However, it’s good to have this basic understanding of quantum physics to really understand how quantum computers work: in some sense that we will make precise, they follow all computational paths at the same time, and give an answer that depends on all of the results.

It is very natural, reading this, to wonder why it is that we don’t experience quantum effects in our daily lives.3 You can try throwing a ball at a wall with two open windows as much as you want, but you’ll never have it go through both windows simultaneously. The answer is the very thing that’s stopping us from being able to actually build quantum computers with our current technology: decoherence. Simply put, no physical system is really truly separate from the external environment – there is always gravity of far away objects pulling on things, and even being able to look at a physical system means that there are photons passing through and touching things and generally being a nuisance. Decoherence, then, is when the external environment ‘gets in the way’ of our very delicate teeny tiny quantum systems, and influences the interference term (the term with the relative phase) in such a way that we can’t recover it. This also ties in with the fact that, given any ‘quantum theorem’ (i.e. formula or equation for a quantum system), if we make all the ‘quantum terms’ tend to zero then we should recover the corresponding ‘classical theorem’. Later on we will talk about decoherence again, and look at what it means explicitly in terms of quantum circuits, but to begin with we just sort of ignore it and work in an ideal world where we can get rid of it entirely.

Axiomatics

In our discussion of the double-slit experiment we mentioned how there were ‘extra’ numbers $\phi_i$ associated to the probabilities $p_i$, so let’s try to formalise that now. Quantum theory suggests that we work with probability amplitudes instead of probabilities. That is, rather than just saying an event happens with probability $p$ for some real number $0 \le p \le 1$, we say that it has associated probability amplitude $\alpha$, where $\alpha$ is some complex number such that $|\alpha| \le 1$, and then the probability of the event occurring is exactly $|\alpha|^2$. The ways that amplitudes behave are very similar to the ways that probabilities behave: the amplitude associated to two events occurring one after the other is the product of their probability amplitudes; the amplitude associated to an event that can occur in two different ways is the sum of the amplitudes associated to the individual ways. In fact, writing our complex numbers as $\alpha_j = \sqrt{p_j}\,e^{i\phi_j}$, we can recover the probability in the double-slit experiment:

$$|\alpha_1 + \alpha_2|^2 = p_1 + p_2 + 2\sqrt{p_1 p_2}\cos(\phi_1 - \phi_2).$$
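As a quick sanity check, here is a short snippet (with arbitrarily chosen $p_j$ and $\phi_j$) verifying numerically that adding amplitudes and taking the squared modulus reproduces the interference formula:

```python
import cmath
import math

p1, p2 = 0.2, 0.1       # made-up probabilities
phi1, phi2 = 0.7, 2.1   # made-up phases

alpha1 = math.sqrt(p1) * cmath.exp(1j * phi1)
alpha2 = math.sqrt(p2) * cmath.exp(1j * phi2)

# Amplitude of "the event happens via path 1 OR path 2" is the sum of the amplitudes...
prob_from_amplitudes = abs(alpha1 + alpha2) ** 2

# ...and its squared modulus agrees with the interference formula.
prob_from_formula = p1 + p2 + 2 * math.sqrt(p1 * p2) * math.cos(phi1 - phi2)

print(prob_from_amplitudes, prob_from_formula)  # equal up to floating-point error
```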

The other generalisation of classical things that we work with is the idea of a qubit. The classical idea of a bit is that we can store information by encoding things in binary and then, for example, inducing a charge between the plates of a capacitor to represent a 1, and leaving it uncharged to represent a 0. To make this quantum, we can look at storing our data not on capacitors but on electrons, or using polarisation of light. For example, we could say that an electron in its ground state (its lowest energy state) represents a 0, and if it’s in a higher energy state then it represents a 1. So far so good, but what’s new? Well, by shining light of varying frequencies and brightness for varying durations, we can change the electron’s energy state, but if we shine light at, for example, half the brightness, then we will find that our electron is now in a state that can’t be described classically: it’s in a coherent superposition of states. These are states where the electron is in both energy levels at once, similar to how the particle went through both slits at once.

Again, I’m not really the best person to explain this, but the idea is that, experimentally, we can prepare particles in such a way that, when we measure their energy, sometimes we find that they’re in their ground state, and other times they’re in their higher energy state (and anything that we can prepare consistently like this we call a qubit; we call one of the states $|0\rangle$ and the other $|1\rangle$).4 But, you may be saying, how do we know that it isn’t simply the case that some of the particles we prepare are already in one state, and the others in another state? That is, imagine the two following scenarios:

  1. We have an electron in an ‘equal’ superposition (it’s in both states ‘in equal amounts’) and we measure its energy
  2. We have a bag containing two electrons, one in state $|0\rangle$ and the other in state $|1\rangle$. We pick one randomly and measure its energy.

In both cases, when we measure the energy of the particle we will find it to be in state $|0\rangle$ or state $|1\rangle$ with equal probability (50%), so how does this supposed ‘superposition particle’ differ from a particle with a randomly picked state? Well, experimentally, we can show a difference, but maybe more convincingly we can give a good mathematical reason why these two situations are completely different. To do so properly, we need at least the basics of quantum circuitry, but we can describe the proof right now.

If it were the case that sending a pulse of light at the electron that puts it in a 50/50 superposition were the same as simply setting the electron to state $|0\rangle$ or $|1\rangle$ randomly, then firing this same beam of light twice in a row at the same electron should be the same as firing it once. In other words, if we randomly set it to 0 or 1, and then randomly set it again to 0 or 1, it’s the same as randomly setting it to 0 or 1 just once. But (and here’s the kicker), we can show that every single time you start with an electron in state $|0\rangle$ and hit it with this pulse of light two times in a row, what you actually end up with is an electron in state $|1\rangle$. Every single time. It’s a hard thing to understand when you first meet it (and still even after the second and third encounters), but coherent superpositions are not at all the same as a particle being in a randomly chosen state – it really is a particle that is in both states at once.
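The post doesn’t pin down which matrix describes this half-brightness pulse, but a standard choice with exactly this behaviour is the so-called ‘square root of NOT’ gate; here is a small numpy sketch (purely illustrative, the gate being an assumed stand-in for the pulse) showing that one pulse gives a 50/50 superposition while two pulses in a row give state $|1\rangle$ every single time, unlike a random reset:

```python
import numpy as np

# One candidate for the "half-brightness pulse": the square root of NOT gate.
# (A hypothetical choice for illustration; squaring it gives the NOT gate.)
SQRT_NOT = 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])

ket0 = np.array([1, 0], dtype=complex)

after_one_pulse = SQRT_NOT @ ket0
after_two_pulses = SQRT_NOT @ SQRT_NOT @ ket0

print(np.abs(after_one_pulse) ** 2)   # [0.5, 0.5] -> measuring now gives 0 or 1 with equal probability
print(np.abs(after_two_pulses) ** 2)  # [0., 1.]   -> measuring now gives 1 every single time

# Compare: "randomly reset to 0 or 1" twice is still just a 50/50 coin flip.
rng = np.random.default_rng(0)
random_resets = rng.integers(0, 2, size=10_000)  # the second reset forgets the first entirely
print(random_resets.mean())                      # ~0.5, never deterministic
```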

There is a bunch of really exciting ‘quantum philosophy’ that asks questions like, “if quantum theory is wrong in some way, then can any of it still be right, or does it all break down if we find one problem?”, and “how can we tell that there isn’t a hidden variable?”, i.e. “how do we know that the result of our measurement of a particle isn’t predetermined?”. What is really astonishing is that quantum theory is in some sense more general and powerful than quantum theory itself. What do I mean by this? Well, as an example that I still think about in awe, even if quantum theory is completely entirely ‘wrong’, in that the universe works by some entirely different theory, then we can still use quantum physics to produce truly random data. This doesn’t sound very exciting or big, but if you say it in a different way you can see why people would get so worked up about it: if all of our lab equipment is made by some private corporation that we don’t trust, and we want to produce a random number, then using quantum physics we can do so with complete certainty,5 even if the equipment has been tampered with to produce certain states of particles more often than others, or to produce seemingly random states that are actually just following a predetermined order.

I don’t want to be turning quantum physics into sensationalism now, so this comes with the obvious caveat that doing things in practice is usually really hard. A quote I remember hearing from Artur is that “in theory there is no difference between theory and practice, but in practice there is”. But, even so, wow this stuff is cool.

Awed ravings aside,6 let’s briefly end this section on axiomatisation by actually stating what our axiomatisation is. This isn’t going to be entirely self-contained for those new to maths, I’m afraid – I’m going to freely talk about vector spaces and complex numbers and matrices and the like (basically, linear algebra over $\mathbb{C}$).

The following is all experimentally inspired and guided, but it is still a purely mathematical formalism. So far it seems to agree with every physical experiment we can throw at it, but we won’t worry too much about this, and instead just see what happens if we play around with the maths, and leave physical implementations to the physicists (though we won’t forget that, whenever something seems to pop up out of thin air, it’s almost always because it’s been suggested to exist by some experimental evidence).


  1. A qubit is any object that can be reliably prepared to be in one of two states (or any superposition thereof), manipulated, and measured. Any qubit can be described by a state vector:7 a qubit in state $|0\rangle$ with probability amplitude $\alpha_0$ and in state $|1\rangle$ with probability amplitude $\alpha_1$ is represented by the unit vector with entries $\alpha_0$ and $\alpha_1$:

    $$\alpha_0|0\rangle + \alpha_1|1\rangle = \begin{pmatrix} \alpha_0 \\ \alpha_1 \end{pmatrix}, \qquad |\alpha_0|^2 + |\alpha_1|^2 = 1.$$

    This means that the qubit is in a coherent superposition, and so when measured it will be in state $|0\rangle$ with probability $|\alpha_0|^2$ and in state $|1\rangle$ with probability $|\alpha_1|^2$.

    Note that we ignore global phase: if we have a qubit in the state $\alpha_0|0\rangle + \alpha_1|1\rangle$ then this is the same as a qubit in the state $e^{i\theta}(\alpha_0|0\rangle + \alpha_1|1\rangle)$. We do not, however, ignore local phase: $\alpha_0|0\rangle + \alpha_1|1\rangle$ is not the same as $\alpha_0|0\rangle + e^{i\theta}\alpha_1|1\rangle$.

  2. Qubits are modified by quantum evolutions: redistributions of the amplitudes between the states (for example, the changes of energy induced by pulses of light on an electron qubit). Admissible quantum evolutions are described by complex isometries, in particular by unitary matrices. For example, the Hadamard matrix

    $$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$

    sends the basis state $|0\rangle$ to $\tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$, and the basis state $|1\rangle$ to $\tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$, and extends linearly to act on any superposition $\alpha_0|0\rangle + \alpha_1|1\rangle$.

  3. We take measurements corresponding to our choice of basis states $|0\rangle$ and $|1\rangle$. As mentioned above, the probabilities of the outcomes of any measurement are given by the squared moduli of the probability amplitudes, and the choice of basis consists of distinguishable states, so the outcomes of any measurement are mutually exclusive.8

  4. For multiple qubits (and composite systems in general), the corresponding state vectors are the unit vectors in the tensor product of the Hilbert spaces associated to each individual system, e.g. a system consisting of two qubits is described by the complex Hilbert space $\mathbb{C}^2 \otimes \mathbb{C}^2$, and so state vectors are of the form

    $$\alpha_{00}\,|0\rangle\otimes|0\rangle + \alpha_{01}\,|0\rangle\otimes|1\rangle + \alpha_{10}\,|1\rangle\otimes|0\rangle + \alpha_{11}\,|1\rangle\otimes|1\rangle, \qquad \sum_{i,j}|\alpha_{ij}|^2 = 1.$$
    The evolutions are then described by tensor products of the relevant unitary matrices (there is a small code sketch of all four axioms just after this list).

    Edit. More about this at the end of the post.


 
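To tie the four axioms together, here is a minimal numpy sketch of them (the particular amplitudes and the choice of the Hadamard gate are arbitrary illustrative choices, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Axiom 1: a qubit is a unit vector of amplitudes (alpha_0, alpha_1) in C^2.
qubit = np.array([3 / 5, 4j / 5])                 # |3/5|^2 + |4i/5|^2 = 1
assert np.isclose(np.linalg.norm(qubit), 1.0)

# Axiom 2: admissible evolutions are unitary matrices, e.g. the Hadamard gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert np.allclose(H.conj().T @ H, np.eye(2))     # unitarity check
evolved = H @ qubit

# Axiom 3: measuring in the basis {|0>, |1>} gives outcome i with probability |alpha_i|^2.
probabilities = np.abs(evolved) ** 2
outcome = rng.choice([0, 1], p=probabilities)
print("measured", outcome, "with probabilities", probabilities)

# Axiom 4: two qubits live in C^2 (x) C^2 = C^4, and independent evolutions
# act via the Kronecker product of the single-qubit matrices.
two_qubits = np.kron(qubit, evolved)
hadamard_on_first_qubit = np.kron(H, np.eye(2))
print(hadamard_on_first_qubit @ two_qubits)
```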

Finally, for this section, an important note on measurements and the troublesome notion of collapsing. When we have some particle in a coherent superposition $\alpha_0|0\rangle + \alpha_1|1\rangle$ and then measure it, we find it in state $|0\rangle$ with probability $|\alpha_0|^2$. But then, after the measurement, the particle is in state $|0\rangle$, full stop; there is now no uncertainty, and we know that we can measure this particle over and over and always find it in state $|0\rangle$. Even more worryingly, the original state is completely irretrievable. This seems to be a contradiction of our assumptions on quantum evolutions: this ‘collapse’ that follows a measurement is not described by a unitary operator (in particular, it’s not invertible, since we lose the original state), but instead something far more blunt and sudden.

There are many ways of dealing with this problem, but the important thing to note is that measurements can still be described in exactly the same way as all other quantum processes, and aren’t subject to a different axiomatisation or something.9 However, for the moment, it’s a convenient way of thinking that simplifies the maths (and conceptualisation) to just think of measurements as collapses.
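As a toy model of this way of thinking (a sketch only, with the ‘collapse’ rule simply hard-coded rather than derived): once an outcome is observed we replace the state by the corresponding basis vector, after which repeated measurements always agree and the original amplitudes are gone for good.

```python
import numpy as np

rng = np.random.default_rng(2)

def measure(state):
    """Measure in the {|0>, |1>} basis: sample an outcome, then collapse."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0          # the original amplitudes are irretrievably lost
    return outcome, collapsed

state = np.array([3 / 5, 4j / 5])     # a coherent superposition
first, state = measure(state)
repeats = [measure(state)[0] for _ in range(10)]
print(first, repeats)                  # every repeated measurement agrees with the first outcome
```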

Classical circuits

Finally for this post, we’ll talk a bit about classical circuits. Let’s take a single bit, which can be in one of two states: $0$ (off) and $1$ (on). Using the same notation as before, we can use a 2-vector to describe the state of this bit:

$$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
Then we can describe the four classical computations on a single bit by matrices:

$$\mathrm{IDENTITY} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \mathrm{NOT} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \mathrm{CONSTANT}_0 = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix}, \quad \mathrm{CONSTANT}_1 = \begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}.$$
If we have two bits, we can use the tensor product to write state vectors in the form

$$|x\rangle \otimes |y\rangle$$

for $x, y \in \{0, 1\}$.

Then, for example, a gate that acts as NOT on the first bit and as the CONSTANT1 gate on the second bit would be given by10 the matrix

$$\mathrm{NOT} \otimes \mathrm{CONSTANT}_1 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \end{pmatrix}.$$
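As a quick check of this, here is a short numpy snippet (using the single-bit matrices above and numpy’s `kron` for the Kronecker product) confirming that the composite gate sends the state $|0\rangle\otimes|0\rangle$ to $|1\rangle\otimes|1\rangle$:

```python
import numpy as np

NOT       = np.array([[0, 1],
                      [1, 0]])
CONSTANT1 = np.array([[0, 0],
                      [1, 1]])

gate = np.kron(NOT, CONSTANT1)   # acts as NOT on bit one and CONSTANT1 on bit two

# Basis vectors for the single-bit states |0> and |1>.
zero = np.array([1, 0])
one  = np.array([0, 1])
state_00 = np.kron(zero, zero)

print(gate @ state_00)           # [0, 0, 0, 1]: first bit flipped, second bit set to 1
```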

Edit. Somehow I forgot to talk about entanglement, so let me fix that now with a small update.

When we have a multi-particle system (e.g. two qubits), then the states of this joint system are described by tensor products, as we said in our four-point axiomatisation. This means that states of the form

$$|\psi\rangle \otimes |\varphi\rangle = |\psi\rangle|\varphi\rangle = |\psi\varphi\rangle$$

(where the latter two equalities are just us introducing some shorthand notation) describe our system.
(where the latter two equalities are just us introducing some shorthand notation) describe our system. However, not every element of a tensor product is a tensor of elements in each component: they are linear combinations of such things. For example,

but

cannot be written in the form .

We call states of the former form (tensors of states of the component systems) product states (also called separable states), and those of the latter form (states that don’t ‘split’ as a state in each component system) entangled states, because somehow the two systems are interacting with each other in such a way that we can’t ‘pull them apart’.
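For a concrete way to tell the two cases apart numerically, here is a small sketch (using the standard trick of reshaping the four amplitudes into a $2\times 2$ matrix and checking whether it has rank one, which is exactly the condition for the state to split as a tensor product):

```python
import numpy as np

def is_product_state(state, tol=1e-12):
    """A two-qubit state a_00|00> + a_01|01> + a_10|10> + a_11|11> is a product
    state iff the 2x2 matrix of its amplitudes has (numerical) rank one."""
    singular_values = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return singular_values[1] < tol   # second singular value vanishes for product states

plus = np.array([1, 1]) / np.sqrt(2)
product = np.kron(np.array([1, 0]), plus)        # (|00> + |01>)/sqrt(2) = |0> (x) |+>
bell    = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)

print(is_product_state(product))   # True
print(is_product_state(bell))      # False: entangled
```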


Footnotes

  1. In general, it seems that quantum physics shares the fate of Gödel’s incompleteness theorems – to be misquoted and misapplied for the purpose of sensationalism. 

  2. To be fair, I have neither witnessed an otter-pigeon swimming race nor an otter-pigeon drinking race, so this could be a flawed analogy. 

  3. Well, in some way we do, because quantum theory governs fundamental actions of particles themselves, which sort of make up everything, so of course we’re affected. But what I mean here is that we don’t tend to see things in two places at once, or witness ‘mutually exclusive’ events simultaneously. 

  4. So, in our example, an electron in its ground state is said to be in state $|0\rangle$, and in its higher energy state it’s said to be in state $|1\rangle$.

  5. Whatever this might actually mean… 

  6. Enthusiasm isn’t a substitute for knowledge, so I really hope that I haven’t said anything misleading here. If it were the case that being really excited about everything counted the same as understanding it all, then I’d be a much better mathematician. 

  7. We always need a choice of basis, because this is equivalent to being able to interpret a qubit ‘as a 0 or a 1’. 

  8. This is a possible sticky wicket. A particle can be in a superposition of states, but any measurement will result in just one state; we can’t actually observe a superposition. See the comments in a bit about measurement and collapse. 

  9. What even is measurement? The more you think about it and try to define it, the more you realise what a tricky concept it is. It’s when an observer learns information about a quantum system, but then, what is an observer? And how do we formalise the idea of ‘gaining information about something’? There is a lot of ‘quantum philosophy’ here, and many ways of answering these questions. Personally, I like the approach that captures the ‘measurement collapse’ as ‘lots of decoherence’, and describes decoherence as ‘the universe trying to learn information about the quantum system’. Hopefully I’ll write about this some other time, and try to find some good references by real physicists. 

  10. The tensor product of matrices is given by the Kronecker product: for $2 \times 2$ matrices, $A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \end{pmatrix}$.