
A Busy Developer’s Guide to Quantum Computing - Part 1

Mariusz Krzemień

08 Apr 2024 · 8 minutes read


Motivation

A quantum computer provides a special form of parallelism. By utilizing it, the quantum computer should be able to compute things faster than its classical counterpart. However, to make use of it, problems need to be approached differently, using specifically designed algorithms.

Here, “faster” means that some problems can actually be solved with better computational complexity. Sometimes, the difference can be quite radical. Certain problems, which require exponential time on a classical computer, can be solved in polynomial time on a quantum one. (Notable examples include the Deutsch-Jozsa algorithm and the famous Shor’s algorithm.)

But these are special problems, specifically suited for the platform. As a developer, I’d like to know if it’s possible to solve more general types of problems.

In this short series, I’m going to show you how you can approach a quantum computer with a custom problem and prepare a solution.


Plan

The series will consist of four parts. Each part will look at a quantum computer from a slightly different angle:

  • Part 1 — provides an overview of a quantum computer using an analogy to more familiar terms like GPU and vectorization.
  • Part 2 — introduces qubits and other elements behind this overview picture.
  • Part 3 — shows how programming a quantum computer can actually look.
  • Part 4 — contains a walkthrough of a working code example.

Quantum computer overview

Example problem

To make the following discussion more concrete, let’s focus on one particular type of problem — namely discrete optimization. Given a set of inputs, we want to find the input that best fulfills the given criteria (e.g. the Travelling Salesman Problem — or TSP for short — where we search for the shortest route visiting all nodes of a given graph). More formally, given a cost function, we want to find its minimum.

In many cases such a problem can be solved using numerical methods. However, to make that possible, the cost function must fulfill specific requirements (e.g. smoothness or convexity). In the general case, we basically need to calculate and compare the results for all possible arguments.
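To make this concrete, here is a minimal brute-force sketch in Python (the cost function below is a made-up stand-in, not anything from a real TSP instance):

```python
# A made-up black-box cost function on integers 0..N; in a real problem
# this could be, e.g., the length of the route encoded by x in a TSP instance.
def cost(x: int) -> int:
    return (x * 37 + 11) % 101

N = 1000

# In the general case we have no structure to exploit, so we simply
# evaluate the function for every possible argument and keep the best one.
best_x = min(range(N + 1), key=cost)
print(best_x, cost(best_x))
```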

Before moving forward I’d like to make two additional assumptions:

  • For simplicity, we assume that both the arguments and the results of the cost function are single integers from a given range {0…N}. In the case of problems with more complex inputs, we can try to encode them as numbers (e.g., in the case of TSP, we might assign a unique number to each route in the graph).

[Figure: A cost function with a single argument]

  • We will redefine the optimization problem as a decision problem. Let’s assume we don’t necessarily need the single best result. We just need a result that is good enough. In other words, one that fulfills some predefined criteria (e.g. in the case of TSP we may be interested in any route shorter than a given limit). Then, we can replace the cost function with a predicate returning 1 for good inputs and 0 otherwise.

[Figure: A predicate with a single argument]

The second assumption lets us avoid comparing the results between inputs. It will be helpful when trying to parallelize the computation.
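A minimal sketch of this reformulation, again with a made-up cost function and an arbitrary threshold:

```python
def cost(x: int) -> int:
    return (x * 37 + 11) % 101   # made-up stand-in cost function

LIMIT = 5   # what "good enough" means here, e.g. a maximum acceptable route length

def is_good(x: int) -> int:
    # the predicate: 1 for good inputs, 0 otherwise
    return 1 if cost(x) <= LIMIT else 0

# Any x with is_good(x) == 1 is an acceptable answer; no comparison
# between different inputs is needed.
print(next(x for x in range(1001) if is_good(x)))
```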

Now — if we are lucky — we might get a good result after a single function call. However, if good results are scarce, then the expected number of calls will still be proportional to the number of all arguments.

Calculating a function for all possible arguments may quickly become infeasible — and here is where quantum computers might prove useful.

Why a quantum computer?

The example optimization problem described above would require searching through all possible results of a given cost function. On a classical computer, the complexity of such a task is O(n), where n is the number of all possible arguments. Note: Here we treat the function as a black-box and define the complexity as proportional to the number of function calls.

Later, in Part 4, we are going to solve our example problem on a quantum computer using a quantum search algorithm. With that, the complexity of finding a solution drops to O(√n). Though not an exponential speed-up, it’s still impressive, and the solution is quite generic.
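To get a feel for what that difference means in practice, here is a quick back-of-the-envelope comparison of the call counts (constants and hardware details are ignored; only the asymptotics are illustrated):

```python
import math

# Classical brute force needs on the order of n calls,
# quantum search (Grover-style) on the order of sqrt(n).
for k in (10, 20, 30):                 # number of input bits / qubits
    n = 2 ** k                         # number of possible arguments
    print(f"k={k:2d}  classical ~ {n:>13,}  quantum ~ {math.isqrt(n):>6,}")
```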

How is it possible?

And what is a quantum computer anyway?

In simplest terms, a quantum computer is a machine that utilizes quantum effects to perform computation. Quantum effects have one special feature — they sometimes behave as if performing many things in parallel. For example, you may recall an electron exploring various paths in parallel in the double-slit experiment. Clever use of this feature makes quantum computation run in parallel.

To some extent it resembles computation on a GPU.

GPU as a coprocessor

A GPU can be used as a coprocessor for particularly computation-heavy tasks. For example, in Machine Learning pipelines you can find a pattern where the most demanding parts of the computation are delegated to the GPU.

[Figure: GPU as a coprocessor]

Other, less demanding tasks, like data preparation and postprocessing, usually happen outside the GPU. We can imagine a similar pattern, where some well-defined tasks are delegated to a quantum computer, while data preparation and postprocessing happen on a classical one.
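A minimal sketch of that delegation pattern, assuming PyTorch and an optional CUDA device (the matrix sizes and the matmul itself are purely illustrative):

```python
import torch

# Data preparation happens on the CPU ...
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# ... the heavy part is delegated to the GPU, if one is available ...
device = "cuda" if torch.cuda.is_available() else "cpu"
c = a.to(device) @ b.to(device)

# ... and postprocessing happens back on the classical side (the CPU).
print(c.cpu().sum().item())
```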

But we can take this analogy further and see how computation on a quantum computer resembles vectorization.

Vectorization on GPU

[Figure: Parallel computation using vectorization]

Vectorization is a technique that allows effective use of a GPU to parallelize computation. In order to use it, our function usually needs to be defined using simple arithmetic operations and avoid divergent branching code.

With that, we can pass our function a single vector containing all arguments, and the GPU will parallelize the computation. Obviously, the number of parallel operations will be limited by the number of available cores.

But ideally you can describe the whole computation as a sequence of steps. In each step, all elements of the input vector are processed in parallel. Each step of the computation produces a new vector. And because each element is processed independently, the resulting vector will have the same length. (Each element of the resulting vector corresponds to one element of the input vector.)
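A rough CPU-side analogy of this pattern, written with NumPy (on an actual GPU the same idea would typically be expressed with CuPy, a CUDA kernel, or a framework like PyTorch; the arithmetic below is the same made-up cost and predicate as before):

```python
import numpy as np

N = 1_000_000
xs = np.arange(N + 1)                  # one vector holding all possible arguments

# Branch-free arithmetic applied to whole vectors; every step maps the
# input vector to a new vector of the same length.
costs = (xs * 37 + 11) % 101           # step 1: cost for every argument at once
good = (costs <= 5).astype(np.int8)    # step 2: the predicate, applied elementwise
print(good.sum())                      # how many "good" arguments exist
```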

Computation on a quantum computer looks a little bit similar.

Vectorization on quantum computer

[Figure: Parallel computation on a quantum computer]

The picture is the same as before, but here n — the vector length — can be up to 2^k, where k is the number of qubits (I talk more about qubits in Part 2).

In this case the function also needs to fulfill certain requirements. Some of them are similar to those for vectorization on a GPU. Others are specific to quantum computers.

In particular, the function code must not branch — i.e., the computation must perform the same set of operations on each argument. Such a function will be able to handle the whole vector of its arguments in parallel. The difference is that on a quantum computer the maximum number of arguments is limited by the number of qubits — with each qubit doubling the limit. That number grows exponentially, quickly outgrowing the number of GPU cores even for a small number of qubits.
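As a small illustration of where this exponential capacity comes from, the following sketch puts k qubits into a uniform superposition; the resulting state has 2^k amplitudes, one for every possible k-bit argument. (Qiskit is simply my choice of library here, not something prescribed by this series.)

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

k = 4
qc = QuantumCircuit(k)
qc.h(range(k))                 # a Hadamard gate on every qubit

state = Statevector.from_instruction(qc)
print(state.dim)               # 2**k = 16 basis states "processed" side by side
```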

On the other hand, we need to take into account some additional, specific restrictions:

  • In the case of vectorization on a GPU, the vector elements could be arbitrary numbers. Here — given k qubits — the vector elements are restricted to integers with at most k binary digits, and they must be unique.
  • Contrary to vectorization on a GPU, the vector length may change. This is possible because the computation for parallel inputs is not always independent.
  • Finally, the computation for the whole vector produces a single number.

[Figure: Parallel computation with a single number in result]

Back to our example

Later we’ll see more precisely what these restrictions mean and where they come from. For now, let’s briefly see if they fit our example problem.

The example cost function is said to take inputs from a limited set of integers {0…N}. We need a number of qubits large enough to fit all possible inputs (i.e., N < 2^k). We are going to create a vector containing all the inputs and use the ability of a quantum computer to run the function for all of them in parallel. We will also need to add some additional qubits to store the intermediate results of the calculation.

[Figure: Example of k bits of input plus some additional bits for intermediate results]

Each input (an integer with k binary digits) will be associated with some additional bits. In our case the input part will be constant while the additional bits will change along the computation. The constant part plus the additional bits will form a single vector element.

You can treat each element as a key-value pair — with a constant part as a key and the rest as a mutable value. And because the key is unique and is always present, the whole element will always remain unique throughout the computation.
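In code, such a layout could be sketched as two quantum registers; the register names, the number of working qubits, and the use of Qiskit are all assumptions made for illustration:

```python
import math
from qiskit import QuantumCircuit, QuantumRegister

N = 1000
k = math.ceil(math.log2(N + 1))        # enough qubits to encode every input 0..N

inputs = QuantumRegister(k, "inp")     # the constant "key" part of each element
work = QuantumRegister(3, "work")      # additional qubits for intermediate results
qc = QuantumCircuit(inputs, work)

qc.h(inputs)                           # put all possible inputs into superposition
print(qc.num_qubits)                   # k + 3 qubits in total
```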

Finally, we are going to force the quantum computer to return the one input that fulfills the search criteria. But to see how to do it, we need to look a little bit deeper. And that’s what I’m going to do next.

Summary

In this part of the series I gave a quick overview of a quantum computer. I drew an analogy to more familiar terms like GPU and vectorization, showing both similarities and specific restrictions imposed by a quantum computer. I also briefly showed how you could approach a quantum computer with a custom optimization problem.

In Part 2, I'm going to show where these specific restrictions come from. To do that, I’m going to talk a little bit about qubits and other building blocks of the quantum computer.

In Part 3 and Part 4, I'm going to return to the optimization example and analyze it in more detail, including a working code example.


Reviewed by Bartłomiej Żyliński, Paweł Stawicki, Bartłomiej Turos
