# Rose’s Law for Quantum Computers

[update in 2015: the hardware curve that is “Rose’s Law” (blue diamonds) remains on track. The software and performance/qubit (red stars, as applied to certain tasks) is catching up, and may lag by a few years from the original prediction overlaid onto the graph. Updated graph here]

When I first met Geordie Rose in 2002, I was struck by his ability to explain complex quantum physics and the “spooky” underpinnings of quantum computers. I had just read David Deutsch’s *Fabric of Reality* where he predicts the possibility of such computers, and so I invited Rose to one of our tech conferences.

We first invested in 2003, and Geordie predicted that he would be able to demonstrate a two-qubit quantum computer within 6 months. There was a certain precision to his predictions. With one qubit under his belt, and a second coming, he went on to suggest that the number of qubits in a scalable quantum computing architecture should double every year. It sounded a lot like Gordon Moore’s prediction back in 1965, when he extrapolated from just five data points on a log-scale (his original plot is below).

So I called it “Rose’s Law” and that seemed to amuse him. Well, the decade that followed has been quite amazing. I commented on Rose’s Law four years ago on Flickr, but I share this graph and some potential futures for the first time today.

So, how do we read the graph above? Like Moore’s Law, a straight line describes an exponential. But unlike Moore’s Law, the *computational power* of the quantum computer should grow exponentially with the number of entangled qubits as well. It’s like Moore’s Law compounded. (D-Wave just put together an animated visual of each processor generation in this video, bringing us to the present day.)
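The compounding can be made concrete with a little arithmetic. A hedged sketch (illustrative only, not a simulation of any D-Wave hardware): describing n entangled qubits classically takes 2^n complex amplitudes, so a merely linear march in qubit count is an exponential march in state space.

```python
# Illustrative arithmetic only: the state space of an n-qubit register spans
# 2**n basis states, so adding one qubit doubles it, and doubling the qubit
# count squares it. Moore's Law compounded.

def state_space(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n entangled qubits."""
    return 2 ** n_qubits

# Qubits doubling yearly (hypothetical counts for illustration):
for year, qubits in enumerate([128, 256, 512], start=1):
    print(f"year {year}: {qubits} qubits -> {state_space(qubits):.3e} states")
```

Note that doubling the qubits does not double the state space, it squares it, which is why the performance line on the graph can be so much steeper than the hardware line.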

And now, it gets mind bending. If we suspend disbelief for a moment, and use D-Wave’s early data on processing power scaling (more on that below), then the very near future should be the watershed moment, when quantum computers surpass conventional computers and never look back. Moore’s Law cannot catch up. A year later, it outperforms all computers on Earth combined. Double qubits again the following year, and it outperforms the universe. What the???? you may ask... Meaning, it could solve certain problems that could not be solved by any non-quantum computer, even if the entire mass and energy of the universe were at its disposal and molded into the best possible computer.
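The “outperforms the universe” line is back-of-the-envelope arithmetic. As a hedged sketch, take Seth Lloyd’s estimate of roughly 10^120 elementary operations as the total computational capacity of the observable universe (my assumption here, not a figure from the graph) and ask how many qubits an idealized 2^n scaling would need to cross it:

```python
import math

# Assumption: Seth Lloyd's rough bound of ~10**120 operations as the total
# computational capacity of the observable universe. Not a D-Wave figure.
UNIVERSE_OPS = 10 ** 120

def qubits_to_exceed(limit: int) -> int:
    """Smallest n such that 2**n > limit, i.e. floor(log2(limit)) + 1."""
    return math.floor(math.log2(limit)) + 1

print(qubits_to_exceed(UNIVERSE_OPS))  # -> 399
```

A few hundred qubits suffice on this idealized count, which is why a yearly doubling from today’s chips reaches the crossover so quickly on paper. The real question, addressed in the caveats below, is whether effective performance actually tracks that idealized 2^n.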

It is a completely different way to compute — as David Deutsch posits — harnessing the refractive echoes of many trillions of parallel universes to perform a computation.

First the caveat (the text in white letters on the graph). D-Wave has not built a general-purpose quantum computer. Think of it as an application-specific processor, tuned to perform one task — solving discrete optimization problems. This happens to map to many real world applications, from finance to molecular modeling to machine learning, but it is not going to change our current personal computing tasks. In the near term, assume it will apply to scientific supercomputing tasks and commercial optimization tasks where a heuristic may suffice today, and perhaps it will be lurking in the shadows of an Internet giant’s data center improving image recognition and other forms of near-AI magic. In most cases, the quantum computer would be an accelerating coprocessor to a classical compute cluster.
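To make “discrete optimization” concrete: D-Wave’s processors natively minimize objectives of the QUBO/Ising form, a quadratic cost over binary variables. The toy instance below (coefficients invented for illustration) is solved by classical brute force, which stands in for the annealer at small sizes:

```python
from itertools import product

# A toy instance of the problem class an annealer targets: minimize a
# quadratic cost over binary variables (QUBO). Matrix values are made up.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms (diagonal)
    (0, 1): 2.0, (1, 2): 2.0,                  # couplings penalizing adjacent 1s
}

def energy(x, Q):
    """QUBO objective: sum of Q[i,j] * x_i * x_j over all stored entries."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over all 2**3 assignments; the hardware explores this
# energy landscape physically instead.
best = min(product([0, 1], repeat=3), key=lambda x: energy(x, Q))
print(best, energy(best, Q))  # -> (1, 0, 1) -2.0
```

The brute-force search above is exponential in the number of variables, which is exactly why a physical machine that relaxes into low-energy states of the same objective is interesting.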

Second, the assumptions. There is a lot of room for surprises in the next three years. Will they hit a scaling wall, or discover a heretofore unknown fracturing of the physics... perhaps local entanglement, noise, or some other technical hitch that does not loom large at small scales but grows exponentially as a problem, just as the theoretical performance grows exponentially with scale? I think the risk lies less in the steady qubit march, which has held true for a decade now, than in the relationship of qubit count to performance.

There is also the question of the programming model. Until recently, programming a quantum computer was more difficult than machine coding an Intel processor. Imagine having to worry about everything from analog gate voltages to algorithmic transforms of programming logic into something native to quantum computing (Shor and Grover and some bright minds have made the occasional mathematical breakthrough on that front). With the application-specific quantum processor, D-Wave has made it all much easier, and with their forthcoming Black Box overlay, programming moves to a higher level of abstraction, much as one can train a neural network without understanding its inner workings.
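The abstraction shift can be sketched generically (names and objective are invented; this is not D-Wave’s actual Black Box API): the developer supplies only a cost function over bitstrings, and a solver, here a naive random search standing in for the hardware, hunts for a low-cost input.

```python
import random

# Sketch of the black-box programming model: the developer writes only an
# objective over bitstrings; a solver does the search. Names and the example
# objective are illustrative, not D-Wave's actual interface.

def objective(bits):
    """Made-up cost: count adjacent equal bits (alternating strings score 0)."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a == b)

def black_box_search(objective, n_bits, trials=2000, seed=0):
    """Naive random search standing in for the quantum hardware."""
    rng = random.Random(seed)
    candidates = (tuple(rng.randint(0, 1) for _ in range(n_bits))
                  for _ in range(trials))
    return min(candidates, key=objective)

solution = black_box_search(objective, n_bits=8)
print(solution, objective(solution))
```

The point is not the search strategy but the interface: nothing about gate voltages or qubit couplings leaks into the developer’s code.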

In any case, the possibility of a curve like this raises many philosophical and cosmological questions about our compounding capacity to compute... the beginning of infinity if you will.

While it will be fascinating to see if the next three years play out like Rose’s prediction, for today, perhaps all we should say is that it’s not impossible. And what an interesting world this may be.