Take a moment right now to consider just how crazy the world is. Back in the 1970s, even a 'minicomputer' was the size of a wardrobe (and a mainframe filled an entire room), and now everyone from two-week-old babies to kids in rural Africa has a mobile phone that gives them access to more information than Bill Clinton had when he was President 20 years ago. Imagine sitting on a desert island and wondering how to go about creating such a technology from scratch. It's an awesome feat of collaboration, intellect and creativity that genuinely boggles the mind. And we keep getting better and better at doing awesome things. As the renowned futurist/thinker/entrepreneur Ray Kurzweil puts it, the progress of the entire 20th century (from telephones and television to computers) could have been achieved in just 20 years at the year 2000's rate of advancement, and by 2021 it will take only 7 years to experience the same amount of progress as the entire 20th century.
In fact, my favourite way of thinking about the accelerating rate of technological progress comes from Tim Urban's Wait But Why blog, in which he introduces the concept of a 'die progress unit' (DPU): the number of years into the future someone would have to be transported in order to actually die from the shock of what they'd experience. 'Imagine taking a time machine back to 1750 — a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity. This experience for him wouldn't be surprising or shocking or even mind-blowing — those words aren't big enough. He might actually die.' Thus, the DPU here would be about 250 years.
Urban then goes on to ask how far back the 1750 guy would have to travel to find someone sufficiently shocked by the 1750s that they might actually die from it. Another 250 years surely wouldn't be enough; realistically, he'd have to go back perhaps as far as 12,000 BC, before the agricultural revolution and the concept of civilisation. It's safe to say 1750s London would be pretty insane to a hunter-gatherer. So the DPU here is more like 13,000 years. Now, the law of accelerating returns (the more advanced a society becomes, the faster it can progress, so the rate of progress itself keeps accelerating) means it's entirely possible that the DPU for present-day people is only 50 years or so, i.e. one of us would only have to travel to around 2070 to be sufficiently shocked by the experience (who knows, maybe there won't even be any humans around in 2070, just AI overlords, or a nuclear wasteland).
But is this rate of technological progress actually inevitable? Might it slow down or reach a limit at some point? Moore's famous law states that the number of transistors on a microprocessor chip doubles roughly every 2 years (and, by corollary, so does its performance). Clearly software has played a role too, but it's fair to say that a large part of this exponential improvement can be attributed to hardware upgrades in line with Moore's law, and that this has carried us from the huge mainframe computers of the 1970s through to the present-day 'internet of things' society, where everything from your thermostat to your light switch, and even the packet of crisps you just bought, has a wireless internet connection (probably so it can try to sell you stuff). But Moore's law is now reaching its limit, thanks to the heat generated when you cram more and more silicon into tiny spaces. We are now at the stage where the top microprocessors have features just 14nm across, smaller than most viruses, and we are nearing the 2–3nm mark, where individual components are less than 10 atoms across. Can things that small even be called components? Unsurprisingly, there isn't really any room to get much smaller, especially since at that size weird quantum effects such as electron tunnelling make things go a bit mental. The other main issue is cost: each new step down in feature size requires a whole new generation of photolithography machines to deal with such ridiculously small components. Daniel Reed (computer scientist and VP for Research at the University of Iowa) reckons that we'll 'run out of money before we run out of physics'.
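To get a feel for just how relentless that doubling is, here's a quick back-of-the-envelope sketch in Python. The baseline is my own choice (roughly 2,300 transistors on the 1971 Intel 4004) and it assumes Moore's law held exactly:

```python
# Back-of-the-envelope Moore's law: assume ~2,300 transistors on the 1971
# Intel 4004 (an assumed baseline) and a strict doubling every 2 years.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count if Moore's law held exactly."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2017):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
# 1971: ~2,300   1991: ~2.4 million   2011: ~2.4 billion   2017: ~19 billion
```

A strict doubling from 1971 lands at around 19 billion transistors by 2017, which is roughly where the biggest real chips actually sit, so the exponential really has held for over four decades.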
So does that spell an end to the exponential advances we've seen since the 90s? Not quite. Whilst Moore's law originally described transistors on chips, it has since been extrapolated to refer simply to a doubling of consumer-experienced value every 2 years, and conveniently, advancements in areas other than just getting smaller have started to really kick off over the last few years. So the future looks to be 'more than Moore' rather than simply 'more Moore'. 'More than Moore' is likely to comprise multiple recent advances, from neuromorphic chips that mimic the human brain and artificially intelligent machines, through the birth of cloud computing (whereby our mobile devices have much less to do because they can simply interact wirelessly with centralised servers and data centres that do all the heavy lifting), to novel graphene-like materials to replace silicon.
What I want to focus on here, though, is the very recent birth of quantum computing, which promises to be pretty world-changing. It's been a decades-long slog with little to show for all the hard work, but over recent months quantum computing has begun to flower. Until two months ago, IBM was in the lead with a 17-qubit machine, but John Martinis at Google and Mikhail Lukin of Harvard and the Russian Quantum Centre have just taken giant leaps forward with their announcements in July of this year. Martinis' Google group have unveiled a 49-qubit machine with which they hope to achieve quantum supremacy by the end of this year, meaning that their computer will be able to solve certain problems that are beyond the capabilities of any classical computer (even China's Sunway TaihuLight, the most powerful classical computer in the world). The threshold for quantum supremacy is thought to be around the 50-qubit mark, with a two-qubit gate fidelity of >99.7% (i.e. fewer than 3 in every 1,000 operations going wrong). Lukin's team have reached the 51-qubit mark, but with a quantum simulator rather than a full-blown quantum computer: a quantum simulator can only solve the specific problem it is designed to solve, whereas a quantum computer would be versatile and able to solve many diverse problems.
Quantum computing has been a lofty goal of researchers ever since the early 1980s, when the physics titan Richard Feynman first proposed the idea in a lecture. Conventional computers are 'either or' machines: they comprise billions of little switches that can be 'on' or 'off', letting electrons flow or not. These switches are called transistors, and their two potential states give rise to the binary 1s and 0s that we all recognise as the language of the machines that run our lives. Combining these switches using what's called Boolean logic transforms them into the basis of everything from your iPhone to your MacBook. In the quantum realm, however, 'either or' becomes 'both and'. In other words, a quantum bit (a qubit, which is generally an atom, photon or electron, i.e. something small enough to take quantum states) can be in a superposition of states: a 1 and a 0 at the same time (or, more precisely, a probability distribution giving the percentage likelihood of finding it to be a 1 or a 0), right up until the moment it is measured, at which point it assumes a definitive state.
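If it helps to see the idea as code, here's a minimal numpy sketch of that 'both and' behaviour: a qubit as a 2-component state vector, a Hadamard gate to create the superposition, and the Born rule (probability = |amplitude|²) for measurement. Real quantum hardware obviously doesn't run on numpy; this just simulates the maths:

```python
import numpy as np

# A qubit as a 2-component state vector: start in the definite state |0>,
# apply a Hadamard gate, and you get an equal superposition of 0 and 1.
ket0 = np.array([1, 0], dtype=complex)              # |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

qubit = hadamard @ ket0                             # (|0> + |1>) / sqrt(2)
probs = np.abs(qubit) ** 2                          # Born rule: |amplitude|^2
print(probs)                                        # [0.5 0.5] -- both at once, probabilistically

# Measurement forces the qubit to pick a definitive state
outcome = np.random.choice([0, 1], p=probs)
print(f"measured: {outcome}")
```

The state vector carries both possibilities right up until `np.random.choice` plays the role of measurement and collapses it to a single 0 or 1.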
Quantum entanglement is the other distinguishing feature of a quantum computer, and it arises whenever there is more than one qubit. In the words of John Preskill (professor of theoretical physics at Caltech): '[entanglement is] the correlations between the parts of a system. Suppose you have a 100-page book with print on every page. If you read 10 pages, you'll know 10% of the contents. And if you read another 10 pages, you'll learn another 10%. But in a highly entangled quantum book, if you read the pages one at a time — or even 10 at a time — you'll learn almost nothing. The information isn't written on the pages, so you have to somehow read all of them at once'.
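Preskill's book analogy can be sketched in a few lines of numpy too: build the simplest entangled state (a Bell pair), and notice that either qubit on its own is a 50/50 coin (a blank page), while the two measurement outcomes always agree (the information lives entirely in the correlations):

```python
import numpy as np

# The simplest 'entangled book': a Bell pair, (|00> + |11>) / sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2                     # probabilities of outcomes 00, 01, 10, 11
print(dict(zip(["00", "01", "10", "11"], probs.round(2))))

# 'Read one page': the first qubit alone is a 50/50 coin, revealing nothing
print("P(first qubit = 0):", probs[0] + probs[1])   # 0.5
print("P(first qubit = 1):", probs[2] + probs[3])   # 0.5

# But sample joint measurements and the two qubits always agree
print(np.random.choice(["00", "01", "10", "11"], size=10, p=probs))  # only '00' and '11'
```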
So whilst a normal computer has to work through all the different combinations of 0s and 1s in turn, a register of n qubits can be in a superposition of every possible combination of 1s and 0s at once, and a quantum computer can thus, in a sense, process them all simultaneously rather than one at a time. It's all about doing things in parallel rather than in serial. Or, as some like to put it, the ability to perform computations across parallel universes.
Because of this, it has been estimated that a 30-qubit quantum computer would equal the processing power of a classical computer running at 10 teraflops (10 trillion floating-point operations per second). Compare that to a present-day desktop computer that runs at gigaflop speeds (billions of floating-point operations per second). And remember, that's a comparison between 30 qubits and countless billions of classical bits! Now whilst this is all pretty impressive, it's only useful in specific situations where doing huge numbers of calculations at the same time is necessary. Quantum factoring is one example: breaking large numbers down into their prime factors is prohibitively time-consuming for classical computers but efficient for quantum machines (via Shor's algorithm), which would render state-of-the-art encryption methods easy to crack. Quantum search (Grover's algorithm) speeds up unstructured database searching quadratically, needing only around √N steps where a classical scan needs N. Molecular modelling right down to quantum-level resolution also becomes possible, revolutionising the search for new drugs, catalysts for industrial processes, carbon capture, and new high-temperature superconductors.
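To make that search speed-up concrete, here's a small state-vector simulation of Grover's algorithm over a toy 'database' of N = 256 entries (the marked index, 42, is an arbitrary choice of mine). A classical scan needs up to N looks; Grover homes in on the answer in roughly (π/4)·√N iterations:

```python
import numpy as np

# A toy state-vector simulation of Grover's search over N = 2^n entries,
# with a single 'marked' index (42 is an arbitrary, made-up target).
n = 8                                        # 8 qubits -> N = 256 entries
N = 2 ** n
marked = 42

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition over all entries

iterations = int(np.pi / 4 * np.sqrt(N))     # ~12 iterations for N = 256
for _ in range(iterations):
    state[marked] *= -1                      # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state         # diffusion: reflect every amplitude about the mean

print(f"P(marked) after {iterations} iterations: {state[marked] ** 2:.3f}")
# -> ~1.000: almost certain to measure index 42 after only ~sqrt(N) steps
```

After just 12 iterations the marked entry carries essentially all of the probability, versus the up-to-256 checks a classical scan might need; for a billion-entry database that gap becomes roughly 25,000 steps versus a billion.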
For decades, these abilities have been far-off dreams due to the error rate and extreme fragility of qubits: any slight influence from the outside world will cause a qubit to collapse so that it no longer represents multiple states at once. But novel solutions such as superconducting materials and clever exploitation of quantum effects have managed to increase qubit lifetimes by a factor of over 10,000, so that they can now maintain their state for up to 100ms. Other techniques, such as embedding qubits in isotopically purified silicon, have been reported to extend their lifespan to as long as 30s.
So much exciting stuff is happening in the quantum world, and it is without doubt that these new machines will transform a diverse range of fields, from pharmaceuticals and biochemistry to industrial engineering and climate science. They aren't going to be particularly applicable to other fields though, so we need to continue our search for 'more than Moore' elsewhere.