Fundamental physics is at an impasse. Some experts say it’s facing its most difficult challenges to date; some say progress is still being made and all is well. Others claim it’s the end of the road, or at least that a fresh start is needed. One thing is certain: despite the Large Hadron Collider’s success in finding the Higgs boson, 21st-century theory is so far struggling to reach anything like the peaks of the early 20th century. Breakthroughs are rare, and current theories are either too far ahead of our experimental capacities to be tested, or have to be tweaked to make up for a lack of evidence. The facts are stark: even the best cases for genuine progress can only be described as “promising” or “on the right track”. It has become a matter of faith, or taste. Some defenders claim that the very definitions of science must now change. What physics really seems to need is a genius, someone to make that critical discovery, find the answer to some fundamental problem – but where will such a person come from?
Exactly a century ago, physics was rocked by a brilliant young Dane, Niels Bohr, who published a theory of the atom. The very first quantum theory of how an atom works when reacting to light, it introduced a new general principle and along the way gave us the idea of the “quantum leap” (the term “quantum mechanics” itself arrived a decade later, coined by Max Born). His model wasn’t perfect – others were to refine it a great deal over the following decades – but nonetheless Bohr had performed the ideal theoretical discovery. Built from known physical constants, his model accounted for complex phenomena that had already been observed – above all the spectrum of light emitted by hydrogen – and laid out how they worked at the atomic level. It was elegant, requiring few assumptions, and, crucially, it also made predictions that could be tested using the experimental technology of the time.
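To give a sense of what “built from known constants” means here: Bohr’s model fixes the allowed electron energies in hydrogen entirely in terms of quantities that had already been measured – the electron’s mass and charge, Planck’s constant and the vacuum permittivity – and the light emitted in a “quantum leap” between levels then reproduces the hydrogen spectrum that experimenters had long catalogued. In standard modern notation (not Bohr’s original), the result reads:

```latex
% Allowed energy levels of the hydrogen atom in Bohr's model,
% built only from previously measured constants:
%   m_e (electron mass), e (electron charge),
%   \varepsilon_0 (vacuum permittivity), h (Planck's constant)
E_n = -\frac{m_e e^4}{8\varepsilon_0^2 h^2}\,\frac{1}{n^2}
    \approx -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots

% A "quantum leap" from level n_2 down to n_1 emits light whose
% wavelength matches the empirical Rydberg formula:
\frac{1}{\lambda} = R_\infty\!\left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right),
\qquad
R_\infty = \frac{m_e e^4}{8\varepsilon_0^2 h^3 c} \approx 1.097\times 10^{7}\ \text{m}^{-1}
```

The point is that nothing here is a free parameter: the Rydberg constant, measured spectroscopically decades earlier, falls out of the model’s known inputs.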
The early 20th century was ablaze with advances on the scale of Bohr’s quantum atom and by the 1930s, academia was filled with big-name physicists. In 1926, Erwin Schrödinger, with his wave equation, gave us new ways to visualise and predict things in the quantum realm. Werner Heisenberg discovered the uncertainty principle around the same time, describing the fundamental limits of knowledge and, in a roundabout way, laying the foundations for the solid-state electronics of the microchip. In 1928, Paul Dirac wrote an equation to model electrons moving at speeds approaching that of light, their maximum limit, and discovered the possibility of antimatter in the process – within four years, it had been observed by experiment and by the 1970s, antimatter was being used in particle accelerators. In 1930, Wolfgang Pauli, a sometime collaborator of Dirac’s, invoked a new particle to explain one of the forms of nuclear radiation: it became the neutrino. There was one vast shift in perspective after another. The existence of galaxies was confirmed in 1923: those fuzzy blobs and spirals, which we had mistaken for relatively nearby nebulae or hot clouds of gas lit up by stars, now made the Milky Way into just one of thousands of galaxies then detectable from Earth. All at once we were a speck in a vast cosmos of specks. And in 1929, Edwin Hubble confirmed that the entire visible universe was expanding, and big bang cosmology was given its first tangible roots in reality.
While figures such as Bohr, Dirac, Heisenberg and Pauli are superstars for scientists and for fans, there were a few who became household names. Hubble was immortalised by the NASA telescope; Schrödinger for his cat-based illustration of quantum weirdness. Yet only Einstein is universally recognised as a genius – and it is worth noting that the work that made his name was undertaken largely alone. The white heat of the 1920s was fiercely collaborative – all the major players knew each other personally and often corresponded, or worked at the same institutions – but Einstein famously produced his landmark 1905 paper on relativity in his spare time. He was of course still drawing on mathematics others had developed. His radical idea that space and time are aspects of a single medium was a revelation, and when he went on to show that this medium is dynamic – the general theory of relativity, completed a decade later – he relied on the space-time geometry of his former teacher Hermann Minkowski. Despite standing on the shoulders of giants, Einstein was seen to embody a particularly romantic type of lone genius, one whose status as a patent clerk somehow made his achievement all the more remarkable.
It may be that in a period of great collective progress, individual genius is more likely to be recognised and apt to be celebrated with more fervour. This is partly because it is easier in such circumstances for any one person to make a significant contribution to the overall effort – a single serendipitous breakthrough can create far more momentum than the hundred incremental steps that precede it. It may also be true, though reductive, to say, as so many have, that in the early 20th century the “low-hanging fruit” of new physics was ripe for harvest. Certainly, there were more opportunities to make significant discoveries. Does that help explain why Einstein and Schrödinger are permanently enshrined in popular culture, while names such as Witten, Maldacena and Arkani-Hamed are almost completely unknown? Those three are among the brightest lights of modern physics and they undoubtedly have intellectual gifts comparable to those of their forebears. They have made some grand strides, but they are not celebrated outside their field. Can it be that wider recognition, and especially the genius label, is most readily earned when not only the individual but the entire field is winning?
Part of the problem for modern physics lies in the recent divergence between theory and experiment. Experiments need theories to explore; theories need experiments to confirm them. The discovery of the Higgs boson in 2012 was the final confirmation of electroweak theory, devised in the late 1960s. It also confirmed the broader framework that contains it, the standard model, which was finalised in the early ’70s. Since then, there has been no experimental confirmation of any theory that goes beyond the standard model. We know there must be more to find, as the standard model still requires certain constants and values to be plugged in from observation, rather than calculated from the theory itself: it is not comprehensive. There have been plenty of ideas in the last few decades, and even a new fundamental model – string theory – but nothing that can be confirmed by experiment. In their defence, the modern theories operate in a domain so extreme that current technology has no hope of probing it.
String theory has offered a few potential avenues, but so far they’ve all drawn a blank. To the lay person, string theory can sound abstruse at best, so it is worth briefly setting out how it became the prime area of research, the test for any would-be genius. It gained credibility by offering a solution to the standard model’s major shortcoming. The standard model describes the fundamental particles and three of nature’s four forces: electromagnetism and the strong and weak nuclear forces. At higher energies, the relative strengths of these three forces start to converge. The hope would be that this kind of convergence is an indication of unification, meaning that the different forces detectable at our everyday levels of energy would actually, if observed at much higher levels, reveal themselves to be aspects of a single force. Unify all four forces, gravity included, and you would have a Theory of Everything, which is pretty much what it sounds like: a theory that could explain and link together all known physical phenomena. The problem was gravity. It simply would not fit: it lies outside the standard model altogether, and at the quantum level it is extraordinarily weak compared with the other three forces. Then, in the late 1970s, a possible mathematical solution was found: supersymmetry followed principles that had led to the standard model, but expanded the set of fundamental particles. This offered a potential particle for gravity – the graviton – where gravity had previously been thought of, following Einstein’s model, as a continuous curvature of space rather than a stream of point-like objects. Even more miraculously, certain variations of supersymmetry did allow for gravity to converge with the other forces of the standard model – unification was in sight. When this was discovered, string theory was still a fringe discipline, but people began to explore whether all the particles required by the supersymmetry model could be accounted for in terms of astonishingly small vibrating strings.
Two theorists, Michael Green and John Schwarz, found that they could, and when the results were announced in 1984, it prompted a revolution. String theory had become the best bet in the game.
Despite its international foundations, string theory’s rise from obscure curio in the 1960s to astonishing prospect in the 1980s was fuelled by American theorists and institutions. The US hungrily adopted the “superstring”, keen to grace any home-grown theorists who made solid contributions with pithy nicknames such as the “Princeton String Quartet”. As a canvas, string theory was vast. Its fundamental principle, that all things are composed of vibrating strings, could paint a number of different pictures of the physical world that would work in theory, but nothing that could actually be confirmed by experiment. Whether this vast breadth of theoretical possibilities is a strength or a weakness is still under debate, though some leapt to claim it as an indication that the model was true. By the mid-1990s, string theory had mushroomed into a dizzying complexity that even expert theorists had difficulty getting to grips with.
It was then that Ed Witten, one of our contemporary candidates for physics genius, took centre stage. He had already made huge contributions to string theory’s development, and now he gave the decade its big theoretical physics moment. Judging by mathematical similarities between the five different types of string theory, Witten argued that there may be an even deeper, more fundamental theory that explains them all. He christened it M-theory in 1995, but its status as a true account of nature is still conjecture some 18 years later. What has happened in those intervening years has been an increasing proliferation of stringy concepts and models. The last great advance in the theory was a profound one: a calculation to see how many different configurations of a universe string theory could create, if you added up all the combinations of its parameters. It turned out to be around 1 followed by 500 zeros, meaning that string theory allows for more possible universes than there are atoms in the observable universe (around 1 followed by 80 zeros). For some, this was proof of the theory’s remarkable power; for others, it proved that string theory would never be able to tell us why our universe works the way it does, for it couldn’t explain why ours would be one of such a staggeringly large number. Witten’s mathematical hunch had opened quite a can of worms, but theoretical physics seemed no nearer to finding out the truth. And amid such an explosion of hypothetical complexity, perhaps it is little wonder that no “geniuses” were crowned.
Another possible candidate, Nima Arkani-Hamed, also a US-based theorist, came to prominence in the 2000s, thanks to his work on one of string theory’s most controversial aspects: its insistence on extra, hidden dimensions – further directions of movement beyond the familiar width, length and depth. Before Arkani-Hamed, these dimensions had been modelled as being tiny, smaller than the fundamental strings themselves, but he and his collaborators devised a theory in which they are comparatively enormous – which would make for anomalies that should show up in experiments. Unfortunately, though, the LHC has all but discounted the possibility of Arkani-Hamed’s radical idea. His thinking was just as potent and insightful as that of any of our recognised giants of physics. It just doesn’t seem to agree with nature. Arkani-Hamed continues to work on big ideas. In 2013, in collaboration with a student of his, he announced a new method of calculating particle interactions that is radically different from anything that has gone before – but so far it only works for “toy” universes that bear no relation to nature. Once again, it may be a “promising development”, but it is surely a long way from the elegance and universality of Einstein’s E = mc², or of many other revelations from the birth of modern physics.
Has US academia, in its urgent quest to secure prestige in physics, staked too much on string theory, in the absence of experimental guidance? It has been nearly 20 years since the last significant American discovery in experimental particle physics, the 1995 observation of the top quark, the last piece in the standard model’s formulation of matter (after that, only the Higgs remained to be found to round off the forces). The top quark was observed at Fermilab, outside Chicago, using the Tevatron collider; sixteen years later, in 2011, the Tevatron was shut down after the US government declined to fund its continued operation, just as the LHC was announcing its first results. The Tevatron’s closure signalled the end of the US government’s commitment to experimental science in the most extreme domain of physics. It was a strong sign that American physics had begun to base its reputation not on experiment but on theory.
By 2006, just over a century after Einstein’s first paper on relativity, discontent was brewing. With so much staked on theory alone, two US physicists wrote books attacking the overwhelming institutional emphasis on string theory. Peter Woit at Columbia went right for the throat: his book Not Even Wrong is a savage denunciation of string theory’s lack of unique predictions and its failure to arrive at a concise, complete picture. Lee Smolin, a more philosophical theorist working at the Perimeter Institute, weaves a friendlier, more detailed critique of string theory and the sociology of the US field in The Trouble With Physics. This backlash was met with another – the critiques were themselves critiqued across popular science magazines and blogs, though with a certain defensiveness. String theory had been the great hope for more than 20 years, and it dominates the modern archives of published papers on fundamental physics. Woit claims that today’s string theorists have probably spent more time learning about their specialism than about the standard model it is supposed to explain. He sees this as troublesome, especially as it means the best minds in the business are not spending much time exploring alternatives. Worryingly, Smolin observes that unless you are doing string theory research, you probably will not get funding in the US, so if it does turn out that the entire theoretical framework is wrong, there is no contingency plan: it’s string theory or bust. A lot of people may have spent their entire careers barking up the wrong tree – and not just “ordinary” theorists, but the cream of the crop. Might this explain where all the geniuses have gone?
Things are very different, of course, in mathematics. Even Woit and Smolin agree that string theory has been useful inasmuch as it has fuelled mathematical research. At least by comparison with physics, mathematics could be said to favour the loner – with no requirement to cross-check your results against nature, internal consistency is all that counts. In the same year that Witten announced M-theory, Andrew Wiles published his final proof of Fermat’s Last Theorem, a problem that had defied the greatest minds in the field for more than 350 years. It had taken Wiles some nine years, working mostly alone and in secret. In 2003, Grigori Perelman solved the Poincaré conjecture, which had been the thorniest problem in mathematical topology for more than a century. It probably took him a decade. Naturally, both men relied on the work of others to reach their conclusions, but they also did it by themselves, just like Einstein.
The contrast with theoretical physics is stark: in the same period, many hopes were pinned on the LHC to reveal at long last some evidence of string theory in nature. In the early 2000s, the common line was that we could expect the LHC to “light up like a Christmas tree” with new particles. It would reach energies at which some versions of supersymmetry (the necessary foundation of string theory) predicted that entirely new particles would appear. But they didn’t. The Higgs dutifully turned up, but in standard model form rather than the more exotic forms predicted by supersymmetry. Some took this as a sign that supersymmetry was false, but so far the status quo has held: the parameters were tweaked, and we shall have to wait until the LHC has been upgraded to see whether more adjustment is required. For now, string theory remains as hypothetical as it was in 1984. It may be unfair or facile to compare the last 30 years with that astonishing period from 1900 to 1930, but it is hard to avoid the sense that we are looking back on three decades of stagnation.
In an era when the most recognised scientists are those who make good TV presenters, the definition of genius might have to shift. Logically, the chances of another Einstein coming along are the same as they were in the 1900s, but the context now is very different: there are far more steps to climb before you can reach new territory to explore, and perhaps what must be known takes longer to learn than any individual can realistically manage. Are we reaching some limit of human cognition, so that any fundamental theory beyond the standard model outstrips our capacity to conceive of it? This is not to say that theoretical physics should give up looking for fundamental theories, but perhaps the next great leap in understanding is simply more likely to come from an experiment led by thousands than from one brilliant mind working alone at a desk. In any case, if genius, whether individual or collective, can be said to exist at all, it must surely be something that cannot be predicted or understood in advance – it only ever arrives by surprise. §