Frank Wilczek, the author of this text (in Physics Today, August 2002), is the Herman Feshbach Professor of Physics at the Massachusetts Institute of Technology in Cambridge, Massachusetts.

Let's quickly recollect the main points of the two earlier columns in this
series. Gravity appears extravagantly feeble on atomic and laboratory
scales, ultimately because the proton's mass *m*_{p} is
*much* smaller
than the Planck mass *M*_{Planck} = (ℏ*c*/*G*_{N})^{1/2}, where ℏ is Planck's quantum of action, *c* is the speed of light, and *G*_{N} is Newton's gravitational constant. Numerically, *m*_{p}/*M*_{Planck} ≈ 10^{-19}. If we aspire, in line with Planck's original vision and with modern ambitions for the unification of physics, to use the natural (Planck) system of units constructed from *c*, ℏ, and *G*_{N} (see "Scaling Mount Planck I: A View from the Bottom," Physics Today, June 2001, page 12), and if we agree that the proton is a natural object, then the very small ratio appears at first blush to pose a very big embarrassment. It mocks the central tenet of dimensional analysis, which is that natural quantities expressed in natural units should have numerical values close to unity.
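The ratio quoted above is easy to check numerically. The following sketch uses rounded CODATA-style values for the constants; the variable names are mine, not part of the original discussion.

```python
# Quick numerical check of m_p / M_Planck, using rounded SI values.
hbar = 1.054571817e-34   # Planck's quantum of action, J*s
c = 2.99792458e8         # speed of light, m/s
G_N = 6.67430e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
m_p = 1.67262192e-27     # proton mass, kg

M_Planck = (hbar * c / G_N) ** 0.5   # the Planck mass, ~2.2e-8 kg
ratio = m_p / M_Planck               # of order 10^-19

print(f"M_Planck       = {M_Planck:.3e} kg")
print(f"m_p / M_Planck = {ratio:.3e}")
```

The ratio comes out near 8 × 10^{-20}, i.e. of order 10^{-19}, which is the "very big embarrassment" for naive dimensional analysis.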

Fortunately, we have a deep dynamical understanding of the origin of the proton's mass, thanks to quantum chromodynamics. The value of the proton's mass is determined by the scale Λ_{QCD} at which the interaction between quarks--parameterized by the energy-dependent "running" QCD coupling constant *g*_{s}(*E*)--becomes strong.
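The logarithmic running of the QCD coupling can be sketched with the standard one-loop formula. This is illustrative only: the value chosen for Λ_QCD is a representative round number, and a one-loop formula with a fixed flavor count is far from a precision treatment.

```python
import math

# One-loop running of the strong coupling (illustrative sketch):
#   alpha_s(E) = 12*pi / ((33 - 2*n_f) * ln(E^2 / Lambda^2))
Lambda_QCD = 0.2   # GeV; a representative value, assumed for illustration
n_f = 5            # active quark flavors at the energies shown

def alpha_s(E_GeV):
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(E_GeV**2 / Lambda_QCD**2))

for E in [1.0, 10.0, 91.2, 1000.0]:
    print(f"alpha_s({E:7.1f} GeV) = {alpha_s(E):.3f}")
```

The coupling shrinks logarithmically at high energy (asymptotic freedom) and blows up as *E* approaches Λ_QCD, which is what ties the proton's mass to that scale.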

A conceptually independent line of evidence likewise points to *M*_{Planck}*c*^{2} as a fundamental energy scale. By postulating the existence of an encompassing symmetry at that scale, and weaving the separate gauge symmetries SU(3) × SU(2) × U(1) of the standard model into a larger whole, we can elucidate a few basic features of the standard model that would otherwise remain cryptic. The scattered multiplets of fermions and their peculiar hypercharge assignments click together like pieces of a disassembled watch. And, most impressively, the disparate coupling strengths we observe at low energy are derived quantitatively from a single coupling--none other than our friend *g*_{s}(*E*), now evaluated near the unification scale.
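The approximate convergence of the three gauge couplings can be sketched with the standard one-loop evolution equations. The inputs below are rounded textbook numbers (GUT-normalized hypercharge), so the output should be read only as an order-of-magnitude illustration of the convergence, not a precision fit.

```python
import math

# One-loop running of the three standard-model gauge couplings:
#   1/alpha_i(E) = 1/alpha_i(M_Z) - (b_i / 2*pi) * ln(E / M_Z)
M_Z = 91.2                                   # GeV
inv_alpha_MZ = {1: 59.0, 2: 29.6, 3: 8.5}    # approximate 1/alpha_i at M_Z
b = {1: 41.0 / 10.0, 2: -19.0 / 6.0, 3: -7.0}  # one-loop SM b-coefficients

def inv_alpha(i, E_GeV):
    return inv_alpha_MZ[i] - (b[i] / (2 * math.pi)) * math.log(E_GeV / M_Z)

E = 1.0e15  # GeV, near the putative unification scale
for i in (1, 2, 3):
    print(f"1/alpha_{i}({E:.0e} GeV) = {inv_alpha(i, E):.1f}")
```

Starting from inverse couplings spread between roughly 8 and 60 at the Z mass, the three curves cluster near 40 at around 10^{15} GeV: the disparate low-energy strengths flow back toward a single value.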

In all those previous considerations, gravity itself has figured only passively, as a numerical backdrop. It has supplied us with the numerical value of *G*_{N}, but that's all. Now, in this concluding column, I examine how (and to what extent) gravity, as a dynamical theory, fits within this circle of ideas.

A lot of portentous drivel has been written about the quantum theory of gravity, so I'd like to begin by making a fundamental observation about it that tends to be obfuscated. *There is a perfectly well-defined quantum theory of gravity that agrees accurately with all available experimental data.* (I have heard two grand masters of theoretical physics, Richard Feynman and J. D. Bjorken, emphasize this point on public occasions.)

Here it is. Take classical general relativity as it stands: the Einstein-Hilbert action for gravity, with minimal coupling to the standard model of matter. Expand the metric field in small fluctuations around flat space, and pass from the classical to the quantum theory following the canonical procedure. This is just what we do for any other field. It is, for example, how we produce *quantum* chromodynamics from classical gauge theory. Applied to general relativity, this approach gives you a theory of gravitons interacting with matter.

More specifically, this procedure generates a set of rules for Feynman graphs, which you can use to compute physical processes. All the classic consequences of general relativity, including the derivation of Newton's law as a first approximation, the advance of Mercury's perihelion, the decay of binary pulsar orbits due to gravitational radiation, and so forth, follow from straightforward application of these rules within a framework in which the principles of quantum mechanics are fully respected.

To define the rules algorithmically, we need to specify how to deal with ill-defined integrals that arise in higher orders of perturbation theory. The same problem already arises in the standard model, even before gravity is included. There we deal with ill-defined integrals using renormalization theory. We can do the same here. In renormalization theory, we specify by hand the values of some physical parameters, and thereby fix the otherwise ill-defined integrals. A salient difference between how renormalization theory functions in the standard model and how it extends to include gravity is that, whereas in the standard model by itself we need only specify a finite number of parameters to fix all the integrals, after we include gravity we need an infinite number. But that's all right. By setting all but a very few of those parameters equal to zero, we arrive at an adequate--indeed, a spectacularly successful--theory. It is just this theory that practicing physicists always use, tacitly, when they do cosmology and astrophysics. (For the experts: The prescription is to put the coefficients of all nonminimal coupling terms to zero at some reference energy scale, call it ε, well below the Planck scale. The necessity to choose an ε introduces an ambiguity in the theory, but the consequences of that ambiguity are both far below the limits of observation and well beyond our practical ability to calculate corrections expected from mundane, nongravitational interactions.)

Of course the theory just described, despite its practical success, has serious shortcomings. Any theory of gravity that fails to explain why our richly structured vacuum, full of symmetry-breaking condensates and virtual particles, does not weigh much more than it does is a profoundly incomplete theory. This stricture applies equally to the most erudite developments in string and M theory and to the humble bottom-up approach used here. This gaping hole in our understanding of Nature is the notorious problem of the cosmological term. Perhaps less pressing, but still annoying, is that the above-mentioned ambiguity in the theory of gravity at ultralarge energy-momentum makes it difficult to address questions about what happens in ultraextreme conditions, including such interesting situations as the earliest moments of the Big Bang and the endpoints of gravitational collapse.

Nevertheless it makes good sense to take our working theory of gravity at face value and to see whether it fits into the attractive picture of unification we have built for the strong, weak, and electromagnetic interactions. Again, a crucial question is the apparent disparity between the coupling strengths. For the standard model interactions, logarithmic running of couplings with energy was a subtle quantum-mechanical phenomenon, caused by the screening or antiscreening effect of virtual particles. With gravity, the main effect is much simpler--and much bigger. Gravity, in general relativity, responds directly to energy-momentum. So the effective strength of the gravitational interaction, when measured by probes carrying larger energy-momentum, appears larger. That is a classical effect, and it goes as a power, not a logarithm, of the energy.

Now on laboratory scales, gravity is *much* weaker than the other interactions--roughly a factor 10^{-40}. But we've seen that unification of the standard model couplings occurs at a very large energy scale, precisely because their running is logarithmic. And at this energy scale, we find that gravity, which runs faster, has almost caught up to the other interactions! Since the mathematical form of the interactions is not precisely the same, we cannot make a completely rigorous comparison, but simple comparisons of forces or scattering amplitudes give numbers like 10^{-2}. Gravity is still weaker, but not absurdly so. Given the enormity of the original disparity, and the audacity of our extrapolations, this relatively slight discrepancy qualifies, if not quite as full success in achieving, at least as further encouragement toward trusting, the ideal of unification.
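The power-law "running" of gravity described above can be made concrete with a crude dimensionless measure of its strength: the combination (*E*/*M*_{Planck}*c*^{2})^{2}, which general relativity's direct coupling to energy-momentum suggests. This is a rough sketch, and the precise number obtained at the unification scale depends on exactly what one compares (forces, amplitudes, and so on), as the text cautions.

```python
# Crude measure of gravity's effective strength at probe energy E:
# the dimensionless combination (E / Planck energy)^2, which grows
# as a power of E rather than logarithmically.
M_Planck_GeV = 1.22e19   # Planck energy scale in GeV

def alpha_grav(E_GeV):
    return (E_GeV / M_Planck_GeV) ** 2

print(f"at 1 GeV (lab scale):      {alpha_grav(1.0):.1e}")     # ~1e-38
print(f"at 1e16 GeV (unification): {alpha_grav(1.0e16):.1e}")  # ~1e-6
```

Over sixteen decades of energy, gravity's strength climbs by more than thirty orders of magnitude, carrying it from utterly negligible to within hailing distance of the gauge couplings; the residual gap is what the text's "numbers like 10^{-2}" comparisons of forces and amplitudes quantify.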

Let me summarize. Planck observed in 1900 that one could construct a system of units based on *c*, ℏ, and *G*_{N}. Subsequent developments displayed those quantities as conversion factors in profound physical theories. Now we find that Planck's units, although preposterous for everyday laboratory work, are very suitable for expressing the deep structure of what I consider our best working model of Nature, as sketched in this three-part series of columns. Planck proposed, implicitly, that the mountain of theoretical physics would be built to purely conceptual specifications, using just those units. Now we've taken the measure of Mount Planck from several different vantage points: from QCD, from unified gauge theories, from gravity itself--and found a consistent altitude. It therefore comes to seem that Planck's magic mountain, born in fantasy and numerology, may well correspond to physical reality. If so, then reductionist physics begins to face the awesome question, compounded of fulfillment and yearning, that heads this column.