Varying constants, Gravitation and Cosmology
Abstract
Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This would induce a violation of the universality of free fall. It is thus of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We thus detail the relations between the constants, the tests of local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, Solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence on the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying, and we focus on the unification mechanisms and the relations between the variations of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.
general theory of gravitation, fundamental constants, theoretical cosmology
Contents
 1 Introduction
 2 Constants and fundamental physics

3 Experimental and observational constraints
 3.1 Atomic clocks
 3.2 The Oklo phenomenon
 3.3 Meteorite dating

3.4 Quasar absorption spectra
 3.4.1 Generalities
 3.4.2 Alkali doublet method (AD)
 3.4.3 Many multiplet method (MM)
 3.4.4 Single ion differential measurement (SIDAM)
 3.4.5 H i 21 cm vs UV
 3.4.6 H i vs molecular transitions
 3.4.7 OH 18 cm
 3.4.8 Far infrared fine-structure lines
 3.4.9 “Conjugate” satellite OH lines
 3.4.10 Molecular spectra and the electron-to-proton mass ratio
 3.4.11 Emission spectra
 3.4.12 Conclusion and prospects
 3.5 Stellar constraints
 3.6 Cosmic Microwave Background
 3.7 21 cm
 3.8 Big bang nucleosynthesis
 4 The gravitational constant
 5 Theories with varying constants
 6 Spatial variations
 7 Why are the constants just so?
 8 Conclusions
 A Notations
1 Introduction
Fundamental constants appear everywhere in the mathematical laws we use to describe the phenomena of Nature. They seem to contain some truth about the properties of the physical world, while their real nature seems to evade us.
The question of the constancy of the constants of physics was probably first addressed by Dirac [154, 155] who expressed, in his “Large Numbers hypothesis”, the opinion that very large (or small) dimensionless universal constants cannot be pure mathematical numbers and must not occur in the basic laws of physics. He suggested, on the basis of this numerological principle, that these large numbers should rather be considered as variable parameters characterizing the state of the universe. Dirac formed five dimensionless ratios and asked which of these ratios remain constant as the universe evolves; some of them vary as the inverse of the cosmic time. Dirac then noticed that the ratio of the electrostatic and gravitational forces between a proton and an electron was of the same order as the age of the universe expressed in atomic units, so that his five numbers can be “harmonized” if one assumes that the gravitational constant varies with time, scaling as the inverse of the cosmic time.
This argument by Dirac is indeed not a physical theory but it opened many doors in the investigation on physical constants, both on questioning whether they are actually constant and on trying to understand the numerical values we measure.
First, the implementation of Dirac’s phenomenological idea into a field-theory framework was proposed by Jordan [267], who realized that the constants have to become dynamical fields, and who proposed a theory in which both the gravitational and fine-structure constants can vary (Ref. [492] provides a summary of some earlier attempts to quantify the cosmological implications of Dirac’s argument). Fierz [196] then realized that in such a case atomic spectra will be space-time dependent, so that these theories can be observationally tested. Restricting to the subcase in which only the gravitational constant can vary led to the definition of the class of scalar-tensor theories, which were further explored by Brans and Dicke [66]. This kind of theory was further generalized to obtain various functional dependencies for the gravitational constant in the formalization of scalar-tensor theories of gravitation (see e.g. Ref. [123]).
Second, Dicke [149] pointed out that in fact the density of the universe is determined by its age, this age being related to the time needed to form galaxies, stars, heavy nuclei… This led him to argue that the presence of an observer in the universe places constraints on the physical laws that can be observed. In fact, what is meant by observer is the existence of (highly?) organized systems, and this principle can be seen as a rephrasing of the question “why is the universe the way it is?” (see Ref. [251]). Carter [81, 82], who actually coined the term “anthropic principle” for it, showed that the numerological coincidences found by Dirac can be derived from physical models of stars and the competition between the weakness of gravity and nuclear fusion. Carr and Rees [79] then showed how one can scale up from atomic to cosmological scales using only combinations of a few dimensionless constants, essentially the fine-structure constant, its gravitational analogue and the electron-to-proton mass ratio.
To summarize, Dirac’s insight was to question whether some numerical coincidences between very large numbers, which cannot themselves be explained by the theory in which they appear, are mere coincidences or whether they reveal the existence of some new physical laws. This gives three main roads of investigation:

how do we construct theories in which what were thought to be constants are in fact dynamical fields;

how can we constrain, experimentally or observationally, the space-time dependencies of the constants that appear in our physical laws;

how can we explain the values of the fundamental constants and the fine-tuning that seems to exist between their numerical values.
While “varying constants” may seem, at first glance, to be an oxymoron, it has to be considered merely as jargon, to be understood as “revealing new degrees of freedom, and their coupling to the known fields of our theory”. Tests of the constancy of the fundamental constants are indeed very important tests of fundamental physics and of the laws of Nature we are currently using. Detecting any such variation would indicate the need for new physical degrees of freedom in our theories, that is, new physics.
Theoretical physics is necessary for deriving bounds on their variation in at least three ways:

it is necessary to understand and to model the physical systems used to set the constraints. In particular, one needs to relate the effective parameters that can be observationally constrained to a set of fundamental constants;

it is necessary to relate and compare different constraints that are obtained at different space-time positions. This often requires assuming a space-time dynamics and thus specifying a model as well as a cosmology;

it is necessary to relate the variations of different fundamental constants.
We shall thus start in § 2 by recalling the link between the constants of physics and the theories in which they appear, as well as with metrology. From a theoretical point of view, the constancy of the fundamental constants is deeply linked with the equivalence principle and general relativity. In § 2 we will recall this relation and in particular the link with the universality of free fall. We will then summarize the various constraints that exist on such variations, mainly for the fine-structure constant and for the gravitational constant, in § 3 and § 4 respectively. We will then turn to the theoretical implications in § 5, describing some of the arguments backing the expectation that constants may vary, the main frameworks used in the literature, and the various ways proposed to explain why they have the values we observe today. We shall finish with a discussion of their spatial variations in § 6 and of the possibility of understanding their numerical values in § 7.
2 Constants and fundamental physics
2.1 About constants
Our physical theories introduce various structures to describe the phenomena of Nature. They involve various fields, symmetries and constants. These structures are postulated in order to construct a mathematically consistent description of the known physical phenomena in the most unified and simple way.
We define the fundamental constants of a physical theory as any parameter that cannot be explained by this theory. Indeed, we are often dealing with other constants that can in principle be expressed in terms of these fundamental constants. The existence of these two sets of constants is important and arises from two different considerations. From a theoretical point of view we would like to extract the minimal set of fundamental constants, but often these constants are not measurable. From a more practical point of view, we need to measure constants, or combinations of constants, which allow us to reach the highest accuracy.
These fundamental constants are thus contingent quantities that can only be measured. Such parameters have to be assumed constant in this theoretical framework for two reasons:

from a theoretical point of view: the considered framework does not provide any way to compute these parameters, i.e. it does not have any equation of evolution for them, since otherwise they would be considered as dynamical fields;

from an experimental point of view: these parameters can only be measured. If the theories in which they appear have been validated experimentally, it means that, at the precision of these experiments, these parameters have indeed been checked to be constant, as required by the necessity of the reproducibility of experimental results.
This means that testing for the constancy of these parameters is a test of the theories in which they appear and allows us to extend our knowledge of their domain of validity. This also explains the definition chosen by Weinberg [521] who stated that they cannot be calculated in terms of other constants “… not just because the calculation is too complicated (as for the viscosity of water) but because we do not know of anything more fundamental”.
This has a series of implications. First, the list of fundamental constants to consider depends on our theories of physics and thus on time. Indeed, when introducing new, more unified or more fundamental theories the number of constants may change, so that this list reflects both our knowledge of physics and, more importantly, our ignorance. Second, it also implies that some of these fundamental constants can become dynamical quantities in a more general theoretical framework, so that the tests of the constancy of the fundamental constants are tests of fundamental physics which can reveal that what was thought to be a fundamental constant is actually a field whose dynamics cannot be neglected. If such fundamental constants are actually dynamical fields, it also means that the equations we are using are only approximations of other, more fundamental equations, in an adiabatic limit, and that an equation for the evolution of this new field has to be obtained.
The reflections on the nature of the constants and their role in physics are numerous. We refer to the books [28, 216, 503, 502] as well as Refs. [164, 217, 387, 517, 521, 534] for various discussions on this issue that we cannot develop at length here. This paragraph summarizes some of the properties of the fundamental constants that have attracted some attention.
2.1.1 Characterizing the fundamental constants
Physical constants seem to play a central role in our physical theories since, in particular, they determine the magnitude of the physical processes. Let us sketch briefly some of their properties.
How many fundamental constants should be considered? The set of constants which are conventionally considered as fundamental [214] consists of the electron charge e, the electron mass m_e, the proton mass m_p, the reduced Planck constant ħ, the velocity of light in vacuum c, the Avogadro constant N_A, the Boltzmann constant k_B, the Newton constant G, and the permittivity and permeability of space, ε0 and μ0. The latter has a fixed value in the SI system of units (μ0 = 4π × 10⁻⁷ H m⁻¹) which is implicit in the definition of the Ampere; ε0 is then fixed by the relation ε0μ0c² = 1.
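As a quick illustrative check (a minimal sketch assuming the pre-2019 SI convention, in which μ0 is fixed by the definition of the Ampere), ε0 follows from the relation ε0μ0c² = 1:

```python
import math

# Pre-2019 SI convention (assumed here): mu_0 is fixed by the definition
# of the Ampere, and eps_0 then follows from eps_0 * mu_0 * c**2 = 1.
c = 299_792_458.0            # speed of light in vacuum, m/s (exact)
mu_0 = 4 * math.pi * 1e-7    # permeability of vacuum, H/m (fixed, pre-2019 SI)
eps_0 = 1.0 / (mu_0 * c**2)  # permittivity of vacuum, F/m

print(f"eps_0 = {eps_0:.6e} F/m")  # ~8.854188e-12 F/m
```

This is why, in that convention, neither μ0 nor ε0 is an independently measured constant: both follow from the definitions of the units.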
It is however clear that this cannot correspond to the list of fundamental constants, defined earlier as the free parameters of the theoretical framework at hand. To define such a list we must specify this framework. Today, gravitation is described by general relativity, and the three other interactions and the matter fields are described by the standard model of particle physics. It follows that one has to consider 22 unknown constants (i.e. 19 unknown dimensionless parameters): the Newton constant G, 6 Yukawa couplings for the quarks (h_u, h_d, h_c, h_s, h_t, h_b) and 3 for the leptons (h_e, h_μ, h_τ), 2 parameters of the Higgs field potential, 4 parameters of the Cabibbo-Kobayashi-Maskawa matrix (3 angles and a phase δ_CKM), 3 coupling constants for the gauge groups SU(3)_c × SU(2)_L × U(1)_Y (g1, g2, g3, or equivalently g2, g3 and the Weinberg angle θ_W), and a phase for the QCD vacuum (θ_QCD), to which one must add the speed of light c and the Planck constant h. See Table 1 for a summary and their numerical values.
Constant  Symbol  Value 

Speed of light  
Planck constant (reduced)  
Newton constant  
 Weak coupling constant (at m_Z)  
 Strong coupling constant (at m_Z)  
Weinberg angle  
Electron Yukawa coupling  
Muon Yukawa coupling  
Tauon Yukawa coupling  
Up Yukawa coupling  
Down Yukawa coupling  
Charm Yukawa coupling  
Strange Yukawa coupling  
Top Yukawa coupling  
Bottom Yukawa coupling  
Quark CKM matrix angle  
Quark CKM matrix phase  
 Higgs potential quadratic coefficient  ?  
 Higgs potential quartic coefficient  ?  
QCD vacuum phase 
Again, this list of fundamental constants relies on what we accept as a fundamental theory. Today we have many hints that the standard model of particle physics has to be extended, in particular to include the existence of massive neutrinos. Such an extension will introduce at least seven new constants (3 Yukawa couplings and 4 Maki-Nakagawa-Sakata (MNS) parameters, similar to the CKM parameters). On the other hand, the number of constants can decrease if some unifications between the various interactions exist (see § 5.3.1 for more details), since the various coupling constants may be related to a unique coupling constant α_U and an energy scale of unification M_U through

α_i⁻¹(E) = α_U⁻¹ + (b_i/2π) ln(M_U/E),

where the coefficients b_i are numbers which depend on the explicit model of unification. Note that this would also imply that the variations, if any, of the various constants shall be correlated.
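The correlation induced by such a unification can be illustrated with a rough one-loop running of the three inverse couplings. The numerical inputs below (couplings at m_Z with a GUT-normalised U(1), and standard-model one-loop coefficients b_i) are rounded textbook values assumed for illustration, not a precision fit:

```python
import math

# One-loop running of the three inverse gauge couplings,
# alpha_i^-1(E) = alpha_i^-1(m_Z) - (b_i / 2 pi) ln(E / m_Z).
# Inputs (couplings at m_Z with GUT-normalised U(1), standard model
# one-loop coefficients b_i) are rounded illustrative values.
m_Z = 91.19                              # GeV
alpha_inv_mZ = [59.0, 29.6, 8.5]         # U(1)_Y (GUT norm.), SU(2), SU(3)
b = [41.0 / 10.0, -19.0 / 6.0, -7.0]     # one-loop beta coefficients

def alpha_inv(E):
    """Inverse couplings at energy E (GeV), one-loop approximation."""
    return [a - bi / (2 * math.pi) * math.log(E / m_Z)
            for a, bi in zip(alpha_inv_mZ, b)]

low, high = alpha_inv(m_Z), alpha_inv(1e15)   # m_Z vs a putative GUT scale
print("at m_Z    :", low)
print("at 1e15 GeV:", high)
```

With these inputs the three inverse couplings, widely separated at m_Z, approach each other near 10¹⁵ GeV; this is the sense in which a single variation of α_U or M_U would induce correlated variations of all three low-energy couplings.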
Relation to other usual constants. These parameters of the standard model are related to various constants that will appear in this review (see Table 2). First, the quartic and quadratic coefficients of the Higgs field potential are related to the Higgs mass and vacuum expectation value (vev), m_H and v. The latter is related to the Fermi constant G_F, which imposes that v = (√2 G_F)⁻¹ᐟ² = 246 GeV, while the Higgs mass is badly constrained. The masses of the quarks and leptons are related to their Yukawa coupling and the Higgs vev by m_i = h_i v/√2. The values of the gauge couplings depend on energy via the renormalisation group, so that they are given at a chosen energy scale, here the mass of the Z boson, m_Z. g1 and g2 are related by the Weinberg angle as tan θ_W = g1/g2. The electromagnetic coupling constant is not g1, since U(1)_Y is broken to the electromagnetic U(1), so that it is given by

g_EM = e = g2 sin θ_W.   (1)

Defining the fine-structure constant as α = e²/4πε0ħc, the (usual) zero-energy electromagnetic fine-structure constant, α ≈ 1/137.036, is related to α(m_Z) by the renormalisation group equations. In particular, the effective electromagnetic coupling increases with energy, so that α(m_Z) ≈ 1/128.
More familiar constants, such as the masses of the proton and the neutron, are, as we shall discuss in more detail below (see § 5.3.2), more difficult to relate to the fundamental parameters because they depend not only on the masses of the quarks but also on the electromagnetic and strong binding energies.
Constant  Symbol  Value 

Electromagnetic coupling constant  
 Higgs mass  GeV  
 Higgs vev  GeV  
Fermi constant  
 Mass of the W boson  GeV  
 Mass of the Z boson  GeV  
Fine structure constant  
 Fine structure constant at m_Z  
 Weak structure constant at m_Z  
 Strong structure constant at m_Z  
Gravitational structure constant  
Electron mass  keV  
 Muon mass  MeV  
Tau mass  MeV  
Up quark mass  MeV  
Down quark mass  MeV  
Strange quark mass  MeV  
Charm quark mass  GeV  
Bottom quark mass  GeV  
 Top quark mass  GeV  
QCD energy scale  MeV  
Mass of the proton  MeV  
Mass of the neutron  MeV  
 proton-neutron mass difference  MeV  
 proton-to-electron mass ratio  
 electron-to-proton mass ratio  
quark mean mass  MeV  
quark mass difference  MeV  
proton gyromagnetic factor  5.586  
neutron gyromagnetic factor  3.826  
Rydberg constant 
Are some constants more fundamental?
As pointed out by Lévy-Leblond [326], all the constants of physics do not play the same role, and some have a much deeper role than others. Following Ref. [326], we can define three classes of fundamental constants: class A being the class of constants characteristic of a particular system, class B being the class of constants characteristic of a class of physical phenomena, and class C being the class of universal constants. Indeed, the status of a constant can change with time. For instance, the velocity of light was initially a class A constant (describing a property of light), then became a class B constant when it was realized that it was related to electromagnetic phenomena and, finally, ended as a class C constant (it enters special relativity and is related to the notion of causality, whatever the physical phenomena). It has even become a much more fundamental constant since it now enters the definition of the metre [408] (see Ref. [503] for a more detailed discussion). This has to be contrasted with the proposition of Ref. [534] to distinguish the standard model free parameters as the gauge and gravitational couplings (which are associated with internal and space-time curvatures) and the other parameters entering the accommodation of inertia in the Higgs sector.
Relation with physical laws.
Lévy-Leblond [326] thus proposed to rank the constants in terms of their universality, and he proposed that only three constants be considered of class C, namely c, G and ħ. He pointed out two important roles of these constants in the laws of physics. First, they act as concept synthesizers during the process of our understanding of the laws of nature: contradictions between existing theories have often been resolved by introducing new concepts that are more general or more synthetic than older ones. Constants build bridges between quantities that were thought to be incommensurable and thus allow new concepts to emerge. For example, c underpins the synthesis of space and time, the Planck constant allowed one to relate the concepts of energy and frequency, and the gravitational constant creates a link between matter and space-time. Second, it follows that these constants are related to the domains of validity of these theories. For instance, as soon as velocity approaches c, relativistic effects become important and can no longer be neglected. On the other hand, for speeds much below c, Galilean kinematics is sufficient. The Planck constant also acts as a referent, since if the action of a system greatly exceeds the value of that constant, classical mechanics will be appropriate to describe this system. While the places of c (related to the notion of causality) and ħ (related to the quantum) in this list are well argued, the place of G remains debated since it is thought that it will have to be replaced by some mass scale.
Evolution. There are many ways the list of constants can change with our understanding of physics. First, new constants may appear when new systems or new physical laws are discovered; this was for instance the case of the charge of the electron or, more recently, the gauge couplings of the nuclear interactions. A constant can also move from one class to a more universal class. An example is that of the electric charge, initially of class A (characteristic of the electron), which then became class B when it was understood that it characterizes the strength of the electromagnetic interaction. A constant can also disappear from the list, either because it is replaced by more fundamental constants (e.g. the Earth acceleration due to gravity and the proportionality constant entering Kepler's law both disappeared because they were “explained” in terms of the Newton constant and the mass of the Earth or the Sun) or because a better understanding of physics teaches us that two hitherto distinct quantities have to be considered as a single phenomenon (e.g. the understanding by Joule that heat and work are two forms of energy implied that the Joule constant, expressing the proportionality between work and heat, lost any physical meaning and became a simple conversion factor between the units used in the measurement of heat (calories) and work (Joules); nowadays the calorie has fallen into disuse). Indeed, demonstrating that a constant is varying will have direct implications for our list of constants.
In conclusion, the evolution of the number and status of the constants can teach us a lot about the evolution of the ideas and theories in physics, since it reflects the birth of new concepts, their evolution and their unification with other ones.
2.1.2 Constants and metrology
Since we cannot compute them in the theoretical framework in which they appear, it is a crucial property of the fundamental constants (but in fact of all constants) that their value can be measured. The relation between constants and metrology is a huge subject, of which we only draw attention to some selected aspects. For more discussions, see Refs. [272, 273].
The introduction of constants in physical laws is also closely related to the existence of systems of units. For instance, Newton’s law states that the gravitational force between two masses is proportional to each mass and inversely proportional to the square of their separation. To transform the proportionality into an equality one requires the use of a quantity with dimension m³ kg⁻¹ s⁻² that is independent of the separation between the two bodies, of their mass, of their composition (equivalence principle) and of the position (local position invariance). With another system of units the numerical value of this constant could simply have been anything. Indeed, the numerical value of any constant crucially depends on the definition of the system of units.
Measuring constants. The determination of the laboratory value of constants relies mainly on the measurements of lengths, frequencies, times, … (see Ref. [409] for a treatise on the measurement of constants and Ref. [214] for a recent review). Hence, any question on the variation of constants is linked to the definition of the system of units and to the theory of measurement. The behavior of atomic matter is determined by the value of many constants. As a consequence, if e.g. the fine-structure constant is space-time dependent, the comparison between several devices such as clocks and rulers will also be space-time dependent. This dependence will also differ from one clock to another, so that metrology becomes both device and space-time dependent, a property that will actually be used to construct tests of the constancy of the constants.
Indeed a measurement is always a comparison between two physical systems of the same dimensions. It is thus a relative measurement whose result is a pure number. This trivial statement is oversimplifying, since in order to compare two similar quantities measured separately, one needs to perform a number of comparisons. In order to reduce the number of these comparisons (and in particular to avoid creating a chain of comparisons every time), a certain set of them has been included in the definitions of units. Each unit can then be seen as an abstract physical system, which has to be realised effectively in the laboratory, and to which another physical system is compared. A measurement in terms of these units is usually called an absolute measurement. Most fundamental constants are related to microscopic physics and their numerical values can be obtained either from a pure microscopic comparison (as is e.g. the case for dimensionless ratios such as the electron-to-proton mass ratio) or from a comparison between microscopic and macroscopic values (for instance to deduce the value of the mass of the electron in kilograms). This shows that the choice of units has an impact on the accuracy of the measurement, since pure microscopic comparisons are in general more accurate than those involving macroscopic physics.
It is also important to stress that in order to deduce the value of constants from an experiment, one usually needs to use theories and models. An example [273] is provided by the Rydberg constant. It can easily be expressed in terms of some fundamental constants as R∞ = α²m_ec/2h. It can be measured from e.g. the triplet transition in hydrogen, the frequency of which is related to the Rydberg constant and other constants by assuming QED, so that the accuracy of R∞ is much lower than that of the measurement of the transition. This could be solved by defining R∞ directly in terms of this transition frequency, but then the relation with more fundamental constants would be more complicated and actually not exactly known. This illustrates the relation between a practical and a fundamental approach, and the limitation arising from the fact that we often cannot both exactly calculate and directly measure some quantity. Note also that some theoretical properties are plugged into the determination of the constants.
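As a sketch of the first step of such a chain, the value of R∞ can be recomputed from the relation R∞ = α²m_ec/2h; the rounded CODATA-style input values, not the relation itself, are the assumption here:

```python
# Recomputing the Rydberg constant from R_inf = alpha**2 * m_e * c / (2 h).
# The rounded CODATA-style input values are the assumption of this sketch.
alpha = 7.2973525693e-3     # fine-structure constant
m_e = 9.1093837015e-31      # electron mass, kg
c = 299_792_458.0           # speed of light, m/s
h = 6.62607015e-34          # Planck constant, J s

R_inf = alpha**2 * m_e * c / (2 * h)
print(f"R_inf = {R_inf:.2f} m^-1")  # ~1.0973731568e7 m^-1
```

Note that the output can only be as accurate as the least accurate input (here α and m_e), which is precisely the point made above about measurement chains.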
As a conclusion, let us recall that (i) in general, the values of the constants are not determined by a direct measurement but by a chain involving both theoretical and experimental steps; (ii) they depend on our theoretical understanding; (iii) the determination of a self-consistent set of values of the fundamental constants results from an adjustment to achieve the best match between theory and a defined set of experiments (which is important because we actually know that the theories are only good approximations with a domain of validity); (iv) the system of units plays a crucial role in the measurement chain, since, for instance, in atomic units the mass of the electron could have been obtained directly from a mass-ratio measurement (which is even more precise!); and (v) fortunately, testing the variability of the constants does not a priori require a high-precision value of the considered constants.
System of units. One thus needs to define a coherent system of units. This has a long, complex and interesting history that was driven by simplicity and universality but also by increasing stability and accuracy [28, 502].
Originally, the sizes of the human body were mostly used to measure the length of objects (e.g. the foot and the thumb gave feet and inches) and some of these units may seem surprising to us nowadays (e.g. the span was the measure of a hand with fingers fully splayed, from the tip of the thumb to the tip of the little finger!). Similarly, weights were related to what could be carried in the hand: the pound, the ounce, the dram… Needless to say, this system had a few disadvantages, since each country or region had its own system (for instance, in France there were more than 800 different units in use in 1789). The need to define a system of units based on natural standards led to several propositions for a standard of length (e.g. the mille proposed by Gabriel Mouton in 1670, defined as the length of one angular minute of a great circle on the Earth, or the length of the pendulum that oscillates once a second, proposed by Jean Picard and Christiaan Huygens). The real change happened during the French Revolution, during which the idea of a universal and non-anthropocentric system of units arose. In particular, the Assemblée adopted the principle of a uniform system of weights and measures on the 8th of May 1790 and, in March 1791, a decree (these texts are reprinted in Ref. [503]) was voted, stating that a quarter of the terrestrial meridian would be the basis of the definition of the metre (from the Greek metron, as proposed by Borda): a metre would henceforth be one ten-millionth part of a quarter of the terrestrial meridian. Similarly, the gram was defined as the mass of one cubic centimetre of distilled water (at a precise temperature and pressure) and the second was defined from the requirement that a mean Solar day must last 24 hours.
To make a long story short, this led to the creation of the metric system and then to the signature of La convention du mètre in 1875. Since then, the definitions of the units have evolved significantly. First, the definition of the metre was related to more immutable systems than our planet, which, as pointed out by Maxwell in 1870, was an arbitrary and inconstant reference. He then suggested that atoms may be such a universal reference. In 1960, the BIPM established a new definition of the metre as the length equal to 1650763.73 wavelengths, in a vacuum, of the radiation corresponding to the transition between the levels 2p10 and 5d5 of krypton-86. Similarly, the rotation of the Earth was not so stable, and it was proposed in 1927 by André Danjon to use the tropical year as a reference, as adopted in 1952. In 1967, the second was also related to an atomic transition, defined as the duration of 9192631770 periods of the transition between the two hyperfine levels of the ground state of caesium-133. To finish, it was decided in 1983 that the metre shall be defined by fixing the value of the speed of light to c = 299792458 m s⁻¹, and we refer to Ref. [478] for an up-to-date description of the SI system. Today, the possibility of redefining the kilogram in terms of a fixed value of the Planck constant is under investigation [271].
This summary illustrates that the system of units is a human product and
all SI definitions are historically based on nonrelativistic classical physics.
The changes in the definition were driven by the will to use more
stable and more fundamental quantities so that they closely follow
the progress of physics. This system has been created for legal use and indeed
the choice of units is not restricted to SI.
SI systems and the number of basic units. The International System of Units defines seven basic units: the metre (m), second (s) and kilogram (kg), the Ampere (A), Kelvin (K), mole (mol) and candela (cd), from which secondary units are defined. While needed for pragmatic reasons, this system of units is unnecessarily complicated from the point of view of theoretical physics. In particular, the Kelvin, mole and candela are derived from the four other units, since temperature is actually a measure of energy and the candela is expressed in terms of energy flux, so that both can be expressed in the mechanical units of length [L], mass [M] and time [T]. The mole is merely a unit denoting a number of particles and has no dimension.
The status of the Ampere is interesting in itself. The discovery of the electric charge [Q] led to the introduction of a new unit, the Coulomb. The Coulomb law describes the force between two charges as being proportional to the product of the two charges and to the inverse of the distance squared. The dimension of the force being [MLT⁻²], this requires the introduction of a new constant, 1/4πε0 (which is only a conversion factor), with dimensions [Q⁻²ML³T⁻²], in the Coulomb law, and that needs to be measured. Another route could have been followed, since the Coulomb law tells us that no new constant is actually needed if one uses [M^(1/2)L^(3/2)T⁻¹] as the dimension of the charge. In this system of units, known as Gaussian units, the numerical value of this conversion factor is 1. Hence the Coulomb can be expressed in terms of the mechanical units [L], [M] and [T], and so can the Ampere. This reduces the number of conversion factors that need to be experimentally determined, but this choice of units assumes the validity of the Coulomb law, so that keeping a separate unit for the charge may be a more robust attitude.
Natural units. The previous discussion tends to show that all units can be expressed in terms of the three mechanical units. It follows, as realized by Johnstone Stoney in 1874, that these three basic units can be defined in terms of 3 independent constants. He proposed [25, 266] to use three constants: the Newton constant $G$, the velocity of light $c$ and the basic unit of electricity, i.e. the electron charge $e$, in order to define, from dimensional analysis, a "natural series of physical units" defined as
$$ t_{\rm S} = \sqrt{\frac{G e^2}{4\pi\epsilon_0 c^6}}, \qquad \ell_{\rm S} = \sqrt{\frac{G e^2}{4\pi\epsilon_0 c^4}}, \qquad m_{\rm S} = \sqrt{\frac{e^2}{4\pi\epsilon_0 G}}, $$
where the factor $4\pi\epsilon_0$ has been included because we are using the SI definition of the electric charge. In such a system of units, by construction, the numerical value of $G$, $c$ and $e$ is 1.
A similar approach to the definition of the units was independently proposed by Planck [413] on the basis of the two constants $a$ and $b$ entering the Wien law and $G$, which he reformulated later [414] in terms of $c$, $G$ and $\hbar$ as
$$ t_{\rm P} = \sqrt{\frac{\hbar G}{c^5}}, \qquad \ell_{\rm P} = \sqrt{\frac{\hbar G}{c^3}}, \qquad m_{\rm P} = \sqrt{\frac{\hbar c}{G}}. $$
The two systems are clearly related by the fine-structure constant, since $e^2/4\pi\epsilon_0 = \alpha \hbar c$.
Indeed, we can construct many such systems, since the choice of the 3 constants is arbitrary. For instance, we can construct a system based on $(e, m_{\rm e}, \hbar)$, which we can call Bohr units and which will be suited to the study of the atom. The choice may be dictated by the system under study (it is indeed far-fetched to introduce $G$ in the construction of the units when studying atomic physics), so that the system is well adjusted, in the sense that the numerical values of the computations are expected to be of order unity in these units.
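As a quick numerical cross-check of this relation, a short script (using assumed, rounded CODATA-style values for the constants) can build both unit systems and verify that the Stoney and Planck lengths differ by a factor $\sqrt{\alpha}$:

```python
import math

# Assumed CODATA-style rounded values (SI units)
G    = 6.674e-11      # Newton constant [m^3 kg^-1 s^-2]
c    = 2.998e8        # speed of light [m s^-1]
hbar = 1.055e-34      # reduced Planck constant [J s]
e    = 1.602e-19      # elementary charge [C]
eps0 = 8.854e-12      # vacuum permittivity [F m^-1]

ke = 1.0 / (4 * math.pi * eps0)          # Coulomb constant e^2 prefactor

# Stoney units, built from (G, c, e) with the SI factor 4*pi*eps0
m_S = math.sqrt(ke * e**2 / G)           # Stoney mass
l_S = math.sqrt(G * ke * e**2 / c**4)    # Stoney length
t_S = l_S / c                            # Stoney time

# Planck units, built from (G, c, hbar)
m_P = math.sqrt(hbar * c / G)
l_P = math.sqrt(hbar * G / c**3)
t_P = l_P / c

# The fine-structure constant links the two systems: l_S / l_P = sqrt(alpha)
alpha = ke * e**2 / (hbar * c)
print(l_S / l_P, math.sqrt(alpha))       # both ~ 0.0854
```

The same $\sqrt{\alpha}$ ratio holds for the mass and time units, since every Stoney unit replaces $\hbar$ by $e^2/4\pi\epsilon_0 c$.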
Such constructions are very useful for theoretical computations but are not adapted
to measurement, so that one needs to switch back to SI units. More importantly,
this shows that, from a theoretical point of view, one can
define the system of units from the laws of nature, which are
supposed to be universal and immutable.
Whether we actually need 3 natural units is an issue
that has been debated at length. For instance, Duff, Okun and
Veneziano [164] respectively argue
for none, three and two (see also Ref. [531]). Arguing for no
fundamental constant leads one to consider them simply as conversion
parameters. Some of them are, like the Boltzmann constant, but
others play a deeper role, in the sense that when a physical quantity
becomes of the same order as this constant, new phenomena appear; this
is the case, e.g., of $\hbar$ and $c$, which are associated respectively
with quantum and relativistic effects. Okun [386] considered that only
three fundamental constants are necessary, as
indicated by the International System of Units. In the framework of quantum field
theory + general relativity, it seems that this set of three constants
has to be considered, and it allows one to classify the physical theories (with the famous
cube of physical theories).
However, Veneziano [510] argued that in the
framework of string theory one requires only two dimensionful
fundamental constants, $c$ and the string length $\lambda_{\rm s}$. The use
of $\hbar$ seems unnecessary, since it combines with the string tension $T$
to give $\lambda_{\rm s}^2$. In the case of the Goto–Nambu action,
$S/\hbar = (T/\hbar)\int {\rm d}({\rm Area}) \equiv \lambda_{\rm s}^{-2}\int {\rm d}({\rm Area})$,
and the Planck constant is just given by $\lambda_{\rm s}^2 T$. In this view, $\hbar$
has not disappeared but has been promoted to the role of a UV
cutoff that removes both the infinities of quantum field theory and
the singularities of general relativity. This situation is analogous to
pure quantum gravity [384], where $\hbar$ and $G$
never appear separately but only in the combination $\ell_{\rm Pl} = \sqrt{G\hbar/c^3}$, so that only $c$ and $\ell_{\rm Pl}$ are
needed. Volovik [516] made an analogy with quantum
liquids to clarify this. There, an observer knows both the effective and the microscopic
physics, so that he can judge whether the fundamental constants of the
effective theory remain fundamental constants of the microscopic
theory. The status of a constant depends on the considered theory
(effective or microscopic) and, more interestingly, on the observer
measuring them, i.e. on whether this observer belongs to the world of
low-energy quasi-particles or to the microscopic world.
Fundamental parameters. Once a set of three independent constants has been chosen as natural units, all other constants are dimensionless quantities. The values of these combinations of constants do not depend on the way they are measured [109, 163, 431], on the definition of the units, etc. It follows that any variation of constants that leaves these numbers unaffected is actually just a redefinition of units.
These dimensionless numbers represent, e.g., mass ratios, relative magnitudes of strength, etc. Changing their values will indeed have an impact on the intensity of various physical phenomena, so that they encode some of the properties of our world. They have specific values (e.g. $\alpha \simeq 1/137$, $m_{\rm p}/m_{\rm e} \simeq 1836$, etc.) that we may hope to understand. Are all these numbers completely contingent, or are some (why not all?) of them related by relations arising from some yet unknown, more fundamental theory? In such theories, some of these parameters may actually be dynamical quantities and thus vary in space and time. These are our potential varying constants.
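The unit-independence of such dimensionless combinations can be illustrated numerically: rescaling the mechanical units by arbitrary factors leaves $\alpha$ untouched. The inputs below are assumed, rounded CODATA-style values, and the rescaling factors are arbitrary:

```python
import math

# Assumed rounded SI values
e, eps0, hbar, c = 1.6022e-19, 8.8542e-12, 1.0546e-34, 2.9979e8
m_p, m_e = 1.6726e-27, 9.1094e-31

def alpha(e, eps0, hbar, c):
    """Fine-structure constant from the constants expressed in any unit system."""
    return e**2 / (4 * math.pi * eps0 * hbar * c)

a1 = alpha(e, eps0, hbar, c)

# Rescale metre, kilogram, second by arbitrary factors L, M, T (charge unit fixed):
# hbar ~ [M L^2 T^-1], c ~ [L T^-1], eps0 ~ [Q^2 M^-1 L^-3 T^2]
L, M, T = 3.7, 0.21, 5.0e3
a2 = alpha(e,
           eps0 / (M * L**3) * T**2,
           hbar * M * L**2 / T,
           c * L / T)

print(a1, a2)          # identical: alpha is unit-independent, ~1/137
print(m_p / m_e)       # ~1836, also unit-independent
```

Any redefinition of the metre, kilogram or second cancels out of the combination, which is exactly why only such dimensionless numbers are candidates for a meaningful "variation of constants".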
2.2 The constancy of constants as a test of general relativity
The previous paragraphs have already emphasized why testing for the constancy of the constants is a test of fundamental physics, since it can reveal the need for new physical degrees of freedom in our theories. We now want to stress the relation of this test to other tests of general relativity and to cosmology.
2.2.1 General relativity
The tests of the constancy of fundamental constants take on their full
importance in the realm of the tests of the equivalence principle [536].
Einstein's general relativity is based on two independent hypotheses, which
can conveniently be described by decomposing the action of the
theory as $S = S_{\rm grav} + S_{\rm matter}$.
The equivalence principle has strong implications for the functional form of $S_{\rm matter}$. This principle includes three hypotheses:

the universality of free fall,

the local position invariance,

the local Lorentz invariance.
In its weak form (that is, for all interactions but gravity), it is satisfied by any metric theory of gravity, and general relativity is conjectured to satisfy it in its strong form (that is, for all interactions, including gravity). We refer to Ref. [536] for a detailed description of these principles. The weak equivalence principle can be mathematically implemented by assuming that all matter fields are minimally coupled to a single metric tensor $g_{\mu\nu}$. This metric defines the lengths and times measured by laboratory clocks and rods, so that it can be called the physical metric. This implies that the action for any matter field, $\psi$ say, can be written as $S_{\rm matter}(\psi, g_{\mu\nu})$.
This so-called metric coupling ensures in particular the validity of the universality of free fall. Since locally, in the neighborhood of the worldline, there always exists a change of coordinates such that the metric takes a Minkowskian form at lowest order, the gravitational field can be locally "effaced" (up to tidal effects). If we identify this neighborhood with a small lab, this means that any physical property that can be measured in this lab must be independent of where and when the experiments are carried out. This is indeed the assumption of local position invariance, which implies that the constants must take the same value independently of the spacetime point where they are measured. Testing the constancy of fundamental constants is thus a direct test of this principle, and thus of the metric coupling. Interestingly, the tests we are discussing in this review allow us to extend them well beyond Solar-system scales, and even to the early universe, an important piece of information for checking the validity of general relativity in cosmology.
As an example, the action of a point particle reads
(2) $\displaystyle S_{\rm matter} = -\int m c \sqrt{-g_{\mu\nu}(x)\,v^\mu v^\nu}\,{\rm d}t,$
with $v^\mu \equiv {\rm d}x^\mu/{\rm d}t$. The equation of motion that derives from this action is the usual geodesic equation
(3) $\displaystyle a^\mu \equiv u^\nu \nabla_\nu u^\mu = 0,$
where $u^\mu = {\rm d}x^\mu/c\,{\rm d}\tau$, $\tau$ being the proper time; $\nabla_\mu$ is the covariant derivative associated with the metric $g_{\mu\nu}$ and $a^\mu$ is the 4-acceleration. Any metric theory of gravity will enjoy such a matter Lagrangian, and the worldline of any test particle shall be a geodesic of the spacetime with metric $g_{\mu\nu}$, as long as there is no other long-range force acting on it (see Ref. [191] for a detailed review of motion in alternative theories of gravity).
Note that in the Newtonian limit $g_{00} = -1 + 2\Phi_{\rm N}/c^2$, where $\Phi_{\rm N}$ is the Newtonian potential. It follows that, in the slow-velocity limit, the geodesic equation reduces to
(4) $\displaystyle \ddot{\vec{x}} = -\nabla\Phi_{\rm N} \equiv \vec{g}_{\rm N},$
hence defining the Newtonian acceleration $\vec{g}_{\rm N}$. Recall that the proper time of a clock is related to the coordinate time by ${\rm d}\tau = \sqrt{-g_{00}}\,{\rm d}t$. Thus, if one exchanges electromagnetic signals between two identical clocks in a stationary situation, the apparent difference between the two clock rates will be
$$ \frac{\Delta\nu}{\nu} = \frac{\Phi_{\rm N}(2) - \Phi_{\rm N}(1)}{c^2} $$
at lowest order. This is the so-called universality of the gravitational redshift.
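As an order-of-magnitude illustration of this redshift formula (with rounded values for $G$, $c$ and the Earth's parameters; the 10 km altitude difference is an arbitrary choice):

```python
# Gravitational redshift between two identical clocks at different heights:
# Delta nu / nu = [Phi_N(2) - Phi_N(1)] / c^2 in the weak-field limit.
G, c = 6.674e-11, 2.998e8
M_earth, R_earth = 5.972e24, 6.371e6   # kg, m (rounded)

def phi(r):
    """Newtonian potential of the Earth at radius r."""
    return -G * M_earth / r

h = 1.0e4  # clock 2 raised 10 km above clock 1 (illustrative choice)
shift = (phi(R_earth + h) - phi(R_earth)) / c**2
print(shift)   # ~ +1.1e-12: the elevated clock ticks faster
```

The key point tested by local position invariance is that this shift is the same whatever the internal structure of the clocks being compared.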
The assumption of a metric coupling is actually well tested in the Solar system:

First, it implies that all non-gravitational constants are spacetime independent, which has been tested to very high accuracy in many physical systems and for various fundamental constants; this is the subject of this review.

Third, the universality of free fall can be tested by comparing the accelerations of two test bodies in an external gravitational field. The parameter $\eta$, defined as
(5) $\displaystyle \eta_{12} \equiv 2\,\frac{|\vec{a}_1 - \vec{a}_2|}{|\vec{a}_1 + \vec{a}_2|},$
can be constrained experimentally, e.g. in the laboratory by comparing the accelerations of a beryllium and a copper mass in the Earth's gravitational field [4], to get
(6) Similarly, the comparison of Earth-core-like and Moon-mantle-like bodies gave [34]
(7) The Lunar laser ranging experiment [540], which compares the relative accelerations of the Earth and the Moon in the gravitational field of the Sun, also sets the constraint
(8) Note that since the core represents only 1/3 of the mass of the Earth, and since the Earth's mantle has the same composition as the Moon (and thus shall fall in the same way), one loses a factor of 3, so that this constraint is actually similar to the one obtained in the lab. Further constraints are summarized in Table 3. The latter constraint also contains some contribution from the gravitational binding energy and thus includes the strong equivalence principle. When the laboratory result of Ref. [34] is combined with the LLR results of Refs. [539] and [362], one gets constraints on the strong equivalence principle parameter of, respectively,

Fourth, the Einstein effect (or gravitational redshift) has been measured [513].
We can conclude that the hypothesis of metric coupling is extremely welltested
in the Solar system.
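The Eötvös parameter used in these universality-of-free-fall tests is simple to evaluate; the accelerations below are hypothetical, chosen only to show how a fractional difference between two test bodies maps onto $\eta$:

```python
# eta = 2 |a1 - a2| / |a1 + a2| for two test bodies in the same external field.
a1 = 9.80665                     # m/s^2, body 1 (illustrative value)
a2 = a1 * (1 + 1.0e-13)          # body 2 with a hypothetical 1e-13 fractional offset

eta = 2 * abs(a1 - a2) / abs(a1 + a2)
print(eta)                       # ~ 1e-13
```

Since both accelerations are nearly equal, $\eta$ is essentially the fractional difference itself, which is why experiments quote it directly.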
The second building block of general relativity is related to the dynamics of the gravitational sector, assumed to be dictated by the Einstein–Hilbert action
(9) $\displaystyle S_{\rm grav} = \frac{c^3}{16\pi G}\int \sqrt{-g_*}\,R_*\,{\rm d}^4x.$
This defines the dynamics of a massless spin-2 field $g^*_{\mu\nu}$, called the Einstein metric. General relativity then assumes that both metrics coincide, $g_{\mu\nu} = g^*_{\mu\nu}$ (which is related to the strong equivalence principle), but it is possible to design theories in which this is not the case (see the example of scalar–tensor theories below, § 5.1.1), so that general relativity is one out of a large family of metric theories.
The variation of the total action with respect to the metric yields the Einstein equations
(10) $\displaystyle R_{\mu\nu} - \frac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu},$
where $T_{\mu\nu}$ is the matter stress-energy tensor. The coefficient $8\pi G/c^4$ is determined by the weak-field limit of the theory, which should reproduce the Newtonian predictions.
The dynamics of general relativity can be tested in the Solar system by using the parameterized post-Newtonian (PPN) formalism. It is a general formalism that introduces 10 phenomenological parameters to describe any possible deviation from general relativity at first post-Newtonian order [536, 537] (see also Ref. [58] for a review of higher orders). The formalism assumes that gravity is described by a metric and that it does not involve any characteristic scale. In its simplest form, it reduces to the two Eddington parameters entering the Schwarzschild metric in isotropic coordinates,
$$ g_{00} = -1 + \frac{2Gm}{rc^2} - 2\beta^{\rm PPN}\left(\frac{Gm}{rc^2}\right)^2, \qquad g_{ij} = \left(1 + 2\gamma^{\rm PPN}\frac{Gm}{rc^2}\right)\delta_{ij}. $$
Indeed, general relativity predicts $\beta^{\rm PPN} = \gamma^{\rm PPN} = 1$.
These two phenomenological parameters are constrained (1) by the shift of the Mercury perihelion [452], (2) by the Lunar laser ranging experiments [540] and (3) by the deflection of electromagnetic signals, which is controlled by $\gamma^{\rm PPN}$. For instance, very long baseline interferometry [455] and the measurement of the time-delay variation to the Cassini spacecraft [51] both set tight bounds on $\gamma^{\rm PPN} - 1$.
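The role of $\gamma^{\rm PPN}$ in light deflection can be made concrete with the standard grazing-incidence formula $\delta\theta = \frac{1+\gamma^{\rm PPN}}{2}\,\frac{4GM_\odot}{c^2 b}$; the solar values below are assumed, rounded inputs:

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8   # kg, m (rounded)

def deflection_arcsec(gamma, b=R_sun):
    """Deflection of a light ray with impact parameter b, in arcseconds."""
    delta = (1 + gamma) / 2 * 4 * G * M_sun / (c**2 * b)
    return math.degrees(delta) * 3600

print(deflection_arcsec(1.0))   # GR value (gamma = 1): ~1.75 arcsec
```

A fractional change of $\gamma^{\rm PPN}$ shifts the deflection by half that fraction, which is how interferometric measurements translate into bounds on $\gamma^{\rm PPN}-1$.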
The PPN formalism does not allow one to test finite-range effects that could be caused, e.g., by a massive degree of freedom. In that case one expects a Yukawa-type deviation from the Newtonian potential,
$$ \Phi_{\rm N} = \frac{Gm}{r}\left(1 + \alpha\,{\rm e}^{-r/\lambda}\right), $$
that can be probed by "fifth force" experimental searches;
$\lambda$ characterizes the range of the Yukawa deviation and
$\alpha$ its strength. The constraints on $(\lambda, \alpha)$
are summarized in Ref. [257], which
typically shows that $\alpha$ must be small on scales ranging from the
millimetre to the Solar-system size.
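A minimal sketch of the Yukawa-modified potential, with illustrative (not experimental) strength and range, showing that the fractional deviation saturates at $\alpha$ well inside the range $\lambda$ and is exponentially suppressed outside it:

```python
import math

G = 6.674e-11

def phi_yukawa(r, m, alpha_Y, lam):
    """Newtonian potential with a Yukawa correction of strength alpha_Y, range lam."""
    return -G * m / r * (1 + alpha_Y * math.exp(-r / lam))

m = 1.0                      # source mass [kg], arbitrary
lam, alpha_Y = 1.0, 1e-3     # hypothetical range [m] and strength

# Fractional deviation from pure Newton at short and long distances:
dev_short = phi_yukawa(0.01, m, alpha_Y, lam) / (-G * m / 0.01) - 1
dev_long  = phi_yukawa(100.0, m, alpha_Y, lam) / (-G * m / 100.0) - 1
print(dev_short, dev_long)   # ~ 1e-3 and ~ 0
```

This is why fifth-force searches must scan many length scales: an experiment at $r \gg \lambda$ is blind to the deviation no matter how large $\alpha$ is.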
General relativity is also tested with pulsars [124, 190] and in the strong-field regime [419]. For more details we refer to Refs. [126, 491, 536, 537]. Needless to say, any extension of general relativity has to pass these constraints. However, deviations from general relativity can be larger in the past, as we shall see, which makes cosmology an interesting physical system for extending these constraints.
2.2.2 Varying constants and the universality of free fall
As the previous description shows, the constancy of the fundamental constants and the universality of free fall are two pillars of the equivalence principle. Dicke [151] realized that they are actually not independent, and that if the coupling constants are spatially dependent, then this will induce a violation of the universality of free fall.
The connection lies in the fact that the mass of any composite body, starting e.g. from nuclei, includes the mass of the elementary particles that constitute it (this means that it will depend on the Yukawa couplings and on the Higgs-sector parameters), but also a contribution arising from the binding energies of the different interactions (i.e. strong, weak and electromagnetic), as well as a gravitational one for massive bodies. Thus the mass of any body is a complicated function of all the constants.
It follows that the action for a point particle is no longer given by Eq. (2) but by
(11) $\displaystyle S_{\rm matter} = -\int m_A(c_j)\,c\sqrt{-g_{\mu\nu}(x)\,v^\mu v^\nu}\,{\rm d}t,$
where $c_j$ is a list of constants including $\alpha$ but also many others, and where the index $A$ in $m_A$ recalls that the dependence on these constants is a priori different for bodies of different chemical composition. The variation of this action gives the equation of motion
(12) $\displaystyle u^\nu \nabla_\nu u^\mu = -\left(g^{\mu\sigma} + u^\mu u^\sigma\right)\partial_\sigma \ln m_A(c_j).$
It follows that a test body will not enjoy geodesic motion. In the Newtonian limit $g_{00} = -1 + 2\Phi_{\rm N}/c^2$, and at first order in $v/c$, the equation of motion of a test particle reduces to
(13) $\displaystyle \vec{a} = \vec{g}_{\rm N} + \delta\vec{a}_A,$
so that in the slow-velocity (and slow-variation) limit the anomalous term reduces to
$$ \delta\vec{a}_A = -c^2 \sum_j f_{A,j}\,\nabla \ln c_j, $$
where we have introduced the sensitivity of the mass with respect to the variation of the constant $c_j$ by
(14) $\displaystyle f_{A,j} \equiv \frac{\partial \ln m_A}{\partial \ln c_j}.$
This simple argument shows that if the constants depend on space and/or time, then there must exist an anomalous acceleration $\delta\vec{a}_A$ that depends on the chemical composition of the body $A$.
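Equations (13)–(14) can be turned into numbers; every input below (the sensitivities and the gradient) is hypothetical, chosen purely to show how a composition-dependent $f_{A,j}$ feeds into a differential acceleration and hence into the Eötvös parameter:

```python
# delta_a = -c^2 * f_{A,j} * grad(ln c_j): anomalous acceleration from a
# spatial gradient of a "constant". All input numbers are hypothetical.
c = 2.998e8
f_A, f_B = 2.0e-3, 2.1e-3    # sensitivities d(ln m)/d(ln c_j) of bodies A and B
grad_ln_cj = 1.0e-25         # fractional spatial gradient [1/m], hypothetical

da_A = -c**2 * f_A * grad_ln_cj    # anomalous accelerations [m/s^2]
da_B = -c**2 * f_B * grad_ln_cj

# Differential acceleration in the Earth's field g ~ 9.81 m/s^2 gives the
# induced Eotvos parameter (order of magnitude):
eta = abs(da_A - da_B) / 9.81
print(da_A, eta)
```

The point of the sketch is that $\eta$ is driven by the *difference* of the sensitivities, so universality-of-free-fall experiments probe the composition dependence rather than the overall size of $f_{A,j}$.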
This anomalous acceleration is generated by the change in the (electromagnetic, gravitational, …) binding energies [151, 245, 382], but also in the Yukawa couplings and in the Higgs-sector parameters, so that the dependencies are a priori composition-dependent. As a consequence, any variation of the fundamental constants will entail a violation of the universality of free fall: the total mass of the body being space dependent, an anomalous force appears if energy is to be conserved. The variation of the constants, deviations from general relativity and violations of the weak equivalence principle are in general expected together.
On the other hand, the composition dependence of $\delta\vec{a}_A$, and thus
of $\eta$, can be used to optimize the choice of materials
for the experiments testing the equivalence principle [118],
but also to distinguish between several models if data from
the universality of free fall and from atomic clocks are combined [143].
From a theoretical point of view, the computation of $\eta$ requires the determination of the coefficients $f_{A,j}$. This can be achieved in two steps, by first relating the new degrees of freedom of the theory to the variation of the fundamental constants and then relating them to the variation of the masses. As we shall see in § 5, the first issue is very model-dependent, while the second is especially difficult, particularly when one wants to understand the effect of the quark masses, since it is related to the intricate structure of QCD and its role in low-energy nuclear reactions.
As an example, the mass of a nucleus of charge $Z$ and atomic number $A$ can be expressed as
$$ m(A,Z) = Z m_{\rm p} + (A-Z) m_{\rm n} + E_{\rm S} + E_{\rm EM}, $$
where $E_{\rm S}$ and $E_{\rm EM}$ are respectively the strong and electromagnetic contributions to the binding energy. The Bethe–Weizsäcker formula allows one to estimate the latter as
(15) $\displaystyle E_{\rm EM} = 98.25\,\frac{Z(Z-1)}{A^{1/3}}\,\alpha\ {\rm MeV}.$
If we decompose the proton and neutron masses as in Ref. [231], with a pure QCD approximation of the nucleon mass plus quark-mass and electromagnetic corrections, it reduces to
(16)
with $N = A - Z$ the neutron number. For an atom, one would have to add the contribution of the electrons, $Z m_{\rm e}$. This form depends on strong, weak and electromagnetic quantities. The numerical coefficients are given explicitly in Ref. [231]:
(17)
Such estimates were used in the first analyses of the relation between the variation of the constants and the universality of free fall [130, 165], but the dependence on the quark masses is still not well understood, and we refer to Refs. [119, 156, 158, 207] for some attempts to refine this description.
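Since $E_{\rm EM}$ is linear in $\alpha$, the contribution of Eq. (15) to the sensitivity $f_{A,\alpha} = \partial\ln m/\partial\ln\alpha$ is just $E_{\rm EM}/mc^2$. A short estimate (approximating $mc^2$ by $A$ atomic mass units, a rough assumption) for the beryllium/copper pair used in laboratory tests:

```python
# Electromagnetic binding via the Bethe-Weizsacker Coulomb term:
# E_EM = 98.25 * alpha * Z(Z-1)/A^(1/3) MeV, linear in alpha, so its
# contribution to d(ln m)/d(ln alpha) is E_EM / (m c^2).
alpha = 7.297e-3
u = 931.494                 # MeV per atomic mass unit, approximates m c^2 / A

def f_alpha(Z, A):
    E_EM = 98.25 * alpha * Z * (Z - 1) / A**(1.0 / 3.0)   # MeV
    return E_EM / (A * u)

print(f_alpha(29, 63))      # copper:    ~2.5e-3
print(f_alpha(4, 9))        # beryllium: ~5e-4  -> strong composition dependence
```

The factor of a few between the two sensitivities is exactly the composition dependence that makes a Be/Cu pair a good probe of a varying $\alpha$.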
For macroscopic bodies, the mass also has a negative contribution
(18) $\displaystyle \frac{\Delta m_{\rm grav}}{m} = -\frac{3}{5}\frac{G m}{R c^2}$
from the gravitational binding energy (for a body approximated by a uniform sphere of radius $R$). As a conclusion, from
Eqs. (16) and (18), we expect the mass to depend on all the
coupling constants (gravitational, electromagnetic, strong and weak).
We shall discuss this issue in more detail in § 5.3.2.
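For Eq. (18), the uniform-sphere estimate gives tiny but body-dependent fractions; rounded Earth and Moon parameters are assumed below:

```python
# Fractional gravitational binding-energy contribution to the mass:
# Delta m / m = -(3/5) G m / (R c^2) for a uniform sphere of radius R.
G, c = 6.674e-11, 2.998e8

def grav_fraction(m, R):
    return -3 * G * m / (5 * R * c**2)

print(grav_fraction(5.972e24, 6.371e6))   # Earth: ~ -4.2e-10
print(grav_fraction(7.342e22, 1.737e6))   # Moon:  ~ -1.9e-11
```

The order-of-magnitude difference between the Earth and the Moon is what allows Lunar laser ranging to probe a composition dependence in the *gravitational* binding energy, i.e. the strong equivalence principle.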
2.2.3 Relations with cosmology
Most constraints on the time variation of the fundamental constants are not local
but are related to physical systems at various epochs of the evolution of the universe.
It follows that the comparison of different constraints requires a full cosmological
model.
Our current cosmological model is known as ΛCDM (see Ref. [404] for a detailed description). It is important to recall that its construction relies on 4 main hypotheses: (H1) a theory of gravity; (H2) a description of the matter components contained in the Universe and of their non-gravitational interactions; (H3) symmetry hypotheses; and (H4) a hypothesis on the global structure, i.e. the topology, of the Universe. These hypotheses are not on the same footing, since H1 and H2 refer to the physical theories. They are, however, not sufficient to solve the field equations, and we must make an assumption on the symmetries (H3) of the solutions describing our Universe on large scales, while H4 is an assumption on some global properties of these cosmological solutions with the same local geometry. The last two hypotheses are nonetheless unavoidable, because the knowledge of the fundamental theories is not sufficient to construct a cosmological model [499].
The ΛCDM model assumes that gravity is described by general relativity (H1), and that the Universe contains the fields of the standard model of particle physics plus some dark matter and a cosmological constant, the latter two having no physical explanation at the moment. It also deeply involves the Copernican principle as a symmetry hypothesis (H3), without which the Einstein equations usually cannot be solved, and most often assumes that the spatial sections are simply connected (H4). H2 and H3 imply that the description of standard matter reduces to a mixture of a pressureless perfect fluid and a radiation perfect fluid. This model is compatible with all astronomical data, which roughly indicate that the Universe is spatially flat and dominated today by dark matter and a cosmological constant. Cosmology thus roughly imposes that $|\Lambda| \lesssim H_0^2$.
Hence, the analysis of the cosmological dynamics of the universe and of its large-scale structures requires the introduction of a new constant, the cosmological constant $\Lambda$, associated with the recent acceleration of the cosmic expansion. It can be introduced by modifying the Einstein–Hilbert action to
$$ S_{\rm grav} = \frac{c^3}{16\pi G}\int \sqrt{-g}\,(R - 2\Lambda)\,{\rm d}^4x. $$
Note, however, that the observed value is disproportionately small compared to the natural scale fixed by the Planck length,
(19) $\displaystyle \rho_\Lambda \sim 10^{-120}\,\rho_{\rm P}, \qquad \rho_{\rm P} = \frac{c^7}{\hbar G^2}.$
Classically, this value poses no problem, but it was pointed out that at the quantum level the vacuum energy should scale as $M^4$, where $M$ is some energy scale of high-energy physics. In such a case, there is a discrepancy of 60 to 120 orders of magnitude between the cosmological conclusions and the theoretical expectation. This is the cosmological constant problem [524].
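The mismatch of Eq. (19) can be reproduced in a few lines; the constants are rounded, and the observed dark-energy density $\sim 6\times10^{-10}\,{\rm J\,m^{-3}}$ is a rough input:

```python
import math

# Order-of-magnitude statement of the cosmological constant problem:
# observed vacuum energy density vs the Planck energy density.
hbar, c, G = 1.055e-34, 2.998e8, 6.674e-11

rho_obs = 6e-10                      # J/m^3, observed dark-energy density (rough)
E_P = math.sqrt(hbar * c**5 / G)     # Planck energy [J]
l_P = math.sqrt(hbar * G / c**3)     # Planck length [m]
rho_P = E_P / l_P**3                 # Planck energy density [J/m^3]

print(math.log10(rho_P / rho_obs))   # ~ 123 orders of magnitude
```

Choosing a lower cutoff $M$ (e.g. the electroweak scale instead of the Planck scale) shrinks the gap, which is the origin of the quoted 60-to-120-order range.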
Parameter  Symbol  Value

Reduced Hubble constant  $h$  
Baryon-to-photon ratio  $\eta$  
Photon density  $\Omega_\gamma h^2$  
Dark matter density  $\Omega_{\rm CDM} h^2$  
Cosmological constant  $\Omega_\Lambda$  
Spatial curvature  $\Omega_K$  
Scalar modes amplitude  $Q$  
Scalar spectral index  $n_{\rm s}$  
Neutrino density  $\Omega_\nu h^2$  
Dark energy equation of state  $w$  
Scalar running spectral index  $\alpha_{\rm s}$  
Tensor-to-scalar ratio  T/S  
Tensor spectral index  $n_{\rm T}$  
Tensor running spectral index  ?  
Baryon density  $\Omega_{\rm b} h^2$ 
Two solutions are considered. Either one accepts such a constant and such a fine-tuning and tries to explain it on anthropic grounds; or, in the same spirit as Dirac, one interprets it as an indication that our set of cosmological hypotheses has to be extended, by either abandoning the Copernican principle [505] or modifying the local physical laws (either gravity or the matter sector). The ways to introduce such new physical degrees of freedom were classified in Ref. [497]. In the latter approach, the tests of the constancy of the fundamental constants are central, since they can reveal the coupling of this new degree of freedom to the standard matter fields. Note, however, that the cosmological data still favor a pure cosmological constant.
Among all the proposals, quintessence involves a scalar field rolling down a runaway potential, hence acting as a fluid with an effective equation of state in the range $-1 \leq w < -1/3$ if the field is minimally coupled. It was proposed that the quintessence field is also the dilaton [230, 428, 494]. The same scalar field then drives the time variation of the cosmological constant and of the gravitational constant, and it also has the property of admitting tracking solutions [494]. One of the underlying motivations for replacing the cosmological constant by a scalar field comes from superstring models, in which any dimensionful parameter is expressed in terms of the string mass scale and the vacuum expectation value of a scalar field. However, the requirement of slow roll (mandatory to have a negative pressure) and the fact that the quintessence field dominates today imply, if the minimum of the potential is zero, that it is very light, roughly of order $10^{-33}\,{\rm eV}$ [80].
Such a light field can lead to observable violations of the universality of free fall if it is non-universally coupled to the matter fields. Carroll [80] considered the effect of the coupling of this very light quintessence field to ordinary matter via a coupling to the electromagnetic field of the form $\phi F_{\mu\nu}\widetilde{F}^{\mu\nu}$. Chiba and Kohri [95] also argued that an ultralight quintessence field induces a time variation of the coupling constant if it is coupled to ordinary matter, and studied a coupling of the form $\phi F_{\mu\nu}F^{\mu\nu}$, as e.g. expected from Kaluza–Klein theories (see below). This was generalized to quintessence models with couplings of the form $B_F(\phi)F_{\mu\nu}F^{\mu\nu}$ [10, 111, 161, 312, 313, 349, 399, 528], and then to models of a runaway dilaton [135, 136] inspired by string theory (see § 5.4.1). The evolution of the scalar field drives both the acceleration of the universe at late times and the variation of the constants. As pointed out in Refs. [95, 165, 527], such models are strongly constrained by the bound on the universality of free fall (see § 6.3).
We thus have two lines of investigation:

The field driving the time variation of the fundamental constants does not explain the acceleration of the universe (either it does not dominate the matter content today or its equation of state is not negative enough). In such a case, the variation of the constants is disconnected from the dark energy problem. Cosmology allows one to determine the dynamics of this field during the whole history of the universe, and thus to compare local constraints and cosmological constraints. An example is given by scalar–tensor theories (see § 5.1.1), for which one can compare, e.g., primordial nucleosynthesis to local constraints [129]. In such a situation, one should however take into account the effect of the variation of the constants on the astrophysical observations, since it can affect local physical processes and bias, e.g., the luminosity of supernovae, indirectly modifying the luminosity distance–redshift relation derived from these observations [31, 429].

The field driving the time variation of the fundamental constants is also responsible for the acceleration of the universe. It follows that the dynamics of the universe, the level of variation of the constants and the other deviations from general relativity are connected [345], so that the study of the variation of the constants can improve the reconstruction of the equation of state of the dark energy [20, 161, 385, 399].
In conclusion, cosmology seems to require a new constant. It also provides a link between microphysics and cosmology, as foreseen by Dirac. The tests of fundamental constants can discriminate between various explanations of the acceleration of the universe. When a model is specified, cosmology also allows one to set stringent constraints, since it relates observables that could not otherwise be compared.
3 Experimental and observational constraints
This section focuses on the experimental and observational constraints on the non-gravitational constants, that is, assuming $G$ remains constant.
The various physical systems that have been considered can be classified in many ways. We can classify them according to their look-back time and, more precisely, their spacetime position relative to our actual position. This is summarized in Fig. 1. Indeed, higher-redshift systems offer the possibility to set constraints on a larger time scale, but at the expense of usually involving other parameters, such as the cosmological parameters. This is in particular the case for the cosmic microwave background and primordial nucleosynthesis. The systems can also be classified in terms of the physics they involve. For instance, atomic clocks, quasar absorption spectra and the cosmic microwave background require only quantum electrodynamics to draw the primary constraints, while the Oklo phenomenon, meteorite dating and nucleosynthesis require nuclear physics.
System  Observable  Primary constraints  Other hypotheses 

Atomic clock    
Oklo phenomenon  isotopic ratio  geophysical model  
Meteorite dating  isotopic ratio    
Quasar spectra  atomic spectra  cloud physical properties  
Stellar physics  element abundances  stellar model  
21 cm  cosmological model  
CMB  cosmological model  
BBN  light element abundances  cosmological model 
For any system, setting constraints goes through several steps. First, we have some observable quantities from which we can draw constraints on primary constants, which may not be fundamental constants (e.g. the BBN parameters, the lifetime of β-decayers, …). These primary parameters must then be related to some fundamental constants, such as masses and couplings. In a last step, the number of constants can be reduced by relating them within some unification schemes. Indeed, each step requires a specific modeling and hypotheses and has its own limitations. This is summarized in Table 5.
3.1 Atomic clocks
3.1.1 Atomic spectra and constants
The laboratory constraints on the time variation of fundamental constants are obtained by comparing the long-term behavior of several oscillators, and rely on frequency measurements. The atomic transitions have various dependencies on the fundamental constants. For instance, for the hydrogen atom, the gross, fine and hyperfine structures are roughly given by
$$ \nu \propto cR_\infty, \qquad \alpha^2 cR_\infty, \qquad g_{\rm p}\,\bar{\mu}\,\alpha^2 cR_\infty, $$
respectively, where the Rydberg constant $R_\infty$ sets the dimension, $g_{\rm p}$ is the proton gyromagnetic factor and $\bar{\mu} \equiv m_{\rm e}/m_{\rm p}$. In the non-relativistic approximation, the transitions of all atoms have similar dependencies, but two effects have to be taken into account. First, the hyperfine structures involve a gyromagnetic factor $g_i$ (related to the nuclear magnetic moment by $\mu_i = g_i\,\mu_{\rm N}$, with $\mu_{\rm N} = e\hbar/2m_{\rm p}$), which is different for each nucleus. Second, relativistic corrections (including the Casimir contribution), which also depend on each atom (but also on the type of the transition), can be included through a multiplicative function $F(\alpha Z)$. It has a strong dependence on the atomic number $Z$, which can be illustrated in the case of alkali atoms, for which the correction grows rapidly with $\alpha Z$.
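The hierarchy of these three scalings can be checked numerically; numerical prefactors of order unity are deliberately dropped, so only orders of magnitude are meaningful:

```python
# Hydrogen transition scales: gross ~ c R_inf, fine ~ alpha^2 c R_inf,
# hyperfine ~ g_p * (m_e/m_p) * alpha^2 * c R_inf (O(1) prefactors omitted).
alpha = 7.297e-3
g_p = 5.586                # proton gyromagnetic factor
mu_bar = 1.0 / 1836.15     # electron-to-proton mass ratio
cR = 3.290e15              # Rydberg frequency c R_inf [Hz]

nu_gross = cR
nu_fine = alpha**2 * cR
nu_hfs = g_p * mu_bar * alpha**2 * cR
print(nu_gross, nu_fine, nu_hfs)   # ~3.3e15 Hz, ~1.8e11 Hz, ~5e8 Hz
```

Each step down the hierarchy brings in an extra constant ($\alpha$, then $g_{\rm p}\bar\mu$), which is precisely why comparing transitions of different types isolates different combinations of constants.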
The developments of highly accurate atomic clocks using different
transitions in different atoms offer the possibility to test a
variation of various combinations of the fundamental
constants.
It follows that, at the lowest level of description, we can interpret all atomic-clock results in terms of the g-factors of each atom, $g_i$, the electron-to-proton mass ratio $\bar{\mu}$ and the fine-structure constant $\alpha$. We shall thus parameterize the hyperfine and fine-structure frequencies as follows.
The hyperfine frequency in a given electronic state of an alkali-like atom, such as Cs, Rb or Hg, is
(20) $\displaystyle \nu_{\rm hfs} \simeq R_\infty c \times A_{\rm hfs} \times g_i \times \alpha^2 \times \bar{\mu} \times F_{\rm hfs}(\alpha),$
where $g_i$ is the nuclear $g$-factor, $A_{\rm hfs}$ is a numerical factor depending on each particular atom, and we have set $\bar{\mu} \equiv m_{\rm e}/m_{\rm p}$. Similarly, the frequency of an electronic transition is well approximated by
(21) $\displaystyle \nu_{\rm elec} \simeq R_\infty c \times A_{\rm elec} \times F_{\rm elec}(Z, \alpha),$
where, as above, $A_{\rm elec}$ is a numerical factor depending on each particular atom and $F_{\rm elec}$ is the function accounting for relativistic effects, spin–orbit couplings and many-body effects. Even though an electronic transition should also include a contribution from the hyperfine interaction, it is generally only a small fraction of the transition energy and thus should not carry any significant sensitivity to a variation of the fundamental constants.
The importance of the relativistic corrections was probably first emphasized in Ref. [417], and their computation through relativistic many-body calculations was carried out for many transitions in Refs. [169, 173, 174, 198]. They can be characterized by introducing the sensitivity of the relativistic factors to a variation of $\alpha$,
(22) $\displaystyle \kappa_\alpha \equiv \frac{\partial \ln F}{\partial \ln \alpha}.$
Table 6 summarizes the values of some of them, as computed
in Refs. [174, 209]. Indeed, a reliable
knowledge of these coefficients at the 1% to 10% level is
required to deduce limits on a possible variation of the
constants. The interpretation of the spectra in this context
relies, from a theoretical point of view, only on
quantum electrodynamics (QED), a theory which is well tested
experimentally [272], so that we can safely obtain such
constraints, still keeping in mind that the
computation of the sensitivity factors requires numerical many-body
simulations.
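Combining Eqs. (21) and (22), a drift in $\alpha$ shows up in the ratio of two electronic clock frequencies as ${\rm d}\ln(\nu_1/\nu_2)/{\rm d}t = (\kappa_1-\kappa_2)\,{\rm d}\ln\alpha/{\rm d}t$. A sketch with literature-style sensitivities (the $\kappa$ values and the drift rate below are assumptions for illustration, not measurements):

```python
# Sensitivity of a two-clock frequency ratio to a drift of alpha.
kappa_Al = 0.008    # Al+ electronic transition (small sensitivity, assumed)
kappa_Hg = -3.2     # Hg+ transition (large negative sensitivity, assumed)

dln_alpha_dt = 1.0e-17                        # hypothetical drift rate [1/yr]
dln_ratio_dt = (kappa_Al - kappa_Hg) * dln_alpha_dt
print(dln_ratio_dt)   # ~3.2e-17 /yr: opposite-sign kappas amplify the signal
```

This is why clock pairs with widely separated (ideally opposite-sign) sensitivities are the preferred combinations: the measured ratio drift divided by $\kappa_1-\kappa_2$ directly bounds ${\rm d}\ln\alpha/{\rm d}t$.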
Atom  Transition  Sensitivity $\kappa_\alpha$ 

H  0.00  
Rb  hf  0.34 
Cs  0.83  
Yb  0.9  
Hg  
Sr  0.06  
Al  0.008 
From an experimental point of view, various combinations of clocks have been compared. It is important to analyze as many species as possible in order to rule out species-dependent systematic effects. Most experiments are based on a frequency comparison with caesium clocks. The hyperfine splitting frequency between the $F=3$ and $F=4$ levels of its ground state, at 9.192 GHz, has been used for the definition of the second since 1967. One limiting effect, which contributes mostly to the systematic uncertainty, is the frequency shift due to cold collisions between the atoms. On this particular point, clocks based on the hyperfine frequency of the ground state of rubidium, at 6.835 GHz, are more favorable.
3.1.2 Experimental constraints
Clock 1  Clock 2  Constraint (yr)  Constants dependence  Reference 

Rb  Cs  [344]  
Rb  Cs  [57]  
H  Cs  [197]  
Hg  Cs  [56]  
Hg  Cs  [215]  
Yb  Cs  [403]  
Yb  Cs  [402]  
Sr  Cs  [59]  
Dy  Dy  [99]  
Al  Hg  [434] 
We present the latest results that have been obtained, and refer to § III.B.2 of FCV [495] for earlier studies. They all rely on the development of new atomic clocks, with the primary goal of defining better frequency standards.

Rubidium: The comparison of the hyperfine frequencies of rubidium and caesium in their electronic ground states between 1998 and 2003, with high accuracy, leads to the constraint [344]
(23) With one more year of experiment, the constraint dropped to [57]
(24) From Eq. (20), and using the values of the sensitivities $\kappa_\alpha$, we deduce that this comparison constrains

Atomic hydrogen: The 1s–2s transition in atomic hydrogen was compared to the ground-state hyperfine splitting of caesium [197] in 1999 and 2003, setting an upper limit on the variation of