A Proposal to Upgrade UTC (rather than to degrade it)

Rob Seaman
National Optical Astronomy Observatory

The original version of this document appeared on 9 April 2001 as a text message to the Leap Second Discussion List, LEAPSECS@ROM.USNO.NAVY.MIL. The list archives are available by joining the LEAPSECS list.

Steve Allen of Lick Observatory has provided an excellent bibliography related to UTC and leap second issues, along with an initial attempt to characterize the costs to astronomical observatories of such a change.


In the Fall of 1999, when the rest of the world was thinking about the looming prospect of civil unrest resulting from undiscovered Y2K bugs, the precision timing community was beginning to discuss making a far more basic change to our civil time standards. A November 1999 article, in of all places GPS World, provided a good description of a problem that will affect Coordinated Universal Time at an accelerating pace over the next few centuries. At issue is the fact that the Earth's rotation is slowing as angular momentum is transferred to the Moon's orbit. This is not a new phenomenon - far from it - the day has been getting longer and the Moon's orbit wider for billions of years. What is new is that our (atomic) clocks have gotten good enough to make the detection of this slowing a (relatively) trivial affair.

Coordinated Universal Time (UTC in the international acronym) is the compromise timescale that was developed to both permit the world's clocks to continue to track the Earth's rotation, while also benefiting from the precision of International Atomic Time (TAI in the international acronym). UTC is always an integral number of seconds different from TAI, but every year or two a leap second is inserted to compensate for the slowing Earth (not precisely true, see below).
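
The bookkeeping just described - UTC differing from TAI by an integral number of seconds that grows with each leap second - can be sketched in a few lines. This is a minimal sketch: the table below is only a short excerpt of the published TAI-UTC offsets, and `tai_minus_utc` is a hypothetical helper name, not part of any standard library.

```python
from datetime import date

# Excerpt of the published TAI-UTC table: each entry gives the first date
# on which a new integral offset (in seconds) took effect. A complete
# implementation would carry the full IERS list.
LEAP_TABLE = [
    (date(1996, 1, 1), 30),
    (date(1997, 7, 1), 31),
    (date(1999, 1, 1), 32),
]

def tai_minus_utc(d):
    """Return the integral offset TAI - UTC in effect on date d."""
    offset = LEAP_TABLE[0][1]
    for start, value in LEAP_TABLE:
        if d >= start:
            offset = value
    return offset

print(tai_minus_utc(date(2000, 6, 1)))  # 32
```

The essential point is that the offset is always a whole number of seconds; UTC never drifts continuously against TAI, it steps.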

Thus UTC conveys information about not just one time scale (the spinning Earth), but two (adding the international ensemble of atomic clocks). That makes UTC a good choice to serve as a civil time standard for the nations of the world. It is not the only choice. Many - if not most - nations have never formally adopted UTC or any other time standard. In the United Kingdom, for example, the standard of time remains familiar old Greenwich Mean Time (GMT). In fact, the original UTC standard explicitly stated that:

"GMT may be regarded as the general equivalent of UT."
Nevertheless, it appears that a committee of the International Telecommunication Union - Radiocommunications is prepared to propose at a Colloquium on the UTC Timescale (official program) in Torino, Italy at the end of May 2003 that the UTC standard be detached from the notion of the spinning Earth. In April of 2002, the moderator of the LEAPSECS mailing list, Demetrios Matsakis, announced:
"The ITU/R Special Rapporteur Group met last week in Paris. This is the group whose charter is to advise the International Telecommunications Union (ITU) on whether UTC should be redefined. Their charter is to complete their report by this October, but the date may slip a bit (see below). I will forward any formal announcement by the group when I receive it, but the following are the main conclusions, more or less:

1. The group voted to exclude from consideration all redefinition options except freezing the number of leap seconds at some so-far unspecified date. This means they won't try to change the definition of the second from 9,192,631,770 periods of a certain hyperfine transition of the undisturbed cesium atom, replace leap seconds by leap hours, etc."

By March of 2003, this position had evolved into (also Matsakis):
At the March 20, 2003 meeting of the Timing Subcommittee of the Civil GPS Interface Committee, Ron Beard gave an update on the ITU/R Special Rapporteur Group. The viewgraphs should shortly be publicly available at http://www.navcen.uscg.gov/cgsic/meetings/default.htm

Of course, the ITU/R Special Rapporteur Group has not come to a decision on what to recommend, and recommending no change at all is a distinct possibility. However, it was indicated that _if_ they decide to recommend a change then the option they would probably recommend is to replace leap seconds by leap hours. It was noted that this could be accommodated by skipping a daylight time switch. Any leap hour would occur many centuries from now; according to Nelson et al.'s Figure 7 (Metrologia, 2001, 38, 509-529), we should only accumulate 140 seconds by 2100.

It is heartening that it is a "distinct possibility" that they will recommend no change to current procedures; however, one might suggest that if support for a change is so unsteady, the debate over this change should never have reached this point. It is also heartening that the ITU/R SRG (a rather obscure body to be debating such an issue) has now recognized the necessity of proposing some future mechanism to avoid turning night into day. However, in a practical sense, there is no difference between issuing a leap hour every 1,000 years and abandoning the rotation of the Earth entirely as the basis for civil time.

It appears to this writer that the precision timing community has followed an extremely ad hoc process while considering this issue, and that an artificial Y2K-like crisis is being created on a very abrupt timescale. The current standard is viable for decades, if not centuries, unchanged.

What's the hurry?


The November 1999 GPS World article by McCarthy and Klepczynski offers five options for near-to-mid future UTC policies:

  • Continue Current Procedure
  • Discontinue Leap Seconds
  • Change the Tolerance for UT1-UTC
  • Redefine the Second
  • Periodic Insertion of Leap Seconds

Virtually all of the discussion to date has centered on the single choice of discontinuing leap seconds. Naively this appears to "solve" the largest number of "emerging problems", while intrinsically discomfiting only odd ducks such as astronomers and traditional sextant navigators. There is a significant price to pay in any change to civil time, but the argument goes that a similar price tag is attached to doing nothing - and that we simply can't continue our current procedures for much longer. (Where "much longer" is some hazy period of time from 5 to 500 years.)

In the past I've argued in various ways that we should not so easily dismiss the many subtle requirements our society places on civil time - on UTC, that is - to continue to track the rotation of the Earth. Let's invert the process and look at the current leap second scheduling algorithm itself. Perhaps in one of M&K's other options lies the cure.


What is it that we are really trying to do? A lot of technical jargon is disguising a very simple need - the need to keep two clocks synchronized. In general, to "synchronize our watches" there are four possibilities:

  • Reset watch A
  • Reset watch B
  • Adjust watch A's rate
  • Adjust watch B's rate

Or some combination of the above - and perhaps introduce higher order rate terms. In addition, if the rates continue to differ, any delta-t adjustments must be repeated on some regular or irregular schedule to match specified tolerances.

In our case watch A is Atomic time (TAI) and watch B is the Earth (call it watch "E") approximated by UT1. Our current procedure to synchronize TAI and UT1 - watches A and E, that is - is to reset watch E periodically using leap seconds. We measure UT1, but we distribute and use UTC. We can immediately reject a couple of the other options. The whole point of TAI is to remain the "best" time our species is capable of keeping. Resetting watch A is not an option. Similarly, adjusting watch A's rate, which is equivalent to M&K's "Redefine the Second", is obviously a proposal that would be denounced by every physical scientist and engineer on the planet. So our choice becomes:

  • Reset watch E
  • Adjust watch E's rate

Let's also dispense with the latter choice, but not before acknowledging our species' impact on the Earth. It may be absurd to suggest synchronizing Universal Time with Atomic Time by speeding up the Earth's rotation - but humanity's activities can indeed affect the Earth's rotation - for instance through the high latitude impoundment of water in reservoirs or the polar ice caps. (Or adversely by allowing global warming to melt those ice caps.) Returning to M&K's options, we can categorize them using the watch method:

  • Continue Current Procedure - reset watch E
  • Discontinue Leap Seconds - forget the whole thing
  • Change the Tolerance for UT1-UTC - reset E
  • Redefine the Second - adjust A's rate
  • Periodic Insertion of Leap Seconds - reset E

All of M&K's realistic options amount to exactly the same mechanism - except for the one that has received all the notice. Should we really just be throwing up our hands in dismay that we are incapable of doing this job correctly?


So, let's acknowledge that the folks who designed and implemented our current leap second system did indeed create an excellent mechanism. Let's assume that we will continue to use this excellent (if not perfect) mechanism in the future. Should we be considering only options to degrade the system? Or should we rather take a detailed look at ways of improving what is already good and serviceable into something even better?

In short - we (humanity that is) should continue to synchronize civil time to UT1, and we should do it right.

The precision timing community has been doing an excellent job and cannot be faulted for failing to provide detailed information beyond the call of duty. For instance, the USNO maintains a file of daily predictions and standard results for the Earth Orientation Service. Some interesting plots can be constructed from these data. Here is a graph of UT1-UTC for the last quarter century (labeled Figure 3).

Historical Leap Second Scheduling

The first thing to note is a bias toward positive values. A leap second is issued as soon as possible (presumably to provide "slack" later). The average DUT1 is around a +0.1s bias (~0.14s over the last decade). The other thing to note is how many days fall outside of the magic -0.5s to 0.5s window. Any values larger than 0.5s (absolute) represent dates that could have been better served by a different scheduling of leap seconds. About 15% of the days are thus poorly served.
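
The statistics quoted above are straightforward to reproduce in principle. Here is a minimal sketch - run on synthetic numbers, not the real USNO series, and with `dut1_stats` as a hypothetical helper name - of computing the mean DUT1 bias and the fraction of days falling outside the half-second window:

```python
def dut1_stats(dut1_series, window=0.5):
    """Mean bias and fraction of days with |DUT1| outside +/-window seconds."""
    n = len(dut1_series)
    mean = sum(dut1_series) / n
    outside = sum(1 for v in dut1_series if abs(v) > window)
    return mean, outside / n

# Synthetic example only: a series drifting linearly from +0.4 s to -0.6 s
# (built from integer milliseconds to avoid floating-point edge effects).
series = [(400 - i) / 1000.0 for i in range(1001)]
mean, frac = dut1_stats(series)
print(round(mean, 3), round(frac, 3))
```

Run against the real daily Earth Orientation series, the same two numbers would give the bias and outlier fraction cited in the text.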

The data also show that half of the leap seconds in the last decade have been in June and half in December. When there is talk about a 1 or 1.5 year cadence of leap seconds, what is really meant is a 6 month sampling rate. This is the Nyquist rate needed to fully sample the current leap second "waveform". (As many readers will know better than the author, the Nyquist theorem states that the minimum rate needed to properly sample a waveform containing frequencies up to a value "f" is 2*f. Thus the digital sampling rate of a CD is 44.1 kHz, though our ears respond only to frequencies up to about 20 kHz.) Another aspect of pursuing such a Nyquist analysis (or at least, analogy) is that it should be clear that to reliably schedule leap jumps (of however many seconds tolerance) every ten years, for instance, will still require the scheduling freedom to issue a leap jump every five years. It is the guarantee of "read my lips - no new leap seconds" that the squeaky wheels among the precision timing clients want - not the mere reality of it having happened in retrospect.
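
The sampling argument can be written down directly - a toy restatement, nothing more, with `max_sampling_interval_years` an illustrative name of my own:

```python
# If leap seconds can recur as often as once a year, the leap second
# "waveform" contains frequencies up to 1 cycle/year, so scheduling
# opportunities must come at least twice a year to avoid aliasing.
def max_sampling_interval_years(shortest_cadence_years):
    f_max = 1.0 / shortest_cadence_years  # highest frequency present
    f_sample = 2.0 * f_max                # Nyquist rate
    return 1.0 / f_sample

print(max_sampling_interval_years(1.0))  # 0.5, i.e. the six-month cadence
```

The same arithmetic gives the five-year scheduling freedom needed to guarantee a ten-year leap-jump cadence.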

The CCIR 460-4 standard that governs UTC requires only that "A positive or negative leap-second should be the last second of a UTC month." Only a preference, not a guarantee, is given to December and June. I presume the authors of the standard were familiar with Nyquist, too. In past discussions, some mention has been made of March and September as possible leap second scheduling candidates. This may suggest that there are folks "on the inside" thinking along the same lines as this proposal. It is striking how much room is already available in the standard to implement the facilities needed to improve the current UTC mechanism. Note that the standard is explicitly designed to transfer UT (UT1, that is) accurate to 0.1s. A tenth of a second is also considered an appropriate buffer for IERS against "unpredictable changes in the rate of rotation of the Earth". On the other hand, virtually all civil use revolves around the raw uncorrected UTC clock. Should we be focusing all of the discussion on the tenth-second effects but none on the whole seconds?

A large part of the document is also concerned with transmitting the DUT1 signal on top of the old radio system. The mechanism modifies 16 of the second markers: up to eight mark positive tenths of a second and another eight mark negative tenths. If the most obvious modification were made to the system to loosen the current 0.9s UT1-UTC tolerance, this would only allow about 3 seconds worth of growth for DUT1 in any event. Instead of asking how large we can make the discrepancy between UTC and UT1 in order to permit longer and longer delays between leap seconds - we should ask what the best leap second schedule is to minimize the divergence between UTC and UT1.
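
The broadcast convention just described can be sketched as follows. This is my reading of the standard - emphasized markers 1 through 8 counting positive tenths and markers 9 through 16 counting negative tenths - so treat the marker numbering as an assumption rather than a quotation, and `dut1_markers` as a hypothetical helper name:

```python
def dut1_markers(dut1):
    """Return the list of emphasized second markers encoding DUT1,
    per the CCIR 460 convention sketched above (assumed numbering:
    markers 1-8 for positive tenths, markers 9-16 for negative tenths)."""
    tenths = round(dut1 * 10)
    if not -8 <= tenths <= 8:
        raise ValueError("DUT1 outside the encodable -0.8..+0.8 s range")
    if tenths >= 0:
        return list(range(1, tenths + 1))  # e.g. +0.3 s -> [1, 2, 3]
    return list(range(9, 9 - tenths))      # e.g. -0.2 s -> [9, 10]

print(dut1_markers(0.3), dut1_markers(-0.2))
```

The point of the sketch: sixteen markers at a tenth of a second each give only +/-0.8 s of signalling headroom, which is why loosening the UT1-UTC tolerance buys so little within the existing transmission scheme.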

The Bulletin B numbers (Bulletin A is really quite a good predictor these days, labeled Figure 2, below) allow a reconstruction of the overall trend over the last 25 years (labeled Figure 1, below). One thing to note is the steep slope resulting from calibrating the atomic second to the old Ephemeris Time definition, itself tied to the Earth's motion circa 1900. This slope all but guarantees that a negative leap second will never occur - not only would the Earth's current slowing trend have to be combatted, but the accumulated bias over a coherent 1.5 year timescale would have to be reversed. We don't have leap seconds because of current slowing - we have leap seconds because the Earth has already spun down. The trend will make the slope even steeper in the future (that's the whole problem, after all) and a negative leap second will become ever more unlikely.

measured UT1 versus predicted

Plot of the difference between measured (after the fact) and predicted (before the fact) Universal Time (UT1) versus date. Divergence between civil time (UTC) and mean solar time (UT1) that would have occurred over the last two decades in the absence of leap seconds. The divergence over the long term will grow quadratically.
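
To illustrate the quadratic character of that long-term growth (not the absolute numbers quoted elsewhere in this document, which are measured from a different zero point), one can use the commonly cited long-term parabolic fit of Delta-T, roughly -20 + 32 t^2 seconds with t in centuries from 1820. The coefficients are an empirical approximation, used here only to show the shape of the curve:

```python
def delta_t_estimate(year):
    """Rough long-term parabolic fit for Delta-T in seconds
    (empirical approximation; t is measured in centuries from 1820)."""
    t = (year - 1820.0) / 100.0
    return -20.0 + 32.0 * t * t

# The quadratic term dominates: each century adds more than the last.
for y in (1920, 2020, 2120):
    print(y, round(delta_t_estimate(y)))
```

Whatever the exact coefficients, the quadratic term is the reason the pace of leap adjustments must accelerate over the centuries.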


The change I propose is an explicit monthly scheduling cadence.

Most months would include no leap second, of course. At the current epoch the frequency of leap seconds would remain the same: one per year or per year-and-a-half. However, the freedom to position a leap second 12 times a year would allow a much closer pragmatic fit of UTC to UT1 (labeled Figure 4, below), dramatically reducing the outlier dates. Only a bit more than 2% of the dates now lie outside of the plus-or-minus half second window. Other statistical measures are similarly improved even in this non-optimized application of a monthly sampling rate. (Several of the example leap seconds would have been better scheduled either a month earlier or later.) The Bulletin A predictions can be used as input to a variety of scheduling algorithms, and various "best fit" criteria can be evaluated. I don't want to tell the appropriate agencies how to do their jobs - rather, it's obvious that they know what they're doing, and therefore an even better product should be requested from them in the future.

monthly UTC scheduling example
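
One toy version of such a scheduling algorithm - a greedy monthly scheduler run on synthetic drift numbers, not Bulletin A data, with `schedule_leaps` an illustrative name of my own - might look like this:

```python
def schedule_leaps(monthly_drift_ms, threshold_s=0.5):
    """Greedy monthly leap scheduler (illustrative only).

    monthly_drift_ms: predicted slippage of UT1 against UTC per month,
    in milliseconds (negative while the Earth runs slow).
    Returns (months_with_a_leap, dut1_after_each_month).
    """
    dut1 = 0.0
    leaps, trace = [], []
    for month, drift in enumerate(monthly_drift_ms):
        dut1 += drift / 1000.0
        if abs(dut1) > threshold_s:
            # A positive leap second raises UT1 - UTC by one second.
            dut1 += 1.0 if dut1 < 0 else -1.0
            leaps.append(month)
        trace.append(dut1)
    return leaps, trace

# A steady 60 ms/month slippage: the first leap lands in month index 8
# and |DUT1| stays inside the half-second window thereafter.
leaps, trace = schedule_leaps([-60.0] * 24)
print(leaps)
```

A real scheduler would of course optimize over the Bulletin A predictions rather than react greedily, but even this crude version shows how a monthly cadence keeps DUT1 inside the window.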

It is also revealing to consult past bulletins of the IERS. These have managed six month advance predictions of leap seconds for at least the last five years. The 460-4 standard only requires 8 weeks. Note the single biggest advantage of this modest suggestion. It is already written into the standard. I'm somewhat embarrassed making even this level of fuss over it, since it is obvious from reading the text of CCIR 460-4 that a monthly scheduling pace was foreseen from the beginning. Let's just implement what was always inherent in the standard.


Some of the PTTI discussions focused on the difficulty of making the case to one's funding agencies for the significant resources necessary to retrofit large complex projects in the field to support new timing standards - new timing standards that will produce no benefit to your project. Commercial projects may face even larger funding pressures. Continued leap seconds or a much larger DUT1 range - both have a cost. There is also the question of a timetable for implementing any of the various relaxed tolerance options. It appears that these proposals are being fast-tracked with the intent of benefiting current systems such as GLONASS which do not reliably handle leap seconds. (Whether the same systems will be able to handle values of DUT1 larger than 0.9s is an open question.) In any event, in order to benefit current systems, any change to UTC must happen during the lifetime of those projects.

Note the artificial crisis that is being created. Most changes to widespread standards include specific attention to issues of backwards compatibility. Certain systems or usages may be "grandfathered in". Every care is taken to avoid trampling on current users. The normal pattern of adoption of a new standard is to rely on convincing new users, and old users with new projects, to design to the new rules. The old rules may be deprecated, but are available for the lifetime of pre-existing systems. We, however, don't have that luxury. If we make a change - we change the system for everybody, both new and old - and we force the schedule for that change. Don't underestimate the expense of publicizing such a change. With Y2K we had worldwide media attention shining on our efforts and still we are hearing of "glitches". We will have none of the positive aspects of Y2K, but all of the negative aspects. How many new projects will continue to design DUT1 < 0.9s into their code while operating off of old copies of the standard? (We could perhaps mitigate that somewhat by removing the proprietary restrictions on this particular standard.)

On the other hand, imagine the benefits of restricting our efforts to only changes that are supported by the current standard. No need for a worldwide publicity campaign to try to reach the full extent of our users. No need for any specific schedule at all - we could implement our new leap second scheduling policies in stages over the next several decades - or centuries. And imagine the relative ease of selling your own funding agency on a modest sized project to verify that your own code supports the full letter of a pre-existing standard. A project that is necessary, not because the precision timing community has decided to degrade the quality of civil time - but specifically because we have decided to support an upgrade to UTC. And then realize that there is no need to seek even this modest level of supplemental funding - because we can simply decide to delay implementation so far into the future that all current projects - in the field or in planning - will have lived out their complete life cycles.

Meanwhile, let's consider what happens if we reach agreement along the lines suggested in this proposal in the next five or ten years. (And does anybody really expect the other proposals to advance much more quickly?) New projects could begin to code to the new leap second scheduling guidelines immediately - in fact, projects could begin to prepare before agreement was reached since this is, after all, the current standard. Backwards compatibility with the December/June scheduling (that will likely continue to persist for quite some time) is guaranteed. Our attentions could be focused on developing tools for testing and verification and on writing documentation and user's guides. Public relations would be a breeze. Not "here are some unpersuasive technical arguments about why we decided to allow UTC to diverge from UT1", but rather "here is how we have decided to improve timekeeping for the world community". Which do you think will play better in Scientific American and on Nightline?


Civil time and the UTC standard are such basic concepts that all policy discussions embrace exceptionally long time scales. These time scales can make even pragmatic discussions sound like science fiction. For example, the "Report of the URSI Commission J Working Group on the Leap Second" includes this passage under Appendix III:
     "2.) I also received many comments about the effects on society
     when UT1 diverges.  Note that we are talking about a minute in the
     next century.  Society routinely handles a one-hour switch with
     every daylight savings time change, and a half-hour offset if they live at
     the edge of a time zone.  By the time leap seconds add up to an
     hour, the world will be very different.  If we have settled the
     solar system, a whole new scheme will probably have evolved.  Even
     if we have not changed our system, society has enough slop in its
     timekeeping that people will slowly shift without even knowing it.
     More people will start showing up to work at 9:00 AM, and less at
     8:30 AM, etc."
Besides the confusion here between periodic and secular effects (not to mention a strange tolerance for "slop" coming from the precision timing community), we find ourselves speculating about what "scheme" our descendants will use while exploring the solar system. All right then, let's speculate. The current standard has lasted for 25 years and could reasonably serve us well for ten or a hundred times longer than that - so presumably any replacement must be expected to last equally far into the future. As such it is not just idle whim to attempt to predict what scheme our species will be using - it is our obligation in undertaking such a revision. I'll be bold enough to provide an answer to the speculation right now. What scheme will our species ultimately use to synchronize Atomic time and Earth time? Answer: we will redefine the second. What else is available?

Eventually the leap second pace will indeed accelerate to the point that a monthly (or even weekly or daily) rate fails to sample the underlying waveform acceptably well. At that point - what? Will our multi-millennial grandchildren find it reasonable to allow their clocks to register day for night? Perhaps the thought is that some entirely different clock will be used (one imagines some futuristic variation on a "metric" clock). What difference does that make? The same underlying issues will remain. The rates between the Earth and the Atomic (or Antimatter :-) clocks will diverge and some scheme will be needed to synchronize them.

So - they will need to find some reasonable revision of the fundamental unit of time. Or more likely, they will define a civil second that is some fraction longer than the "scientific" second. And then they will continue to issue leap seconds on a palatable schedule according to the Earth's whims. (The future PTTI community would have the freedom to select an epoch for such a change that would correspond to some nice round number or even ratio of fundamental physical constants, and thus perhaps even simplify the handling of time units.)

Redefine the unit to match the long term variation. Schedule leap seconds for the short term deviations. That is - adjust watch A's rate every few millennia AND reset watch E in between.

In any event, the suggested monthly scheduling of leap seconds should be good for the next 500 or 1000 years - by which time the more sci-fi suggestions that we won't care in the future as we spread through the cosmos will be shown to be silly. Are most of the Earth's inhabitants ever likely to leave the ground? How many millennia will pass before the majority of our species lives elsewhere? Actually, let's assume that we will indeed have a future interest in synchronizing multiple diurnal rates - Earth AND Mars, say - with our master clock (or vice versa). Won't that be made much easier with a more rapid scheduling cadence of single leap seconds (on both planets), than with the infrequent and inflexible scheduling of much larger jumps? It certainly won't be possible if we schedule no leap seconds at all.


Allowing UTC to drift from UT1 would be to abandon a central mission of the precision timing community. The unacceptable nature of this would become clear in decades or merely in years - not only after millennia or centuries. We would be implementing a hobbled time scale that would draw the ridicule of future historians and scientists.

Degrading UTC to allow much larger discontinuities on a much less frequent schedule will only encourage lazy engineering, programming and management practices. Both the heightened amplitude and lengthened period are guaranteed to make future Y2K-like crises more likely. We would sacrifice everyone's (literally everyone's) long term peace of mind for the short term expediency of a minority of special interests.

On the other hand, a monthly leap second sampling rate would improve UTC. It is already permitted under the standard. A monthly pace of leap second handling would quickly sort out those precision timing projects that actually need to use UTC from those that would better benefit from an unsegmented timescale such as TAI. This is not an issue that can be avoided, and as such we should face it head on. Quickly reaching a consensus on a solution to this non-crisis would allow the community to turn its attention back to the real needs of both UTC and TAI clients - how to communicate standard time signals reliably and remotely.

CCIR 460-4 states:

"GMT may be regarded as the general equivalent of UT."
Let's preserve the original intent of the standard.

Rob Seaman
National Optical Astronomy Observatory
2 April 2003