Comment 5 for bug 970966

James Haigh (james.r.haigh) wrote:

UTC is uniform but discontinuous. If someone wanted to precisely and reliably measure the time between two points, they would need a uniform, continuous standard such as TAI.

TAI can be implemented using the defined relation:
TAI = UTC + 10 s + leap seconds announced since 1972 (published here: http://maia.usno.navy.mil/ser7/tai-utc.dat)

(24 leap seconds so far)
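
For illustration, here's a minimal sketch of that relation in Python. The leap-second count is hard-coded rather than parsed from tai-utc.dat, and note that Python's datetime can't represent a leap second (23:59:60) itself:

    from datetime import datetime, timedelta, timezone

    # Count of announced leap seconds since 1972, looked up manually from
    # http://maia.usno.navy.mil/ser7/tai-utc.dat (24 at the time of writing).
    LEAP_SECONDS_SINCE_1972 = 24

    def utc_to_tai(utc):
        """Apply the defined relation: TAI = UTC + 10 s + announced leap seconds."""
        return utc + timedelta(seconds=10 + LEAP_SECONDS_SINCE_1972)

    # Prints 2012-04-01 00:00:34+00:00, i.e. TAI - UTC = 34 s.
    print(utc_to_tai(datetime(2012, 4, 1, tzinfo=timezone.utc)))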

So if the system's UTC fails to insert an announced leap second, the derived TAI jumps forward by a second: the published offset grows by one second that the clock never actually inserted. I could therefore incorrectly measure a 25 ms time interval as 1025 ms.
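
A hypothetical sketch of that failure mode, assuming the 2012-06-30 leap second for concreteness, with a table-driven offset applied to readings from a system clock that skipped the insertion:

    from datetime import datetime, timedelta, timezone

    LEAP = datetime(2012, 7, 1, tzinfo=timezone.utc)  # assumed leap-second boundary

    def utc_to_tai(utc):
        # TAI - UTC per the published table: 34 s before the leap, 35 s after.
        return utc + timedelta(seconds=35 if utc >= LEAP else 34)

    # Two readings 25 ms of real time apart, straddling the leap second.
    # A correct clock would report 23:59:60.005 for the second reading;
    # a broken one that skipped the insertion is already a second ahead.
    before = utc_to_tai(datetime(2012, 6, 30, 23, 59, 59, 980000, tzinfo=timezone.utc))
    after = utc_to_tai(datetime(2012, 7, 1, 0, 0, 0, 5000, tzinfo=timezone.utc))
    print(after - before)  # 0:00:01.025000 -- 25 ms measured as 1025 ms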

I could also implement UT1 (which is continuous but non-uniform) using the defined relation:
UT1 = UTC + DUT1 (published here: http://maia.usno.navy.mil/ser7/finals.all)

See: https://en.wikipedia.org/wiki/DUT1
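
A similar sketch for UT1; the DUT1 value here is made up for illustration and would really be looked up from finals.all for the date in question:

    from datetime import datetime, timedelta, timezone

    # UT1 - UTC in seconds; |DUT1| < 0.9 by definition. The real value must
    # be looked up from http://maia.usno.navy.mil/ser7/finals.all -- this
    # figure is purely illustrative.
    DUT1 = 0.4

    def utc_to_ut1(utc):
        """Apply the defined relation: UT1 = UTC + DUT1."""
        return utc + timedelta(seconds=DUT1)

    print(utc_to_ut1(datetime(2012, 4, 1, tzinfo=timezone.utc)))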

Again, if UTC incorrectly fails to insert a leap second, the derived UT1 would appear to skip a second, and would incorrectly become discontinuous.

See the IERS, which publishes the astronomical data and announces leap seconds (the USNO hosts the data files):
http://www.iers.org/
http://maia.usno.navy.mil/

Anyway, how does it make sense to sync a clock over the network to high precision using time protocols, when the system's UTC can't even be relied on to a precision of a second?