UTC is uniform but discontinuous. If someone wanted to precisely and reliably measure time between two points, they would need a uniform, continuous standard such as TAI.
TAI can be implemented using the defined relation:
TAI = UTC + 10s + announced leap seconds since 1972 (24 leap seconds so far)
(Published here: http://maia.usno.navy.mil/ser7/tai-utc.dat)
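To make that concrete, here is a minimal sketch of deriving TAI from the system's UTC clock; the table below contains only a few illustrative rows and is not a parser for the published tai-utc.dat file:

    # Minimal sketch: TAI = UTC + 10s + leap seconds announced since 1972.
    # The table is illustrative and truncated; a real implementation would
    # parse the published tai-utc.dat file instead.
    from datetime import datetime, timedelta, timezone

    # (effective UTC date, cumulative TAI-UTC offset in whole seconds)
    LEAP_TABLE = [
        (datetime(1972, 1, 1, tzinfo=timezone.utc), 10),
        (datetime(2006, 1, 1, tzinfo=timezone.utc), 33),
        (datetime(2009, 1, 1, tzinfo=timezone.utc), 34),  # 24th leap second
    ]

    def tai_utc_offset(utc):
        """Return the TAI-UTC offset (as a timedelta) in effect at `utc`."""
        seconds = 0
        for effective, total in LEAP_TABLE:
            if utc >= effective:
                seconds = total
        return timedelta(seconds=seconds)

    def utc_to_tai(utc):
        """Derive TAI from a UTC timestamp using the leap-second table."""
        return utc + tai_utc_offset(utc)

    print(utc_to_tai(datetime(2010, 6, 1, tzinfo=timezone.utc)))
    # -> 2010-06-01 00:00:34+00:00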
So if the system's UTC clock fails to insert an announced leap second, the TAI derived from it would appear to skip a second, and I could incorrectly measure a 25ms time interval as 1025ms.
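As a rough illustration with made-up timestamps around the 2008-12-31 leap second: the system clock keeps ticking normally because it never inserted the leap second, so only 25ms really elapse between the two readings, but the table-driven conversion adds 33s to one endpoint and 34s to the other:

    # Hypothetical timestamps straddling the 2008-12-31 leap second,
    # read from a system clock that failed to insert it.
    from datetime import datetime, timedelta, timezone

    t0 = datetime(2008, 12, 31, 23, 59, 59, 990_000, tzinfo=timezone.utc)
    t1 = datetime(2009, 1, 1, 0, 0, 0, 15_000, tzinfo=timezone.utc)

    print((t1 - t0).total_seconds())  # 0.025 -> 25ms really elapsed

    # Converting each endpoint with the leap-second table (33s before the
    # boundary, 34s after) makes the derived-TAI interval a second longer.
    tai0 = t0 + timedelta(seconds=33)
    tai1 = t1 + timedelta(seconds=34)
    print((tai1 - tai0).total_seconds())  # 1.025 -> measured as 1025ms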
I could also implement UT1 (which is continuous but non-uniform) by the defined relation:
UT1 = UTC + DUT1 (Published here: http://maia.usno.navy.mil/ser7/finals.all)
See: https://en.wikipedia.org/wiki/DUT1
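A sketch of that conversion, assuming the DUT1 value for the date has already been looked up (the -0.1s below is made up; a real implementation would interpolate the UT1-UTC column of finals.all):

    # UT1 = UTC + DUT1, with DUT1 supplied by the caller.
    from datetime import datetime, timedelta, timezone

    def utc_to_ut1(utc, dut1_seconds):
        """Apply the published UT1-UTC (DUT1) correction to a UTC instant."""
        return utc + timedelta(seconds=dut1_seconds)

    print(utc_to_ut1(datetime(2010, 6, 1, tzinfo=timezone.utc), -0.1))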
Again, if the system's UTC clock fails to insert an announced leap second, the derived UT1 would appear to skip a second and would incorrectly be discontinuous.
See the IERS, who publish the astronomical data and announce leap seconds: http://www.iers.org/ and http://maia.usno.navy.mil/
Anyway, how does it make sense to sync a clock over the network to high precision using time protocols, when the system's UTC can't even be relied on to a precision of a second?