libCom's epicsEventTest measures epicsEventWaitWithTimeout() delays from 2.0**0 down to 2.0**-19=0.000001907 and 0.0. However it isn't particularly careful with the starting point of those timings, and it also drops the sign from the calculated delay error. I added a call to epicsEventWaitWithTimeout(id, 0.000001) before the start of each delay measurement to synchronize each one with the start of an OS tick, and changed the error calculation to simply measured delay minus requested delay.

The results on VxWorks are interesting. These are with the default 60 Hz tick rate:

# epicsEventWaitWithTimeout(1.000000) delay error 0.000013 sec
# epicsEventWaitWithTimeout(0.500000) delay error 0.000006 sec
# epicsEventWaitWithTimeout(0.250000) delay error 0.000003 sec
# epicsEventWaitWithTimeout(0.125000) delay error -0.008331 sec
# epicsEventWaitWithTimeout(0.062500) delay error -0.012500 sec
# epicsEventWaitWithTimeout(0.031250) delay error -0.014583 sec
# epicsEventWaitWithTimeout(0.015625) delay error 0.001042 sec
# epicsEventWaitWithTimeout(0.007813) delay error 0.008854 sec
# epicsEventWaitWithTimeout(0.003906) delay error 0.012760 sec
# epicsEventWaitWithTimeout(0.001953) delay error 0.014716 sec
# epicsEventWaitWithTimeout(0.000977) delay error 0.015690 sec
# epicsEventWaitWithTimeout(0.000488) delay error 0.016178 sec
# epicsEventWaitWithTimeout(0.000244) delay error 0.016423 sec
# epicsEventWaitWithTimeout(0.000122) delay error 0.016545 sec
# epicsEventWaitWithTimeout(0.000061) delay error 0.016605 sec
# epicsEventWaitWithTimeout(0.000031) delay error 0.016636 sec
# epicsEventWaitWithTimeout(0.000015) delay error 0.016654 sec
# epicsEventWaitWithTimeout(0.000008) delay error 0.016658 sec
# epicsEventWaitWithTimeout(0.000004) delay error 0.016663 sec
# epicsEventWaitWithTimeout(0.000002) delay error 0.016665 sec
# epicsEventWaitWithTimeout(0.000000) delay error 0.000003 sec

I then set the clock rate to 1 kHz and got these:

# epicsEventWaitWithTimeout(1.000000) delay error 0.000025 sec
# epicsEventWaitWithTimeout(0.500000) delay error -0.000005 sec
# epicsEventWaitWithTimeout(0.250000) delay error -0.000003 sec
# epicsEventWaitWithTimeout(0.125000) delay error -0.000001 sec
# epicsEventWaitWithTimeout(0.062500) delay error -0.000501 sec
# epicsEventWaitWithTimeout(0.031250) delay error -0.000250 sec
# epicsEventWaitWithTimeout(0.015625) delay error -0.000625 sec
# epicsEventWaitWithTimeout(0.007813) delay error -0.000813 sec
# epicsEventWaitWithTimeout(0.003906) delay error -0.000907 sec
# epicsEventWaitWithTimeout(0.001953) delay error -0.000953 sec
# epicsEventWaitWithTimeout(0.000977) delay error 0.000023 sec
# epicsEventWaitWithTimeout(0.000488) delay error 0.000512 sec
# epicsEventWaitWithTimeout(0.000244) delay error 0.000756 sec
# epicsEventWaitWithTimeout(0.000122) delay error 0.000878 sec
# epicsEventWaitWithTimeout(0.000061) delay error 0.000938 sec
# epicsEventWaitWithTimeout(0.000031) delay error 0.000969 sec
# epicsEventWaitWithTimeout(0.000015) delay error 0.000984 sec
# epicsEventWaitWithTimeout(0.000008) delay error 0.000992 sec
# epicsEventWaitWithTimeout(0.000004) delay error 0.000996 sec
# epicsEventWaitWithTimeout(0.000002) delay error 0.000998 sec
# epicsEventWaitWithTimeout(0.000000) delay error 0.000002 sec

The results are quite repeatable; only the last digit changes by about ±2 between runs. On Linux I see no negative delay errors at all. On both RTEMS-pc386-qemu and VxWorks I get negative delay errors for requests that are larger than and not an exact multiple of the tick quantum, so those requests are returning earlier than they are supposed to. I am not seeing negative delay errors greater than the tick quantum, but as I said above I have synchronized the time measurements to start immediately after a clock tick.
I think I now understand the problem: we pass the time delay to the OS as an integer tick count, but callbackRequestDelay() is now measuring and calculating its delays much more accurately than it used to, using the monotonic clock; previously those delays would always have been multiples of the tick quantum. In epicsEventWaitWithTimeout() we calculate the delay time in ticks but throw away the remainder, which can now be up to just under a whole tick interval long. We also aren't taking into account that the start of a delay is not aligned to the OS tick event while the end always is, so we're throwing away up to another whole tick there too.

By inserting a busy-wait of just under a tick after the synchronization to the OS tick described above, I can produce delay errors of almost 2 ticks. Here the tick rate was 1000 Hz and I had inserted a 990µs delay after the previous tick event before starting the measurement:

# epicsEventWaitWithTimeout(1.000000) delay error -0.000976 sec
# epicsEventWaitWithTimeout(0.500000) delay error -0.000962 sec
# epicsEventWaitWithTimeout(0.250000) delay error -0.000956 sec
# epicsEventWaitWithTimeout(0.125000) delay error -0.000953 sec
# epicsEventWaitWithTimeout(0.062500) delay error -0.001451 sec
# epicsEventWaitWithTimeout(0.031250) delay error -0.001200 sec
# epicsEventWaitWithTimeout(0.015625) delay error -0.001574 sec
# epicsEventWaitWithTimeout(0.007813) delay error -0.001762 sec
# epicsEventWaitWithTimeout(0.003906) delay error -0.001855 sec
# epicsEventWaitWithTimeout(0.001953) delay error -0.001903 sec
# epicsEventWaitWithTimeout(0.000977) delay error -0.000926 sec
# epicsEventWaitWithTimeout(0.000488) delay error -0.000437 sec
# epicsEventWaitWithTimeout(0.000244) delay error -0.000194 sec
# epicsEventWaitWithTimeout(0.000122) delay error -0.000072 sec
# epicsEventWaitWithTimeout(0.000061) delay error -0.000011 sec
# epicsEventWaitWithTimeout(0.000031) delay error 0.000020 sec
# epicsEventWaitWithTimeout(0.000015) delay error 0.000035 sec
# epicsEventWaitWithTimeout(0.000008) delay error 0.000043 sec
# epicsEventWaitWithTimeout(0.000004) delay error 0.000047 sec
# epicsEventWaitWithTimeout(0.000002) delay error 0.000048 sec
# epicsEventWaitWithTimeout(0.000000) delay error 0.000002 sec

I wouldn't want to just round the delay up by 2 ticks; that could badly hurt performance. The solution probably has to involve reading the monotonic clock and calling semTake(id, 1) again if the OS returns before the hi-res delay has expired (we might have to do that twice). However there are a number of other places that also use the tick rate and might need similar adjustments, so there's more work to this fix than just changing osdEvent.c for VxWorks and RTEMS.