Comment 7 for bug 1861612

Andrew Johnson (anj) wrote :

I don't have any definitive answers (or access to a vxWorks system to experiment with down here at about 18 degrees North), but I can provide more data.

callbackRequestDelayed() is implemented using an epicsTimer; the timer queue compares the current time (which since commit 4f2228fb1 gets read using epicsTime::getMonotonic() instead of epicsTime::getCurrent()) with a delayed version of the time from the same source to work out when to fire the callback.
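
To picture that, here is a minimal stand-alone sketch of the idea; it is in the spirit of the timer queue rather than its actual code, delayedDemo() is my own name, and the epicsThreadSleep() call stands in for the queue thread's sleep on an epicsEvent:

    #include <stdio.h>
    #include <epicsTime.h>
    #include <epicsThread.h>

    /* Sketch only: capture a monotonic expiry when the delay is requested,
     * then keep comparing fresh getMonotonic() readings against it to decide
     * whether to fire now or go back to sleep. */
    static void delayedDemo(double delaySeconds)
    {
        epicsTime expire = epicsTime::getMonotonic() + delaySeconds;
        for (;;) {
            double remaining = expire - epicsTime::getMonotonic();
            if (remaining <= 0.0) {
                printf("fire the callback now\n");
                break;
            }
            epicsThreadSleep(remaining);  /* real queue sleeps on an epicsEvent */
        }
    }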

On vxWorks-ppc the getMonotonic() time source is the CPU's time-base register, which ticks at some multiple of the CPU's bus clock frequency. Different boards have different bus rates and multipliers, and if the BSP doesn't provide the optional routine sysTimeBaseFreq() to return the nominal tick rate, it gets measured in osdMonotonicInit(), which prints "osdMonotonicInit: Measuring CPU time-base frequency ... %llu ticks/sec." I think that is called from a C++ static initializer, so the message should appear when loading the munch file, but I might be mistaken about that part.
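
For reference, this is roughly the kind of measurement I mean, assuming the PowerPC vxTimeBaseGet() routine is available (the header it lives in varies between vxWorks versions); this is only my sketch of the approach, not the actual osdMonotonicInit() code:

    #include <vxWorks.h>
    #include <vxLib.h>     /* vxTimeBaseGet() on PowerPC; header may vary */
    #include <sysLib.h>
    #include <taskLib.h>
    #include <stdio.h>

    /* Read the 64-bit time-base register. */
    static unsigned long long readTimeBase(void)
    {
        UINT32 hi, lo;
        vxTimeBaseGet(&hi, &lo);
        return ((unsigned long long) hi << 32) | lo;
    }

    /* Sketch: count time-base ticks across roughly one second of system
     * clock ticks to estimate the nominal ticks/sec figure that a BSP's
     * sysTimeBaseFreq() would otherwise report. */
    static unsigned long long measureTimeBaseFreq(void)
    {
        unsigned long long t0 = readTimeBase();
        taskDelay(sysClkRateGet());          /* about one second of OS ticks */
        unsigned long long t1 = readTimeBase();
        printf("measured ~%llu time-base ticks/sec\n", t1 - t0);
        return t1 - t0;
    }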

A workaround would be to increase your system clock tick rate by calling sysClkRateSet(100) or higher, which I would put towards the top of your startup scripts; you can call it later and our code should adjust (thanks to Dirk, IIRC), but before loading the munch file is probably safest. The OS routines and any taskDelay() calls will still use this tick rate for their delay granularity, and that includes the semTake() that gets used when the timer queue needs to sleep. In epicsEventWaitWithTimeout() we calculate how many OS ticks to sleep using the equivalent of
    max(1, (int) (delay * sysClkRateGet()))
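
To make the granularity concrete, here is a tiny stand-alone version of that arithmetic with the rates hard-coded (60 Hz being the usual vxWorks default, 100 Hz the suggested workaround) instead of calling sysClkRateGet():

    #include <stdio.h>

    /* Same calculation as above, with the clock rate passed in explicitly. */
    static int delayToTicks(double delay, int clkRate)
    {
        int ticks = (int) (delay * clkRate);
        return ticks > 1 ? ticks : 1;        /* never ask for zero ticks */
    }

    int main(void)
    {
        /* A 10 ms request becomes 1 tick either way, but a tick is ~16.7 ms
         * at 60 Hz and exactly 10 ms at 100 Hz, so short delays land much
         * closer to what was asked for after sysClkRateSet(100). */
        printf("60 Hz:  %d tick(s)\n", delayToTicks(0.010, 60));
        printf("100 Hz: %d tick(s)\n", delayToTicks(0.010, 100));
        return 0;
    }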

I haven't had any revelations about what might be causing the problem yet, though.