interval between timer callbacks

Hi, I’m running FreeRTOS 9.0.0 on an ARM CM4. I’m using SysTick at 1000 Hz. I can see that xPortSysTickHandler() gets called precisely every 1000 usec. The problem I’m seeing is that my timer callback function gets called every 1075 usec (worse if I use tickless). I understand that FreeRTOS, like any OS, has a processing delay, but that should only mean that the first call to my callback is delayed. If the processing delay is constant, the interval between all the following calls to my callback should still be 1 ms, right? (I have no other timers in my system.) The precise code and values don’t matter at this point; I’m trying to understand theoretically why there is a cumulative delay presented to timer callbacks. Thanks, Knut


Am I right in thinking you have a 1 ms tick and a timer that expires every 1 ms? If so, you have a timer resolution of 1 ms, and if the tick is occurring every 1 ms it would seem unlikely you could get a cumulative delay with a different resolution. If the timer is expiring on every tick, then what else is the system doing? All your other tasks will never get a whole time slice. How long does the timer callback take to execute?


Yes, my post is confusing, sorry, it was late at night. I’ll try to make it clearer here.

SysTick is 1 ms, but the timer is set to expire approximately every 1.3 s, so say 1300 ticks. I’m reading an independent timer (PRO Timer) at the entry of the SysTick ISR and of the timer callback function. The callback only sends a message to a task (via a queue outside of FreeRTOS) and then calls xSemaphoreGiveFromISR for the same task. I measured the interval between calls to the callback and saw that they happen at intervals of over 1.4 s; I then took that value and divided it by the number of ticks the timer is set to and got my 1075 usec, vs exactly 1000 usec for SysTick.

Let’s forget about that math for a second and just say that the intervals between calls to my callback are > 0.1 s longer than expected. What I was alluding to was that if all processing between the OS and user code actually takes 0.1 s (not likely, I’m running a CM4 at 38.4 MHz!) we can call this delay D. I can see that SysTick is very exact, so if I’m not running tickless I should see T0, 1300*t, T1, … where Tn is a tick resulting in a callback and ‘t’ is an empty tick. Now the absolute times at which I get my callback should be T0+D, T1+D, … The interval, however, ( (T1+D) – (T0+D) ), should still be my expected time of 1300 ticks. My intervals are actually increased, so when I measure them I see absolute times of T0+D, T1+2D, … etc.

I can come up with more exact values when I get access to the lab on Monday, but right now I’m struggling with something that doesn’t even make theoretical sense to me. Oh, one more thing: I have tested with different intervals of both ticks and timer values. Each gives me a different D. I noticed that tickless mode more than doubles the delay, everything else being the same. I get your point that if my callback actually took longer to execute than the programmed timer interval (auto-reload) I would get the effects I see, but that wouldn’t change with tick/tickless or different tick intervals.
Hopefully I made more sense this time 🙂 thanks!


The callback only sends a message to a task (outside of FreeRTOS) and then gives a xSemaphoreGiveFromISR to the same task.
I don’t follow that sentence. If the scheduler is running then there can’t be a task that is outside of FreeRTOS, and if there were such a task you would not be able to give it a semaphore (because it was outside of FreeRTOS so would not know what the semaphore was and the FreeRTOS scheduler would not be able to schedule it anyway).
What I was alluding to was that if all processing between OS and user code actually takes 0.1s
The time between a tick causing a timer to expire, and the timer’s callback executing, should be a tiny fraction of that time – provided there is nothing higher priority than the timer service task executing. What are your configTIMER_TASK_PRIORITY and configMAX_PRIORITIES values? Are there any tasks running at a priority above configTIMER_TASK_PRIORITY?
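For reference, a FreeRTOSConfig.h fragment that keeps the timer service (daemon) task above all application tasks might look like this – the macro names are the standard FreeRTOS ones, but the values are illustrative, not taken from your project:

```c
/* FreeRTOSConfig.h fragment -- illustrative values only. */
#define configUSE_TIMERS               1
#define configMAX_PRIORITIES           ( 5 )

/* Run the timer service task at the highest task priority so timer
   callbacks are not delayed by application tasks. */
#define configTIMER_TASK_PRIORITY      ( configMAX_PRIORITIES - 1 )
#define configTIMER_QUEUE_LENGTH       10
#define configTIMER_TASK_STACK_DEPTH   ( configMINIMAL_STACK_SIZE * 2 )
```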


All tasks are FreeRTOS tasks. It’s the message queue that is proprietary. Here is what I had set:

~~~
#define configMAX_PRIORITIES           ( 3 )
#define configTIMER_TASK_PRIORITY      ( configMAX_PRIORITIES - 1 ) /* Highest priority */

/* task.h */
#define tskIDLE_PRIORITY               ( ( UBaseType_t ) 0U ) /* <=== QUESTION ON THIS! */

#define TASK_PRIORITY                  ( tskIDLE_PRIORITY + 1 )

#define configUSE_PREEMPTION           ( 1 )
#define configSLEEP_MODE               ( 0 )

/* Definition used only if configUSE_TICKLESS_IDLE == 0 */
#define configUSE_SLEEP_MODE_IN_IDLE   ( 0 )

/* Energy saving modes */
#define configUSE_TICKLESS_IDLE        ( 0 )
~~~

I have since found that a third-party driver to which I have no source code is probably at fault: the problem occurs when I call one of its functions from a task, but not if I call it from the FreeRTOS timer callback. Speculations welcome, but it doesn’t seem to be an error caused by FreeRTOS.

I have one question though. In task.h, tskIDLE_PRIORITY is set to 0, with a comment not to ever change it. But on an ARM this is the highest priority. Is the idle task supposed to have the highest prio? My system is very simple: I have two user tasks plus the idle and timer tasks. How would you set up priorities for such a system? Right now my two tasks have the same prio.


But on an ARM this is the highest priority.
No, FreeRTOS works the same on all architectures, and 0 is definitely the lowest task priority on all architectures. Don’t confuse task priority with interrupt priority.
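To make that concrete, here is one reasonable sketch for the system described (two application tasks plus the kernel-created idle and timer tasks). This fragment needs the FreeRTOS kernel to build, and the task names, stack depths, and the choice to stagger the two tasks’ priorities are illustrative assumptions, not from the original posts:

```c
/* Sketch only. Assumes configMAX_PRIORITIES >= 3 and
   configTIMER_TASK_PRIORITY == configMAX_PRIORITIES - 1, so the timer
   service task runs above both application tasks. */
#include "FreeRTOS.h"
#include "task.h"

void vWorkerTask( void *pvParameters );
void vCommsTask( void *pvParameters );

void vSetupTasks( void )
{
    /* The idle task is created by the kernel at tskIDLE_PRIORITY (0),
       which is the LOWEST task priority on every FreeRTOS port. */
    xTaskCreate( vWorkerTask, "Worker", configMINIMAL_STACK_SIZE,
                 NULL, tskIDLE_PRIORITY + 1, NULL );

    /* Give the task that must react promptly the higher priority of the
       two, but keep both below configTIMER_TASK_PRIORITY so timer
       callbacks are never held off by application code. */
    xTaskCreate( vCommsTask, "Comms", configMINIMAL_STACK_SIZE,
                 NULL, tskIDLE_PRIORITY + 2, NULL );
}
```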