Background
As I understand it (and I may be wrong), an asynchronous delay is one that does not completely take over the CPU.
A synchronous delay, on the other hand, sits in a loop counting clock cycles until the desired time is reached.
A typical synchronous delay is a function usually called delay (the exact name depends on the language and library); in C it looks something like this:
delay_ms(30); // wait 30 milliseconds
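For contrast, such a blocking delay can be sketched as a crude busy-wait loop. The 48 MHz clock and the cycles-per-iteration figure below are assumptions for illustration; a real delay_ms() is calibrated to the actual CPU frequency or counts a hardware timer instead:

```c
#include <stdint.h>

/* Assumed core clock for this sketch; a real implementation is
   calibrated to the actual CPU frequency. */
#define CYCLES_PER_MS 48000u

/* Synchronous (blocking) delay: the CPU does nothing but spin here
   until roughly `ms` milliseconds' worth of cycles have been burned. */
void delay_ms(uint32_t ms)
{
    /* `volatile` stops the compiler from optimizing the empty loop away;
       ~4 cycles per iteration is a rough, compiler-dependent guess. */
    for (volatile uint32_t i = 0; i < ms * (CYCLES_PER_MS / 4u); i++) {
    }
}
```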
An asynchronous delay is a bit more complex to use: it requires a free-running timer and auxiliary variables.
At a given moment, an auxiliary variable captures the timer value; again, a rough example in C:
instantaneousValueTimer = SysTick->VAL;
Then, to create the delay, I compute the difference between the captured value and the current timer value (SysTick counts down, so the captured value is the minuend); if the result is greater than or equal to the number of cycles that equal the delay, we can say that the desired time has been achieved.
Again in C, an example:
if (((instantaneousValueTimer - (SysTick->VAL)) & 0x00FFFFFF) >= _500ms)
{
    gpio_toggle_output_bit(LED_GREEN_GPIO_PORT, LED_GREEN_PIN);
    instantaneousValueTimer = SysTick->VAL; // capture the timer value again to start the next delay
}
Where _500ms is the number of timer ticks equivalent to 500 ms.
This way, the program counter is not, so to speak, "stuck" in a loop; the CPU can do other tasks and then return to the asynchronous delay check to determine whether the time has elapsed.
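Put together, this pattern becomes a "superloop" in which the delay check is just one task among others. To keep the sketch runnable off-target, a hypothetical timer_now() up-counter (advanced by the loop itself) stands in for reading a free-running hardware timer such as SysTick->VAL, and 1 tick is assumed to equal 1 ms:

```c
#include <stdint.h>

/* Hypothetical tick source: on real hardware this would read a
   free-running timer (e.g. SysTick->VAL); here a fake up-counter
   lets the sketch run on a host. */
static uint32_t g_fake_timer;
static uint32_t timer_now(void) { return g_fake_timer; }

#define TICKS_500MS 500u   /* assumption: 1 tick == 1 ms */

/* Runs `passes` iterations of the superloop and returns how many
   times the "LED" was toggled by the non-blocking 500 ms delay. */
uint32_t run_superloop(uint32_t passes)
{
    uint32_t toggles = 0;
    uint32_t captured = timer_now();

    for (uint32_t pass = 0; pass < passes; pass++) {
        /* Non-blocking delay check: unsigned subtraction yields the
           elapsed ticks even if the counter wrapped in between. */
        if ((uint32_t)(timer_now() - captured) >= TICKS_500MS) {
            toggles++;               /* stands in for gpio_toggle_output_bit() */
            captured = timer_now();  /* re-arm for the next period */
        }
        /* ...other asynchronous tasks would run here... */
        g_fake_timer++;              /* host-only: advance the fake timer */
    }
    return toggles;
}
```

Nothing inside the loop blocks, so every pass the CPU gets to the "other tasks" line regardless of whether the delay has elapsed.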
What are the disadvantages of this?
- A free-running timer is needed, usually 24 bits wide or, better still, 32 bits. For large delay values, a 16-bit or smaller timer may not be enough and will require extra auxiliary variables, making the code somewhat clumsy.
- The difference between two counter readings can appear to go "negative" when the counter wraps around between the two reads. In fact, no absolute-value function is needed: in C, subtraction of unsigned integers is defined to wrap modulo 2^N, so for a full-width counter the difference is automatically the correct elapsed count on every conforming compiler; it is not a compiler quirk. For a counter narrower than the variable, such as the 24-bit SysTick, the result just has to be masked to the counter's width.
- It's not exact. If the delay hasn't been met, the CPU goes off to do other tasks (which should also be asynchronous), and the more it does before re-evaluating, the more likely it is that when it checks again the elapsed time will be slightly longer than requested (hence the inequality comparison rather than a test for equality). Anyone who needs precise timing will have to use interrupts, or sacrifice system performance with synchronous delays. An asynchronous delay is therefore useful where timing precision isn't critical, for example blinking an alarm LED on and off.
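On the wraparound point above, a small host-runnable demonstration. The counter values are made up, and the SysTick case assumes the reload register is set to the full 24-bit range (0xFFFFFF):

```c
#include <stdint.h>

/* Elapsed ticks for a full-width 32-bit up-counter: unsigned
   subtraction wraps modulo 2^32, so rollover needs no special case. */
uint32_t elapsed_up32(uint32_t start, uint32_t now)
{
    return now - start;
}

/* Elapsed ticks for SysTick: a 24-bit DOWN-counter, so the captured
   value is the minuend and the result is masked to 24 bits. This
   assumes SysTick->LOAD == 0x00FFFFFF (the full 24-bit range). */
uint32_t elapsed_systick(uint32_t captured, uint32_t current)
{
    return (captured - current) & 0x00FFFFFFu;
}
```

Both functions return the true number of elapsed ticks whether or not the counter rolled over between the two reads.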
Question
I have seen another way of doing it: the aforementioned timer is used to generate an interrupt, say every 1 ms.
Each time the interrupt occurs, a variable is incremented, and with that variable, similar to what I do (capturing a value and performing the subtraction described above), an asynchronous delay is achieved.
My doubt is that this method introduces latency into the system: an interrupt, in this case, every 1 ms.
In some projects that may not matter; in others, it can be a nuisance for other processes that are interrupted all the time.
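That tick-based scheme can be sketched as follows. On real hardware the counter would be incremented from the timer ISR (e.g. SysTick_Handler on a Cortex-M); here a plain function stands in for the interrupt so the pattern runs on a host:

```c
#include <stdint.h>

/* Incremented once per millisecond by the timer interrupt on real
   hardware; `volatile` because the ISR modifies it behind main()'s back. */
static volatile uint32_t g_ms_ticks;

/* Host-side stand-in for the real ISR (e.g. SysTick_Handler). */
void tick_isr(void)
{
    g_ms_ticks++;
}

/* Non-blocking check: returns 1 once `interval_ms` milliseconds have
   elapsed since *start, and re-arms *start for the next period. The
   unsigned subtraction stays correct across counter rollover. */
int period_elapsed(uint32_t *start, uint32_t interval_ms)
{
    if ((uint32_t)(g_ms_ticks - *start) >= interval_ms) {
        *start = g_ms_ticks;
        return 1;
    }
    return 0;
}
```

The main loop simply polls period_elapsed() for each soft timer it needs, so one 1 ms hardware interrupt can serve any number of independent delays.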
My question is: Are there other ways to achieve the asynchronous delay that I propose?
Any comments or suggestions are welcome.
Update
Thanks to Scott Seidman's comment: as I said before, maybe I'm wrong, i.e. in using the words synchronous and asynchronous.
I took the concept from Harmony 3 for 32-bit microcontrollers.
Initially they (MCHP) used the terms "blocking" and "non-blocking", but now it's synchronous and asynchronous.
