Tasklet design notes
This information is adapted from Miklos Maroti's posts to the tinyos-devel mailing list, documenting the Tasklet interface as implemented in the RFXLINK stack.
The RFXLINK stack almost never uses atomic sections (unlike other radio stacks) because we ensure that the stack is not reentered asynchronously. If a (radio or timer) interrupt arrives while the previous interrupt handler has not yet returned, then we do not reenter the radio stack but remember that this interrupt has to be handled once the previous one is finished. Similarly, if we are in the middle of sending a packet, we do not handle an incoming message interrupt until the send is finished. This simplifies all state transitions, improves the readability of the code, and does not impose many limitations, since there is usually a shared resource (SPI bus, radio chip) that would prevent concurrent access anyway (other drivers achieve this with huge, long-running atomic blocks).
Given the above, we need something with task-like semantics, let's call it a tasklet, where only one tasklet is allowed to run at a time. This tasklet scheduling domain sits between real tasks and interrupts: it can be configured either to run as regular tasks (if TASKLET_IS_TASK is defined) or to run in interrupt context.
A tasklet can be scheduled for execution (similar to posting a task), and Tasklet.run will be executed when no other tasklet is running. You can disable/enable the execution of tasklets with the disable/enable commands; this is used when we want to enter the radio stack to send a message and need exclusive access (no interruption). Of course, only the time-critical parts of RFXLINK are protected by tasklets.
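To make these semantics concrete, here is a minimal C sketch of how callers use the operations described above. The real API is the nesC Tasklet interface of RFXLINK; the C function names (tasklet_schedule, tasklet_disable, tasklet_enable, radio_irq_handler, send_from_task) are illustrative assumptions, not the actual code.

```c
/* Illustrative C sketch; names are assumptions, the real API is
 * the nesC Tasklet interface of the RFXLINK library. */

void tasklet_schedule(void);  /* ask for the run handler to be called (like posting a task) */
void tasklet_disable(void);   /* defer tasklet execution to get exclusive access */
void tasklet_enable(void);    /* allow deferred tasklets to run again */

/* A radio interrupt handler only schedules the tasklet instead of
 * touching the radio state machine directly. */
void radio_irq_handler(void)
{
    tasklet_schedule();
}

/* A send request coming from task context takes exclusive access to
 * the stack before changing its state, then releases it. */
void send_from_task(void)
{
    tasklet_disable();   /* the run handler cannot execute while we are here */
    /* ... set up and start transmitting the packet ... */
    tasklet_enable();    /* any schedule() calls deferred in the meantime run now */
}
```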
If tasklets are configured to run in interrupt context, then on the first interrupt (A) we call Tasklet.schedule(), which calls Tasklet.run() immediately but remembers internally that a tasklet is already running. If a subsequent interrupt (B) arrives before (A) has returned and calls Tasklet.schedule() again, then we just remember that Tasklet.run() must be called again and immediately return from the (B) interrupt. When (A)'s call to Tasklet.run() returns, we check whether other interrupts arrived in the meantime, that is, whether we should call Tasklet.run() again. This is why you see the forever loop in the tasklet code: the first interrupt handler acts as a dispatcher (tasklet scheduler) for all subsequent tasklet calls.
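The following is a minimal C sketch of this dispatcher, not the actual nesC TaskletC implementation: the state encoding, the function names, and the ATOMIC_BEGIN/ATOMIC_END placeholders (standing in for briefly disabling interrupts) are all assumptions made for illustration.

```c
/* Sketch of the interrupt-context tasklet dispatcher described above.
 *
 * state == 0: no tasklet code is running
 * state == 1: tasklet_run() is running (or about to run)
 * state == 2: tasklet_run() is running AND must run once more
 */
#include <stdint.h>

static volatile uint8_t state;

extern void tasklet_run(void);  /* the time-critical radio logic */

/* Placeholders for a short interrupt-disabled section around the
 * few instructions that touch 'state' (platform specific). */
#define ATOMIC_BEGIN()
#define ATOMIC_END()

void tasklet_schedule(void)
{
    ATOMIC_BEGIN();
    if (state != 0) {
        /* Some earlier interrupt is already inside tasklet_run():
         * just remember that it must be executed once more and
         * return, so the interrupted handler is never reentered. */
        state = 2;
        ATOMIC_END();
        return;
    }
    state = 1;
    ATOMIC_END();

    /* We were the first: act as the dispatcher for every schedule()
     * call that arrives while tasklet_run() is executing. */
    for (;;) {
        tasklet_run();

        ATOMIC_BEGIN();
        if (state == 1) {
            /* no schedule() arrived in the meantime, we are done */
            state = 0;
            ATOMIC_END();
            return;
        }
        /* a nested interrupt called schedule(): run once more */
        state = 1;
        ATOMIC_END();
    }
}
```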
A typical TinyOS application can have long-running tasks, for example some signal processing. There is no way to mandate that every task execute in less than 1 ms, because that cannot be enforced on a global scale.
However, a radio stack with software ACKs must respond with an ACK to incoming packets within 0.3-0.6 ms, and this is NOT achievable without handling a significant portion of the radio logic in interrupt context. So for some applications it makes sense to run tasklets in interrupt context. On the other hand, if some other high-priority work (e.g. an ADC sampling at 40 kHz) is running, then using TASKLET_IS_TASK makes more sense, but that has consequences (larger latency, etc.).
By the way, hardware ACKs are NOT the solution in general, because a software ACK guarantees that the mote has enough buffer memory to store the incoming packet, while with hardware ACKs the stack might be forced to drop an already acknowledged packet before it can notify the AM handler.
Nested interrupts should be enabled when the tasklets run in interrupt context. Other interrupts are not blocked, because tasklets never disable interrupts; they only prevent the same tasklet from running concurrently. Of course, a running tasklet blocks the task scheduler, but that is expected and nothing unusual, so yes, other tasks can be blocked. Note that nested interrupts can require more stack space than would otherwise be needed.
I hope this makes sense. Similar logic is used in the fast serial path, where we prevent reentrance of the serial interrupt driver by buffering subsequently received bytes in memory and calling the handler for them only after the previous invocation has returned (see the sketch at the end of these notes). In fact, I think a general Tasklet infrastructure should be moved to the core TinyOS library. It can adapt the Tasklet from the RFXLINK library, but it should allow the creation of independent tasklets (the RFXLINK library has a single one shared by all components, and everyone is notified when someone calls schedule). Let me know if someone would like to play with such an idea.
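Below is a hedged C sketch of the serial fast-path idea: if a receive interrupt fires while the previous byte is still being handled, the new byte is only buffered, and the already running invocation drains the buffer afterwards. The function names, the fixed-size ring buffer, and the ATOMIC_BEGIN/ATOMIC_END placeholders are assumptions for illustration, not the actual serial driver code.

```c
/* Sketch: non-reentrant serial byte handling via a small ring buffer. */
#include <stdint.h>
#include <stdbool.h>

#define BUF_SIZE 16                    /* power of two; overflow handling omitted */

static volatile uint8_t buffer[BUF_SIZE];
static volatile uint8_t head, tail;    /* head: write index, tail: read index */
static volatile bool handling;         /* true while handle_byte() is running */

extern void handle_byte(uint8_t byte); /* the non-reentrant byte handler */

/* Placeholders for briefly disabling interrupts (platform specific). */
#define ATOMIC_BEGIN()
#define ATOMIC_END()

/* Called from the UART receive interrupt with the byte just read. */
void serial_rx_interrupt(uint8_t byte)
{
    ATOMIC_BEGIN();
    buffer[head] = byte;
    head = (uint8_t)((head + 1) % BUF_SIZE);
    if (handling) {
        /* the previous handler has not returned: just remember the
         * byte, the dispatcher loop below will pick it up */
        ATOMIC_END();
        return;
    }
    handling = true;
    ATOMIC_END();

    /* the first interrupt drains the buffer, including bytes that
     * arrive while handle_byte() is still executing */
    for (;;) {
        uint8_t next;

        ATOMIC_BEGIN();
        if (head == tail) {
            handling = false;          /* buffer empty, we are done */
            ATOMIC_END();
            return;
        }
        next = buffer[tail];
        tail = (uint8_t)((tail + 1) % BUF_SIZE);
        ATOMIC_END();

        handle_byte(next);
    }
}
```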