[Tinyos-devel] thoughts on buffer management....
cire831 at gmail.com
Mon Oct 7 00:17:25 PDT 2013
On Sun, Oct 6, 2013 at 6:37 PM, Philip Levis <pal at cs.stanford.edu> wrote:
> I don't think the cost of a single buffer is significant enough to warrant
> a different software architecture.
But it isn't really a different software architecture. A simple addition
to RadioReceive would allow the interface to evolve gracefully.
> The upper layer can always tell it to discard packets by passing it back.
The upper layer can discard the packet, but the LLD doesn't know that: it keeps
copying new packets into a buffer and passing them up.
> The 'extra work' here is pretty minimal, unless you're thinking there are
> power implications.
Definitely power implications. But what it costs depends on how the driver
is implemented. For example, the cc2420x driver busy-waits on each incoming
byte. It doesn't have to be done this way, but that is how it is currently
implemented. So if it is moving data into a buffer that then just gets
tossed, that is much more expensive than if the LLD knows to just drop the
packet via RXFLUSH. If RXFLUSH is used, the cpu will spend a whole bunch
more time sleeping.
Now even if the 2420x driver is implemented in, let's say, a more efficient
fashion, there is still a bunch of work it actually has to do moving the
packet bytes out of the on-chip fifo into the memory buffer. The SPI bus
is on and the cpu is moving things via the interface.
> Not to mention that a given driver might not be able to make this
> judgement, instead having to get it from network layers.
The LLD can never make this judgement. It has to be told by buffer
management/network layers what is going on.
I'm interested in adding a simple change that would make the interface a
superset of the current one, so it can evolve gracefully. Existing code
wouldn't need to support it.
> Can you be more precise what you mean by 'better tuning'?
I'm thinking about the problems the system faces under load and ways for a
given node to gracefully shed load so that nodes continue to be able to
perform in real deployed environments. Yeah yeah, motherhood and apple pie.
The situation in particular that I'm thinking of starts with the premise
that the radio is seeing lots of traffic. The new CC2520 driver (cc2520-v2)
actually uses what the h/w provides to minimize this kind of work.
The tuning I'm referring to is designing in mechanisms that allow the
system, if desired, to adapt to what is happening, i.e. being able to
tell the driver to really stop copying packets into a buffer that will
basically get thrown away, so that the driver can do things differently
that help the system deal with getting hammered.
So consider: the rxfifo on the 2520 can hold multiple packets. And the
premise is that the interface is being inundated but we have temporarily
run out of buffers upstairs. Using the current interface we will need to
process each packet one by one, including extracting the packet data from
the rxfifo and moving it into the packet buffer (which is then thrown away).
If, on the other hand, the LLD knows that we are in a buffer-starvation
situation, it can simply flush the rxfifo rather than actually processing
each packet. This results in a significantly reduced load on the driver
and the cpu when the system starts getting hammered.
This is what I mean by tuning. The system has the knobs to change how
different parts behave as conditions change.
Now in particular, here is what I'd like to add to RadioReceive:
First, if the upper layer (the event handler for RadioReceive.receive) is
in a buffer-starvation condition, it can return NULL, indicating no buffers
are available.
Second, we need a new call into the LLD that tells the driver which buffer
it should use for receiving.
Clearly the upper layer and a given driver have to agree to use this part
of the interface. And if so, the optimization can be accomplished. But
existing layers and drivers don't have to use the new stuff. And having
the interface can allow improvements in the behavior of the driver when in
a saturated environment.
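Here is a minimal sketch of how those two additions could fit together, written in plain C rather than nesC for brevity. All the names (the message_t stand-in, receive_signal, set_receive_buffer, rxfifo_flush, the two-buffer pool) are hypothetical stand-ins, not the actual interface:

```c
#include <stddef.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint8_t data[128]; } message_t;

/* --- toy upper layer: a two-buffer pool standing in for real
 *     buffer management --- */
static message_t pool[2];
static int free_count = 2;

static message_t *pool_get(void) {
    return free_count > 0 ? &pool[--free_count] : NULL;
}

/* RadioReceive.receive analogue: consume 'filled' and return a swap
 * buffer, or NULL to say "no buffers -- stop copying for now" */
static message_t *receive_signal(message_t *filled) {
    (void)filled;            /* pretend we queued it for processing */
    return pool_get();
}

/* --- low-level driver (LLD) side --- */
static message_t driver_buf;                /* buffer parked in the LLD */
static message_t *current = &driver_buf;    /* where the next packet lands */
static bool flushing = false;               /* buffer-starvation mode */
static int flush_count = 0;                 /* for observation only */

static void rxfifo_flush(void) { flush_count++; /* would strobe SFLUSHRX */ }

/* the proposed new call into the LLD: upper layer hands the driver a
 * fresh buffer, taking it back out of flush mode */
void set_receive_buffer(message_t *buf) {
    current = buf;
    flushing = false;
}

/* called when the radio h/w reports a complete packet */
void packet_arrived(void) {
    if (flushing) {          /* starved: drop without copying a byte */
        rxfifo_flush();
        return;
    }
    /* ... copy bytes out of the RXFIFO into *current here ... */
    message_t *next = receive_signal(current);
    if (next == NULL) {      /* upper layer kept the buffer: starvation */
        flushing = true;
    } else {
        current = next;      /* normal swap, exactly as today */
    }
}
```

Drivers that ignore the NULL return and never see set_receive_buffer behave exactly as they do now, which is what makes this a superset of the current interface rather than a replacement.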
> Philip Levis
> Associate Professor
> Computer Science and Electrical Engineering
> Stanford University
> On Oct 6, 2013, at 6:19 PM, Eric Decker <cire831 at gmail.com> wrote:
> > I've been rewriting the CC2520 driver and have been looking at how we
> receive packets and the interactions with upper layers. In particular
> buffer management.
> > By default, a low level driver has a buffer (usually defined in the
> driver itself) that is "parked" and available for use by the low level
> driver (LLD). When the packet is filled in, RadioReceive.receive is
> signaled. On return the LLD is handed the next buffer to use for the next
> packet.
> > Now if we are in buffer starvation, the upper layer is forced to hand
> back the buffer that it has just been handed. There are no other
> provisions for upper layer and/or buffer management code to inform a LLD
> that the system is in buffer starvation and it should just discard incoming
> packets.
> > The upshot is: first, any LLDs in the system will each be holding onto
> buffers (one per LLD), and second, the LLD will be doing work copying
> packets into its only buffer and handing it off to the upper layer, where
> it is thrown away. Not doing the extra work would be better.
> > Clearly, this is how the current system works. And it is clear that it
> does work and is stable enough.
> > However, buffer starvation and mechanisms for flow control push back as
> well as buffer management mechanisms would allow for better tuning in
> system behaviour. And they don't have to be complex.
> > Do we want LLDs to continue to behave this way or is it desirable to add
> slightly more capability to better deal with buffer starvation issues?
> > thoughts?
> > eric
> > --
> > Eric B. Decker
> > Senior (over 50 :-) Researcher
Eric B. Decker
Senior (over 50 :-) Researcher