[Tinyos-devel] Drip design review

Ben Greenstein ben at cs.ucla.edu
Wed Feb 16 10:24:35 PST 2005


Gil,
I believe that the "bother the client" philosophy will only make
applications more difficult to develop. Let me throw out a straw man:
suppose drip had access to a memory pool (gasp!). Then drip could cache
variably sized messages at will. I know dynamic memory access is voodoo east
of the bay, but down south we use it all the time.
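A minimal fixed-block pool along these lines might look like this (a
sketch; the module and interface names are hypothetical, and the block
size is fixed for simplicity):

  module MsgPoolM {
    provides interface MsgPool;
  }
  implementation {
    enum { POOL_BLOCKS = 4, BLOCK_SIZE = 29 };
    uint8_t pool[POOL_BLOCKS][BLOCK_SIZE];
    bool inUse[POOL_BLOCKS];

    command void* MsgPool.alloc(uint8_t len) {
      uint8_t i;
      if (len > BLOCK_SIZE)
        return NULL;
      for (i = 0; i < POOL_BLOCKS; i++) {
        if (!inUse[i]) {
          inUse[i] = TRUE;
          return pool[i];  // caller gets a BLOCK_SIZE-byte block
        }
      }
      return NULL;  // pool exhausted
    }

    command void MsgPool.free(void* p) {
      uint8_t i;
      for (i = 0; i < POOL_BLOCKS; i++) {
        if (p == pool[i])
          inUse[i] = FALSE;
      }
    }
  }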
Ben

----- Original Message ----- 
From: "Gilman Tolle" <gilman.tolle at gmail.com>
To: "tinyos-devel" <tinyos-devel at millennium.berkeley.edu>
Sent: Monday, February 14, 2005 8:51 PM
Subject: [Tinyos-devel] Drip design review


> Before formally releasing the Drip dissemination layer, I'd like your
> feedback on a design point.
>
> The purpose of Drip is to provide reliable dissemination for TinyOS
> messages, from a host to every mote in a network. This is intended to
> replace Bcast.
>
> The core problem of Drip, and why it has been so much harder to design
> than Bcast, is caching. Bcast need only cache a single item of data
> for the small amount of time between receiving it, signaling the upper
> layer, and retransmitting it. To do so, Bcast allocates a single
> TOS_Msg buffer and swaps it with the buffer provided by the lower
> layer.
>
> ** DRIP DESIGN
>
> To provide epidemically reliable delivery by retransmitting with the
> Trickle algorithm, Drip must be able to cache an item of data
> forever. Drip should also provide this reliable delivery for multiple
> different data items with different identifiers, independently. Thus,
> Drip needs to cache multiple data items, with potentially different
> sizes, indefinitely.
>
> Drip could, in theory, allocate one TOS_Msg structure for each client
> component and manage the cache itself. This is incredibly wasteful,
> given that the outer headers are set by the radio, the payload is
> likely to be smaller than the maximum size, and all the payload sizes
> may be different.
>
> Instead, this is the approach Drip takes: the client component is
> responsible for managing the cache. This allows each client to
> allocate just enough space to hold its particular message.
>
> First, the client receives new messages with the standard Receive
> interface:
>
>   ClientM.Receive -> DripC.Receive[AM_CLIENTMSG]
>
>   event TOS_MsgPtr Receive.receive(TOS_MsgPtr msg, void* payload,
>    uint16_t payloadLen) {
>     // client acts on the message here
>     memcpy(&clientCache, payload, sizeof(clientCache));
>     return msg;
>   }
>
> The client is responsible for saving a copy of the message. Then, the
> client must provide that data upon request by Drip. This is done
> through the following interface:
>
>   ClientM.Drip -> DripC.Drip[AM_CLIENTMSG]
>
> The client implements an event:
>
>   event result_t Drip.rebroadcastRequest(TOS_MsgPtr msg, void* payload) {
>     memcpy(payload, &clientCache, sizeof(clientCache));
>     call Drip.rebroadcast(msg, payload, sizeof(clientCache));
>     return SUCCESS;
>   }
>
> This approach has been flexible enough to support messages whose size
> is unknown until runtime, by way of the size field in the
> Drip.rebroadcast() call. It has also been extensible enough to support
> TTL scoping by returning FAIL to the Drip.rebroadcastRequest() event,
> indicating that Drip should not retransmit the message.
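> For concreteness, a TTL-scoped client might implement the event like
> this (a sketch; the ttl field and its placement in clientCache are
> hypothetical):
>
>   event result_t Drip.rebroadcastRequest(TOS_MsgPtr msg, void* payload) {
>     if (clientCache.ttl == 0) {
>       // scope exhausted: returning FAIL tells Drip not to retransmit
>       return FAIL;
>     }
>     clientCache.ttl--;
>     memcpy(payload, &clientCache, sizeof(clientCache));
>     call Drip.rebroadcast(msg, payload, sizeof(clientCache));
>     return SUCCESS;
>   }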
>
> But, it does require that the client perform the drudgery of cache
> management when all it should be doing is receiving notification of
> newly disseminated messages.
>
> ** ALTERNATE DRIP DESIGN
>
> Consider the following alternate approach:
>
>   command result_t StdControl.init() {
>     call Drip.init(&clientCache, sizeof(clientCache));
>   }
>
> The client provides Drip with a pointer to its cache at initialization
> time, and the size of that cache. In doing so, it transfers custody of
> that memory to Drip. Drip will save the pointer to the buffer, and
> will save the length, so that it can copy newly received messages into
> the correct cache and access that cache without bothering the client
> at all.
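> On the Drip side, this alternate design amounts to a small per-client
> table (a sketch; the entry layout and the init() signature are
> assumptions):
>
>   typedef struct {
>     void* cache;     // client buffer, in Drip's custody after init()
>     uint8_t length;  // size of that buffer, fixed at init time
>   } CacheEntry;
>
>   CacheEntry entries[uniqueCount("Drip")];
>
>   command result_t Drip.init[uint8_t id](void* cache, uint8_t length) {
>     entries[id].cache = cache;
>     entries[id].length = length;
>     return SUCCESS;
>   }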
>
> But, we have gained client convenience at the cost of wasted RAM. The
> Drip.init() method, as proposed, includes a pointer to the cache and
> the size of the cache. Drip needs both to perform any memory
> copies. But, both are known at compile time, and will never
> change. Storing them in RAM is unnecessary.
>
> For the length, the client could implement an event, like:
>
>   event uint8_t Drip.getLength() {
>     return sizeof(clientCache);
>   }
>
> If Drip bothered the client for the length, it could avoid storing a
> byte per client. If Drip bothered the client for the cache, as in the
> original Drip design, it could avoid storing a 2-byte cache
> pointer. Additionally, once the length has been stored, that is the
> only length that can be used. If the message happens to be temporarily
> shorter than its maximum length, Drip has no way of avoiding the
> transmission of the unnecessary bytes at the end of the message.
>
> Also, letting Drip store a pointer introduces the possibility of a
> race condition between Drip updating the cache and the client reading
> it. Sharing the buffer would let the client take advantage of the
> fact that it already holds the value, accessing it when necessary
> instead of storing "its own" copy. To cope with this problem,
> Drip could introduce lock() and unlock() methods that the client can
> use to temporarily regain control of the buffer from Drip. Or, Drip
> could perform all cache manipulations in an atomic section. This
> problem is absent when the client owns the cache, and Drip must ask
> for it.
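> As a sketch, the lock()/unlock() variant might look like this on the
> Drip side (not part of the current interface; names are hypothetical):
>
>   bool locked[uniqueCount("Drip")];
>
>   command result_t Drip.lock[uint8_t id]() {
>     if (locked[id]) return FAIL;
>     locked[id] = TRUE;   // Drip will not write the cache while locked
>     return SUCCESS;
>   }
>
>   command void Drip.unlock[uint8_t id]() {
>     locked[id] = FALSE;  // Drip may again copy new messages in
>   }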
>
> ** BOTHER THE CLIENT
>
> I suggest that a broader design principle is in play here: When a
> component can save resources by "bothering the client", should it?
>
> Example 1: we currently save a 2-byte task pointer for each client
> so that the task function can be called directly by the
> scheduler. But, in TinyOS 2.x, we have switched to the "bother the
> client" model, with the following event:
>
>   event void TaskBasic.run() { /* do the task stuff here */ }
>
> With that event, we save 2 bytes per task.
>
> Example 2: in the prototype of the Nucleus attribute system, a
> component providing an attribute provides the length of that attribute
> to an initialization function, similar to that used by Drip. In the
> current version, saving that byte has been replaced with an event:
>
>   event uint8_t Attr.getLength() { return ATTR_SIZE; }
>
> So, I believe that the "bother the client" approach is good for
> cleanliness, concurrency control, and resource savings. Storing
> pointers into other clients' memory seems like a violation of the
> already fragile walls between components.
>
> But, it does require the programmer to do a bit more typing.
>
> Is it worth it? Let me know.
>
> In the future, this problem seems like an ideal application of generic
> components. If each client of Drip instantiated a generic with a type
> argument corresponding to the message structure it would be
> disseminating, this could be used to allocate the right amount of
> cache space for Drip to manage by itself. Drip could also get the
> length from a sizeof() call on the type argument. But, I'd like to
> release Drip sometime before we have enough of TinyOS 2.0 working to
> even consider writing Drip.
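> As a sketch of what such a generic might eventually look like (TinyOS
> 2.x syntax; the component and interface names are hypothetical):
>
>   generic module DripCacheC(typedef msg_t) {
>     provides interface DripCache;
>   }
>   implementation {
>     msg_t cache;  // exactly sizeof(msg_t) bytes, allocated per instance
>
>     command uint8_t DripCache.getLength() {
>       return sizeof(msg_t);  // known at compile time, no RAM spent
>     }
>   }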
>
> P.S. For those of you wondering where the remote group setting
> component is, doing it right requires getting Drip right. I could
> release the single-hop group setter in the meantime.
> _______________________________________________
> Tinyos-devel mailing list
> Tinyos-devel at Millennium.Berkeley.EDU
> http://mail.Millennium.Berkeley.EDU/mailman/listinfo/tinyos-devel


