[Stackless] Algorithm for Stackless Scheduler

Andrew Francis andrewfr_ice at yahoo.com
Fri Nov 21 19:59:06 CET 2008


Hi Jeff:

--- On Wed, 11/12/08, Jeff Senn <senn at maya.com> wrote:

> Heh.  Well "all" is a taller order... one I'm not prepared to guess at
> w/o a time-consuming code review...

I am not sure what you mean. I am thinking about the operations that can cause a reschedule (i.e., schedule(), blocking on a channel, an expired time slice).

JAS> In learning-to-fish mode: If you want to examine pre/post
JAS> conditions you can certainly easily examine the runnables by using
JAS> stackless.getcurrent() and tasklet.next/tasklet.prev...
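For the record, here is the sort of inspection I take that to mean - a minimal sketch that walks the circular chain of runnables from the current tasklet via tasklet.next (assuming a Stackless build where those attributes behave as you describe):

    import stackless

    def dump_runnables():
        # Walk the circular chain of runnable tasklets, starting
        # at the currently running one and following tasklet.next.
        start = stackless.getcurrent()
        t = start
        while True:
            print t
            t = t.next
            if t is start:
                break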

I started to read channelobject.c, and its comments give a fairly good explanation of the algorithm.

In the case of receiver preference (the default), on a send() the data is transferred, the receiving tasklet is placed at the head of the runnables list, and it is scheduled.
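A minimal sketch of how I read that (assuming channel.preference of -1 means receiver preference, the default, and 1 means sender preference):

    import stackless

    def receiver(ch):
        # Under receiver preference, this tasklet runs immediately
        # when the sender calls ch.send().
        print "received:", ch.receive()

    def sender(ch):
        ch.send("hello")
        print "back in sender"

    ch = stackless.channel()
    ch.preference = -1    # -1: prefer receiver (default), 1: prefer sender
    stackless.tasklet(receiver)(ch)
    stackless.tasklet(sender)(ch)
    stackless.run()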

AF> I am assuming in the case of sender preference (and a
AF> single producer many receivers), things are faster because
AF> fewer context switches occur.
 
JAS> Hm. "Faster" is not the most precise word here. 

You are right: I should be more precise in my terms. By 'faster' I meant the time it took a producer with sender preference to send n messages. Nevertheless, it was an off-the-cuff remark.

JAS> It might mean absolute number of cycles to finish a problem,
JAS> or latency to get to some particular answer... realize the
JAS> *order* may or may not affect the number of
JAS> context switches (it is difficult to predict except in
JAS> extremely simple cases)

True. I have included a simple programme to test things like average response time, throughput, and execution time with respect to channel preference. The example is simple but still insightful.
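To give a flavour of the harness (this is a sketch along the same lines, not the attached serverTest3.py; the names are mine):

    import time
    import stackless

    def consumer(ch, waits):
        t0 = time.time()
        ch.receive()
        waits.append(time.time() - t0)   # crude per-consumer response time

    def producer(ch, n):
        for i in range(n):
            ch.send(i)

    def run_test(n, preference):
        ch = stackless.channel()
        ch.preference = preference
        waits = []
        for i in range(n):
            stackless.tasklet(consumer)(ch, waits)
        stackless.tasklet(producer)(ch, n)
        start = time.time()
        stackless.run()
        elapsed = time.time() - start
        print "pref=%2d n=%5d elapsed=%.4f avg_wait=%.4f" % (
            preference, n, elapsed, sum(waits) / len(waits))

    for pref in (-1, 1):
        for n in (10, 100, 1000):
            run_test(n, pref)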

JAS> Anyway... remember that Stackless (if you are using
JAS> soft-switching, i.e. not involving stacks in
JAS> C extensions) has (relatively speaking) *very* fast context
JAS> switching.

Yes, context switching is fast.

Some observations...

From the tests I ran (10, 100, 10000 tasklets), all other things being equal, the context switches start to add up in the single-producer, many-consumers example (on my ASUS EEE, the cross-over was at 1000 tasklets). With 1000 consumers and 1 producer, sender preference results in 2002 context switches (2 of them producer context switches); with receiver preference, 3002. The execution time was also slightly shorter (I don't know about the precision of time.time(), so I have to look into that).
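For what it is worth, I counted switches with a schedule callback (this assumes stackless.set_schedule_callback is available in your build; it is called with the outgoing and incoming tasklets on every switch):

    import stackless

    switches = [0]

    def schedule_cb(prev, next):
        # Invoked by the scheduler on every tasklet switch.
        switches[0] += 1

    def consumer(ch):
        ch.receive()

    def producer(ch, n):
        for i in range(n):
            ch.send(i)

    def count_switches(n, preference):
        ch = stackless.channel()
        ch.preference = preference
        for i in range(n):
            stackless.tasklet(consumer)(ch)
        stackless.tasklet(producer)(ch, n)
        switches[0] = 0
        stackless.set_schedule_callback(schedule_cb)
        stackless.run()
        stackless.set_schedule_callback(None)
        return switches[0]

    for pref, name in ((1, "sender"), (-1, "receiver")):
        print "%s preference: %d switches" % (name, count_switches(1000, pref))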

Things would be different if I started to add, say, CPU "weight" to either the sender or the receiver.

I guess what I am trying to do is apply various process-analysis ideas (and with them, the typical "levers") to the scheduling algorithm - so bear with me.


AF> Another question, at what stage is information
AF> transferred from the sender to the receiver?

JAS> Hm. Not sure what you mean by "transferred" -
JAS> remember Stackless is all memory references... so
JAS> the address of the object is just handed to the receiver as
JAS> it picks up in its context.

Again, you are right, and the channelobject.c comments explain things. What interests me are issues of safety. I am also interested in the process-analysis concept of "work in progress."
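In other words, the receiver gets the very same object, not a copy, which is exactly why the safety question interests me. A quick check (a sketch, nothing more):

    import stackless

    def receiver(ch, original):
        obj = ch.receive()
        # The reference is handed over as-is: same object, no copy.
        print "same object:", obj is original
        obj.append("touched by receiver")   # visible to the sender too

    def sender(ch, payload):
        ch.send(payload)

    payload = ["work", "in", "progress"]
    ch = stackless.channel()
    stackless.tasklet(receiver)(ch, payload)
    stackless.tasklet(sender)(ch, payload)
    stackless.run()
    print payload   # shows the receiver's mutation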

Cheers,
Andrew 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: serverTest3.py
Type: text/x-python
Size: 1646 bytes
Desc: not available
URL: <http://www.stackless.com/pipermail/stackless/attachments/20081121/35be07a9/attachment.py>