[Stackless] Implementing CSP

Tom Locke tom at livelogix.com
Thu Jan 8 04:58:32 CET 2004


[I sent this yesterday, but (by accident) only to Chris]

Hi Chris,

[this got a bit long because I thought I'd better merge our two parallel
conversations into one (concurrent is not always better!)]

> > Timers - check? I've seen some evidence of this but I've not figured
> > it out yet. Does the current scheduler implement a timer queue? If
> > so, how does this interact with support for 'user schedulers'?
> 
> Did you say CSP or TCSP? :-)
> No, we don't have timers, yet.
> We also don't have priorities, which are needed
> as well.

Priorities would be cool but I think they only become really important
in hard real-time situations - not one of Python's strengths!

Timers, though, are very important. Right now we can do this:

import time
import stackless

def sleep(duration):
    # busy-wait: keep yielding to the scheduler until the deadline passes
    until = time.clock() + duration
    while time.clock() < until:
        stackless.schedule()

But we really want timers without the busy wait. The basic idea (you
probably know this) is to have two process queues. One is the 'regular'
circular queue of runnable processes, the other is the timer queue,
sorted by wake-up time (the process with the earliest wake-up at the
head). Scheduling then looks something like this:

def schedule():
    # pseudocode: wake an expired timer first, otherwise round-robin
    if timer_queue and timer_queue.head.wakeup_time <= current_time():
        run(timer_queue.pop_head())
    else:
        run(regular_queue.head)

We also need something like tasklet.sleep_until(t), which basically
means: remove the tasklet from the regular queue and insert it (sorted
by wakeup time) into the timer queue.
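
Here's a minimal sketch of that idea in pure Python, layered on the
existing stackless module rather than on the C scheduler (sleep_until,
sleep and run_timers are names I've just made up). Sleepers get parked
on private channels and one housekeeping tasklet wakes them, which gives
roughly the two-queue behaviour without the busy wait:

import heapq
import time
import stackless

_timer_queue = []           # heap of (wakeup_time, channel) pairs

def sleep_until(t):
    # park the calling tasklet on a private channel; blocking on
    # receive() takes it off the runnable queue, so there is no busy wait
    ch = stackless.channel()
    heapq.heappush(_timer_queue, (t, ch))
    ch.receive()

def sleep(duration):
    sleep_until(time.time() + duration)

def run_timers():
    # housekeeping tasklet: wake every sleeper whose deadline has passed,
    # then hand control back to the regular round-robin scheduling
    while True:
        now = time.time()
        while _timer_queue and _timer_queue[0][0] <= now:
            wakeup, ch = heapq.heappop(_timer_queue)
            ch.send(None)   # makes the sleeping tasklet runnable again
        stackless.schedule()

# start the housekeeper once; any tasklet may then call sleep()
stackless.tasklet(run_timers)()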
 
> > I'd also *really* appreciate some help with understanding the
> > architecture. I know that's a hopelessly open ended request, but if
> > anyone would care to post (or point me to) a short high-level
> > description, I can drill down with more specific questions if/when I
> > need to.
> 
> I will come back to that one!

I'm going to be sticking to Python for a while, so don't worry.

> > Perhaps a very brief glossary of important functions and data types?
> 
> help(stackless) will tell you most of it.
> 
> > I'll be slogging through this the hard way anyway, over the coming
> > week. At least there's not too much code there - just 14 .c files in
> > src/Stackless. Anywhere else I should be looking?
> 
> You don't need to care about the small assembly pieces, the
> pickling support and such. Tasklets and channels are *it*,
> and they don't hide much functionality, most is in the
> doc strings.

Here I think you are talking about learning to program with the
stackless module, right? I was actually concerned with learning about
the *internal* architecture - the C code. But like I said, I'm going to
stay in Python as much as I can, so don't worry.

[2nd post]

> Note: Even the channels are not necessarily built-in, since
> I simulated them, too, for my Zope demo.

Really? They seem pretty fundamental in the C code. Could you explain
how you simulated channels?

> It is crucial
> to find the "right" (meaning most effective) implementation
> of "ALT". Especially, I'd like to get at the order of speed of
> FreeBSD's kernel queues. Their primary trick is that they don't
> rebuild a huge list of channels to be waited for on every call,
> but they can reuse a given structure.
> Given that we are waiting on events on N channels, and we got
> answers on K channels, then computation time for the KEvent
> approach is dependent from K, not N.
> 
> That means I'm not only aiming on a general ALT that creates
> a new list all the time, but also something that re-uses a list
> effectively. Makes sense?

I'm not sure I'm following you here. My current ALT algorithm (which
I'll post tomorrow) is basically the original transputer algorithm (if
anyone is wondering, that's the processor that ran occam). It's O(n)
where n is the number of channels being alted over. A lot of very bright
people have thought hard about this algorithm - it's not likely to be
bettered.
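
Just to illustrate the O(n) part: a completely naive polling version
over the stackless module would look something like this. This is *not*
the transputer algorithm and not what I'll post - alt() is an invented
name, and I'm assuming channel.balance > 0 means a sender is already
waiting:

import stackless

def alt(channels):
    # naive O(n) polling ALT, for illustration only: scan all n guard
    # channels, take the first one with a sender already blocked on it,
    # otherwise yield to the scheduler and try again
    while True:
        for ch in channels:
            if ch.balance > 0:
                return ch, ch.receive()
        stackless.schedule()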

I'm not familiar with FreeBSD kernel queues. Can you describe the
situation a bit more? My initial take on "waiting on events on N
channels" is that it sounds more like a PAR, e.g.:
(this is a funny occam/stackless pseudocode)

  PAR
    SEQ
      x = chan1.receive()
      ... process message
    SEQ
      x = chan2.receive()
      ... process message
    SEQ
      x = chan3.receive()
      ... process message

Here the performance depends on process startup/termination and context
switch times. On occam/transputer these were *incredible*. The general
idea was "concurrency with the cost of looping, communication with the
cost of arithmetic".
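
For comparison, here's roughly how that PAR could be spelled with plain
tasklets (just a sketch - process_message is a stand-in for the
"... process message" bodies, and I've left out waiting for all three
branches to finish, which occam's PAR would do):

import stackless

def process_message(x):
    # stand-in for the "... process message" bodies above
    pass

def branch(chan):
    # one SEQ branch of the PAR: receive a message, then process it
    process_message(chan.receive())

chan1 = stackless.channel()
chan2 = stackless.channel()
chan3 = stackless.channel()

# the PAR: one tasklet per branch
for ch in (chan1, chan2, chan3):
    stackless.tasklet(branch)(ch)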

Time to go to bed (I've moved a long way since my original post, can you
tell where I am now?)

Tom.

p.s. My Stackless install crashes whenever a tasklet raises an
exception. I haven't looked into it further yet - e.g. whether catching
the exception prevents the crash. Is this a known problem? Something I'm
doing wrong?

