[Stackless] Implementing CSP
tom at livelogix.com
Thu Jan 8 15:36:46 CET 2004
> > I'm not sure I'm following you here. My current ALT algorithm (which
> > I'll post tomorrow) is basically the original transputer algorithm (in
> > case anyone is wondering, that's the processor that ran occam). It's
> > O(n) where n = no of channels alting over. A lot of very bright people
> > thought hard about this algorithm - it's not likely to be bettered.
> In FreeBSD, you can ask for "give me as much as possible", but
> asking for a single channel at a time is the same thing here.
> If you are waiting on N channels, and K have data for you,
> then FreeBSD's KQueue implementation runs in O(K), while ALT
> runs in O(N), since it wastes all the time on building structures.
This sounds more like a shared channel than an ALT (i.e. a channel that
can queue up multiple writers, as all stackless channels can).
Also, the O(N) behaviour of the standard ALT algorithm does not come
from building structures, but rather from polling each channel to see if
it wants to talk. In fact it polls each channel twice, first to 'enable'
it, then later to 'disable' it.
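To make the two-pass structure concrete, here is a minimal sketch of that enable/disable discipline, using a toy Channel class rather than Stackless's real channel API (the class and its attributes are illustrative assumptions, not the actual transputer or Stackless code):

```python
from collections import deque

class Channel:
    """Toy channel: writers queue values; 'alting' marks an enabled ALT."""
    def __init__(self):
        self.writers = deque()   # values left by blocked writers
        self.alting = None       # set while an ALT has this channel enabled

    def ready(self):
        return bool(self.writers)

def alt_select(channels):
    """Pick one ready channel, transputer-ALT style.

    Each channel is polled twice - once to 'enable' it, once to
    'disable' it - so the cost is O(N) in len(channels) regardless
    of how many channels are actually ready.
    """
    chosen = None
    # Pass 1: enable every channel, noting the first one that is ready.
    for i, ch in enumerate(channels):
        ch.alting = True                     # the 'enable' poll
        if chosen is None and ch.ready():
            chosen = i
    # (A real ALT would block here if no channel was ready.)
    # Pass 2: disable every channel again.
    for ch in channels:
        ch.alting = None                     # the 'disable' poll
    if chosen is None:
        raise RuntimeError("no channel ready (a real ALT would block)")
    return chosen, channels[chosen].writers.popleft()
```

Both loops always walk the full channel list, which is exactly where the O(N) behaviour comes from.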
> This is a relevant issue.
> I'm not talking about simple sample programs with 5
> channels, but people implementing WEB service, waiting
> on 10K channels.
You are right! To use an O(N) algorithm on so many channels would be a
very bad design decision. What this means to me is that you should not
be ALTing at all. Again, this sounds like an appropriate situation for a
single shared channel. ALTing is for different situations - in
particular when you want to provide service to some clients, but refuse
to service others.
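The shared-channel alternative amounts to this: all 10K writers queue onto one channel, so the reader never scans a channel list at all. A rough sketch (again a toy class, not the Stackless channel itself):

```python
from collections import deque

class SharedChannel:
    """One channel shared by many writers; the reader dequeues in O(1)."""
    def __init__(self):
        self.queue = deque()

    def send(self, value):
        # Any number of writers may call this; they simply queue up.
        self.queue.append(value)

    def receive(self):
        # Cost per message is O(1), independent of how many writers exist.
        return self.queue.popleft()
```

The trade-off is that the reader cannot selectively refuse some writers, which is precisely the case where ALT earns its keep.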
> So, if you have started an ALT statement on 10K channels, and
> you got one channel answering, and you want to process
> the next one, you don't want to poll all the 10K channels again.
> That's the KQueue advantage, it just recycles.
It sounds like what KQueue provides is not ALT-like behaviour.
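For comparison, the recycling idea being described looks roughly like a readiness list: a channel registers itself once when data first arrives, and the consumer only touches the K ready channels. This sketch is my own illustration of that pattern, not KQueue's actual implementation:

```python
from collections import deque

class ReadyList:
    """Shared list of channels that currently have pending data."""
    def __init__(self):
        self.ready = deque()

class EventChannel:
    def __init__(self, ready_list):
        self.ready_list = ready_list
        self.queue = deque()

    def send(self, value):
        # On the first pending item, register ourselves as ready - no
        # need for the consumer to rescan all N channels.
        if not self.queue:
            self.ready_list.ready.append(self)
        self.queue.append(value)

def drain(ready_list):
    """Yield (channel, value) pairs for ready channels only - O(K)."""
    while ready_list.ready:
        ch = ready_list.ready.popleft()
        while ch.queue:
            yield ch, ch.queue.popleft()
```

With 10K channels of which three have data, the consumer does three units of work, not ten thousand - but nothing here arbitrates between channels the way an ALT does.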
> There is an article about that, comparing different approaches
> on waiting on multiple events most efficiently, but I can't
> find it right now. Will send it later.
Great - I'd like to see that.
All the best
p.s. I've had fun hacking with Stackless today - thanks for a great job!
Stackless mailing list
Stackless at stackless.com
More information about the Stackless mailing list