[Stackless] Event-based scheduling

Laurent Debacker debackerl at gmail.com
Wed Feb 13 22:05:25 CET 2008


First, thanks for all your support :)

I don't understand why the implementation forces me to use channels if I do
not need to 'transfer' data. You are using the channel as a semaphore. I would
think it better to implement semaphores first, and then implement channels
on top of semaphores. Of course, channels make sense for coroutines exchanging
data, but there are cases where no data needs to be transferred. Of course, by
semaphore, I mean semaphores that work with the µthreads, not necessarily the
ones of the kernel. I implemented the latter a while ago in C#.
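For what it's worth, the construction in the other direction is easy to sketch. The following is not Stackless code, just a hypothetical illustration of "channels built from semaphores" using OS-level threading primitives (plain threads stand in for µthreads):

```python
import threading

class Channel:
    """Rendezvous channel built from two counting semaphores.

    Illustration only: OS threads stand in for microthreads here; the
    point is that a channel reduces to a pair of semaphores plus a slot.
    """

    def __init__(self):
        self._item = None
        self._item_ready = threading.Semaphore(0)   # signals: data available
        self._item_taken = threading.Semaphore(0)   # signals: receiver took it

    def send(self, item):
        self._item = item
        self._item_ready.release()   # wake one receiver
        self._item_taken.acquire()   # block until the item is consumed

    def receive(self):
        self._item_ready.acquire()   # block until a sender arrives
        item = self._item
        self._item_taken.release()   # let the sender proceed
        return item
```

A `send()` with no pending `receive()` blocks, and vice versa, which is the rendezvous behaviour Stackless channels have; dropping the second semaphore would give a fire-and-forget signal, i.e. a pure semaphore again.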

I guess I should compare combinations of asyncore, Twisted, Stackless's
scheduler, and a custom scheduler built on the continuations provided by Stackless.

(On a side note, it is scary that so few languages support continuations.)

Laurent.

On Feb 13, 2008 2:44 PM, Arnar Birgisson <arnarbi at gmail.com> wrote:

> Hello Laurent,
>
> On Feb 13, 2008 10:23 AM, Laurent Debacker <debackerl at gmail.com> wrote:
> > Is it possible to schedule microthreads based on events? For example,
> > let's suppose that a µthread writes data through a socket. The write
> > will block the µthread, but is asynchronous for the real thread.
> > Consequently, the async write is started, the scheduler is called, and
> > the µthread is put on a waiting list until its write is completed. In
> > addition, the scheduler shall never call back a µthread that is still
> > blocked, and stackless.run() may not return while there are still
> > blocked µthreads.
>
> In general you do this by having the tasklet create a channel, store it
> somewhere, and read from it. This blocks the µthread. When the event
> happens, the event-dispatching code writes to the channel, which wakes
> up the µthread.
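The pattern described here can be sketched in plain Python, with generators standing in for tasklets and a hypothetical `fire_event` callback standing in for I/O completion (none of these names come from the Stackless API):

```python
class Channel:
    """Parking spot for one microthread: receive() suspends the caller
    until the event dispatcher delivers a value via the scheduler."""

    def __init__(self):
        self.waiter = None          # the parked generator, if any

    def receive(self):
        value = yield self          # yield suspends us; scheduler parks us
        return value

def run(tasklet, fire_event):
    """Tiny event loop: run the tasklet until it blocks on a channel,
    simulate the event with fire_event(channel), then wake the waiter."""
    try:
        ch = tasklet.send(None)     # run until the tasklet yields a channel
    except StopIteration:
        return
    ch.waiter = tasklet             # put the µthread on the waiting list
    value = fire_event(ch)          # the event happens (e.g. write done)
    try:
        ch.waiter.send(value)       # event dispatcher wakes the µthread
    except StopIteration:
        pass                        # tasklet ran to completion

results = []

def worker(ch):
    data = yield from ch.receive()  # "blocks" only this microthread
    results.append(data)

ch = Channel()
run(worker(ch), lambda c: "write-completed")
```

The real Stackless machinery is richer (a `channel.send()` reschedules tasklets rather than being called from a hand-written loop), but the shape — block on a channel, have the dispatcher write to it — is the same.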
>
> > I have looked at the wsgi server sample, but the performance becomes
> > so poor on Windows when the number of concurrent connections
> > increases. In addition, there is that strange asyncore_loop function
> > with the poll call which scares me.
>
> The current WSGI server uses the asyncore module from the standard
> Python library. Its performance is certainly not optimal. I think
> someone was working on (or had already completed) a WSGI server based
> on libevent's http module, something I've been meaning to do myself
> but time escapes me. Let's hope the person in question speaks up.
>
> > What I want is a WSGI server with one thread per CPU/core, and one
> > µthread per connection :)
>
> If you wrote a WSGI middleware that dispatched requests to multiple
> underlying WSGI servers (in your case one for each CPU/core) and
> "fixed" the performance problems for the current WSGI server on
> stacklessexamples you should be in business.
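The dispatching half of that suggestion can be sketched as a tiny WSGI middleware. The class name and round-robin policy are mine, not anything from stacklessexamples, and note that in CPython real per-core parallelism would need one process per core rather than one thread, because of the GIL; the sketch only shows the dispatch shape:

```python
import itertools

class RoundRobinDispatcher:
    """Hypothetical WSGI middleware that rotates requests across several
    backend WSGI apps (one per CPU/core in the design discussed above).
    Here the backends are plain callables; a real deployment would proxy
    to per-core server processes instead."""

    def __init__(self, apps):
        self._apps = itertools.cycle(apps)

    def __call__(self, environ, start_response):
        app = next(self._apps)              # pick the next backend in turn
        return app(environ, start_response)
```

Smarter policies (least-busy, connection affinity) would slot into `__call__` without changing the WSGI-facing interface.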
>
> cheers,
> Arnar
>

