[Stackless] Event-based scheduling

Arnar Birgisson arnarbi at gmail.com
Wed Feb 13 14:44:13 CET 2008

Hello Laurent,

On Feb 13, 2008 10:23 AM, Laurent Debacker <debackerl at gmail.com> wrote:
> Is it possible to schedule microthreads based on event? For example, let's
> suppose that a µthread writes data through a socket. The write will block
> the µthread, but is asynchronous for the real thread. Consequently, the
> async write is started, the scheduler is called, and the µthread is put on a
> waiting list until its write is completed. In addition, the scheduler shall
> never call back a µthread that is still blocked, and stackless.run() may
> not return while there are still blocked µthreads.

In general you do this by having the µthread create a channel, store it
somewhere, and read from it. This blocks the µthread. When the event
happens, the event-dispatching code writes to the channel, which wakes
the µthread up.
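A minimal sketch of the pattern. Since Stackless isn't available everywhere, this stands in for `stackless.channel()` with the standard library's `queue.Queue` and a real thread, so it runs on plain CPython; in Stackless you would use a real channel, a tasklet, and `channel.send()`/`channel.receive()` instead:

```python
import queue
import threading
import time

def microthread(ch, results):
    # The "µthread": blocks here until the event arrives,
    # just as channel.receive() would block a tasklet.
    data = ch.get()
    results.append(data)

def event_dispatcher(ch):
    # Simulates the event loop noticing that the async write
    # completed and waking up the blocked µthread,
    # just as channel.send(...) would.
    time.sleep(0.1)
    ch.put("write completed")

ch = queue.Queue()
results = []
t = threading.Thread(target=microthread, args=(ch, results))
t.start()
event_dispatcher(ch)
t.join()
print(results)  # ['write completed']
```

The key property is the same in both cases: the waiting side blocks on the channel without polling, and the event side wakes exactly the waiter that asked for that event.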

> I have looked at the wsgi server sample, but the performance become so poor
> on windows when the number of concurrent connection increases. In addition,
> there is that strange asyncore_loop function with the pool call which scares
> me.

The current WSGI server uses the asyncore module from the standard
Python library. Its performance is certainly not optimal. I think
someone was working on (or had already completed) a WSGI server based
on libevent's http module, something I've been meaning to do myself but
time escapes me. Let's hope the person in question speaks up.

> What I want, is a WSGI server, with one thread per CPU/core, and one µthread
> per connection :)

If you wrote a WSGI middleware that dispatched requests to multiple
underlying WSGI servers (in your case, one per CPU/core) and "fixed"
the performance problems of the current WSGI server in
stacklessexamples, you should be in business.
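The middleware half of that idea can be sketched in a few lines. This is a hypothetical round-robin dispatcher over plain WSGI callables (the helper names `make_balancer`, `app_a`, `app_b` are mine, not from the Stackless examples); in a real deployment the backends would run in separate OS threads or processes, one per core:

```python
import itertools

def make_balancer(apps):
    """Round-robin WSGI middleware: each incoming request is handed
    to the next app in the list (e.g. one server per CPU/core)."""
    cycle = itertools.cycle(apps)
    def balancer(environ, start_response):
        app = next(cycle)
        return app(environ, start_response)
    return balancer

# Two trivial backend apps, standing in for real WSGI servers.
def app_a(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"served by A"]

def app_b(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"served by B"]

balanced = make_balancer([app_a, app_b])
bodies = [balanced({}, lambda status, headers: None)[0] for _ in range(4)]
print(bodies)  # [b'served by A', b'served by B', b'served by A', b'served by B']
```

Round-robin is the simplest policy; a smarter balancer could track in-flight requests per backend, but the WSGI interface stays the same either way.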

