[Stackless] StacklessWSGI example, HTTP daemons in general

Japhy Bartlett japhy at nolimyn.com
Sat May 17 07:58:02 CEST 2008

Hi -

So, I have been playing around with the stacklesswsgi.py, and it's
pretty great -

Unfortunately, it seems to quietly die after a random (as far as I can
tell) period of inactivity.  I added a few debug prints in the main
loop, without any luck.  Anyone have any thoughts on what might be
triggering it?

I also ran some load testing on it with http_load
(http://www.acme.com/software/http_load/), using wsgiref.demo_app as
the application.  Using the -seconds parameter crashes the server when
http_load finishes, due to some error-handling issues.

More interestingly, the server seems to handle loads pretty well (not
surprising, considering the nature of Stackless), and had me thinking
about server models:

( Briefly, is there any info on what happens when Stackless starts
hitting a max number of tasklets?  What determines that upper limit? )

If you kept a counter of the client tasklets in the pool, then as you
started to approach the upper limits of the individual process, you
could spawn a new server process (perhaps listening on 8001) and
start forwarding clients to it.  That process, in turn, could spawn
a new one as it began to peak ( a process per CPU on the host
machine? ), perhaps on another server altogether.
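To make the forwarding idea concrete, here's a rough sketch of what it might look like as WSGI middleware: once the in-flight count passes a threshold, new clients get an HTTP 307 redirect to a sibling process assumed to be listening on port 8001.  The class name, the limit, and the port are all invented for illustration; they aren't part of stacklesswsgi.py.

```python
MAX_IN_FLIGHT = 1000  # assumed per-process ceiling, purely illustrative


class OverflowRedirect:
    """Redirect overflow clients to a sibling server process."""

    def __init__(self, app, next_port=8001, limit=MAX_IN_FLIGHT):
        self.app = app
        self.next_port = next_port
        self.limit = limit
        self.in_flight = 0

    def __call__(self, environ, start_response):
        if self.in_flight >= self.limit:
            # Send the client to the sibling process instead of queueing.
            host = environ.get('SERVER_NAME', 'localhost')
            location = 'http://%s:%d%s' % (
                host, self.next_port, environ.get('PATH_INFO', '/'))
            start_response('307 Temporary Redirect',
                           [('Location', location),
                            ('Content-Type', 'text/plain')])
            return [b'Redirecting to a less busy server.\n']
        self.in_flight += 1
        try:
            return self.app(environ, start_response)
        finally:
            # Simplification: with a streaming response this decrements
            # before the body iterable is fully consumed.
            self.in_flight -= 1
```

Actually spawning the sibling process (and deciding when) is the hard part I'm hand-waving over here; this only shows the forwarding side.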

Anyhow, just an idea that might address Stackless' inability to
take advantage of multiple CPUs.  It makes sense to me, but I'm not
particularly learned about these things.

Worst case, I think that counting the number of tasklets being served
and then throttling and sending Server Busy (503) codes would be
interesting.
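The throttling version is simpler, since no second process is involved: count requests as they enter and leave, and answer 503 once a ceiling is reached.  Again a sketch with invented names (BusyThrottle, the limit of 500), not anything from stacklesswsgi.py.

```python
class BusyThrottle:
    """Return 503 Service Unavailable once too many requests are active."""

    def __init__(self, app, limit=500):
        self.app = app
        self.limit = limit      # assumed ceiling, purely illustrative
        self.active = 0

    def __call__(self, environ, start_response):
        if self.active >= self.limit:
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain'),
                            ('Retry-After', '5')])
            return [b'Server busy, try again shortly.\n']
        self.active += 1
        try:
            return self.app(environ, start_response)
        finally:
            # Same simplification as before: streaming responses are
            # counted as finished once the app returns its iterable.
            self.active -= 1
```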

OK, enough rambling - thanks for some great example code. :)
