[Stackless] Event-based scheduling

Carlos Eduardo de Paula carlosedp at gmail.com
Wed Feb 13 21:45:34 CET 2008


Instead of using one thread per core, you could use one interpreter
per core. With that, the GIL problem is gone.

Take a look at the processing module (http://pypi.python.org/pypi/
processing). It makes this much easier: you work with multiple
interpreters as transparently as you would with multiple threads.
It even has IPC implemented and works on multiple
platforms...
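For reference, the processing package was later folded into the standard
library as multiprocessing (Python 2.6+); here is a minimal sketch of the
one-interpreter-per-core idea using that spelling of the same API:

```python
# Each worker runs in its own interpreter process, so CPU-bound
# work scales across cores without contending for a single GIL.
from multiprocessing import Pool

def square(n):
    # Executed in a worker process, not the main interpreter.
    return n * n

def parallel_squares(numbers, workers=2):
    # e.g. workers = number of cores; Pool handles the IPC.
    with Pool(processes=workers) as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    print(parallel_squares(range(5)))  # [0, 1, 4, 9, 16]
```

Pool.map has the same call shape as the builtin map, which is what makes
the multi-interpreter version feel like ordinary threaded code.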

Carlos


On 13/02/2008, at 16:38, Andreas Kostyrka wrote:

>
> First hint: you do not need more than one thread ;)
>
> => Python has a Global Interpreter Lock, which means that at most one
> thread runs inside the Python interpreter at a given time.
>
> Second, Stackless has the given "abstraction" in the form of channels.
> µthreads can block on channels and get resumed when something is sent
> to the channel.
>
> Third, the WSGI sample is just a demo. AFAIK, it does not even set the
> socket reuse flag, ...
>
> And as a last hint: on my 2.5-year-old Centrino 1.6GHz laptop, running
> the wsgiref demo app, I have managed between 500 and 750 requests per
> second, depending on how many concurrent requests (100-1000) I make
> Apache Bench issue. Philosophically, it's probably at least partially
> limited by ab running on the same host as the tested server.
> (Admittedly, on Linux, but then, why would you want to use a legacy
> system for heavy computing? :-P)
>
> Andreas
>
> Laurent Debacker wrote:
> | Hello,
> |
> | I'm new to stackless, so I apologize if this is trivial.
> |
> | Is it possible to schedule microthreads based on events? For example,
> | suppose a µthread writes data through a socket. The write blocks the
> | µthread, but is asynchronous for the real thread. Consequently, the
> | async write is started, the scheduler is called, and the µthread is
> | put on a waiting list until its write is completed. In addition, the
> | scheduler should never resume a µthread that is still blocked, and
> | stackless.run() may not return while there are still blocked µthreads.
> |
> | I have looked at the WSGI server sample, but the performance becomes
> | so poor on Windows when the number of concurrent connections
> | increases. In addition, there is that strange asyncore_loop function
> | with the poll call, which scares me.
> |
> | What I want, is a WSGI server, with one thread per CPU/core, and one
> | µthread per connection :)
> |
> | Regards,
> | Laurent.
> |
> |
> |  
> ------------------------------------------------------------------------
> |
> | _______________________________________________
> | Stackless mailing list
> | Stackless at stackless.com
> | http://www.stackless.com/mailman/listinfo/stackless
>
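The channel abstraction Andreas mentions looks roughly like this (a
minimal sketch; it needs the Stackless interpreter, so it will not run
on vanilla CPython):

```python
import stackless

ch = stackless.channel()

def consumer():
    # receive() blocks this tasklet only -- the OS thread is free
    # to run other tasklets in the meantime.
    print("got", ch.receive())

def producer():
    ch.send(42)  # resumes the tasklet blocked in receive()

stackless.tasklet(consumer)()
stackless.tasklet(producer)()
stackless.run()
```

send() and receive() rendezvous on the channel, which is exactly the
"block on an event, get resumed later" behaviour Laurent is asking for.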

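Laurent's event-based scheduling idea can also be approximated without
Stackless at all. A toy sketch in plain Python, where generators stand
in for µthreads and a hand-rolled loop parks them until their "I/O"
completes (all names here are illustrative, not any Stackless API):

```python
from collections import deque

log = []

def run_loop(tasks):
    # Generators play the role of µthreads: yielding an event key
    # means "block me until that event fires".
    ready = deque(tasks)
    waiting = {}  # event key -> parked generator
    while ready or waiting:
        while ready:
            task = ready.popleft()
            try:
                event = next(task)   # run until the task "blocks"
                waiting[event] = task
            except StopIteration:
                pass                 # task finished
        # Pretend every pending event has completed; a real server
        # would learn this from select()/poll() readiness instead.
        for event in list(waiting):
            ready.append(waiting.pop(event))

def writer(name):
    log.append(name + ":start")
    yield "write:" + name        # "async write" parks this µthread
    log.append(name + ":done")   # resumed once the write completed

run_loop([writer("a"), writer("b")])
print(log)  # ['a:start', 'b:start', 'a:done', 'b:done']
```

Note that run_loop only returns once the waiting table is empty, which
mirrors Laurent's requirement that stackless.run() must not return while
blocked µthreads remain.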



