[Stackless] Population simulation in Stackless

Aaron Watters aaron at reportlab.com
Wed Aug 28 17:29:28 CEST 2002


Here is the official word on free-threaded Python from Greg Stein,
obtained off-list.

Greg Stein wrote:

>On Wed, Aug 28, 2002 at 01:15:10AM -0400, Aaron Watters wrote:
>
>>(I copy Greg Stein so he can correct my misstatements)
>>
>>Simon Frost wrote:
>>
>>>Dear Chris and Aaron...
>>>I'm referring to your test programs.  When I run them on a dual-processor
>>>machine, the load is balanced between them.
>>>
>>IO operations release the interpreter lock... if you have enough of them
>>compared to interpreter computations you might get a balanced load.
>>Maybe that is what is happening....
>>
>
>That could definitely explain a balance, yes.
>
>>...
>>For "free threaded python" Greg Stein found that you'd have to allocate 
>>a lock
>>for every list and get and release it for many list operations, 
>>etcetera.  This ate up the hypothetical
>>advantage of using threads on most systems :(, I think.
>>
>
>Yah. Python ran somewhere between 30 and 50 percent slower overall. I never
>looked into it hard enough. I believe a good amount was lock contention on
>some central resources. The individual objects' locks were fast.
>
>>For Stackless I would head this way (maybe): use channels as the only
>>communication mechanism, and then develop some sort of a safe (and fast)
>>way to share channels between processes.  Then you could balance the
>>microthreads between a number of processes (maybe).
>>
>
>Yup. To expand the concept a bit, if I were to ever do the free-threading
>stuff again, I would add a flag to the relevant objects. If the flag is set,
>then access is synchronized. When the flag is clear, then it can operate at
>full speed. Another, somewhat orthogonal flag is a "read only" flag. That
>would shut down all mutex handling, even on shared data items.
>
>Cheers,
>-g
>
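
On the load-balancing point above: blocking IO calls (socket reads, sleeps,
and so on) release the interpreter lock while they wait, so IO-heavy threads
can overlap even with the lock in place. A minimal sketch, using time.sleep()
as a stand-in for blocking IO (the names and timings are only illustrative):

    import threading
    import time

    def io_bound(name):
        for _ in range(3):
            # time.sleep() stands in for a blocking IO call; the interpreter
            # lock is released while it waits, so the other thread can run.
            time.sleep(0.1)
            print(name, "finished one IO wait")

    threads = [threading.Thread(target=io_bound, args=(n,)) for n in ("a", "b")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()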

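The channel idea could look roughly like this. Stackless does not give us
channels that cross process boundaries, so this sketch uses the standard
multiprocessing.Queue purely as a stand-in for such a shared channel; the
worker/jobs/results names are hypothetical:

    from multiprocessing import Process, Queue

    def worker(jobs, results):
        # Each worker talks to the rest of the system only through the two
        # channel-like queues it was handed; there is no shared mutable state.
        while True:
            item = jobs.get()
            if item is None:          # sentinel: no more work
                break
            results.put(item * item)  # stand-in for a microthread's real work

    if __name__ == "__main__":
        jobs, results = Queue(), Queue()
        procs = [Process(target=worker, args=(jobs, results)) for _ in range(2)]
        for p in procs:
            p.start()
        for n in range(10):
            jobs.put(n)
        for _ in procs:
            jobs.put(None)            # one sentinel per worker
        print(sorted(results.get() for _ in range(10)))
        for p in procs:
            p.join()

If every tasklet only ever touched channels, a scheduler would be free to move
tasklets between such processes without worrying about other shared state.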

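Greg's flag idea can be mocked up at the Python level to show the intent, even
though the real change would live in the C object implementations. The class
and flag names here are invented for illustration:

    import threading

    class FlaggedList:
        # Only pay for locking when the object is marked as shared; a
        # read-only flag skips mutex handling entirely.
        def __init__(self, shared=False, read_only=False):
            self._items = []
            self._shared = shared
            self._read_only = read_only
            self._lock = threading.Lock() if shared else None

        def append(self, item):
            if self._read_only:
                raise TypeError("object is marked read-only")
            if self._shared:
                with self._lock:           # synchronized path for shared objects
                    self._items.append(item)
            else:
                self._items.append(item)   # fast path: no lock taken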
