[Stackless] Population simulation in Stackless
aaron at reportlab.com
Wed Aug 28 17:29:28 CEST 2002
Here is the official word on free threaded Python from Greg Stein,
which was obtained offlist.
Greg Stein wrote:
>On Wed, Aug 28, 2002 at 01:15:10AM -0400, Aaron Watters wrote:
>>(I copy Greg Stein so he can correct my misstatements)
>>Simon Frost wrote:
>>>Dear Chris and Aaron...
>>>I'm referring to your test programs. When I run them on a dual
>>>processor machine, the load is balanced between them.
>>IO operations release the interpreter lock... if you have enough of them
>>relative to interpreted computations you might get a balanced load. Maybe
>>that is what is happening....
>That could definitely explain a balance, yes.
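[A minimal sketch of the effect being discussed, not from the thread itself: blocking calls release the CPython interpreter lock, so I/O-heavy threads can overlap in wall-clock time even though pure Python bytecode cannot run in parallel.]

```python
import threading
import time

def io_bound(delay):
    # time.sleep releases the interpreter lock while it waits,
    # just as real I/O (sockets, files) does.
    time.sleep(delay)

start = time.perf_counter()
threads = [threading.Thread(target=io_bound, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# The four 0.2 s waits overlap, so total wall time is close to 0.2 s,
# not 0.8 s. Replace the sleep with a CPU-bound loop and the overlap
# disappears, because bytecode execution holds the lock.
```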
>>For "free threaded python" Greg Stein found that you'd have to allocate
>>a lock for every list and acquire and release it for many list operations,
>>etcetera. This ate up the hypothetical
>>advantage of using threads on most systems :(, I think.
>Yah. Python ran somewhere between 30 and 50 percent slower overall. I never
>looked into it hard enough. I believe a good amount was lock contention on
>some central resources. The individual objects' locks were fast.
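[An illustrative sketch of the cost being described: guarding every container operation with a per-object mutex. The `LockedList` class here is hypothetical, not taken from any free-threading patch.]

```python
import threading
import timeit

class LockedList:
    """A list whose every mutation acquires and releases its own lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, item):
        with self._lock:
            self._items.append(item)

plain_list = []
locked_list = LockedList()
t_plain = timeit.timeit(lambda: plain_list.append(0), number=100_000)
t_locked = timeit.timeit(lambda: locked_list.append(0), number=100_000)
# t_locked is typically noticeably larger than t_plain: the lock
# acquire/release dominates an otherwise cheap list operation, which is
# the kind of per-object overhead the thread attributes the slowdown to.
```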
>>For stackless I would head this way (maybe): use channels as the only
>>sharing mechanism, and then develop some sort of a safe (and fast) way to
>>share channels between processes. Then you could balance the microthreads
>>between a number of processes.
>Yup. To expand the concept a bit, if I were to ever do the free-threading
>stuff again, I would add a flag to the relevant objects. If the flag is set,
>then access is synchronized. When the flag is clear, then it can operate at
>full speed. Another, somewhat orthogonal flag is a "read only" flag. That
>would shut down all mutex handling, even on shared data items.
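[A rough stand-in for the idea, not the Stackless API: a `queue.Queue` plays the role of the shared channel, and the threads stand in for the workers that microthreads would be balanced across. The only shared, synchronized objects are the channels themselves; everything else runs unsynchronized, which is the point of Greg's per-object flag.]

```python
import queue
import threading

channel = queue.Queue()   # the shared, synchronized object
results = queue.Queue()

def worker():
    # Each worker only ever touches shared state through the channels.
    while True:
        item = channel.get()
        if item is None:          # sentinel: no more work
            break
        results.put(item * item)  # the "microthread" computation

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
for n in range(10):
    channel.put(n)
for _ in workers:
    channel.put(None)
for w in workers:
    w.join()

total = 0
while not results.empty():
    total += results.get()
# total is the sum of squares 0..9, regardless of how the work
# was balanced between the two workers.
```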