[Stackless] stackless and multi-core

Jeff Senn senn at maya.com
Mon Apr 28 17:39:59 CEST 2008


On Apr 28, 2008, at 10:31 AM, Andrew Francis wrote:
>
> I don't know much about multiple CPU programming and
> don't understand all the issues surrounding the GIL.
> However, from my understanding, from a behavioral
> standpoint, the GIL acts on a principle similar to
> the watchdog in the Stackless scheduler. Again, how to
> exploit this?

Hm.  I don't understand your analogy with the "watchdog".

You should probably think of the GIL as something designed
to stop you from doing exactly what you think you want to
do. :-)   It guarantees that the Python VM/interpreter
cannot be interrupted except where it "wants" to be.
That is: a single Python interpreter is only ever running
in ONE place (in one thread) at a time, NO MATTER how many
threads you have.

It's "Global" so that lots of little incremental checks
don't need to be sprinkled throughout the Python
interpreter, potentially slowing it down.
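
To see what that means in practice, here's a minimal sketch
(my example, nothing Stackless-specific): time a CPU-bound
function run twice in a row versus run in two threads.  On a
multi-core box the threaded run is no faster -- often a bit
slower -- because only one thread can execute Python bytecode
at a time:

    import threading, time

    def count(n):
        # pure-Python, CPU-bound loop; the GIL serializes this
        while n > 0:
            n -= 1

    N = 10000000

    t0 = time.time()
    count(N); count(N)                  # twice, back to back
    print("sequential:", time.time() - t0)

    t0 = time.time()
    a = threading.Thread(target=count, args=(N,))
    b = threading.Thread(target=count, args=(N,))
    a.start(); b.start()
    a.join(); b.join()
    print("two threads:", time.time() - t0)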

Therefore:

> One thing I am curious about is what happens when you
> run on multiple CPUs but still keep the GIL, except
> you replace the thread locks with simple user-space
> spin locks? The idea being to avoid the expensive OS
> context switch.  What sort of performance gain would
> you get? Can this be done cheaply?

This could be done fairly quickly (in terms of developer
time), but you would find that it would only help if
you had a lot of overlap among things that were not Python
(e.g. blocking I/O, or extensions running code outside the
Python interpreter).

And it would hurt you *a lot* if you had a lot of Python
code that wanted to overlap, and/or more than one process
on your machine -- a spinning thread burns an entire CPU
while it waits!
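
For contrast, the case where threads *do* help today is when
the GIL is released around blocking calls.  A rough sketch,
with time.sleep standing in for real blocking I/O:

    import threading, time

    def io_bound():
        # time.sleep() releases the GIL while it blocks, just
        # like real blocking I/O or an extension that drops
        # the lock around its C code
        time.sleep(1)

    t0 = time.time()
    threads = [threading.Thread(target=io_bound) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("four 1s blocking calls:", time.time() - t0)  # ~1s, not ~4s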

Typically you would use spin-locks in situations where
you are guarding access to something that is quickly
locked and unlocked (and has a fairly high chance of not
being locked when requested).  The GIL is the
opposite -- it is almost always locked (because the
Python interpreter is almost always doing something!)
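
If you want to see why that matters, here is a toy user-space
spin lock (just an illustration, not how CPython implements
the GIL; I'm abusing a regular lock's non-blocking acquire as
the atomic test-and-set):

    import threading

    class SpinLock:
        # burns CPU instead of letting the OS put the
        # waiting thread to sleep
        def __init__(self):
            self._flag = threading.Lock()   # atomic test-and-set

        def acquire(self):
            # busy-wait until the lock is free: cheap when the
            # lock is almost always free, disastrous when it is
            # almost always held
            while not self._flag.acquire(False):
                pass                        # spin

        def release(self):
            self._flag.release()

A thread spinning on a GIL-like lock that is held for long
stretches would burn an entire core accomplishing nothing.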

-Jas

