[Stackless] Using Stackless Python to overcome GIL issues interoperating with a C++ runtime

Adam Preble adam.preble at gmail.com
Sun May 6 12:25:37 CEST 2012


I wondered if anybody here has used Stackless Python in some clever way to
work around the global interpreter lock.  I have some C++ objects wrapped
with Boost.Python, and things get interesting once I start implementing
interfaces defined in those wrappers in Python.  At that point I have to
acquire the GIL to run the Python version of the code whenever it's invoked
from the C++ side on other threads.  This normally works okay until I get
into a situation like this:

1. Python interpreter calls something with a wrapper into C++.
2. C++ code calls back into a Python implementation of a C++ interface (the
GIL gets acquired here).
3. Python implementation calls some other thing wrapped from C++... note
that step #1 hasn't actually completed yet.
4. C++ code calls back into Python on another thread.  Here we deadlock
trying to get the GIL, presumably because the thread still sitting inside
step #3's C++ call holds it (see the sketch after this list).
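
For reference, here's the shape of the GIL dance I'm describing, as a
minimal sketch only: wrapped_entry_point, do_cxx_work, and on_event are
hypothetical stand-ins for the wrapper from steps #1/#3, the C++ runtime,
and the Python-side interface.  As far as I can tell, the textbook fix for
the deadlock is to drop the GIL around the blocking C++ call, the way the
entry point here does:

    // Minimal sketch; all names here are hypothetical, not my real code.
    #include <boost/python.hpp>

    namespace py = boost::python;

    void do_cxx_work(py::object& listener);  // stand-in for the C++ runtime

    // Steps #1/#3: wrapper entry point called from Python.  Dropping the
    // GIL before the long-running C++ call means the thread in step #4
    // can acquire it instead of deadlocking behind us.
    void wrapped_entry_point(py::object listener)
    {
        PyThreadState* state = PyEval_SaveThread();  // release the GIL
        do_cxx_work(listener);                       // pure C++, no GIL held
        PyEval_RestoreThread(state);                 // re-acquire on the way out
    }

    // Steps #2/#4: C++ calling back into the Python implementation,
    // possibly from a thread the interpreter has never seen before.
    void notify_listener(py::object& listener)
    {
        PyGILState_STATE gil = PyGILState_Ensure();  // re-entrant per thread
        try {
            listener.attr("on_event")();
        } catch (py::error_already_set&) {
            PyErr_Print();  // keep Python exceptions from crossing into C++
        }
        PyGILState_Release(gil);
    }

The catch is that releasing the GIL in every wrapper means saving and
restoring thread state at every boundary crossing, which is exactly the
bookkeeping I'd like to avoid doing by hand.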

I'm looking at a situation here where I need to place all these pending
GIL-related operations on some common stack, and have something comb over
them.  When that comber realizes it has already taken the GIL and is being
told to run a new operation, it can at least negotiate with itself instead
of deadlocking.  I haven't implemented this logic yet, so I don't entirely
understand how the GIL behaves here, but it's clear I have to do something.
At worst I assume I'll have to save the interpreter stack from the prior
call before moving on to the next one.
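
Roughly, I picture the comber looking something like this.  A sketch only,
with every name hypothetical: C++ threads post their Python work instead
of taking the GIL themselves, and a single pump thread combs the queue
while holding the GIL exactly once.

    // Sketch of the "common stack plus comber" idea; CallbackPump and
    // everything in it is hypothetical.
    #include <deque>
    #include <functional>
    #include <mutex>

    class CallbackPump {
    public:
        // Any C++ thread: schedule Python work without touching the GIL.
        void post(std::function<void()> fn)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push_back(std::move(fn));
        }

        // The one thread that owns the GIL: comb through everything queued.
        // Since only this thread ever enters the interpreter, the nested
        // acquire from step #4 never happens.
        void drain()
        {
            std::unique_lock<std::mutex> lock(mutex_);
            while (!pending_.empty()) {
                std::function<void()> fn = std::move(pending_.front());
                pending_.pop_front();
                lock.unlock();  // don't hold the queue lock while in Python
                fn();
                lock.lock();
            }
        }

    private:
        std::mutex mutex_;
        std::deque<std::function<void()>> pending_;
    };

The open question is what happens when a posted callback itself blocks on
more C++ work, which is what makes me think of coroutines.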

It occurred to me there could be some nice ways to do this with coroutines.
I'm already pondering some stuff Boost has for coroutines that might make
this a lot less ugly, but I wondered whether Stackless has something that
could come to the rescue.  I don't know what it could possibly be, but I
thought I'd ask the list for advice.