[Stackless] Putting apps to sleep and waking them up from the API

Kristján Valur Jónsson kristjan at ccpgames.com
Thu Jun 10 21:40:23 CEST 2010

The fact that you are using Sleep() there indicates to me either
a) a misunderstanding, or
b) that you don't hold the Python GIL at this point.

If you hold the GIL while going to Sleep(), nothing will happen; all of Python is frozen.
On the other hand, if the thread in the pool runs in the context of the Python GIL, then it will
never encounter that Sleep(), because the race condition doesn't exist: it is protected by the GIL.
Between queueRequest() and PyStackless_Schedule(), no other GIL-requiring thread will run.
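The freeze described above can be modeled with an ordinary lock standing in for the GIL. This is a hypothetical sketch in plain Python threading (not Stackless code); the names `gil`, `worker`, and `interpreter_thread` are made up for illustration:

```python
import threading

# "gil" stands in for the GIL: only one thread may run "Python" at a time.
gil = threading.Lock()
progress = []
worker_has_gil = threading.Event()
release_worker = threading.Event()

def interpreter_thread():
    # Ordinary Python code: it needs the "GIL" to make any progress.
    with gil:
        progress.append("interpreter ran")

def worker():
    # The worker grabs the "GIL" and then "Sleep()"s without releasing it.
    with gil:
        worker_has_gil.set()
        release_worker.wait()

w = threading.Thread(target=worker)
w.start()
worker_has_gil.wait()

i = threading.Thread(target=interpreter_thread)
i.start()
i.join(timeout=0.2)        # give the interpreter a chance to run...
frozen = not progress      # ...it can't: the sleeping worker holds the lock

release_worker.set()       # the worker releases the lock; everything resumes
w.join()
i.join()
print(frozen, progress)    # → True ['interpreter ran']
```

The same reasoning explains why the spin-wait is pointless when the worker holds the GIL: nothing else can advance until the lock is released.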

Now, I'm assuming your worker thread is in fact holding the GIL; otherwise much more could go wrong.

Other than that, a cursory review doesn't indicate anything clearly broken.

How are you managing the GIL?  You must be releasing it at some point to deal with your "request"; otherwise this whole exercise wouldn't be worthwhile.

One thing to try is this:
Instead of using PyStackless_Schedule / Insert,
create a Channel object for your request and set the channel preference to 0.
Then block by calling PyChannel_Receive(), and wake up the recipient by calling
PyChannel_Send().  You can do a sanity check on the channel by examining its balance
with PyChannel_Balance() before sending.
This is the API we use at CCP for tasklet blocking, and I know that it works across threads.
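The handshake a channel gives you can be modeled in plain Python threading. This is a toy sketch of the pattern, not the Stackless C API; `MiniChannel` and its methods are invented names, with `receive()`, `send()`, and `balance` standing in for PyChannel_Receive(), PyChannel_Send(), and PyChannel_Balance():

```python
import threading
from collections import deque

class MiniChannel:
    """Toy model of a channel: receive() blocks until a matching send()
    arrives; balance is negative while receivers are blocked, positive
    while values are queued, zero when neither."""
    def __init__(self):
        self._cond = threading.Condition()
        self._values = deque()
        self._waiting_receivers = 0

    @property
    def balance(self):
        with self._cond:
            return len(self._values) - self._waiting_receivers

    def send(self, value):
        with self._cond:
            self._values.append(value)
            self._cond.notify()

    def receive(self):
        with self._cond:
            self._waiting_receivers += 1
            while not self._values:
                self._cond.wait()       # block until a sender wakes us
            self._waiting_receivers -= 1
            return self._values.popleft()

ch = MiniChannel()
results = []

def blocked_caller():
    results.append(ch.receive())        # blocks, like the getData() tasklet

t = threading.Thread(target=blocked_caller)
t.start()

# Worker side: sanity-check the balance, then wake the blocked caller.
while ch.balance >= 0:
    pass                                # wait until the receiver is blocked
ch.send("response")
t.join()
print(results)                          # → ['response']
```

Unlike the PyTasklet_Paused() spin-wait, the blocked/awake state and the handoff of the value are covered by one lock, so there is no window where the wake-up can be lost.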


> -----Original Message-----
> From: stackless-bounces at stackless.com [mailto:stackless-
> bounces at stackless.com] On Behalf Of Chris
> Sent: 10 June 2010 15:46
> To: stackless at stackless.com
> Subject: [Stackless] Putting apps to sleep and waking them up from the
> API
>
> I must be doing something silly and wrong; hopefully I can get some
> help from this list. I am writing a blocking call in the C API which is
> serviced by a pool of other threads. Of course, when the call is made I
> want to let Stackless continue processing, so I did this, which works
> great:
> PyObject* getData(PyObject *self, PyObject *args)
> {
>      // blah blah construct request on the heap
>      request->task = PyStackless_GetCurrent();
>      queueRequest( request );               // this will get picked up by the thread pool
>      PyStackless_Schedule( request->task ); // away it goes back into python-land
>      Py_DECREF( request->task );            // if we got here it's because we were woken up
>      // blah blah construct response and return
> }
> The thread pool does this:
> worker()
> {
>      // blah blah wait on the queue, get a job, and do some work
>      // for a while (or return immediately)
>
>      // spin to prevent a race (deadlock if this runs before the _Schedule call)
>      while( !PyTasklet_Paused(request->task) )
>      {
>          Sleep(0); // yield
>      }
>      PyTasklet_Insert( request->task );
> }
> This works just fine. Until I stress it. This program from Python
> crashes instantly in unpredictable (corrupt memory) ways:
>
> import stackless
> import dbase
>
> def test():
>      while 1:
>          dbase.request()
>
> for i in range( 10 ):
>      task = stackless.tasklet( test )()
>
> while 1:
>      stackless.schedule()
>
> BUT if I change that range from 10 to 1? It runs all day long without
> any problems. Clearly I am doing something that isn't thread safe, but
> what?
> I've tried placing mutexes around the PyTask_ calls, but since schedule
> doesn't normally return, that's not possible everywhere. If I add
> Sleeps it helps; in fact, if I make them long enough it fixes the
> problem and all 10 threads run concurrently.
> I assume PyTasklet_Schedule()/_Insert() are meant to be used this way
> (i.e., from multiple threads); can anyone tell me which stupid way I
> have gone wrong?
> _______________________________________________
> Stackless mailing list
> Stackless at stackless.com
> http://www.stackless.com/mailman/listinfo/stackless