[Stackless] Putting apps to sleep and waking them up from the API

Chris chris at northarc.com
Thu Jun 10 22:39:22 CEST 2010


Thank you for your comments!

On 6/10/2010 3:40 PM, Kristján Valur Jónsson wrote:
> The fact that you are using Sleep() there indicates to me either
> a) a misunderstanding or
> b) you don't hold the python GIL at this point.
>    

It seems possible that on a loaded system the worker thread could be 
scheduled right after the queue call but before the PyStackless_Schedule(). 
That will always result in a deadlock when the worker calls 
PyTasklet_Insert() before the OS can reschedule the original thread and let 
the _Schedule() call happen, i.e.:

Thread A                                 Thread B

q up request
<preempt>
                                         service request, wake up tasklet
                                         [which has not gone to sleep yet!]
<preempt>
go to sleep with _Schedule,
waiting for _Insert call
<deadlock>

Placing a breakpoint on that line, it never hits, and I don't expect it 
ever will under normal circumstances, but under heavy stress it's possible.
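
One way to avoid this window entirely might be to block on a channel instead 
of pausing the tasklet by hand, so that going to sleep and being woken up are 
a single primitive rather than a _Schedule/_Insert pair. A rough, untested 
sketch (request->done would be a new PyChannelObject* field added just for 
this, and it assumes a channel can be used across OS threads as long as the 
sending thread holds the GIL):

     PyObject* getData(PyObject *self, PyObject *args)
     {
         // blah blah construct request on the heap
         request->done = PyChannel_New( NULL );   // assumed new field on the request
         queueRequest( request );                 // picked up by the thread pool

         // blocks this tasklet while the rest of the scheduler keeps running;
         // returns whatever the worker sends (needs a Py_DECREF once used)
         PyObject *result = PyChannel_Receive( request->done );

         // blah blah construct response and return
     }

     worker()
     {
         // blah blah wait on queue, service the request

         // must be holding the GIL here; this wakes the tasklet blocked
         // in PyChannel_Receive above
         PyChannel_Send( request->done, Py_None );
     }

If that does work across threads it would also remove the PyTasklet_Paused() 
spin, since the channel does the blocking/waking bookkeeping itself.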


> How are you managing the GIL? You must be releasing it at some point to deal with your "request", otherwise this whole exercise wouldn't be worthwhile.
>    

I didn't think I needed to acquire it for this, but just to test (refer 
to the other email I sent out on this subject) I tried:

             PyEval_AcquireLock();
             PyTasklet_Insert( request->tasklet );
             PyEval_ReleaseLock();

and it still crashes instantly on the _Insert, as soon as the program is 
invoked, but adding:

             PyEval_AcquireLock();
             Sleep(2);
             PyTasklet_Insert( request->tasklet );
             PyEval_ReleaseLock();

allows it to run for quite a while before dying (actually, just now a test 
died; up until then I thought it was a complete workaround).
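
One other thing I want to double-check on my side: as far as I understand the 
C API, PyEval_AcquireLock() only takes the lock, it does not give the worker 
thread a Python thread state, and a thread the interpreter has never seen 
needs one before it can call into Python at all. If that is right, the 
PyGILState pair would be the thing to use here; a minimal, untested variant 
of the snippet above:

             PyGILState_STATE gstate = PyGILState_Ensure(); // GIL + a valid thread state
             PyTasklet_Insert( request->tasklet );
             PyGILState_Release( gstate );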

Again, to emphasize: the Sleep() is there because I am forcing the task to 
complete trivially and return, as a test. In other words, as soon as the 
worker thread wakes up on the queue entry, it immediately completes and 
tries to wake up the caller. *crash* What it feels like is that Stackless 
needs to do <something> after that _Schedule() call, and if that <something> 
is not allowed to complete, an _Insert on the same tasklet dies; otherwise 
everything works. I am also still wondering how I am going to ensure thread 
safety around the _Schedule call, since it doesn't return to the calling 
stack when it swaps, but one thing at a time.
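
For completeness, this is roughly what the whole worker would look like with 
that GIL handling folded in, keeping the existing _Schedule/_Insert approach 
(again an untested sketch, assuming PyGILState is the right way for a pool 
thread to enter the interpreter):

     worker()
     {
         // blah blah wait on queue, service the request

         PyGILState_STATE gstate = PyGILState_Ensure();  // GIL + thread state

         // while we hold the GIL the paused state shouldn't change underneath
         // us; the loop stays as a guard against the window described above
         while( !PyTasklet_Paused(request->task) )
         {
             PyGILState_Release( gstate );  // let the tasklet finish pausing
             Sleep(0);                      // yield
             gstate = PyGILState_Ensure();
         }

         PyTasklet_Insert( request->task );  // reschedule the caller
         PyGILState_Release( gstate );
     }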


>> -----Original Message-----
>> From: stackless-bounces at stackless.com [mailto:stackless-
>> bounces at stackless.com] On Behalf Of Chris
>> Sent: 10. júní 2010 15:46
>> To: stackless at stackless.com
>> Subject: [Stackless] Putting apps to sleep and waking them up from the
>> API
>>
>> I must be doing something silly and wrong; hopefully I can get some help
>> from this list. I am writing a blocking call in the C API which is
>> serviced by a [pool] of other threads. Of course, when the call is made I
>> want to let Stackless continue processing, so I did this, which works
>> great:
>>
>>
>> PyObject* getData(PyObject *self, PyObject *args)
>> {
>>       // blah blah construct request on the heap
>>       request->task = PyStackless_GetCurrent();
>>       queueRequest( request );               // this will get picked up by the thread pool
>>       PyStackless_Schedule( request->task ); // away it goes back into python-land
>>
>>       Py_DECREF( request->task ); // if we got here it's because we were woken up
>>
>>       // blah blah construct response and return
>> }
>>
>>
>> The thread pool does this:
>>
>>
>> worker()
>> {
>>       // blah blah wait on queue, get a job, and do some work for a while
>>       // (or return immediately)
>>
>>       // prevent race (deadlock if this runs before the _Schedule call)
>>       while( !PyTasklet_Paused(request->task) )
>>       {
>>             Sleep(0); // yield
>>       }
>>
>>       PyTasklet_Insert( request->task );
>> }
>>
>>
>> This works just fine, until I stress it. This program from Python
>> crashes instantly in unpredictable (corrupt memory) ways:
>>
>> import stackless
>> import dbase
>>
>> def test():
>>       while 1:
>>           dbase.request()
>>
>> for i in range( 10 ):
>>       task = stackless.tasklet( test )()
>> while 1:
>>       stackless.schedule()
>>
>>
>> BUT if I change that range from 10 to 1, it runs all day long without any
>> problems. Clearly I am doing something that isn't thread safe, but what?
>> I've tried placing mutexes around the PyTasklet_ calls, but since schedule
>> doesn't normally return that isn't possible everywhere. If I add Sleep()s
>> it helps; in fact, if I make them long enough it fixes the problem and
>> all 10 tasklets run concurrently.
>>
>> I assume PyTasklet_Schedule()/_Insert() are meant to be used this way
>> (i.e. from multiple threads); can anyone tell me which stupid way I have
>> gone wrong?



