[Stackless] Putting apps to sleep and waking them up from the API

Chris chris at northarc.com
Thu Jun 10 22:05:38 CEST 2010


Update:

After some heavy lifting I have more or less isolated the crash; it always 
occurs here:

taskletobject.c
--------------

void
slp_current_insert(PyTaskletObject *task)
{
     PyThreadState *ts = task->cstate->tstate;
     PyTaskletObject **chain = &ts->st.current;

     SLP_CHAIN_INSERT(PyTaskletObject, chain, task, next, prev);
     ++ts->st.runcount;
}


The crash happens because 'task' is in a bad state (though not generically 
memory-corrupted, at least as far as I can tell): task->cstate is NULL, as 
are next and prev; the refcount is 4 and the recursion depth is 1. After a 
few more crashes it always looks about the same.
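
For anyone following along, a debug guard along these lines makes the bad 
state obvious before the NULL dereference (purely illustrative, debug builds 
only, not a proposed fix):

#include <assert.h>

void
slp_current_insert(PyTaskletObject *task)
{
     PyThreadState *ts;
     PyTaskletObject **chain;

     /* catch the state described above before we dereference it */
     assert(task != NULL);
     assert(task->cstate != NULL);   /* this is the field I see as NULL */

     ts = task->cstate->tstate;
     chain = &ts->st.current;

     SLP_CHAIN_INSERT(PyTaskletObject, chain, task, next, prev);
     ++ts->st.runcount;
}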

BUT if I amend worker to look like this:

worker()
{
        // blah blah: wait on the queue, get a job, and do some work
        // (waiting for a while, or returning immediately)

        // do not risk access to a thread-unsafe (apparently) API
        EnterCriticalSection( &stacklessLock );

        // prevent a race (deadlock if this runs before the _Schedule call)
        while( !PyTasklet_Paused(request->task) )
        {
              Sleep(0); // yield
        }

        Sleep(2); // LOSE THE RACE
        PyTasklet_Insert( request->task );
        LeaveCriticalSection( &stacklessLock );
}

It works: 5000 tasklets running full speed with 16 worker threads, all 
busy (no logging or anything, just pounding the interface). The critical 
section is the mutex I should have put in to begin with, but the Sleep(2) 
also seems to be critical.

There is no way I'm leaving it that delicate.
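
For completeness, stacklessLock is nothing fancy, just a plain critical 
section set up once before the worker threads start, roughly like this (the 
init function name is made up for illustration):

#include <windows.h>

static CRITICAL_SECTION stacklessLock;

// called once from the module's init code, before any worker thread starts
static void initStacklessLock(void)
{
        InitializeCriticalSection( &stacklessLock );
}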

So the question is: what exactly do I need to lose the race to? I am up to 
my armpits in stackless code and have tried some pretty ghastly things, 
such as adding a global flag that is set during the initial _Schedule() and 
released at the end of slp_transfer(). I really figured that would work, 
but it didn't. I'll keep plugging away, but if any brainstorms thunder down 
please let me know.
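
For reference, that flag hack looked roughly like this (names invented for 
illustration; this is the version that did not fix the crash):

static volatile LONG g_switchInProgress = 0;

// set in getData(), immediately before the _Schedule() call:
InterlockedExchange( &g_switchInProgress, 1 );

// cleared at the end of slp_transfer() in the stackless core:
InterlockedExchange( &g_switchInProgress, 0 );

// checked in worker(), before touching the tasklet:
while( InterlockedCompareExchange( &g_switchInProgress, 0, 0 ) != 0 )
        Sleep(0); // spin until the stack transfer has finished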


On 6/10/2010 11:46 AM, Chris wrote:
> I must be doing something silly and wrong; hopefully I can get some 
> help from this list. I am writing a blocking call in the C API which 
> is serviced by a [pool] of other threads. Of course, when the call is 
> made I want to let stackless continue processing, so I did this, which 
> works great:
>
>
> PyObject* getData(PyObject *self, PyObject *args)
> {
>     // blah blah construct request on the heap
>     request->task = PyStackless_GetCurrent();
>
>     // this will get picked up by the thread pool
>     queueRequest( request );
>
>     // away it goes back into python-land
>     PyStackless_Schedule( request->task );
>
>     // if we got here, it's because we were woken up
>     Py_DECREF( request->task );
>
>     // blah blah construct response and return
> }
>
>
> The thread pool does this:
>
>
> worker()
> {
>        // blah blah: wait on the queue, get a job, and do some work
>        // (waiting for a while, or returning immediately)
>
>        // prevent a race (deadlock if this runs before the _Schedule call)
>        while( !PyTasklet_Paused(request->task) )
>        {
>              Sleep(0); // yield
>        }
>
>        PyTasklet_Insert( request->task );
> }
>
>
> This works just fine, until I stress it. This program, run from Python, 
> crashes instantly in unpredictable (corrupt memory) ways:
>
> import stackless
> import dbase
>
> def test():
>     while 1:
>         dbase.request()
>
> for i in range( 10 ):
>     task = stackless.tasklet( test )()
> while 1:
>     stackless.schedule()
>
>
> BUT if I change that range from 10 to 1, it runs all day long without any 
> problems. Clearly I am doing something that isn't thread safe, but what? 
> I've tried placing mutexes around the PyTasklet_ calls, but since schedule 
> doesn't normally return that isn't possible everywhere (see the sketch 
> just below). If I add Sleep()s it helps; in fact, if I make them long 
> enough it fixes the problem and all 10 run concurrently.
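>
> To illustrate why that straightforward locking can't work (a sketch only; 
> someLock is a hypothetical mutex, not something in the real code):
>
> EnterCriticalSection( &someLock );
> PyStackless_Schedule( request->task ); // switches away and does not return
>                                        // until the worker wakes us up...
> LeaveCriticalSection( &someLock );     // ...so the worker deadlocks waiting
>                                        // on a lock that is never released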
>
> I assume PyTasklet_Schedule()/_Insert() are meant to be used this way 
> (i.e. from multiple threads). Can anyone tell me which stupid way I have 
> gone wrong?
>
>
