[Stackless] Re: Stackless bug/typo

Jeff Senn senn at maya.com
Tue Jun 18 14:48:21 CEST 2002

Christian Tismer <tismer at tismer.com> writes:

> Ayeeeeh, cut & paste is the worst invention after punch cards.

Yep, I figured. :-) Get out the tape and run the card through again.

> Thanks a lot, and please tell me about the scheduler, I am
> a bit undecided what kind to build first.

What I did to "break the ice" was simply (and somewhat naively) to put
a call to slp_schedule_task in the same place as the interpreter
thread-lock release/acquire in ceval.c.  I also added a routine to
set/clear the slicing_lock flag in the slp_state, so that I could make
atomic sections a la continuation.continuation_uthread_lock.

Note: you probably intended this to be an internal flag for atomic
sections, and there should probably be a separate one for a scheduler
lock.
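To make the flag idea concrete, here is a minimal standalone sketch of what I mean by a set/clear routine for the slicing lock. The names (slp_state_t, slp_set_slicing_lock, slp_may_schedule) are illustrative only, not actual Stackless internals; returning the previous value lets atomic sections nest and restore state on exit.

```c
#include <assert.h>

/* Hypothetical per-thread scheduler state with a "slicing lock" flag,
 * modeled loosely on the slp_state.slicing_lock idea above. */
typedef struct {
    int slicing_lock;   /* nonzero: the scheduler must not preempt here */
} slp_state_t;

/* Set or clear the flag; return the previous value so a caller can
 * restore it when leaving an atomic section (allows nesting). */
static int slp_set_slicing_lock(slp_state_t *st, int flag)
{
    int old = st->slicing_lock;
    st->slicing_lock = flag;
    return old;
}

/* The scheduler hook checks this before switching tasklets. */
static int slp_may_schedule(const slp_state_t *st)
{
    return !st->slicing_lock;
}
```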

And... it works... for all the relatively simple tests I can build.
However, my complex, irreducible application (with Tk and network I/O
running in separate, real threads) crashes fairly quickly.

Initially the crash was in the GC; I've since compiled without cycle_gc
and it still crashes.  Given the somewhat random symptoms, I'm guessing
it is memory corruption caused by an incorrectly reference-counted
object, or some bad interaction with threads.
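For anyone following along, here is a toy illustration (not Python's actual implementation, just a sketch of the suspected bug class): when code stores a borrowed reference without taking its own reference, one decref too many frees the object while it is still in use, producing exactly this kind of random-looking corruption.

```c
#include <assert.h>

/* Toy refcounted object.  "freed" stands in for memory being
 * returned to the allocator, so the failure is observable without
 * an actual use-after-free. */
typedef struct {
    int refcnt;
    int freed;
} obj_t;

static void obj_incref(obj_t *o) { o->refcnt++; }

static void obj_decref(obj_t *o)
{
    if (--o->refcnt == 0)
        o->freed = 1;   /* would be free(o) in real code */
}
```

The correct pattern is to incref before storing a reference anywhere that outlives the current owner; skipping that incref is the kind of bookkeeping error that would explain the crashes.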


P.S. Here's my diff for ceval.c:

Index: ceval.c
RCS file: /home/cvs/stackless/src/Python/ceval.c,v
retrieving revision 1.10
diff -u -r1.10 ceval.c
--- ceval.c     2002/06/03 20:45:41     1.10
+++ ceval.c     2002/06/18 12:37:50
@@ -679,6 +679,20 @@

+                        if (!tstate->slp_state.current->slicing_lock) {
+                                PyTaskletObject *task = tstate->slp_state.current;
+                                PyTaskletObject *next = (PyTaskletObject *)task->next;
+                                if (task != next && tstate->slp_state.runcount != 0) {
+                                        if (slp_schedule_task(task, next)) {
+                                                why = WHY_EXCEPTION;
+                                                goto on_error;
+                                        }
+                                }
+                        }
                        if (interpreter_lock) {
                                /* Give another thread a chance */

Stackless mailing list
Stackless at www.tismer.com
