[Stackless] a stackless crash

Christian Tismer tismer at stackless.com
Tue Nov 6 02:06:56 CET 2007


Kristján Valur Jónsson wrote:
> I don't think this is an issue.
> Remember, the channel is being destroyed because the
> last reference just went away.
> Even if the tasklet catches the exception it can't reinsert
> itself into it because it has no reference, i.e. it knows
> nothing of the channel anymore.

This is not true; see the start of channelobject.c.
The channel gets temporarily resurrected, and it is
in any case visible through its members, since it is the
head of the list of tasklets that it contains.
Therefore, it is possible to create new references,
and the beast stays alive.
Maybe this was a bad hack to save a pointer,
but that is how it is right now.
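
(For illustration only: here is a tiny, standalone sketch of that
temporary-resurrection idiom. It is not the channelobject.c code;
the toy type Obj, its on_dealloc hook and keep_alive are invented
for the example. The point is just that cleanup code running at
refcount zero may take a new reference, and then the object must
not be freed.)

#include <stdio.h>
#include <stdlib.h>

typedef struct Obj {
    long refcnt;
    void (*on_dealloc)(struct Obj *);  /* cleanup hook, may take a new ref */
} Obj;

static void obj_dealloc(Obj *ob);

static void obj_incref(Obj *ob) { ob->refcnt++; }

static void obj_decref(Obj *ob)
{
    if (--ob->refcnt == 0)
        obj_dealloc(ob);
}

static void obj_dealloc(Obj *ob)
{
    /* Temporary resurrection: hand the cleanup hook one reference
       so that it may legally take more. */
    ob->refcnt = 1;

    if (ob->on_dealloc)
        ob->on_dealloc(ob);            /* may call obj_incref(ob) */

    /* Undo the temporary reference.  If the hook stored a new
       reference somewhere, the object is alive again and must
       not be freed. */
    if (--ob->refcnt != 0) {
        printf("resurrected, refcnt=%ld\n", ob->refcnt);
        return;
    }
    printf("really freed\n");
    free(ob);
}

/* Cleanup hook that keeps the object alive, roughly like a blocked
   tasklet taking a new reference to the channel it hangs on. */
static Obj *survivor = NULL;

static void keep_alive(Obj *ob)
{
    survivor = ob;
    obj_incref(ob);
}

int main(void)
{
    Obj *ob = malloc(sizeof(Obj));
    ob->refcnt = 1;
    ob->on_dealloc = keep_alive;

    obj_decref(ob);                 /* the "last" reference goes away ... */
    printf("still here, refcnt=%ld\n", survivor->refcnt);

    survivor->on_dealloc = NULL;    /* drop the hook, let it die now */
    obj_decref(survivor);           /* really freed this time */
    return 0;
}

Running this prints "resurrected" for the first decref, because the
hook took a new reference, and only the second decref really frees
the object.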

> Killing the tasklets at the time we are destroying the channel
> seems like a much better place to do it and may even make the
> late "kill tasklets with cstacks" code unnecessary.

a) better place than what?
    You cannot really kill a tasklet immediately. Maybe it would
    be nicer to try to kill it earlier, yes, that is worth
    considering, but there is no guarantee.

b) Why should this late code become unnecessary? Maybe I should
    write up a (bad) design paper?
    There can always be any number of tasklets floating around
    which are not blocked, just remove()d. They need to be cleaned
    up in the end.

I was about to say "yeah, all right", but then I looked back into
the sins of another line, and corrected it into
"sorry, you lose, almost everywhere" :-)

cheers - chris

-- 
Christian Tismer             :^)   <mailto:tismer at stackless.com>
tismerysoft GmbH             :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9A     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 802 86 56  mobile +49 173 24 18 776  fax +49 30 80 90 57 05
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
       whom do you want to sponsor today?   http://www.stackless.com/



