[Stackless] Channel Deadlock

Andrew Francis andrewfr_ice at yahoo.com
Tue Apr 29 17:11:00 CEST 2008


Hello Justin:

>I'm distributing Stackless across multiple processes.
>There are several real OS threads that wait for
>incoming messages and then pass
>those messages to a single stackless thread.

Out of curiosity, do you need real OS threads to wait
for incoming messages? I have found it tricky to mix
Stackless with OS threads, so I avoid it. Moreover, I
think the behaviour of threads with Stackless has
changed.
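
For what it is worth, one way to keep all the channel traffic inside the
Stackless thread is to have the OS threads hand messages off through an
ordinary thread-safe queue, and let a tasklet pump that queue into a
channel. A rough sketch, with purely illustrative names and a simple
busy-polling loop (not code from your application):

import Queue
import threading
import stackless

inbox = Queue.Queue()        # OS threads only ever touch this queue
ch = stackless.channel()     # tasklets only ever touch this channel

def listener():
    # stands in for a real OS thread blocked on a socket or similar
    inbox.put("message from an OS thread")
    inbox.put(None)          # sentinel: nothing more to forward

def pump():
    # tasklet in the Stackless thread: drains the queue onto the channel
    while True:
        try:
            msg = inbox.get_nowait()
        except Queue.Empty:
            stackless.schedule()   # busy-polls; fine for a sketch
            continue
        if msg is None:
            return
        ch.send(msg)

def consumer():
    print(ch.receive())

threading.Thread(target=listener).start()
stackless.tasklet(pump)()
stackless.tasklet(consumer)()
stackless.run()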

>This means it's possible that all channels will be
>blocking, since at any time a real OS thread
>might come along and pass a message into the thread.
>Stackless doesn't like this though. It thinks it's
>going into deadlock and throws an
>exception. Is there any way to avoid this?

Justin, what Stackless does not like is when the main
tasklet blocks while nothing else is left to run. A
simple example:

import stackless

if __name__ == "__main__":
    ch = stackless.channel()
    ch.receive()    # the main tasklet blocks, and nothing else can run

At this point Stackless raises a RuntimeError, because the last runnable
tasklet has just blocked.
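
By contrast, if there is at least one other runnable tasklet that can
eventually unblock the channel, the same receive is fine. A minimal
sketch:

import stackless

def producer(ch):
    ch.send("hello")                  # wakes the blocked main tasklet

if __name__ == "__main__":
    ch = stackless.channel()
    stackless.tasklet(producer)(ch)   # another runnable tasklet exists, so...
    print(ch.receive())               # ...this blocks briefly, no RuntimeError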

Even if the main tasklet is not blocked, you can still
get deadlock in the traditional sense. This is harder
to track down. Some things you can do:

1) Use channel.__reduce__() to dump channel states.
This may give insights (see the small helper sketched
after this list).

2) Check for silly mistakes (whatever a silly mistake
is in the context of your application).

3) Start hunting for the four classic deadlock
conditions: mutual exclusion, non-preemption,
hold-and-wait, and circular wait.
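
For point 1, a small helper along these lines can be useful. The balance
attribute and __reduce__() are real channel features, but the exact tuple
__reduce__() returns varies between Stackless versions, so treat the
second print as approximate:

import stackless

def dump_channel(name, ch):
    # balance < 0: that many tasklets are blocked waiting to receive
    # balance > 0: that many tasklets are blocked waiting to send
    print("%s balance=%d" % (name, ch.balance))
    # __reduce__() exposes the pickling state, including the queue of
    # blocked tasklets
    print("%s state=%r" % (name, ch.__reduce__()))

ch = stackless.channel()
stackless.tasklet(lambda: ch.receive())()
stackless.run()          # the new tasklet blocks on the channel
dump_channel("ch", ch)   # balance is now -1: one blocked receiver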

Right off the bat, you get two of them: non-preemption
and mutual exclusion. Now check for the other two:
hold-and-wait and circular wait.

Hold-and-wait manifests itself in the following form
(inside a single tasklet):

ch1.send(value)        # the wait: blocks until some tasklet receives on ch1
ch2.receive()          # the hold: never reached, so senders on ch2 stay stuck

circular wait:

tasklet1:
   ch1.receive()
   ch2.send()

tasklet2:
   ch2.receive()
   ch1.send()
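
Here is the circular wait as a runnable sketch. Because the main tasklet
itself never blocks, stackless.run() simply returns with both tasklets
still stuck on their receives; no RuntimeError is raised, which is
exactly why this flavour of deadlock is harder to notice:

import stackless

ch1 = stackless.channel()
ch2 = stackless.channel()

def tasklet1():
    ch1.receive()        # blocks forever: the only ch1.send is in tasklet2,
    ch2.send("a")        # which is itself blocked on ch2

def tasklet2():
    ch2.receive()        # blocks forever: the only ch2.send is in tasklet1
    ch1.send("b")

stackless.tasklet(tasklet1)()
stackless.tasklet(tasklet2)()
stackless.run()          # returns quietly once nothing is runnable
print("ch1=%d ch2=%d" % (ch1.balance, ch2.balance))   # both -1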


Cheers,
Andrew