[Stackless] integrate stackless with other event driven framework: merge two

Jimsfba at aol.com Jimsfba at aol.com
Sat Oct 14 19:26:14 CEST 2006

Richard, thank you for the comment. About CCP Games' custom yielding approach,
I have a question: when the work in the stackless world is finished and we
switch to the other world (the other event-driven framework, a C++ watchdog for
low-level tcp/file I/O, etc.), how long should we stay there before switching
back to the stackless world?
- If the approach is "stay only while there is work to do", then when both
event-driven frameworks are idle (no work to do), the code will be busy
switching between the two frameworks again and again. That can become a fast
tight loop with up to 99% CPU usage.
- If the approach is "always stay for x hundred milliseconds", then those x
hundred milliseconds are wasted whenever the framework we are waiting in has
no work for us to do.
My wild guess is that stackless internally (at the system-call level) blocks
on a select()/poll() call after finishing all the callback jobs it needs to
do, because that is the only way I can think of (in the Unix part of the
implementation) to listen/block on many different event queues without a
tight loop scanning all of them. If stackless does it differently at the Unix
system-call level, can someone shed some light with more info?
Thanks again.
- Jim

On 10/14/06 Richard Tew richard.m.tew at gmail.com wrote:
> I have a C++ event driven main loop framework that embeds the stackless
> framework.
> The only way I can make these two frameworks work together is switching
> between the two frameworks in a fixed time range:
> 1. time out the select() call in my C++ main loop every few hundred
> milliseconds,
> 2. give a few hundred milliseconds for stackless code to run:
>    watchdogHandle =
>      handle<>(allow_null(PyStackless_RunWatchdog(1000)));
This runs for around 1000 instructions.  Which means that it will not exit
at a point where the currently running tasklet explicitly cooperatively
schedules or yields in some other manner, but rather preemptively.  As long
as you know that.
At CCP Games we take a different approach, which you can read about in
the slideshow Kristján Jónsson gave at PyCon 2006.  Instead of interrupting
permanently scheduled tasklets, we use custom yielding methods (BeNice,
Sleep, or waiting on a channel for an event), so that it is natural for all
tasklets that have been scheduled to run once and then for the scheduler to
be empty.  At which point the watchdog exits cleanly, cooperative scheduling
is maintained and the embedding C++ code can do whatever it needs.  From
what Kristján tells me (I don't work on this section of code myself) it is
common for the scheduler to be empty, so we use a very large timeout for the
watchdog in order to detect badly programmed tasklets which are infinite
looping or similar.
Links of interest:
The slideshow:
Basic C++ framework with alternate yielding method BeNice:
Basic C framework with alternate yielding method BeNice:
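A plain-Python analogue of the drain-the-scheduler pattern described above (a sketch only, not CCP's code: generators stand in for tasklets, and a bare yield models a BeNice/Sleep/channel wait):

```python
from collections import deque

def tasklet(name, log):
    # Each yield models a cooperative BeNice/Sleep/channel wait: the
    # tasklet gives up the CPU voluntarily and is NOT rescheduled this round.
    log.append(name + ":step1")
    yield
    log.append(name + ":step2")
    yield

def run_one_round(scheduler, log):
    # Run every currently scheduled tasklet exactly once.  Tasklets that
    # yielded go to a "waiting" list rather than back into the queue, so
    # the scheduler drains and control returns to the embedding loop --
    # the watchdog-equivalent exits cleanly instead of preempting anyone.
    waiting = []
    while scheduler:
        t = scheduler.popleft()
        try:
            next(t)
            waiting.append(t)   # yielded cooperatively; wake it next round
        except StopIteration:
            pass                # tasklet finished
    return waiting

log = []
scheduler = deque([tasklet("a", log), tasklet("b", log)])
waiting = run_one_round(scheduler, log)
# scheduler is now empty; the embedding loop is free to block elsewhere
```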
> 3. go back to the C++ main loop and block on select() again
> This way works but is not so elegant. It might idle and waste a few hundred
> milliseconds per loop in one framework while the other framework has work
> waiting for it to do.
> I am thinking a "perfect" model will be ...
> If stackless internally is also blocking on a select( fdset2 ) call, we can
> add a stackless API to export that file descriptor set (fdset2).
> Then we can merge the two select() calls of the two main loops into a single
> select( fdset1 + fdset2 ) call. We can then listen to all events in one
> blocking select call.
Stackless does not do any calls to select.  All it does is add  tasklets,
channels and scheduling to the Python runtime.  It has no need  to wait
on any file descriptors itself.
Hope this helps,