[Stackless] Newbie questions

Christian Tismer tismer at tismer.com
Sun Feb 16 20:42:42 CET 2003


Konrad Hinsen wrote:
> On Sunday 16 February 2003 01:39, Christian Tismer wrote:
> 
> 
>>Fascinating to have *you* in this. I'm pretty sure
>>you will force me to implement MPI and other stuff, soon.
> 
> 
> No! I have switched from MPI to BSP (see www.bsp-worldwide.org, or a recent 
> copy of Scientific Python), and I intend to implement it in Stackless myself 
> :-)

Will look into that tomorrow.

...

> So when do task switches occur? Only when channel operations are called?

Yes, or when you call schedule().
There are tasklets sitting blocked in channel
operations, and there is a circular chain of
tasklets that are ready to run.
These are scheduled by schedule(). This function
*can* be called by a timer in Stackless 3.0,
but this is all optional. Most serious users
prefer having explicit control over switching.
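
Roughly like this, as a minimal sketch using the usual
stackless module names (exact spellings may differ a bit
between releases):

import stackless

ch = stackless.channel()

def producer():
    for i in range(3):
        ch.send(i)            # blocks until a receiver is ready -> switch

def consumer():
    for _ in range(3):
        print(ch.receive())   # blocks until a sender is ready -> switch

def worker():
    for _ in range(5):
        stackless.schedule()  # no channel op here, so switch explicitly

stackless.tasklet(producer)()
stackless.tasklet(consumer)()
stackless.tasklet(worker)()
stackless.run()               # drive the circular chain of runnables

The only switch points are the channel operations and the
explicit schedule() calls; nothing switches behind your back.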

>>In other words: If your extension does not call back into
>>Python, or if it does, but there are no switches intended,
>>you have no problem.
> 
> 
> Fine. So if I make sure that all callbacks set the "atomic" flag, then 
> everything should be fine.

Doing this is not necessary at the moment, but
it is a good idea for future versions.
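
Something along these lines should do it (just a sketch;
set_atomic() is the tasklet method I mean, and do_work() is
only a placeholder for whatever your callback really does):

import stackless

def my_callback(*args):
    current = stackless.getcurrent()
    old = current.set_atomic(True)   # no tasklet switches in here
    try:
        do_work(*args)               # placeholder for the real callback body
    finally:
        current.set_atomic(old)      # restore the previous setting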

>>The objects received from channels are identical to the
>>sent ones. This might change, since I want to use
> 
> 
> But the object ids are not the same as those of the sent objects - as I 
> just discovered experimentally.

Then either you or I must have made a mistake.
The channels do not create any objects.
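
You can check it with a small sketch like this, assuming the
plain channel API:

import stackless

ch = stackless.channel()
payload = {"answer": 42}

def sender():
    ch.send(payload)

def receiver():
    got = ch.receive()
    print(got is payload)    # expected: True, the object itself is passed

stackless.tasklet(sender)()
stackless.tasklet(receiver)()
stackless.run()

If that prints False for you, please post the code, because
then something else is going on.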

...

> OK, so here is a brief description. I work on high-level parallelization using 
> the BSP model. Python/BSP is explained in 
> http://starship.python.net/~hinsen/ScientificPython/BSP_Tutorial.pdf, which 
> for the moment is all the available documentation. BSP divides a computation 
> into consecutive "compute" and "communicate" steps. One of the advantages is 
> that communication is simple to implement, and easy to optimize.

Wow, this is grist to my mill.
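
For readers who have not opened the tutorial yet: the superstep
structure boils down to an outline like this (not the
Scientific.BSP API, all the names here are made up):

def run_supersteps(virtual_cpus, exchange, steps):
    # one superstep = a local compute phase followed by one
    # collective communication phase
    for _ in range(steps):
        for cpu in virtual_cpus:
            cpu.compute()        # purely local work, no messages yet
        exchange(virtual_cpus)   # all pending messages move at once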

> My next project is automatic load balancing. The idea is to have many more 
> "virtual" processors than real ones. Each real CPU measures the time spent on 
> dealing with each "virtual" CPU, and once in a while the real CPUs exchange 
> virtual CPUs among each other to equalize CPU load.

Ok. With multiple processors, what about the GIL?

> The obvious solution for each "virtual" CPU would be a thread. But there are 
> two problems: Many low-level communications libraries don't coexist well with 
> threads, and there is a significant overhead, which is completely unnecessary 
> because pseudo-simultaneous execution is not even required, only the 
> possibility of task switches. Which is where Stackless comes in. I plan to 
> make each virtual CPU a tasklet, which runs atomically most of the time, 
> until it reaches the communication step and waits.

Ok, if you don't need true parallelism yet, that is fine.
Using many tasklets per CPU, and one Python process
for every CPU, might be one way to go.
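
One way such a virtual CPU could look as a tasklet, as a rough
sketch (step_done and the little sum() stand in for your real
communication and compute code):

import stackless

step_done = stackless.channel()

def virtual_cpu(work):
    me = stackless.getcurrent()
    old = me.set_atomic(True)    # compute phase: no implicit switches
    partial = sum(work)          # stand-in for the real computation
    me.set_atomic(old)
    step_done.send(partial)      # communication step: block and switch

def coordinator(n):
    results = [step_done.receive() for _ in range(n)]
    print(results)

chunks = [[1, 2], [3, 4], [5, 6]]
for chunk in chunks:
    stackless.tasklet(virtual_cpu)(chunk)
stackless.tasklet(coordinator)(len(chunks))
stackless.run()

Nothing switches during the compute phase, so each virtual CPU
keeps the processor until it reaches its channel operation.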

> BTW, I saw hints about thread pickling, is that a reality already? It could be 
> useful in moving threads between CPUs.

It was implemented for Stackless 1.0 by twinsun.
With 2.0, it was too hard to support. 3.0 will get
back the machinery that is needed for pickling.
I don't know exactly when I will get to this. Probably,
when 3.0 is out, they will help with it.

ciao - chris
-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  pager +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/


_______________________________________________
Stackless mailing list
Stackless at www.tismer.com
http://www.tismer.com/mailman/listinfo/stackless



