[Stackless] Multi-CPU Actor Based Python
ldickson at cuttedge.com
Wed Nov 19 19:01:12 CET 2008
Is there a non-object-oriented flavor of Stackless Python? I've run into
this sort of thing before. OO techniques seem to require extreme
centralization, which kills "fast", makes "safe" impossible in the real
world, and I'm not even clear on "nice-looking"...
On 11/19/08, Jeff Senn <senn at maya.com> wrote:
> On Nov 19, 2008, at 9:49 AM, Timothy Baldridge wrote:
>> What I'm more looking to reduce is the overhead of transferring data
>> from one Python VM to another. In current implementations transferring
>> data from one VM to another requires pickling data (and that requires
>> traversing the entire object being transmitted, pickling each part
>> along the way), transmitting it across the wire, then unpickling it
>> at the other end. So we're talking thousands of cycles.
>> In the method I'm proposing, you could have multiple "VMs" in the same
>> process, with a unified GC; these VMs would share nothing. If
>> all messages are immutable, then all that is required is to copy a
>> pointer from one VM to the other and increment the GC ref count on the
>> message. That's what, 100-200 cycles or so (yes I did just pull that
>> out of the air).
>> My core idea here is that multitasking in modern languages isn't as
>> pervasive because of the overhead/risks involved. In C you have shared
>> memory issues. In Erlang, well, many people can't stand the Erlang
>> syntax. And in Python you have to pass messages via pickling.
>> So does anyone else see this being possible, or am I off my rocker?
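The cost gap being described can be sketched in a few lines. This is a rough illustration, not a benchmark; the message size and names are mine, and a plain queue.Queue stands in for the proposed shared-heap "VMs":

```python
import pickle
import queue

msg = tuple(range(10_000))            # an immutable "message"

# Cross-VM transfer today: pickle, send, unpickle -- traverses the whole
# object graph and produces a full copy at the far end.
wire = pickle.dumps(msg)
received = pickle.loads(wire)
assert received == msg and received is not msg   # a real copy was made

# Proposed shared-heap transfer: hand over the reference -- constant time,
# the receiving side just increments the refcount on the same object.
q = queue.Queue()
q.put(msg)                            # stores the pointer, nothing more
same = q.get()
assert same is msg                    # no traversal, no copy
```

The first path scales with the size of the object graph; the second does not.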
> Hm. I'll defer judgment on the "off the rocker" bit... :-)
> However there does seem to be a fundamental issue here that probably
> goes to the basis of how the universe works.
> Locality is scarce. You make things fast by making them fit in a
> small space so that the speed of light does not matter.
> You decouple their behavior from other things that are "far"
True. But Unix piped chains of commands are a simple example of this.
> You make things robust and architectural ("componentized") by making
> them "big"... with well-defined boundaries that take up space
> and well-defined interactions that require synchronous coupling
> at the edges.
Architectural, maybe, but robust I think not! Things that are big are never
robust, because their behavior is too complex to understand. Big insides and
small boundaries are possible, but only if you ABSOLUTELY eliminate side
effects, including spec ambiguities, which means inheritance and
especially polymorphism are bad.
> So you want-your-cake-and-to-eat-it-too... you're not the first one...
> and perhaps you shouldn't be discouraged by naysayers... you might
> just invent something wonderful... However there are many issues
> you are not considering (even in your simple example):
> -- notice that both incrementing and decrementing the refcnt
> have to involve some sort of interlock. (Not to mention GC
> and heap structure management!)
> -- notice that you are starting to change the very nature of python. If,
> for example, I want several processes co-operating
> to add results to a search list, I can't just pop them into
> the same object, I now need to invent a whole structure to
> "re-combine" things again. How much more memory am I going
> to use to do that? How "pythonic" is it going to look when
> I'm done? Or will it look more like an Erlang program? :-)
Well, you have n processes working and an n+1-st process managing the
list... but they cannot all be accessing the same object... if you free
yourself from OO, I think it makes this sort of thing a lot easier.
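That n-plus-one arrangement can be sketched with plain threads and a queue (the names and sizes here are mine, purely for illustration): workers never touch the shared list, they only send messages to the one process that owns it.

```python
import queue
import threading

def worker(wid, out):
    # Each worker sends results as messages, then a done marker.
    for i in range(3):
        out.put((wid, i * i))
    out.put((wid, None))

def collector(out, n_workers, results):
    # The n+1-st "manager": the only code that mutates the list.
    done = 0
    while done < n_workers:
        wid, val = out.get()
        if val is None:
            done += 1
        else:
            results.append(val)

out = queue.Queue()
results = []
workers = [threading.Thread(target=worker, args=(w, out)) for w in range(4)]
mgr = threading.Thread(target=collector, args=(out, 4, results))
for t in workers:
    t.start()
mgr.start()
for t in workers:
    t.join()
mgr.join()
assert sorted(results) == sorted([i * i for i in range(3)] * 4)
```

No object is ever shared for writing, so no lock on the list itself is needed.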
The other absolute killer is exceptions. They are an inadmissible design
shortcut when asynchronous workers are cooperating. All outcomes have to be
designed as normal.
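One way to make every outcome "normal" is to wrap each unit of work so that failure travels as an ordinary tagged message rather than propagating as an exception. A minimal sketch (the wrapper name is hypothetical):

```python
def safe_call(fn, *args):
    # Convert any exception into a value the receiving actor can
    # pattern-match on, like an Erlang {ok, V} / {error, R} tuple.
    try:
        return ("ok", fn(*args))
    except Exception as exc:
        return ("err", repr(exc))

assert safe_call(int, "42") == ("ok", 42)
tag, reason = safe_call(int, "not a number")
assert tag == "err"
```

The caller is forced to handle both branches, which is exactly the point: there is no asynchronous stack for an exception to unwind across.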
> So it could be a fundamental trade-off:
> "fast", "safe", "nice-looking"**; choose 2!
> Erlang is fast/safe but non-nice-looking (ugly?).
> Python is nice/safe but slow.***
> C is fast/nice but unsafe.
> ** I almost said "understandable" rather than "nice-looking"...
> I'm not sure exactly what the right word is.
> *** Python would be slower if it were safer for multiple-threads;
> i.e. the GIL is a hack to keep python safe by trading
> multiple-thread utilization for single-thread speed.
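The interlock point above (shared refcounts need synchronization once two schedulers can touch them) can be made concrete with a toy sketch. The class and names are hypothetical; CPython's real refcounting is in C, and the GIL is precisely what lets it skip this lock today:

```python
import threading

class SharedRef:
    # A refcount that multiple "VMs" may touch concurrently.
    def __init__(self, obj):
        self._obj = obj
        self._count = 1
        self._lock = threading.Lock()   # the interlock

    def incref(self):
        with self._lock:
            self._count += 1

    def decref(self):
        with self._lock:
            self._count -= 1
            if self._count == 0:
                self._obj = None        # last owner: release the payload

ref = SharedRef("message")
threads = [threading.Thread(target=ref.incref) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert ref._count == 101                # 1 initial + 100 increments
```

Every message handoff pays for that lock (or an atomic instruction), which is the hidden cost in the "100-200 cycles" estimate.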
> Now... Criticism aside: you are probably on the right track:
> I believe the future is in architectures that specifically divide
> computing into asynchronously communicating components. But IMHO
> the interesting question (currently) is probably more along the
> lines of how to get human-programmers to do the dividing well
> (hence Michael's "consciously partitioning" comment) rather
> than how to have an environment that abstracts the problem away.
> BTW: I'd love to have a Stackless Python w/o a GIL... I just
> can't afford to do the (ton of) work! I gather the "posh" thing
> Michael mentioned is a hack to put separate locks (LILs? :-) )
> around pieces of memory that contain python objects -- seems like
> a lot of hoops to jump through, for questionable benefit...
> I don't immediately see performance data, but my
> too-complicated-solution intuition bell rings a little...