[Stackless] Planning Stackless Lite
tismer at stackless.com
Tue Jan 19 21:33:05 CET 2010
Yes I do. Pre-emptive scheduling will be supported via the
interpreter's periodic things-to-do interrupt. That is between-opcodes
scheduling, while the cooperative switches are in-opcode, typically at a
function call. These will operate at high speed, using multiple stacks.
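As a rough illustration of the two switch styles (a plain-generator sketch with invented names, not the actual Stackless implementation): a cooperative tasklet switches at an explicit point of its own choosing, while the pre-emptive case forces a switch after a fixed "opcode budget", mimicking the interpreter's periodic check.

```python
from collections import deque

def run(tasklets, budget=3):
    """Round-robin scheduler: a tasklet runs at most `budget` steps
    ("opcodes") before the periodic check pre-empts it, or it can
    switch cooperatively by yielding None."""
    ready = deque(tasklets)
    trace = []
    while ready:
        task = ready.popleft()
        for _ in range(budget):          # the "periodic interrupt" window
            try:
                val = next(task)         # execute one "opcode"
            except StopIteration:
                break                    # tasklet finished: drop it
            if val is None:              # in-opcode cooperative switch
                ready.append(task)
                break
            trace.append(val)
        else:
            ready.append(task)           # budget spent: pre-empt, requeue

    return trace

def busy(name, steps):
    """Never switches voluntarily; relies on pre-emption."""
    for i in range(steps):
        yield f"{name}{i}"

def polite(name, steps):
    """Cooperatively switches after every step."""
    for i in range(steps):
        yield f"{name}{i}"
        yield None
```

Running `run([busy("a", 5), polite("b", 3)])` interleaves the two: "a" is cut off every three steps by the budget, while "b" hands control back after each step on its own.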
Thank you very much for the input, very helpful. I am thinking of
getting rid of the stack-slicing approach. All code that is completely
described in terms of how to save/restore its stack state can execute
on any of the stacks. I need to find out how to manage stacks
efficiently without using much memory, since stacks must be created
often. On tasklet deactivation, its stack will be shrunk to the
minimum, and on reactivation a new one will be allocated. I need to
draw some pictures of the resulting forest. It will probably be
necessary to restrict free scheduling to a handful of contexts.
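A hypothetical sketch of that stack policy (all names and sizes are invented, not Stackless internals): an inactive tasklet keeps only its minimal saved state, and a small fixed pool of full-size stacks is shared among the runnable contexts.

```python
FULL_STACK = 64 * 1024   # assumed working-stack size
MAX_CONTEXTS = 4         # "a handful of contexts"

class StackPool:
    """Share a few full-size stacks among many tasklets."""

    def __init__(self):
        # pre-allocate the handful of full stacks
        self.free = [bytearray(FULL_STACK) for _ in range(MAX_CONTEXTS)]

    def deactivate(self, stack, used):
        """Shrink to the live portion and recycle the full stack."""
        saved = bytes(stack[:used])      # minimal snapshot
        self.free.append(stack)          # full stack back into the pool
        return saved

    def reactivate(self, saved):
        """Take a fresh full stack and restore the saved state into it."""
        if not self.free:
            raise RuntimeError("no free context: scheduling restricted")
        stack = self.free.pop()
        stack[:len(saved)] = saved
        return stack
```

The point of the sketch is the memory trade-off: only MAX_CONTEXTS full stacks ever exist, while deactivated tasklets cost just the bytes of their shrunken snapshot.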
Cheers - chris
Sent from my iPhone
On 19.01.2010, at 20:10, Jeff Senn <senn at maya.com> wrote:
> I assume what you mean by this is that you would require Psyco (or
> part of it) to "re-write"
> tasklet code in such a way to allow tasklets to exit/enter at many
> points. While that
> might be interesting (and perform well) for scenarios where there
> are relatively few
> entry/exit points (i.e. co-operative scheduling), I wonder: do you
> have a plan that makes
> it work well to do "pre-emptive" scheduling of tasklets?
> I don't really understand very deeply how Psyco works so maybe there
> is "magic" in there
> that I don't know about...
> On Jan 19, 2010, at 1:29 PM, Christian Tismer wrote:
>> I have a new concept in mind that might replace Stackless as it is
>> known. Just a quick note, as I am away from my machine.
>> The plan is to get a Stackless without any restrictions, as a plain
>> extension module, and with full Psyco support.
>> Sounds like an April joke? No, I'm just leveling up on craziness. ;-)
>> Sent from my iPhone
>> Stackless mailing list
>> Stackless at stackless.com