[Stackless] Stackless API

Giovanni Bajo giovannibajo at libero.it
Thu Jan 29 18:34:37 CET 2004

Tom Locke wrote:

> Thought I'd jump in with some thoughts.

I agree with you 100% about your views of Stackless. Let me quote from you the
main points that I wanted to make too:

> The idea of a return value from a tasklet seems like a nonsense to me,
> [...]
> If programmers start thinking about the order in which tasklets are
> scheduled, they are thinking their way into a black hole from which
> they will never return!

Then, I have some questions/comments.

> I feel we need something more like Thread.start() in Java. A method
> that simply means 'make this tasklet runnable', and if we're doing
> cooperative scheduling, perhaps it should explicitly be a scheduling
> point.
> I think tasklet.start() is the way to go, and drop tasklet.run()

What does Thread.start() do in Java? In Stackless, we just create a tasklet to
make it runnable. I then pause/resume it by removing it from, or inserting it
into, the scheduler's list of runnables.
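For illustration, the create/remove/insert pattern can be emulated with plain generators and a round-robin runnables list; this is a toy stand-in for the real Stackless scheduler, not its actual implementation:

```python
from collections import deque

class Scheduler:
    """Toy round-robin scheduler: runs each tasklet to its next yield."""
    def __init__(self):
        self.runnables = deque()

    def insert(self, t):
        self.runnables.append(t)

    def run(self):
        while self.runnables:
            t = self.runnables.popleft()
            try:
                next(t.gen)              # run until the tasklet's next yield
            except StopIteration:
                continue                 # tasklet finished: do not requeue
            self.runnables.append(t)

class Tasklet:
    """Stand-in for stackless.tasklet: runnable as soon as it is created."""
    def __init__(self, scheduler, gen):
        self.gen = gen
        self.scheduler = scheduler
        scheduler.insert(self)

    def remove(self):                    # pause: drop from the runnables list
        self.scheduler.runnables.remove(self)

    def insert(self):                    # resume: put back on the runnables list
        self.scheduler.runnables.append(self)

log = []
sched = Scheduler()

def worker(name):
    for i in range(2):
        log.append((name, i))
        yield                            # cooperative yield point

Tasklet(sched, worker("a"))
Tasklet(sched, worker("b"))
sched.run()
print(log)   # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Creating a tasklet inserts it into the runnables list, exactly the behaviour the paragraph above describes; `remove()`/`insert()` pause and resume it without any extra start() step.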

>>> * If the tasklet has a single frame and it performs a python
>>>   "return", the caller of tasklet.run gets the return value.
> The caller of run() is somewhere else by now, because run() does not
> block. That's the whole point!

What run() does right now is block the current tasklet. It's basically a
yield_to(). From this point of view, I don't think of it as a problem.
Sometimes, I feel the need to make sure that my tasklet *boots*, that is,
executes its starting code up to the first yield point, and I want to make sure
this happens before other code gets executed. So, I just say "yield_to"
(through run). Of course, I could use channels etc., but it just looks easier
this way. But I don't really need any other feature from run(), and I don't
like the calling stack and return values.
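The "boot to the first yield point" idiom can be mimicked with a plain generator, where driving it once by hand plays the role of run()/yield_to; the names and structure here are illustrative, not the Stackless API:

```python
def server():
    # boot phase: set up state before anyone else is allowed to run
    config = {"ready": True}
    yield config             # first yield point: booting is complete
    while True:
        yield "serving"      # steady-state behaviour

tasklet = server()
booted = next(tasklet)       # like tasklet.run(): execute up to the first yield
assert booted["ready"]       # the boot code has definitely run at this point
```

The caller regains control exactly at the first yield, which is the guarantee Giovanni wants from run() without any return-value machinery.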

> OK... what about various ideas about custom scheduling? Here's my
> proposal. (I'm thinking as I go here).
> A module property stackless.scheduler, to which one can assign an
> object.

This is OK with me. One thing I would like to discuss about the scheduler is
whether we need/want to support multiple schedulers. My idea was to be able to
create N different schedulers, each one with its own list of tasklets. A
tasklet might even be inserted into more than one scheduler. When I call
scheduler.run(), that scheduler becomes the active one and starts executing its
tasklets until either the list is empty (all tasklets are dead/removed),
scheduler.run() is executed on another scheduler object, or maybe a specific
member function like scheduler.break() is called. This would return execution
to the point where the original scheduler.run() was executed.
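A sketch of that proposed API, emulated with generators; everything here is hypothetical (note that `break` is a reserved word in Python, so the sketch uses `break_` for the proposed scheduler.break()):

```python
from collections import deque

class Scheduler:
    """Sketch of the proposed per-group scheduler (hypothetical API)."""
    def __init__(self):
        self.runnables = deque()
        self._stopped = False

    def add(self, gen):
        self.runnables.append(gen)

    def break_(self):
        self._stopped = True         # return control to run()'s caller

    def run(self):                   # run this group until empty or broken
        self._stopped = False
        while self.runnables and not self._stopped:
            gen = self.runnables.popleft()
            try:
                next(gen)
            except StopIteration:
                continue
            self.runnables.append(gen)

ran = []
group_a = Scheduler()

def task(name, sched):
    for i in range(3):
        ran.append((name, i))
        if name == "a1" and i == 0:
            sched.break_()           # e.g. a state change: hand control back
        yield

group_a.add(task("a1", group_a))
group_a.add(task("a2", group_a))
group_a.run()                        # stops right after a1 requests the break
```

After the break, both tasklets are still queued in group_a, so a later group_a.run() would resume the group where it left off; a state machine would simply call run() on whichever group is active.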

I don't know if this sounds sensible to do, it's just a first proposal. The
problem I'm trying to solve is that I need to have different groups of
tasklets. When my program is in state A, I want only some tasklets to run.
When it switches to state B, A's tasklets are disabled (removed, whatever), and
B's tasklets become active. I currently implement this by subclassing the
tasklets themselves so that I can use isinstance() as a way to identify the
tasklet groups (all tasklets for state A derive from TaskletGroupA, which is an
empty class inheriting from stackless.tasklet, etc.), and then adding/removing
them to/from the main scheduler. This looks suboptimal to me, which is why I
thought of having different schedulers.
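The isinstance() workaround described above looks roughly like this; the class names come from the mail, with stackless.tasklet replaced by a plain stand-in so the sketch is self-contained:

```python
class Tasklet:                       # stand-in for stackless.tasklet
    pass

class TaskletGroupA(Tasklet):        # empty marker class, one per program state
    pass

class TaskletGroupB(Tasklet):
    pass

all_tasklets = [TaskletGroupA(), TaskletGroupB(), TaskletGroupA()]

# entering state A: only group-A tasklets stay in the main scheduler
state_a = [t for t in all_tasklets if isinstance(t, TaskletGroupA)]
```

A per-group scheduler would replace this type test with plain membership: a tasklet belongs to a group because it was inserted into that scheduler, not because of its class.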

> I think we need a some kind of syncing primitive that is lower-level
> than a channel. Maybe a good old semaphore?

I think a semaphore can easily be implemented in terms of a channel. Actually,
someone should *really* try and rewrite the good old uthread.py for Stackless
3.0. That'd be a feature boost for most users.
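The channel-based semaphore is indeed only a few lines. Here is a sketch of the uthread-style pattern, with a single-threaded stub standing in for stackless.channel so the control flow can be followed (in real Stackless, receive() would suspend the calling tasklet until a matching send()):

```python
class StubChannel:
    """Stand-in for stackless.channel, single-threaded, for illustration only.
    balance < 0 means receivers are blocked waiting; here we just record
    the hand-offs instead of actually switching tasklets."""
    def __init__(self):
        self.balance = 0
        self.handoffs = []

    def receive(self):
        self.balance -= 1            # real Stackless: block this tasklet

    def send(self, value):
        self.handoffs.append(value)  # real Stackless: wake one receiver
        self.balance += 1

class Semaphore:
    """Semaphore in terms of a channel: a free slot is a counter,
    a blocked acquire() is a pending channel.receive()."""
    def __init__(self, count=1):
        self.count = count
        self.channel = StubChannel()

    def acquire(self):
        if self.count > 0:
            self.count -= 1          # fast path: a slot is free
        else:
            self.channel.receive()   # block until someone releases

    def release(self):
        if self.channel.balance < 0:
            self.channel.send(None)  # hand the slot straight to a waiter
        else:
            self.count += 1

sem = Semaphore(1)
sem.acquire()                        # takes the only slot
sem.acquire()                        # would block in real Stackless
sem.release()                        # hands the slot directly to the waiter
```

Because Stackless tasklets are cooperative, the check-then-act in acquire()/release() needs no locking, which is what makes the channel version so short.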

> Once this, and the scheduler are in place, I feel methods like
> capture() and become() should be removed. These methods scare me :)
> They look like they exist to expose cool things you can do as a
> result of the current implementation strategy, but are semantically
> hair-raising.

I must say that I always liked become()/capture(). Yes, they look weird at
first, but they save you quite a lot of forwarding code in many situations. I
hit segfaults using them though, and Christian advised me to just get rid of
them in my code. That's what I did, but I wouldn't mind being able to use them
again in the future.

Giovanni Bajo

Stackless mailing list
Stackless at stackless.com
