[Stackless] Google's new Go programming language

Larry Dickson ldickson at cuttedge.com
Fri Nov 20 19:22:21 CET 2009

Here is the key description from Russ Cox on that thread:

>> Is there another possibility that I missed?

>It is okay for a channel to have many writers.
>In this case I think you can just have N goroutines
>each reading from a specific channel and writing
>to a shared channel.  The main goroutine just reads
>the one channel.
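Russ's fan-in suggestion can be sketched in Go roughly as below. The function name fanIn and the use of sync.WaitGroup for shutdown are my own choices; the thread only describes the shape (N goroutines, each forwarding from its own channel onto one shared channel that the main goroutine reads):

```go
package main

import (
	"fmt"
	"sync"
)

// fanIn forwards every value from the given source channels onto a
// single shared channel, closing it once all sources are drained.
func fanIn(sources []<-chan int) <-chan int {
	shared := make(chan int)
	var wg sync.WaitGroup
	for _, src := range sources {
		wg.Add(1)
		go func(c <-chan int) { // one forwarding goroutine per source
			defer wg.Done()
			for v := range c {
				shared <- v
			}
		}(src)
	}
	go func() {
		wg.Wait() // all writers finished
		close(shared)
	}()
	return shared
}

func main() {
	// Three independent writers, one reader on the shared channel.
	srcs := make([]chan int, 3)
	ro := make([]<-chan int, 3)
	for i := range srcs {
		srcs[i] = make(chan int, 5)
		ro[i] = srcs[i]
		for j := 0; j < 5; j++ {
			srcs[i] <- i*10 + j
		}
		close(srcs[i])
	}
	n := 0
	for range fanIn(ro) {
		n++
	}
	fmt.Println("received", n, "values") // received 15 values
}
```

Note there is no select here at all: the main goroutine simply ranges over the one shared channel, which is what makes this effectively FIFO.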

This is effectively a round-robin select. If the actual ability to
discriminate between nearly simultaneous inputs is desired, as opposed to
just a FIFO, then each type of input should get a shared channel of its own,
with a real select operating between these. Within each shared channel there
would be many users of the same type, all treated the same (i.e. FIFO), but
discrimination could operate between types, of which there might be half
a dozen, so a select is still efficient at this level. I think this
would cover all the usual cases, e.g. players in a large computer game.
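A rough sketch of that per-type layout, with illustrative names (the event types, dispatch, and the handled counter are mine, not from the thread): many writers of the same kind share one channel, and a single select discriminates only between the handful of kinds.

```go
package main

import "fmt"

type move struct{ player, dx int }

type chatMsg struct {
	player int
	text   string
}

// dispatch selects between input *types*, not individual writers;
// within each channel, same-type senders are served FIFO-style.
// It returns the number of events handled when quit is closed.
func dispatch(moves <-chan move, chat <-chan chatMsg, quit <-chan struct{}) int {
	handled := 0
	for {
		select {
		case m := <-moves:
			fmt.Printf("player %d moved %d\n", m.player, m.dx)
			handled++
		case c := <-chat:
			fmt.Printf("player %d says %q\n", c.player, c.text)
			handled++
		case <-quit:
			return handled
		}
	}
}

func main() {
	moves := make(chan move) // shared by every move sender
	chat := make(chan chatMsg)
	quit := make(chan struct{})
	done := make(chan int)
	go func() { done <- dispatch(moves, chat, quit) }()
	moves <- move{player: 1, dx: 3} // any number of goroutines may send here
	chat <- chatMsg{player: 2, text: "hello"}
	close(quit)
	fmt.Println("handled", <-done, "events") // handled 2 events
}
```

The select stays cheap because it only ever ranges over the half-dozen type channels, no matter how many thousands of players are sending on each.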

Larry Dickson
Cutting Edge Networked Storage

On 11/17/09, Guy Hulbert <gwhulbert at eol.ca> wrote:
> On Tue, 2009-17-11 at 19:41 +0000, Kristján Valur Jónsson wrote:
> > First off, I don't understand what select does in go, but I assume
> > some conceptual similarity to unix select()
> Here is the documentation (link from thread below):
> http://golang.org/doc/go_spec.html#Select_statements
> >
> > One thing I'd like to point out is that the select() model of event
> > handling is inherently non-scalable. Managing it is an O(N) process
> > each time you want to send or handle an event.
> According to this thread, it can be done in O(1):
> http://groups.google.com/group/golang-nuts/browse_thread/thread/3ba2157b3259ee54/410a3c8c187b1bf7?lnk=raot
> --
> --gh
> _______________________________________________
> Stackless mailing list
> Stackless at stackless.com
> http://www.stackless.com/mailman/listinfo/stackless
