> John M. Klassa writes:
> > Sounds like Ken doesn't think much of Linux...
> > > My experience and some of my friends' experience is that Linux
> > > is quite unreliable. Microsoft is really unreliable but Linux is
> > > worse. In a non-PC environment, it just won't hold up. If you're
> > > using it on a single box, that's one thing. But if you want to use
> > > Linux in firewalls, gateways, embedded systems, and so on, it has a
> > > long way to go.
> Several weeks old --- he moderated his comments a bit when prompted by
> Eric Raymond. ISTR that Dennis Ritchie posted something at about the
> same time saying that *his* friends' experience had been entirely
> positive, but I've lost the pointer. Searching for "Thompson" on
> linuxtoday.com will get a few related articles, including the Raymond
> exchange.
This is older stuff too (January), but it is nice for symmetry:
Linus Torvalds on Plan 9:
[..discussing how design flaws in the OS API can be cast into stone....]
Another example of this happened in the Plan 9 operating
system. They had this really cool system call to do a better process
fork--a simple way for a program to split itself into two and
continue processing along both forks. This new fork, which Plan 9
called R-Fork (and SGI later called S-Proc) essentially creates two
separate process spaces that share an address space. This is helpful
for threading especially.
Linux does this too with its clone system call, but it was
implemented properly. However, with the SGI and Plan9 routines they
decided that programs with two branches can share the same address
space but use separate stacks. Normally when you use the same
address in both threads, you get the same memory location. But each
thread has its own stack segment, so a stack-based memory address
resolves to two different memory locations; the two threads can
share a stack pointer value without overwriting each other's stacks.
While this is a clever feat, the downside is that the overhead in
maintaining the stacks makes this in practice really stupid to
do. They found out too late that the performance went to hell. Since
they had programs which used the interface they could not fix
it. Instead they had to introduce an additional properly-written
interface so that they could do what was wise with the stack space.
While a proprietary vendor can sometimes try to push the design flaw
onto the architecture, in the case of Linux we do not have the
latitude to do this.
This is from 'Open Sources: Voices from the Open Source Revolution', an
O'Reilly book they recently put on the internet for free too:
http://www.oreilly.com/catalog/opensources/book/toc.html . The book has
some nice stuff in it, but for people who have been following the movement
for some time it will not contain that many new insights.
On the issue of multithreaded fork()s, I have no idea who has the best
design here. In physics data processing, I find that a plain old
fork-exec is entirely sufficient for my needs (like keeping 240 CPUs
busy), so I've never found an excuse to look into multithreading.