[FoRK] Cheap Electronics Dissection Project

Stephen D. Williams <sdw at lig.net>, Thu Oct 19 06:32:29 PDT 2006

Exactly.  Anything you scale up is going to accumulate more and more 
latency, even if you make it look flat and shared.  On 16+ processor 
systems, other-node memory access carries something like a 10x or greater 
latency penalty vs. local memory.  If you write programs assuming flat, 
equal-speed access, they will crawl.
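
To put a number like that in front of you on a real box, here is a rough
sketch (mine, not from this thread; the node numbers and buffer size are
arbitrary assumptions) that pins itself to NUMA node 0 and pointer-chases
through memory bound to the local node vs. node 1, using Linux's libnuma:

/* Rough sketch: times a pointer-chase over memory bound to the local
 * NUMA node vs. a remote node, using libnuma on Linux.
 * Build with: gcc -O2 numa_latency.c -lnuma
 * Actual numbers depend entirely on the machine. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <numa.h>

#define N (1 << 22)          /* pointer-sized slots, ~32 MB total */

static double chase(size_t *buf)
{
    /* Large-stride permutation so hardware prefetch can't hide much. */
    for (size_t i = 0; i < N; i++)
        buf[i] = (i + 4097) % N;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (size_t i = 0; i < N; i++)
        idx = buf[idx];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (idx == (size_t)-1) puts("");   /* keep idx live */
    return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "need a NUMA machine with >= 2 nodes\n");
        return 1;
    }
    numa_run_on_node(0);                       /* pin to node 0 */

    size_t *local  = numa_alloc_onnode(N * sizeof *local, 0);
    size_t *remote = numa_alloc_onnode(N * sizeof *remote, 1);
    if (!local || !remote) { perror("numa_alloc_onnode"); return 1; }

    printf("local  node: %.1f ns/access\n", chase(local)  / N);
    printf("remote node: %.1f ns/access\n", chase(remote) / N);

    numa_free(local,  N * sizeof *local);
    numa_free(remote, N * sizeof *remote);
    return 0;
}

On a big multi-node machine the remote figure should come out well above
the local one; on a small two-socket box the gap will be much more modest.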

sdw

Eugen Leitl wrote:
> On Thu, Oct 19, 2006 at 11:23:21AM +0100, Tony Finch wrote:
>
>> I'm not entirely convinced that transputer-style comms is faster than
>> zero-copy pointer passing in a shared memory multiprocessor.
>
> There isn't anything shared in this universe, and whenever you're
> buying into a sharing illusion, you pay the price in terms of
> relativistic latency, gate delays, bandwidth, design complexity,
> die real estate and power dissipation.
>
> This universe speaks OOP. You'd better learn to formulate your
> problem in terms of message passing, or prepare to pay the price.
>
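
To illustrate the message-passing framing the other way around, here is a
minimal sketch (again mine, with a made-up request/reply protocol) in which
one worker thread owns its array outright and the rest of the program never
touches that memory directly; it sends requests over a pipe and reads
replies back, a poor man's transputer link:

/* Rough sketch: per-thread private data plus message passing over pipes,
 * instead of every thread touching one big shared structure.
 * Build with: gcc -O2 -pthread msg_pass.c */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

#define SLOTS 1024

struct worker {
    int rx, tx;              /* request pipe read end, reply pipe write end */
    long data[SLOTS];        /* private state: only this thread touches it */
};

static void *worker_main(void *arg)
{
    struct worker *w = arg;
    int idx;
    /* Serve "increment slot idx and report its value" until EOF. */
    while (read(w->rx, &idx, sizeof idx) == (ssize_t)sizeof idx) {
        long val = ++w->data[idx % SLOTS];
        write(w->tx, &val, sizeof val);
    }
    return NULL;
}

int main(void)
{
    int req[2], rep[2];
    if (pipe(req) || pipe(rep)) { perror("pipe"); return 1; }

    struct worker w = { .rx = req[0], .tx = rep[1] };
    pthread_t tid;
    pthread_create(&tid, NULL, worker_main, &w);

    /* The "client" never dereferences w.data; it only sends messages. */
    for (int i = 0; i < 5; i++) {
        long val;
        write(req[1], &i, sizeof i);
        read(rep[0], &val, sizeof val);
        printf("slot %d -> %ld\n", i, val);
    }

    close(req[1]);           /* EOF tells the worker to exit */
    pthread_join(tid, NULL);
    return 0;
}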

