>Tony Finch wrote:
>> But the bits that make context switches expensive aren't in the CPU
>> core! The main cost comes from reprogramming the MMU and invalidating
>> the level-1 cache.
>Interesting. But I have an answer to this one: throw out 98% of
>the MMU and all of the 1st level cache. If your memory is on-die,
>you don't need the (rather bulky) cache, as you're fetching a few
>kBit word with essentially 1st level cache latency.
Then you don't have enough memory, and you don't have enough
flexibility in your choice of system configuration.
>Retain only enough of the MMU to protect the OS and only the OS.
Not good enough. You have to protect applications from each other.
>[...] Since objects residing in different nodes talk by hardware
>message passing only, their individual address spaces are mutually
Oh, and throw away all the software that you use that has been
developed over the last 20 years and start from scratch again.
Note that my previous message assumed that the first-level cache is
virtually addressed, which is usually the case so that MMU lookups
don't slow down cache accesses. If the cache is physically addressed
then you can't access it quite as quickly, but context switches are
much less expensive, because physically-tagged cache lines stay valid
across an address-space switch and the cache doesn't need to be
flushed.
I don't know enough about SMP to say how cache coherency and
synchronization primitives (which often require a bus lock) affect the
choice of cache design...
-- 
f.a.n.finch    firstname.lastname@example.org    email@example.com
"Perhaps on your way home you will pass someone in the dark, and you
will never know it, for they will be from outer space."
This archive was generated by hypermail 2b29 : Fri Apr 27 2001 - 23:17:59 PDT