IBM Invests $4 Billion to Build

Koen Holtman koen@hep.caltech.edu
Wed, 22 Aug 2001 17:29:00 -0700 (PDT)


On Wed, 22 Aug 2001, Adam L. Beberg wrote:

>  You really think a company big
> enough to need 50,000 CPUs doesnt already HAVE them in house, oh please.

It's not just having the CPUs.  Managing 50,000 CPUs to work together
on one computation is hard.  Currently you need lots of admin manpower,
except for things that are not very data-intensive, like seti@home.
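To put some rough numbers on 'data-intensive' (all figures below are
made up for illustration, not measured from seti@home or any real
experiment), the difference is orders of magnitude of I/O per CPU:

  # Rough compute-to-data comparison, with illustrative numbers only.
  # seti@home-style job: a small work unit keeps a CPU busy for hours.
  seti_unit_kb   = 350      # size of one work unit (assumed)
  seti_cpu_hours = 10       # CPU time to process it (assumed)

  # Physics-style job: each CPU reads event data continuously.
  physics_mb_per_s = 2      # input rate per CPU (assumed)

  seti_rate_kb_per_s    = seti_unit_kb / (seti_cpu_hours * 3600)
  physics_rate_kb_per_s = physics_mb_per_s * 1024

  print("seti-style I/O per CPU:    %.3f KB/s" % seti_rate_kb_per_s)
  print("physics-style I/O per CPU: %.0f KB/s" % physics_rate_kb_per_s)
  print("ratio: ~%.0fx more data movement" %
        (physics_rate_kb_per_s / seti_rate_kb_per_s))

At ~0.01 KB/s per CPU you can feed 50,000 machines over slow internet
links; at MB/s per CPU you cannot, and that is where the management
pain starts.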

The latest attempt to make managing 50,000 CPUs and their associated
disks etc. easier is called 'grid technology', with projects like Globus,
Condor, Legion, and SRB.  IBM is now also buying into this.  Grid
technology is already big in the parts of academia that do huge
computations.  Because really, if you are a physics collaboration you
want to do physics, not have many of your people play sysadmin all the
time.  Besides particle physics there is astronomy data analysis,
climate modelling, aerodynamics modelling, earthquake simulation, etc.
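As a toy illustration of the kind of bookkeeping this middleware
automates, here is a self-contained sketch in the spirit of Condor's
matchmaking (this is not the actual Globus or Condor API, and the job
and host names are hypothetical): queued jobs get paired with machines
that satisfy their requirements, something that otherwise eats admin
time at scale.

  from collections import deque

  # Toy matchmaker: pair queued jobs with machines that meet their
  # requirements.  Real grid middleware adds authentication, data
  # staging, retries, and monitoring on top of this.
  jobs = deque([
      {"name": "reco-001", "min_mem_mb": 512},
      {"name": "reco-002", "min_mem_mb": 256},
  ])
  machines = [
      {"host": "node01", "mem_mb": 1024, "busy": False},
      {"host": "node02", "mem_mb": 256,  "busy": False},
  ]

  while jobs:
      job = jobs.popleft()
      match = next((m for m in machines
                    if not m["busy"] and m["mem_mb"] >= job["min_mem_mb"]),
                   None)
      if match is None:
          jobs.append(job)   # no free slot; requeue and stop (toy policy)
          break
      match["busy"] = True
      print("dispatch %s -> %s" % (job["name"], match["host"]))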

The physics experiment I work for will need to manage 10,000 CPUs in
2006.  We calculated that, and the figure includes the effects of
Moore's law.  Currently we are spending a lot of time managing even a
few hundred CPUs to do large physics computations, and the only way this
is going to scale to 10,000 is by putting smarter management software in
place.  That is where 'grid technology' comes in, so I am spending most
of my working time these days interacting with 'grid' research projects.
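For a feel of how Moore's law enters such an estimate (the numbers
below are illustrative, not our experiment's actual figures):

  # Back-of-envelope: how many 2006-era CPUs cover a fixed compute
  # need, assuming per-CPU speed doubles every 18 months.
  # All inputs are made up for illustration.
  need_in_2001_cpus = 100000   # hypothetical need in today's CPUs
  years             = 5        # 2001 -> 2006
  doubling_months   = 18

  speedup = 2 ** (years * 12 / doubling_months)   # ~10x over 5 years
  cpus_needed_2006 = need_in_2001_cpus / speedup

  print("per-CPU speedup by 2006: %.1fx" % speedup)
  print("CPUs needed in 2006:     %d" % round(cpus_needed_2006))

So a computation that would take ~100,000 of today's CPUs shrinks to
roughly 10,000 of 2006's CPUs, which is why the projected number is
what it is rather than something far larger.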

Now as far as IBM is concerned, I don't see many of these data-intensive
science people in academia buying compute power on IBM's $4B farms:
we'll buy our own cheap PC boxes.  Currently they are putting up a new
building at CERN just to have enough room for all the future cheap PC
boxes.  However, if you are a pharmaceutical company, the idea of
outsourcing the whole lot to IBM has definite appeal.  It would be
interesting to know, though, how IBM calculated this $4B figure.

Koen.