[FoRK] Why Fukushima made me stop worrying and love nuclear power

Ken Ganshirt @ Yahoo ken_ganshirt at yahoo.ca
Fri Mar 25 21:30:19 PDT 2011

--- On Fri, 3/25/11, Stephen Williams <sdw at lig.net> wrote:

> From: Stephen Williams <sdw at lig.net>
> Subject: Re: [FoRK] Why Fukushima made me stop worrying and love nuclear power
> To: fork at xent.com
> Received: Friday, March 25, 2011, 2:09 PM
> On 3/25/11 11:31 AM, Ken Ganshirt @
> Yahoo wrote:
> > --- On Fri, 3/25/11, Gregory Alan Bolcer<greg at bolcer.org>  wrote:
> > 
> >> From: Gregory Alan Bolcer<greg at bolcer.org>
> >> Subject: Re: [FoRK] Why Fukushima made me stop
> worrying and love nuclear power
> >> To: fork at xent.com
> >> Received: Friday, March 25, 2011, 9:30 AM
> >> It's not binary, design is an evolutionary
> process.  "Overcome" and
> >> "Eliminate" are two different things.  You
> can always
> >> design something better.
> >> 
> >> Obviously you aren't a fan of fault tree
> analysis?
> >> 
> > In instances like these it is pretty much binary.
> Either a decision is made or an action taken that causes or
> enables a critical problem sometime in the [immediate or
> foreseeable] future or it isn't. Either the design allows
> for that to occur or it prevents it.
> > 
> > Yes, you can make it incrementally "better" (and I am
> not arguing against trying). From an analytical standpoint
> (fault tree/whatever) you might even approach perfection.
> > 
> > But you still don't get it. It doesn't matter how good
> the design is if people choose to simply ignore critical
> aspects of it.
> So, if you design a self-contained reactor that is small
> enough, with a type of material that could not possibly melt
> down no matter how reconfigured, and a container that is
> hermetically sealed and designed and tested to withstand
> time and forces well in excess of anything conceivable (and
> then, probably, you bury the result):
> Your position is that it is inevitable that A) all of the
> design and testing will inevitably have a fatal flaw or B)
> no number or type of safeguards will prevent an out of spec
> reactor from getting out of the factory and C) doing
> widespread damage?  Just how do we fly planes? 
> Get to space?  Build large buildings?  Manage not
> to blow ourselves up with thousands of nuclear weapons?
> The semi-implicit assumption in all of this is that failure
> is truly catastrophic and totally unacceptable.  ...

Good points, sw, but you miss the point. WHEN the RESULT of failure is truly catastrophic and totally unacceptable, it is highly unlikely that "design" will "overcome" that, never mind "eliminate" it. 

As is the case with aircraft, whose failures have truly catastrophic and totally unacceptable consequences: we have merely reduced the probability of disaster to a level that sufficient numbers of people find acceptable, so they are willing to live with the remaining risk.

Same thing with automobiles (my own example). The last time I checked the numbers, I think they were still the largest cause of accidental death and destruction in North America. And buildings. Aren't household accidents another very significant source of harm to humans?

The question is not "HOW" we fly planes or go to space or ... etc. It's "WHY". The fact that you ask the question makes it clear that even you understand the risk implicit in all of those activities. Because we have been completely incapable of designing it out.

The mitigating factor is, again, human, not "design". That is, we have weighed the tradeoffs and appear to be willing, in sufficiently large numbers, to accept the risks for the utility they (cars, airplanes, whatever) provide. The ongoing push by the public to improve safety in all of the examples you provide is proof positive that "design" has not - yet - won. And may never. It has simply reduced the risk to the point where large numbers of us are willing to accept what remains.

Your micro-reactor example describes something entirely different. That changes the problems. It MAY change them sufficiently for the solution to be safely deployed, if you look at it tactically; producing something whose failure consequences resemble, say, the failure of a large electric motor. But does it cause other problems that are equally destructive, just different?

Ferinstance, the resources consumed to build all those teeny-tiny reactors. The severely limited numbers who could afford them. Do they "scale" -- large numbers of small reactors -- as "well" as the big ones? Etc?

Viewed holistically I'm not certain it's a "better" or even "safer" approach. But the possibilities are clearly intriguing. 

Unfortunately that's not what the main proponents of nuclear power -- in particular governments (those that are supportive) and the power companies -- desire to build. The current power infrastructures (electrical grid and politics/business) can't easily accommodate them. The Powers That Be want to build the big complicated thingies that produce really big problems, even if only occasionally.

No, I'm not proposing we stop doing everything that's unsafe (hell, I own and ride two motorcycles and, even though I'm coming up on 65, one of the things at the top of my bucket list is to learn to ski). That's not the discussion. The debate is whether "design" can at least "overcome", if not "eliminate", the most significant risks of endeavors like building and operating large nuclear plants. I, and at least a couple of others, maintain that it cannot. It can't even succeed in smaller, less complex objects like personal motor vehicles. Or extension cords.

Because you can't design around humans who WILL, through stupidity or cupidity, choose to ignore, impair or dismantle the very things in the designs that would, otherwise, keep us safe.
