24 Hours at Fukushima

Go read the whole article. It is well worth your time. The rest of the article goes through the 24-hour timeline detailing the complications and missteps that led to the disaster.
A blow-by-blow account of the worst nuclear accident since Chernobyl
By Eliza Strickland / November 2011
Sometimes it takes a disaster before we humans really figure out how to design something. In fact, sometimes it takes more than one.
Millions of people had to die on highways, for example, before governments forced auto companies to get serious about safety in the 1980s. But with nuclear power, learning by disaster has never really been an option. Or so it seemed, until officials found themselves grappling with the world's third major accident at a nuclear plant. On 11 March, a tsunami set in motion a sequence of events that led to meltdowns in three reactors at the Fukushima Dai-ichi power station, 250 kilometers northeast of Tokyo.
Unlike the Three Mile Island accident in 1979 and Chernobyl in 1986, the chain of failures that led to disaster at Fukushima was caused by an extreme event. It was precisely the kind of occurrence that nuclear-plant designers strive to anticipate in their blueprints and emergency-response officials try to envision in their plans. The struggle to control the stricken plant, with its remarkable heroism, improvisational genius, and heartbreaking failure, will keep the experts busy for years to come. And in the end the calamity will undoubtedly improve nuclear plant design.
True, the antinuclear forces will find plenty in the Fukushima saga to bolster their arguments. The interlocked and cascading chain of mishaps seems to be a textbook validation of the "normal accidents" hypothesis developed by Charles Perrow after Three Mile Island. Perrow, a Yale University sociologist, identified the nuclear power plant as the canonical tightly coupled system, in which the occasional catastrophic failure is inevitable.
On the other hand, close study of the disaster's first 24 hours, before the cascade of failures carried reactor 1 beyond any hope of salvation, reveals clear inflection points where minor differences would have prevented events from spiraling out of control. Some of these are astonishingly simple: If the emergency generators had been installed on upper floors rather than in basements, for example, the disaster would have stopped before it began. And if workers had been able to vent gases in reactor 1 sooner, the rest of the plant's destruction might well have been averted.
The world's three major nuclear accidents had very different causes, but they have one important thing in common: In each case, the company or government agency in charge withheld critical information from the public. And in the absence of information, the panicked public began to associate all nuclear power with horror and radiation nightmares.
We've learned a great deal about the Fukushima accident in the past seven months. But the nuclear industry's trial-and-error learning process is a dreadful thing: The rare catastrophes advance the science of nuclear power but also destroy lives and render entire towns uninhabitable. Three Mile Island left the public terrified of nuclear power; Chernobyl scattered fallout across vast swaths of Eastern Europe and is estimated to have caused thousands of cancer deaths. So far, the cost of Fukushima is a dozen dead towns ringing the broken power station, more than 80 000 refugees, and a traumatized Japan. We will learn even more as TEPCO releases more details of what went wrong in the first days of the accident. But as we go forward, we will also live with the knowledge that some future catastrophe will have yet more lessons to teach us.

TEPCO's foot-dragging, its refusal to share information, and its reluctance to bring in outside experts are criminal. Every engineer I know was always willing to share information. It is a tough career, and you want to learn from others' experiences. But management puts up walls to protect "intellectual property" or, in the case of TEPCO, to hide facts from the prying eyes of the wider world.
One of the hats I wore was "software safety engineer," and my experience was that it was difficult to get management to pay more than lip service to safety. Worse, the analysis required to uncover failure modes was not a popular activity for engineers who got paid for producing designs. I also worked as a performance engineer, and I ran into similar problems getting engineers to take performance analysis seriously. But I found they were more amenable to performance work because it was more concrete: I built models, and a model is something tangible. Safety analysis produced paper, but it didn't come across as "real." I'm not saying that engineers were hostile to the analysis. Some were interested, but most were indifferent. They just saw it as another hoop they had to jump through to get their job done. There are excellent books on safety but, just like human-centred design, these books aren't on the bookshelves of most engineers.