Why are nuclear plants so expensive? Safety’s only part of the story

Let any discussion of nuclear power go on long enough, and someone will inevitably insist that the only reason plants have become unaffordable is a proliferation of safety regulations. The argument is rarely (if ever) fleshed out. No specific regulation is ever identified as problematic, and no consideration is given to the possibility that we might have learned something at, say, Fukushima that merits addressing through regulation.

There’s now a paper out that provides some empirical evidence that safety changes have contributed to the cost of building new nuclear reactors. But the study also makes clear that safety is only one of several factors, accounting for roughly a third of the soaring costs. And contrary to what those in the industry seem to expect, focusing on standardized designs doesn’t really help matters: costs continued to grow as more plants of a given design were built.

More of the same

The analysis, done by a team of researchers at MIT, is remarkably comprehensive. For many nuclear plants, the team has detailed construction records, broken out by which building the materials and labor went to and how much each cost. There’s also a detailed record of safety regulations and when they were instituted relative to construction. Finally, the researchers brought in the patent applications filed by the companies that designed the reactors; these documents describe the motivations for design changes and the problems those changes were intended to solve.

There are limits to what even this level of detail can provide. You can’t determine, for example, whether the cost of a specific number of workers on a given building should be assigned to implementing safety regulations. And in many instances, design changes were made for multiple reasons, so there’s no clean safety/non-safety breakdown. Still, the collection of sources allows the researchers to draw some very direct conclusions about the sources of changing costs and to build well-informed models that can infer the reasons for other costs.

The researchers start with a historic analysis of plant construction in the US. The basic numbers are grim: the typical plant built after 1970 had a cost overrun of 241 percent—and that’s before considering the financing costs of the construction delays.

Many in the nuclear industry view this as, at least in part, a failure to standardize designs. There’s an extensive literature about the expectation that building additional plants based on a single design will mean lower costs due to the production of standardized parts, as well as management and worker experience with the construction process. That sort of standardization is also a large part of the motivation behind small, modular nuclear designs, which envision a reactor assembly line that then ships finished products to installations.

But many US nuclear plants were in fact built around the same design, differing only in obvious site-specific aspects like foundation requirements. The researchers track each design separately and calculate a “learning rate”—the drop in cost associated with each successive completion of a plant based on that design. If things had gone as expected, the learning rate would be positive, with each sequential plant costing less. Instead, it’s -115 percent.
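
For a sense of what that calculation involves, here’s a minimal sketch in Python of a standard experience-curve fit, which is the usual way a learning rate is defined. The cost series is hypothetical, chosen only to roughly reproduce the paper’s headline figure; it is not the study’s actual data or code.

```python
import numpy as np

def learning_rate(costs):
    """Estimate the learning rate from per-plant costs, ordered by completion.

    Fits the standard experience curve cost_n = cost_1 * n**b in log-log
    space, then converts the exponent into the fractional cost change per
    doubling of cumulative builds: 1 - 2**b. A positive result means costs
    fall with experience; a negative one (as the MIT team found for US
    reactors) means they rise instead.
    """
    n = np.arange(1, len(costs) + 1)
    b, _ = np.polyfit(np.log(n), np.log(costs), 1)  # slope of the log-log fit
    return 1 - 2 ** b

# Hypothetical series in which costs more than double with each doubling of
# plants built, mimicking the paper's -115 percent finding.
costs = [n ** np.log2(2.15) for n in range(1, 7)]
print(f"learning rate: {learning_rate(costs):+.0%}")  # about -115%
```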

Where’s that money go?

Figuring out what’s driving those changes involved diving into detailed accounting records on the construction of these plants; such data were available for plants built after 1976. The researchers broke out the costs for 60 different aspects of construction and found that nearly all of them went up, suggesting there was unlikely to be a single, unifying cause for the price increases. But the largest increases occurred in what they termed indirect costs: engineering, purchasing, planning, scheduling, supervision, and other factors not directly associated with the process of building the plant.

The increased indirect costs affected nearly every aspect of plant construction. As far as direct costs went, the biggest contributors were simply the largest structures in the plant, such as the steam supply system, the turbine generator, and the containment building.

Some of the changed costs are rather complicated. For example, many reactors shifted to a design that allowed greater passive cooling, which would make the plant safer in the case of hardware failure. That required separating the reactor vessel from the containment building walls, which in turn allowed the use of lower-quality steel (lowering the price) but more of it (more than offsetting those savings). All of this also changed the construction process, although it’s difficult to determine exactly how it altered the amount of labor required.

To dig into the details, the researchers tracked material deployment rates—how quickly material brought to a site ended up incorporated into a finished structure. While those rates declined slightly for the construction industry as a whole over the study period, they plunged for nuclear projects. By the time of the Three Mile Island accident, steel was already being deployed at about one-third the rate of the construction industry at large. Interviews with construction workers indicated that they were spending as much as 75 percent of their time idle.

Regulation

Since many of the researchers are in MIT’s Department of Nuclear Engineering, they were able to connect the cost changes to specific motivations and to check those connections against the patents and journal papers that describe the ideas driving the changes.

Some of the driving factors are definitely regulatory. After the Three Mile Island accident, for example, regulators “required increased documentation of safety-compliant construction practices, prompting companies to develop quality assurance programs to manage the correct use and testing of safety-related equipment and nuclear construction material.” Putting those programs in place and producing that documentation both added costs to the projects.

But those were far from the only costs. The researchers cite a worker survey indicating that about a quarter of the unproductive labor time came from workers waiting for tools or materials to become available. In many other cases, construction procedures were changed mid-build, leading to confusion and delays. Finally, there was the general decrease in performance noted above. All told, problems that reduced construction efficiency accounted for nearly 70 percent of the increased costs.

In contrast, R&D-related expenses, which included both regulatory changes and things like the identification of better materials or designs, accounted for the remaining third of the increases. Often, a single change met several R&D goals, so assigning that full third to regulatory changes is probably an overestimate.

So, while safety regulations added to the costs, they were far from the primary factor. And deciding whether they were worthwhile costs would require a detailed analysis of every regulatory change in light of accidents like Three Mile Island and Fukushima.

As for the majority of the cost explosion, the obvious question is whether we can do any better. Here, the researchers’ answer is very much a “maybe.” They consider things like the possibility of using a central facility to produce high-performance concrete parts for the plant, as we have shifted to doing for projects like bridge construction. But this concrete is often more expensive than materials poured on site, meaning the higher efficiency of the off-site production would have to more than offset that difference. The material’s performance in the environment of a nuclear plant hasn’t been tested, so it’s not clear whether it’s even a solution.

In the end, the conclusion is that there are no easy answers for making nuclear plant construction more efficient. And until there are, nuclear will continue to be badly undercut by both renewables and fossil fuels.

Joule, 2020. DOI: 10.1016/j.joule.2020.10.001

The 2020 Atlantic hurricane season is finally over. What should we make of it?

All of 2020’s tropical storms and hurricanes in a single image.

NOAA

Monday was the last “official” day of the Atlantic hurricane season, bringing down the curtain on what has been a frenetic year for storms forming in the Atlantic Ocean, Gulf of Mexico, and Caribbean Sea.

The top-line numbers are staggering: there were a total of 30 tropical storms and hurricanes, surpassing the previous record of 28 set in the year 2005. For only the second time, forecasters at the National Hurricane Center in Miami ran out of names and had to resort to using the Greek alphabet.

Of all those storms, 12 made landfall in the United States, obliterating the previous record of nine landfalling tropical storms or hurricanes set in 1916. The state of Louisiana alone experienced five landfalls. At least part of the state fell under coastal watches or warnings for tropical activity for a total of 474 hours this summer and fall. And Laura became the strongest hurricane to make landfall in the Pelican State since 1856.

Not all records broken

By some measures, however, this season was not all that extraordinary. Perhaps the best measurement of a season’s overall activity is not the number of named storms but rather its “accumulated cyclone energy,” or ACE, which sums up the intensity and duration of storms. So a weak, short-lived tropical storm counts for almost nothing, whereas a major, long-lived hurricane will quickly rack up dozens of points.
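
ACE has a simple standard definition: sum the square of a storm’s maximum sustained winds, in knots, at each six-hour advisory while the storm is at tropical-storm strength or greater, then divide by 10,000. Here’s a minimal sketch of that arithmetic in Python; the wind values are invented for illustration.

```python
def ace(six_hourly_winds_kt):
    """Accumulated cyclone energy from one storm's 6-hourly max winds (knots).

    Only periods at tropical-storm strength or above (>= 34 kt) count.
    """
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 34) / 1e4

# A weak, short-lived tropical storm contributes almost nothing...
print(ace([35, 40, 35]))                          # about 0.4 units
# ...while a long-lived major hurricane racks up points at every advisory.
print(ace([50, 75, 100, 120, 120, 110, 90, 65]))  # about 7.1 units
```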

The ACE value for the 2020 Atlantic season to date is 179.8—and another weak tropical or subtropical storm could still form. That’s notably higher than the climatological norm of about 104, but it would not quite crack the top 10 busiest Atlantic seasons on record, a list led by the 1933 and 2005 seasons.

In terms of estimated damages, this season has been far from a record-breaker as well. So far, damages across the Atlantic basin are estimated at $37 billion. That is substantially less than the devastating 2017 season, which included hurricanes Harvey and Irma and totaled more than $300 billion. It is also less than 2005, when Katrina, Rita, Wilma, and other storms produced damages topping $200 billion. One factor in 2020 was that most of the biggest storms missed heavily populated areas.

Also, the hyperactive Atlantic basin stands out among the other basins where tropical activity typically occurs, including the northeastern and northwestern Pacific Ocean, both of which were much quieter than normal this year. Overall, in 2020, the Northern Hemisphere is seeing an ACE value about 20 percent below normal levels for a calendar year.

Legacy of 2020

Perhaps the biggest legacy of this Atlantic hurricane season is the disturbing trend of tropical storms rapidly developing into strong hurricanes. This “rapid intensification” occurs when a storm’s maximum sustained winds increase by 35mph or more within a 24-hour period, and it was observed in 10 storms this year.
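
As a concrete illustration of that definition, here’s a minimal sketch in Python that flags rapid intensification in a storm’s advisory history. The function and the wind values are illustrative assumptions, not any official tracking tool.

```python
def rapidly_intensified(winds_mph, step_hours=6, threshold_mph=35,
                        window_hours=24):
    """Return True if max winds rose by >= threshold_mph within any window."""
    steps = window_hours // step_hours
    return any(
        later - earlier >= threshold_mph
        for earlier, later in zip(winds_mph, winds_mph[steps:])
    )

# Hypothetical storm whose winds jump from 40 to 85 mph over 24 hours.
print(rapidly_intensified([40, 50, 60, 75, 85, 90]))  # True
```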

Moreover, three late-season storms—Delta, Eta, and Iota—increased their wind speeds by 100mph or more in 36 hours or less. Iota, which slammed into Nicaragua on November 17, was the latest Category 5 hurricane on record in the Atlantic.

Some recent studies, including a paper published in Nature Communications in 2019, have found that climate change has goosed intensification. That study observed that, for the strongest storms, the rate of intensification over a 24-hour period increased by about 3 to 4 mph per decade from 1982 through 2009. Storms that strengthen quickly, especially near landfall, leave coastal residents and emergency planners with less time and information to make vital preparations and calls for evacuation.

Arecibo radio telescope’s massive instrument platform has collapsed

The immense instrument platform and the large collection of cables that supported it, all of which are now gone.

On Monday night, the enormous instrument platform that hung over the Arecibo radio telescope’s big dish collapsed due to the failure of the remaining cables supporting it. The risk of this sort of failure was the key motivation behind the National Science Foundation’s recent decision to shut down the observatory, as the potential for collapse made any attempt to repair the battered scope too dangerous for the people who would do the repairs.

Right now, details are sparse. The NSF has confirmed the collapse and says it will provide more information once it’s available. A Twitter user in Puerto Rico shared an image of the support towers that once held the cables suspending the instrument platform over the dish; nothing but empty space remains between them.

The immense weight of the platform undoubtedly caused significant damage to the dish below, and the huge metal cables that had supported it would likely have spread the damage well beyond where the platform landed. It’s safe to say that very little of the instrument is left in any shape to repair.

It’s precisely this sort of catastrophic event that motivated the NSF to shut down the instrument, a decision made less than two weeks ago. The separate failures of two cables earlier in the year suggested that the support system was in a fragile state, and the risks of another cable snapping in the vicinity of any human inspectors made even evaluating the strength of the remaining cables unacceptably risky. It’s difficult to describe the danger posed by the sudden release of tension in a metal cable that’s well over a hundred meters long and several centimeters thick.

With inspection considered too risky, repair and refurbishment were completely out of the question. The NSF took a lot of criticism from fans of the telescope in response to its decision, but the collapse both justifies the original decision and obviates the possibility of any alternatives, as more recent images indicate that portions of the support towers came down as well.

The resistance the NSF faced was understandable. The instrument played an important role in scientific history and was still in use when funding was available, as it provided capabilities that were difficult to replicate elsewhere. It was also the most important scientific facility in Puerto Rico, drawing scientists from elsewhere who engaged with the local research community and helped inspire students on the island to go into science. And beyond all that, it was iconic—until recently, there was nothing else like it, which made it a feature in popular culture and extended its draw well beyond the island where it was located.

Many of its fans were sad to contemplate its end and held out hope that some other future was possible for it. With yesterday’s collapse, the focus will have to shift to whether there’s a way to use the site for something that appropriately honors its legacy.

Russian spaceport officials are being sacked left and right

Vladimir Putin, center, and Dmitry Rogozin, far right, tour Russia’s new Vostochny Cosmodrome in October 2015.

Kremlin

The controversial leader of Russia’s space enterprises, Dmitry Rogozin, has continued a purge that has seen many of the nation’s top spaceport officials fired, arrested, or both.

Most recently, on November 27, Russian media reported that Rogozin fired the leader of the Center for Exploitation of Ground-Based Space Infrastructure, which administers all of Russia’s spaceports. Andrei Okhlopkov, the leader of this Roscosmos subsidiary, had previously faced a reprimand from Rogozin for “repeated shortcomings in his work.” The spaceport organization has more than 12,000 employees.

Earlier this month, Rogozin also fired Vladimir Zhuk, chief engineer of the center that administers Russian spaceports. According to Russian media reports, Zhuk was then arrested for abusing his authority in signing off on water supply contracts.

Both of these officials were working to bring Russia’s newest spaceport, Vostochny, in the country’s far east, up to its full capacity. In an article titled “At Vostochny, A Day Never Goes By Without Someone Going to Jail,” the newspaper Kommersant reported that Zhuk knew the water supply networks for the Vostochny spaceport were not complete when he authorized payment for them. (This article was translated for Ars by Rob Mitchell.)

Construction project drags on

Several other key officials connected with the Vostochny Cosmodrome—under development since 2011 and intended to reduce Russia’s reliance on the Baikonur Cosmodrome in Kazakhstan—have also been recently let go. These include Vostochny head Evgeny Rogoz (fired and under house arrest), Vostochny Director Roman Bobkov (fired and arrested), and Defense Ministry Inspector General Dmitriy Fomintsev (arrested).

Construction of the spaceport has been plagued by corruption, often involving embezzlement, and overall cost estimates for the facility have risen to more than $7.5 billion. Of the planned seven launch pads, just one is operational. A Soyuz-2 rocket first launched from this “Site 1S” in April 2016. A second pad, “Site 1A,” may see the launch of an Angara rocket next year.

Russian President Vladimir Putin has been critical of delays at Vostochny, most recently in 2019, citing concerns about corruption. It is not clear whether the latest round of firings is related to a recent meeting Putin held with Rogozin to go over the country’s space affairs. It seems that by firing and arresting his subordinates, Rogozin has so far been able to shift the blame for the Vostochny troubles onto other officials.

Nevertheless, his time may be coming. Rogozin is no stranger to corruption concerns, and Roscosmos is facing serious financial challenges. Not only is Russia no longer receiving large payments from NASA for Soyuz seats to carry its astronauts to the International Space Station, but funding from United Launch Alliance for the RD-180 rocket engine will also be ending within a few years. And there are serious questions about whether Russia’s next-generation Angara rocket will be able to compete with SpaceX’s Falcon 9 rocket for commercial launches.
