Computer modelers can trip up in two ways. First, the model may be poorly designed. Second, the input data may be flawed. As the old saying goes, “garbage in, garbage out.” Models need to be tested, but if your model is designed to predict the future, and you foresee major changes ahead whose outcomes you want to predict, how are you going to test the model?
Google.org thought that 1100 GW of mainly renewable generating capacity would be sufficient to run the American economy. Google believes that 360 new GW of wind generating capacity and 250 GW of solar installation will be needed by 2030. The rest is to come from geothermal, hydroelectric, and not more than 30 new nuclear plants. But where are we going to get the 1100 GW of generating capacity? First, hydro is not going to provide us with much new electricity: most of the best hydro sites are already in use, and environmental organizations have blocked hydro expansion for over a generation. We already have about 80 GW of hydro generating capacity, yet that capacity produces only about 6% of the electricity generated in the United States. Hydro's poor showing should have signaled Google.org that it had a problem with its model.
No one at Google seems to have been aware of the problem of solar and wind intermittency. Solar power systems operate at about 20% of their rated capacity in the desert Southwest, and at much less in the cloudy Southeast. Wind generators produce at best a little more than 40% of their rated capacity on the Great Plains. Electricity generated by renewables will not be reliable, and will not flow simply because customers throw a light switch. Customers in New York City will have to wait until the wind picks up in Amarillo before they turn on the lights. Making sure that renewable electricity is available when consumers actually want it is going to be hugely expensive. Google did not seem to have a clue.
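To see how quickly capacity factors eat into nameplate ratings, consider a back-of-envelope calculation using the rough figures above. This is a sketch with assumed round numbers, not a grid model:

```python
# Illustrative capacity-factor arithmetic. The 20% and 40% figures are the
# rough capacity factors cited in the text; the 100 GW demand is hypothetical.

def nameplate_needed(average_demand_gw, capacity_factor):
    """Nameplate capacity required to deliver a given average output."""
    return average_demand_gw / capacity_factor

# To average 100 GW of delivered power:
print(nameplate_needed(100, 0.40))  # Great Plains wind -> 250.0 GW of nameplate
print(nameplate_needed(100, 0.20))  # desert-Southwest solar -> 500.0 GW
```

In other words, a renewable plan must build two and a half to five times the nameplate capacity its average demand suggests, before even addressing when that output actually arrives.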
So I wondered why Google did not see its mistake. Then I watched a video of Google’s Chairman and CEO Eric Schmidt talking about the Google energy plan:
Son of a gun, Eric Schmidt had been drinking the Kool-Aid with the oracle of Snowmass, Amory Lovins. The Google energy plan relied heavily on efficiency to bridge gaps in renewable energy output. This was straight out of the gospel according to Amory Lovins, but not at all realistic. I am not an Amory Lovins fan. I am only one among a goodly number of reviewers who have pointed to flaws in Mr. Lovins’ thinking. In response to numerous criticisms, Amory Lovins appears to have abandoned his defense of his efficiency theories, as well as many of his other contentions. Yet while Lovins has abandoned the defense of many of his energy-related theories, he has not abandoned the theories themselves. One of Lovins’ pet theories is that nuclear power is too expensive. Eric Schmidt, however, still has faith in Lovins, and drinks the Kool-Aid from cups marked “nuclear power is too expensive” and “efficiency will save us.”
The U.S. Energy Information Administration estimates that the levelized cost of wind-generated electricity, without the cost of measures required to make it reliable, is competitive with nuclear, but cannot be expected to be substantially lower. A report from the UK, published by Parsons Brinckerhoff, estimated that the cost of onshore wind-generated electricity would be the same as the cost of nuclear. A chart from Joe Romm reflects the reality of nuclear and renewable costs. Note: Romm’s chart includes the subsidized cost of renewables as well as their unsubsidized cost. Of course, even with subsidies, someone pays the difference between the subsidized and the unsubsidized cost.
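For readers unfamiliar with the term, a levelized cost simply spreads a plant's annualized capital and operating costs over its annual output, then adds fuel. Here is a minimal sketch of the arithmetic; every input is a hypothetical round number chosen to show the mechanics, not to price any actual plant:

```python
# Simplified levelized-cost-of-electricity (LCOE) calculation.
# All plant parameters below are hypothetical illustration values.

def lcoe(overnight_cost, fixed_charge_rate, annual_om, fuel_per_mwh,
         capacity_mw, capacity_factor):
    """Levelized cost in $/MWh: annualized capital plus O&M spread over
    annual generation, plus per-MWh fuel cost."""
    annual_mwh = capacity_mw * capacity_factor * 8760  # hours per year
    annual_capital = overnight_cost * fixed_charge_rate
    return (annual_capital + annual_om) / annual_mwh + fuel_per_mwh

# Hypothetical 100 MW wind farm: $200M overnight cost, 10% fixed charge
# rate, $5M/yr O&M, zero fuel cost, 35% capacity factor.
print(round(lcoe(200e6, 0.10, 5e6, 0.0, 100, 0.35), 2))  # ~81.54 $/MWh
```

Notice what the formula leaves out: backup, storage, and transmission. That omission is exactly the point of the paragraphs that follow.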
Levelized costs reflect the cost of electricity as it leaves the generating facility, not as it arrives at the consumer’s business or home. If the generating system is unreliable, ways must be found to overcome that unreliability, and those ways usually cost money. In the case of renewable energy this means a system of backup generators, redundant renewable generators, or energy storage. None of these come free, and the cost of making the energy reliable will inevitably be passed on to the consumer. Thus the cost of reliable renewable electricity is likely to be considerably higher than the levelized cost of the electricity produced by renewable installations.
It has recently been argued that low-cost Chinese-manufactured backup systems will diminish the cost of making renewables reliable, but the cost of Chinese labor is rising rapidly, and Chinese labor is far less efficient than American or Western European labor. As a consequence, jobs have already started moving back from China to the United States, and this trend will probably continue for some time to come. Thus low-cost labor in China will not decrease the cost of renewable energy backup in the long run.
Renewables require higher material inputs than nuclear power, but currently hold an advantage in labor input. Labor costs can be lowered by factory manufacture, although transportation will limit the size of factory-built reactors. Serial manufacture also lowers reactor costs.
Large reactors require a huge amount of labor, millions of work hours, to build. In a factory, the tasks performed by skilled workers in the field assembly of large reactors can be turned over to machines. The use of labor-saving devices makes sense if you are going to build a lot of small reactors, so small factory-built modular reactors are the way to go if you want a lot of reactors in a hurry.
The advantage of large reactors is that as reactor energy output rises, materials and labor input per unit of output falls, which lowers price. Small reactors give up this economy of scale, but there are ways to counteract the problem. As we have already seen, labor costs can fall if you substitute mechanical slaves for wage-earning human workers; thus the labor costs of factory-built modular reactors will decline, especially as the number of manufactured units rises. Secondly, materials can be used more efficiently if the reactor is designed to be compact. Conventional nuclear technology requires large amounts of steel in the reactor core, in pressure vessels, heat exchangers, steam turbines, generators, and outer containment structures. Conventional nuclear power plant design also requires a lot of concrete.
Materials inputs into a nuclear power plant can be controlled by compact design, simplicity, and choice of nuclear technology. Increasing reactor operating temperature may increase the efficiency of materials use. Paradoxically, some low-temperature reactors are materials hogs, while some high-temperature nuclear technologies are very parsimonious with materials. Per F. Peterson, Haihua Zhao, and Robert Petroski of the University of California note:
analysis presented here suggests that the ESBWR uses 73% of the steel, and 50% of the concrete required to construct an ABWR. This suggests that new Generation III+ nuclear power construction in the U.S. will have substantially lower capital costs than was found with Generation III LWRs.
Then they add that closed cycle gas turbines
technology that will be demonstrated by the Next Generation Nuclear Plant (NGNP) has the potential to achieve comparable material inputs to LWRs at much smaller unit capacities, and when extrapolated to larger reactors, to further reductions in steel and concrete inputs.
In particular the University of California researchers like the Advanced High Temperature Reactor, a compact, molten-salt-cooled design.
In nuclear energy systems, the major construction inputs are steel and concrete, which comprise over 95% of the total energy input into materials. To first order, the total building volume determines total concrete volume. The quantity of concrete also plays a very important role in deciding the plant overall cost:
• Concrete related material and construction cost is important in total cost (~25% of total plant cost for 1970’s PWRs);
• Concrete volume affects construction time;
• Rebar (reinforcing steel in concrete) is a large percentage of total steel input (about 0.06 MT rebar per MT reinforced concrete for 1970’s PWRs);
• Rebar is about 35% of total steel for 1970’s PWRs;
• Concrete volume affects decommissioning cost.
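The rebar ratios Peterson, Zhao, and Petroski quote permit a quick back-of-envelope estimate of a plant's total steel from its concrete tonnage. The concrete figure below is a hypothetical round number for illustration only:

```python
# Steel estimate from the quoted 1970s-PWR ratios: about 0.06 MT of rebar
# per MT of reinforced concrete, with rebar making up ~35% of total steel.

def steel_from_concrete(concrete_mt, rebar_per_mt=0.06, rebar_share=0.35):
    """Return (rebar tonnage, implied total steel tonnage)."""
    rebar = concrete_mt * rebar_per_mt
    total_steel = rebar / rebar_share
    return rebar, total_steel

# Hypothetical plant using 400,000 MT of reinforced concrete:
rebar, total = steel_from_concrete(400_000)
print(rebar, round(total))  # 24000.0 MT of rebar, ~68571 MT of total steel
```

The estimate shows why the researchers treat concrete volume as a first-order cost driver: cut the concrete and a large share of the steel goes with it.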
Not only reactors but also generating turbines can be made compact. For example, the supercritical carbon dioxide turbines that are compatible with high-temperature reactors are extremely compact and highly efficient. V. Dostal, M.J. Driscoll, and P. Hejzlar of MIT state:
The thermal efficiency of the advanced design is close to 50% and the reactor system with the direct supercritical CO2 cycle is ~ 24% less expensive than the steam indirect cycle and 7% less expensive than a helium direct Brayton cycle. It is expected in the future that high temperature materials will become available and a high performance design with turbine inlet temperatures of 700°C will be possible. This high performance design achieves a thermal efficiency approaching 53%, which yields additional cost savings.
The turbomachinery is highly compact and achieves efficiencies of more than 90%. For the 600 MWth/246 MWe power plant the turbine body is 1.2 m in diameter and 0.55 m long, which translates into an extremely high power density of 395 MWe/m3. The compressors are even more compact as they operate close to the critical point where the density of the fluid is higher than in the turbine. The power conversion unit that houses these components and the generator is 18 m tall and 7.6 m in diameter. Its power density (MWe/m3) is about ~ 46% higher than that of the helium GT-MHR (Gas Turbine Modular Helium Reactor).
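The quoted power density is easy to check from the turbine dimensions given, treating the turbine body as a cylinder:

```python
import math

# Verifying the MIT figure: a 246 MWe turbine body 1.2 m in diameter
# and 0.55 m long.
volume = math.pi * (1.2 / 2) ** 2 * 0.55  # cylinder volume, ~0.62 m^3
power_density = 246 / volume              # MWe per cubic meter
print(round(power_density))               # ~395, matching the paper
```

A conventional steam turbine hall is orders of magnitude larger per MWe, which is why compact turbomachinery matters so much for factory-built plants.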
Simplicity can also lower reactor cost. Again high operating temperature and compactness are not necessarily enemies of simplicity in NPP design.
Small compact reactors will be easier to deploy than large reactors. In order to replace fossil fuels in a little over a generation, post-carbon energy technology must be capable of large-scale deployment. The nuclear manufacturing system that has developed over the last 50 years, in addition to requiring a large skilled labor input, takes several years from the time the first shovelful of soil is moved until the electrical generators are turned on. Thus it is extremely desirable to develop energy technology that can be deployed rapidly during the next 40 years.
The small reactor is drawing increasing attention. A recent report from the Organization for Economic Cooperation and Development titled “Current Status, Technical Feasibility and Economics of Small Nuclear Reactors,” noted the potential of small reactors to be a game changer. Yet the latest Google modeling effort “Examining the Impact of Clean Energy Innovation on the United States Energy System and Economy,” entirely ignores the possibilities opened up by small reactors and advanced nuclear technology.
Matt Hourihan, a Clean Energy Policy Analyst at the Information Technology and Innovation Foundation (ITIF) notes significant problems with the new Google report,
Of course, the big, obvious catch is that Google makes some fairly substantial assumptions about energy costs. Some of these are quite aggressive indeed. For example, under Google’s assumptions, onshore wind costs decline by more than 50 percent by 2050 – twice as much as the IEA has predicted. The assumptions for solar PV, CCS, and the other technologies are at least as aggressive – some would say unrealistic.
Despite these flaws, Hourihan sees some good things coming out of the Google Report,
But the efficacy of these assumptions are not the point of the report, nor does it mean the report doesn’t have value: it makes clear the enormous upside, economically and environmentally, of spurring breakthrough clean technologies — so long as we get both the technology and the policy right. It’s not a question of either/or. Any efforts to mitigate emissions that don’t seek to accelerate energy innovation will likely end in failure, and miss an economic opportunity. Under Google’s model, neither the application of a $30 per ton carbon price nor a more robust set of policies and mandates to drive cleantech adoption reduced emissions as effectively on their own as when they were coupled with breakthrough innovations to drive cost declines. It’s a similar finding we published in a report a few months ago. And relying on these policies without also driving technology would lead to slower growth relative to the innovation approach. In terms of outcomes, the best policy mix thus appears to be one that incorporates an urgent push for radical technological innovation with a broad batch of policies.
This view is clearly consistent with the views I present on Nuclear Green. The Nuclear Green views are:
* Current renewable technology is too expensive
* Technological breakthroughs are unlikely to drive the cost of renewables down
* Large Light Water Reactors are and will be too expensive, as well as too limited to satisfy many energy needs
* There are technology, product manufacturing and product packaging routes that will drive the cost of advanced nuclear power significantly below the cost of either conventional renewables or conventional nuclear power.
* Small, low cost, advanced nuclear power plants can solve many post-carbon energy problems that are not solvable by solar, wind, or conventional nuclear technology.
* Developmental paths that are likely to produce low cost, advanced nuclear technology have been known for over a generation, but have been ignored.
We are not yet at a place that will allow us to agree on a technology, but the time is near at hand when society must agree on its energy goals and begin to set policy. The time for confusion is clearly over; we must begin to act soon.