In this case ARPA-E will be overseeing the following program areas, as described in the agency's report:
- Plants Engineered To Replace Oil (PETRO). Technologies for low-cost production of advanced biofuels are limited by the small amount of available energy captured by photosynthesis and the inefficient processes used to convert plant matter to fuel. PETRO aims to create plants that capture more energy from sunlight and convert that energy directly into fuels. ARPA-E will fund technologies that optimize the biochemical processes of energy capture and conversion to develop farm-ready crops that deliver more energy per acre with less processing prior to the pump. If successful, PETRO will create biofuels for half their current cost, finally making them cost-competitive with fuels from oil. Up to $30 million will be made available for this program area.
- High Energy Advanced Thermal Storage (HEATS). More than 90% of energy technologies involve the transport and conversion of thermal energy. Advancements in thermal energy storage - both hot and cold - would dramatically improve performance for a variety of critical energy applications. ARPA-E seeks to develop revolutionary cost-effective thermal energy storage technologies in three focus areas: 1) high temperature storage systems to deliver solar electricity more efficiently around the clock and allow nuclear and fossil resources the flexibility to meet peak demand, 2) fuel produced from the sun's heat, and 3) HVAC systems that use thermal storage to improve the driving range of electric vehicles by up to 40%. Up to $30 million will be made available for this program area.
- Rare Earth Alternatives in Critical Technologies (REACT). Rare earths are naturally-occurring minerals with unique magnetic properties that are used in many emerging energy technologies. As demand for these technologies continues to increase, rare earths are rapidly becoming more expensive due to limited global supply - prices of many have increased 300-700% in the past year. Rising rare earth prices have already escalated costs for some energy technologies and may jeopardize the widespread adoption of many critical energy solutions by US manufacturers. ARPA-E seeks to fund early-stage technology alternatives that reduce or eliminate the dependence on rare earth materials by developing substitutes in two key areas: electric vehicle motors and wind generators. Up to $30 million will be made available for this program area.
- Green Electricity Network Integration (GENI). Recent advances in computation, networking, and grid monitoring have shed light on potential ways to deliver electricity more efficiently and reliably. Today, however, the equivalent of one out of every five electricity dollars is lost to power outages and 30% of the grid's hardware needs replacing. ARPA-E seeks to fund innovative control software and high-voltage hardware to reliably control the grid, specifically: 1) controls able to manage 10 times more sporadically available wind and solar electricity than currently on the grid, and 2) resilient power flow control hardware - or the energy equivalent of an internet router - to enable significantly more electricity through the existing network of transmission lines. Up to $30 million will be made available for this program area.
- Solar Agile Delivery of Electrical Power Technology (Solar ADEPT). The DOE SunShot Initiative leverages the unique strengths across DOE to reduce the total cost of utility-scale solar systems by 75% by the end of 2020. If successful, this collaboration would deliver solar electricity at roughly 6 cents a kilowatt hour - a cost competitive with electricity from fossil fuels. This would enable solar electricity to scale without subsidies and make the US globally competitive in solar technology. ARPA-E's portion of the collaboration is the Solar ADEPT program, which focuses on integrating advanced power electronics into solar panels and solar farms to extract and deliver energy more efficiently. Specifically, ARPA-E aims to invest in key advances in magnetics, semiconductor switches, and charge storage, which could reduce power conversion costs by up to 50% for utilities and 80% for homeowners. Up to $10 million will be made available for this program area.
Monday, April 25, 2011
Thursday, April 21, 2011
Globalization of jobs
from David Ruccio
Once upon a time, the cheerleaders of globalization, like Matt Slaughter, could claim that “for every one job that U.S. multinationals created abroad. . .they created nearly two U.S. jobs in their [U.S.-based] parents.” As the following graph shows, that’s no longer the case.
Over the past decade, U.S. multinational corporations—like General Electric, Caterpillar, Microsoft, and Wal-Mart—have been hiring abroad while cutting jobs at home.
The companies cut their work forces in the U.S. by 2.9 million during the 2000s while increasing employment overseas by 2.4 million, new data from the U.S. Commerce Department show. That’s a big switch from the 1990s, when they added jobs everywhere: 4.4 million in the U.S. and 2.7 million abroad.
In all, U.S. multinationals employed 21.1 million people at home in 2009 and 10.3 million elsewhere, including increasing numbers of higher-skilled foreign workers.
The ability of multinational corporations to shift production and jobs when and where they want—shrinking employment at home and abroad while increasing productivity or hiring everywhere or cutting jobs at home while adding them abroad—undercuts the position of U.S. workers and undermines the usual neoclassical dogma concerning the benefits of globalization.
All they’re left with is the argument that globalization makes consumer goods cheaper in the United States—which only works if anyone is left with a job to buy those goods.
fantastic graph. you can see how beautifully balanced the two sides are.
Of course, this is not about pro-America sentiment. After all, it is good that other countries get jobs (provided they come with fair wages and conditions); the point is rather that pro-globalization defenders are disingenuous or misinformed...
Thursday, April 14, 2011
Of the 1%, by the 1%, for the 1%
Americans have been watching protests against oppressive regimes that concentrate massive wealth in the hands of an elite few. Yet in our own democracy, 1 percent of the people take nearly a quarter of the nation’s income—an inequality even the wealthy will come to regret.
May 2011
It’s no use pretending that what has obviously happened has not in fact happened. The upper 1 percent of Americans are now taking in nearly a quarter of the nation’s income every year. In terms of wealth rather than income, the top 1 percent control 40 percent. Their lot in life has improved considerably. Twenty-five years ago, the corresponding figures were 12 percent and 33 percent. One response might be to celebrate the ingenuity and drive that brought good fortune to these people, and to contend that a rising tide lifts all boats. That response would be misguided. While the top 1 percent have seen their incomes rise 18 percent over the past decade, those in the middle have actually seen their incomes fall. For men with only high-school degrees, the decline has been precipitous—12 percent in the last quarter-century alone. All the growth in recent decades—and more—has gone to those at the top. In terms of income equality, America lags behind any country in the old, ossified Europe that President George W. Bush used to deride. Among our closest counterparts are Russia with its oligarchs and Iran. While many of the old centers of inequality in Latin America, such as Brazil, have been striving in recent years, rather successfully, to improve the plight of the poor and reduce gaps in income, America has allowed inequality to grow.
Economists long ago tried to justify the vast inequalities that seemed so troubling in the mid-19th century—inequalities that are but a pale shadow of what we are seeing in America today. The justification they came up with was called “marginal-productivity theory.” In a nutshell, this theory associated higher incomes with higher productivity and a greater contribution to society. It is a theory that has always been cherished by the rich. Evidence for its validity, however, remains thin. The corporate executives who helped bring on the recession of the past three years—whose contribution to our society, and to their own companies, has been massively negative—went on to receive large bonuses. In some cases, companies were so embarrassed about calling such rewards “performance bonuses” that they felt compelled to change the name to “retention bonuses” (even if the only thing being retained was bad performance). Those who have contributed great positive innovations to our society, from the pioneers of genetic understanding to the pioneers of the Information Age, have received a pittance compared with those responsible for the financial innovations that brought our global economy to the brink of ruin.
Some people look at income inequality and shrug their shoulders. So what if this person gains and that person loses? What matters, they argue, is not how the pie is divided but the size of the pie. That argument is fundamentally wrong. An economy in which most citizens are doing worse year after year—an economy like America’s—is not likely to do well over the long haul. There are several reasons for this.
First, growing inequality is the flip side of something else: shrinking opportunity. Whenever we diminish equality of opportunity, it means that we are not using some of our most valuable assets—our people—in the most productive way possible. Second, many of the distortions that lead to inequality—such as those associated with monopoly power and preferential tax treatment for special interests—undermine the efficiency of the economy. This new inequality goes on to create new distortions, undermining efficiency even further. To give just one example, far too many of our most talented young people, seeing the astronomical rewards, have gone into finance rather than into fields that would lead to a more productive and healthy economy.
Third, and perhaps most important, a modern economy requires “collective action”—it needs government to invest in infrastructure, education, and technology. The United States and the world have benefited greatly from government-sponsored research that led to the Internet, to advances in public health, and so on. But America has long suffered from an under-investment in infrastructure (look at the condition of our highways and bridges, our railroads and airports), in basic research, and in education at all levels. Further cutbacks in these areas lie ahead.
None of this should come as a surprise—it is simply what happens when a society’s wealth distribution becomes lopsided. The more divided a society becomes in terms of wealth, the more reluctant the wealthy become to spend money on common needs. The rich don’t need to rely on government for parks or education or medical care or personal security—they can buy all these things for themselves. In the process, they become more distant from ordinary people, losing whatever empathy they may once have had. They also worry about strong government—one that could use its powers to adjust the balance, take some of their wealth, and invest it for the common good. The top 1 percent may complain about the kind of government we have in America, but in truth they like it just fine: too gridlocked to re-distribute, too divided to do anything but lower taxes.
Economists are not sure how to fully explain the growing inequality in America. The ordinary dynamics of supply and demand have certainly played a role: laborsaving technologies have reduced the demand for many “good” middle-class, blue-collar jobs. Globalization has created a worldwide marketplace, pitting expensive unskilled workers in America against cheap unskilled workers overseas. Social changes have also played a role—for instance, the decline of unions, which once represented a third of American workers and now represent about 12 percent.
But one big part of the reason we have so much inequality is that the top 1 percent want it that way. The most obvious example involves tax policy. Lowering tax rates on capital gains, which is how the rich receive a large portion of their income, has given the wealthiest Americans close to a free ride. Monopolies and near monopolies have always been a source of economic power—from John D. Rockefeller at the beginning of the last century to Bill Gates at the end. Lax enforcement of anti-trust laws, especially during Republican administrations, has been a godsend to the top 1 percent. Much of today’s inequality is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever. The government lent money to financial institutions at close to 0 percent interest and provided generous bailouts on favorable terms when all else failed. Regulators turned a blind eye to a lack of transparency and to conflicts of interest.
When you look at the sheer volume of wealth controlled by the top 1 percent in this country, it’s tempting to see our growing inequality as a quintessentially American achievement—we started way behind the pack, but now we’re doing inequality on a world-class level. And it looks as if we’ll be building on this achievement for years to come, because what made it possible is self-reinforcing. Wealth begets power, which begets more wealth. During the savings-and-loan scandal of the 1980s—a scandal whose dimensions, by today’s standards, seem almost quaint—the banker Charles Keating was asked by a congressional committee whether the $1.5 million he had spread among a few key elected officials could actually buy influence. “I certainly hope so,” he replied. The Supreme Court, in its recent Citizens United case, has enshrined the right of corporations to buy government, by removing limitations on campaign spending. The personal and the political are today in perfect alignment. Virtually all U.S. senators, and most of the representatives in the House, are members of the top 1 percent when they arrive, are kept in office by money from the top 1 percent, and know that if they serve the top 1 percent well they will be rewarded by the top 1 percent when they leave office. By and large, the key executive-branch policymakers on trade and economic policy also come from the top 1 percent. When pharmaceutical companies receive a trillion-dollar gift—through legislation prohibiting the government, the largest buyer of drugs, from bargaining over price—it should not come as cause for wonder. It should not make jaws drop that a tax bill cannot emerge from Congress unless big tax cuts are put in place for the wealthy. Given the power of the top 1 percent, this is the way you would expect the system to work.
America’s inequality distorts our society in every conceivable way. There is, for one thing, a well-documented lifestyle effect—people outside the top 1 percent increasingly live beyond their means. Trickle-down economics may be a chimera, but trickle-down behaviorism is very real. Inequality massively distorts our foreign policy. The top 1 percent rarely serve in the military—the reality is that the “all-volunteer” army does not pay enough to attract their sons and daughters, and patriotism goes only so far. Plus, the wealthiest class feels no pinch from higher taxes when the nation goes to war: borrowed money will pay for all that. Foreign policy, by definition, is about the balancing of national interests and national resources. With the top 1 percent in charge, and paying no price, the notion of balance and restraint goes out the window. There is no limit to the adventures we can undertake; corporations and contractors stand only to gain. The rules of economic globalization are likewise designed to benefit the rich: they encourage competition among countries for business, which drives down taxes on corporations, weakens health and environmental protections, and undermines what used to be viewed as the “core” labor rights, which include the right to collective bargaining. Imagine what the world might look like if the rules were designed instead to encourage competition among countries for workers. Governments would compete in providing economic security, low taxes on ordinary wage earners, good education, and a clean environment—things workers care about. But the top 1 percent don’t need to care.
Or, more accurately, they think they don’t. Of all the costs imposed on our society by the top 1 percent, perhaps the greatest is this: the erosion of our sense of identity, in which fair play, equality of opportunity, and a sense of community are so important. America has long prided itself on being a fair society, where everyone has an equal chance of getting ahead, but the statistics suggest otherwise: the chances of a poor citizen, or even a middle-class citizen, making it to the top in America are smaller than in many countries of Europe. The cards are stacked against them. It is this sense of an unjust system without opportunity that has given rise to the conflagrations in the Middle East: rising food prices and growing and persistent youth unemployment simply served as kindling. With youth unemployment in America at around 20 percent (and in some locations, and among some socio-demographic groups, at twice that); with one out of six Americans desiring a full-time job not able to get one; with one out of seven Americans on food stamps (and about the same number suffering from “food insecurity”)—given all this, there is ample evidence that something has blocked the vaunted “trickling down” from the top 1 percent to everyone else. All of this is having the predictable effect of creating alienation—voter turnout among those in their 20s in the last election stood at 21 percent, comparable to the unemployment rate.
In recent weeks we have watched people taking to the streets by the millions to protest political, economic, and social conditions in the oppressive societies they inhabit. Governments have been toppled in Egypt and Tunisia. Protests have erupted in Libya, Yemen, and Bahrain. The ruling families elsewhere in the region look on nervously from their air-conditioned penthouses—will they be next? They are right to worry. These are societies where a minuscule fraction of the population—less than 1 percent—controls the lion’s share of the wealth; where wealth is a main determinant of power; where entrenched corruption of one sort or another is a way of life; and where the wealthiest often stand actively in the way of policies that would improve life for people in general.
As we gaze out at the popular fervor in the streets, one question to ask ourselves is this: When will it come to America? In important ways, our own country has become like one of these distant, troubled places.
Alexis de Tocqueville once described what he saw as a chief part of the peculiar genius of American society—something he called “self-interest properly understood.” The last two words were the key. Everyone possesses self-interest in a narrow sense: I want what’s good for me right now! Self-interest “properly understood” is different. It means appreciating that paying attention to everyone else’s self-interest—in other words, the common welfare—is in fact a precondition for one’s own ultimate well-being. Tocqueville was not suggesting that there was anything noble or idealistic about this outlook—in fact, he was suggesting the opposite. It was a mark of American pragmatism. Those canny Americans understood a basic fact: looking out for the other guy isn’t just good for the soul—it’s good for business.
The top 1 percent have the best houses, the best educations, the best doctors, and the best lifestyles, but there is one thing that money doesn’t seem to have bought: an understanding that their fate is bound up with how the other 99 percent live. Throughout history, this is something that the top 1 percent eventually do learn. Too late.
how long before people wake up?
Wednesday, April 6, 2011
U.S. Sees Array of New Threats at Japan’s Nuclear Plant
By JAMES GLANZ and WILLIAM J. BROAD
United States government engineers sent to help with the crisis in Japan are warning that the troubled nuclear plant there is facing a wide array of fresh threats that could persist indefinitely, and that in some cases are expected to increase as a result of the very measures being taken to keep the plant stable, according to a confidential assessment prepared by the Nuclear Regulatory Commission.
Among the new threats that were cited in the assessment, dated March 26, are the mounting stresses placed on the containment structures as they fill with radioactive cooling water, making them more vulnerable to rupture in one of the aftershocks rattling the site after the earthquake and tsunami of March 11. The document also cites the possibility of explosions inside the containment structures due to the release of hydrogen and oxygen from seawater pumped into the reactors, and offers new details on how semimolten fuel rods and salt buildup are impeding the flow of fresh water meant to cool the nuclear cores.
In recent days, workers have grappled with several side effects of the emergency measures taken to keep nuclear fuel at the plant from overheating, including leaks of radioactive water at the site and radiation burns to workers who step into the water. The assessment, as well as interviews with officials familiar with it, points to a new panoply of complex challenges that water creates for the safety of workers and the recovery and long-term stability of the reactors.
While the assessment does not speculate on the likelihood of new explosions or damage from an aftershock, either could lead to a breach of the containment structures in one or more of the crippled reactors, the last barriers that prevent a much more serious release of radiation from the nuclear core. If the fuel continues to heat and melt because of ineffective cooling, some nuclear experts say, that could also leave a radioactive mass that could stay molten for an extended period.
The document, which was obtained by The New York Times, provides a more detailed technical assessment than Japanese officials have provided of the conundrum facing the Japanese as they struggle to prevent more fuel from melting at the Fukushima Daiichi plant. But it appears to rely largely on data shared with American experts by the Japanese.
Among other problems, the document raises new questions about whether pouring water on nuclear fuel in the absence of functioning cooling systems can be sustained indefinitely. Experts have said the Japanese need to continue to keep the fuel cool for many months until the plant can be stabilized, but there is growing awareness that the risks of pumping water on the fuel present a whole new category of challenges that the nuclear industry is only beginning to comprehend.
The document also suggests that fragments or particles of nuclear fuel from spent fuel pools above the reactors were blown “up to one mile from the units,” and that pieces of highly radioactive material fell between two units and had to be “bulldozed over,” presumably to protect workers at the site. The ejection of nuclear material, which may have occurred during one of the earlier hydrogen explosions, may indicate more extensive damage to the extremely radioactive pools than previously disclosed.
David A. Lochbaum, a nuclear engineer who worked on the kinds of General Electric reactors used in Japan and now directs the nuclear safety project at the Union of Concerned Scientists, said that the welter of problems revealed in the document at three separate reactors made a successful outcome even more uncertain.
“I thought they were, not out of the woods, but at least at the edge of the woods,” said Mr. Lochbaum, who was not involved in preparing the document. “This paints a very different picture, and suggests that things are a lot worse. They could still have more damage in a big way if some of these things don’t work out for them.”
The steps recommended by the nuclear commission include injecting nitrogen, an inert gas, into the containment structures in an attempt to purge them of hydrogen and oxygen, which could combine to produce explosions. On Wednesday, the Tokyo Electric Power Company, which owns the plant, said it was preparing to take such a step and to inject nitrogen into one of the reactor containment vessels.
The document also recommends that engineers continue adding boron to cooling water to help prevent the cores from restarting the nuclear reaction, a process known as criticality.
Even so, the engineers who prepared the document do not believe that a resumption of criticality is an immediate likelihood, Neil Wilmshurst, vice president of the nuclear sector at the Electric Power Research Institute, said when contacted about the document. “I have seen no data to suggest that there is criticality ongoing,” said Mr. Wilmshurst, who was involved in the assessment.
The document was prepared for the commission’s Reactor Safety Team, which is assisting the Japanese government and the Tokyo Electric Power Company. It says it is based on the “most recent available data” from numerous Japanese and American organizations, including the electric power company, the Japan Atomic Industrial Forum, the United States Department of Energy, General Electric and the Electric Power Research Institute, an independent, nonprofit group.
The document contains detailed assessments of each of the plant’s six reactors along with recommendations for action. Nuclear experts familiar with the assessment said that it was regularly updated but that over all, the March 26 version closely reflected current thinking.
The assessment provides graphic new detail on the conditions of the damaged cores in reactors 1, 2 and 3. Because slumping fuel and salt from seawater that had been used as a coolant is probably blocking circulation pathways, the water flow in No. 1 “is severely restricted and likely blocked.” Inside the core itself, “there is likely no water level,” the assessment says, adding that as a result, “it is difficult to determine how much cooling is getting to the fuel.” Similar problems exist in No. 2 and No. 3, although the blockage is probably less severe, the assessment says.
Some of the salt may have been washed away in the past week with the switch from seawater to fresh water cooling, nuclear experts said.
A rise in the water level of the containment structures has often been depicted as a possible way to immerse and cool the fuel. The assessment, however, warns that “when flooding containment, consider the implications of water weight on seismic capability of containment.”
Experts in nuclear plant design say that this warning refers to the enormous stress put on the containment structures by the rising water. The more water in the structures, the more easily a large aftershock could rupture one of them.
Margaret Harding, a former reactor designer for General Electric, warned of aftershocks and said, “If I were in the Japanese’s shoes, I’d be very reluctant to have tons and tons of water sitting in a containment whose structural integrity hasn’t been checked since the earthquake.”
The N.R.C. document also expressed concern about the potential for a “hazardous atmosphere” in the concrete-and-steel containment structures because of the release of hydrogen and oxygen from the seawater in a highly radioactive environment.
Hydrogen explosions in the first few days of the disaster heavily damaged several reactor buildings and in one case may have damaged a containment structure. That hydrogen was produced by a mechanism involving the metal cladding of the nuclear fuel. The document urged that Japanese operators restore the ability to purge the structures of these gases and fill them with stable nitrogen gas, a capability lost after the quake and tsunami.
Nuclear experts say that radiation from the core of a reactor can split water molecules in two, releasing hydrogen. Mr. Wilmshurst said that since the March 26 document, engineers had calculated that the amount of hydrogen produced would be small. But Jay A. LaVerne, a physicist at Notre Dame, said that at least near the fuel rods, some hydrogen would in fact be produced, and could react with oxygen. “If so,” Mr. LaVerne said in an interview, “you have an explosive mixture being formed near the fuel rods.”
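For background (this is standard radiochemistry, not detail from the article itself), the net water-splitting reaction that Mr. Wilmshurst and Mr. LaVerne are discussing can be written as follows. It is a simplification: actual radiolysis proceeds through several short-lived radical intermediates, but the net result is the hydrogen–oxygen mixture the engineers worry about.

```latex
% Net radiolysis of water under ionizing radiation (simplified overall reaction;
% intermediate species such as H*, OH*, and hydrated electrons are omitted):
2\,\mathrm{H_2O} \;\xrightarrow{\ \text{ionizing radiation}\ }\; 2\,\mathrm{H_2} + \mathrm{O_2}
```

Because hydrogen and oxygen are produced together in the right ratio to recombine, any pocket where they accumulate is potentially explosive, which is why the NRC recommended purging the containments with inert nitrogen.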
Nuclear engineers have warned in recent days that the pools outside the containment buildings that hold spent fuel rods could pose an even greater danger than the melted reactor cores. The pools, which sit atop the reactor buildings and are meant to keep spent fuel submerged in water, have lost their cooling systems.
The N.R.C. report suggests that the fuel pool of the No. 4 reactor suffered a hydrogen explosion early in the Japanese crisis and could have shed much radioactive material into the environment, what it calls “a major source term release.”
Experts worry about the fuel pools because explosions have torn away their roofs and exposed their radioactive contents. By contrast, reactors have strong containment vessels that stand a better chance of bottling up radiation from a meltdown of the fuel in the reactor core.
“Even the best juggler in the world can get too many balls up in the air,” Mr. Lochbaum said of the multiplicity of problems at the plant. “They’ve got a lot of nasty things to negotiate in the future, and one missed step could make the situation much, much worse.”
Henry Fountain contributed reporting from New York, and Matthew L. Wald from Washington.
This is very frightening and I am going to try to find out more. Even if they succeed at averting these outcomes, I hope the seriousness of how close we have come to even worse outcomes becomes more fully known in Japan. I am especially concerned about the fact that nuclear fuel was ejected "up to one mile away" by the explosions and that the problems with the spent fuel are severe and not easy to resolve.
Tuesday, April 5, 2011
Wind, water and solar technologies can provide 100 percent of the world's energy, eliminating all fossil fuels. Here's how
In December leaders from around the world will meet in Copenhagen to try to agree on cutting back greenhouse gas emissions for decades to come. The most effective step to implement that goal would be a massive shift away from fossil fuels to clean, renewable energy sources. If leaders can have confidence that such a transformation is possible, they might commit to an historic agreement. We think they can.
A year ago former vice president Al Gore threw down a gauntlet: to repower America with 100 percent carbon-free electricity within 10 years. As the two of us started to evaluate the feasibility of such a change, we took on an even larger challenge: to determine how 100 percent of the world’s energy, for all purposes, could be supplied by wind, water and solar resources, by as early as 2030. Our plan is presented here.
Scientists have been building to this moment for at least a decade, analyzing various pieces of the challenge. Most recently, a 2009 Stanford University study ranked energy systems according to their impacts on global warming, pollution, water supply, land use, wildlife and other concerns. The very best options were wind, solar, geothermal, tidal and hydroelectric power—all of which are driven by wind, water or sunlight (referred to as WWS). Nuclear power, coal with carbon capture, and ethanol were all poorer options, as were oil and natural gas. The study also found that battery-electric vehicles and hydrogen fuel-cell vehicles recharged by WWS options would largely eliminate pollution from the transportation sector.
Our plan calls for millions of wind turbines, water machines and solar installations. The numbers are large, but the scale is not an insurmountable hurdle; society has achieved massive transformations before. During World War II, the U.S. retooled automobile factories to produce 300,000 aircraft, and other countries produced 486,000 more. In 1956 the U.S. began building the Interstate Highway System, which after 35 years extended for 47,000 miles, changing commerce and society.
Is it feasible to transform the world’s energy systems? Could it be accomplished in two decades? The answers depend on the technologies chosen, the availability of critical materials, and economic and political factors.
Clean Technologies Only
Renewable energy comes from enticing sources: wind, which also produces waves; water, which includes hydroelectric, tidal and geothermal energy (water heated by hot underground rock); and sun, which includes photovoltaics and solar power plants that focus sunlight to heat a fluid that drives a turbine to generate electricity. Our plan includes only technologies that work or are close to working today on a large scale, rather than those that may exist 20 or 30 years from now.
To ensure that our system remains clean, we consider only technologies that have near-zero emissions of greenhouse gases and air pollutants over their entire life cycle, including construction, operation and decommissioning. For example, when burned in vehicles, even the most ecologically acceptable sources of ethanol create air pollution that will cause the same mortality level as when gasoline is burned. Nuclear power results in up to 25 times more carbon emissions than wind energy, when reactor construction and uranium refining and transport are considered. Carbon capture and sequestration technology can reduce carbon dioxide emissions from coal-fired power plants but will increase air pollutants and will extend all the other deleterious effects of coal mining, transport and processing, because more coal must be burned to power the capture and storage steps. Similarly, we consider only technologies that do not present significant waste disposal or terrorism risks.
In our plan, WWS will supply electric power for heating and transportation—industries that will have to revamp if the world has any hope of slowing climate change. We have assumed that most fossil-fuel heating (as well as ovens and stoves) can be replaced by electric systems and that most fossil-fuel transportation can be replaced by battery and fuel-cell vehicles. Hydrogen, produced by using WWS electricity to split water (electrolysis), would power fuel cells and be burned in airplanes and by industry.
Plenty of Supply
Today the maximum power consumed worldwide at any given moment is about 12.5 trillion watts (terawatts, or TW), according to the U.S. Energy Information Administration. The agency projects that in 2030 the world will require 16.9 TW of power as global population and living standards rise, with about 2.8 TW in the U.S. The mix of sources is similar to today’s, heavily dependent on fossil fuels. If, however, the planet were powered entirely by WWS, with no fossil-fuel or biomass combustion, an intriguing savings would occur. Global power demand would be only 11.5 TW, and U.S. demand would be 1.8 TW. That decline occurs because, in most cases, electrification is a more efficient way to use energy. For example, only 17 to 20 percent of the energy in gasoline is used to move a vehicle (the rest is wasted as heat), whereas 75 to 86 percent of the electricity delivered to an electric vehicle goes into motion.
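The efficiency argument behind that drop in demand can be sketched in a few lines of Python. The percentages come straight from the paragraph above; the per-trip energy is an arbitrary unit I've assumed purely for illustration:

```python
# Back-of-the-envelope check of the article's efficiency argument.
# The efficiency ranges are from the text; the trip energy is an
# assumed unit, chosen only to make the ratio visible.

# Article figures: 17-20% of gasoline's energy moves the vehicle,
# vs. 75-86% of the electricity delivered to an electric vehicle.
gasoline_eff = (0.17 + 0.20) / 2   # midpoint, 18.5%
ev_eff = (0.75 + 0.86) / 2         # midpoint, 80.5%

# Energy the vehicle must put into motion for some trip (assumed unit).
work_needed = 1.0

gasoline_input = work_needed / gasoline_eff   # fuel energy required
ev_input = work_needed / ev_eff               # electricity required

print(f"Fuel energy per unit of motion, gasoline: {gasoline_input:.2f}")
print(f"Electricity per unit of motion, EV:       {ev_input:.2f}")
print(f"EV needs only {ev_input / gasoline_input:.0%} as much input energy")
```

The same kind of end-use efficiency gain, applied across heating and transportation, is what shrinks the projected 16.9 TW of demand to 11.5 TW.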
Even if demand did rise to 16.9 TW, WWS sources could provide far more power. Detailed studies by us and others indicate that energy from the wind, worldwide, is about 1,700 TW. Solar, alone, offers 6,500 TW. Of course, wind and sun out in the open seas, over high mountains and across protected regions would not be available. If we subtract these and low-wind areas not likely to be developed, we are still left with 40 to 85 TW for wind and 580 TW for solar, each far beyond future human demand. Yet currently we generate only 0.02 TW of wind power and 0.008 TW of solar. These sources hold an incredible amount of untapped potential.
The other WWS technologies will help create a flexible range of options. Although all the sources can expand greatly, for practical reasons, wave power can be extracted only near coastal areas. Many geothermal sources are too deep to be tapped economically. And even though hydroelectric power now exceeds all other WWS sources, most of the suitable large reservoirs are already in use.
The Plan: Power Plants Required
Clearly, enough renewable energy exists. How, then, would we transition to a new infrastructure to provide the world with 11.5 TW? We have chosen a mix of technologies emphasizing wind and solar, with about 9 percent of demand met by mature water-related methods. (Other combinations of wind and solar could be as successful.)
Wind supplies 51 percent of the demand, provided by 3.8 million large wind turbines (each rated at five megawatts) worldwide. Although that quantity may sound enormous, it is interesting to note that the world manufactures 73 million cars and light trucks every year. Another 40 percent of the power comes from photovoltaics and concentrated solar plants, with about 30 percent of the photovoltaic output from rooftop panels on homes and commercial buildings. About 89,000 photovoltaic and concentrated solar power plants, averaging 300 megawatts apiece, would be needed. Our mix also includes 900 hydroelectric stations worldwide, 70 percent of which are already in place.
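The headline numbers above can be sanity-checked with a short calculation. The implied capacity factors below are my derivation, not figures stated in the article, and they land in the plausible range for large turbines and solar plants:

```python
# Sanity check of the plan's headline numbers. The capacity factors
# are derived here, not stated in the article.

TOTAL_DEMAND_TW = 11.5                       # article's all-WWS demand

# Wind: 3.8 million turbines rated at 5 MW each, meeting 51% of demand.
wind_nameplate_tw = 3.8e6 * 5e6 / 1e12       # 19 TW of rated capacity
wind_avg_output_tw = 0.51 * TOTAL_DEMAND_TW
wind_capacity_factor = wind_avg_output_tw / wind_nameplate_tw

# Solar: ~89,000 plants averaging 300 MW, meeting (with rooftop PV) 40%.
solar_nameplate_tw = 89_000 * 300e6 / 1e12   # ~26.7 TW of rated capacity
solar_avg_output_tw = 0.40 * TOTAL_DEMAND_TW
solar_capacity_factor = solar_avg_output_tw / solar_nameplate_tw

print(f"Implied wind capacity factor:  {wind_capacity_factor:.0%}")   # ~31%
print(f"Implied solar capacity factor: {solar_capacity_factor:.0%}")  # ~17%
```

In other words, the plan only works out if each 5-MW turbine delivers roughly a third of its rated power on average, which is consistent with siting turbines in good wind resources.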
Only about 0.8 percent of the wind base is installed today. The worldwide footprint of the 3.8 million turbines would be less than 50 square kilometers (smaller than Manhattan). When the needed spacing between them is figured, they would occupy about 1 percent of the earth’s land, but the empty space among turbines could be used for agriculture or ranching or as open land or ocean. The nonrooftop photovoltaics and concentrated solar plants would occupy about 0.33 percent of the planet’s land. Building such an extensive infrastructure will take time. But so did the current power plant network. And remember that if we stick with fossil fuels, demand by 2030 will rise to 16.9 TW, requiring about 13,000 large new coal plants, which themselves would occupy a lot more land, as would the mining to supply them.
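The land-use claim can be unpacked the same way. The per-turbine spacing below is my derivation from the article's "about 1 percent of the earth's land" figure, using an approximate value for the planet's land area:

```python
# The article's spacing claim, unpacked. The per-turbine figure is
# derived here; the land area of the earth is approximate.

EARTH_LAND_KM2 = 1.49e8                      # approx. global land area
N_TURBINES = 3.8e6

spaced_area_km2 = 0.01 * EARTH_LAND_KM2      # "about 1 percent" of land
per_turbine_km2 = spaced_area_km2 / N_TURBINES
side_m = (per_turbine_km2 ** 0.5) * 1000     # side of equivalent square

print(f"Area per turbine: {per_turbine_km2:.2f} km^2")
print(f"= roughly a {side_m:.0f} m x {side_m:.0f} m square per turbine")
```

A square a bit over 600 m on a side per 5-MW turbine matches typical turbine-spacing rules of several rotor diameters, and nearly all of that land remains usable for farming or ranching.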
The Materials Hurdle
The scale of the WWS infrastructure is not a barrier. But a few materials needed to build it could be scarce or subject to price manipulation.
Enough concrete and steel exist for the millions of wind turbines, and both those commodities are fully recyclable. The most problematic materials may be rare-earth metals such as neodymium, used in the permanent magnets of many turbine generators. Although the metals are not in short supply, the low-cost sources are concentrated in China, so countries such as the U.S. could be trading dependence on Middle Eastern oil for dependence on Far Eastern metals. Some manufacturers offer generator designs that use little or no rare-earth material, however, so that limitation may become moot.
Photovoltaic cells rely on amorphous or crystalline silicon, cadmium telluride, or copper indium selenide and sulfide. Limited supplies of tellurium and indium could reduce the prospects for some types of thin-film solar cells, though not for all; the other types might be able to take up the slack. Large-scale production could be restricted by the silver that cells require, but finding ways to reduce the silver content could tackle that hurdle. Recycling parts from old cells could ameliorate material difficulties as well.
Three components could pose challenges for building millions of electric vehicles: rare-earth metals for electric motors, lithium for lithium-ion batteries and platinum for fuel cells. More than half the world’s lithium reserves lie in Bolivia and Chile. That concentration, combined with rapidly growing demand, could raise prices significantly. More problematic is the claim by Meridian International Research that not enough economically recoverable lithium exists to build anywhere near the number of batteries needed in a global electric-vehicle economy. Recycling could change the equation, but the economics of recycling depend in part on whether batteries are made with easy recyclability in mind, an issue the industry is aware of. The long-term use of platinum also depends on recycling; current available reserves would sustain annual production of 20 million fuel-cell vehicles, along with existing industrial uses, for fewer than 100 years.
Smart Mix for Reliability
A new infrastructure must provide energy on demand at least as reliably as the existing infrastructure. WWS technologies generally suffer less downtime than traditional sources. The average U.S. coal plant is offline 12.5 percent of the year for scheduled and unscheduled maintenance. Modern wind turbines have a down time of less than 2 percent on land and less than 5 percent at sea. Photovoltaic systems are also at less than 2 percent. Moreover, when an individual wind, solar or wave device is down, only a small fraction of production is affected; when a coal, nuclear or natural gas plant goes offline, a large chunk of generation is lost.
The main WWS challenge is that the wind does not always blow and the sun does not always shine in a given location. Intermittency problems can be mitigated by a smart balance of sources, such as generating a base supply from steady geothermal or tidal power, relying on wind at night when it is often plentiful, using solar by day and turning to a reliable source such as hydroelectric that can be turned on and off quickly to smooth out supply or meet peak demand. For example, interconnecting wind farms that are only 100 to 200 miles apart can compensate for hours of zero power at any one farm should the wind not be blowing there. Also helpful is interconnecting geographically dispersed sources so they can back up one another, installing smart electric meters in homes that automatically recharge electric vehicles when demand is low and building facilities that store power for later use.
Because the wind often blows during stormy conditions when the sun does not shine and the sun often shines on calm days with little wind, combining wind and solar can go a long way toward meeting demand, especially when geothermal provides a steady base and hydroelectric can be called on to fill in the gaps.
As Cheap as Coal
The mix of WWS sources in our plan can reliably supply the residential, commercial, industrial and transportation sectors. The logical next question is whether the power would be affordable. For each technology, we calculated how much it would cost a producer to generate power and transmit it across the grid. We included the annualized cost of capital, land, operations, maintenance, energy storage to help offset intermittent supply, and transmission. Today the costs of wind, geothermal and hydroelectric power are all less than seven cents a kilowatt-hour (¢/kWh); wave and solar are higher. But by 2020 and beyond wind, wave and hydro are expected to be 4¢/kWh or less.
For comparison, the average cost in the U.S. in 2007 of conventional power generation and transmission was about 7¢/kWh, and it is projected to be 8¢/kWh in 2020. Power from wind turbines, for example, already costs about the same or less than it does from a new coal or natural gas plant, and in the future wind power is expected to be the least costly of all options. The competitive cost of wind has made it the second-largest source of new electric power generation in the U.S. for the past three years, behind natural gas and ahead of coal.
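The kind of levelized-cost calculation the authors describe can be sketched in a few lines. The formula (a standard capital-recovery calculation) is real; the wind-farm input numbers below are illustrative assumptions of mine, not the article's actual figures:

```python
# A minimal levelized-cost-of-energy (LCOE) sketch of the kind of
# calculation the authors describe. The input numbers are assumed,
# illustrative values, not figures from the article.

def annualize(capital_cost, discount_rate, lifetime_years):
    """Spread an up-front cost over a lifetime (capital recovery factor)."""
    r, n = discount_rate, lifetime_years
    return capital_cost * r * (1 + r) ** n / ((1 + r) ** n - 1)

def lcoe_cents_per_kwh(capital_per_kw, fixed_om_per_kw_yr,
                       capacity_factor, discount_rate=0.07, lifetime=20):
    """Levelized cost in cents/kWh for a plant with no fuel cost."""
    annual_cost = (annualize(capital_per_kw, discount_rate, lifetime)
                   + fixed_om_per_kw_yr)
    annual_kwh = capacity_factor * 8760      # kWh per kW of capacity
    return 100 * annual_cost / annual_kwh    # dollars -> cents

# Assumed wind-farm inputs: $1,500/kW capital, $40/kW-yr O&M,
# 35% capacity factor, 7% discount rate, 20-year life.
print(f"{lcoe_cents_per_kwh(1500, 40, 0.35):.1f} cents/kWh")  # ~5.9
```

With those assumed inputs the result lands just under six cents per kilowatt-hour, in the same range as the article's figure for wind today.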
Solar power is relatively expensive now but should be competitive as early as 2020. A careful analysis by Vasilis Fthenakis of Brookhaven National Laboratory indicates that within 10 years, photovoltaic system costs could drop to about 10¢/kWh, including long-distance transmission and the cost of compressed-air storage of power for use at night. The same analysis estimates that concentrated solar power systems with enough thermal storage to generate electricity 24 hours a day in spring, summer and fall could deliver electricity at 10¢/kWh or less.
Transportation in a WWS world will be driven by batteries or fuel cells, so we should compare the economics of these electric vehicles with those of internal-combustion-engine vehicles. Detailed analyses by one of us (Delucchi) and Tim Lipman of the University of California, Berkeley, have indicated that mass-produced electric vehicles with advanced lithium-ion or nickel metal-hydride batteries could have a full lifetime cost per mile (including battery replacements) that is comparable with that of a gasoline vehicle, when gasoline sells for more than $2 a gallon.
When the so-called externality costs (the monetary value of damages to human health, the environment and climate) of fossil-fuel generation are taken into account, WWS technologies become even more cost-competitive.
Overall construction cost for a WWS system might be on the order of $100 trillion worldwide, over 20 years, not including transmission. But this is not money handed out by governments or consumers. It is investment that is paid back through the sale of electricity and energy. And again, relying on traditional sources would raise output from 12.5 to 16.9 TW, requiring thousands more of those plants, costing roughly $10 trillion, not to mention tens of trillions of dollars more in health, environmental and security costs. The WWS plan gives the world a new, clean, efficient energy system rather than an old, dirty, inefficient one.
Our analyses strongly suggest that the costs of WWS will become competitive with traditional sources. In the interim, however, certain forms of WWS power will be significantly more costly than fossil power. Some combination of WWS subsidies and carbon taxes would thus be needed for a time. A feed-in tariff (FIT) program to cover the difference between generation cost and wholesale electricity prices is especially effective at scaling up new technologies. Combining FITs with a so-called declining clock auction, in which the right to sell power to the grid goes to the lowest bidders, provides continuing incentive for WWS developers to lower costs. As that happens, FITs can be phased out. FITs have been implemented in a number of European countries and a few U.S. states and have been quite successful in stimulating solar power in Germany.
Taxing fossil fuels or their use to reflect their environmental damages also makes sense. But at a minimum, existing subsidies for fossil energy, such as tax benefits for exploration and extraction, should be eliminated to level the playing field. Misguided promotion of alternatives that are less desirable than WWS power, such as farm and production subsidies for biofuels, should also be ended, because it delays deployment of cleaner systems. For their part, legislators crafting policy must find ways to resist lobbying by the entrenched energy industries.
Finally, each nation needs to be willing to invest in a robust, long-distance transmission system that can carry large quantities of WWS power from remote regions where it is often greatest—such as the Great Plains for wind and the desert Southwest for solar in the U.S.—to centers of consumption, typically cities. Reducing consumer demand during peak usage periods also requires a smart grid that gives generators and consumers much more control over electricity usage hour by hour.
A large-scale wind, water and solar energy system can reliably supply the world’s needs, significantly benefiting climate, air quality, water quality, ecology and energy security. As we have shown, the obstacles are primarily political, not technical. A combination of feed-in tariffs plus incentives for providers to reduce costs, elimination of fossil subsidies and an intelligently expanded grid could be enough to ensure rapid deployment. Of course, changes in the real-world power and transportation industries will have to overcome sunk investments in existing infrastructure. But with sensible policies, nations could set a goal of generating 25 percent of their new energy supply with WWS sources in 10 to 15 years and almost 100 percent of new supply in 20 to 30 years. With extremely aggressive policies, all existing fossil-fuel capacity could theoretically be retired and replaced in the same period, but with more modest and likely policies full replacement may take 40 to 50 years. Either way, clear leadership is needed, or else nations will keep trying technologies promoted by industries rather than vetted by scientists.
A decade ago it was not clear that a global WWS system would be technically or economically feasible. Having shown that it is, we hope global leaders can figure out how to make WWS power politically feasible as well. They can start by committing to meaningful climate and renewable energy goals now.
Note: This article was originally printed with the title, "A Path to Sustainable Energy by 2030."
This is referred to in another fine article, "Trashing the Planet for Natural Gas: Shale Gas Development Threatens Freshwater Sources, Likely Escalates Climate Destabilization," by Karen Charman, that may be available here for a while.
What is the best way? Are all ways bad in some way? If we start from the assumption that there will be pain, how do we make the right decision? How do we convince ourselves that total costs must be accounted for and included in the discussion, and how do we create an agreed-upon standard for this kind of accounting, independent of the solution that might result?