Controlled Nuclear Fusion: The Energy Source That Is Always A Few Years Away

Nuclear fusion, the process that powers our sun and other stars, is considered by many the ‘holy grail’ of energy supply. Why is that so? The numbers tell the story.

The basic physics of fusion is well known and easily understood: when light nuclei (lighter than iron) are forced together under extreme conditions of pressure and temperature they will fuse – i.e., form a heavier nucleus whose mass is slightly less than the combined mass of the two fusing nuclei. The mass that is apparently ‘lost’ is converted to energy according to Einstein’s famous equation E=mc² (energy equals mass times the speed of light squared).

image

It turns out that so much energy is released in this process (a simple, back-of-the-envelope calculation is shown below) that if the process can be harnessed on earth a virtually unlimited source of energy becomes available. Fusion has other advantages as well, along with serious technological problems; both are discussed below. First, why are the numbers so intriguing?

While many fusion reactions are possible and take place in stars, most attention has been directed to the deuterium-tritium (D-T) fusion reaction, which has the lowest energy threshold. Both deuterium and tritium are heavier, isotopic forms of the common element hydrogen. Deuterium is readily available from seawater (each water molecule is two parts hydrogen to one part oxygen, and about one in every 6,240 hydrogen atoms in seawater is deuterium). Tritium does not occur naturally in usable quantities – it is radioactive and disappears quickly due to its short half-life (about 12 years) – but it can be bred from a common element, lithium, when exposed to neutrons.

image

Hydrogen fusion is also what powers our sun, though mainly via a different pathway (the proton-proton chain rather than D-T), routinely converting massive amounts of hydrogen into massive amounts of helium and releasing massive amounts of energy.

image

It has been doing this for more than four billion years and is estimated to continue for about another five billion, when the hydrogen supply will finally dwindle. At that point the fusion reactions in the core of the sun will no longer be able to offset the gravitational forces acting on the sun’s very large mass; the sun will then swell into a red giant and swallow up the earth and the other inner planets. (Stars far more massive than the sun end more violently, exploding as supernovae – the Crab Nebula is the remnant of one such explosion, observed in 1054.) Take heed!

To understand the numbers: every cubic meter of seawater, on average, contains 30 grams of deuterium. There are 300 million cubic miles of water on earth, 97% in the oceans.

image

Each deuterium nucleus (one proton + one neutron) weighs so little (3.3 × 10⁻²⁷ kilograms, or 3.3 thousandths of a trillionth of a trillionth of a kilogram) that these 30 grams amount to about nine trillion trillion nuclei. Each time one of these nuclei fuses with a tritium nucleus (one proton + two neutrons), 17.6 MeV (million electron volts) of energy is released, which can be captured as heat. Now an MeV sounds like a lot of energy but it isn’t – a Btu, a more common energy unit, is about 6.6 thousand trillion MeV.

Now this is a lot of numbers, some very small and some very large, but taking them all together that cubic meter of seawater can lead to the production of about 7 million kWh of thermal energy, which if converted into electricity at 50% efficiency corresponds to 3.5 million kWh. If one were to convert the potential fusion energy in just over one million cubic meters of seawater (about 3 ten thousandths of a cubic mile) one could supply the annual U.S. electricity production of 4 trillion kWh – and remember that our oceans contain several hundred million cubic miles of water. This is why some people get excited about fusion energy.
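The chain of unit conversions above is easy to mistype, so here is the whole back-of-the-envelope calculation as a short Python sketch (all figures are the ones quoted in the text; the physical constants are standard values):

```python
# Back-of-the-envelope check of the fusion-energy-from-seawater numbers.
D_MASS_KG = 3.34e-27      # mass of a deuterium nucleus, kg
MEV_PER_FUSION = 17.6     # energy released per D-T fusion event, MeV
JOULES_PER_MEV = 1.602e-13
JOULES_PER_KWH = 3.6e6

deuterium_kg_per_m3 = 0.030                 # 30 grams per cubic meter of seawater
nuclei = deuterium_kg_per_m3 / D_MASS_KG    # ~9e24 deuterium nuclei
thermal_kwh = nuclei * MEV_PER_FUSION * JOULES_PER_MEV / JOULES_PER_KWH
electric_kwh = 0.5 * thermal_kwh            # 50% thermal-to-electric efficiency

# Seawater needed to supply annual U.S. electricity production (4e12 kWh):
m3_needed = 4e12 / electric_kwh
print(f"{thermal_kwh:.2e} kWh thermal per m3; {m3_needed:.2e} m3 of seawater")
```

Running it reproduces the roughly 7 million kWh (thermal) per cubic meter and the just-over-one-million cubic meters of seawater needed to match annual U.S. electricity production.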

Unfortunately, there are a few barriers to overcome, starting with how to get D and T, both positively-charged nuclei, to fuse. The positive electrical charges repel one another (the so-called Coulomb Barrier) and you have to bring the distance between them to an incredibly small number before the ‘strong nuclear force’ can come into play and allow creation of the new, heavier helium nucleus (two protons + two neutrons). It is this still mysterious force that holds protons and neutrons together in our various elements.

image

So how does one bring these two nuclei close enough together to allow fusion to occur? The answer in the sun is enormous gravitational pressure and temperature. The pressures in the sun are beyond our ability to achieve in any sustained way on earth, but the temperatures are not (temperature is a way of characterizing a particle’s kinetic energy, or speed), and fusion research is focused on achieving extremely high temperatures (hundreds of millions of degrees or higher) at achievable pressures. The fact that this is so difficult is why fusion energy always seems to be a few years in the future. Two techniques are the focus of global fusion research activities – magnetic confinement (as in tokamaks and ITER) and inertial confinement (as in laser-powered or ion beam-powered fusion) – see, e.g., http://www.world-nuclear.org/info/Current-and-Future-Generation/Nuclear-Fusion-Power. Several billion US$ a year are being spent on these activities, mostly in international collaborations.

image

image

Fusion on earth has been achieved, but not in a controlled manner – only in very small amounts and for very short time periods, with one exception: the hydrogen bomb. The bomb is an example of an uncontrolled fusion reaction (triggered by an atomic bomb) that releases a large amount of energy in a few millionths of a second. As the French physicist and Nobel laureate Pierre-Gilles de Gennes once said: “We say that we will put the sun in a box. The idea is pretty. The problem is, we don’t know how to make the box.”

The pros and cons of fusion energy can be summarized as follows:
Pros:
– virtually limitless fuel availability at low cost
– no chain reaction, as in nuclear fission, so the energy release is easy to stop
– fusion produces no greenhouse gases and little nuclear waste compared to nuclear fission (the radioactive waste from fusion is from neutron activation of elements in its containment environment)
Cons:
– still unproven, at any scale, as a controlled reaction that can release more energy than is required to initiate the fusion (‘ignition’)
– requires extremely high temperatures that are difficult to contain
– many serious materials problems arising from extreme neutron bombardment
– commercial power plants, if achievable, would be large and expensive to build
– at best, full scale power production is not expected until at least 2050

Where do I come out on all this? I am not trained as a fusion physicist (just as a low-temperature solid state physicist) and so lack a close involvement with the efforts of so many for so long to achieve controlled nuclear fusion, and the enthusiasm and positive expectations that inevitably result. Nevertheless, I support the long-term effort to see if ignition can be achieved (some scientists believe ITER will be that critical step) and if the many engineering problems associated with commercial application of fusion can be successfully addressed. In my opinion the potential payoff is too big and important for the world to ignore. In fact, a member of the DOE transition team for President-elect Carter once asked my advice on whether the U.S. Government should support fusion R&D, and my answer hasn’t changed.

Desalination: An Important Part of Our Water Future

Desalination (or desalinization) – the process of removing dissolved salts from water – is a technology that has been used for centuries. References to desalination can be found as far back as the writings of Aristotle (320 BC) and Pliny the Elder (76 AD). It is widely used at sea to this day and has helped keep many early mariners alive during long ocean voyages. In fact, a typical nuclear-powered U.S. aircraft carrier today uses waste reactor heat to desalinate 400,000 gallons of water per day.

image

Significant advances in desalination technology started in the 1900s and took a major step during WW II because of the need to supply potable water to troops operating in remote, arid areas. By the 1980s desalination technology was commercially viable, and by the 1990s it was commonplace. Today there are more than 16,000 desalination plants worldwide, producing more than 20 billion gallons of drinkable water every day. This is expected to reach more than 30 billion gallons per day by 2020, with one third of that capacity in the Middle East. To put that number in perspective, current global water consumption is estimated to be just under 1,200 billion gallons.

image

Why is desalination so important? The earth is a water-rich planet, to the tune of about 300 million cubic miles of water, and each cubic mile contains more than one trillion gallons. The problem is that most of that water, approximately 97 percent, is in the oceans which have an average salt content (salinity) of 35,000 parts per million by weight, and drinking that water regularly can kill us. To quote ‘How Desalination Works’ by Laurie Dove: “Ingesting salt signals your cells to flush water molecules to dilute the mineral. Too much salt, and this process can cause a really bad chain reaction: Your cells will be depleted of moisture, your kidneys will shut down and your brain will become damaged. The only way to offset this internal chaos is to urinate with greater frequency to expel all that salt, a remedy that could work only if you have access to lots of fresh drinking water.”

What about the water that is not in the oceans? Three percent of 300 million cubic miles is still a lot of water. Unfortunately, most of that three percent is not easily available for our use. Some is tied up in icecaps and glaciers, some is tied up as water vapor in the atmosphere, and the rest is in groundwater, lakes and rivers. The other hard fact is that some of our freshwater supply is simply inaccessible due to its location and depth. The net result is that we make productive use of less than one percent of our global water resources.

image

Saline (salty) water comes in different ‘strengths’ – seawater, as discussed above, and brackish water, which has less salt than seawater but more than fresh water. Brackish water may arise from the mixing of fresh water with seawater, a situation occurring more frequently as sea levels rise due to global warming, or it may occur in ancient brackish fossil-water aquifers. Commonly accepted definitions of saline water are:
– fresh water: less than 1,000 parts per million (ppm)
– brackish water: 1,000-10,000 ppm
– highly saline water: 10,000-35,000 ppm (including seawater)
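These salinity bands translate directly into a few lines of code – a sketch, with the thresholds taken from the definitions above and the function name my own:

```python
def classify_water(ppm: float) -> str:
    """Classify water by dissolved salt content (parts per million by weight)."""
    if ppm < 1_000:
        return "fresh"
    elif ppm <= 10_000:
        return "brackish"
    else:
        return "highly saline"

# Seawater, at about 35,000 ppm, sits at the top of the highly saline band:
print(classify_water(35_000))  # prints "highly saline"
```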

How does one separate salt from saline water to produce fresh water, and what are the barriers to more widespread use of desalination? The latter question is easily answered: the energy required to do the separation, the energy required in some cases to move fresh water to higher elevations, and the associated costs.

There are quite a few technologies today for removing salt from saline water, the oldest being solar distillation: sun-heated water evaporates and is then condensed, leaving the salt behind (this is also a description of the earth’s hydrologic cycle). The most widely used desalination technologies today are reverse osmosis (RO, 60% of capacity), multi-stage flash distillation (MSF, 26%), and multi-effect distillation (MED, 8.2%). Others include electrodialysis, electrodeionization, and hybrid technologies. Energy requirements (electrical + thermal) for desalinating a range of saline waters, expressed in kWh per cubic meter of fresh water and exclusive of the energy required for pre-treatment, brine disposal and water transport, are: RO, 3-5.5 kWh; MSF, 13.5-25.5 kWh; MED, 6.5-11 kWh. Reverse osmosis requires no thermal energy, just mechanical energy to force salty water through a membrane that separates the salt from the water. The laws of physics tell us that the minimum amount of energy required to desalinate seawater is about 1 kWh per cubic meter, and under 2 kWh per cubic meter has already been achieved in RO, leaving limited opportunities for further reductions.
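To compare the technologies at a glance, here is a small sketch tabulating the energy intensities quoted above against the roughly 1 kWh per cubic meter thermodynamic minimum (all numbers are from the text):

```python
# Energy intensity of the main desalination technologies, in kWh per cubic
# meter of fresh water (ranges as given in the text; excludes pre-treatment,
# brine disposal and water transport).
TECH_KWH_PER_M3 = {
    "RO":  (3.0, 5.5),    # reverse osmosis (mechanical energy only)
    "MSF": (13.5, 25.5),  # multi-stage flash distillation
    "MED": (6.5, 11.0),   # multi-effect distillation
}
THERMODYNAMIC_MIN = 1.0   # approximate minimum for seawater, kWh/m3

for tech, (low, high) in TECH_KWH_PER_M3.items():
    print(f"{tech}: {low}-{high} kWh/m3 "
          f"({low / THERMODYNAMIC_MIN:.1f}-{high / THERMODYNAMIC_MIN:.1f}x "
          f"the thermodynamic minimum)")
```

The gap between best-practice RO (under 2 kWh/m³) and the 1 kWh/m³ floor is what limits further efficiency gains.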

Generally, costs of desalinated water are higher than those of other potable water sources such as fresh water from rivers and groundwater, treated and recycled water, and water conservation. Needless to say, alternatives are not always available, and achievable desalination costs today range from $0.5-1 per cubic meter. To put this into perspective, bottled water at $1 per liter corresponds to $1,000 per cubic meter.
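The bottled-water comparison is just a unit conversion, which a couple of lines make explicit (the $1/liter bottled price is the illustrative figure; the desalination range is from the text):

```python
# Bottled water vs. desalinated water, cost per cubic meter.
LITERS_PER_M3 = 1000
bottled_per_m3 = 1.0 * LITERS_PER_M3    # $1 per liter -> $1,000 per m3
desal_low, desal_high = 0.5, 1.0        # achievable desalination range, $/m3

print(f"Bottled water costs {bottled_per_m3 / desal_high:.0f}-"
      f"{bottled_per_m3 / desal_low:.0f} times as much per cubic meter")
```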

Desalination projects can be found in about 150 countries, with many more being planned or under construction. Today’s largest users are in the Middle East – for example, Saudi Arabia derives 50% of its municipal water from desalination, and Qatar’s much smaller fresh water supply comes entirely from desalination. Currently under construction in Kuwait is a combined power-desalination facility that will produce 1.5 GWe of electricity and 486,000 cubic meters of fresh water a day. It is scheduled for completion in 2016.

image

As world population increases along with demand for clean water desalination will become an increasingly important part of our water supply in the 21st century. We will not run out of water but we will pay more for receiving it in potable form.

Animal Cognition – The Beginnings of Understanding

As stated in the opening page of this energy-water blog, I reserve the right “..to occasionally discuss ‘random thoughts’ on other issues that catch my attention..”. This is one of those occasions, on a topic that I find personally fascinating and scientifically intriguing – animal cognition. Wikipedia defines animal cognition as “..the study of the mental capacities of animals.” For too long this has been a topic of limited scientific investigation, I suspect largely because of the difficulty of gathering data with animal subjects. How many young academic researchers are going to gamble their research careers on such a difficult field?

My interest was stimulated by observing and interacting with my dog, Illy, the second wonderful dog I have been privileged to have in my life. Both have been love machines, but I do have to admit that the second is much smarter than the first, an Old English Sheepdog who died in my arms when she was 13. Illy, a female mix of Akita and German Shepherd (and a few other breeds), is now 10 and doing just fine, and has taught me much about what dogs are capable of – which is much more than some researchers have been willing to admit. Of course this is no surprise to dog owners!

image

image

My interest in learning about animal cognition was triggered by my feelings during one of the many walks (more than 10,000) I’ve taken with Illy. After that walk I decided to put my thoughts down on paper (actually computer), resulting in a piece entitled ‘What I See When I Look at My Dog’. One small quote from that piece: “I see a creature with two eyes, two ears, a mouth, a tongue, four limbs, a heart, lungs, and other internal organs that I have as well. My scientific sense tells me that this dog and I are related, distantly perhaps, but related nevertheless, and that it is only the vagaries of genetic mutation over very long time spans (more than I can comprehend) that accounts for our differences and differences with other living species.”

I also felt that my dog was extremely intelligent (don’t most dog owners feel that way?) and my next step was to look at the animal cognition literature. I ended up reading three books on the subject, two on dogs and one on cognition in a broad range of animals:
– The Genius of Dogs (Brian Hare, Vanessa Woods)
– What’s a Dog For? (John Homans)
– Animal Wise: The Thoughts and Emotions of Our Fellow Creatures (Virginia Morell)
This latter book discusses cognition in dogs and wolves as well as ants, fish, birds, parrots, rats, elephants, dolphins, and chimpanzees. Brian Hare runs the Canine Cognition Center at Duke University; John Homans is the executive editor of New York magazine and has a dog in New York City; and Virginia Morell is a science journalist who has followed animal cognition research for many years. All three books were informative, well written and easy to read.

What did I learn from these books and what conclusions did I draw? Simply put, I learned a lot about how animals are thinking and feeling fellow creatures with strong cognitive capabilities that, in some cases, rival or exceed our own. Research on animal cognition, just getting seriously underway, is closing in on the conclusion that core animal brains are similar to core human brains (we’ve developed an outer brain), and the coming decades should be able to shed much more light on animal cognitive abilities and on our special relationship with the canine world.

A good summary of Morell’s excellent book is provided by a reviewer (Liza Gross) who writes: “As Morell shows us, the need to elevate ourselves above nature runs deep. By the 1920s, the rise of behaviorists—psychologists who believed that science could investigate only observable behaviors—again demoted animals to mere stimulus-response robots incapable of anything approaching the human capacity for empathy, learning or intelligence. Some psychologists still cling to this view of animal automatons.

Try telling any dog or cat lover that her cherished companion doesn’t have a personality or care whether she lives or dies. I’ll never forget how our Airedale, Amanda, would let loose in a fit of hysterical howls as she flung herself into my arms every time I came home from college break. And I still miss the Russian blue who magically appeared purring at my feet whenever I was feeling down.

Such anecdotes are simply that, of course. But just because scientists don’t know how to study animal emotions doesn’t mean animals don’t experience them. And given how often a study knocks yet another “uniquely” human trait off its pedestal, it may be just a matter of time before someone figures out how to study emotions in animals too.”

My take on all this is that an exciting century awaits in terms of our understanding of animal emotions and skills and of their relationships with other creatures, including humans. In the case of dogs I’ve formed my conclusions: they are smart, thoughtful and feeling creatures who bring great pleasure to their human families and whose relationship with humans will be better understood and appreciated in the decades ahead.
image

Energy Efficiency – The Necessary Cornerstone of U.S. Energy Policy

So far in this blog I’ve focused mostly on energy supply, with only a few references to limiting energy demand. I intend to correct this imbalance by now discussing, in more detail, energy efficiency, the wise use of whatever energy supplies we have, and the reasons I believe energy efficiency should be the cornerstone of U.S. energy policy. I will do so in the context of talking about energy security.

A search of the literature reveals that no precise definition exists for energy security. My approach to addressing this topic is to start by recognizing that energy is a means to an end, not an end in itself (except perhaps to those who sell energy or fuels). Fundamentally, energy is important only as its use facilitates the provision of services that are important to human welfare. These energy services include heating, cooling, lighting, communication, transporting people and goods, commercial activities, and industrial processes.

It is often said that energy is the lifeblood of modern societies, but the use of energy in its various forms, particularly fire, has been critical to human activities over the centuries and has helped shape human society. What is true is that modern societies provide a high level of energy-dependent services to their members and are totally dependent on energy sources that go well beyond human and animal power.

In the 20th century population growth, increasing urbanization, and increasing human welfare led to a rapid rise in electrification and dramatically increased global energy demand.

image

Transportation proved to be the fastest growing consumer of energy supplies, with well over 90 percent of transportation energy needs provided by petroleum. This pattern is continuing in the 21st century.

Projections by the International Energy Agency, the European Commission, the World Energy Council, the US Energy Information Administration, and others all point to the same general conclusions: there will be increased consumption of all primary energy sources over the next several decades. Specifically, the US Department of Energy’s Energy Information Administration, in its International Energy Outlook 2013, projects that, under business-as-usual, total world energy demand will rise from just under 600 Quads (1 Quad = 1.055 Exajoules) today to just over 800 Quads in 2040. Most of this growth will take place in the developing world.
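For readers who prefer SI units, the projection converts as follows (the demand figures and 2013 baseline are those of the EIA outlook cited above; treating "just under 600" and "just over 800" as 600 and 800 Quads is a rounding for illustration):

```python
# EIA International Energy Outlook 2013 projection, converted to SI units.
EJ_PER_QUAD = 1.055
quads_now, quads_2040 = 600, 800    # approximate figures from the text
years = 2040 - 2013                 # horizon of the 2013 outlook

ej_now = quads_now * EJ_PER_QUAD        # ~633 EJ
ej_2040 = quads_2040 * EJ_PER_QUAD      # ~844 EJ
growth = (quads_2040 / quads_now) ** (1 / years) - 1   # implied annual rate
print(f"{ej_now:.0f} EJ -> {ej_2040:.0f} EJ, ~{growth * 100:.1f}% per year")
```

The implied business-as-usual growth rate, about 1% per year compounded, looks modest but adds up to a one-third increase in total world demand.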

image

These projections mask a central issue: how urgent is it to reduce growth in global energy demand and the related emissions of carbon dioxide, other greenhouse gases, and other pollutants? I believe there is real urgency in a world that is powered today mostly by fossil fuels (80%) and is in the obvious early stages of human-induced global warming and climate change – change that is now irreversible even if carbon emissions were reduced to zero tomorrow. These impacts include deep ocean and ocean surface heating, more intense storms, glacier melting, rising ocean levels, changes in land temperatures and precipitation patterns, and movement of disease vectors to new regions. A sad corollary is that nations and island locations that had little to nothing to do with creating global warming may end up suffering its most severe consequences.

The ‘good news’ is that limiting energy demand through increased energy efficiency is in most cases the lowest-hanging fruit in our struggle to balance energy demand with supply while ensuring that people suffering from energy poverty are provided needed services. Considerable literature exists on how we can make more efficient use of energy in buildings (insulation, more efficient appliances and lighting, ground source heat pumps, passive solar design), transportation (more fuel-efficient cars, trucks and aircraft, alternative fuels, increased use of public transportation), and industry (more efficient manufacturing technologies). What was once wasted, when energy costs were lower and less attention was paid to energy use, can now be seen as a resource to be mined.

image

In light of the above I conclude that energy security must rest on two principles: (1) using the least amount of energy to provide a given service, and (2) access to technologies providing a diverse supply of reliable, affordable, and environmentally benign energy. The implications for energy policy are also twofold: (1) priority #1 must be the wise, efficient use of whatever energy supplies are available, whether fossil, nuclear, or renewable, and (2) then, and in parallel with increased efficiency, focus on new energy supplies that meet cost, sustainability and environmental requirements.

The clear message is that energy efficiency, the wise use of energy, must be the cornerstone of national energy policies.

image

Wind Energy In Scotland – What A Wind Farm Looks Like

I have just returned from eleven days in Scotland – one week as a Visiting Professor in the Engineering School at the University of Aberdeen and the remaining time visiting with my wife’s family in and around Glasgow.

Shortly after arriving, and prior to heading to Aberdeen, I was kindly taken to see the Whitelee Wind Farm just outside Glasgow, an experience I am still savoring. A few pictures will illustrate why I was so excited by the visit – quite a change from the late 60’s and early 70’s when I first got involved with renewables:

imageimageimageimage

Whitelee is Europe’s largest onshore wind farm, built in two stages to reach its current dimensions: 215 turbines (140 at 2.3 MW, 69 at 3 MW, 6 at 1.67 MW) with a maximum capacity of 539 MW. Wind energy is Scotland’s fastest growing renewable energy technology, reflecting the fact that Scotland is the windiest country in Europe (25% of all of Europe’s wind crosses the Scottish landmass and its surrounding seas). Scotland’s onshore wind energy potential is estimated to be more than 150 GW (current peak demand in Scotland is 10.5 GW), with significant opportunities for additional development. Scotland’s offshore potential is estimated to be 206 GW, and offshore wind power generation is predicted to reach about 10 GW in 2020. As a result, the Scottish government has set a target of generating 100% of Scotland’s electricity from renewable energy by 2020, with most of this likely to come from wind power. Scotland is also a world leader in the development of wave and tidal power.

A few interesting facts about Scotland’s wind power resource:
– Scotland’s first offshore wind turbine was placed at the Beatrice Wind Farm in the North Sea in 2006 and was the world’s largest wind turbine at the time – 5 MW. A second identical turbine was also installed, and the wind farm began delivering electricity in 2007. Based on historical wind speed measurements it is expected that these turbines will run 96% of the time (more than 8,400 hours per year) and at full combined power (10 MW) 38% of the time.
– Based on EU-wide averages, a wind turbine in an EU country will operate at a 25% capacity factor. In Scotland, given the consistency of the wind, it is expected that an average turbine will have a capacity factor of 35% or more. In fact, a small community wind farm in Shetland set a world record in 2005, achieving a capacity factor of 57.9%.
– About half of the UK’s current installed wind capacity is in Scotland.
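Capacity factor – the ratio of energy actually delivered over a period to what a turbine would deliver running at full rated power the whole time – is worth making concrete. A sketch applying the capacity factors above to Whitelee’s 539 MW (the helper function is illustrative):

```python
# Capacity factor: actual energy delivered divided by the energy that would
# be delivered running at full rated power around the clock.
HOURS_PER_YEAR = 8760

def annual_energy_gwh(capacity_mw: float, capacity_factor: float) -> float:
    """Annual energy output in GWh for a given rated capacity and capacity factor."""
    return capacity_mw * capacity_factor * HOURS_PER_YEAR / 1000

# Whitelee (539 MW) at the ~35% factor expected for Scottish turbines,
# vs. the 25% EU average at the same rated capacity:
print(f"{annual_energy_gwh(539, 0.35):.0f} GWh/yr vs "
      f"{annual_energy_gwh(539, 0.25):.0f} GWh/yr")
```

The ten-point difference in capacity factor is worth several hundred GWh a year at Whitelee’s scale, which is why Scotland’s consistent winds matter so much.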

This is all happening in the context of a vote next September on whether Scotland will separate from the UK and go out on its own as an independent nation. It is a complicated issue that is receiving extensive coverage in Scotland and the other parts of Great Britain, as well as elsewhere, and may be a nail-biter until the vote is taken. An interesting fact is that Scotland never lost its legal distinctness – the treaty that bound Scotland to England combined their parliaments but preserved Scotland’s separate legal system. It will be an interesting debate for the next nine months.