Monday, March 29, 2021

Economically destructive cap and trade for HFCs is here

This is a bit complicated, so bear with me. To begin with, Biden’s avalanche of climate scare executive orders included one telling the State Department to prepare the Kigali Amendment to the Montreal Protocol on Substances that Deplete the Ozone Layer for submission to the Senate for ratification.

The Kigali Amendment has nothing to do with ozone depletion (a fanciful tale in itself); quite the contrary, in fact. The 1987 Montreal Protocol mandated the phase-out of CFCs, then the primary refrigerants and aerosol propellants. CFCs were globally replaced with HFCs, at great expense and bother.

The 2016 Kigali Amendment now mandates the phase-out of HFCs. I am not making this up.

HFCs do not threaten the ozone layer, so they have nothing to do with the Montreal Protocol. But the Protocol community decided to do what is called “mission creep”. They crept over to global warming, where HFCs are considered a problem. They have what is called a high “global warming potential” or GWP, so they too have to go.

Reportedly Obama and Kerry played a big role in creeping the Protocol, but they never submitted the amendment for Senate ratification, knowing it would never get past the Republicans.

But now, to steal a great line: A funny thing happened on the way to the Senate.

Remember the giant Omnibus Appropriations Act passed last December? It funded the federal government and Covid relief to the tune of $2.3 trillion. It also, as usual, included some riders that probably could not pass by themselves.

Well, on page 1074 we find the “American Innovation and Manufacturing Act” or AIM. Incredibly, AIM includes the entire Kigali Amendment. (Still not making it up.) Not by name, mind you, but all the HFC phaseout rules and timetable, pretty much word for word. I wonder if the Senators who voted for this addition to the Omnibus knew they were letting Kigali in the back door?

Simply put, this is a cap and trade system for HFCs, with a cap that declines over time to the point where almost no HFCs are allowed in America.
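Mechanically, a declining cap works like this: a baseline quantity is fixed, then in each compliance period the agency issues allowances equal to the baseline minus a mandated percentage cut, and firms trade within that shrinking pool. A minimal sketch in Python (the percentage cuts are illustrative placeholders, not the statutory schedule):

```python
# Illustrative sketch of a declining-cap allowance system.
# The percentage cuts below are placeholders, not the AIM Act's schedule.
def allowances(baseline: float, phasedown_pct: float) -> float:
    """Allowances issued in a period: the baseline minus the mandated cut."""
    return baseline * (1 - phasedown_pct / 100)

# A baseline of 100 units under successively deeper cuts:
schedule = [10, 40, 70, 85]  # illustrative percentage reductions
caps = [allowances(100.0, p) for p in schedule]
```

Whatever the exact schedule, every period’s allowance total scales off the baseline, which is why getting that one number right matters so much.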

So, for example, AIM and Kigali establish the same cap. Here it is almost funny. Kigali was formulated in 2016, so the cap is mostly based on how much HFC was made and imported in 2011-13. Back then this was recent supply data. AIM uses the same dates, even though the present HFC supply situation may be very different. The only explanation for AIM using 8-to-10-year-old data to establish the cap today is that it is Kigali by the back door.

So the Senate will be asked to ratify something that is already law. I hope they refuse but maybe it is not worth the filibuster. To paraphrase another great line, the greens don’t need no stinking Senate. They got Kigali in the back door already. Note that China, the world’s biggest producer of HFCs, has not ratified Kigali or implemented it by law.

In any case we now have before us a looming declining-cap and trade system for HFCs. EPA is moving quickly and is expected to propose the regulations for this system shortly. There are some big potential problems.

Keep in mind that HFCs are used in vast quantities; their primary uses include:

Air conditioning in cars, homes and big buildings

Refrigerators, freezers and chillers

Aerosol sprays

Electric power transformers

Heat pumps

Structural foam

Fire suppression

Note too that the leading candidates for HFC replacement are HFOs, which presently have some serious problems. For example, they can be flammable, and they do not last all that long. In fact, one reason they do not have a high GWP is that they self-destruct quickly in air.

By far the biggest problem with AIM is that outdated cap. Technically it is called a “baseline” which sounds friendlier than a cap. But the HFC allowances EPA will distribute, in ever decreasing amounts, are from the baseline, so it is the cap.

In fact there are two big problems here. First of all we really have very little information on HFC production and import 8 to 10 years ago. EPA recently acknowledged this, again in an almost funny way.

They issued what is called a “Notice of Data Availability” or NODA. Normally an agency issues a NODA to announce the availability of new data, hence the name. But EPA’s NODA says they do not have good data about HFCs in 2011-13 and asks for suggestions. So this is really an EPA “Notice of Data Unavailability” or NODU.

Second it looks like HFC use today is much greater than it was 8-10 years ago. It is crucial that the AIM baseline developed by EPA be accurate, and especially that EPA’s estimates are not significantly lower than reality. The baseline determines the allocations of allowances and these must be adequate, lest there be a severe shortage of HFCs.

To take an extreme example, suppose the EPA consumption baseline is just half of what is needed for business as usual. In that case the initial, modest 10% reduction, which is effective immediately, becomes a destructive 55% cut. If EPA is low by just 20%, the 10% reduction still balloons to a 28% cut. Such a shortage of allowable HFCs could wreak havoc with certain industries and important products.
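The arithmetic here is easy to check. If the statutory cut is c and EPA’s baseline is a fraction b of true demand, the effective cut is 1 − (1 − c)·b. A quick sketch:

```python
# Effective HFC cut when the baseline understates true demand.
# baseline_ratio = (EPA baseline) / (true business-as-usual demand).
def effective_cut(statutory_cut: float, baseline_ratio: float) -> float:
    allowed = (1 - statutory_cut) * baseline_ratio  # supply actually permitted
    return 1 - allowed

half_baseline = effective_cut(0.10, 0.5)   # baseline half of need: ~55% cut
low_by_20pct = effective_cut(0.10, 0.8)    # baseline 20% low: ~28% cut
```

The leverage is striking: a modest statutory cut compounds any baseline error into a much deeper real-world shortage.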

Then there is the problem of how EPA can even find all of the companies that use HFCs or import HFC containing products, much less how they can allocate the increasingly limited allowances to them. With SO2 allowances it was relatively easy because we knew where the big coal-fired power plants were and they did not change over time.

In contrast, HFC use and importation is a highly dynamic situation. For just one small example, roughly half of the cars sold in America are imported and all of them contain HFCs. In the case of aerosol sprays the allowance allocation problem is mind boggling.

There are other major problems, some not in Kigali, but this is enough to make the point. We are looking at a cap and trade phaseout of a ubiquitous, harmless class of chemicals, all in the name of climate change.

AIM is climate craziness personified, a prescription for disaster, especially economically destructive shortages.


The Social Cost of Carbon and Climate Sensitivity Is Model Manipulation at Its Finest

The “social cost of carbon” is a calculation that the Biden administration is looking to use to justify stringent regulation of carbon dioxide emissions.

Missouri Attorney General Eric Schmitt—joined by Arkansas, Arizona, Indiana, Kansas, Montana, Nebraska, Ohio, Oklahoma, South Carolina, Tennessee, and Utah—has now filed a lawsuit, arguing that the use of this metric in policy would constitute an overreach of government power.

Carbon dioxide is an odorless gas that is the basis for almost all plant life on earth. Of course, carbon dioxide emissions have slightly warmed the surface temperature of the planet, but on the other hand, access to plentiful energy has been accompanied by a doubling of life expectancy in the developed world and an elevenfold growth in personal wealth, as noted in the 2016 book “Lukewarming.”

A recent commentary that one of us (Kevin Dayaratna) published, titled “Why the Social Cost of Carbon Is the Most Useless Number You’ve Never Heard Of,” presented years of research on the topic conducted at The Heritage Foundation’s Center for Data Analysis.

He noted how easy it is to artificially raise the social cost of carbon by manipulating various assumptions in the calculation, including the discount rate, which is essentially an estimate of how much money invested today in, say, the stock market will grow in the future.

The Office of Management and Budget has recommended calculating things like the social cost of carbon with discount rates of 3%, 5%, and 7%. Obviously, at the higher rates, the social cost of carbon becomes pretty low. Using a 3% discount rate, the Biden administration hiked the social cost of carbon up to $51 per ton, a significant increase from the Trump administration’s $7 per ton.
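The discount rate’s leverage is mechanical: a damage assumed to occur far in the future is divided by (1 + r)^t, so small changes in r swing the present value enormously. A sketch, using a hypothetical $1,000 damage a century from now (the damage figure is purely illustrative):

```python
# Present value of a far-future climate damage at various discount rates.
def present_value(damage: float, rate: float, years: int) -> float:
    return damage / (1 + rate) ** years

for r in (0.02, 0.03, 0.05, 0.07):
    print(f"{r:.0%}: ${present_value(1000.0, r, 100):.2f}")
```

At 3% the damage is worth roughly $52 today; at 7%, barely more than a dollar. That is why the choice of rate, more than the climate science, dominates the final number.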

Even that might not do for the Biden administration, which could rely upon the recent arguments made by University of Chicago economist Michael Greenstone, who said that the discount rate should be 2% or lower.

Additionally, in order to determine the social cost of carbon, we need to have a good idea of how much the planet’s surface will warm under various policy scenarios.

To calculate this level, scientists have for decades used computer models to find the “sensitivity” of climate to an arbitrary amount of carbon dioxide emissions. This sensitivity is usually the calculated warming, in terms of temperature, for a doubling of atmospheric carbon dioxide.

Here is a dirty little secret that few are aware of: All those horrifying future temperature changes that grace the front pages of papers of record aren’t really the predicted warming above today’s level. Instead, they are the difference between two models of climate change.

The “base climate” isn’t the observed global temperature at a given point in time. Instead, it is what a computer model simulates temperatures to be prior to any significant changes in carbon dioxide.

Reality need not apply to these calculations. And there are sometimes very big differences between the base models and reality, especially in the high latitudes of both hemispheres, and over the world’s vast tropics.

The usual procedure is then to instantaneously quadruple carbon dioxide and let the model spin up to an equilibrium climate. Then—hold onto your hat—that number is divided by two, taking advantage of the fact that warming increases roughly linearly with each doubling of carbon dioxide (that is, logarithmically with its concentration), something that has been known for a long time. The final figure is called the equilibrium climate sensitivity to doubled carbon dioxide.
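The “divide by two” step works because equilibrium warming is, to good approximation, proportional to the number of CO2 doublings. A sketch using an illustrative sensitivity of 3 C per doubling (the value is a placeholder, not a claim):

```python
import math

# Warming under the standard logarithmic relation:
#   dT = ECS * log2(C / C0)
def warming(ecs: float, c: float, c0: float = 280.0) -> float:
    return ecs * math.log2(c / c0)

ecs = 3.0                        # illustrative, deg C per doubling
dt_4x = warming(ecs, 4 * 280.0)  # quadrupling = two doublings
assert dt_4x / 2 == ecs          # halving recovers the per-doubling sensitivity
```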

With regard to the equilibrium climate sensitivity, climate science is very odd: The more money we spend studying it, the more uncertain our forecasts become.

This fact is becoming increasingly obvious as a new suite of models is emerging that will be incorporated in the next climate science report from the U.N.’s Intergovernmental Panel on Climate Change, to be released next year.

For decades there was no real narrowing of the range of the equilibrium climate sensitivity, which dates back to a 1979 National Academy of Sciences report, “Carbon Dioxide and Climate: A Scientific Assessment,” chaired by Jule Charney of the Massachusetts Institute of Technology.

The “Charney Sensitivity,” as it came to be called, was 1.5-4.5 C for the lower atmospheric warming that would be caused by a doubling of carbon dioxide.

Subsequent assessments, such as some of the serial “scientific assessments” of the Intergovernmental Panel on Climate Change, gave the same range, or something very close.

Periodically, the international modeling community runs what are called “coupled model intercomparison projects,” or CMIPs. The penultimate one, used in the 2013 Intergovernmental Panel on Climate Change assessment, contained 32 families of models with a sensitivity range of 2.1-4.7 C and a mean value of 3.4 C—i.e., warmer lower and mean values than Charney.

Nevertheless, the Intergovernmental Panel on Climate Change rounded this range back to the good old 1.5-4.5 C, because there was some skepticism about the warmer models.

Despite these differences between various base climate models and the doubled carbon dioxide calculation, reality-based calculations of the equilibrium climate sensitivity by other researchers yield much lower sensitivities, between 1.4 and 1.7 C.

The new coupled model intercomparison project model suite (CMIP6), on the other hand, displays an even larger range of sensitivity, extending beyond what has been observed. The range across the models currently available (which is most of them) is 1.8-5.6 C, with an estimated mean of 4 C—figures the Biden administration may very well use to determine the social cost of carbon.

So, sadly, the new coupled model intercomparison project models are worse than the older ones.

A 2017 study shows that, with one exception, the older coupled model intercomparison project models made large systematic errors over the entire globe’s tropics. The exception was a Russian model, which also had the lowest sensitivity of all, at 2.05 C.

Last year, researchers examined the new coupled model intercomparison project model suite, and what they found was not good:

Rather than being resolved, the problem has become worse, since now every member of the CMIP6 generation of climate models exhibits an upward bias in the entire global troposphere as well as in the tropics.

A very recent paper just published in Geophysical Research Letters indicates that new estimates of the enhancement of clouds by human aerosol emissions may be the problem. Interestingly, the model with the least cloud interaction is the revised Russian model, whose sensitivity is down to 1.8 C; it nonetheless still overpredicts observed global warming.

When it became apparent that the new models were predicting even more warming than their predecessors, Paul Voosen, the climate correspondent at Science magazine, interviewed a number of climate scientists and found that the new, “improved” renditions of the cloud-aerosol interaction are causing real problems, either completely eliminating any warming in the 20th century or producing far too much.

One of the scientists involved, Andrew Gettelman, told Voosen that “it took us a year to work that out,” proving yet again that climate scientists modify their models to give what French modeler Frederic Hourdin called an “anticipated acceptable result.”

Acceptable to whom? Hourdin’s landmark paper clearly indicates that it is scientists, not objective science, who subjectively decide how much warming looks right.

The implications of the systematic problems with coupled model intercomparison project models, and other manipulated models, for the social cost of carbon may be big: The Biden administration will rely on these models to beef up the social cost of carbon as well.

In fact, the Obama administration did just that, using an outdated equilibrium climate sensitivity distribution, not grounded in reality, that inflated its social cost of carbon estimates.

Indeed, peer-reviewed research conducted by Kevin Dayaratna, Pat Michaels, Ross McKitrick, and David Kreutzer in two separate journals has illustrated that under reasonable and realistic assumptions for climate sensitivity, alongside other assumptions, the social cost of carbon may effectively be zero or even negative.

It is now apparent that the reason for using the social cost of carbon to begin with is very simple: to be able to control the energy, agricultural, and industrial sectors of the economy, which will result in big costs for ordinary Americans with little to no climate benefit in return.

So altogether, we have one manipulated class of models—models determining climate sensitivity—likely being used as a basis for manipulating the social cost of carbon. The consequences for the social cost of carbon’s uncertainty are profound.

As a result, the public should be very cautious about accepting new calculations of the social cost of carbon. Although the social cost of carbon is based on an interesting class of statistical models, its use in policy should also serve as a case study of model manipulation at its finest.


Canada: Supreme Court Rules Mandatory Carbon Price Constitutional

The Supreme Court of Canada, in a pivotal victory for Prime Minister Justin Trudeau’s climate policy, has ruled that the government’s decision to mandate a national carbon price to reduce greenhouse gas emissions is constitutional.

In a split 6-3 decision issued Thursday morning, Canada’s highest court ruled in favor of the nation’s federal government following a hotly contested legal battle over the government’s decision to impose a minimum fuel charge on all distributors and producers of carbon-based fuel in the country. The move, approved by parliament in 2018, received immediate pushback from a number of Canada’s provinces, which claimed the decision was a blatant overreach by the federal government and argued that such decisions fall solely under their provincial authority.

These challenges resulted in a lengthy court battle, numerous appeals and conflicting rulings before ultimately landing at the feet of the Supreme Court, which has now definitively found that the decision to issue the carbon price was legal.

Chief Justice Richard Wagner, writing for the majority, said that at the heart of the legal battle rests the stark reality that climate change is a very real threat to the safety and wellbeing of humanity.

“Climate change is real,” Wagner wrote in Thursday’s ruling. “It is caused by greenhouse gas emissions resulting from human activities, and it poses a grave threat to humanity’s future. The only way to address the threat of climate change is to reduce greenhouse gas emissions.”

Because the threat of climate change is so severe, Wagner says, the gravity of the problem gives the government authority to act under the “peace, order and good government” clause of the Canadian Constitution. The clause, commonly referred to as the POGG clause, is rarely successfully cited as the basis for governmental action but does nonetheless give federal leaders the authority to act on issues that relate to the entire nation.

Wagner says this is where Trudeau’s carbon price prevails. While the provinces — namely the more conservative, oil-centric Alberta, Ontario and Saskatchewan — claim that managing natural resources to combat climate change is something they can do independently, POGG can be invoked when there is a clear inability of the provinces to come together and fix the problem themselves.

The chief justice says that allowing provinces to handle this on their own would only hold Canada back from combating climate change as a collective nation. Even if the majority of provinces were able to coordinate their efforts, it would only take a small number of provinces unwilling to impose a minimum carbon price to undermine the actions of the rest of the country.

The ruling states that only through a national and unified approach, in which all provinces are bound to play by the same carbon pricing rules, does Canada have a chance at successfully reducing its carbon footprint.

The judge notes that provinces are still allowed to regulate themselves when it comes to their carbon pricing systems and can still choose their own regulatory frameworks for emissions standards. All they have to do is comply with the minimum standards laid out by the federal government, or else they risk being slapped with an increased carbon tax.

Wagner also notes that carbon pricing works. The justice writes that there is a “broad consensus among international bodies” that setting these minimum prices can significantly cut back on greenhouse emissions from carbon, the idea being that the more carbon costs, the less people will actually use it.

The ruling’s mention of an international consensus regarding the effectiveness of carbon pricing could have implications for other nations that are considering similar measures. Implementing a carbon pricing system in the United States, for instance, has often been cited as a crucial step in overhauling America’s energy policies, but has so far failed to get off the ground.

Thursday’s ruling officially gives Trudeau’s climate policy the ability to forge ahead with imposing minimum carbon prices, which will continue to rise throughout the next decade to further discourage carbon use. The minimum price is currently set at $30 per tonne (about 1.1 U.S. tons) of emissions, but the government says it will continue to raise the minimum over the next few years until it ultimately hits $170 per tonne by 2030.
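For a sense of scale, a carbon price translates into pump prices through a fuel’s emission factor. A back-of-envelope sketch, assuming the commonly cited figure of roughly 2.3 kg of CO2 per litre of gasoline (an assumption for illustration, not a number from the ruling):

```python
# Carbon price (dollars per tonne of CO2) -> added cost per litre of fuel.
# The 2.3 kg CO2/litre emission factor for gasoline is a commonly cited
# approximation, used here purely for illustration.
def price_per_litre(price_per_tonne: float,
                    kg_co2_per_litre: float = 2.3) -> float:
    return price_per_tonne * kg_co2_per_litre / 1000.0

at_floor = price_per_litre(30.0)    # today's floor: about $0.07 per litre
at_2030 = price_per_litre(170.0)    # the 2030 floor: about $0.39 per litre
```

On these assumptions, the scheduled rise from $30 to $170 multiplies the per-litre surcharge by nearly six, which is the mechanism by which the policy is meant to discourage carbon use.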


We should learn what lessons from Fukushima?

Lesson #1: People died from forced evacuations, not from radiation

Dr. Kelvin Kemm

A decade has passed since the Great East Japan Earthquake, and the name Fukushima is etched into history. But few people know the truth of what happened. The phrase, “the lessons learned from Fukushima,” is well-known. But how do people implement them, if they don’t know what happened, or what lessons they should actually learn?

It was after lunch on 11 March 2011 that a giant earthquake occurred 72 kilometers (45 miles) off the Oshika Peninsula in Japan. It registered magnitude 9.0, making it the largest ’quake ever recorded in Japan. The undersea ground movement, over 30 km (18 miles) beneath the ocean’s surface, lifted up a huge volume of water, like an immense moving hill. Meanwhile, the ground shockwave travelled toward the land at high speed. It struck Japan and shook the ground for six terrifying minutes.

The shock wave travelled under 11 nuclear reactors, including those at two separate Fukushima complexes: Fukushima-Daiichi and Fukushima-Daini. (Daiichi means ‘Number One’ and Daini ‘Number Two’.) All 11 reactors shut down, as they were designed to do, and no doubt all the reactor operators breathed a great sigh of relief. It was premature.

The mound of sea water was still traveling. As the water “hill” entered shallow water nearer the land, it was lifted up into a towering wave as high as 40 meters (130 feet!) in places. Then, some 50 minutes after the earthquake, the tsunami struck the Fukushima-Daiichi nuclear power station. Some kilometres away, when the water struck the Fukushima-Daini nuclear power station, it was “only” 9 m (30 ft) high, which was not as devastating as at Daiichi. Daini did not make it into the news.

The water jumped the protective sea walls at Fukushima-Daiichi. The sighs of relief from a half hour before turned into concern and dread. Over at the Fukushima-Daini power station, 12 km (7 mi) to the south, the water also caused damage to machinery, but the reactors were not harmed. There was no risk of radiation release, so the Daini power station was of no interest to the international media. Daini was brought safely to “cold shutdown” after two days.

As a result, over the past decade, any reference to “Fukushima” has meant only the Daiichi power station and not the other one.

The devastating tsunami swept up to 10 km (6 mi) inland in places, washing away buildings, roads, and telecommunication and power lines. Over 15,000 people were killed, mainly by drowning.

Although all the nuclear reactors had shut down to a state known as “hot shutdown,” the reactors were still very hot and needed residual cooling for many hours after the urgent fast shutdown. People instinctively know not to put their hands on the engine block of a car right after it has been switched off. Nuclear reactors are the same and need to cool down until they reach the safe state known as “cold shutdown.”
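The scale of that residual heat can be sketched with a textbook Way-Wigner-style approximation (an idealized formula, not plant data): a reactor that has run for about a year still produces on the order of 1% of full thermal power an hour after shutdown, which for a large unit is tens of megawatts that must be carried away for days.

```python
# Way-Wigner style approximation of decay heat after reactor shutdown.
# Returns the fraction of full thermal power still being produced
# t seconds after shutdown, for a core operated run_time seconds beforehand.
def decay_heat_fraction(t: float, run_time: float = 3.15e7) -> float:
    return 0.066 * (t ** -0.2 - (t + run_time) ** -0.2)

for hours in (1, 24, 168):  # one hour, one day, one week after shutdown
    f = decay_heat_fraction(hours * 3600.0)
    print(f"{hours:>4} h: {100 * f:.2f}% of full power")
```

Even a week after shutdown the heat has not vanished, which is why the loss of every cooling-power source in sequence was so serious.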

A nuclear reactor has pumps that send water through the reactor until it cools. But the Fukushima electrical pumps failed, because the tsunami had washed away the incoming electrical lines. So the reactor system automatically switched to diesel-driven generators to keep the cooling pumps going; but the water had washed away the diesel fuel supply, meaning the diesels worked for only a short while. Then it switched to emergency batteries; but the batteries were never designed to last for days, and could supply emergency power for only about eight hours.

The hot fuel could not be cooled, and over the next three or four days the fuel in three reactors melted, much like a candle melts.

The world media watched, and broadcast the blow-by-blow action. Japanese authorities started to panic under the international spotlight. The un-circulating cooling water was boiling off inside the reactors, exposing the hot fuel and triggering a chemical reaction between the overheated fuel cladding and the steam. This reaction produced hydrogen gas. As the steam pressure rose, the engineers decided to open valves to release the pressure. That worked as planned, but it released the hydrogen as well.

Hydrogen, being light, rose up to the roof, where the ventilation system was not working, because there was no electricity. After a while some stray spark ignited the hydrogen which exploded, blowing the lightweight roof off the building right in front of the world’s TV cameras. The Fukushima news just became much more dramatic. Authorities were desperate to show the world some positive action.

They progressively ordered the evacuation of 160,000 people living around the Fukushima neighbourhood. That was a mistake. As days and weeks passed, it materialized that not one single person was killed by nuclear radiation. Not one single person was even injured by nuclear radiation, either. Even today, a decade later, there is still no sign of any longer-term radiation harm to any person or animal. Sadly, however, people did die during the forced evacuation.

So one of the lessons learned from Fukushima is that a huge amount of nuclear power can be struck by the largest earthquake and tsunami ever recorded, and nobody gets harmed by nuclear radiation.

Another lesson learned is that an evacuation order issued too hastily did harm and kill people.

World Nuclear Association Director-General Dr. Sama Bilbao y León said: “The rapidly implemented and protracted evacuation has resulted in well-documented significant negative social and health impacts. In total, the evacuation is thought to have been responsible for more than 2,000 premature deaths among the 160,000 who were evacuated. The rapid evacuation of the frail elderly, as well as those requiring hospital care, had a near-immediate toll.” [emphasis added]

She added: “When facing future scenarios concerning public health and safety, whatever the event, it is important that authorities take an all-hazards approach. There are risks involved in all human activities, not just nuclear power generation. Actions taken to mitigate a situation should not result in worse impacts than the original events. This is particularly important when managing the response to incidents at nuclear facilities – where fear of radiation may lead to an overly conservative assessment and a lack of perspective for relative risks.”

Thus, a decade later, we can contemplate the cumulative lessons learned. Above all, they are that nuclear power is far safer than anyone had thought. Even when dreaded core meltdowns occurred, and although reactors were wrecked, resulting in a financial disaster for the owners, no people were harmed by radiation.

We also learned that, for local residents, it would have been far safer to stay indoors in a house than to join the forced evacuation. We also learned that governments and authorities must listen to the nuclear professionals, and not overreact, even though the television news cameras look awfully close.

Fukushima certainly produced some valuable lessons. Governments, news media and the public need to learn the correct lessons from them.

Dr Kelvin Kemm is a nuclear physicist and is CEO of Stratek Business Strategy Consultants, a project management company based in Pretoria. He conducts business strategy development and project planning in a wide variety of fields for diverse clients.



