Tuesday, November 22, 2022



As predicted, COP27 was a flop yet again

Every year, the annual UN climate conference heralds two weeks of intergovernmental pantomime.

Without fail, we are told that time is running out, that it’s one minute to midnight, and that it’s the last chance to save the world from doomsday.

There is an uncanny sense of déjà vu as global CO2 emissions continue to rise relentlessly, with no sign of slowing down.

image from https://mcusercontent.com/c920274f2a364603849bbb505/images/61fb8b00-ba51-69eb-5fc7-01581b00f90a.jpg

We at Net Zero Watch have been pointing out for years that every COP passes through the same ritual stages, always ending in ultimate failure. These green away weeks have degenerated into a meaningless ritual, which makes the participants feel good about themselves but in practice achieves little or nothing.

Which is why we predicted with great confidence many months ago that there would be no breaking of the mould at COP27, and that the great conclave of climate cardinals would end in dismal failure yet again.

And how right we were. From the arrival of the very first delegates, it was just as we said it would be. Check out our updated history of the annual COP ritual, which shows how Sharm el-Sheikh 2022 followed exactly the same pattern, and the same path to dismal failure, as every COP before this year’s talkfest.

After the annual excitement of hopes, hype and circus came the habitual ‘deadlock’ and finally the ‘breakthrough’, when the European Union said it was willing to create a new climate compensation fund … but only on the condition that ‘wealthier developing countries’ contribute too.

Yet the eternal question of who will actually pay the demanded $2 trillion p.a. in ‘climate reparations’ has been kicked into next year’s COP28, when a “transitional committee” will be tasked to “identify and expand sources of funding.” In other words, the endless search for climate $$trillions will go on for decades to come.

That being the case, we expect the same COP ritual all over again next year at COP28 in the United Arab Emirates.

***************************************************

That Alluring Curve: The gaffe that lies at the heart of climate science

One of the central ideas underpinning climate change projection is that of the climate model ensemble. The thinking behind an ensemble is very simple. If one takes a number of models, each of which cannot be assumed to be a faithful and complete representation of the system being modelled, then the average of their projections is likely to be closer to the truth than any single one. Furthermore, the spread of projections can be used to represent the uncertainty associated with that average. This entails a well-established statistical approach in which the spread is assumed to take the form of a probability density function.
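To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up numbers rather than output from any real model ensemble) of how an ensemble mean and spread are typically computed and then read as a ‘best estimate’ plus uncertainty:

```python
import numpy as np

# Hypothetical example: projections (deg C) from five models for the same target year.
# The numbers are illustrative only, not taken from any real ensemble.
projections = np.array([2.1, 2.8, 3.4, 2.5, 4.0])

ensemble_mean = projections.mean()           # read as the "best estimate"
ensemble_spread = projections.std(ddof=1)    # sample standard deviation, read as the uncertainty

print(f"mean = {ensemble_mean:.2f} C, spread = {ensemble_spread:.2f} C")
```

The question the rest of the article addresses is whether that spread can legitimately be treated as a probability density function at all.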

However, nobody should underestimate the difficulties to be overcome by the climate modellers, since the models concerned will be subject to a number of parametric and structural uncertainties. Furthermore, the tuning of the models, and the basis for their inclusion and weighting within the ensemble, can be more of a black art than a science. Despite this, it is assumed that the resulting uncertainties are nevertheless captured by the properties of the probability distribution. The purpose of this article is to convince you that the problems are too deep-seated for that to be the case, since the uncertainties can actually invalidate the statistical methodology employed to analyse the ensemble. In reality, the distribution of projections is highly unlikely to form a true probability density function and so should not be treated as such.

To make my case I will be dividing my argument into two parts. In the first, I will introduce some basics of uncertainty analysis and explain how their consideration has led to the development of guidelines for the handling of uncertainty in system modelling. There will be no reference to climate science in this first part since the guidelines have general application. In the second part, I will examine the extent to which the guidelines have been embraced by the climate science community and explain the implications for evaluation of climate model ensembles. Finally, having made my case, I will briefly discuss the origins and significance of the climate science community’s mishandling of uncertainty analysis and what this may mean for the validity of climate change policy. The central premise is that no policy, least of all one associated with something as hugely important as climate change management, should be based upon the misapplication of statistical technique.

The background issues

Fundamental to an understanding of uncertainty is the ability to answer the following question: Does the uncertainty regarding the likely future state of a system have as its basis the inherent variability of that system, or is it due to gaps in our understanding of the causation behind the variability? To the extent that variability by itself can lead to uncertainty, one will be in the province of stochastic physics. Here the mathematics of randomness will apply, with the caveat that one may be dealing with a chaotic system in which the randomness stems from deterministic processes that are critically sensitive to boundary conditions. If the processes involved are fully understood, and the boundary conditions have been quantified with maximum precision, then the residual uncertainty will be both objective and irreducible. Such uncertainty is sometimes known as aleatory uncertainty, from the Latin for a gambler’s die, alea.1

In practice, however, the basis for the variability may not be fully understood. These gaps in knowledge lead to a fundamentally different form of uncertainty called epistemic uncertainty. Such uncertainty is subjective, since different parties may have differing perspectives that bear upon their understanding of the variability. It can be reduced simply by filling the gaps in knowledge. As a side-effect of the reduction of epistemic uncertainty, a consensus will emerge. However, care has to be taken when using consensus as a metric to measure levels of epistemic uncertainty, since there are also non-epistemic processes that can lead to the development of consensus.2

A simple example that can be used to illustrate the distinction between aleatory and epistemic uncertainty would be a system in which variability results from the random outcome of a thrown die. For a die of a known number of sides, the average score over time is determined by well-understood probability theory and it is a simple matter to determine the appropriate probability distribution. Furthermore, the uncertainty of outcome is both objective and irreducible – it is, after all, the titular aleatory uncertainty. However, now consider a situation in which the number of sides of the die is unknown. In this example the range of possibilities for the average score over time is greater, and there is both an epistemic and aleatory component to the uncertainty. It remains the case that the aleatory component can be defined using a probability density function but the same cannot be said of the epistemic component since it represents an entirely deterministic situation (i.e. it is a fixed number that is simply unknown). A set of probabilities can be provided representing the likelihoods for the various possible number of sides, but there can be no guarantee that the full state space is represented by such a set and there is no basis for assuming that stochastic logic applies to their allocation (as it would for the aleatory). For that reason the amalgamation of the epistemic and aleatory uncertainties cannot be represented by a true probability density function, or at the very least, such a function would not be as meaningful as that used for the aleatory component on its own.
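The distinction can be illustrated with a short simulation. The sketch below (illustrative Python; the candidate dice and belief weights are arbitrary choices of mine) contrasts the long-run average of a known die with the belief-weighted ‘average’ one obtains when the number of sides is simply unknown:

```python
import numpy as np

rng = np.random.default_rng(0)

# Aleatory case: a known six-sided die. The uncertainty is pure variability,
# and the long-run average converges on the theoretical 3.5.
known_die = rng.integers(1, 7, size=100_000)
print("known die, long-run average:", known_die.mean())

# Epistemic case: the number of sides is fixed but unknown. The 'probabilities'
# attached to the candidate side-counts express degrees of belief, not frequencies --
# the die itself does not change from throw to throw.
candidate_sides = [4, 6, 8, 10]        # hypothetical possibilities
beliefs = [0.25, 0.25, 0.25, 0.25]     # subjective weights, not a sampling distribution

# A belief-weighted average of the theoretical means gives a single number,
# but it is not the long-run average of any real die.
belief_weighted_mean = sum(b * (n + 1) / 2 for b, n in zip(beliefs, candidate_sides))
print("belief-weighted 'mean':", belief_weighted_mean)
```

The first number is a genuine statistical expectation; the second is merely a summary of opinion, which is precisely the epistemic component described above.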

You will also note that in both situations the likelihood (i.e. for the average score of the known die, and for the number of sides of the unknown die) has been characterised by the allocation of probabilities. This is a fundamental difficulty that lies at the heart of uncertainty analysis, since probability serves the dual purpose of quantifying both variability and incertitude, and does not discern between them. Consequently, any mathematical technique for analysing uncertainty that is basically probabilistic will suffer from the same drawback, i.e. it runs the risk of conflating variability and incertitude. Yet, as I have just pointed out, only the former lends itself to a stochastically determined probability distribution. It is highly likely, therefore, that the application of such a probabilistic technique will lead to erroneous results if the epistemic and aleatory components have not been isolated prior to the positing of a probability distribution representing the overall uncertainty. As Professors Armen Der Kiureghian and Ove Ditlevsen have noted in the context of time-variant, structural reliability modelling:

The distinction between aleatory and epistemic uncertainties is determined by our modeling choices. The distinction is useful for identifying sources of uncertainty that can be reduced, and in developing sound risk and reliability models. It is shown that for proper formulation of reliability, careful attention should be paid to the categorization (epistemic, aleatory, ergodic or non-ergodic) of uncertainties. Failure to do so may result in underestimation or overestimation of failure probability, which can be quite significant (orders of magnitude) in certain cases.

‘Aleatory or epistemic? Does it matter?’, 2007

In real life, most systems are a lot more complicated than the throwing of a single n-sided die. Typically, a system’s mathematical model will have many free variables (i.e. parameters and boundary conditions) of uncertain value that will determine any projection based upon it. To handle this complexity, modellers employ a technique referred to as Monte Carlo simulation in which multiple runs of the model are aggregated to create a statistical spread of projections. Each run is based upon a random sampling of the possible parametric or boundary condition values allowed for by the model. When performing the sampling, the likelihood of selecting any particular value for a given variable is determined by the probability distribution that applies to that variable’s uncertainty. Note, however, that I used the term ‘random sampling’ here. This is because Monte Carlo simulation is a technique originally developed for the analysis of aleatory uncertainty (it finds widespread use in the field of stochastic physics). When the variable’s uncertainty is epistemic, however, random sampling is inappropriate because the actual value is subject to deterministic incertitude and not stochastic variability. For such variables, one would be sampling possible opinions of experts rather than variable states of the system. But Monte Carlo simulation doesn’t care. Give it a set of probabilities expressing a variable’s uncertainty and it will crunch those numbers merrily and provide you with a very aleatoric looking output. That doesn’t mean that it can be relied upon, however.
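As a hedged illustration of that indifference, consider the following toy Monte Carlo run (Python; the model, the uniform ‘expert’ range for the sensitivity parameter and the noise level are all invented for the example, and stand in for no real climate model). It samples an epistemically uncertain parameter and an aleatory noise term in exactly the same way, and duly produces a smooth-looking spread of outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(sensitivity, noise):
    """Toy projection: one epistemically uncertain parameter plus natural variability."""
    return sensitivity * 1.0 + noise

N = 100_000
# 'sensitivity' is fixed but unknown; the uniform range below stands in for expert opinion.
# Sampling it randomly means sampling opinions, not states of the system.
sensitivity_samples = rng.uniform(1.5, 4.5, N)
# 'noise' is genuine aleatory variability.
noise_samples = rng.normal(0.0, 0.3, N)

outputs = toy_model(sensitivity_samples, noise_samples)

# The spread of 'outputs' looks like a probability density function,
# but it conflates belief about the parameter with variability of the system.
print("apparent 5th-95th percentile range:", np.percentile(outputs, [5, 95]))
```

The resulting percentile range reads like a statement about the system’s variability, but much of it is really a statement about the analyst’s ignorance of the parameter.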

The restrictions on the applicability of Monte Carlo simulation are well-known and have led to written guidelines to be followed by mathematical modellers working within the field of risk assessment. Here, for example, is how the US Environmental Protection Agency (EPA) has put it:

Monte Carlo simulation also has important limitations, which have restrained EPA from accepting it as a preferred risk assessment tool. Available software cannot distinguish between variability and uncertainty. Some factors, such as body weight and tap water ingestion, show well-described differences among individuals. These differences are called ‘variability’. Other factors, such as frequency and duration of trespassing, are simply unknown. This lack of knowledge is called ‘uncertainty’.3 Current Monte Carlo software treats uncertainty as if it were variability, which may produce misleading results.

‘Use of Monte Carlo simulation in risk assessments’, 1994

And here is what Professor Susan R. Poulter has to say regarding the guidelines issued by the US National Academy of Sciences (NAS):

In ‘Science and Judgment in Risk Assessment’, the NAS noted the problems with incorporation of subjective assessments of model uncertainty into probability distribution functions, suggesting that such quantitative analyses of model uncertainty be reserved for priority setting and risk trading. For standard setting, residual risk determinations and risk communication, however, the NAS recommended that separate analyses of parameter uncertainty be conducted for each relevant model (rather than a single hybrid distribution), with additional reporting of the subjective probability that each model is correct.4

‘Monte Carlo simulation in environmental risk assessment – Science, policy and legal issues’, 1998

None of this is to say that Monte Carlo simulation is a bad idea or a technique that should never be used. On the contrary, it can be an invaluable tool when analysing the effects of variability in essentially stochastic systems. The problems only emerge when epistemic uncertainty features in the system’s mathematical model and this is not properly isolated, and allowed for, when assessing the extent to which the probability distributions produced by the simulation fully capture the uncertainty.
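One common way of isolating the two components is a two-dimensional (nested) analysis: hold each candidate value of the epistemically uncertain parameter fixed, run an aleatory Monte Carlo simulation for each, and report the resulting family of distributions rather than a single blended curve. A minimal sketch, reusing the invented toy model from the previous example:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_model(sensitivity, noise):
    return sensitivity * 1.0 + noise

# Outer loop: epistemic uncertainty, i.e. candidate values of the fixed-but-unknown parameter.
# Inner loop: aleatory uncertainty, which genuinely can be summarised by a distribution
# for each candidate parameter value.
inner_runs = 20_000
for sensitivity in [1.5, 2.5, 3.5, 4.5]:          # hypothetical candidate values
    noise = rng.normal(0.0, 0.3, inner_runs)
    outputs = toy_model(sensitivity, noise)
    lo, hi = np.percentile(outputs, [5, 95])
    print(f"if sensitivity = {sensitivity}: 5th-95th percentile = [{lo:.2f}, {hi:.2f}]")

# The result is a family of distributions, one per epistemic assumption,
# rather than a single hybrid curve that hides the distinction.
```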

So, the key questions are these: Is there any indication that these issues and the guidelines arising have been taken on board by the climate scientists? Have they understood that ensemble statistics can be inappropriate or very misleading when epistemic uncertainties are present? And if they haven’t, can we trust the uncertainties they attribute to climate model projections, particularly when model ensembles are involved?

The relevance to climate science

In fact, there is plenty of evidence that the issues are familiar to the community of climate modellers. For example, there is the following from Professor Jeroen Pieter van der Sluijs, of the University of Bergen, regarding the uncertainties associated with climate sensitivity:

Being the product of deterministic models, the 1.5°C to 4.5°C range is not a probability distribution. There have, nevertheless, been attempts to provide a ’best guess’ from the range. This has been regarded as a further useful simplification for policy-makers. However, non-specialists – such as policy-makers, journalists and other scientists – may have interpreted the range of climate sensitivity values as a virtual simulacrum of a probability distribution, the ’best guess’ becoming the ’most likely’ value.

‘Anchoring amid uncertainty’, 1997

More here:

*****************************************************

Pakistan: A Deluge of Bad Maths and Worse Reporting

Let’s give sanity and compassion a chance

Discussing climate stuff with your pals, it’s prudent to ask, “Is that true, or did you get it from the ABC? [Australian Broadcasting Corporation] ” Case in point: the Pakistan floods. Time and again since August, the ABC has told us that a third of the country is or was underwater. Horrific! But can that “third” be right? Isn’t a lot of the country desert and mountains anyway?

For the ABC’s ersatz tax-fed journalists, there’s no doubt about it. They show us satellite mapping with flooded parts (colored blue) clearly defined. Case closed…er, not actually. Blind Freddie can see on the ABC’s Pakistan map that the flooding is nothing like a third. Less than a tenth, more like it.

My intent is not to trivialise the terrible suffering of the subsistence villagers. But facts do matter. For example, the aid organisation Care has been raising Pakistan relief funds from the public, claiming that more than a third of the country is flooded. Maybe they’re trusting the ABC?

Here’s the real deal. UN disaster agency OCHA reported,

Satellite-detected water extents mapped by the United Nations Satellite Center (UNOSAT) indicate preliminarily that of 793,000 km2 of lands in Pakistan analysed between 1 and 29 August, around 75,000 km2 [9.5%, or 8.5% for all Pakistan, TT] appear to be affected by floodwaters…
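The bracketed percentages are easy to verify from the quoted figures; here is a quick check (the total land-area figure of roughly 882,000 km2 is my own approximation, not taken from the OCHA report):

```python
# Quick check of the UNOSAT figures quoted above.
flooded_km2 = 75_000      # satellite-detected flood extent, 1-29 August
analysed_km2 = 793_000    # area of Pakistan analysed by UNOSAT
total_km2 = 882_000       # approximate total area of Pakistan (my figure, not OCHA's)

print(f"{flooded_km2 / analysed_km2:.1%} of the analysed area")  # ~9.5%
print(f"{flooded_km2 / total_km2:.1%} of all Pakistan")          # ~8.5%
```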

The ABC formula, as here, is (hat-tip to John Ridgeway at Cliscep.com):

# a heart-rending cameo of suffering mums and kids

# such weather events are “unprecedented”

# global warming’s the cause (no evidence required), the West’s to blame and net-zero’s the solution.

Works every time, as our Teal electees would agree.

Exaggerating the floods also bolsters the ABC’s new campaigning narrative that Australia and the West should hand over squillions to Third World corruptocrats.[1] Those squillions are for alleged loss and damage from global warming allegedly caused by the wealthy West (although China’s emissions now dwarf those of all the West combined.)

Colombia (pop. 52m) at COP27 demanded $US800 billion per year in compensation; that’s $A22,000 per Colombian p.a. for life. About 130 other third-world basket-cases are queued for the West’s money. After the all-night COP27 deliberations, NZ plunked down a laughable $NZ20m, with Germany, Belgium, Denmark and Scotland offering similar small-change amounts. This ABC campaign for taxpayers to fund third-world compensation is as crazed as its CO2 net-zero fantasies and the urged destruction of our power grid by intermittent renewables aka ruinables. You can rely on the tax-fed ABC tribe to put the national interest last.

The BBC, even more woefully woke than the ABC, has nonetheless already corrected its “one third” flooding reports. It did this fix sneakily by interviewing a Dundee University academic, Dr Simon Cook, who set out the correct picture: “The scale of the disaster is huge, but is a third of Pakistan really under water?”

But the US National Oceanic and Atmospheric Administration website still recycles the “one-third” nonsense, once again demonstrating the untrustworthiness of the world’s top climate “authorities”. [2]

For the ABC to properly correct its howlers, it’ll have to tack “We Were Wrong” notes to programs ranging from smarty-pants climate warrior Phillip Adams on Late Night Live, to TV and News, Radio National, and RN Breakfast, which said, “It [fund-raising] comes as new satellite images confirm a third of the country is underwater.”

The ABC’s most shocking abuse of its charter for accuracy and impartiality was on BTN i.e. Behind the News. These are the same ABC educationists who deluge schoolkids with 19 episodes of would-be Aborigine Professor Bruce Pascoe’s blather about pre-colonial indigenous agronomists.[3] BTN is

an educational news program aimed at 10-13 year old kids. It unpacks and explains news and current affairs to young people in a dynamic and creative way. A range of opinions are presented [but none right-of-centre] so students gain a greater awareness of differing points of view [What? Like warming skepticism and border controls?]. Through watching BTN Classroom, students increase their understanding of complex political, economic, environmental and social issues.

I tuned into BTN’s Pakistan floods episode, produced by an Amelia Moseley. After referencing the “one third” meme, an unidentified female voice-over continued,

Hundreds of thousands of people are living in roadside camps, and now Australia’s being urged to increase its donations to Pakistan… Experts say this season (monsoon) is more intense because of climate change.

Oh really? They bring on an unidentified “expert”, a handsome young sciency-looking guy who tells kids, “The warmer climate means you have more intense rainfall and also warmer climate means glaciers are likely to melt and collapse quicker. Both of these things came together in this tragic event.”

Oh did they? Has someone been measuring the rate of glacier “collapse” up in the Himalayas, in real time, and distinguishing alleged increased glacier flows from the worst monsoon downpours since 2010? Pakistan’s Flood Control Agency report of 2020 (p37) said merely that “the increasing temperatures in the northern mountains of the country are likely to result in glacier melting, thereby affecting the flows of Indus River System.” The ABC fails to mention the direct cause of the floods – a triple dose of powerful and natural La Niña events coupled with tropical cyclones.[4]

*********************************************************

New Greenie plan to take gas out of Australian kitchens

At a time when the electricity grid is under great strain from Greenie meddling, this is insane. I personally remember occasions when my gas stove allowed life to go on unimpeded during an electricity blackout. These days I have multiple oil lamps in addition to my gas stove. I did use them during a late-night blackout recently.

I have also noted that most chefs seem to prefer gas stoves. They give immediate and visible temperature control.


They’ve been a staple of our kitchens for generations, and it seems Aussies will not give up their gas appliances without a fight.

On Tuesday, property firms Lendlease and GPT Group will come together to help launch the Global Cooksafe Coalition, with plans to phase out gas ovens and stovetops, citing health and environmental concerns.

The property giants have plans to stop installing gas kitchen appliances in new builds in all OECD countries by the end of the decade, and to only do all-electric retrofits in existing properties by 2040.

The campaign has the support of high-profile chefs including Neil Perry, Darren Robertson, Palisa Anderson, Rob Roy Cameron, William Gleave and James Edward Henry. At least one other major Australian property developer is expected to join the Coalition in the next few months, sources said.

But readers have rejected the idea, with a massive majority saying people have the right to use natural gas in their own home.

With more than 1900 readers voting in our online poll by 11.30am AEDT, more than four in five (83 per cent) said they opposed the plan to phase out gas kitchens.

Just 11 per cent said they were in favour of the campaign, while 6 per cent of readers said they were undecided on the issue.

Readers also expressed their opposition to the gas plan in comments, with some labelling it “insane” and “idiotic”.

One reader commented that gas “has been the saviour of many people during the floods when power was out”, and that “we demonise everything these days”.

Some 76 per cent of poll respondents said they cooked with gas at home - slightly higher than the estimated 65-70 per cent of Australians who use gas domestically.

Chef Neil Perry said electric was “definitely the future of cooking” in both homes and commercial kitchens.

“It’s just cleaner, it’s more efficient and it’s definitely more beneficial for the environment. Everything tends to be neater and cleaner without gas,” he said.

Lendlease Global Head of Sustainability Cate Harris said electrification across operations was “essential” for the company to hit its goal of absolute zero carbon emissions by 2040.

“While the transition to electric cooking powered by renewables will take time, it’s already underway at our new commercial development Victoria Cross Tower in Sydney, and we’re looking forward to working alongside our Coalition partners to drive and accelerate industry change,” she said.

Dale O’Toole from GPT said all-electric kitchens “potentially present financial savings in new developments” and suggested moving away from gas would protect owners from having outdated appliances as the transition to renewable energy picks up momentum.

While the Global Cooksafe Coalition targets appliances in the kitchen only – so gas hot water or heating in the home would still be possible – several Australian jurisdictions are aggressively pursuing plans to electrify homes completely.

From next year, ACT infill developments will not be connected to the gas network, while Victoria has plans to take gas out of schools and hospitals, and from 2023 it will drop incentives for gas home appliances.

Why the moves against gas

The moves have been prompted by concerns over the health impacts of gas in the home, as well as the greenhouse emissions caused by natural gas.

Dr Kate Charlesworth from the Climate Council said cooking with gas was estimated to be responsible for up to 12 per cent of the childhood asthma burden in Australia, and a recent California study showed home gas stoves were associated with elevated levels of benzene, a known carcinogen.

***************************************

My other blogs. Main ones below

http://dissectleft.blogspot.com (DISSECTING LEFTISM )

http://edwatch.blogspot.com (EDUCATION WATCH)

http://pcwatch.blogspot.com (POLITICAL CORRECTNESS WATCH)

http://australian-politics.blogspot.com (AUSTRALIAN POLITICS)

http://snorphty.blogspot.com/ (TONGUE-TIED)

http://jonjayray.com/blogall.html More blogs

*****************************************
