Wednesday, June 22, 2016



The great ozone embarrassment

Do you ever wonder why we don't hear much about the ozone hole these days?  There's a reason.  I made some mocking comments about the messed-up talk from Greenies about stratospheric ozone yesterday.  I now want to tell more of the story.

When I searched the net for the numbers about CO2 levels and global temperature, I very rapidly found the numbers nicely set out for both.  So I initially expected that I would have no trouble finding the numbers for atmospheric ozone levels.  I found quite a lot of sites that gave information about that but none of them gave the underlying numbers.  The information was always presented in pretty multi-colored pictures.

That is very strange.  Numbers are food and drink to scientists.  Pictures just cannot give you precision.  So what is going on? Is there a reason for the imprecision?

I think I have eventually found out. The numbers are pretty embarrassing. Ozone levels are at least not rising and may be FALLING. Yet, according to the Ozone-hole enthusiasts, the levels should be rising. When the very expensive Montreal protocol of 1989 was imposed on us, we were told that CFCs were destroying ozone at a dangerous rate (ALL change is dangerous according to Greenies), so if we stopped producing CFCs, the ozone would bounce back and the "hole" over Antarctica would shrink away. So ozone levels should have been RISING for quite a while now.

But the opposite may have happened. I eventually found an official New Zealand statistics site which informed me that "From 1978 to 2013, median monthly ozone concentrations decreased slightly, about 4 percent". And I found another source which put the loss to the year 2000 at 7%.

And the cooling trend in the stratosphere can only reasonably be explained by falling ozone levels.  It's absorption of UV by ozone that keeps the stratosphere warm.  I showed yesterday that the cooling trend cannot be explained by CO2 levels.

Greenies are always cautious about when they expect the ozone hole to close, generally putting it quite a few years in the future. They say, reasonably, that these things oscillate, so the process of ozone recovery must be a gradual one and you need a long series to see a trend. But a level that is DECLINING looks very much like proof of failure.

But I needed those elusive numbers to be certain of what was going on. And I did eventually find them at Mauna Loa. They give almost daily readings up to this year. I looked at the readings for three years: 1996, 2010 and this year. I noted that the readings in all three years varied between around 230 and 270 Dobson units, according to the time of the year. I saw no point in calculating exact averages as it was clear that, at this late stage, when the effects of the CFC ban should long ago have cut in, essentially nothing was happening. The ozone level may not have fallen in recent years but it is not rising either. The predicted rise was not there. The levels just bob up and down in the same old way, within the same old range, year after year.
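For anyone who wants to go beyond eyeballing the charts, the daily Dobson values can be averaged and trended with a few lines of code. This is just a sketch of how such a check could be done, not what I actually ran; the file name and column names are assumptions about how the downloaded data would be saved.

```python
# Sketch only: compute annual ozone statistics and a simple trend from daily
# Dobson-unit readings. "mauna_loa_total_ozone.csv" and its column names are
# hypothetical stand-ins for however the downloaded data are saved.
import numpy as np
import pandas as pd

df = pd.read_csv("mauna_loa_total_ozone.csv", parse_dates=["date"])
df["year"] = df["date"].dt.year

# mean, minimum and maximum total ozone (Dobson units) for each year
annual = df.groupby("year")["ozone_du"].agg(["mean", "min", "max"])
print(annual.loc[[1996, 2010, 2016]])   # the three years discussed above

# least-squares slope of the annual means, in Dobson units per decade
slope_per_decade = np.polyfit(annual.index, annual["mean"], 1)[0] * 10
print(f"trend: {slope_per_decade:+.1f} DU per decade")
```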

So it looks like the Montreal protocol did nothing.  The whole thing seems to have been wholly misconceived. The "science" behind it was apparently wrong.

Yet it was the "success" of the Montreal protocol that inspired the Greenie assault on CO2.  We have paid a big price for that hasty bit of scientific speculation.





Al Gore Might Want to Oppose the Prosecution of Exxon

The Left is heading into dangerous legal waters. In recent months, leftist attorneys general from blue states like New York and Massachusetts have been trying to build a case against Exxon Mobil Corp. Massachusetts AG Maura Healey demanded Exxon hand over 40 years of documents related to the company's climate change research in an effort to build a case that the company committed fraud because it's a "climate change denier." Exxon, of course, is fighting the subpoena, saying handing over mountains of banker boxes infringes on its First Amendment rights.

In effect, the leftist AGs are pushing for the criminalization of dissent. But that cuts both ways. Climate change may or may not be occurring, and the cause — whether it's human industry or the climate's natural cycle — is up for debate. Responding to the prosecution of Exxon, 13 AGs from red states penned a letter pointing out that if the Left wants to prosecute anyone who doesn't believe socialism is the response to warmer weather, global warming activists could be prosecuted for overstating the threat.

“We all understand the need for a healthy environment, but we represent a wide range of viewpoints regarding the extent to which man contributes to climate change and the costs and benefits of any proposed fix,” read the letter headed by Alabama AG Luther Strange and Texas AG Ken Paxton. “Nevertheless, we agree on at least one thing — this is not a question for the courts. Using law enforcement authority to resolve a public policy debate undermines the trust invested in our offices and threatens free speech.”

While the conservative AGs said in the letter they would not mount such a prosecution, the same legal logic could lead to the prosecution of climate change activists who advocate for the redistribution of taxpayer money to green energy companies — like failed solar energy company Solyndra. For example, Al Gore made statements that were demonstrably false in "An Inconvenient Truth" and he's continued to double down on the Chicken Little rhetoric. Is it just a coincidence that he's a senior partner in a venture-capital firm that invests in clean energy technology?

SOURCE  




Climate Change Prediction Fail? What did ‘climate hero’ James Hansen actually predict back in 1986?

The Senate Environment and Public Works Committee held a hearing on June 10 and 11, 1986, to consider the problems of ozone depletion, the greenhouse effect, and climate change. The event featured testimony from numerous researchers who would go on to become major figures in the climate change debate. Among them was James Hansen, who was then a leading climate modeler with NASA's Goddard Institute for Space Studies and who has subsequently been hailed by the Worldwatch Institute as a "climate hero." When the Washington Post ran an article this week marking the 30th anniversary of those hearings, it found the old testimony "eerily familiar" to what climate scientists are saying today. As such, it behooves us to consider how well those 30-year-old predictions turned out.

At the time, the Associated Press reported that Hansen "predicted that global temperatures should be nearly 2 degrees higher in 20 years" and "said the average U.S. temperature has risen from 1 to 2 degrees since 1958 and is predicted to increase an additional 3 or 4 degrees sometime between 2010 and 2020." These increases would occur due to "an expected doubling of atmospheric carbon dioxide by 2040." UPI reported that Hansen had said "temperatures in the United States in the next decade will range from 0.5 degrees Celsius to 2 degrees higher than they were in 1958." Citing the AP report, one skeptical analyst reckoned that Hansen's predictions were off by a factor of 10. Interpreting a different baseline from the news reports, I concluded that Hansen's predictions had in fact barely passed his low-end threshold. Comments from unconvinced readers about my analysis provoked me to find and re-read Hansen's 1986 testimony.

Combing through Hansen's actual testimony finds him pointing to a map showing "global warming in the 1990's as compared to 1958. The scale of warming is shown on the left-hand side. You can see that the warming in most of the United States is about 1/2 C degree to 1 C degree, the patched green color." Later in his testimony, Hansen noted that his institute's climate models projected that "in the region of the United States, the warming 30 years from now is about 1 1/2 degrees C, which is about 3 F." It is not clear from his testimony if the baseline year for the projected increase in temperature is 1958 or 1986, so we'll calculate both.

In Hansen's written testimony, submitted at the hearing, he outlined two scenarios. Scenario A featured rapid increases in both atmospheric greenhouse gases and warming; Scenario B involved declining emissions of greenhouse gas and slower warming. "The warming in Scenario A at most mid-latitude Northern Hemisphere land areas such as the United States is typically 0.5 to 1.0 degree C (1-3 F degrees) for the decade 1990-2000 and 1-2 degree C (2-4 F degrees) for the decade 2010-2020," he wrote.

The National Oceanic and Atmospheric Administration (NOAA) offers a handy Climate at a Glance calculator that allows us to figure out what various temperature trends have been for the U.S. since 1901 and the globe since 1881. So first, what did happen to U.S. temperatures between 1958 and 1986? Inputting January 1958 to January 1986 using a 12-month time scale, the NOAA calculator reports that there was a trend of exactly 0.0 F degrees per decade for that period. Curiously, one finds a significant divergence in the temperature trends depending on which half of the year one examines. The temperature trend over the last half of each of the 28 years considered here is -0.13 F degree per decade. In contrast, the trend for the first half of each year yields an upward trend of +0.29 F degrees.

What happens when considering "global warming in the 1990's as compared to 1958"? Again, the first and second half-year trends are disparate. But using the 12-month time scale, the overall trend is +0.25 F degrees per decade, which would imply an increase of about 1 F degree during that period, or just over ½ C degree.

So what about warming 30 years after 1986—that is, warming up until now? If one interprets Hansen's testimony as implying a 1958 baseline, the trend has been +0.37 F degree per decade, yielding an increase of about 1.85 F degrees, or just over 1 C degree. This is near the low end of his projections. If the baseline is 1986, the increase per decade is +0.34 F degrees, yielding an overall increase of just over 1 F degree, or under 0.6 C degree. With four years left to go, this is way below his projection of a 1 to 2 C degrees warming for this decade.
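The comparisons here rest on simple linear-trend arithmetic: multiply the NOAA per-decade trend by the number of decades, then convert the temperature difference from Fahrenheit to Celsius with a factor of 5/9. A minimal sketch of that arithmetic, using the 1986-baseline figure quoted above (the function names are mine, not NOAA's):

```python
# Sketch of the trend arithmetic used in this article: total change equals the
# per-decade trend times the number of decades; temperature *differences*
# convert from Fahrenheit to Celsius by a factor of 5/9.

def total_change_f(trend_f_per_decade: float, start_year: int, end_year: int) -> float:
    """Total change in degrees F implied by a linear per-decade trend."""
    return trend_f_per_decade * (end_year - start_year) / 10.0

def delta_f_to_c(delta_f: float) -> float:
    """Convert a temperature difference (not an absolute reading) from F to C."""
    return delta_f * 5.0 / 9.0

# Example: the +0.34 F/decade contiguous-U.S. trend quoted above, 1986-2016.
change_f = total_change_f(0.34, 1986, 2016)
print(f"{change_f:.2f} F  =  {delta_f_to_c(change_f):.2f} C")   # ~1.02 F, ~0.57 C
```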

Hansen pretty clearly believed that Scenario A was more likely than Scenario B. And in Scenario A, he predicted warming for "most mid-latitude Northern Hemisphere land areas such as the United States" of "typically 0.5 to 1.0 degree C (1-3 F degrees)" for the decade 1990-2000. According to the NOAA calculator, average temperature in the contiguous U.S. increased between 1990 and 2000 by 1.05 F degree, or about 0.6 C degree.

Hansen's predictions go definitively off the rails when tracking the temperature trend for the contiguous U.S. between 2000 and 2016. Since 2000, according to the NOAA calculator, the average temperature trend has been downward at -0.06 F degree per decade. In other words, no matter what baseline year Hansen meant to use, his projections for temperatures in the U.S. for the second decade of this century are 1 to 3 F degrees too high (so far).

What did Hansen project for global temperatures? He did note that "the natural variability of the temperature in both real world and the model are sufficiently large that we can neither confirm nor refute the modeled greenhouse effect on the basis of current temperature trends." It therefore was impossible to discern a man-made global warming signal in the temperature data from 1958 to 1986. But he added that "by the 1990's the expected warming rises above the noise level. In fact, the model shows that in 20 years, the global warming should reach about 1 degree C, which would be the warmest the Earth has been in the last 100,000 years."

Did it? No. Between 1986 and 2006, according to the NOAA calculator, average global temperature increased at a rate of +0.19 C degree per decade, implying an overall increase of 0.38 C degrees. This is less than half of Hansen's 1 C degree projection for that period. Taking the analysis all the way from 1986 to today, the NOAA calculator reports a global trend of +0.17 C degree per decade, yielding an overall increase of 0.51 C degree.

Hansen did offer some caveats with his projections. Among them: The 4.2 C degree climate sensitivity in his model could be off by a factor of 2; less solar irradiance and more volcanic activity could affect the trends; crucial climate mechanisms might be omitted or poorly simulated in the model. Climate sensitivity is conventionally defined as the amount of warming that would occur as the result of doubling atmospheric carbon dioxide. Three decades later, most researchers agree that Hansen set climate sensitivity way too high and thus predicted increases that were way too much. The extent to which his other caveats apply is still widely debated. For example, do climate models accurately reflect changes in the amount of cloudiness that have occurred over the past century?

The U.N.'s Intergovernmental Panel on Climate Change's 1990 Assessment Report included a chapter on the "Detection of the Greenhouse Gas Effect in the Observations." It proposed that total warming of 1 C degree since the late 19th century might serve as a benchmark for when a firm signal of enhanced global warming had emerged. It also suggested that a further 0.5 C degree warming might be chosen as the threshold for detecting the enhanced greenhouse effect. According to the NOAA calculator, temperatures since 1880 have been increasing at a rate of +0.07 C degree per decade, implying an overall increase of just under 1 C degree as of this year. As noted above, global temperatures have increased by 0.51 C degree since 1986, so perhaps the man-made global warming signal has finally emerged. In fact, Hansen and colleagues suggest just that in a 2016 study.

The upshot: Both the United States and the Earth have warmed at considerably slower pace than Hansen predicted 30 years ago. If the three-decades-old predictions sound eerily familiar, it's because they've been updated. Here's hoping the new predictions will prove as accurate as the old ones.

SOURCE  





Bat Killings by Wind Energy Turbines Continue

Industry plan to reduce deadly effects of blades may not be enough, some scientists say

On a warm summer evening along the ridgetops of West Virginia's Allegheny Mountains, thousands of bats are on the move. They flutter among the treetops, searching for insects to eat and roosts on which to rest. But some of the trees here are really metal towers, with 30-meter-long blades rotating at more than 80 kilometers per hour even in this light breeze. They are electricity-generating wind turbines—a great hope for renewable energy, but dangerous for bats. The flying animals run into spinning blades, or the rapid decrease in air pressure around the turbines can cause bleeding in their lungs. By morning, dozens will lie dead on the ground. Countless more will die at wind turbines elsewhere in the U.S. and Canada, in the forests and fields of the Midwest and the windy prairies of the Great Plains.

Much of this slaughter—the greatest threat to animals that are a vital link in our ecosystem—was supposed to end last year. In 2015, with great fanfare, the American Wind Energy Association (AWEA), a trade group, announced voluntary guidelines to halt turbines at low wind speeds, when bats are most active, which would save lives. Conservationists praised the move.

But some scientists say this promise falls short. The industry plan claims to reduce bat deaths by 30 percent, but holding the blades in check just a little longer could reduce deaths by up to 90 percent or more, a decade of research indicates, and would do so with little additional energy loss. A research review published in January of this year found that wind turbines are, by far, the largest cause of mass bat mortality around the world. White-nose syndrome, the deadly fungal disease that has decimated bat populations throughout the northeastern U.S., came in second. Biologist Cris Hein of the nonprofit group Bat Conservation International says that if the current industry practices continue and wind turbine installation grows, bat populations already weakened by the fungus will crash. Industry has balked at holding the blades still at higher wind speeds, however, saying the energy loss will be larger than scientists claim.

Bats eat insects, saving farmers billions of dollars in pest control each year, but they generally do not get much attention. No one was even looking for bats under turbines until 2003, according to wildlife biologist Ed Arnett, currently a senior scientist at the Theodore Roosevelt Conservation Partnership. But on routine checks for dead hawks and eagles under West Virginia turbines that summer, surveyors found an estimated 2,000 dead bats. The discovery prompted creation of the Bat and Wind Energy Cooperative - a consortium of federal agencies, the wind energy association and Bat Conservation International. The consortium hired Arnett in 2004 to conduct the first major studies of why turbines kill bats and to find solutions.

In what is now considered a classic study at the Casselman Wind Project in Somerset County, Pa., in 2008 and 2009 Arnett "feathered" the blades in the evening hours of bats' critical fall migration period. Feathering involves turning the blades parallel to the wind so the turbines do not rotate. Arnett feathered blades at wind speeds of 5 to 6.5 meters per second, slightly above the cut-in speed—the speed at which the turbines connect with the power grid—now typical in the industry, which is 3.5 to 4 meters per second. Delaying the cut-in speed reduced bat deaths by 44 to 93 percent, depending on the night studied and conditions. And delaying turbine starts until slightly higher wind speeds during this two-month migration period, Arnett estimated, would reduce annual wind energy production by less than 1 percent. A flurry of research by other scientists followed, showing feathered blades and higher cut-in speeds saved more bat lives than other proposed solutions.
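A back-of-the-envelope calculation helps explain why the energy penalty can be so small: under a typical Weibull wind-speed distribution and a cubic power curve, winds below 6.5 meters per second supply only a modest share of a turbine's annual output, and restricting the curtailment to nighttime hours of a two-month migration season shrinks the loss much further. The sketch below is purely illustrative; every parameter is an assumption, not a figure from the Casselman study.

```python
# Illustrative only: estimate what fraction of a turbine's expected annual energy
# comes from winds below a given speed, assuming a Weibull wind climate and an
# idealized cubic power curve. None of these parameters come from the Casselman study.
import numpy as np

def energy_fraction_below(v_limit, v_rated=12.0, v_cutout=25.0,
                          weibull_k=2.0, weibull_c=8.0, n=200_000):
    """Fraction of expected energy produced at wind speeds below v_limit (m/s)."""
    rng = np.random.default_rng(0)
    v = weibull_c * rng.weibull(weibull_k, n)       # sampled wind speeds, m/s
    power = np.clip(v / v_rated, 0.0, 1.0) ** 3     # cubic curve, capped at rated power
    power[v >= v_cutout] = 0.0                      # turbine shuts down above cut-out
    return power[v < v_limit].sum() / power.sum()

print(energy_fraction_below(3.5))   # below a 3.5 m/s cut-in: well under 1% of energy
print(energy_fraction_below(6.5))   # below 6.5 m/s: still a single-digit percentage

# Curtailing only at night during a two-month migration season affects roughly
# 2/12 * 1/2, or about 8%, of the hours in a year, scaling the second figure
# down toward the "less than 1 percent" loss Arnett estimated.
```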

Paul Cryan, a bat biologist with the U.S. Geological Survey and a co-author of the January bat mortality review, praised the industry’s voluntary guidelines as an important first step. But like Cris Hein, he worries about the ongoing impact of turbines on bat populations. “Bats are long-lived and very slow reproducers,” he says. “Their populations rely on very high adult survival rates. That means their populations recover from big losses very slowly.” He questions whether bats can handle such damage year after year.

Defending the wind turbine policy, John Anderson, AWEA’s senior director of Permitting Policy and Environmental Affairs, says the guidelines were just a first move, not necessarily the last. “The initial step was to find that sweet spot between reducing our impact while maintaining energy production levels that make a project economic,” he says.

To date, however, the industry has resisted feathering at speeds higher than what the guidelines recommend. “For every megawatt hour that wind is not operating, that’s a megawatt hour that has to be replaced by a far more impactful form of energy from fossil fuel,” Anderson notes. He maintains that the low energy cost estimated at Casselman does not hold for other locations. “I wish it was 1 percent everywhere,” he says. “But the reality is that you have different wind profiles in different locations, and different costs of energy. So 1 percent in one location may be very inexpensive and in another [it could be] extremely expensive and make or break the difference in a very competitive market.”

Now the U.S. Fish and Wildlife Service (FWS), part of the bat consortium, is weighing in on the debate, and it appears to be following the conservation research. In a draft Habitat Conservation Plan covering eight Midwestern states, the FWS proposes raising turbine cut-in speeds to 5 or 6.5 meters per second to protect three bat species listed (or being considered for listing) under the Endangered Species Act. One such species is the Indiana bat. To date, few other bat species, including those most frequently killed by turbines, are officially listed as endangered. And the FWS can only require action by a wind facility if it has proof that the facility killed an endangered Indiana bat, a difficult task without close monitoring.

Right now, “many, many, many facilities within the range of the Indiana bat” do not participate in any plan, says Rick Amidon, a biologist in the FWS’s Midwest office. The service hopes that a region-wide Habitat Conservation Plan will make it easier for facilities to opt into good conservation practices in advance, before the bodies of endangered species appear under their blades and the FWS takes action. The public comment period for the proposed plan closes July 14.

The situation right now puts Hein and other conservationists in a difficult position. “We see the impact of climate change on bats, and so we’re in favor of renewable energy,” Hein says. “It’s unfortunate that one of those—wind energy—has this negative impact.” He is frustrated that industry has not acted more quickly on existing studies but acknowledges “it’s hard to get an industry to move on anything very rapidly.” In the meantime he and the consortium will keep searching for the ultimate environmental sweet spot.

SOURCE  




Poland severely restricts wind farms

Position of the [Polish] National Institute of Public Health – National Institute of Hygiene on wind farms:

The National Institute of Public Health – National Institute of Hygiene is of the opinion that wind farms situated too close to buildings intended for permanent human occupation may have a negative impact on the well-being and health of the people living in their proximity.

The human health risk factors that the Institute has taken into consideration in its position are as follows:

the emitted noise level and its dependence on the technical specifications of turbines, wind speed as well as the topography and land use around the wind farm,

aerodynamic noise level including infrasound emissions and low-frequency noise components,

the nature of the noise emitted, taking into account its modulation/impulsive/tonal characteristics and the possibility of interference of waves emitted from multiple turbines,

the risk of ice being flung from rotors,

the risk of turbine failure with a rotor blade or its part falling,

the shadow flicker effect,

the electromagnetic radiation level (in the immediate vicinity of turbines),

the probability of sleep disruptions and noise propagation at night,

the level of nuisance and probability of stress and depression symptoms occurring (in consequence of long exposure), related both to noise emissions and to non-acceptance of the noise source.

In the Institute's opinion, the laws and regulations currently in force in Poland (regarding risk factors which, in practice, include only the noise level) are not only inadequate for noise sources such as wind turbines, but they also fail to guarantee a sufficient degree of public health protection. The methodology currently used for environmental impact assessment of wind farms (including human health) is not applicable to wind speeds exceeding 5 m/s. In addition, it does not take into account the full frequency range (in particular, low frequencies) and the nuisance level.

In the Institute's view, owing to the current lack of a comprehensive regulatory framework governing the assessment of health risks related to the operation of wind farms in Poland, an urgent need arises to develop and implement a comprehensive methodology according to which the sufficient distance of wind turbines from human habitation would be determined. The methodology should take into account all the above-mentioned potential risk factors, and its result should reflect the least favourable situation. In addition to landform (natural topography) and land use characteristics, the methodology should also take into consideration the category, type, height and number of turbines at a specific farm, and the location of other wind farms in the vicinity. Similar legislative arrangements aimed at providing for multi-criteria assessment, based on complex numerical algorithms, are already in use around the world.

The Institute is aware of the fact that owing to the diversity of factors and the complicated nature of such an algorithm, its development within a short time period may prove very difficult. Therefore, what seems to be an effective and simpler solution is the prescription of a minimum distance of wind turbines from buildings intended for permanent human occupation. The setback criteria are also a common standard-setting arrangement.

Having regard to the above, until a comprehensive methodology is developed for the assessment of the impact of industrial wind farms on human health, the Institute recommends 2 km as the minimum distance of wind farms from buildings. The recommended value results from a critical assessment of research results published in peer-reviewed scientific periodicals covering all potential risk factors, with the distances for each factor usually falling within the following limits:

0.5-0.7 km, often obtained as a result of calculations, where the noise level (dBA) meets the currently acceptable values (without taking into account adjustments for the impulse/tonal/modulation features of the noise emitted),

1.5-3.0 km, resulting from the noise level, taking into account modulation, low frequencies and infrasound levels,

0.5-1.4 km, related to the risk of turbine failure with a broken rotor blade or its part falling (depending on the size of the piece and its flight profile, rotor speed and turbine type),

0.5-0.8 km, where there is a risk of ice being flung from rotors (depending on the shape and mass of ice, rotor speed and turbine type),

1.0-1.6 km, taking into account the noise nuisance level (between 4% and 35% of the population at 30-45 dBA) for people living in the vicinity of wind farms,

1.4-2.5 km, related to the probability of sleep disruptions (on average, between 4% and 5% of the population at 30-45 dBA),

2.0 km, related to the occurrence of potential psychological effects resulting from substantial landscape changes (based on the case where the wind turbine is a dominant landscape feature and the rotor movement is clearly visible and noticeable to people from any location),

1.2-2.1 km, for the shadow flicker effect (for the average wind turbine height in Poland, including the rotor, of 120 to 210 m).

In forming its opinion, the Institute has also considered the distances of wind farms from buildings recommended by experts, scientists, and central and local government bodies around the world (in most cases from 1.0 to 5.0 km).
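A minimal sketch of the "least favourable situation" logic described above: the setback is set by whichever risk factor demands the greatest distance. The figures used here are just the upper ends of the ranges in the list above and are purely illustrative; the Institute's 2 km recommendation comes from its own critical weighting of the evidence, not from a raw maximum.

```python
# Illustrative sketch of a "least favourable situation" setback rule: take the
# largest distance demanded by any single risk factor. Values are the upper ends
# of the ranges quoted in the list above, not an official algorithm.
worst_case_distance_km = {
    "noise (dBA limit only)": 0.7,
    "noise incl. modulation, low frequency, infrasound": 3.0,
    "rotor blade failure / falling fragments": 1.4,
    "ice throw": 0.8,
    "noise nuisance": 1.6,
    "sleep disruption": 2.5,
    "landscape dominance (psychological effects)": 2.0,
    "shadow flicker": 2.1,
}

critical_factor = max(worst_case_distance_km, key=worst_case_distance_km.get)
print(f"governing factor: {critical_factor}")
print(f"required setback: {worst_case_distance_km[critical_factor]} km")
```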

 SOURCE  





Despite huge investments, renewable energy isn’t winning

For hydrocarbon doomsayers, there’s good news and bad news. In 2015, there were record investments in renewable energy, and record capacity was added, much of it in emerging economies. Yet despite the huge investment, the global share of fossil fuels is not shrinking very fast. Renewables such as wind, solar and geothermal still account for a tiny share of energy production, and there are factors that may inhibit their growth in the next few years.

REN21, the international renewable energy association backed by the United Nations Environment Program, has summarized impressive developments in the sector in 2015. Total investment in renewable power and fuels reached $285.9 billion, an all-time record, and renewable power capacity, including hydropower, increased by 148 gigawatts — another record — to 1.8 terawatts. For the sixth consecutive year, investment in new renewable capacity was higher than in hydrocarbon-burning power plants.

Much of the increase came from the developing world. China was in first place; the U.S. came in second, and added more solar and wind capacity than any other country. Turkey added the most geothermal generation. The narrative about the environmentally conscious rich nations and the laggard poor ones is obsolete; Mauritania invested the biggest share of economic output in sustainable energy in 2015, followed by Honduras, Uruguay and Morocco. Bangladesh is the biggest market for home-based solar systems.

One might think the energy revolution is fast displacing fossil fuels. Not really. Although investment in renewables and in the oil industry are of comparable magnitude — $522 billion was invested in oil last year — sustainable energy is growing from a very low base.


We read about the big successes — Costa Rica with 99 percent of energy generated from renewable sources, Uruguay with 92.8 percent, three German states with most of their energy coming from wind — but weaning the world off fossil fuels is an uphill battle.

One reason is regulators’ understandable fixation on generation. Wind and solar installations are relatively easy to promote: The technology is already there, all governments need to do is subsidize its use by levying additional taxes or “feed-in tariffs.” It’s much harder to set up an equally effective mechanism in transportation, which uses the lion’s share of oil products. Although solar and wind generation is already price-competitive with fossil fuels in many countries, modern electric vehicles are pricey, clunky (yes, even the Teslas) and far behind gas-powered competitors in terms of driving range. It would be an expensive proposition for governments to subsidize them to a degree that would make them popular.

Now, because oil is relatively cheap, the global market is moving toward cars that use more gas, especially SUVs. No wonder global oil consumption grew at the fastest rate in five years in 2015.

This year, the growth is set to continue. And increases in renewables capacity may hit some obstacles soon.

Most of last year’s expansion came from additional wind and solar capacity. Countries such as Germany and Poland added a lot of wind power because their governments are about to end direct subsidies and move to tendering programs, which allow only the lowest bidders to build new power plants. This is fair: European governments nursed sustainable energy producers when it was hard for them to compete with traditional generation on price, and now it’s time for a more market-based approach. The policy shift, however, will probably cause an investment slowdown starting in 2017.

Solar photovoltaic generation has another problem in markets where it has a large, established share, especially in Europe. “The more that solar PV penetrates the electricity system, the harder it is to recoup project costs,” the REN21 report says. “So an important shift is under way: from the race to be cost-competitive with fossil fuels to being able to adequately remunerate solar PV in the market.”

Other markets, too, will eventually reach a point where government support has to be scaled back because it’s harder to justify, and the huge investments of today will become harder to recoup. The current investment and growth rates in renewables are not quite natural, and they are not likely to last. Only major technological breakthroughs in energy storage, both for grids and for vehicles, could ensure another leap in sustainable energy use.

Without such breakthroughs, which will make traditional generation and powertrains vastly inferior to modern ones, demand for fossil fuels will remain strong for decades. The International Energy Agency’s projection for 2040, based on the current growth rate in renewables, has the share of natural gas used in power generation roughly at the same level as today. It doesn’t predict any drops in oil demand.

Those who have predicted the end of the petrostates and permanently low oil prices are in for a long wait. Fortunes will still be made in fossil fuels, and oil dictatorships will probably keep squabbling and menacing their neighbors at least for most of our remaining lifetimes.

SOURCE  





1,500 scientists lift the lid on reproducibility

Survey sheds light on the ‘crisis’ rocking research

More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology [1] and cancer biology [2], found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence.

The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. “At the current time there is no consensus on what reproducibility is or should be.” But just recognizing that is a step forward, he says. “The next step may be identifying what is the problem and to get a consensus.”

Failing to reproduce results is a rite of passage, says Marcus Munafò, a biological psychologist at the University of Bristol, UK, who has a long-standing interest in scientific reproducibility. When he was a student, he says, "I tried to replicate what looked simple from the literature, and wasn't able to. Then I had a crisis of confidence, and then I learned that my experience wasn't uncommon."

The challenge is not to eliminate problems with reproducibility in published work. Being at the cutting edge of science means that sometimes results will not be robust, says Munafò. "We want to be discovering new things but not generating too many false leads."

But sorting discoveries from false leads can be discomfiting. Although the vast majority of researchers in our survey had failed to reproduce an experiment, less than 20% of respondents said that they had ever been contacted by another researcher unable to reproduce their work. Our results are strikingly similar to another online survey of nearly 900 members of the American Society for Cell Biology (see go.nature.com/kbzs2b). That may be because such conversations are difficult. If experimenters reach out to the original researchers for help, they risk appearing incompetent or accusatory, or revealing too much about their own projects.

A minority of respondents reported ever having tried to publish a replication study. When work does not reproduce, researchers often assume there is a perfectly valid (and probably boring) reason. What's more, incentives to publish positive replications are low and journals can be reluctant to publish negative findings. In fact, several respondents who had published a failed replication said that editors and reviewers demanded that they play down comparisons with the original study.

Nevertheless, 24% said that they had been able to publish a successful replication and 13% had published a failed replication. Acceptance was more common than persistent rejection: only 12% reported being unable to publish successful attempts to reproduce others' work; 10% reported being unable to publish unsuccessful attempts.

Survey respondent Abraham Al-Ahmad at the Texas Tech University Health Sciences Center in Amarillo expected a “cold and dry rejection” when he submitted a manuscript explaining why a stem-cell technique had stopped working in his hands. He was pleasantly surprised when the paper was accepted. The reason, he thinks, is because it offered a workaround for the problem.

Others place the ability to publish replication attempts down to a combination of luck, persistence and editors' inclinations. Survey respondent Michael Adams, a drug-development consultant, says that work showing severe flaws in an animal model of diabetes has been rejected six times, in part because it does not reveal a new drug target. By contrast, he says, work refuting the efficacy of a compound to treat Chagas disease was quickly accepted [4].

One-third of respondents said that their labs had taken concrete steps to improve reproducibility within the past five years. Rates ranged from a high of 41% in medicine to a low of 24% in physics and engineering. Free-text responses suggested that redoing the work or asking someone else within a lab to repeat the work is the most common practice. Also common are efforts to beef up the documentation and standardization of experimental methods.

Any of these can be a major undertaking. A biochemistry graduate student in the United Kingdom, who asked not to be named, says that efforts to reproduce work for her lab's projects doubles the time and materials used — in addition to the time taken to troubleshoot when some things invariably don't work. Although replication does boost confidence in results, she says, the costs mean that she performs checks only for innovative projects or unexpected results.

SOURCE  

***************************************

For more postings from me, see  DISSECTING LEFTISM, TONGUE-TIED, EDUCATION WATCH INTERNATIONAL, POLITICAL CORRECTNESS WATCH, FOOD & HEALTH SKEPTIC and AUSTRALIAN POLITICS. Home Pages are   here or   here or   here.  Email me (John Ray) here.  

Preserving the graphics:  Most graphics on this site are hotlinked from elsewhere.  But hotlinked graphics sometimes have only a short life -- as little as a week in some cases.  After that they no longer come up.  From January 2011 on, therefore, I have posted a monthly copy of everything on this blog to a separate site where I can host text and graphics together -- which should make the graphics available even if they are no longer coming up on this site.  See  here or here

*****************************************

