Global surface temperature
In earth science, global surface temperature (GST; sometimes referred to as global mean surface temperature, GMST, or global average surface temperature) is calculated by averaging the temperatures over sea and land. Periods of global cooling and global warming have alternated throughout Earth's history. Reliable series of global temperature measurements began in the 1850–1880 time frame. The average annual temperature increased through 1940, was relatively stable between 1940 and 1975, and since 1975 has increased by roughly 0.15 °C to 0.20 °C per decade, to at least 1.1 °C (1.9 °F) above 1880 levels. The current annual GMST is about 15 °C (59 °F), though monthly temperatures can vary almost 2 °C (4 °F) above or below this figure. Sea levels have risen and fallen sharply during Earth's 4.6-billion-year history. However, recent global sea level rise, driven by increasing global surface temperatures, has exceeded the average rate of the past two to three thousand years. The continuation or acceleration of this trend will cause significant changes in the world's coastlines. Background In the 1860s, physicist John Tyndall recognized the Earth's natural greenhouse effect and suggested that slight changes in atmospheric composition could bring about climatic variations. In 1896, a seminal paper by Swedish scientist Svante Arrhenius first predicted that changes in the levels of carbon dioxide in the atmosphere could substantially alter the surface temperature through the greenhouse effect. Changes in global temperatures over the past century provide evidence for the effects of increasing greenhouse gases. When the climate system reacts to such changes, climate change follows. Measurement of the GST is one of the many lines of evidence supporting the scientific consensus on climate change, which is that humans are causing warming of Earth's climate system. Warming oceans As the Earth's temperature has increased, the ocean has absorbed much of the added heat, with the top 700 meters of ocean warming by about 0.22 °C (0.4 °F) since 1969. Expansion of the warming water, along with melting ice sheets, causes sea level to rise. The distribution of excess heat in the ocean is uneven, with the greatest ocean warming occurring in the Southern Hemisphere and contributing to melting of the underside of Antarctic ice shelves. The warming of sea water is also related to the thinning of ice shelves and sea ice, both of which have a further impact on the Earth's climate system. Finally, ocean warming threatens marine ecosystems and human livelihoods. For example, warm water endangers the health of corals, which in turn endangers the marine communities that depend on corals for shelter and food. Ultimately, people who rely on marine fisheries for their livelihoods and jobs may face the negative effects of ocean warming. Sea surface temperature rose throughout the 20th century and continues to rise. From 1901 to 2015, it increased by an average of 0.13 °F per decade. Sea surface temperature has been consistently higher during the past three decades than at any other time since reliable observations began in 1880. As greenhouse gases trap more energy in the climate system, the ocean absorbs more heat, leading to rising sea surface temperatures and rising sea levels. Changes in ocean temperature and ocean currents brought about by climate change will lead to changes in global climate patterns.
For example, warmer waters may promote the development of stronger storms in the tropics, which may cause property loss and loss of life. Impacts related to sea level rise and severe storms are particularly relevant to coastal communities. Shrinking ice sheets The Antarctic and Greenland ice sheets have decreased in mass at an accelerating rate. Data from NASA's Gravity Recovery and Climate Experiment show that Greenland has lost an average of 286 billion tons of ice per year. The expansion of the warming water and the melting ice sheets cause the sea level to rise. Ice is changing everywhere on Earth. Since 1912, the famous snows of Mount Kilimanjaro have melted by more than 80%. The glaciers in the Garhwal Himalayas in India are retreating so fast that researchers believe most of the central and eastern Himalayan glaciers could virtually disappear by 2035. Arctic sea ice has thinned over the past half century, and its extent has declined by about 10% in the past 30 years. NASA's repeated laser altimeter readings showed that the edge of the Greenland ice sheet was shrinking. Spring freshwater ice in the Northern Hemisphere now breaks up 9 days earlier than it did 150 years ago, while the autumn freeze occurs 10 days later. The melting of frozen ground has caused land subsidence of more than 15 feet (4.6 meters) in parts of Alaska. From the Arctic to Peru, from Switzerland to the equatorial glacier in Manjaya, Indonesia, massive ice fields, monstrous glaciers, and sea ice are disappearing, fast. When the temperature rises and the ice melts, more water flows into the ocean from glaciers and ice caps, and the sea water warms and expands in volume. According to the Intergovernmental Panel on Climate Change (IPCC), this combined effect has played a major role in raising the global average sea level by 4 to 8 inches (10 to 20 centimetres) in the past 100 years. Greenland's meltwater may greatly affect the flow of huge ocean currents known as the Atlantic meridional overturning circulation, or AMOC. Similar to a huge conveyor belt, the AMOC helps transport warm water from tropical regions to the Arctic. Its important role in the global distribution of heat also gives it a significant influence on global weather conditions; the mild climate of places such as Western Europe is largely due to the warm water carried by the AMOC. As fresh water from the melting Greenland ice sheet pours into the ocean, it may slow down this flow. At the same time, studies have shown that melting ice from Antarctica may disrupt the structure of the Southern Ocean. Because the density of fresh water is lower than that of salt water, a large amount of meltwater may not be able to merge with the rest of the ocean, instead forming a layer at the surface. This cold, fresh layer traps heat underneath it and causes deeper layers to heat up. This increases the overall temperature of the ocean, which makes it less able to absorb CO2 from the atmosphere. As a result, more CO2 will remain in the atmosphere, leading to an increase in global warming. Greenhouse effects The gases that cause the greenhouse effect include: Water vapor The most abundant greenhouse gas (GHG), but importantly, it acts as a feedback to the climate. As the Earth's atmosphere warms, water vapor will increase, and so will the possibility of clouds and precipitation, making these some of the most important feedback mechanisms of the greenhouse effect.
These feedbacks have the potential to amplify or dampen warming, depending on the location, altitude, and temperature of the clouds. Carbon dioxide (CO2) Carbon dioxide is a small but very important component of the atmosphere. It is released through natural processes such as respiration and volcanic eruptions, as well as through human activities such as deforestation, land use changes, and the burning of fossil fuels. Since the beginning of the industrial revolution, human activities have increased the atmospheric CO2 concentration by 47%. This is the most important long-term "forcing" of climate change. Methane Methane is emitted during the production and transportation of coal, natural gas, and oil. Methane emissions also originate from the decay of organic waste from livestock and other agricultural activities and from municipal solid waste landfills. Nitrous oxide Nitrous oxide is about 300 times more effective than carbon dioxide at trapping heat, and it also depletes the ozone layer. Since it also has a shorter atmospheric lifetime, reducing its emissions may have a more rapid and significant impact on global warming. However, the biggest source of nitrous oxide is agriculture, especially fertilized soil and animal manure, which makes it more difficult to control. Permafrost is frozen ground that contains ancient soil, sediments, and organic matter from plants and animals. It covers about a quarter of the Northern Hemisphere. As the Arctic heats up about twice as fast as the rest of the world, the permafrost begins to thaw, exposing ancient material to oxygen, and the gases released further exacerbate climate warming. Although nitrous oxide depletes the ozone layer, it is not included in the Montreal Protocol on Substances that Deplete the Ozone Layer, an international treaty designed to restore the ozone layer by phasing out certain substances. Chlorofluorocarbons (CFCs) and Hydrochlorofluorocarbons (HCFCs) These are synthetic compounds of entirely industrial origin used in a variety of applications, but because they contribute to the destruction of the ozone layer, their production and release into the atmosphere are currently widely regulated by international agreements. While CFCs and HCFCs destroy ozone, they also trap heat in the lower atmosphere, leading to global warming and changes in climate and weather. HFCs, which were originally developed to replace CFCs and HCFCs, also absorb and trap infrared radiation, or heat, in the lower atmosphere of the Earth. By the end of this century, the addition of these and other greenhouse gases is expected to raise the Earth's temperature by 2.5 °F (1.4 °C) to 8 °F (4.4 °C). CFCs, HCFCs and HFCs are estimated to account for 11.5% of today's greenhouse gas impact on climate and climate change. See also Global temperature record Instrumental temperature record == References ==
Warming stripes
Warming stripes (sometimes referred to as climate stripes, climate timelines or stripe graphics) are data visualization graphics that use a series of coloured stripes chronologically ordered to visually portray long-term temperature trends. Warming stripes reflect a "minimalist" style, conceived to use colour alone to avoid technical distractions and to intuitively convey global warming trends to non-scientists. The initial concept of visualizing historical temperature data has been extended to involve animation, to visualize sea level rise and predictive climate data, and to visually juxtapose temperature trends with other data such as atmospheric CO2 concentration, global glacier retreat, precipitation, progression of ocean depths, aviation emissions' percentage contribution to global warming, and biodiversity loss. Background, publication and content In May 2016, to make visualizing climate change easier for the general public, University of Reading climate scientist Ed Hawkins created an animated spiral graphic of global temperature change as a function of time, a representation said to have gone viral. Jason Samenow wrote in The Washington Post that the spiral graph was "the most compelling global warming visualization ever made", before it was featured in the opening ceremony of the 2016 Summer Olympics. Separately, by 10 June 2017, Ellie Highwood, also a climate scientist at the University of Reading, had completed a crocheted "global warming blanket" that was inspired by "temperature blankets" representing temperature trends in respective localities. Hawkins provided Highwood with a more user-friendly colour scale to avoid the muted colour differences present in Highwood's blanket. Independently, in November 2015, University of Georgia estuarine scientist Joan Sheldon made a "globally warm scarf" having 400 blue, red and purple rows, but could not contact Hawkins until 2022. Both Highwood and Sheldon credit as their original inspirations "sky blankets" and "sky scarves", which are based on daily sky colours. On 22 May 2018, Hawkins published graphics constituting a chronologically ordered series of blue and red vertical stripes that he called warming stripes. Hawkins, a lead author for the IPCC 6th Assessment Report, received the Royal Society's 2018 Kavli Medal, in part "for actively communicating climate science and its various implications with broad audiences". As described in a BBC article, in the month in which the major meteorological agencies release their annual climate assessments, Hawkins experimented with different ways of rendering the global data and "chanced upon the coloured stripes idea". When he tried out a banner at the Hay Festival, according to the article, Hawkins "knew he'd struck a chord". The National Centre for Atmospheric Science (UK), with which Hawkins is affiliated, states that the stripes "paint a picture of our changing climate in a compelling way. Hawkins swapped out numerical data points for colours which we intuitively react to". Others have called Hawkins' warming stripes "climate stripes" or "climate timelines". Warming stripe graphics are reminiscent of colour field painting, a style prominent in the mid-20th century, which strips out all distractions and uses only colour to convey meaning.
Colour field pioneer artist Barnett Newman said he was "creating images whose reality is self-evident", an ethos that Hawkins is said to have applied to the problem of climate change. Collaborating with Berkeley Earth scientist Robert Rohde, on 17 June 2019 Hawkins published for public use a large set of warming stripes on ShowYourStripes.info. Individualized warming stripe graphics were published for the globe and for most countries, as well as for certain smaller regions such as states in the US or parts of the UK, since different parts of the world are warming more quickly than others. Data sources and data visualization Warming stripe graphics are defined with various parameters, including: source of dataset (meteorological organization); measurement location (global, country, state, etc.); time period (year range, for the horizontal "axis"); temperature range (range of anomaly, or deviation, about a reference or baseline temperature); colour scale (assignment of colours to represent respective ranges of temperature anomaly); and colour choice (shades of blue and red), as well as temperature boundaries (the temperature above which a stripe is red and below which it is blue, determined by an average annual temperature over a "reference period" or "baseline" of usually 30 years). Hawkins' original graphics use the eight most saturated blues and reds from the ColorBrewer 9-class single-hue palettes, which optimize colour palettes for maps and are noted for their colourblind-friendliness. Hawkins said the specific colour choice was an aesthetic decision ("I think they look just right"), also selecting baseline periods to ensure equally dark shades of blue and red for aesthetic balance. A Republik analysis said that "this graphic explains everything in the blink of an eye", attributing its effect mainly to the chosen colors, which "have a magical effect on our brain, (letting) us recognize connections before we have even actively thought about them". The analysis concluded that colors other than blue and red "don't convey the same urgency as (Hawkins') original graphic, in which the colors were used in the classic way: blue=cold, red=warm." ShowYourStripes.info cites the dataset sources Berkeley Earth, NOAA, UK Met Office, MeteoSwiss and DWD (Germany), specifically explaining that the data for most countries come from the Berkeley Earth temperature dataset, while for the US, UK, Switzerland and Germany the data come from the respective national meteorological agencies. For each country-level #ShowYourStripes graphic (Hawkins, June 2019), the average temperature in the 1971–2000 reference period is set as the boundary between blue (cooler) and red (warmer) colours, with the colour scale spanning ±2.6 standard deviations of the annual average temperatures between 1901 and 2000. Hawkins noted that the graphic for the Arctic "broke the colour scale" since it is warming more than twice as fast as the global average. For statistical and geographic reasons, it is expected that graphics for small areas will show more year-to-year variation than those for large regions. Year-to-year changes reflected in graphics for localities result from weather variability, whereas global warming over centuries reflects climate change. The NOAA website warns that the graphics "shouldn't be used to compare the rate of change at one location to another", explaining that "the highest and lowest values on the colour scale may be different at different locations".
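A minimal sketch of how such a per-location colour mapping can be constructed is shown below: the 1971–2000 mean sets the blue/red boundary and the scale spans ±2.6 standard deviations of the 1901–2000 annual means, as described above. This is not Hawkins' or Berkeley Earth's actual code; the synthetic data, the function name, and the use of matplotlib's RdBu_r colormap (a stand-in for the ColorBrewer palette) are illustrative assumptions.

```python
# Minimal sketch of a per-location warming-stripes colour mapping (illustrative only).
import numpy as np
import matplotlib.pyplot as plt

def plot_stripes(years, temps, ref=(1971, 2000), scale=(1901, 2000), n_sigma=2.6):
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    # Boundary between blue and red: the mean over the reference period.
    baseline = temps[(years >= ref[0]) & (years <= ref[1])].mean()
    # Colour scale: +/- n_sigma standard deviations of the 1901-2000 annual means.
    sigma = temps[(years >= scale[0]) & (years <= scale[1])].std()
    anomalies = temps - baseline
    # Map anomalies onto [0, 1] for a diverging blue-red colormap; anomalies beyond
    # the range saturate at the end colours (as happened for the Arctic graphic).
    colours = plt.cm.RdBu_r((anomalies / (n_sigma * sigma) + 1) / 2)
    fig, ax = plt.subplots(figsize=(10, 2))
    ax.bar(years, np.ones_like(years), width=1.0, color=colours)
    ax.set_axis_off()  # no axes or legends, one coloured stripe per year
    return fig

# Illustrative use with synthetic annual means standing in for a real series:
years = np.arange(1901, 2021)
temps = 14.0 + 0.008 * (years - 1901) + np.random.normal(0, 0.15, years.size)
plot_stripes(years, temps)
plt.show()
```

The axes are switched off deliberately, mirroring the minimalist, colour-only design decision described above.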
Further, a certain colour in one graphic will not necessarily correspond to the same temperature in other graphics. A climate change denier generated a warming stripes graphic that misleadingly affixed Northern Hemisphere readings over one period to global readings over another period, and omitted readings for the most recent thirteen years, with some of the data being 29-year smoothed, to give the false impression that recent warming is routine. Calling the graphic "imposter warming stripes", meteorologist Jeff Berardelli described it in January 2020 as "a mishmash of data riddled with gaps and inconsistencies" with an apparent objective to confuse the public. Applications and influence After Hawkins' first publication of warming stripe graphics in May 2018, broadcast meteorologists in multiple countries began to show stripe-decorated neckties, necklaces, pins and coffee mugs on-air, reflecting a growing acceptance of climate science among meteorologists and a willingness to communicate it to audiences. In 2019, the United States House Select Committee on the Climate Crisis used warming stripes in its committee logo, showing horizontally oriented stripes behind a silhouette of the United States Capitol, and three US Senators wore warming stripe lapel pins at the 2020 State of the Union Address. On 17 June 2019, Hawkins initiated a social media campaign with the hashtag #ShowYourStripes that encourages people to download their regions' graphics from ShowYourStripes.info and to post them. The campaign was backed by U.N. Climate Change, the World Meteorological Organization and the Intergovernmental Panel on Climate Change. Called "a new symbol for the climate emergency" by French magazine L'EDN, warming stripes have been applied to knit-it-yourself scarves, a vase, neckties, cufflinks, bath towels, vehicles, and a music festival stage, as well as on the sides of streetcars in Freiburg, Germany, as municipal murals in Córdoba, Spain, Anchorage, Alaska, and Jersey, on face masks during the COVID-19 pandemic, in an action logo of the German soccer club 1. FSV Mainz 05, on the side of the Climate Change Observatory in Valencia, on the side of a power station turbine house in Reading, Berkshire, on tech-themed shirts, on designer dresses, on the uniforms of Reading Football Club, on Leipzig's Sachsen Bridge, on a biomethane-powered bus in Reading, Berkshire, as a stage backdrop at the 2022 Glastonbury Festival, on the racer uniforms and socks and webpage banner of the Climate Classic bicycle race, projected onto the White Cliffs of Dover, and on numerous bridges and towers noted by Climate Central. Remarking that "infiltrating popular culture is a means of triggering a change of attitude that will lead to mass action", Hawkins surmised that making the graphics available for free has encouraged their wider use.
Hawkins further said that any merchandise-related profits are donated to charity. Through a campaign led by the nonprofit Climate Central using the hashtag #MetsUnite, more than 100 TV meteorologists, the scientists with whom most laypeople interact more than any other, featured warming stripes and used the graphics to focus audience attention during broadcasts on summer solstices, beginning in 2018 with the "Stripes for the Solstice" effort. On 24 June 2019, Hawkins tweeted that nearly a million stripe graphics had been downloaded by visitors from more than 180 countries in the course of their first week. In 2018, the German Weather Service's meteorological training journal Promet showed a warming stripes graphic on the cover of the issue titled "Climate Communication". By September 2019, the Met Office, the UK's national weather service, was using both a climate spiral and a warming stripe graphic on its "What is climate change?" webpage. Concurrently, the cover of the 21–27 September 2019 issue of The Economist, dedicated to "The climate issue", showed a warming stripe graphic, as did the cover of The Guardian on the morning of the 20 September 2019 climate strikes. The environmental initiative Scientists for Future (2019) included warming stripes in its logo. The Science Information Service (Germany) noted in December 2019 that warming stripes were a "frequently used motif" in demonstrations by the School strike for the climate and Scientists for Future, and were also on the roof of the German Maritime Museum in Bremerhaven. Also in December 2019, Voilà Information Design said that warming stripes "have replaced the polar bear on a melting iceberg as the icon of the climate crisis". On 18 January 2020, a 20-metre-wide artistic light-show installation of warming stripes was opened at the Gendarmenmarkt in Berlin, with the Berlin-Brandenburg Academy of Sciences building being illuminated in the same way. The cover of the "Climate Issue" (fall 2020) of the Space Science and Engineering Center's Through the Atmosphere journal was a warming stripes graphic, and in June 2021 the WMO used warming stripes to "show climate change is here and now" in its statement that "2021 is a make-or-break year for climate action". The November 2021 UN Climate Change Conference (COP26) exhibited an immersive "climate canopy" sculpture consisting of hanging, blue and red colour-coded, vertical lighted bars with fabric fringes. On 27 September 2019, the Fachhochschule (University of Applied Science) Potsdam announced that warming stripes graphics had won in the science category of an international competition recognising innovative and understandable visualisations of climate change, the jury stating that the graphics make an "impact through their innovative, minimalist design". Hawkins was appointed Member of the Order of the British Empire (MBE) in the 2020 New Year Honours "For services to Climate Science and to Science Communication". In April 2022, textiles from haute couture fashion designer Lucy Tammam featuring warming stripes won the Best Customer Engagement Campaign title in the Sustainable Fashion 2022 awards by Drapers fashion magazine. In October 2022, the front cover of Greta Thunberg's The Climate Book featured warming stripes.
Extensions of warming stripes In 2018, University of Reading post-doctoral research assistant Emanuele Bevacqua juxtaposed vertical-stripe graphics for CO2 concentration and for average global temperature (August), and "circular warming stripes" depicting average global temperature with concentric coloured rings (November). In March 2019, German engineer Alexander Radtke extended Hawkins' historical graphics to show predictions of future warming through the year 2200, a graphic that one commentator described as making the future "a lot more visceral". Radtke bifurcated the graphic to show diverging predictions for different degrees of human action in reducing greenhouse gas emissions. On or before 30 May 2019, UK-based software engineer Kevin Pluck designed animated warming stripes that portray the unfolding of the temperature increase, allowing viewers to experience the change from an earlier stable climate to recent rapid warming. By June 2019, Hawkins vertically stacked hundreds of warming stripe graphics from corresponding world locations and grouped them by continent to form a comprehensive, composite graphic, "Temperature Changes Around the World (1901–2018)". On 1 July 2019, Durham University geography research fellow Richard Selwyn Jones published a Global Glacier Change graphic, modeled after and credited as being inspired by Hawkins' #ShowYourStripes graphics, allowing global warming and global glacier retreat to be visually juxtaposed. Jones followed on 8 July 2019 with a stripe graphic portraying global sea level change using only shades of blue. Separately, NOAA displayed a graphic juxtaposing annual temperatures and precipitation, researchers from the Netherlands used stripe graphics to represent progression of ocean depths, and the Institute of Physics applied the graphic to represent aviation emissions' percentage contribution to global warming. In 2023, University of Derby professor Miles Richardson created sequenced stripes to illustrate biodiversity loss. Critical response Some warned that warming stripes of individual countries or states, taken out of context, could advance the idea that global temperatures are not rising, though research meteorologist J. Marshall Shepherd said that "geographic variations in the graphics offer an outstanding science communication opportunity". Meteorologist and #MetsUnite coordinator Jeff Berardelli said that "local stripe visuals help us tell a nuanced story—the climate is not changing uniformly everywhere". Others say the charts should include axes or legends, though the website FAQ page explains the graphics were "specifically designed to be as simple as possible, and to start conversations... (to) fill a gap and enable communication with minimal scientific knowledge required to understand their meaning". J. Marshall Shepherd, former president of the American Meteorological Society, lauded Hawkins' approach, writing that "it is important not to miss the bigger picture. Science communication to the public has to be different" and commending Hawkins for his "innovative" approach and "outstanding science communication" effort. In The Washington Post, Matthew Cappucci wrote that the "simple graphics ... leave a striking visual impression" and are "an easily accessible way to convey an alarming trend", adding that "warming tendencies are plain as day".
Greenpeace spokesman Graham Thompson remarked that the graphics are "like a really well-designed logo while still being an accurate representation of very important data". CBS News contributor Jeff Berardelli noted that the graphics "aren't based on future projections or model assumptions" in the context of stating that "science is not left or right. It's simply factual." A September 2019 editorial in The Economist hypothesized that "to represent this span of human history (1850–2018) as a set of simple stripes may seem reductive", noting those years "saw world wars, technological innovation, trade on an unprecedented scale and a staggering creation of wealth", but concluded that "those complex histories and the simplifying stripes share a common cause," namely, fossil fuel combustion. Informally, warming stripes have been said to resemble "tie-dyed bar codes" and a "work of art in a gallery". See also Notes References Further reading Hawkins, Ed (2019-09-20). "The story behind the viral graphic that electrified the climate movement". Fast Company. Archived from the original on 2019-09-21. Rennie, Jared. "Annual United States Climate Stripes: Temperature and Precipitation". ArcGIS. North Carolina State University. — clickable map of warming stripes for each county in 48 contiguous US states Windhager, Florian; Schreder, Günther; Mayr, Eva (2019). "On Inconvenient Images: Exploring the Design Space of Engaging Climate Change Visualizations for Public Audiences". Workshop on Visualisation in Environmental Sciences (EnvirVis). The Eurographics Association: 1–8. doi:10.2312/envirvis.20191098. ISBN 978-3-03868086-4. — Survey of climate change visualizations External links ShowYourStripes.info — warming stripes portraying historical data for multiple locations
Instrumental temperature record
The instrumental temperature record is a record of temperatures within Earth's climate based on direct measurement of air temperature and ocean temperature, using thermometers and other thermometry devices. Instrumental temperature records are distinguished from indirect reconstructions using climate proxy data such as from tree rings and ocean sediments. Instrument-based data are collected from thousands of meteorological stations, buoys and ships around the globe. Whilst many heavily populated areas have a high density of measurements, observations are more widely spaced in sparsely populated areas such as polar regions and deserts, as well as over many parts of Africa and South America. Measurements were historically made using mercury or alcohol thermometers which were read manually, but are increasingly made using electronic sensors which transmit data automatically. Records of global average surface temperature are usually presented as anomalies rather than as absolute temperatures. A temperature anomaly is measured against a reference value (also called a baseline period or long-term average). For example, a commonly used baseline period is 1951–1980. The longest-running temperature record is the Central England temperature data series, which starts in 1659. The longest-running quasi-global records start in 1850. Temperatures are also measured in the upper atmosphere using a variety of methods, including radiosondes launched using weather balloons, a variety of satellites, and aircraft. Satellites are used extensively to monitor temperatures in the upper atmosphere but to date have generally not been used to assess temperature change at the surface. In recent decades, global surface temperature datasets have been supplemented by extensive sampling of ocean temperatures at various depths, allowing estimates of ocean heat content. The record shows a rising trend in global average surface temperatures (i.e. global warming) driven by human-induced emissions of greenhouse gases. The global average combined land and ocean surface temperature shows a warming of 1.09 °C (range: 0.95 to 1.20 °C) from 1850–1900 to 2011–2020, based on multiple independently produced datasets.: 5  The trend is faster since the 1970s than in any other 50-year period over at least the last 2000 years.: 8  Within this long-term upward trend, there is short-term variability because of natural internal variability (e.g. ENSO, volcanic eruptions), but record highs have been occurring regularly. Methods Instrumental temperature records are based on direct, instrument-based measurements of air temperature and ocean temperature, unlike indirect reconstructions using climate proxy data such as from tree rings and ocean sediments. The longest-running temperature record is the Central England temperature data series, which starts in 1659. The longest-running quasi-global records start in 1850. Temperatures on other time scales are explained in the global temperature record. "Global temperature" can have different definitions. There is a small difference between air and surface temperatures.: 12  Global record from 1850 The period for which reasonably reliable instrumental records of near-surface temperature exist with quasi-global coverage is generally considered to begin around 1850. Earlier records exist, but with sparser coverage, largely confined to the Northern Hemisphere, and less standardized instrumentation. The temperature data for the record come from measurements from land stations and ships.
On land, temperatures are measured using either electronic sensors or mercury or alcohol thermometers which are read manually, with the instruments being sheltered from direct sunlight using a shelter such as a Stevenson screen. The sea record consists of ships taking sea temperature measurements, mostly from hull-mounted sensors, engine inlets or buckets, and more recently includes measurements from moored and drifting buoys. The land and marine records can be compared. Land and sea measurement and instrument calibration is the responsibility of national meteorological services. Standardization of methods is organized through the World Meteorological Organization (and formerly through its predecessor, the International Meteorological Organization). Most meteorological observations are taken for use in weather forecasts. Centres such as the European Centre for Medium-Range Weather Forecasts show instantaneous maps of their coverage, while the Hadley Centre shows the coverage for the average of the year 2000. Coverage for earlier in the 20th and 19th centuries would be significantly less. While temperature changes vary both in size and direction from one location to another, the numbers from different locations are combined to produce an estimate of a global average change. Absolute temperatures v. anomalies Records of global average surface temperature are usually presented as anomalies rather than as absolute temperatures. A temperature anomaly is measured against a reference value (also called a baseline period or long-term average). For example, a commonly used baseline period is 1951–1980. Therefore, if the average temperature for that time period was 15 °C, and the currently measured temperature is 17 °C, then the temperature anomaly is +2 °C. Temperature anomalies are useful for deriving average surface temperatures because they tend to be highly correlated over large distances (of the order of 1000 km). In other words, anomalies are representative of temperature changes over large areas and distances. By comparison, absolute temperatures vary markedly over even short distances. A dataset based on anomalies will also be less sensitive to changes in the observing network (such as a new station opening in a particularly hot or cold location) than one based on absolute values will be. The Earth's average surface absolute temperature for the 1961–1990 period has been derived by spatial interpolation of average observed near-surface air temperatures from over the land, oceans and sea ice regions, with a best estimate of 14 °C (57.2 °F). The estimate is uncertain, but probably lies within 0.5 °C of the true value. Given the difference in uncertainties between this absolute value and any annual anomaly, it is not valid to add them together to imply a precise absolute value for a specific year.
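A minimal numerical sketch of the anomaly arithmetic just described, and of how values from many locations are combined into a single area-weighted global figure, is given below. The synthetic gridded data, the grid resolution, and the use of NumPy are illustrative assumptions rather than any agency's actual processing code; the final step shows a simple least-squares trend expressed per decade.

```python
# Minimal sketch (illustrative only): anomalies against a 1951-1980 baseline,
# an area-weighted global mean, and a linear trend per decade.
import numpy as np

years = np.arange(1880, 2021)
n_lat, n_lon = 36, 72                       # hypothetical 5-degree grid
lats = np.linspace(-87.5, 87.5, n_lat)

# Synthetic absolute temperatures with shape (year, lat, lon), standing in for
# a real gridded dataset built from station and ship observations.
rng = np.random.default_rng(0)
temps = (14.0 + 0.01 * (years[:, None, None] - 1880)
         + rng.normal(0.0, 0.3, (years.size, n_lat, n_lon)))

# Anomaly: subtract each grid cell's own 1951-1980 mean; e.g. a 17 degC reading
# against a 15 degC baseline mean is a +2 degC anomaly.
base = (years >= 1951) & (years <= 1980)
anomalies = temps - temps[base].mean(axis=0)

# Area weighting: a grid cell's area is proportional to the cosine of its latitude.
weights = np.cos(np.deg2rad(lats))[None, :, None]
global_mean = (anomalies * weights).sum(axis=(1, 2)) / (weights.sum() * n_lon)

# Least-squares linear trend over 1980-2020, reported in degrees per decade.
recent = years >= 1980
slope_per_year = np.polyfit(years[recent], global_mean[recent], 1)[0]
print(f"warming trend since 1980: {10 * slope_per_year:.2f} degC per decade")
```

Because each cell is compared with its own baseline, a new station opening in an unusually warm or cold spot shifts the result far less than it would if absolute temperatures were averaged directly, which is the robustness property noted above.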
Total warming and trends The global average combined land and ocean surface temperature shows a warming of 1.09 °C (range: 0.95 to 1.20 °C) from 1850–1900 to 2011–2020, based on multiple independently produced datasets.: 5  The trend is faster since the 1970s than in any other 50-year period over at least the last 2000 years.: 8  Most of the observed warming occurred in two periods: around 1900 to around 1940 and around 1970 onwards; the cooling/plateau from 1940 to 1970 has been mostly attributed to sulphate aerosol.: 207  Some of the temperature variations over this time period may also be due to ocean circulation patterns. Land air temperatures are rising faster than sea surface temperatures. Land temperatures have warmed by 1.59 °C (range: 1.34 to 1.83 °C) from 1850–1900 to 2011–2020, while sea surface temperatures have warmed by 0.88 °C (range: 0.68 to 1.01 °C) over the same period.: 5  For 1980 to 2020, the linear warming trend for combined land and sea temperatures has been 0.18 °C to 0.20 °C per decade, depending on the data set used.: Table 2.4  It is unlikely that any uncorrected effects from urbanisation, or changes in land use or land cover, have raised global land temperature changes by more than 10%.: 189  However, larger urbanisation signals have been found locally in some rapidly urbanising regions, such as eastern China.: Section 2.3.1.1.3  Warmest periods Warmest years The warmest years in the instrumental temperature record have occurred in the last decade (i.e. 2012–2021). The World Meteorological Organization reported in March 2021 that 2016 and 2020 were the two warmest years in the period since 1850. Each individual year from 2015 onwards has been warmer than any year prior to 2015. In other words, each of the seven years in 2015–2021 was clearly warmer than any pre-2015 year. There is a long-term warming trend, and there is variability about this trend because of natural sources of variability (e.g. ENSO events such as the 2014–2016 El Niño, and volcanic eruptions). Not every year will set a record, but record highs are occurring regularly. While record-breaking years can attract considerable public interest, individual years are less significant than the overall trend. Some climatologists have criticized the attention that the popular press gives to "warmest year" statistics. Based on the NOAA dataset (note that other datasets produce different rankings), the following table lists the global combined land and ocean annually averaged temperature rank and anomaly for each of the 10 warmest years on record. For comparison, the IPCC uses the mean of four different datasets and expresses the data relative to 1850–1900. Although global instrumental temperature records begin only in 1850, reconstructions of earlier temperatures based on climate proxies suggest these recent years may be the warmest for several centuries to millennia, or longer.: 2–6  Warmest decades Numerous drivers have been found to influence annual global mean temperatures. An examination of the average global temperature changes by decade reveals continuing climate change: each of the last four decades has been successively warmer at the Earth's surface than any preceding decade since 1850. The most recent decade (2011–2020) was warmer than any multi-centennial period in the past 11,700 years.: 2–6  The following chart is from NASA data of combined land-surface air and sea-surface water temperature anomalies. Factors influencing global temperature Factors that influence global temperature include: Greenhouse gases trap outgoing radiation, warming the atmosphere, which in turn warms the land (greenhouse effect). El Niño–Southern Oscillation (ENSO): El Niño generally tends to increase global temperatures. La Niña, on the other hand, usually causes years which are cooler than the short-term average. El Niño is the warm phase of the El Niño–Southern Oscillation (ENSO) and La Niña the cold phase. In the absence of other short-term influences such as volcanic eruptions, strong El Niño years are typically 0.1 °C to 0.2 °C warmer than the years immediately preceding and following them, and strong La Niña years 0.1 °C to 0.2 °C cooler. The signal is most prominent in the year in which the El Niño/La Niña ends.
Aerosols and volcanic eruptions: Aerosols diffuse incoming radiation, generally cooling the planet. On a long-term basis, aerosols are primarily of anthropogenic origin, but major volcanic eruptions can produce quantities of aerosols which exceed those from anthropogenic sources over periods of time up to a few years. Volcanic eruptions which are sufficiently large to inject significant quantities of sulphur dioxide into the stratosphere can have a significant global cooling effect for one to three years after the eruption. This effect is most prominent for tropical volcanoes, as the resultant aerosols can spread over both hemispheres. The largest eruptions of the last 100 years, such as the Mount Pinatubo eruption in 1991 and the Mount Agung eruption in 1963–1964, have been followed by years with global mean temperatures 0.1 °C to 0.2 °C below long-term trends at the time. Land use change, such as deforestation, can increase greenhouse gases through the burning of biomass; albedo can also be changed. Incoming solar radiation varies very slightly, with the main variation controlled by the approximately 11-year solar magnetic activity cycle. Robustness of evidence There is a scientific consensus that climate is changing and that greenhouse gases emitted by human activities are the primary driver. The scientific consensus is reflected, for example, by the Intergovernmental Panel on Climate Change (IPCC), an international body which summarizes existing science, and by the U.S. Global Change Research Program. The methods used to derive the principal estimates of global surface temperature trends (HadCRUT3, NOAA and NASA/GISS) are largely independent. Other reports and assessments The U.S. National Academy of Sciences, both in its 2002 report to President George W. Bush and in later publications, has strongly endorsed evidence of an average global temperature increase in the 20th century. The preliminary results of an assessment carried out by the Berkeley Earth Surface Temperature group and made public in October 2011 found that over the past 50 years the land surface warmed by 0.911 °C, and their results mirror those obtained from earlier studies carried out by NOAA, the Hadley Centre and NASA's GISS. The study addressed concerns raised by "skeptics", including the urban heat island effect, "poor" station quality, and the "issue of data selection bias", and found that these effects did not bias the results obtained from the earlier studies. The Berkeley Earth dataset has subsequently been made operational and is now one of the datasets used by the IPCC and WMO in their assessments. Global surface and ocean datasets The National Oceanic and Atmospheric Administration (NOAA) maintains the Global Historical Climatology Network (GHCN-Monthly) database, containing historical temperature, precipitation, and pressure data for thousands of land stations worldwide. NOAA's National Climatic Data Center (NCDC) also maintains a global record of surface temperature measurements since 1880. HadCRUT is a collaboration between the University of East Anglia's Climatic Research Unit and the Hadley Centre for Climate Prediction and Research, and NASA's Goddard Institute for Space Studies maintains GISTEMP. More recently, the Berkeley Earth Surface Temperature dataset has been added. These datasets are updated frequently, and are generally in close agreement. Internal climate variability and global warming One of the issues that has been raised in the media is the view that global warming "stopped in 1998".
This view ignores the presence of internal climate variability. Internal climate variability is a result of complex interactions between components of the climate system, such as the coupling between the atmosphere and ocean. An example of internal climate variability is the El Niño–Southern Oscillation (ENSO). The El Niño in 1998 was particularly strong, possibly one of the strongest of the 20th century, and 1998 was at the time the world's warmest year on record by a substantial margin. Cooling over the 2007 to 2012 period, for instance, was likely driven by internal modes of climate variability such as La Niña. The area of cooler-than-average sea surface temperatures that defines La Niña conditions can push global temperatures downward, if the phenomenon is strong enough. The slowdown in global warming rates over the 1998 to 2012 period is also less pronounced in current generations of observational datasets than in those available at the time, in 2012. The temporary slowing of warming rates ended after 2012, with every year from 2015 onwards warmer than any year prior to 2015, but it is expected that warming rates will continue to fluctuate on decadal timescales through the 21st century.: Box 3.1  Satellite temperature records The most recent climate model simulations give a range of results for changes in global-average temperature. Some models show more warming in the troposphere than at the surface, while a slightly smaller number of simulations show the opposite behaviour. There is no fundamental inconsistency among these model results and observations at the global scale. The satellite records used to show much smaller warming trends for the troposphere, which were considered to disagree with model predictions; however, following revisions to the satellite records, the trends are now similar. Siting of temperature measurement stations The U.S. National Weather Service Cooperative Observer Program has established minimum standards regarding the instrumentation, siting, and reporting of surface temperature stations. The observing systems available are able to detect year-to-year temperature variations such as those caused by El Niño or volcanic eruptions. Another study concluded in 2006 that existing empirical techniques for validating the local and regional consistency of temperature data are adequate to identify and remove biases from station records, and that such corrections allow information about long-term trends to be preserved. A study in 2013 also found that urban bias can be accounted for and that, when all available station data are divided into rural and urban, both temperature sets are broadly consistent. Related research Trends and predictions Each of the seven years in 2015–2021 was clearly warmer than any pre-2015 year, and this trend is expected to continue for some time to come (that is, the 2016 record will be broken before 2026, etc.). A decadal forecast by the World Meteorological Organisation issued in 2021 stated a probability of 40% of having a year above 1.5 °C in the 2021–2025 period. Global warming is very likely to reach 1.0 °C to 1.8 °C by the late 21st century under the very low GHG emissions scenario. In an intermediate scenario, global warming would reach 2.1 °C to 3.5 °C, and 3.3 °C to 5.7 °C under the very high GHG emissions scenario.: SPM-17  These projections are based on climate models in combination with observations.: TS-30  Regional temperature changes The changes in climate are not expected to be uniform across the Earth.
In particular, land areas change more quickly than oceans, and northern high latitudes change more quickly than the tropics. There are three major ways in which global warming will make changes to regional climate: melting ice, changing the hydrological cycle (of evaporation and precipitation) and changing currents in the oceans. See also Atmospheric reanalysis Carbon dioxide in Earth's atmosphere Heat wave List of large-scale temperature reconstructions of the last 2,000 years Satellite temperature measurements – Measurements of atmospheric, land surface or sea temperature by satellites Temperature record of the last 2,000 years Warming stripes References External links GISS Surface Temperature Analysis (GISTEMP) Google Earth interface for CRUTEM4 land temperature data International Surface Temperature Initiative
Paleocene–Eocene thermal maximum
The Paleocene–Eocene thermal maximum (PETM), alternatively "Eocene thermal maximum 1" (ETM1), and formerly known as the "Initial Eocene" or "Late Paleocene thermal maximum", was a time period with a global average temperature rise of more than 5–8 °C across the event. This climate event occurred at the time boundary of the Paleocene and Eocene geological epochs. The exact age and duration of the event are uncertain, but it is estimated to have occurred around 55.5 million years ago (Ma). The associated period of massive carbon release into the atmosphere has been estimated to have lasted from 20,000 to 50,000 years. The entire warm period lasted for about 200,000 years. Global temperatures increased by 5–8 °C. The onset of the Paleocene–Eocene thermal maximum has been linked to volcanism and uplift associated with the North Atlantic Igneous Province, causing extreme changes in Earth's carbon cycle and a significant temperature rise. The period is marked by a prominent negative excursion in carbon stable isotope (δ13C) records from around the globe; more specifically, there was a large decrease in the 13C/12C ratio of marine and terrestrial carbonates and organic carbon. Paired δ13C, δ11B, and δ18O data suggest that ~12,000 Gt of carbon (at least 44,000 Gt CO2e) were released over 50,000 years, averaging 0.24 Gt per year. Stratigraphic sections of rock from this period reveal numerous other changes. Fossil records for many organisms show major turnovers. For example, in the marine realm, a mass extinction of benthic foraminifera, a global expansion of subtropical dinoflagellates, and an appearance of excursion taxa of planktic foraminifera and calcareous nannofossils all occurred during the beginning stages of the PETM. On land, modern mammal orders (including primates) suddenly appear in Europe and in North America. Setting The configuration of oceans and continents was somewhat different during the early Paleogene relative to the present day. The Panama Isthmus did not yet connect North America and South America, and this allowed direct low-latitude circulation between the Pacific and Atlantic Oceans. The Drake Passage, which now separates South America and Antarctica, was closed, and this perhaps prevented thermal isolation of Antarctica. The Arctic was also more restricted. Although various proxies for past atmospheric CO2 levels in the Eocene do not agree in absolute terms, all suggest that levels then were much higher than at present. In any case, there were no significant ice sheets during this time. Earth surface temperatures increased by about 6 °C from the late Paleocene through the early Eocene. Superimposed on this long-term, gradual warming were at least two (and probably more) "hyperthermals". These can be defined as geologically brief (<200,000 year) events characterized by rapid global warming, major changes in the environment, and massive carbon addition. Though not the first within the Cenozoic, the PETM was the most extreme of these hyperthermals. Another hyperthermal clearly occurred at approximately 53.7 Ma, and is now called ETM-2 (also referred to as H-1, or the Elmo event). However, additional hyperthermals probably occurred at about 53.6 Ma (H-2), 53.3 Ma (I-1), 53.2 Ma (I-2) and 52.8 Ma (informally called K, X or ETM-3). The number, nomenclature, absolute ages, and relative global impact of the Eocene hyperthermals are the source of considerable current research.
Whether they only occurred during the long-term warming, and whether they are causally related to apparently similar events in older intervals of the geological record (e.g. the Toarcian turnover of the Jurassic), are open issues. Global warming A study in 2020 estimated the global mean surface temperature (GMST) with 66% confidence during the latest Paleocene (c. 57 Ma) as 22.3–28.3 °C (72.1–82.9 °F), the PETM (56 Ma) as 27.2–34.5 °C (81.0–94.1 °F) and the Early Eocene Climatic Optimum (EECO) (53.3 to 49.1 Ma) as 23.2–29.7 °C (73.8–85.5 °F). Estimates of the amount of average global temperature rise at the start of the PETM range from approximately 3 to 6 °C to between 5 and 8 °C. This warming was superimposed on "long-term" early Paleogene warming, and is based on several lines of evidence. There is a prominent (>1‰) negative excursion in the δ18O of foraminifera shells, both those made in surface and deep ocean water. Because there was little or no polar ice in the early Paleogene, the shift in δ18O very probably signifies a rise in ocean temperature. The temperature rise is also supported by the spread of warmth-loving taxa to higher latitudes, changes in plant leaf shape and size, the Mg/Ca ratios of foraminifera, and the ratios of certain organic compounds, such as TEXH86. Proxy data from Esplugafereda in northeastern Spain show a rapid +8 °C temperature rise, in accordance with existing regional records of marine and terrestrial environments. TEXH86 values indicate that the average sea surface temperature (SST) reached over 36 °C (97 °F) in the tropics during the PETM, enough to cause heat stress even in organisms resistant to extreme thermal stress, such as dinoflagellates, of which a significant number of species went extinct. Oxygen isotope ratios from Tanzania suggest that tropical SSTs may have been even higher, exceeding 40 °C. Low-latitude Indian Ocean Mg/Ca records show seawater at all depths warmed by ~4–5 °C. In the Pacific Ocean, tropical SSTs increased by about 4–5 °C. TEXL86 values from deposits in New Zealand, then located between 50°S and 60°S in the southwestern Pacific, indicate SSTs of 26 °C (79 °F) to 28 °C (82 °F), an increase of over 10 °C (18 °F) from an average of 13 °C (55 °F) to 16 °C (61 °F) at the boundary between the Selandian and Thanetian. The extreme warmth of the southwestern Pacific extended into the Australo-Antarctic Gulf. Sediment core samples from the East Tasman Plateau, then located at a palaeolatitude of ~65°S, show an increase in SSTs from ~26 °C to ~33 °C during the PETM. In the North Sea, SSTs jumped by 10 °C, reaching highs of ~33 °C. Certainly, the central Arctic Ocean was ice-free before, during, and after the PETM. This can be ascertained from the composition of sediment cores recovered during the Arctic Coring Expedition (ACEX) at 87°N on Lomonosov Ridge. Moreover, temperatures increased during the PETM, as indicated by the brief presence of subtropical dinoflagellates and a marked increase in TEX86. The latter record is intriguing, though, because it suggests a 6 °C (11 °F) rise from ~17 °C (63 °F) before the PETM to ~23 °C (73 °F) during the PETM. Assuming the TEX86 record reflects summer temperatures, it still implies much warmer temperatures at the North Pole compared to the present day, but no significant latitudinal amplification relative to surrounding times. The above considerations are important because, in many global warming simulations, high-latitude temperatures increase much more than the global average through an ice–albedo feedback. It may be the case, however, that during the PETM, this feedback was largely absent because of limited polar ice, so temperatures on the Equator and at the poles increased similarly. Notable is the absence of documented greater warming in polar regions compared to other regions. This implies a non-existent ice–albedo feedback, suggesting no sea or land ice was present in the late Paleocene.
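The isotope records discussed in this section and the next are reported in δ (delta) notation, which expresses how far a sample's isotope ratio deviates from that of a reference standard, in parts per thousand (per mil). The formula below states that convention for carbon as a reminder; it is the standard geochemical definition rather than a formula specific to the studies cited here, and δ18O and δ11B are defined analogously with their respective isotope ratios.

```latex
% Standard delta notation for stable carbon isotopes, expressed in per mil:
\[
  \delta^{13}\mathrm{C} =
  \left(
    \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}
         {\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{standard}}}
    - 1
  \right) \times 1000
\]
```

A negative excursion therefore records a fall in the heavy-to-light isotope ratio relative to the standard: for δ18O in foraminiferal carbonate formed when there is little polar ice this is read as warming, and for δ13C it records the addition of 13C-depleted carbon discussed below.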
Precise limits on the global temperature rise during the PETM, and whether this varied significantly with latitude, remain open issues. Oxygen isotope and Mg/Ca measurements of carbonate shells precipitated in surface waters of the ocean are commonly used for reconstructing past temperature; however, both paleotemperature proxies can be compromised at low-latitude locations, because re-crystallization of carbonate on the seafloor renders lower values than when the carbonate formed. On the other hand, these and other temperature proxies (e.g., TEX86) are impacted at high latitudes because of seasonality; that is, the "temperature recorder" is biased toward summer, and therefore higher values, when the production of carbonate and organic carbon occurred. Carbon cycle disturbance Clear evidence for massive addition of 13C-depleted carbon at the onset of the PETM comes from two observations. First, a prominent negative excursion in the carbon isotope composition (δ13C) of carbon-bearing phases characterizes the PETM in numerous (>130) widespread locations from a range of environments. Second, carbonate dissolution marks the PETM in sections from the deep sea. The total mass of carbon injected into the ocean and atmosphere during the PETM remains the source of debate. In theory, it can be estimated from the magnitude of the negative carbon isotope excursion (CIE), from the amount of carbonate dissolution on the seafloor, or ideally from both. However, the shift in the δ13C across the PETM depends on the location and the carbon-bearing phase analyzed. In some records of bulk carbonate, it is about 2‰ (per mil); in some records of terrestrial carbonate or organic matter it exceeds 6‰. Carbonate dissolution also varies throughout different ocean basins. It was extreme in parts of the north and central Atlantic Ocean, but far less pronounced in the Pacific Ocean. With available information, estimates of the carbon addition range from about 2,000 to 7,000 gigatons. Timing of carbon addition and warming The timing of the PETM δ13C excursion is of considerable interest. This is because the total duration of the CIE, from the rapid drop in δ13C through the near recovery to initial conditions, relates to key parameters of the global carbon cycle, and because the onset provides insight into the source of the 13C-depleted CO2. The total duration of the CIE can be estimated in several ways. The iconic sediment interval for examining and dating the PETM is a core recovered in 1987 by the Ocean Drilling Program at Hole 690B at Maud Rise in the South Atlantic Ocean. At this location, the PETM CIE, from start to end, spans about 2 m. Long-term age constraints, through biostratigraphy and magnetostratigraphy, suggest an average Paleogene sedimentation rate of about 1.23 cm per 1,000 years. Assuming a constant sedimentation rate, the entire event, from onset through termination, was therefore estimated at 200,000 years. Subsequently, it was noted that the CIE spanned 10 or 11 subtle cycles in various sediment properties, such as Fe content. Assuming these cycles represent precession, a similar but slightly longer duration was calculated by Rohl et al. 2000.
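As a back-of-envelope restatement of the constant-sedimentation-rate reasoning just described (using the rounded thickness and rate quoted above, so only the order of magnitude is meaningful; the ~200,000-year figure in the text is the published estimate, not the output of this arithmetic):

```latex
% Order-of-magnitude duration implied by a constant sedimentation rate:
\[
  \Delta t \;\approx\; \frac{h_{\mathrm{CIE}}}{\dot{s}}
  \;=\; \frac{200\ \mathrm{cm}}{1.23\ \mathrm{cm}/1000\ \mathrm{yr}}
  \;\approx\; 1.6 \times 10^{5}\ \mathrm{yr}
\]
```

Here h_CIE is the thickness of the excursion interval at Hole 690B and the denominator is the average sedimentation rate; the result is the same order of magnitude as the roughly 170,000- to 200,000-year durations discussed in this section.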
A ~200,000-year duration for the CIE is estimated from models of global carbon cycling: if a massive amount of 13C-depleted CO2 is rapidly injected into the modern ocean or atmosphere and the system is projected into the future, a ~200,000-year CIE results because of slow flushing through quasi-steady-state inputs (weathering and volcanism) and outputs (carbonate and organic carbon). A different study, based on a revised orbital chronology and data from sediment cores in the South Atlantic and the Southern Ocean, calculated a slightly shorter duration of about 170,000 years. Age constraints at several deep-sea sites have been independently examined using 3He contents, assuming the flux of this cosmogenic nuclide is roughly constant over short time periods. This approach also suggests a rapid onset for the PETM CIE (<20,000 years). However, the 3He records support a faster recovery to near initial conditions (<100,000 years) than predicted by flushing via weathering inputs and carbonate and organic outputs. There is other evidence to suggest that warming predated the δ13C excursion by some 3,000 years. Some authors have suggested that the magnitude of the CIE may be underestimated because local processes at many sites caused a large proportion of allochthonous sediment to accumulate in their sedimentary rocks, contaminating and offsetting the isotopic values derived from them. Organic matter degradation by microbes has also been implicated as a source of skewing of carbon isotopic ratios in bulk organic matter. Effects Precipitation The climate would also have become much wetter, with the increase in evaporation rates peaking in the tropics. Deuterium isotopes reveal that much more of this moisture was transported polewards than normal. Warm weather would have predominated as far north as the Polar basin. Finds of fossils of Azolla floating ferns in polar regions indicate subtropical temperatures at the poles. Central China during the PETM hosted dense subtropical forests as a result of the significant increase in rates of precipitation in the region, with average temperatures between 21 °C and 24 °C and mean annual precipitation ranging from 1,396 to 1,997 mm. Very high precipitation is also evidenced in the Cambay Shale Formation of India by the deposition of thick lignitic seams, a consequence of increased soil erosion and organic matter burial. Precipitation rates in the North Sea likewise soared during the PETM. In Cap d'Ailly, in present-day Normandy, a transient dry spell occurred just before the negative CIE, after which much moister conditions predominated, with the local environment transitioning from a closed marsh to an open, eutrophic swamp with frequent algal blooms. Precipitation patterns became highly unstable along the New Jersey Shelf. In the Rocky Mountain Interior, however, precipitation locally declined as the interior of North America became more seasonally arid. East African sites display evidence of aridity punctuated by seasonal episodes of potent precipitation, revealing that the global climate during the PETM was not universally humid. Evidence from Forada in northeastern Italy suggests that arid and humid climatic intervals alternated over the course of the PETM concomitantly with precessional cycles in mid-latitudes, and that overall, net precipitation over the central-western Tethys decreased. Ocean The amount of freshwater in the Arctic Ocean increased, in part due to Northern Hemisphere rainfall patterns, fueled by poleward storm track migrations under global warming conditions.
The flux of freshwater entering the oceans increased drastically during the PETM, and continued for a time after the PETM's termination. Anoxia The PETM generated the only oceanic anoxic event (OAE) of the Cenozoic. In parts of the oceans, especially the North Atlantic Ocean, bioturbation was absent. This may be due to bottom-water anoxia or due to changing ocean circulation patterns changing the temperatures of the bottom water. However, many ocean basins remained bioturbated through the PETM. Iodine to calcium ratios suggest oxygen minimum zones in the oceans expanded vertically and possibly also laterally. Water column anoxia and euxinia was most prevalent in restricted oceanic basins, such as the Arctic and Tethys Oceans. Euxinia struck the epicontinental North Sea Basin as well, as shown by increases in sedimentary uranium, molybdenum, sulphur, and pyrite concentrations, along with the presence of sulphur-bound isorenieratane. The Gulf Coastal Plain was also affected by euxinia.It is possible that during the PETM's early stages, anoxia helped to slow down warming through carbon drawdown via organic matter burial. A pronounced negative lithium isotope excursion in both marine carbonates and local weathering inputs suggests that weathering and erosion rates increased during the PETM, generating an increase in organic carbon burial, which acted as a negative feedback on the PETM's severe global warming. Sea level Along with the global lack of ice, the sea level would have risen due to thermal expansion. Evidence for this can be found in the shifting palynomorph assemblages of the Arctic Ocean, which reflect a relative decrease in terrestrial organic material compared to marine organic matter. Currents At the start of the PETM, the ocean circulation patterns changed radically in the course of under 5,000 years. Global-scale current directions reversed due to a shift in overturning from the Southern Hemisphere to Northern Hemisphere. This "backwards" flow persisted for 40,000 years. Such a change would transport warm water to the deep oceans, enhancing further warming. The major biotic turnover among benthic foraminifera has been cited as evidence of a significant change in deep water circulation. Acidification Ocean acidification occurred during the PETM, causing the calcite compensation depth to shoal. The lysocline marks the depth at which carbonate starts to dissolve (above the lysocline, carbonate is oversaturated): today, this is at about 4 km, comparable to the median depth of the oceans. This depth depends on (among other things) temperature and the amount of CO2 dissolved in the ocean. Adding CO2 initially raises the lysocline, resulting in the dissolution of deep water carbonates. This deep-water acidification can be observed in ocean cores, which show (where bioturbation has not destroyed the signal) an abrupt change from grey carbonate ooze to red clays (followed by a gradual grading back to grey). It is far more pronounced in North Atlantic cores than elsewhere, suggesting that acidification was more concentrated here, related to a greater rise in the level of the lysocline. Corrosive waters may have then spilled over into other regions of the world ocean from the North Atlantic. Model simulations show acidic water accumulation in the deep North Atlantic at the onset of the event. Acidification of deep waters, and the later spreading from the North Atlantic can explain spatial variations in carbonate dissolution. 
In parts of the southeast Atlantic, the lysocline rose by 2 km in just a few thousand years. Evidence from the tropical Pacific Ocean suggests a minimum lysocline shoaling of around 500 m at the time of this hyperthermal. Acidification may have increased the efficiency of transport of photic zone water into the ocean depths, thus partially acting as a negative feedback that retarded the rate of atmospheric carbon dioxide buildup. Also, diminished biocalcification inhibited the removal of alkalinity from the deep ocean, causing an overshoot of calcium carbonate deposition once net calcium carbonate production resumed, helping restore the ocean to its state before the PETM. As a consequence of coccolithophorid blooms enabled by enhanced runoff, carbonate was removed from seawater as the Earth recovered from the negative carbon isotope excursion, thus acting to ameliorate ocean acidification. Life Stoichiometric magnetite (Fe3O4) particles were obtained from PETM-age marine sediments. The study from 2008 found elongate prism and spearhead crystal morphologies, considered unlike any magnetite crystals previously reported, and are potentially of biogenic origin. These biogenic magnetite crystals show unique gigantism, and probably are of aquatic origin. The study suggests that development of thick suboxic zones with high iron bioavailability, the result of dramatic changes in weathering and sedimentation rates, drove diversification of magnetite-forming organisms, likely including eukaryotes. Biogenic magnetites in animals have a crucial role in geomagnetic field navigation. Ocean The PETM is accompanied by significant changes in the diversity of calcareous nannofossils and benthic and planktonic foraminifera. A mass extinction of 35–50% of benthic foraminifera (especially in deeper waters) occurred over the course of ~1,000 years, with the group suffering more during the PETM than during the dinosaur-slaying K-T extinction. At the onset of the PETM, benthic foraminiferal diversity dropped by 30% in the Pacific Ocean, while at Zumaia in what is now Spain, 55% of benthic foraminifera went extinct over the course of the PETM, though this decline was not ubiquitous to all sites; Himalayan platform carbonates show no major change in assemblages of large benthic foraminifera at the onset of the PETM; their decline came about towards the end of the event. A decrease in diversity and migration away from the oppressively hot tropics indicates planktonic foraminifera were adversely affected as well. The Lilliput effect is observed in shallow water foraminifera, possibly as a response to decreased surficial water density or diminished nutrient availability. The nannoplankton genus Fasciculithus went extinct as a result of increased surface water oligotrophy; the genera Sphenolithus, Zygrhablithus, Octolithus suffered badly too. Contrarily, the dinoflagellate Apectodinium bloomed. The fitness of Apectodinium homomorphum stayed constant over the PETM while that of others declined.The deep-sea extinctions are difficult to explain, because many species of benthic foraminifera in the deep-sea are cosmopolitan, and can find refugia against local extinction. General hypotheses such as a temperature-related reduction in oxygen availability, or increased corrosion due to carbonate undersaturated deep waters, are insufficient as explanations. 
Acidification may also have played a role in the extinction of the calcifying foraminifera, and the higher temperatures would have increased metabolic rates, thus demanding a higher food supply. Such a higher food supply might not have materialized, because warming and increased ocean stratification might have led to declining productivity and/or increased remineralization of organic matter in the water column before it reached the benthic foraminifera on the sea floor. The only factor global in extent was an increase in temperature. Regional extinctions in the North Atlantic can be attributed to increased deep-sea anoxia, which could be due to the slowdown of overturning ocean currents, or to the release and rapid oxidation of large amounts of methane. In shallower waters, it is undeniable that increased CO2 levels result in a decreased oceanic pH, which has a profound negative effect on corals. Experiments suggest it is also very harmful to calcifying plankton. However, the strong acids used to simulate the natural increase in acidity which would result from elevated CO2 concentrations may have given misleading results, and the most recent evidence is that coccolithophores (E. huxleyi at least) become more, not less, calcified and abundant in acidic waters. No change in the distribution of calcareous nannoplankton such as the coccolithophores can be attributed to acidification during the PETM. Extinction rates among calcareous nannoplankton increased, but so did origination rates. Acidification did lead to an abundance of heavily calcified algae and weakly calcified forams. The calcareous nannofossil species Neochiastozygus junctus thrived; its success is attributable to enhanced surficial productivity caused by enhanced nutrient runoff. Eutrophication at the onset of the PETM precipitated a decline among K-strategist large foraminifera, though they rebounded during the post-PETM oligotrophy, coevally with the demise of low-latitude corals. A study published in May 2021 concluded that fish thrived in at least some tropical areas during the PETM, based on discovered fish fossils including Mene maculata at Ras Gharib, Egypt. Land Humid conditions caused modern Asian mammals to migrate northward, tracking the shifting climatic belts. Uncertainty remains over the timing and tempo of migration. The increase in mammalian abundance is intriguing. Increased global temperatures may have promoted dwarfing – which may have encouraged speciation. Major dwarfing occurred early in the PETM, with further dwarfing taking place during the middle of the hyperthermal. The dwarfing of various mammal lineages led to further dwarfing in other mammals whose reduction in body size was not directly induced by the PETM. Many major mammalian clades – including hyaenodontids, artiodactyls, perissodactyls, and primates – appeared and spread around the globe 13,000 to 22,000 years after the initiation of the PETM. The diversity of insect herbivory, as measured by the amount and diversity of damage to plants caused by insects, increased during the PETM in correlation with global warming. The ant genus Gesomyrmex radiated across Eurasia during the PETM. As with mammals, soil-dwelling invertebrates are observed to have dwarfed during the PETM. A profound change in terrestrial vegetation across the globe is associated with the PETM. Across all regions, floras from the latest Palaeocene are highly distinct from those of the PETM and the Early Eocene.
Geologic effects Sediment deposition changed significantly at many outcrops and in many drill cores spanning this time interval. During the PETM, sediments are enriched with kaolinite from a detrital source due to denudation (initial processes such as volcanism, earthquakes, and plate tectonics). Increased precipitation and enhanced erosion of older kaolinite-rich soils and sediments may have been responsible for this. Increased weathering from the enhanced runoff formed thick paleosols enriched with carbonate nodules (Microcodium-like), which suggests a semi-arid climate. Unlike during lesser, more gradual hyperthermals, glauconite authigenesis was inhibited. The sedimentological effects of the PETM lagged behind the carbon isotope shifts. In the Tremp-Graus Basin of northern Spain, fluvial systems grew and rates of deposition of alluvial sediments increased with a lag time of around 3,800 years after the PETM. At some marine locations (mostly deep-marine), sedimentation rates must have decreased across the PETM, presumably because of carbonate dissolution on the seafloor; at other locations (mostly shallow-marine), sedimentation rates must have increased across the PETM, presumably because of enhanced delivery of riverine material during the event. Possible causes Discriminating between different possible causes of the PETM is difficult. Temperatures were rising globally at a steady pace, and a mechanism must be invoked to produce an instantaneous spike, which may have been accentuated or catalyzed by positive feedbacks (or the activation of "tipping points"). The biggest aid in disentangling these factors comes from a consideration of the carbon isotope mass balance. We know that the entire exogenic carbon cycle (i.e. the carbon contained within the oceans and atmosphere, which can change on short timescales) underwent a −0.2% to −0.3% perturbation in δ13C, and, by considering the isotopic signatures of other carbon reserves, we can estimate what mass of each reserve would be necessary to produce this effect. The assumption underpinning this approach is that the mass of exogenic carbon was the same in the Paleogene as it is today – something which is very difficult to confirm. Eruption of large kimberlite field Although the cause of the initial warming has been attributed to a massive injection of carbon (CO2 and/or CH4) into the atmosphere, the source of the carbon has yet to be found. The emplacement of a large cluster of kimberlite pipes at ~56 Ma in the Lac de Gras region of northern Canada may have provided the carbon that triggered early warming in the form of exsolved magmatic CO2. Calculations indicate that the estimated 900–1,100 Pg of carbon required for the initial approximately 3 °C of ocean water warming associated with the Paleocene-Eocene thermal maximum could have been released during the emplacement of a large kimberlite cluster. The transfer of warm surface ocean water to intermediate depths led to thermal dissociation of seafloor methane hydrates, providing the isotopically depleted carbon that produced the carbon isotopic excursion. The coeval ages of two other kimberlite clusters in the Lac de Gras field and two other early Cenozoic hyperthermals indicate that CO2 degassing during kimberlite emplacement is a plausible source of the CO2 responsible for these sudden global warming events.
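The carbon isotope mass balance invoked above can be written as a two-member mixing calculation and solved for the mass of added carbon. The sketch below does this for several candidate carbon sources; the exogenic pool size, its initial δ13C, the whole-pool excursion, and the source signatures are round illustrative assumptions, not values taken from any particular study.

```python
# Two-member isotope mass balance: how much carbon with a given delta-13-C signature
# must be added to the exogenic carbon pool to produce an observed excursion (CIE)?
# Mass balance: (M_exo * d_init + M_add * d_src) / (M_exo + M_add) = d_init + CIE,
# solved for M_add. All numbers below are illustrative assumptions, not published values.

def mass_added_gtc(cie_permil: float, d_source: float,
                   m_exo_gtc: float = 40_000.0, d_init: float = 0.0) -> float:
    """Mass of added carbon (GtC) required to shift the pool's delta-13-C by `cie_permil`."""
    d_final = d_init + cie_permil
    return m_exo_gtc * (d_init - d_final) / (d_final - d_source)

if __name__ == "__main__":
    cie = -3.0  # an assumed whole-pool excursion, per mil
    sources = {
        "biogenic methane (~-60 permil)": -60.0,
        "thermogenic methane (~-40 permil)": -40.0,
        "organic matter / peat (~-25 permil)": -25.0,
        "volcanic CO2 (~-5 permil)": -5.0,
    }
    for name, d_src in sources.items():
        print(f"{name}: ~{mass_added_gtc(cie, d_src):,.0f} GtC needed")
    # The lighter (more negative) the source, the less carbon is required to produce
    # the same excursion - the key attraction of the methane hydrate hypothesis.
```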
Volcanic activity North Atlantic Igneous Province One of the leading candidates for the cause of the observed carbon cycle disturbances and global warming is volcanic activity associated with the North Atlantic Igneous Province (NAIP), which is believed to have released more than 10,000 gigatons of carbon during the PETM based on the relatively isotopically heavy values of the initial carbon addition. Mercury anomalies during the PETM point to massive volcanism during the event. On top of that, increases in ∆199Hg show intense volcanism was concurrent with the beginning of the PETM. Osmium isotopic anomalies in Arctic Ocean sediments dating to the PETM have been interpreted as evidence of a volcanic cause of this hyperthermal.Intrusions of hot magma into carbon-rich sediments may have triggered the degassing of isotopically light methane in sufficient volumes to cause global warming and the observed isotope anomaly. This hypothesis is documented by the presence of extensive intrusive sill complexes and thousands of kilometer-sized hydrothermal vent complexes in sedimentary basins on the mid-Norwegian margin and west of Shetland. This hydrothermal venting occurred at shallow depths, enhancing its ability to vent gases into the atmosphere and influence the global climate. Volcanic eruptions of a large magnitude can impact global climate, reducing the amount of solar radiation reaching the Earth's surface, lowering temperatures in the troposphere, and changing atmospheric circulation patterns. Large-scale volcanic activity may last only a few days, but the massive outpouring of gases and ash can influence climate patterns for years. Sulfuric gases convert to sulfate aerosols, sub-micron droplets containing about 75 percent sulfuric acid. Following eruptions, these aerosol particles can linger as long as three to four years in the stratosphere. Furthermore, phases of volcanic activity could have triggered the release of methane clathrates and other potential feedback loops. NAIP volcanism influenced the climatic changes of the time not only through the addition of greenhouse gases but also by changing the bathymetry of the North Atlantic. The connection between the North Sea and the North Atlantic through the Faroe-Shetland Basin was severely restricted, as was its connection to it by way of the English Channel.Later phases of NAIP volcanic activity may have caused the other hyperthermal events of the Early Eocene as well, such as ETM2. Other volcanic activity It has also been suggested that volcanic activity around the Caribbean may have disrupted the circulation of oceanic currents, amplifying the magnitude of climate change. Orbital forcing The presence of later (smaller) warming events of a global scale, such as the Elmo horizon (aka ETM2), has led to the hypothesis that the events repeat on a regular basis, driven by maxima in the 400,000 and 100,000 year eccentricity cycles in the Earth's orbit. Cores from Howard's Tract, Maryland indicate the PETM occurred as a result of an extreme in axial precession during an orbital eccentricity maximum. The current warming period is expected to last another 50,000 years due to a minimum in the eccentricity of the Earth's orbit. Orbital increase in insolation (and thus temperature) would force the system over a threshold and unleash positive feedbacks. 
The orbital forcing hypothesis has been challenged by a study finding the PETM to have coincided with a minimum in the ~400 kyr eccentricity cycle, inconsistent with a proposed orbital trigger for the hyperthermal. Comet impact One theory holds that a 12C-rich comet struck the earth and initiated the warming event. A cometary impact coincident with the P/E boundary can also help explain some enigmatic features associated with this event, such as the iridium anomaly at Zumaia, the abrupt appearance of a localized kaolinitic clay layer with abundant magnetic nanoparticles, and especially the nearly simultaneous onset of the carbon isotope excursion and the thermal maximum. A key feature and testable prediction of a comet impact is that it should produce virtually instantaneous environmental effects in the atmosphere and surface ocean with later repercussions in the deeper ocean. Even allowing for feedback processes, this would require at least 100 gigatons of extraterrestrial carbon. Such a catastrophic impact should have left its mark on the globe. A clay layer of 5-20m thickness on the coastal shelf of New Jersey contained unusual amounts of magnetite, but it was found to have formed 9-18 kyr too late for these magnetic particles to have been a result of a comet's impact, and the particles had a crystal structure which was a signature of magnetotactic bacteria rather than an extraterrestrial origin. However, recent analyses have shown that isolated particles of non-biogenic origin make up the majority of the magnetic particles in the clay sample.A 2016 report in Science describes the discovery of impact ejecta from three marine P-E boundary sections from the Atlantic margin of the eastern U.S., indicating that an extraterrestrial impact occurred during the carbon isotope excursion at the P-E boundary. The silicate glass spherules found were identified as microtektites and microkrystites. Burning of peat The combustion of prodigious quantities of peat was once postulated, because there was probably a greater mass of carbon stored as living terrestrial biomass during the Paleocene than there is today since plants in fact grew more vigorously during the period of the PETM. This theory was refuted, because in order to produce the δ13C excursion observed, over 90 percent of the Earth's biomass would have to have been combusted. However, the Paleocene is also recognized as a time of significant peat accumulation worldwide. A comprehensive search failed to find evidence for the combustion of fossil organic matter, in the form of soot or similar particulate carbon. Enhanced respiration Respiration rates of organic matter increase when temperatures rise. One feedback mechanism proposed to explain the rapid rise in carbon dioxide levels is a sudden, speedy rise in terrestrial respiration rates concordant with global temperature rise initiated by any of the other causes of warming. Methane clathrate release Methane hydrate dissolution has been invoked as a highly plausible causal mechanism for the carbon isotope excursion and warming observed at the PETM. The most obvious feedback mechanism that could amplify the initial perturbation is that of methane clathrates. Under certain temperature and pressure conditions, methane – which is being produced continually by decomposing microbes in sea bottom sediments – is stable in a complex with water, which forms ice-like cages trapping the methane in solid form. 
As temperature rises, the pressure required to keep this clathrate configuration stable increases, so shallow clathrates dissociate, releasing methane gas to make its way into the atmosphere. Since biogenic clathrates have a δ13C signature of −60 ‰ (inorganic clathrates are the still rather large −40 ‰), relatively small masses can produce large δ13C excursions. Further, methane is a potent greenhouse gas as it is released into the atmosphere, so it causes warming, and as the ocean transports this warmth to the bottom sediments, it destabilizes more clathrates.In order for the clathrate hypothesis to be applicable to PETM, the oceans must show signs of having been warmer slightly before the carbon isotope excursion, because it would take some time for the methane to become mixed into the system and δ13C-reduced carbon to be returned to the deep ocean sedimentary record. Up until the 2000s, the evidence suggested that the two peaks were in fact simultaneous, weakening the support for the methane theory. In 2002, a short gap between the initial warming and the δ13C excursion was detected. In 2007, chemical markers of surface temperature (TEX86) had also indicated that warming occurred around 3,000 years before the carbon isotope excursion, although this did not seem to hold true for all cores. However, research in 2005 found no evidence of this time gap in the deeper (non-surface) waters. Moreover, the small apparent change in TEX86 that precede the δ13C anomaly can easily (and more plausibly) be ascribed to local variability (especially on the Atlantic coastal plain, e.g. Sluijs, et al., 2007) as the TEX86 paleo-thermometer is prone to significant biological effects. The δ18O of benthic or planktonic forams does not show any pre-warming in any of these localities, and in an ice-free world, it is generally a much more reliable indicator of past ocean temperatures. Analysis of these records reveals another interesting fact: planktonic (floating) forams record the shift to lighter isotope values earlier than benthic (bottom dwelling) forams. The lighter (lower δ13C) methanogenic carbon can only be incorporated into foraminifer shells after it has been oxidised. A gradual release of the gas would allow it to be oxidised in the deep ocean, which would make benthic foraminifera show lighter values earlier. The fact that the planktonic foraminifera are the first to show the signal suggests that the methane was released so rapidly that its oxidation used up all the oxygen at depth in the water column, allowing some methane to reach the atmosphere unoxidised, where atmospheric oxygen would react with it. This observation also allows us to constrain the duration of methane release to under around 10,000 years.However, there are several major problems with the methane hydrate dissociation hypothesis. The most parsimonious interpretation for surface-water forams to show the δ13C excursion before their benthic counterparts (as in the Thomas et al. paper) is that the perturbation occurred from the top down, and not the bottom up. If the anomalous δ13C (in whatever form: CH4 or CO2) entered the atmospheric carbon reservoir first, and then diffused into the surface ocean waters, which mix with the deeper ocean waters over much longer time-scales, we would expect to observe the planktonics shifting toward lighter values before the benthics. Moreover, careful examination of the Thomas et al. 
data set shows that there is not a single intermediate planktonic foram value, implying that the perturbation and attendant δ13C anomaly happened over the lifespan of a single foram – much too fast for the nominal 10,000-year release needed for the methane hypothesis to work. An additional critique of the methane clathrate release hypothesis is that the warming effects of large-scale methane release would not be sustainable for more than a millennium. Thus, proponents of this line of criticism suggest that methane clathrate release could not have been the main driver of the PETM, which lasted for 50,000 to 200,000 years. There has been some debate about whether there was a large enough amount of methane hydrate to be a major carbon source; a 2011 paper proposed that this was the case. The present-day global methane hydrate reserve was once considered to be between 2,000 and 10,000 Gt C (billions of tons of carbon), but is now estimated at between 1,500 and 2,000 Gt C. However, because global ocean bottom temperatures were ~6 °C higher than today, implying a much smaller volume of sediment hosting gas hydrate, the global amount of hydrate before the PETM is thought to have been much less than present-day estimates. In a 2006 study, scientists regarded the source of carbon for the PETM as a mystery. A 2011 study, using numerical simulations, suggested that enhanced organic carbon sedimentation and methanogenesis could have compensated for the smaller volume of hydrate stability. A 2016 study, based on reconstructions of atmospheric CO2 content during the PETM's carbon isotope excursion (CIE) using triple oxygen isotope analysis, suggests a massive release of seabed methane into the atmosphere as the driver of climatic changes. The authors also state that a massive release of methane through thermal dissociation of methane hydrate deposits has been the most convincing hypothesis for explaining the CIE ever since it was first identified. In 2019, a study suggested that there was global warming of around 2 degrees several millennia before the PETM, and that this warming had eventually destabilized methane hydrates and caused the increased carbon emission during the PETM, as evidenced by the large increase in barium ocean concentrations (since PETM-era hydrate deposits would also have been rich in barium and would have released it upon their dissociation). In 2022, a study of foraminiferal records reinforced this conclusion, suggesting that the release of CO2 before the PETM was comparable to current anthropogenic emissions in its rate and scope, to the point that there was enough time for a recovery to background levels of warming and ocean acidification in the centuries to millennia between the so-called pre-onset excursion (POE) and the main event (the carbon isotope excursion, or CIE). A 2021 paper further indicated that while the PETM began with a significant intensification of volcanic activity, and while lower-intensity volcanic activity sustained elevated carbon dioxide levels, "at least one other carbon reservoir released significant greenhouse gases in response to initial warming". It was estimated in 2001 that it would take around 2,300 years for an increased temperature to diffuse warmth into the sea bed to a depth sufficient to cause a release of clathrates, although the exact time-frame is highly dependent on a number of poorly constrained assumptions.
Ocean warming due to flooding and pressure changes due to a sea-level drop may have caused clathrates to become unstable and release methane. This can take place over as short of a period as a few thousand years. The reverse process, that of fixing methane in clathrates, occurs over a larger scale of tens of thousands of years. Ocean circulation The large scale patterns of ocean circulation are important when considering how heat was transported through the oceans. Our understanding of these patterns is still in a preliminary stage. Models show that there are possible mechanisms to quickly transport heat to the shallow, clathrate-containing ocean shelves, given the right bathymetric profile, but the models cannot yet match the distribution of data we observe. "Warming accompanying a south-to-north switch in deepwater formation would produce sufficient warming to destabilize seafloor gas hydrates over most of the world ocean to a water depth of at least 1900 m." This destabilization could have resulted in the release of more than 2000 gigatons of methane gas from the clathrate zone of the ocean floor. The timing of changes in ocean circulation with respect to the shift in carbon isotope ratios has been argued to support the proposition that warmer deep water caused methane hydrate release. However, a different study found no evidence of a change in deep water formation, instead suggesting that deepened subtropical subduction rather than subtropical deep water formation occurred during the PETM.Arctic freshwater input into the North Pacific could serve as a catalyst for methane hydrate destabilization, an event suggested as a precursor to the onset of the PETM. Recovery Climate proxies, such as ocean sediments (depositional rates) indicate a duration of ~83 ka, with ~33 ka in the early rapid phase and ~50 ka in a subsequent gradual phase.The most likely method of recovery involves an increase in biological productivity, transporting carbon to the deep ocean. This would be assisted by higher global temperatures and CO2 levels, as well as an increased nutrient supply (which would result from higher continental weathering due to higher temperatures and rainfall; volcanoes may have provided further nutrients). Evidence for higher biological productivity comes in the form of bio-concentrated barium. However, this proxy may instead reflect the addition of barium dissolved in methane. Diversifications suggest that productivity increased in near-shore environments, which would have been warm and fertilized by run-off, outweighing the reduction in productivity in the deep oceans. Another pulse of NAIP volcanic activity may have also played a role in terminating the hyperthermal via a volcanic winter. Comparison with today's climate change Since at least 1997, the Paleocene–Eocene thermal maximum has been investigated in geoscience as an analog to understand the effects of global warming and of massive carbon inputs to the ocean and atmosphere, including ocean acidification. Humans today emit about 10 Gt of carbon (about 37 Gt CO2e) per year, and will have released a comparable amount in about 1,000 years at that rate. A main difference is that during the Paleocene–Eocene thermal maximum, the planet was ice-free, as the Drake Passage had not yet opened and the Central American Seaway had not yet closed. Although the PETM is now commonly held to be a "case study" for global warming and massive carbon emission, the cause, details, and overall significance of the event remain uncertain. 
Rate of carbon addition Model simulations of peak carbon addition to the ocean–atmosphere system during the PETM give a probable range of 0.3–1.7 petagrams of carbon per year (Pg C/yr), which is much slower than the currently observed rate of carbon emissions. One petagram of carbon is equivalent to a gigaton of carbon (GtC); the current rate of carbon injection into the atmosphere is over 10 GtC/yr, a rate much greater than the carbon injection rate that occurred during the PETM. It has been suggested that today's methane emission regime from the ocean floor is potentially similar to that during the PETM. Because the modern rate of carbon release exceeds the PETM's, it is speculated that a PETM-like scenario is the best-case consequence of anthropogenic global warming, with a mass extinction of a magnitude similar to the Cretaceous–Palaeogene extinction event being a worst-case scenario. Similarity of temperatures Professor of Earth and planetary sciences James Zachos notes that IPCC projections for 2300 in the 'business-as-usual' scenario could "potentially bring global temperature to a level the planet has not seen in 50 million years" – during the early Eocene. Some have described the PETM as arguably the best ancient analog of modern climate change. Scientists have investigated the effects of climate change on the chemistry of the oceans by exploring oceanic changes during the PETM. Tipping points A study found that the PETM shows that substantial climate-shifting tipping points in the Earth system exist, which "can trigger release of additional carbon reservoirs and drive Earth's climate into a hotter state". Climate sensitivity Whether climate sensitivity was lower or higher during the PETM than today remains under debate. A 2022 study found that the Eurasian Epicontinental Sea acted as a major carbon sink during the PETM due to its high biological productivity and helped to slow and mitigate the warming, and that the existence of many large epicontinental seas at that time made the Earth's climate less sensitive to forcing by greenhouse gases relative to today, when far fewer epicontinental seas exist. Other research, however, suggests that climate sensitivity was higher during the PETM than today, meaning that sensitivity to greenhouse gas release increases as their concentration in the atmosphere rises.
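The rate comparison in the Rate of carbon addition subsection above can be made concrete with a one-line calculation: dividing the approximate present-day emission rate by the modelled PETM range gives the factor by which the modern injection is faster. The result below is only as good as those round figures.

```python
# Compare the modern rate of carbon release with the modelled peak PETM rate.
# Figures are those quoted above (PETM: 0.3-1.7 PgC/yr; today: ~10 PgC/yr);
# 1 Pg of carbon = 1 Gt of carbon.

PETM_RATE_PGC_PER_YR = (0.3, 1.7)   # modelled range of peak carbon addition
MODERN_RATE_PGC_PER_YR = 10.0       # approximate present-day emissions

low, high = PETM_RATE_PGC_PER_YR
print(f"Modern emissions are roughly {MODERN_RATE_PGC_PER_YR / high:.0f}x to "
      f"{MODERN_RATE_PGC_PER_YR / low:.0f}x the peak PETM carbon-addition rate.")
# -> roughly 6x to 33x, i.e. the modern injection is much faster than the PETM's.
```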
list of global issues
A global issue is a matter of public concern worldwide. This list of global issues presents problems or phenomena affecting people around the world, including but not limited to widespread social issues, economic issues, and environmental issues. Organizations that maintain or have published an official list of global issues include the United Nations and the World Economic Forum. Global catastrophic risks Not all of these risks are independent, because the majority, if not all, of them result from human activity. Biodiversity loss Climate change Destructive artificial intelligence Environmental disaster Nuclear holocaust Pandemic Biotechnology risk Molecular nanotechnology Societal collapse United Nations list The UN has listed issues that it deems to be the most pressing as of 2023: As part of the 2030 Agenda for Sustainable Development, the UN Millennium Development Goals (2000–2015) were superseded by the UN Sustainable Development Goals (2016–2030), which are also known as The Global Goals. There are associated Targets and Indicators for each Global Goal. World Economic Forum list In keeping with its economy-centered view, the World Economic Forum formulated a list of the 10 most pressing issues in 2016: Food security Inclusive growth Future of work/unemployment Climate change Financial crisis of 2007–2008 Future of the internet/Fourth Industrial Revolution Gender equality Global trade and investment and regulatory frameworks Long-term investment/Investment strategy Future healthcare Global environmental issues No single issue can be analysed, treated, or isolated from the others. For example, habitat loss and climate change adversely affect biodiversity. Deforestation and pollution are direct consequences of overpopulation, and both, in turn, affect biodiversity. While overpopulation locally leads to rural flight, this is more than counterbalanced by accelerating urbanization and urban sprawl. Theories such as the world-system theory and the Gaia hypothesis focus on the interdependency of environmental and economic issues. Among the most evident environmental problems are: Overconsumption – a situation in which resource use has outpaced the sustainable capacity of the ecosystem. Overpopulation – too many people for the planet to sustain. Biodiversity loss Deforestation Desertification Global warming/climate change Habitat destruction Holocene extinction Ocean acidification Ozone depletion Pollution Waste and waste disposal Water pollution Resource depletion Urban sprawl
"Issues of resilience, interdependence and growth affect global climate change risk". Carbon Brief. 20 February 2014. Retrieved 18 January 2018. "A list of the most urgent global issues". 80,000 Hours. 30 April 2017. Retrieved 6 June 2018.
el niño–southern oscillation
El Niño–Southern Oscillation (ENSO) is an irregular periodic variation in winds and sea surface temperatures over the tropical eastern Pacific Ocean, affecting the climate of much of the tropics and subtropics. The warming phase of the sea temperature is known as El Niño and the cooling phase as La Niña. The Southern Oscillation is the accompanying atmospheric component, coupled with the sea temperature change: El Niño is accompanied by high air surface pressure in the tropical western Pacific and La Niña with low air surface pressure there. The two periods last several months each and typically occur every few years with varying intensity per period.The two phases relate to the Walker circulation, which was discovered by Gilbert Walker during the early twentieth century. The Walker circulation is caused by the pressure gradient force that results from a high-pressure area over the eastern Pacific Ocean, and a low-pressure system over Indonesia. Weakening or reversal of the Walker circulation (which includes the trade winds) decreases or eliminates the upwelling of cold deep sea water, thus creating El Niño by causing the ocean surface to reach above average temperatures. An especially strong Walker circulation causes La Niña, resulting in cooler ocean temperatures due to increased upwelling. Mechanisms that cause the oscillation remain under study. The extremes of this climate pattern's oscillations cause extreme weather (such as floods and droughts) in many regions of the world. Developing countries dependent upon agriculture and fishing, particularly those bordering the Pacific Ocean, are the most affected. Outline The El Niño–Southern Oscillation is a single climate phenomenon that periodically fluctuates between three phases: Neutral, La Niña or El Niño. La Niña and El Niño are opposite phases which require certain changes to take place in both the ocean and the atmosphere before an event is declared.Normally the northward flowing Humboldt Current brings relatively cold water from the Southern Ocean northwards along South America's west coast to the tropics, where it is enhanced by up-welling taking place along the coast of Peru. Along the equator, trade winds cause the ocean currents in the eastern Pacific to draw water from the deeper ocean to the surface, thus cooling the ocean surface. Under the influence of the equatorial trade winds, this cold water flows westwards along the equator where it is slowly heated by the sun. As a direct result sea surface temperatures in the western Pacific are generally warmer, by about 8–10 °C (14–18 °F) than those in the Eastern Pacific. This warmer area of ocean is a source for convection and is associated with cloudiness and rainfall. During El Niño years the cold water weakens or disappears completely as the water in the Central and Eastern Pacific becomes as warm as the Western Pacific. Walker circulation The Walker circulation is caused by the pressure gradient force that results from a high pressure system over the eastern Pacific Ocean, and a low pressure system over Indonesia. The Walker circulations of the tropical Indian, Pacific, and Atlantic basins result in westerly surface winds in northern summer in the first basin and easterly winds in the second and third basins. As a result, the temperature structure of the three oceans display dramatic asymmetries. The equatorial Pacific and Atlantic both have cool surface temperatures in northern summer in the east, while cooler surface temperatures prevail only in the western Indian Ocean. 
These changes in surface temperature reflect changes in the depth of the thermocline.Changes in the Walker circulation with time occur in conjunction with changes in surface temperature. Some of these changes are forced externally, such as the seasonal shift of the sun into the Northern Hemisphere in summer. Other changes appear to be the result of coupled ocean-atmosphere feedback in which, for example, easterly winds cause the sea surface temperature to fall in the east, enhancing the zonal heat contrast and hence intensifying easterly winds across the basin. These anomalous easterlies induce more equatorial upwelling and raise the thermocline in the east, amplifying the initial cooling by the southerlies. This coupled ocean-atmosphere feedback was originally proposed by Bjerknes. From an oceanographic point of view, the equatorial cold tongue is caused by easterly winds. Were the Earth climate symmetric about the equator, cross-equatorial wind would vanish, and the cold tongue would be much weaker and have a very different zonal structure than is observed today.During non-El Niño conditions, the Walker circulation is seen at the surface as easterly trade winds that move water and air warmed by the sun toward the west. This also creates ocean upwelling off the coasts of Peru and Ecuador and brings nutrient-rich cold water to the surface, increasing fishing stocks. The western side of the equatorial Pacific is characterized by warm, wet, low-pressure weather as the collected moisture is dumped in the form of typhoons and thunderstorms. The ocean is some 60 cm (24 in) higher in the western Pacific as the result of this motion. Sea surface temperature oscillation Within the National Oceanic and Atmospheric Administration in the United States, sea surface temperatures in the Niño 3.4 region, which stretches from the 120th to 170th meridians west longitude astride the equator five degrees of latitude on either side, are monitored. This region is approximately 3,000 kilometres (1,900 mi) to the southeast of Hawaii. The most recent three-month average for the area is computed, and if the region is more than 0.5 °C (0.9 °F) above (or below) normal for that period, then an El Niño (or La Niña) is considered in progress. The United Kingdom's Met Office also uses a several month period to determine ENSO state. When this warming or cooling occurs for only seven to nine months, it is classified as El Niño/La Niña "conditions"; when it occurs for more than that period, it is classified as El Niño/La Niña "episodes". Neutral phase If the temperature variation from climatology is within 0.5 °C (0.9 °F), ENSO conditions are described as neutral. Neutral conditions are the transition between warm and cold phases of ENSO. Ocean temperatures (by definition), tropical precipitation, and wind patterns are near average conditions during this phase. Close to half of all years are within neutral periods. During the neutral ENSO phase, other climate anomalies/patterns such as the sign of the North Atlantic Oscillation or the Pacific–North American teleconnection pattern exert more influence. Warm phase When the Walker circulation weakens or reverses and the Hadley circulation strengthens an El Niño results, causing the ocean surface to be warmer than average, as upwelling of cold water occurs less or not at all offshore northwestern South America. 
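The NOAA-style definition described above reduces to a simple recipe: form three-month running means of Niño 3.4 sea surface temperature anomalies and flag values at or beyond ±0.5 °C. The sketch below applies that recipe to invented monthly anomalies; it illustrates the threshold logic only and is not NOAA's operational procedure.

```python
# Toy classification of ENSO state from monthly Nino 3.4 SST anomalies (degrees C),
# following the definition sketched above: a three-month running mean at or above
# +0.5 C suggests El Nino conditions, at or below -0.5 C La Nina, otherwise neutral.
# The anomaly values below are invented for illustration.

def three_month_means(anomalies):
    """Running three-month means of a monthly anomaly series."""
    return [sum(anomalies[i:i + 3]) / 3.0 for i in range(len(anomalies) - 2)]

def classify(mean_anomaly, threshold=0.5):
    if mean_anomaly >= threshold:
        return "El Nino conditions"
    if mean_anomaly <= -threshold:
        return "La Nina conditions"
    return "neutral"

if __name__ == "__main__":
    nino34_anomalies = [0.1, 0.3, 0.6, 0.9, 1.1, 0.8, 0.4, 0.0, -0.4, -0.7, -0.9, -0.6]
    for i, m in enumerate(three_month_means(nino34_anomalies)):
        print(f"season starting month {i + 1}: mean {m:+.2f} C -> {classify(m)}")
```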
El Niño (, , Spanish pronunciation: [el ˈniɲo]) is associated with a band of warmer than average ocean water temperatures that periodically develops off the Pacific coast of South America. El niño is Spanish for "the child boy", and the capitalized term El Niño refers to the Christ child, Jesus, because periodic warming in the Pacific near South America is usually noticed around Christmas. El Niño accompanies high air surface pressure in the western Pacific. Mechanisms that cause the oscillation remain under study. Cold phase An especially strong Walker circulation causes La Niña, resulting in cooler ocean temperatures in the central and eastern tropical Pacific Ocean due to increased upwelling. La Niña (, Spanish pronunciation: [la ˈniɲa]) is a coupled ocean-atmosphere phenomenon that is the counterpart of El Niño as part of the broader El Niño Southern Oscillation climate pattern. The name La Niña originates from Spanish, meaning "the child girl", analogous to El Niño meaning "the child boy". During a period of La Niña the sea surface temperature across the equatorial eastern central Pacific will be lower than normal by 3–5 °C. In the United States, an appearance of La Niña happens for at least five months of La Niña conditions. However, each country and island nation has a different threshold for what constitutes a La Niña event, which is tailored to their specific interests. The Japan Meteorological Agency for example, declares that a La Niña event has started when the average five month sea surface temperature deviation for the NINO.3 region, is over 0.5 °C (0.90 °F) cooler for 6 consecutive months or longer. Transitional phases Transitional phases at the onset or departure of El Niño or La Niña can also be important factors on global weather by affecting teleconnections. Significant episodes, known as Trans-Niño, are measured by the Trans-Niño index (TNI). Examples of affected short-time climate in North America include precipitation in the Northwest US and intense tornado activity in the contiguous US. Southern Oscillation The Southern Oscillation is the atmospheric component of El Niño. This component is an oscillation in surface air pressure between the tropical eastern and the western Pacific Ocean waters. The strength of the Southern Oscillation is measured by the Southern Oscillation Index (SOI). The SOI is computed from fluctuations in the surface air pressure difference between Tahiti (in the Pacific) and Darwin, Australia (on the Indian Ocean). El Niño episodes have negative SOI, meaning there is lower pressure over Tahiti and higher pressure in Darwin. La Niña episodes have positive SOI, meaning there is higher pressure in Tahiti and lower in Darwin.Low atmospheric pressure tends to occur over warm water and high pressure occurs over cold water, in part because of deep convection over the warm water. El Niño episodes are defined as sustained warming of the central and eastern tropical Pacific Ocean, thus resulting in a decrease in the strength of the Pacific trade winds, and a reduction in rainfall over eastern and northern Australia. La Niña episodes are defined as sustained cooling of the central and eastern tropical Pacific Ocean, thus resulting in an increase in the strength of the Pacific trade winds, and the opposite effects in Australia when compared to El Niño. 
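The Southern Oscillation Index can be computed in several ways; one widely used formulation is the Troup SOI employed by the Australian Bureau of Meteorology, in which the standardized Tahiti-minus-Darwin mean sea level pressure difference is multiplied by 10. The sketch below uses that formulation with invented pressure values and a made-up climatology; a real calculation would standardize against a long base period for the calendar month in question.

```python
# Troup-style Southern Oscillation Index (SOI), as used for example by the Australian
# Bureau of Meteorology: SOI = 10 * (Pdiff - mean(Pdiff)) / stdev(Pdiff), where
# Pdiff = Tahiti MSLP - Darwin MSLP for the month. Pressure values below are invented;
# a real calculation would use a long climatological base period for the mean and stdev.

from statistics import mean, pstdev

def troup_soi(tahiti_hpa, darwin_hpa, clim_diffs_hpa):
    """SOI for one month, standardized against a climatology of monthly pressure differences."""
    pdiff = tahiti_hpa - darwin_hpa
    return 10.0 * (pdiff - mean(clim_diffs_hpa)) / pstdev(clim_diffs_hpa)

if __name__ == "__main__":
    # Invented climatology of Tahiti-minus-Darwin pressure differences for this month (hPa):
    climatology = [1.2, 0.8, 1.5, 0.4, 1.0, 1.8, 0.6, 1.1, 0.9, 1.3]
    soi = troup_soi(tahiti_hpa=1011.0, darwin_hpa=1010.8, clim_diffs_hpa=climatology)
    print(f"SOI = {soi:+.1f}")   # negative values like this are typical of El Nino episodes
```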
Although the Southern Oscillation Index has a long station record going back to the 1800s, its reliability is limited because both Darwin and Tahiti lie well south of the Equator, so the surface air pressure at both locations is less directly related to ENSO. To address this limitation, a new index, the Equatorial Southern Oscillation Index (EQSOI), was created. To generate it, two regions centered on the Equator were delimited: the western one is located over Indonesia and the eastern one over the equatorial Pacific, close to the South American coast. However, data on the EQSOI go back only to 1949. Madden–Julian oscillation The Madden–Julian oscillation (MJO) is the largest element of the intraseasonal (30- to 90-day) variability in the tropical atmosphere, and was discovered by Roland Madden and Paul Julian of the National Center for Atmospheric Research (NCAR) in 1971. It is a large-scale coupling between atmospheric circulation and tropical deep convection. Rather than being a standing pattern like the El Niño Southern Oscillation (ENSO), the MJO is a traveling pattern that propagates eastward at approximately 4 to 8 m/s (14 to 29 km/h; 9 to 18 mph) through the atmosphere above the warm parts of the Indian and Pacific oceans. This overall circulation pattern manifests itself in various ways, most clearly as anomalous rainfall. The wet phase of enhanced convection and precipitation is followed by a dry phase in which thunderstorm activity is suppressed. Each cycle lasts approximately 30–60 days. Because of this pattern, the MJO is also known as the 30- to 60-day oscillation, 30- to 60-day wave, or intraseasonal oscillation. There is strong year-to-year (interannual) variability in MJO activity, with long periods of strong activity followed by periods in which the oscillation is weak or absent. This interannual variability of the MJO is partly linked to the El Niño–Southern Oscillation (ENSO) cycle. In the Pacific, strong MJO activity is often observed 6–12 months prior to the onset of an El Niño episode, but is virtually absent during the maxima of some El Niño episodes, while MJO activity is typically greater during a La Niña episode. Strong events in the Madden–Julian oscillation over a series of months in the western Pacific can speed the development of an El Niño or La Niña but usually do not in themselves lead to the onset of a warm or cold ENSO event. However, observations suggest that the 1982–1983 El Niño developed rapidly during July 1982 in direct response to a Kelvin wave triggered by an MJO event during late May. Further, changes in the structure of the MJO with the seasonal cycle and ENSO might facilitate more substantial impacts of the MJO on ENSO. For example, the surface westerly winds associated with active MJO convection are stronger during advancement toward El Niño and the surface easterly winds associated with the suppressed convective phase are stronger during advancement toward La Niña. Impacts On precipitation Developing countries dependent upon agriculture and fishing, particularly those bordering the Pacific Ocean, are the most affected by ENSO. The effects of El Niño in South America are direct and strong. An El Niño is associated with warm and very wet weather from April to October along the coasts of northern Peru and Ecuador, causing major flooding whenever the event is strong or extreme.
La Niña causes a drop in sea surface temperatures over Southeast Asia and heavy rains over Malaysia, the Philippines, and Indonesia.To the north across Alaska, La Niña events lead to drier than normal conditions, while El Niño events do not have a correlation towards dry or wet conditions. During El Niño events, increased precipitation is expected in California due to a more southerly, zonal, storm track. During La Niña, increased precipitation is diverted into the Pacific Northwest due to a more northerly storm track. During La Niña events, the storm track shifts far enough northward to bring wetter than normal winter conditions (in the form of increased snowfall) to the Midwestern states, as well as hot and dry summers. During the El Niño portion of ENSO, increased precipitation falls along the Gulf coast and Southeast due to a stronger than normal, and more southerly, polar jet stream.In the late winter and spring during El Niño events, drier than average conditions can be expected in Hawaii. On Guam during El Niño years, dry season precipitation averages below normal. However, the threat of a tropical cyclone is over triple what is normal during El Niño years, so extreme shorter duration rainfall events are possible. On American Samoa during El Niño events, precipitation averages about 10 percent above normal, while La Niña events lead to precipitation amounts which average close to 10 percent below normal. ENSO is linked to rainfall over Puerto Rico. During an El Niño, snowfall is greater than average across the southern Rockies and Sierra Nevada mountain range, and is well-below normal across the Upper Midwest and Great Lakes states. During a La Niña, snowfall is above normal across the Pacific Northwest and western Great Lakes. In Western Asia, during the region's November–April rainy season, it was discovered that in the El Niño phase there was increased precipitation, and in the La Niña phase there was a reduced amount of precipitation on average.Although ENSO can dramatically affect precipitation, even severe droughts and rainstorms in ENSO areas are not always deadly. Scholar Mike Davis cites ENSO as responsible for droughts in India and China in the late nineteenth century, but argues that nations in these areas avoided devastating famine during these droughts with institutional preparation and organized relief efforts. On Tehuantepecers The synoptic condition for the Tehuantepecer, a violent mountain-gap wind in between the mountains of Mexico and Guatemala, is associated with high-pressure system forming in Sierra Madre of Mexico in the wake of an advancing cold front, which causes winds to accelerate through the Isthmus of Tehuantepec. Tehuantepecers primarily occur during the cold season months for the region in the wake of cold fronts, between October and February, with a summer maximum in July caused by the westward extension of the Azores-Bermuda high pressure system. Wind magnitude is greater during El Niño years than during La Niña years, due to the more frequent cold frontal incursions during El Niño winters. Tehuantepec winds reach 20 knots (40 km/h) to 45 knots (80 km/h), and on rare occasions 100 knots (190 km/h). The wind's direction is from the north to north-northeast. It leads to a localized acceleration of the trade winds in the region, and can enhance thunderstorm activity when it interacts with the Intertropical Convergence Zone. The effects can last from a few hours to six days. 
On global warming El Niño events cause short-term (approximately one year in length) spikes in global average surface temperature, while La Niña events cause short-term cooling. Therefore, the relative frequency of El Niño compared to La Niña events can affect global temperature trends on decadal timescales. Over the last several decades, the number of El Niño events increased and the number of La Niña events decreased, although observation of ENSO over a much longer period is needed to detect robust changes. Studies of historical data show that the recent El Niño variation is most likely linked to global warming. For example, one of the most recent results indicates that, even after subtracting the positive influence of decadal variation (which is possibly present in the ENSO trend), the amplitude of the ENSO variability in the observed data has still increased, by as much as 60% in the last 50 years. A study published in 2023 by CSIRO researchers found that climate change may have doubled the likelihood of strong El Niño events and increased the likelihood of strong La Niña events ninefold. The study claims it found a consensus between different models and experiments. Future trends in ENSO are uncertain, as different models make different predictions. It may be that the observed phenomenon of more frequent and stronger El Niño events occurs only in the initial phase of global warming, and then (e.g., after the lower layers of the ocean get warmer as well) El Niño will become weaker. It may also be that the stabilizing and destabilizing forces influencing the phenomenon will eventually compensate for each other. More research is needed to provide a better answer to that question. ENSO is considered to be a potential tipping element in Earth's climate and, under global warming, can enhance or alter regional climate extreme events through a strengthened teleconnection. For example, an increase in the frequency and magnitude of El Niño events has triggered warmer than usual temperatures over the Indian Ocean by modulating the Walker circulation. This has resulted in a rapid warming of the Indian Ocean and, consequently, a weakening of the Asian monsoon. On coral bleaching Following the El Niño event of 1997–1998, the Pacific Marine Environmental Laboratory attributed the first large-scale coral bleaching event to the warming waters. On hurricanes Based on modeled and observed accumulated cyclone energy (ACE), El Niño years usually result in less active hurricane seasons in the Atlantic Ocean but favor a shift of tropical cyclone activity into the Pacific Ocean, whereas La Niña years favor above-average hurricane development in the Atlantic and less so in the Pacific basin. Diversity The traditional ENSO (El Niño–Southern Oscillation), also called Eastern Pacific (EP) ENSO, involves temperature anomalies in the eastern Pacific. However, in the 1990s and 2000s, nontraditional ENSO conditions were observed, in which the usual location of the temperature anomaly (Niño 1 and 2) is not affected, but an anomaly arises in the central Pacific (Niño 3.4). The phenomenon is called Central Pacific (CP) ENSO, "dateline" ENSO (because the anomaly arises near the dateline), or ENSO "Modoki" (Modoki is Japanese for "similar, but different"). There are flavors of ENSO additional to the EP and CP types, and some scientists argue that ENSO exists as a continuum, often with hybrid types. The effects of the CP ENSO are different from those of the traditional EP ENSO.
The El Niño Modoki leads to more hurricanes more frequently making landfall in the Atlantic. La Niña Modoki leads to a rainfall increase over northwestern Australia and northern Murray–Darling basin, rather than over the east as in a conventional La Niña. Also, La Niña Modoki increases the frequency of cyclonic storms over Bay of Bengal, but decreases the occurrence of severe storms in the Indian Ocean.The recent discovery of ENSO Modoki has some scientists believing it to be linked to global warming. However, comprehensive satellite data go back only to 1979. More research must be done to find the correlation and study past El Niño episodes. More generally, there is no scientific consensus on how/if climate change might affect ENSO.There is also a scientific debate on the very existence of this "new" ENSO. Indeed, a number of studies dispute the reality of this statistical distinction or its increasing occurrence, or both, either arguing the reliable record is too short to detect such a distinction, finding no distinction or trend using other statistical approaches, or that other types should be distinguished, such as standard and extreme ENSO. Following the asymmetric nature of the warm and cold phases of ENSO, some studies could not identify such distinctions for La Niña, both in observations and in the climate models, but some sources indicate that there is a variation on La Niña with cooler waters on central Pacific and average or warmer water temperatures on both eastern and western Pacific, also showing eastern Pacific Ocean currents going to the opposite direction compared to the currents in traditional La Niñas. Paleoclimate records Different modes of ENSO-like events have been registered in paleoclimatic archives, showing different triggering methods, feedbacks and environmental responses to the geological, atmospheric and oceanographic characteristics of the time. These paleorecords can be used to provide a qualitative basis for conservation practices. See also Recharge Oscillator Effects of the El Niño–Southern Oscillation in Australia Effects of the El Niño–Southern Oscillation in the United States References External links ENSO Outlook: An alert system for the El Niño–Southern Oscillation at BoM Current map of sea surface temperature anomalies in the Pacific Ocean Southern Oscillation Diagnostic Discussion at CPC
global temperature record
The global temperature record shows the fluctuations of the temperature of the atmosphere and the oceans over various spans of time. There are numerous estimates of temperatures since the end of the Pleistocene glaciation, particularly during the current Holocene epoch. Some temperature information is available through geologic evidence, going back millions of years. More recently, information from ice cores covers the period from 800,000 years before the present until now. Studies of the paleoclimate cover the period from 12,000 years ago to the present. Tree rings and measurements from ice cores can give evidence about the global temperature from 1,000–2,000 years before the present until now. The most detailed information exists since 1850, when methodical thermometer-based records began. Modifications of the Stevenson-type screen were made around 1880 to ensure uniform instrument measurements. Geologic evidence (millions of years) On longer time scales, sediment cores show that the cycles of glacials and interglacials are part of a deepening phase within a prolonged ice age that began with the glaciation of Antarctica approximately 40 million years ago. This deepening phase, and the accompanying cycles, largely began approximately 3 million years ago with the growth of continental ice sheets in the Northern Hemisphere. Gradual changes in Earth's climate of this kind have been frequent during the Earth's 4,540-million-year existence and are most often attributed to changes in the configuration of continents and ocean seaways. Ice cores (from 800,000 years before present) Even longer-term records exist for a few sites: the recent Antarctic EPICA core reaches 800 kyr; many others reach more than 100,000 years. The EPICA core covers eight glacial/interglacial cycles. The NGRIP core from Greenland stretches back more than 100 kyr, with 5 kyr in the Eemian interglacial. While the large-scale signals from the cores are clear, there are problems interpreting the detail and connecting the isotopic variation to the temperature signal. Ice core locations The World Paleoclimatology Data Center (WDC) maintains the ice core data files of glaciers and ice caps in polar and low-latitude mountains all over the world. Ice core records from Greenland As a paleothermometer, the ice cores from central Greenland provide a consistent record of surface-temperature changes. According to these records, changes in global climate can be rapid and widespread; warming phases can proceed through relatively simple, rapid steps, whereas cooling requires more preconditions and proceeds more gradually. Greenland also has the clearest ice-core record of abrupt climate changes, and no other records cover the same time interval with equally high time resolution. When scientists examined the gas trapped in the ice core bubbles, they found that the methane concentration in the Greenland ice core is significantly higher than that in Antarctic samples of similar age; the record of changes in this concentration difference between Greenland and Antarctica reveals variations in the latitudinal distribution of methane sources. Increases in methane concentration shown by the Greenland ice core records imply that the global wetland area has changed greatly over past millennia. As a greenhouse gas, methane plays an important role in global warming, so the methane variations recorded in Greenland make a distinctive contribution to the global temperature record.
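The temperature interpretation of polar ice cores rests largely on water-isotope paleothermometry. As a rough illustration of the underlying relation (a simplified sketch, not the calibration of any particular study), the classical approach assumes an approximately linear dependence of the oxygen-isotope ratio of the ice on local mean annual temperature,

\[ \delta^{18}\mathrm{O} \;\approx\; a\,T + b , \]

where the spatial slope a for Greenland is classically of order 0.7‰ per °C. Borehole-temperature and gas-isotope studies indicate that the slope appropriate to changes through time can differ substantially from the spatial one, which is one reason why connecting the isotopic variation to the temperature signal, as noted above, is difficult.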
Ice core records from Antarctica The Antarctic ice sheet originated in the late Eocene. Drilling at Dome Concordia has recovered a record spanning 800,000 years, the longest ice core available in Antarctica, and in recent years new studies have provided older but discrete records. Owing to the uniqueness of the Antarctic ice sheet, Antarctic ice cores record not only global temperature changes but also vast quantities of information about global biogeochemical cycles, climate dynamics and abrupt changes in global climate. Comparison with modern climate records shows that the Antarctic ice core records further confirm polar amplification. Although Antarctica is covered by ice core records, their density is rather low considering the size of the continent, and establishing more drilling sites is a primary goal of current research institutions. Ice core records from low-latitude regions Ice core records from low-latitude regions are not as common as records from polar regions, but they still provide much useful information for scientists. Ice cores in low-latitude regions are usually located at high altitude. The Guliya record, spanning more than 700,000 years, is the longest record from a low-latitude, high-altitude site. These records provided evidence that the Last Glacial Maximum (LGM) was colder in the tropics and subtropics than previously believed, and they also helped scientists confirm that the 20th century was the warmest period of the last 1,000 years. Paleoclimate (from 12,000 years before present) Many estimates of past temperatures have been made over Earth's history. The field of paleoclimatology includes ancient temperature records. As the present article is oriented toward recent temperatures, the focus here is on events since the retreat of the Pleistocene glaciers. The 10,000 years of the Holocene epoch cover most of this period, since the end of the Northern Hemisphere's Younger Dryas millennium-long cooling. The Holocene Climatic Optimum was generally warmer than the 20th century, but numerous regional variations have been noted since the start of the Younger Dryas. Tree rings and ice cores (from 1,000–2,000 years before present) Proxy measurements can be used to reconstruct the temperature record before the historical period. Quantities such as tree ring widths, coral growth, isotope variations in ice cores, ocean and lake sediments, cave deposits, fossils, borehole temperatures, and glacier length records are correlated with climatic fluctuations. From these, proxy temperature reconstructions of the last 2,000 years have been performed for the Northern Hemisphere, and over shorter time scales for the Southern Hemisphere and tropics. Geographic coverage by these proxies is necessarily sparse, and the various proxies differ in their sensitivity to fast fluctuations. For example, tree rings, ice cores, and corals generally show variation on an annual time scale, but borehole reconstructions rely on rates of thermal diffusion, so small-scale fluctuations are washed out. Even the best proxy records contain far fewer observations than the worst periods of the observational record, and the spatial and temporal resolution of the resulting reconstructions is correspondingly coarse. Connecting the measured proxies to the variable of interest, such as temperature or rainfall, is highly non-trivial.
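The calibration step just described can be illustrated with a deliberately minimal sketch: ordinary least-squares regression of a single synthetic proxy series against an overlapping instrumental record, written in Python. All series, numbers and variable names here are invented for illustration; real reconstructions combine many proxies, screen them for climate sensitivity, and propagate uncertainty far more carefully.

    import numpy as np

    # Synthetic example: a 500-year "proxy" series (e.g. a tree-ring width index)
    # and an instrumental temperature record overlapping its final 150 years.
    rng = np.random.default_rng(0)
    years = np.arange(1500, 2000)
    true_temp = 0.3 * np.sin((years - 1500) / 80.0) + 0.002 * (years - 1500)
    proxy = 1.0 + 2.5 * true_temp + rng.normal(0.0, 0.1, years.size)  # proxy tracks temperature, with noise

    instrumental = years >= 1850                 # overlap period with thermometer data
    obs_temp = true_temp[instrumental] + rng.normal(0.0, 0.05, instrumental.sum())

    # Calibrate over the overlap period: temperature ~ a * proxy + b
    a, b = np.polyfit(proxy[instrumental], obs_temp, deg=1)

    # Apply the fitted relation to the whole proxy series to "reconstruct" past temperature
    reconstruction = a * proxy + b
    print(f"slope a = {a:.2f} degC per proxy unit, intercept b = {b:.2f} degC")
    print("mean reconstructed pre-1850 value:", round(float(reconstruction[~instrumental].mean()), 2), "degC")

A reconstruction of this kind would normally also be cross-validated against withheld instrumental years to estimate its skill before being applied to the pre-instrumental period.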
In practice, data sets from multiple complementary proxies covering overlapping time periods and areas are reconciled to produce the final reconstructions. Proxy reconstructions extending back 2,000 years have been performed, but reconstructions for the last 1,000 years are supported by more, and higher-quality, independent data sets. These reconstructions indicate that global mean surface temperatures over the last 25 years have been higher than in any comparable period since AD 1600, and probably since AD 900; that there was a Little Ice Age centered on AD 1700; and that there was a Medieval Warm Period centered on AD 1000, although the latter was not a global phenomenon. Indirect historical proxies As well as natural, numerical proxies (tree-ring widths, for example), there exist records from the human historical period that can be used to infer climate variations, including: reports of frost fairs on the Thames; records of good and bad harvests; dates of spring blossom or lambing; extraordinary falls of rain and snow; and unusual floods or droughts. Such records can be used to infer historical temperatures, but generally in a more qualitative manner than natural proxies. Recent evidence suggests that a sudden and short-lived climatic shift occurred between 2200 and 2100 BCE in the region between Tibet and Iceland, with some evidence suggesting a global change. The result was a cooling and a reduction in precipitation. This is believed to be a primary cause of the collapse of the Old Kingdom of Egypt. Satellite and balloon (1950s–present) Weather balloon radiosonde measurements of atmospheric temperature at various altitudes began to provide approximately global coverage in the 1950s. Since December 1978, microwave sounding units on satellites have produced data which can be used to infer temperatures in the troposphere. Several groups have analyzed the satellite data to calculate temperature trends in the troposphere. Both the University of Alabama in Huntsville (UAH) and the private, NASA-funded corporation Remote Sensing Systems (RSS) find an upward trend. For the lower troposphere, UAH found a global average trend of 0.130 degrees Celsius per decade between 1978 and 2019. RSS found a trend of 0.148 degrees Celsius per decade to January 2011. In 2004, scientists found trends of +0.19 degrees Celsius per decade when their method was applied to the RSS dataset. Others found a trend of 0.20 degrees Celsius per decade between 1978 and 2005, since when that dataset has not been updated. Thermometers (1850–present) See also Climate variability and change Global warming (causing recent climate change) CLIWOC (climatological database for the world's oceans) Dendroclimatology References External links Hadley Centre: Global temperature data NASA's Goddard Institute for Space Studies (GISS) — Global Temperature Trends. Surface Temperature Reconstructions for the last 2,000 Years
arctic
The Arctic is a polar region located at the northernmost part of Earth. The Arctic consists of the Arctic Ocean, adjacent seas, and parts of Canada (Yukon, Northwest Territories, Nunavut), Danish Realm (Greenland), northern Finland (Northern Ostrobothnia, Kainuu and Lappi), Iceland, northern Norway (Nordland, Troms, Finnmark, Svalbard and Jan Mayen), Russia (Murmansk, Siberia, Nenets Okrug, Novaya Zemlya), northernmost Sweden (Västerbotten, Norrbotten and Lappland) and the United States (Alaska). Land within the Arctic region has seasonally varying snow and ice cover, with predominantly treeless permafrost (permanently frozen underground ice) under the tundra. Arctic seas contain seasonal sea ice in many places. The Arctic region is a unique area among Earth's ecosystems. The cultures in the region and the Arctic indigenous people have adapted to its cold and extreme conditions. Life in the Arctic includes zooplankton and phytoplankton, fish and marine mammals, birds, land animals, plants and human societies. Arctic land is bordered by the subarctic. Definition and etymology The word Arctic comes from the Greek word ἀρκτικός (arktikos), "near the Bear, northern" and from the word ἄρκτος (arktos), meaning bear. The name refers either to the constellation Ursa Major, the "Great Bear", which is prominent in the northern portion of the celestial sphere, or to the constellation Ursa Minor, the "Little Bear", which contains the celestial north pole (currently very near Polaris, the north Pole Star or North Star). There are a number of definitions of what area is contained within the Arctic. The area can be defined as north of the Arctic Circle (about 66° 34'N), the approximate southern limit of the midnight sun and the polar night. Another definition of the Arctic, which is popular with ecologists, is the region in the Northern Hemisphere where the average temperature for the warmest month (July) is below 10 °C (50 °F); the northernmost tree line roughly follows the isotherm at the boundary of this region. Climate The Arctic is characterized by cold winters and cool summers. Its precipitation mostly comes in the form of snow and is low, with most of the area receiving less than 50 cm (20 in). High winds often stir up snow, creating the illusion of continuous snowfall. Average winter temperatures can go as low as −40 °C (−40 °F), and the coldest recorded temperature is approximately −68 °C (−90 °F). Coastal Arctic climates are moderated by oceanic influences, having generally warmer temperatures and heavier snowfalls than the colder and drier interior areas. The Arctic is affected by current global warming, leading to Arctic sea ice shrinkage, diminished ice in the Greenland ice sheet, and Arctic methane release as the permafrost thaws. The melting of Greenland's ice sheet is linked to polar amplification. Due to the poleward migration of the planet's isotherms (about 56 km (35 mi) per decade during the past 30 years as a consequence of global warming), the Arctic region (as defined by tree line and temperature) is currently shrinking. Perhaps the most alarming result of this is Arctic sea ice shrinkage. There is a large variance in predictions of Arctic sea ice loss, with models showing near-complete to complete loss in September from 2035 to some time around 2067. Flora and fauna Arctic life is characterized by adaptation to short growing seasons with long periods of sunlight, and cold, dark, snow-covered winter conditions.
Plants Arctic vegetation is composed of plants such as dwarf shrubs, graminoids, herbs, lichens, and mosses, which all grow relatively close to the ground, forming tundra. An example of a dwarf shrub is the bearberry. As one moves northward, the amount of warmth available for plant growth decreases considerably. In the northernmost areas, plants are at their metabolic limits, and small differences in the total amount of summer warmth make large differences in the amount of energy available for maintenance, growth and reproduction. Colder summer temperatures cause the size, abundance, productivity and variety of plants to decrease. Trees cannot grow in the Arctic, but in its warmest parts, shrubs are common and can reach 2 m (6 ft 7 in) in height; sedges, mosses and lichens can form thick layers. In the coldest parts of the Arctic, much of the ground is bare; non-vascular plants such as lichens and mosses predominate, along with a few scattered grasses and forbs (like the Arctic poppy). Animals Herbivores on the tundra include the Arctic hare, lemming, muskox, and reindeer. They are preyed on by the snowy owl, Arctic fox, Grizzly bear, and Arctic wolf. The polar bear is also a predator, though it prefers to hunt for marine life from the ice. There are also many birds and marine species endemic to the colder regions. Other terrestrial animals include wolverines, moose, Dall sheep, ermines, and Arctic ground squirrels. Marine mammals include seals, walruses, and several species of cetacean—baleen whales and also narwhals, orcas, and belugas. An excellent and famous example of a ring species exists and has been described around the Arctic Circle in the form of the Larus gulls. Natural resources The Arctic includes copious natural resources (oil, gas, minerals, fresh water, fish and, if the subarctic is included, forest) to which modern technology and the economic opening up of Russia have given significant new opportunities. The interest of the tourism industry is also on the increase. The Arctic contains some of the last and most extensive continuous wilderness areas in the world, and its significance in preserving biodiversity and genotypes is considerable. The increasing presence of humans fragments vital habitats. The Arctic is particularly susceptible to the abrasion of groundcover and to the disturbance of the rare breeding grounds of the animals that are characteristic to the region. The Arctic also holds 1/5 of the Earth's water supply. Paleontology During the Cretaceous time period, the Arctic still had seasonal snows, though only a light dusting and not enough to permanently hinder plant growth. Animals such as the Chasmosaurus, Hypacrosaurus, Troodon, and Edmontosaurus may have all migrated north to take advantage of the summer growing season, and migrated south to warmer climes when winter came. A similar situation may also have been found amongst dinosaurs that lived in Antarctic regions, such as the Muttaburrasaurus of Australia. However, others claim that dinosaurs lived year-round at very high latitudes, such as near the Colville River, which is now at about 70° N but at the time (70 million years ago) was 10° further north. Indigenous population The earliest inhabitants of North America's central and eastern Arctic are referred to as the Arctic small tool tradition (AST) and existed c. 2500 BCE. AST consisted of several Paleo-Eskimo cultures, including the Independence cultures and Pre-Dorset culture. 
The Dorset culture (Inuktitut: Tuniit or Tunit) refers to the next inhabitants of central and eastern Arctic. The Dorset culture evolved because of technological and economic changes during the period of 1050–550 BCE. With the exception of the Quebec/Labrador peninsula, the Dorset culture vanished around 1500 CE. Supported by genetic testing, evidence shows that descendants of the Dorset culture, known as the Sadlermiut, survived in Aivilik, Southampton and Coats Islands, until the beginning of the 20th century.The Dorset/Thule culture transition dates around the ninth–10th centuries CE. Scientists theorize that there may have been cross-contact of the two cultures with sharing of technology, such as fashioning harpoon heads, or the Thule may have found Dorset remnants and adapted their ways with the predecessor culture. Others believe the Thule displaced the Dorset. By 1300 CE, the Inuit, present-day Arctic inhabitants and descendants of Thule culture, had settled in west Greenland, and moved into east Greenland over the following century (Inughuit, Kalaallit and Tunumiit are modern Greenlandic Inuit groups descended from Thule). Over time, the Inuit have migrated throughout the Arctic regions of Eastern Russia, the United States, Canada, and Greenland.Other Circumpolar North indigenous peoples include the Chukchi, Evenks, Iñupiat, Khanty, Koryaks, Nenets, Sami, Yukaghir, Gwich'in, and Yupik. International cooperation and politics The eight Arctic nations (Canada, Kingdom of Denmark [Greenland & The Faroe Islands], Finland, Iceland, Norway, Sweden, Russia, and US) are all members of the Arctic Council, as are organizations representing six indigenous populations (The Aleut International Association, Arctic Athabaskan Council, Gwich'in Council International, Inuit Circumpolar Council, Russian Association of Indigenous Peoples of the North, and Saami Council). The council operates on consensus basis, mostly dealing with environmental treaties and not addressing boundary or resource disputes. Though Arctic policy priorities differ, every Arctic nation is concerned about sovereignty/defense, resource development, shipping routes, and environmental protection. Much work remains on regulatory agreements regarding shipping, tourism, and resource development in Arctic waters. Arctic shipping is subject to some regulatory control through the International Code for Ships Operating in Polar Waters, adopted by the International Maritime Organization on 1 January 2017 and applies to all ships in Arctic waters over 500 tonnes.Research in the Arctic has long been a collaborative international effort, evidenced by the International Polar Year. The International Arctic Science Committee, hundreds of scientists and specialists of the Arctic Council, and the Barents Euro-Arctic Council are more examples of collaborative international Arctic research. Territorial claims No country owns the geographic North Pole or the region of the Arctic Ocean surrounding it. The surrounding six Arctic states that border the Arctic Ocean—Canada, Kingdom of Denmark (with Greenland), Iceland, Norway, Russia, and the United States—are limited to a 200 nautical miles (370 km; 230 mi) exclusive economic zone (EEZ) off their coasts. Two Arctic states (Finland and Sweden) do not have direct access to the Arctic Ocean. Upon ratification of the United Nations Convention on the Law of the Sea, a country has ten years to make claims to an extended continental shelf beyond its 200 nautical mile zone. 
Due to this, Norway (which ratified the convention in 1996), Russia (ratified in 1997), Canada (ratified in 2003) and the Kingdom of Denmark (ratified in 2004) launched projects to establish claims that certain sectors of the Arctic seabed should belong to their territories. On 2 August 2007, two Russian bathyscaphes, MIR-1 and MIR-2, for the first time in history descended to the Arctic seabed beneath the North Pole and placed there a Russian flag made of rust-proof titanium alloy. The flag-placing during Arktika 2007 generated commentary on and concern for a race for control of the Arctic's vast hydrocarbon resources. Foreign ministers and other officials representing Canada, the Kingdom of Denmark, Norway, Russia, and the United States met in Ilulissat, Greenland on 28 May 2008 at the Arctic Ocean Conference and announced the Ilulissat Declaration, blocking any "new comprehensive international legal regime to govern the Arctic Ocean" and pledging "the orderly settlement of any possible overlapping claims." As of 2012, the Kingdom of Denmark is claiming the continental shelf based on the Lomonosov Ridge, extending from Greenland over the North Pole to the northern limit of the Russian EEZ. The Russian Federation is also claiming a large swath of seabed along the Lomonosov Ridge but, unlike Denmark, has confined its claim to its side of the Arctic. In August 2015, Russia made a supplementary submission for the expansion of the external borders of its continental shelf in the Arctic Ocean, asserting that the eastern part of the Lomonosov Ridge and the Mendeleyev Ridge are an extension of the Eurasian continent. In August 2016, the UN Commission on the Limits of the Continental Shelf began to consider Russia's submission. Canada claims the Northwest Passage as part of its internal waters, while the United States and most maritime nations regard it as an international strait, which means that foreign vessels have the right of transit passage. Exploration Since 1937, the larger portion of the Asian-side Arctic region has been extensively explored by Soviet and Russian crewed drifting ice stations. Between 1937 and 1991, 88 international polar crews established and occupied scientific settlements on the drift ice and were carried thousands of kilometres by the ice flow. There are plans to modernise forty research (including meteorological and maritime) stations across the Russian Arctic and to revive 30 abandoned stations; these are intended to support safe, high-volume shipping and to monitor pollution levels. Pollution The Arctic is comparatively clean, although there are certain ecologically difficult localized pollution problems that present a serious threat to the health of people living around these pollution sources. Due to the prevailing worldwide sea and air currents, the Arctic area is the fallout region for long-range transport pollutants, and in some places the concentrations exceed the levels of densely populated urban areas. An example of this is the phenomenon of Arctic haze, which is commonly blamed on long-range pollutants. Another example is the bioaccumulation of PCBs (polychlorinated biphenyls) in Arctic wildlife and people. Preservation There have been many proposals to preserve the Arctic over the years. Most recently, at the Rio Earth Summit on 21 June 2012, a group of celebrities proposed protecting the Arctic in a manner similar to the protection of the Antarctic.
The initial focus of the campaign will be a UN resolution creating a global sanctuary around the pole, and a ban on oil drilling and unsustainable fishing in the Arctic.The Arctic has climate change rates that are amongst the highest in the world. Due to the major impacts to the region from climate change the near climate future of the region will be extremely different under all scenarios of warming. Global warming The effects of global warming in the Arctic include rising temperatures, loss of sea ice, and melting of the Greenland ice sheet. Potential methane release from the region, especially through the thawing of permafrost and methane clathrates, is also a concern. Because of the amplified response of the Arctic to global warming, it is often seen as a leading indicator of global warming. The melting of Greenland's ice sheet is linked to polar amplification.The Arctic is especially vulnerable to the effects of any climate change, as has become apparent with the reduction of sea ice in recent years. Climate models predict much greater warming in the Arctic than the global average, resulting in significant international attention to the region. In particular, there are concerns that Arctic shrinkage, a consequence of melting glaciers and other ice in Greenland, could soon contribute to a substantial rise in sea levels worldwide.The current Arctic warming is leading to ancient carbon being released from thawing permafrost, leading to methane and carbon dioxide production by micro-organisms. Release of methane and carbon dioxide stored in permafrost could cause abrupt and severe global warming, as they are potent greenhouse gases. Climate change is also predicted to have a large impact on tundra vegetation, causing an increase of shrubs, and having a negative impact on bryophytes and lichens.Apart from concerns regarding the detrimental effects of warming in the Arctic, some potential opportunities have gained attention. The melting of the ice is making the Northwest Passage, the shipping routes through the northernmost latitudes, more navigable, raising the possibility that the Arctic region will become a prime trade route. One harbinger of the opening navigability of the Arctic took place in the summer of 2016 when the Crystal Serenity successfully navigated the Northwest Passage, a first for a large cruise ship.In addition, it is believed that the Arctic seabed may contain substantial oil fields which may become accessible if the ice covering them melts. These factors have led to recent international debates as to which nations can claim sovereignty or ownership over the waters of the Arctic. Arctic waters Arctic lands See also Arctic ecology Arctic Search and Rescue Agreement List of countries by northernmost point Arctic sanctuary Poverty in the Arctic Arctic Winter Games Winter City Notes References Bibliography Gibbon, Guy E.; Kenneth M. Ames (1998). Archaeology of prehistoric native America: an encyclopedia. Vol. 1537 of Garland reference library of the humanities. Taylor & Francis. ISBN 978-0-8153-0725-9. Further reading Brian W. Coad, James D. Reist. (2017). Marine Fishes of Arctic Canada. University of Toronto Press. 
ISBN 978-1-4426-4710-7 "Global Security, Climate Change, and the Arctic" Archived 29 December 2017 at the Wayback Machine – 24-page special journal issue (Fall 2009), Swords and Ploughshares, Program in Arms Control, Disarmament, and International Security (ACDIS), University of Illinois GLOBIO Human Impact maps Report on human impacts on the Arctic Krupnik, Igor, Michael A. Lang, and Scott E. Miller, eds. Smithsonian at the Poles: Contributions to International Polar Year Science. Washington, D.C.: Smithsonian Institution Scholarly Press, 2009. Konyshev, Valery & Sergunin, Alexander: The Arctic at the Crossroads of Geopolitical Interests Russian Politics and Law, 2012, Vol.50, No.2, pp. 34–54 Käpylä, Juha & Mikkola, Harri: The Global Arctic: The Growing Arctic Interests of Russia, China, the United States and the European Union Archived 15 September 2013 at the Wayback Machine FIIA Briefing Paper 133, August 2013, The Finnish Institute of International Affairs. Konyshev, Valery & Sergunin, Alexander: Is Russia a revisionist military power in the Arctic? Defense & Security Analysis, September 2014. Konyshev, Valery & Sergunin, Alexander. Russia in search of its Arctic strategy: between hard and soft power? Polar Journal, April 2014. McCannon, John. A History of the Arctic: Nature, Exploration and Exploitation. Reaktion Books and University of Chicago Press, 2012. ISBN 9781780230184 O'Rourke, Ronald (14 October 2016). Changes in the Arctic: Background and Issues for Congress (PDF). Washington, DC: Congressional Research Service. Archived (PDF) from the original on 9 October 2022. Retrieved 20 October 2016. Sperry, Armstrong (1957). All About the Arctic and Antarctic. Random House. LCCN 57007518. External links Arctic Report Card Blossoming Arctic International Arctic Research Center
global dimming
The first systematic measurements of global direct irradiance at the Earth's surface began in the 1950s. A decline in irradiance was soon observed, and it was given the name global dimming. It continued from the 1950s until the 1980s, with an observed reduction of 4–5% per decade, even though solar activity did not vary more than usual at the time. Global dimming has instead been attributed to an increase in atmospheric particulate matter, predominantly sulfate aerosols, as the result of rapidly growing air pollution due to post-war industrialization. After the 1980s, global dimming started to reverse, alongside reductions in particulate emissions, in what has been described as global brightening, although this reversal is only considered "partial" for now. The reversal has also been globally uneven, as the dimming trend continued during the 1990s over some mostly developing countries like India, Zimbabwe, Chile and Venezuela. Over China, the dimming trend continued at a slower rate after 1990, and did not begin to reverse until around 2005. Global dimming has interfered with the hydrological cycle by lowering evaporation, which is likely to have reduced rainfall in certain areas, and may have caused the observed southwards shift of the entire tropical rain belt between 1950 and 1985, with a limited recovery afterwards. Since high evaporation at the tropics is needed to drive the wet season, cooling caused by particulate pollution appears to weaken the Monsoon of South Asia, while reductions in pollution strengthen it. Multiple studies have also connected record levels of particulate pollution in the Northern Hemisphere to the monsoon failure behind the 1984 Ethiopian famine, although the full extent of anthropogenic vs. natural influences on that event is still disputed. On the other hand, global dimming has also counteracted some of the effect of greenhouse gas emissions, effectively "masking" the total extent of global warming experienced to date, with the most-polluted regions even experiencing cooling in the 1970s. Conversely, global brightening contributed to the acceleration of global warming which began in the 1990s. In the near future, global brightening is expected to continue, as nations act to reduce the toll of air pollution on the health of their citizens. This also means that less of the global warming would be masked in the future. Climate models are broadly capable of simulating the impact of aerosols like sulfates, and in the IPCC Sixth Assessment Report they are believed to offset around 0.5 °C (0.90 °F) of warming. Likewise, climate change scenarios incorporate reductions in particulates, and the cooling they offered, into their projections, and this includes the scenarios for climate action required to meet the 1.5 °C (2.7 °F) and 2 °C (3.6 °F) targets. It is generally believed that the cooling provided by global dimming is similar to the warming derived from atmospheric methane, meaning that simultaneous reductions in both would effectively cancel each other out. However, uncertainties remain about the models' representation of aerosol impacts on weather systems, especially over regions with a poorer historical record of atmospheric observations. The processes behind global dimming are similar to those which drive reductions in direct sunlight after volcanic eruptions. In fact, the eruption of Mount Pinatubo in 1991 temporarily reversed the brightening trend.
Both processes are considered an analogue for stratospheric aerosol injection, a solar geoengineering intervention which aims to counteract global warming through intentional releases of reflective aerosols, albeit at much higher altitudes, where lower quantities would be needed and the polluting effects would be minimized. However, while that intervention may be very effective at stopping or reversing warming and its main consequences, it would also have substantial effects on the global hydrological cycle, as well as regional weather and ecosystems. Because its effects are only temporary, it would have to be maintained for centuries until the greenhouse gas concentrations are normalized to avoid a rapid and violent return of the warming, sometimes known as termination shock. History In the late 1960s, Mikhail Ivanovich Budyko worked with simple two-dimensional energy-balance climate models to investigate the reflectivity of ice. He found that the ice–albedo feedback created a positive feedback loop in the Earth's climate system. The more snow and ice, the more solar radiation is reflected back into space and hence the colder Earth grows and the more it snows. Other studies suggested that sulfate pollution or a volcano eruption could provoke the onset of an ice age.In the 1980s, research in Israel and the Netherlands revealed an apparent reduction in the amount of sunlight, and Atsumu Ohmura, a geography researcher at the Swiss Federal Institute of Technology, found that solar radiation striking the Earth's surface had declined by more than 10% over the three previous decades, even as the global temperature had been generally rising since the 1970s. In the 1990s, this was followed by the papers describing multi-decade declines in Estonia, Germany and across the former Soviet Union, which prompted the researcher Gerry Stanhill to coin the term "global dimming". Subsequent research estimated an average reduction in sunlight striking the terrestrial surface of around 4–5% per decade over late 1950s–1980s, and 2–3% per decade when 1990s were included. Notably, solar radiation at the top of the atmosphere did not vary by more than 0.1-0.3% in all that time, strongly suggesting that the reasons for the dimming were on Earth. Additionally, only visible light and infrared radiation were dimmed, rather than the ultraviolet part of the spectrum. Reversal Starting from 2005, scientific papers began to report that after 1990, the global dimming trend had clearly switched to global brightening. This followed measures taken to combat air pollution by the developed nations, typically through flue-gas desulfurization installations at thermal power plants, such as wet scrubbers or fluidized bed combustion. In the United States, sulfate aerosols have declined significantly since 1970 with the passage of the Clean Air Act, which was strengthened in 1977 and 1990. According to the EPA, from 1970 to 2005, total emissions of the six principal air pollutants, including sulfates, dropped by 53% in the US. By 2010, this reduction in sulfate pollution led to estimated healthcare cost savings valued at $50 billion annually. Similar measures were taken in Europe, such as the 1985 Helsinki Protocol on the Reduction of Sulfur Emissions under the Convention on Long-Range Transboundary Air Pollution, and with similar improvements. 
On the other hand, a 2009 review found that dimming continued in China after stabilizing in the 1990s and intensified in India, consistent with their continued industrialization, while the US, Europe, and South Korea continued to brighten. Evidence from Zimbabwe, Chile and Venezuela also pointed to continued dimming during that period, albeit at a lower confidence level due to the lower number of observations. Due to these contrasting trends, no statistically significant change had occurred on a global scale from 2001 to 2012. Post-2010 observations indicate that the global decline in aerosol concentrations and in global dimming has continued, with pollution controls on the global shipping industry playing a substantial role in recent years. Since nearly 90% of the human population lives in the Northern Hemisphere, clouds there are far more affected by aerosols than in the Southern Hemisphere, but these differences have halved in the two decades since 2000, providing further evidence for the ongoing global brightening. Causes Global dimming has been widely attributed to the increased presence of aerosol particles in Earth's atmosphere, predominantly sulfates. While natural dust is also an aerosol with some impacts on climate, and volcanic eruptions considerably increase sulfate concentrations in the short term, these effects have been dwarfed by increases in sulfate emissions since the start of the Industrial Revolution. According to the IPCC First Assessment Report, global human-caused emissions of sulfur into the atmosphere were less than 3 million tons per year in 1860, yet they increased to 15 million tons in 1900, 40 million tons in 1940 and about 80 million tons in 1980. This meant that the human-caused emissions became "at least as large" as all natural emissions of sulfur-containing compounds: the largest natural source, emissions of dimethyl sulfide from the ocean, was estimated at 40 million tons per year, while volcano emissions were estimated at 10 million tons. Moreover, that was the average figure: according to the report, "in the industrialized regions of Europe and North America, anthropogenic emissions dominate over natural emissions by about a factor of ten or even more". Aerosols and other atmospheric particulates have direct and indirect effects on the amount of sunlight received at the surface. Directly, sulfate particles formed from sulfur dioxide reflect almost all sunlight, like tiny mirrors. On the other hand, incomplete combustion of fossil fuels (such as diesel) and wood releases particles of black carbon (predominantly soot), which absorb solar energy and heat up, reducing the overall amount of sunlight received at the surface while also contributing to warming. Black carbon is an extremely small component of air pollution at land surface levels, yet it has a substantial heating effect on the atmosphere at altitudes above two kilometers (6,562 ft). Indirectly, the pollutants affect the climate by acting as nuclei, meaning that water droplets in clouds coalesce around the particles. Increased pollution causes more particulates and thereby creates clouds consisting of a greater number of smaller droplets (that is, the same amount of water is spread over more droplets). The smaller droplets make clouds more reflective, so that more incoming sunlight is reflected back into space and less reaches the Earth's surface. This same effect also reflects radiation from below, trapping it in the lower atmosphere. In models, these smaller droplets also decrease rainfall.
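This first indirect (Twomey) effect can be illustrated with a standard scaling argument (a textbook simplification, not a result stated in this article). If a cloud's liquid water content is held fixed while the droplet number concentration N increases, each droplet must contain proportionally less water, so the effective droplet radius scales as

\[ r_e \propto N^{-1/3} , \]

and the cloud optical depth at fixed liquid water path, which varies roughly as the total droplet cross-section, scales as

\[ \tau \propto N\,r_e^{2} \propto N^{1/3} . \]

More, smaller droplets therefore make a cloud optically thicker and more reflective even though it holds no more water, which is the cloud-brightening effect described above.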
In the 1990s, experiments comparing the atmosphere over the northern and southern islands of the Maldives, showed that the effect of macroscopic pollutants in the atmosphere at that time (blown south from India) caused about a 10% reduction in sunlight reaching the surface in the area under the Asian brown cloud – a much greater reduction than expected from the presence of the particles themselves. Prior to the research being undertaken, predictions were of a 0.5–1% effect from particulate matter; the variation from prediction may be explained by cloud formation with the particles acting as the focus for droplet creation. Relationship to climate change It has been understood for a long time that any effect on solar irradiance from aerosols would necessarily impact Earth's radiation balance. Reductions in atmospheric temperatures have already been observed after large volcanic eruptions such as the 1963 eruption of Mount Agung in Bali, 1982 El Chichón eruption in Mexico, 1985 Nevado del Ruiz eruption in Colombia and 1991 eruption of Mount Pinatubo in the Philippines. However, even the major eruptions only result in temporary jumps of sulfur particles, unlike the more sustained increases caused by the anthropogenic pollution. In 1990, the IPCC First Assessment Report acknowledged that "Human-made aerosols, from sulphur emitted largely in fossil fuel combustion can modify clouds and this may act to lower temperatures", while "a decrease in emissions of sulphur might be expected to increase global temperatures". However, lack of observational data and difficulties in calculating indirect effects on clouds left the report unable to estimate whether the total impact of all anthropogenic aerosols on the global temperature amounted to cooling or warming. By 1995, the IPCC Second Assessment Report had confidently assessed the overall impact of aerosols as negative (cooling); however, aerosols were recognized as the largest source of uncertainty in future projections in that report and the subsequent ones.At the peak of global dimming, it was able to counteract the warming trend completely, but by 1975, the continually increasing concentrations of greenhouse gases have overcome the masking effect and dominated ever since. Even then, regions with high concentrations of sulfate aerosols due to air pollution had initially experienced cooling, in contradiction to the overall warming trend. The eastern United States was a prominent example: the temperatures there declined by 0.7 °C (1.3 °F) between 1970 and 1980, and by up to 1 °C (1.8 °F) in the Arkansas and Missouri. As the sulfate pollution was reduced, the central and eastern United States had experienced warming of 0.3 °C (0.54 °F) between 1980 and 2010, even as sulfate particles still accounted for around 25% of all particulates. By 2021, the northeastern coast of the United States was instead one of the fastest-warming regions of North America, as the slowdown of the Atlantic Meridional Overturning Circulation increased temperatures in that part of the North Atlantic Ocean.Globally, the emergence of extreme heat beyond the preindustrial records was delayed by aerosol cooling, and hot extremes accelerated as global dimming abated: it has been estimated that since the mid-1990s, peak daily temperatures in northeast Asia and hottest days of the year in Western Europe would have been substantially less hot if aerosol concentrations had stayed the same as before. 
In Europe, the declines in aerosol concentrations since the 1980s had also reduced the associated fog, mist and haze: altogether, it was responsible for about 10–20% of daytime warming across Europe, and about 50% of the warming over the more polluted Eastern Europe. Because aerosol cooling depends on reflecting sunlight, air quality improvements had a negligible impact on wintertime temperatures, but had increased temperatures from April to September by around 1 °C (1.8 °F) in Central and Eastern Europe. Some of the acceleration of sea level rise, as well as Arctic amplification and the associated Arctic sea ice decline, was also attributed to the reduction in aerosol masking.Pollution from black carbon, mostly represented by soot, also contributes to global dimming. However, because it absorbs heat instead of reflecting it, it warms the planet instead of cooling it like sulfates. This warming is much weaker than that of greenhouse gases, but it can be regionally significant when black carbon is deposited over ice masses like mountain glaciers and the Greenland ice sheet, where it reduces their albedo and increases their absorption of solar radiation. Even the indirect effect of soot particles acting as cloud nuclei is not strong enough to provide cooling: the "brown clouds" formed around soot particles were known to have a net warming effect since the 2000s. Black carbon pollution is particularly strong over India, and as the result, it is considered to be one of the few regions where cleaning up air pollution would reduce, rather than increase, warming.Since changes in aerosol concentrations already have an impact on the global climate, they would necessarily influence future projections as well. In fact, it is impossible to fully estimate the warming impact of all greenhouse gases without accounting for the counteracting cooling from aerosols. Climate models started to account for the effects of sulfate aerosols around the IPCC Second Assessment Report; when the IPCC Fourth Assessment Report was published in 2007, every climate model had integrated sulfates, but only 5 were able to account for less impactful particulates like black carbon. By 2021, CMIP6 models estimated total aerosol cooling in the range from 0.1 °C (0.18 °F) to 0.7 °C (1.3 °F); The IPCC Sixth Assessment Report selected the best estimate of a 0.5 °C (0.90 °F) cooling provided by sulfate aerosols, while black carbon amounts to about 0.1 °C (0.18 °F) of warming. While these values are based on combining model estimates with observational constraints, including those on ocean heat content, the matter is not yet fully settled. The difference between model estimates mainly stems from disagreements over the indirect effects of aerosols on clouds. While it is well known that aerosols increase the number of cloud droplets and this makes the clouds more reflective, calculating how liquid water path, an important cloud property, is affected by their presence is far more challenging, as it involves computationally heavy continuous calculations of evaporation and condensation within clouds. Climate models generally assume that aerosols increase liquid water path, which makes the clouds even more reflective. However, satellite observations taken in 2010s suggested that aerosols decreased liquid water path instead, and in 2018, this was reproduced in a model which integrated more complex cloud microphysics. 
Yet, 2019 research found that earlier satellite observations were biased by failing to account for the thickest, most water-heavy clouds naturally raining more and shedding more particulates: very strong aerosol cooling was seen when comparing clouds of the same thickness. Moreover, large-scale observations can be confounded by changes in other atmospheric factors, like humidity: i.e. it was found that while post-1980 improvements in air quality would have reduced the number of clouds over the East Coast of the United States by around 20%, this was offset by the increase in relative humidity caused by atmospheric response to AMOC slowdown. Similarly, while the initial research looking at sulfates from the 2014–2015 eruption of Bárðarbunga found that they caused no change in liquid water path, it was later suggested that this finding was confounded by counteracting changes in humidity. To avoid confounders, many observations of aerosol effects focus on ship tracks, but post-2020 research found that visible ship tracks are a poor proxy for other clouds, and estimates derived from them overestimate aerosol cooling by as much as 200%. At the same time, other research found that the majority of ship tracks are "invisible" to satellites, meaning that the earlier research had underestimated aerosol cooling by overlooking them. Finally, 2023 research indicates that all climate models have underestimated sulfur emissions from volcanoes which occur in the background, outside of major eruptions, and so had consequently overestimated the cooling provided by anthropogenic aerosols, especially in the Arctic climate. Regardless of the current strength of aerosol cooling, all future climate change scenarios project decreases in particulates and this includes the scenarios where 1.5 °C (2.7 °F) and 2 °C (3.6 °F) targets are met: their specific emission reduction targets assume the need to make up for lower dimming. Since models estimate that the cooling caused by sulfates is largely equivalent to the warming caused by atmospheric methane (and since methane is a relatively short-lived greenhouse gas), it is believed that simultaneous reductions in both would effectively cancel each other out. Yet, in the recent years, methane concentrations had been increasing at rates exceeding their previous period of peak growth in the 1980s, with wetland methane emissions driving much of the recent growth, while air pollution is getting cleaned up aggressively. These trends are some of the main reasons why 1.5 °C (2.7 °F) warming is now expected around 2030, as opposed to the mid-2010s estimates where it would not occur until 2040.It has also been suggested that aerosols are not given sufficient attention in regional risk assessments, in spite of being more influential on a regional scale than globally. For instance, a climate change scenario with high greenhouse gas emissions but strong reductions in air pollution would see 0.2 °C (0.36 °F) more global warming by 2050 than the same scenario with little improvement in air quality, but regionally, the difference would add 5 more tropical nights per year in northern China and substantially increase precipitation in northern China and northern India. Likewise, a paper comparing current level of clean air policies with a hypothetical maximum technically feasible action under otherwise the same climate change scenario found that the latter would increase the risk of temperature extremes by 30–50% in China and in Europe. 
Unfortunately, because historical records of aerosols are sparser in some regions than in others, accurate regional projections of aerosol impacts are difficult. Even the latest CMIP6 climate models can only accurately represent aerosol trends over Europe, but struggle with representing North America and Asia, meaning that their near-future projections of regional impacts are likely to contain errors as well. Aircraft contrails and lockdowns In general, aircraft contrails (also called vapor trails) are believed to trap outgoing longwave radiation emitted by the Earth and atmosphere more than they reflect incoming solar radiation, resulting in a net increase in radiative forcing. In 1992, this warming effect was estimated between 3.5 mW/m2 and 17 mW/m2. Global radiative forcing impact of aircraft contrails has been calculated from the reanalysis data, climate models, and radiative transfer codes; estimated at 12 mW/m2 for 2005, with an uncertainty range of 5 to 26 mW/m2, and with a low level of scientific understanding. Contrail cirrus may be air traffic's largest radiative forcing component, larger than all CO2 accumulated from aviation, and could triple from a 2006 baseline to 160–180 mW/m2 by 2050 without intervention. For comparison, the total radiative forcing from human activities amounted to 2.72 W/m2 (with a range between 1.96 and 3.48W/m2) in 2019, and the increase from 2011 to 2019 alone amounted to 0.34W/m2.Contrail effects differ a lot depending on when they are formed, as they decrease the daytime temperature and increase the nighttime temperature, reducing their difference. In 2006, it was estimated that night flights contribute 60 to 80% of contrail radiative forcing while accounting for 25% of daily air traffic, and winter flights contribute half of the annual mean radiative forcing while accounting for 22% of annual air traffic. Starting from the 1990s, it was suggested that contrails during daytime have a strong cooling effect, and when combined with the warming from night-time flights, this would lead to a substantial diurnal temperature variation (the difference in the day's highs and lows at a fixed station). When no commercial aircraft flew across the USA following the September 11 attacks, the diurnal temperature variation was widened by 1.1 °C (2.0 °F). Measured across 4,000 weather stations in the continental United States, this increase was the largest recorded in 30 years. Without contrails, the local diurnal temperature range was 1 °C (1.8 °F) higher than immediately before. In the southern US, the difference was diminished by about 3.3 °C (6 °F), and by 2.8 °C (5 °F) in the US midwest. However, follow-up studies found that a natural change in cloud cover can more than explain these findings. The authors of a 2008 study wrote, "The variations in high cloud cover, including contrails and contrail-induced cirrus clouds, contribute weakly to the changes in the diurnal temperature range, which is governed primarily by lower altitude clouds, winds, and humidity." A 2011 study of British meteorological records taken during World War II identified one event where the temperature was 0.8 °C (1.4 °F) higher than the day's average near airbases used by USAAF strategic bombers after they flew in a formation, although they cautioned it was a single event. The global response to the 2020 coronavirus pandemic led to a reduction in global air traffic of nearly 70% relative to 2019. 
Thus, it provided an extended opportunity to study the impact of contrails on regional and global temperatures. Multiple studies found "no significant response of diurnal surface air temperature range" as a result of contrail changes, and either "no net significant global ERF" (effective radiative forcing) or a very small warming effect. On the other hand, the decline in sulfate emissions caused by the curtailed road traffic and industrial output during the COVID-19 lockdowns did have a detectable warming impact: it was estimated to have increased global temperatures by 0.01–0.02 °C (0.018–0.036 °F) initially and by up to 0.03 °C (0.054 °F) by 2023, before disappearing. Regionally, the lockdowns were estimated to increase temperatures by 0.05–0.15 °C (0.090–0.270 °F) in eastern China over January–March, and then by 0.04–0.07 °C (0.072–0.126 °F) over Europe, the eastern United States, and South Asia in March–May, with a peak impact of 0.3 °C (0.54 °F) in some regions of the United States and Russia. In the city of Wuhan, the urban heat island effect was found to have decreased by 0.24 °C (0.43 °F) at night and by 0.12 °C (0.22 °F) overall during the strictest lockdowns. Relationship to hydrological cycle On regional and global scales, air pollution can affect the water cycle in a manner similar to some natural processes. One example is the impact of Saharan dust on hurricane formation: air laden with sand and mineral particles moves over the Atlantic Ocean, where the particles block some of the sunlight from reaching the water surface, slightly cooling it and suppressing the development of hurricanes. Likewise, it has been suggested since the early 2000s that because aerosols decrease solar radiation over the ocean and hence reduce evaporation from it, they would be "spinning down the hydrological cycle of the planet." In 2011, it was found that anthropogenic aerosols had been the predominant factor behind 20th century changes in rainfall over the Atlantic Ocean sector, when the entire tropical rain belt shifted southwards between 1950 and 1985, with a limited northwards shift afterwards. Future reductions in aerosol emissions are expected to result in a more rapid northwards shift, with limited impact in the Atlantic but a substantially greater impact in the Pacific. Most notably, multiple studies connect aerosols from the Northern Hemisphere to the failed monsoon in sub-Saharan Africa during the 1970s and 1980s, which then led to the Sahel drought and the associated famine. However, model simulations of Sahel climate are very inconsistent, so it is difficult to prove that the drought would not have occurred without aerosol pollution, although it would clearly have been less severe. Some research indicates that those models which show warming alone driving strong precipitation increases in the Sahel are the most accurate, making it more likely that sulfate pollution was to blame for overpowering this response and sending the region into drought. Another dramatic finding connected the impact of aerosols with the weakening of the South Asian monsoon. It was first advanced in 2006, yet it too has remained difficult to prove. In particular, some research suggested that warming itself increases the risk of monsoon failure, potentially pushing it past a tipping point.
By 2021, however, it was concluded that global warming consistently strengthens the monsoon, and some strengthening was already observed in the aftermath of lockdown-caused aerosol reductions. In 2009, an analysis of 50 years of data found that light rains had decreased over eastern China, even though there was no significant change in the amount of water held by the atmosphere. This was attributed to aerosols reducing droplet size within clouds, which led to those clouds retaining water for a longer time without raining. The phenomenon of aerosols suppressing rainfall by reducing cloud droplet size has been confirmed by subsequent studies. Later research found that aerosol pollution over South and East Asia did not just suppress rainfall there, but also resulted in more moisture being transferred to Central Asia, where summer rainfall increased as a result. The IPCC Sixth Assessment Report has also linked changes in aerosol concentrations to altered precipitation in the Mediterranean region. Solar geoengineering An increase in planetary albedo of 1% would eliminate most of the radiative forcing from anthropogenic greenhouse gas emissions, and thereby most of the associated global warming, while a 2% albedo increase would negate the warming effect of doubling the atmospheric carbon dioxide concentration. This is the theory behind solar geoengineering, and the high reflective potential of sulfate aerosols means that they have long been considered in this capacity. In 1974, Mikhail Budyko suggested that if global warming became a problem, the planet could be cooled by burning sulfur in the stratosphere, which would create a haze. The simplest approach, however, would be to send the sulfates into the troposphere – the lowest part of the atmosphere, where air pollution already places them. Using it today would be equivalent to more than reversing decades of air quality improvements, and the world would face the same issues that prompted the introduction of those regulations in the first place, such as acid rain. The suggestion of relying on tropospheric global dimming to curb warming has been described as a "Faustian bargain" and is not seriously considered by modern research. Instead, starting with the seminal 2006 paper by Paul Crutzen, the solution advocated is known as stratospheric aerosol injection, or SAI. It would transport sulfates into the next higher layer of the atmosphere – the stratosphere – where they would persist for years instead of weeks, so far less sulfur would have to be emitted. It has been estimated that the amount of sulfur needed to offset a warming of around 4 °C (7.2 °F) relative to now (and 5 °C (9.0 °F) relative to the preindustrial era) under the highest-emission scenario, RCP 8.5, would be less than what is already emitted through air pollution today, and that the reductions in sulfur pollution expected from future air quality improvements under that scenario would offset the sulfur used for geoengineering. The trade-off is increased cost. While there is a popular narrative that stratospheric aerosol injection could be carried out by individuals, small states, or other non-state rogue actors, scientific estimates suggest that cooling the atmosphere by 1 °C (1.8 °F) through stratospheric aerosol injection would cost at least $18 billion annually (at 2020 USD value), meaning that only the largest economies or economic blocs could afford this intervention.
Even so, these approaches would still be "orders of magnitude" cheaper than greenhouse gas mitigation, let alone the costs of the unmitigated effects of climate change. The main downside to SAI is that any such cooling would cease 1–3 years after the last aerosol injection, while the warming from CO2 emissions lasts for hundreds to thousands of years unless they are reversed earlier. This means that neither stratospheric aerosol injection nor other forms of solar geoengineering can be used as a substitute for reducing greenhouse gas emissions, because if solar geoengineering were to cease while greenhouse gas levels remained high, it would lead to "large and extremely rapid" warming and similarly abrupt changes to the water cycle. Many thousands of species would likely go extinct as a result. Instead, any solar geoengineering would act as a temporary measure to limit warming while emissions of greenhouse gases are reduced and carbon dioxide is removed, which may well take hundreds of years. Other risks include limited knowledge about the regional impacts of solar geoengineering (beyond the certainty that even stopping or reversing the warming entirely would still result in significant changes in weather patterns in many areas) and, correspondingly, the impacts on ecosystems. It is generally believed that, relative to now, crop yields and carbon sinks would be largely unaffected or may even increase slightly, because reduced photosynthesis due to lower sunlight would be offset by the CO2 fertilization effect and the reduction in thermal stress, but there is less confidence about how specific ecosystems may be affected. Moreover, stratospheric aerosol injection is likely to somewhat increase mortality from skin cancer due to the weakened ozone layer, but it would also reduce mortality from ground-level ozone, with the net effect unclear. Changes in precipitation are also likely to shift the habitat of mosquitoes and thus substantially affect the distribution and spread of vector-borne diseases, with currently unclear consequences. See also Aerosols Air pollution Anthropogenic cloud Asian brown cloud Chemtrail conspiracy theory Environmental impact of aviation Global cooling Nuclear winter Ship tracks Snowball Earth Stratospheric aerosol injection Sulfur dioxide Sunshine recorders References External links Global Aerosol Climatology Project
solar cycle
The solar cycle, also known as the solar magnetic activity cycle, sunspot cycle, or Schwabe cycle, is a nearly periodic 11-year change in the Sun's activity measured in terms of variations in the number of observed sunspots on the Sun's surface. Over the period of a solar cycle, levels of solar radiation and ejection of solar material, the number and size of sunspots, solar flares, and coronal loops all exhibit a synchronized fluctuation from a period of minimum activity to a period of maximum activity and back to a period of minimum activity. The magnetic field of the Sun flips during each solar cycle, with the flip occurring when the solar cycle is near its maximum. After two solar cycles, the Sun's magnetic field returns to its original state, completing what is known as a Hale cycle. This cycle has been observed for centuries by changes in the Sun's appearance and by terrestrial phenomena such as aurorae, but was not clearly identified until 1843. Solar activity, driven by both the solar cycle and transient aperiodic processes, governs the environment of interplanetary space by creating space weather and impacting space- and ground-based technologies as well as the Earth's atmosphere, and possibly also climate fluctuations on scales of centuries and longer. Understanding and predicting the solar cycle remains one of the grand challenges in astrophysics, with major ramifications for space science and the understanding of magnetohydrodynamic phenomena elsewhere in the universe. The current scientific consensus on climate change is that solar variations play only a marginal role in driving global climate change, since the measured magnitude of recent solar variation is much smaller than the forcing due to greenhouse gases. Definition Solar cycles have an average duration of about 11 years. Solar maximum and solar minimum refer to periods of maximum and minimum sunspot counts. Cycles span from one minimum to the next. Observational history The idea that solar activity varies cyclically was first hypothesized by Christian Horrebow based on his regular observations of sunspots made between 1761 and 1776 from the Rundetaarn observatory in Copenhagen, Denmark. In 1775, Horrebow noted how "it appears that after the course of a certain number of years, the appearance of the Sun repeats itself with respect to the number and size of the spots". The solar cycle, however, would not be clearly identified until 1843, when Samuel Heinrich Schwabe noticed a periodic variation in the average number of sunspots after 17 years of solar observations. Schwabe continued to observe the sunspot cycle for another 23 years, until 1867. In 1852, Rudolf Wolf designated the first numbered solar cycle to have started in February 1755 based on Schwabe's and other observations. Wolf also created a standard sunspot number index, the Wolf number, which continues to be used today. Between 1645 and 1715, very few sunspots were observed and recorded. This was first noted by Gustav Spörer and was later named the Maunder minimum after the wife-and-husband team Annie S. D. Maunder and Edward Walter Maunder, who extensively researched this peculiar interval. In the second half of the nineteenth century, Richard Carrington and Spörer independently noted the phenomenon of sunspots appearing at different heliographic latitudes at different parts of the cycle. (See Spörer's law.)
Alfred Harrison Joy would later describe how the degree to which sunspot groups are "tilted" – with the leading spot(s) closer to the equator than the trailing spot(s) – grows with the latitude of these regions. (See Joy's law.) The cycle's physical basis was elucidated by George Ellery Hale and collaborators, who in 1908 showed that sunspots were strongly magnetized (the first detection of magnetic fields beyond the Earth). In 1919 they identified a number of patterns that would collectively become known as Hale's law: In the same heliographic hemisphere, bipolar active regions tend to have the same leading polarity. In the opposite hemisphere, i.e., across the equator, these regions tend to have the opposite leading polarity. Leading polarities in both hemispheres flip from one sunspot cycle to the next. Hale's observations revealed that the complete magnetic cycle – which would later be referred to as a Hale cycle – spans two solar cycles, or 22 years, before returning to its original state (including polarity). Because nearly all manifestations are insensitive to polarity, the 11-year solar cycle remains the focus of research; however, the two halves of the Hale cycle are typically not identical: the 11-year cycles usually alternate between higher and lower sums of Wolf's sunspot numbers (the Gnevyshev–Ohl rule). In 1961, the father-and-son team of Harold and Horace Babcock established that the solar cycle is a spatiotemporal magnetic process unfolding over the Sun as a whole. They observed that the solar surface is magnetized outside of sunspots, that this (weaker) magnetic field is to first order a dipole, and that this dipole undergoes polarity reversals with the same period as the sunspot cycle. Horace Babcock's model described the Sun's oscillatory magnetic field as having a quasi-steady periodicity of 22 years. It covered the oscillatory exchange of energy between the toroidal and poloidal components of the solar magnetic field. Cycle history Sunspot numbers over the past 11,400 years have been reconstructed using carbon-14 isotope ratios. The level of solar activity beginning in the 1940s is exceptional – the last period of similar magnitude occurred around 9,000 years ago (during the warm Boreal period). The Sun was at a similarly high level of magnetic activity for only ~10% of the past 11,400 years. Almost all earlier high-activity periods were shorter than the present episode. Fossil records suggest that the solar cycle has been stable for at least the last 700 million years. For example, the cycle length during the Early Permian is estimated to be 10.62 years, and it was similar in the Neoproterozoic. Until 2009, it was thought that 28 cycles had spanned the 309 years between 1699 and 2008, giving an average length of 11.04 years, but research then showed that the longest of these (1784–1799) may actually have been two cycles. If so, then the average length would be only around 10.7 years. Since observations began, cycles as short as 9 years and as long as 14 years have been observed, and if the cycle of 1784–1799 is in fact double, then one of the two component cycles had to be less than 8 years in length. Significant amplitude variations also occur. Several lists of proposed historical "grand minima" of solar activity exist. Recent cycles Cycle 25 Solar cycle 25 began in December 2019. Several predictions have been made for solar cycle 25 based on different methods, ranging from very weak to strong magnitude.
A physics-based prediction relying on the data-driven solar dynamo and solar surface flux transport models of Bhowmik and Nandy (2018) appears to have correctly predicted the strength of the solar polar field at the current minimum and forecasts a weak but not insignificant solar cycle 25, similar to or slightly stronger than cycle 24. Notably, they rule out the possibility of the Sun falling into a Maunder-minimum-like (inactive) state over the next decade. A preliminary consensus was reached by a Solar Cycle 25 Prediction Panel in early 2019. The Panel, organized by NOAA's Space Weather Prediction Center (SWPC) and NASA and drawing on the published solar cycle 25 predictions, concluded that solar cycle 25 will be very similar to solar cycle 24. They anticipate that the solar cycle minimum before cycle 25 will be long and deep, just as the minimum that preceded cycle 24. They expect solar maximum to occur between 2023 and 2026 with a sunspot range of 95 to 130, given in terms of the revised sunspot number. Cycle 24 Solar cycle 24 began on 4 January 2008, with minimal activity until early 2010. The cycle featured a "double-peaked" solar maximum. The first peak reached 99 in 2011 and the second, in early 2014, reached 101. Cycle 24 ended in December 2019 after 11.0 years. Cycle 23 Solar cycle 23 lasted 11.6 years, beginning in May 1996 and ending in January 2008. The maximum smoothed sunspot number (monthly number of sunspots averaged over a twelve-month period) observed during the solar cycle was 120.8 (March 2000), and the minimum was 1.7. A total of 805 days had no sunspots during this cycle. Phenomena Because the solar cycle reflects magnetic activity, various magnetically driven solar phenomena follow the solar cycle, including sunspots, faculae/plage, network, and coronal mass ejections. Sunspots The Sun's apparent surface, the photosphere, radiates more actively when there are more sunspots. Satellite monitoring of solar luminosity revealed a direct relationship between the solar cycle and luminosity, with a peak-to-peak amplitude of about 0.1%. Luminosity decreases by as much as 0.3% on a 10-day timescale when large groups of sunspots rotate across the Earth's view, and increases by as much as 0.05% for up to 6 months due to faculae associated with large sunspot groups. The best information today comes from SOHO (a cooperative project of the European Space Agency and NASA), such as the MDI magnetograms, in which the solar "surface" magnetic field can be seen. As each cycle begins, sunspots appear at mid-latitudes and then move closer and closer to the equator until a solar minimum is reached. This pattern is best visualized in the form of the so-called butterfly diagram: images of the Sun are divided into latitudinal strips, the monthly-averaged fractional surface covered by sunspots is calculated for each strip, this is plotted vertically as a color-coded bar, and the process is repeated month after month to produce the time-series diagram. While magnetic field changes are concentrated at sunspots, the entire Sun undergoes analogous changes, albeit of smaller magnitude. Faculae and plage Faculae are bright magnetic features on the photosphere. They extend into the chromosphere, where they are referred to as plage. The evolution of plage areas is typically tracked from solar observations in the Ca II K line (393.37 nm). The amount of facular and plage area varies in phase with the solar cycle, and these features are more abundant than sunspots by approximately an order of magnitude.
They exhibit a nonlinear relation to sunspots. Plage regions are also associated with strong magnetic fields at the solar surface. Solar flares and coronal mass ejections The solar magnetic field structures the corona, giving it its characteristic shape visible at times of solar eclipses. Complex coronal magnetic field structures evolve in response to fluid motions at the solar surface and the emergence of magnetic flux produced by dynamo action in the solar interior. For reasons not yet understood in detail, sometimes these structures lose stability, leading to solar flares and coronal mass ejections (CMEs). A flare consists of an abrupt emission of energy (primarily at ultraviolet and X-ray wavelengths) and may or may not be accompanied by a coronal mass ejection, an injection of energetic particles (primarily ionized hydrogen) into interplanetary space. Flares and CMEs are caused by the sudden localized release of magnetic energy, which drives the emission of ultraviolet and X-ray radiation as well as energetic particles. These eruptive phenomena can have a significant impact on Earth's upper atmosphere and space environment, and are the primary drivers of what is now called space weather. Consequently, the occurrence of both geomagnetic storms and solar energetic particle events shows a strong solar cycle variation, peaking close to sunspot maximum. The occurrence frequency of coronal mass ejections and flares is strongly modulated by the cycle. Flares of any given size are some 50 times more frequent at solar maximum than at minimum. Large coronal mass ejections occur on average a few times a day at solar maximum, down to one every few days at solar minimum. The sizes of these events themselves do not depend sensitively on the phase of the solar cycle. A case in point is the three large X-class flares that occurred in December 2006, very near solar minimum; an X9.0 flare on 5 December stands as one of the brightest on record. Patterns Along with the approximately 11-year sunspot cycle, a number of additional patterns and cycles have been hypothesized. Waldmeier effect The Waldmeier effect describes the observation that the maximum amplitudes of solar cycles are inversely proportional to the time between their solar minima and maxima. Therefore, cycles with larger maximum amplitudes tend to take less time to reach their maxima than cycles with smaller amplitudes. The effect was named after Max Waldmeier, who first described it. Gnevyshev–Ohl rule The Gnevyshev–Ohl rule describes the tendency for the sum of the Wolf number over an odd solar cycle to exceed that of the preceding even cycle. Gleissberg cycle The Gleissberg cycle describes an amplitude modulation of solar cycles with a period of about 70–100 years, or seven or eight solar cycles.
It was named after Wolfgang Gleißberg. Associated centennial variations in magnetic fields in the corona and heliosphere have been detected using carbon-14 and beryllium-10 cosmogenic isotopes stored in terrestrial reservoirs such as ice sheets and tree rings, and by using historic observations of geomagnetic storm activity, which bridge the time gap between the end of the usable cosmogenic isotope data and the start of modern satellite data. These variations have been successfully reproduced using models that employ magnetic flux continuity equations and observed sunspot numbers to quantify the emergence of magnetic flux from the top of the solar atmosphere into the heliosphere, showing that sunspot observations, geomagnetic activity and cosmogenic isotopes offer a convergent understanding of solar activity variations. Suess cycle The Suess cycle, or de Vries cycle, is a cycle present in radiocarbon proxies of solar activity with a period of about 210 years. It was named after Hans Eduard Suess and Hessel de Vries. Although calculated radioisotope production rates are well correlated with the 400-year sunspot record, there is little evidence of the Suess cycle in the 400-year sunspot record by itself. Other hypothesized cycles Periodicity of solar activity with periods longer than the solar cycle of about 11 (22) years has been proposed, including: The Hallstatt cycle (named after a cool and wet period in Europe when glaciers advanced), hypothesized to extend for approximately 2,400 years. In studies of carbon-14 ratios, cycles of 105, 131, 232, 385, 504, 805 and 2,241 years have been proposed, possibly matching cycles derived from other sources. Damon and Sonett proposed carbon-14-based medium- and short-term variations with periods of 208 and 88 years, as well as a 2,300-year radiocarbon period that modulates the 208-year period. The Brückner-Egeson-Lockyer cycle (30 to 40 year cycles). Effects Solar Surface magnetism Sunspots eventually decay, releasing magnetic flux in the photosphere. This flux is dispersed and churned by turbulent convection and solar large-scale flows. These transport mechanisms lead to the accumulation of magnetized decay products at high solar latitudes, eventually reversing the polarity of the polar fields. The dipolar component of the solar magnetic field reverses polarity around the time of solar maximum and reaches peak strength at the solar minimum. Space Spacecraft CMEs (coronal mass ejections) produce a radiation flux of high-energy protons, sometimes known as solar cosmic rays. These can cause radiation damage to electronics and solar cells in satellites. Solar proton events can also cause single-event upset (SEU) events on electronics; at the same time, the reduced flux of galactic cosmic radiation during solar maximum decreases the high-energy component of the particle flux. CME radiation is dangerous to astronauts on a space mission who are outside the shielding produced by the Earth's magnetic field. Future mission designs (e.g., for a Mars mission) therefore incorporate a radiation-shielded "storm shelter" for astronauts to retreat to during such an event. Gleißberg developed a CME forecasting method that relies on consecutive cycles. The increased irradiance during solar maximum expands the envelope of the Earth's atmosphere, causing low-orbiting space debris to re-enter more quickly.
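The sensitivity of orbital decay to this change in upper-atmosphere density can be illustrated with a minimal drag calculation. The sketch below is not taken from any source cited here: the thermospheric densities, the spacecraft area-to-mass ratio and the drag coefficient are assumed, round-number values, chosen only to show how strongly the decay rate scales with density between solar minimum and maximum.

```python
import math

# Minimal sketch of drag-driven altitude loss for a circular low Earth orbit.
# All numbers below are illustrative assumptions, not measured values.
MU = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # mean Earth radius, m

def altitude_loss_m_per_day(altitude_m, density_kg_m3, cd=2.2, area_m2=1.0, mass_kg=100.0):
    """Approximate altitude lost per day by a circular orbit under atmospheric drag."""
    a = R_EARTH + altitude_m
    # Standard circular-orbit decay rate: da/dt = -sqrt(mu * a) * rho * (Cd * A / m)
    da_dt = math.sqrt(MU * a) * density_kg_m3 * (cd * area_m2 / mass_kg)
    return da_dt * 86400  # metres per day

# Assumed thermospheric densities at ~400 km: lower at solar minimum, several times higher at maximum.
for label, rho in (("solar minimum", 2e-12), ("solar maximum", 8e-12)):
    print(f"{label}: ~{altitude_loss_m_per_day(400e3, rho):.0f} m of altitude lost per day")
```

Because the decay rate is directly proportional to the local density in this approximation, a severalfold increase in thermospheric density at solar maximum shortens debris lifetimes by a comparable factor.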
Galactic cosmic ray flux The outward expansion of solar ejecta into interplanetary space provides overdensities of plasma that are efficient at scattering high-energy cosmic rays entering the solar system from elsewhere in the galaxy. The frequency of solar eruptive events is modulated by the cycle, changing the degree of cosmic ray scattering in the outer solar system accordingly. As a consequence, the cosmic ray flux in the inner Solar System is anticorrelated with the overall level of solar activity. This anticorrelation is clearly detected in cosmic ray flux measurements at the Earth's surface. Some high-energy cosmic rays entering Earth's atmosphere collide hard enough with molecular atmospheric constituents that they occasionally cause nuclear spallation reactions. The spallation products include radionuclides such as 14C and 10Be, which settle on the Earth's surface. Their concentration can be measured in tree trunks or ice cores, allowing a reconstruction of solar activity levels into the distant past. Such reconstructions indicate that the overall level of solar activity since the middle of the twentieth century stands amongst the highest of the past 10,000 years, and that epochs of suppressed activity, of varying durations, have occurred repeatedly over that time span. Atmospheric Solar irradiance The total solar irradiance (TSI) is the amount of solar radiative energy incident on the Earth's upper atmosphere. TSI variations were undetectable until satellite observations began in late 1978. A series of radiometers has been launched on satellites since the 1970s. TSI measurements varied from 1355 to 1375 W/m2 across more than ten satellites. One of the satellites, ACRIMSAT, was launched by the ACRIM group. The controversial 1989–1991 "ACRIM gap" between non-overlapping ACRIM satellites was interpolated by the ACRIM group into a composite showing a +0.037%/decade rise. Another series based on the ACRIM data is produced by the PMOD group and shows a −0.008%/decade downward trend. This 0.045%/decade difference can impact climate models. However, model reconstructions of total solar irradiance favor the PMOD series, thus reconciling the ACRIM-gap issue. Solar irradiance varies systematically over the cycle, both in total irradiance and in its relative components (UV versus visible and other frequencies). The Sun is an estimated 0.07 percent brighter during the mid-cycle solar maximum than at the terminal solar minimum. Photospheric magnetism appears to be the primary cause (96%) of the 1996–2013 TSI variation. The ratio of ultraviolet to visible light varies. TSI varies in phase with the solar magnetic activity cycle with an amplitude of about 0.1% around an average value of about 1361.5 W/m2 (the "solar constant"). Variations about the average of up to −0.3% are caused by large sunspot groups, and of +0.05% by large faculae and the bright network, on a 7–10-day timescale. Satellite-era TSI variations show small but detectable trends. TSI is higher at solar maximum, even though sunspots are darker (cooler) than the average photosphere. This is caused by magnetized structures other than sunspots during solar maxima, such as faculae and active elements of the "bright" network, which are brighter (hotter) than the average photosphere. They collectively overcompensate for the irradiance deficit associated with the cooler but less numerous sunspots.
The primary driver of TSI changes on solar rotational and solar cycle timescales is the varying photospheric coverage of these radiatively active solar magnetic structures. Changes in the UV irradiance involved in the production and loss of ozone have atmospheric effects. The 30 hPa atmospheric pressure level changed height in phase with solar activity during solar cycles 20–23. Increased UV irradiance caused higher ozone production, leading to stratospheric heating and to poleward displacements of the stratospheric and tropospheric wind systems. Short-wavelength radiation With a temperature of 5870 K, the photosphere emits only a small proportion of its radiation in the extreme ultraviolet (EUV) and above. However, hotter upper layers of the Sun's atmosphere (the chromosphere and corona) emit more short-wavelength radiation. Since the upper atmosphere is not homogeneous and contains significant magnetic structure, the solar ultraviolet (UV), EUV and X-ray flux varies markedly over the cycle. This variation has been observed in soft X-rays by the Japanese satellite Yohkoh, from after August 30, 1991, at the peak of cycle 22, to September 6, 2001, at the peak of cycle 23. Similar cycle-related variations are observed in the flux of solar UV or EUV radiation, as observed, for example, by the SOHO or TRACE satellites. Even though it accounts for only a minuscule fraction of total solar radiation, the impact of solar UV, EUV and X-ray radiation on the Earth's upper atmosphere is profound. Solar UV flux is a major driver of stratospheric chemistry, and increases in ionizing radiation significantly affect ionosphere-influenced temperature and electrical conductivity. Solar radio flux Emission from the Sun at centimetric (radio) wavelengths is due primarily to coronal plasma trapped in the magnetic fields overlying active regions. The F10.7 index is a measure of the solar radio flux per unit frequency at a wavelength of 10.7 cm, near the peak of the observed solar radio emission. F10.7 is often expressed in SFU, or solar flux units (1 SFU = 10^−22 W m^−2 Hz^−1). It represents a measure of diffuse, nonradiative coronal plasma heating. It is an excellent indicator of overall solar activity levels and correlates well with solar UV emissions. Sunspot activity has a major effect on long-distance radio communications, particularly on the shortwave bands, although medium wave and low VHF frequencies are also affected. High levels of sunspot activity lead to improved signal propagation on higher frequency bands, although they also increase the levels of solar noise and ionospheric disturbances. These effects are caused by the impact of the increased level of solar radiation on the ionosphere. The 10.7 cm solar flux could also interfere with point-to-point terrestrial communications. Clouds Speculations about the effects of cosmic-ray changes over the cycle potentially include: Changes in ionization affect the aerosol abundance that serves as the condensation nuclei for cloud formation. During solar minima more cosmic rays reach Earth, potentially creating ultra-small aerosol particles as precursors to cloud condensation nuclei. Clouds formed from greater amounts of condensation nuclei are brighter, longer-lived, and likely to produce less precipitation. A change in cosmic rays could affect certain types of clouds.
It was proposed that, particularly at high latitudes, cosmic ray variation may impact terrestrial low-altitude cloud cover (in contrast to a lack of correlation with high-altitude clouds), partially influenced by the solar-driven interplanetary magnetic field (as well as passage through the galactic arms over longer timeframes), but this hypothesis was not confirmed. Later papers showed that the production of clouds via cosmic rays could not be explained by nucleation particles. Accelerator results failed to produce sufficient, and sufficiently large, particles to result in cloud formation; this includes observations after a major solar storm. Observations after Chernobyl do not show any induced clouds. Terrestrial Organisms The impact of the solar cycle on living organisms has been investigated (see chronobiology). Some researchers claim to have found connections with human health. The amount of ultraviolet (UVB) light at 300 nm reaching the Earth's surface varies by a few percent over the solar cycle due to variations in the protective ozone layer. In the stratosphere, ozone is continuously regenerated by the splitting of O2 molecules by ultraviolet light. During a solar minimum, the decrease in ultraviolet light received from the Sun leads to a decrease in the concentration of ozone, allowing increased UVB to reach the Earth's surface. Radio communication Skywave modes of radio communication operate by bending (refracting) radio waves (electromagnetic radiation) through the ionosphere. During the "peaks" of the solar cycle, the ionosphere becomes increasingly ionized by solar photons and cosmic rays. This affects the propagation of radio waves in complex ways that can either facilitate or hinder communications. Forecasting of skywave modes is of considerable interest to commercial marine and aircraft communications, amateur radio operators and shortwave broadcasters. These users occupy frequencies within the High Frequency or "HF" radio spectrum that are most affected by these solar and ionospheric variations. Changes in solar output affect the maximum usable frequency, a limit on the highest frequency usable for communications. Climate Both long-term and short-term variations in solar activity are proposed to potentially affect global climate, but it has proven challenging to show any link between solar variation and climate. Early research attempted to correlate solar activity with weather, with limited success, followed by attempts to correlate solar activity with global temperature. The cycle also impacts regional climate. Measurements from SORCE's Spectral Irradiance Monitor show that solar UV variability produces, for example, colder winters in the U.S. and northern Europe and warmer winters in Canada and southern Europe during solar minima. Three proposed mechanisms mediate solar variations' climate impacts: total solar irradiance ("radiative forcing"); ultraviolet irradiance, since the UV component varies by more than the total, so if UV has, for some as-yet-unknown reason, a disproportionate effect, this might affect climate; and solar wind-mediated galactic cosmic ray changes, which may affect cloud cover. The solar cycle variation of 0.1% has small but detectable effects on the Earth's climate.
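The global-mean forcing implied by this 0.1% variation can be estimated with a simple back-of-the-envelope calculation. The sketch below assumes the usual geometric factor of 4 for spreading sunlight over the Earth's sphere and a planetary albedo of about 0.3; the numbers are illustrative, not published figures.

```python
# Back-of-the-envelope estimate of the radiative forcing from the ~0.1% solar-cycle TSI variation.
TSI = 1361.5        # average total solar irradiance ("solar constant"), W/m2, as quoted above
VARIATION = 0.001   # ~0.1% peak-to-peak amplitude over the cycle
ALBEDO = 0.3        # assumed planetary albedo

delta_tsi = TSI * VARIATION                 # change at the top of the atmosphere, ~1.4 W/m2
forcing = delta_tsi / 4 * (1 - ALBEDO)      # averaged over the sphere, minus the reflected fraction

print(f"Peak-to-peak TSI change: {delta_tsi:.2f} W/m2")
print(f"Global-mean forcing:     {forcing:.2f} W/m2")
```

The result is roughly 0.24 W/m2 peak to peak, or about 0.12 W/m2 either side of the mean, which is why the cycle's direct radiative effect on climate is small compared with the forcing from accumulating greenhouse gases.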
Camp and Tung suggest that solar irradiance correlates with a variation of 0.18 K ±0.08 K (0.32 °F ±0.14 °F) in measured average global temperature between solar maximum and minimum. Other effects include one study which found a relationship with wheat prices, and another that found a weak correlation with the flow of water in the Paraná River. Eleven-year cycles have been found in tree-ring thicknesses and in layers at the bottom of a lake hundreds of millions of years ago. The current scientific consensus on climate change is that solar variations play only a marginal role in driving global climate change, since the measured magnitude of recent solar variation is much smaller than the forcing due to greenhouse gases. Also, average solar activity in the 2010s was no higher than in the 1950s (see above), whereas average global temperatures had risen markedly over that period. Otherwise, the level of understanding of solar impacts on weather is low. Solar variations also affect the orbital decay of objects in low Earth orbit (LEO) by altering the density of the upper thermosphere. Solar dynamo The 11-year solar cycle is thought to be one-half of a 22-year Babcock–Leighton solar dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields that is mediated by solar plasma flows, which also provide energy to the dynamo system at every step. At solar-cycle maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convection zone forces emergence of the toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east–west with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon described by Hale's law. During the solar cycle's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number. At solar minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare, and the poloidal field is at maximum strength. During the next cycle, differential rotation converts magnetic energy back from the poloidal to the toroidal field, with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle corresponds to a change in the polarity of the Sun's large-scale magnetic field. Solar dynamo models indicate that plasma flux transport processes in the solar interior such as differential rotation, meridional circulation and turbulent pumping play an important role in the recycling of the toroidal and poloidal components of the solar magnetic field (Hazra and Nandy 2016). The relative strengths of these flux transport processes also determine the "memory" of the solar cycle, which plays an important role in physics-based predictions of the solar cycle. Yeates, Nandy and Mackay (2008) and Karak and Nandy (2012), in particular, utilized stochastically forced non-linear solar dynamo simulations to establish that the solar cycle memory is short, lasting about one cycle, thus implying that accurate predictions are possible only for the next solar cycle and not beyond.
This postulate of a short, one-cycle memory in the solar dynamo mechanism was later observationally verified by Muñoz-Jaramillo et al. (2013). Although the tachocline has long been thought to be the key to generating the Sun's large-scale magnetic field, recent research has questioned this assumption. Radio observations of brown dwarfs have indicated that they also maintain large-scale magnetic fields and may display cycles of magnetic activity. The Sun has a radiative core surrounded by a convective envelope, and at the boundary of these two is the tachocline. However, brown dwarfs lack radiative cores and tachoclines. Their structure consists of a solar-like convective envelope that exists from core to surface. Since they lack a tachocline yet still display solar-like magnetic activity, it has been suggested that solar magnetic activity is only generated in the convective envelope. Speculated influence of the planets A 2012 paper proposed that the torque exerted by the planets on a non-spherical tachocline layer deep in the Sun may synchronize the solar dynamo. Its results were later shown to be an artifact of an incorrectly applied smoothing method that led to aliasing. Additional models incorporating the influence of planetary forces on the Sun have since been proposed. However, solar variability is known to be essentially stochastic and unpredictable beyond one solar cycle, which contradicts the idea of a deterministic planetary influence on the solar dynamo. Modern dynamo models are able to reproduce the solar cycle without any planetary influence. In 1974, the book The Jupiter Effect suggested that the alignment of the planets would alter the Sun's solar wind and, in turn, Earth's weather, culminating in multiple catastrophes on March 10, 1982. None of the catastrophes occurred. In 2023, a paper by Cionco et al. showed that the suspected tidal effects of Venus and Jupiter on the Sun are unlikely to be significant relative to the Sun's total tidal-generating potential. See also References General references Hathaway, David (2015). "The solar cycle". Living Reviews in Solar Physics. 12 (1): 4. arXiv:1502.07020. Bibcode:2015LRSP...12....4H. doi:10.1007/lrsp-2015-4. PMC 4841188. PMID 27194958. Usoskin, Ilya (2017). "A history of solar activity over millennia". Living Reviews in Solar Physics. 14 (1): 3. arXiv:0810.3972. Bibcode:2017LRSP...14....3U. doi:10.1007/s41116-017-0006-9. S2CID 195340740. Willson, Richard C.; H.S. Hudson (1991). "The Sun's luminosity over a complete solar cycle". Nature. 351 (6321): 42–4. Bibcode:1991Natur.351...42W. doi:10.1038/351042a0. S2CID 4273483. Foukal, Peter; et al. (1977). "The effects of sunspots and faculae on the solar constant". Astrophysical Journal. 215: 952. Bibcode:1977ApJ...215..952F. doi:10.1086/155431. Dziembowski, W.A.; P.R. Goode; J. Schou (2001). "Does the sun shrink with increasing magnetic activity?". Astrophysical Journal. 553 (2): 897–904. arXiv:astro-ph/0101473. Bibcode:2001ApJ...553..897D. doi:10.1086/320976. S2CID 18375710. Stetson, H.T. (1937). Sunspots and Their Effects. New York: McGraw Hill. Yaskell, Steven Haywood (31 December 2012). Grand Phases On The Sun: The case for a mechanism responsible for extended solar minima and maxima. Trafford Publishing. ISBN 978-1-4669-6300-9. External links NOAA / NESDIS / NGDC (2002) Solar Variability Affecting Earth NOAA CD-ROM NGDC-05/01. This CD-ROM contains over 100 solar-terrestrial and related global data bases covering the period through April 1990. Solanki, S.K.; Fligge, M. (2001).
Wilson, A. (ed.). Long-term changes in solar irradiance. Proceedings of the 1st Solar and Space Weather Euroconference, 25–29 September 2000, Santa Cruz de Tenerife, Tenerife, Spain. The Solar Cycle and Terrestrial Climate. Vol. 463. ESA Publications Division. pp. 51–60. Bibcode:2000ESASP.463...51S. ISBN 978-9290926931. ESA SP-463. Wu, C.J.; Krivova, N.; Solanki, S.K.; Usoskin, I.G. (2018). "Solar total and spectral irradiance reconstruction over the last 9000 years". Astronomy & Astrophysics. 620: A120. arXiv:1811.03464. Bibcode:2018A&A...620A.120W. doi:10.1051/0004-6361/201832956. S2CID 118843780. Recent Total Solar Irradiance data Archived 2013-07-06 at the Wayback Machine updated every Monday N0NBH Solar data and tools SolarCycle24.com Solar Physics Web Pages at NASA's Marshall Space Flight Center Science Briefs: Do Variations in the Solar Cycle Affect Our Climate System?. By David Rind, NASA GISS, January 2009 Yohkoh Public Outreach Project Stanford Solar Center NASA's Cosmos Windows to the Universe: The Sun SOHO Web Site TRACE Web Site Solar Influences Data Analysis Center Solar Cycle Update: Twin Peaks?. 2013. SunSpotWatch.com (since 1999)
energy in spain
Primary energy consumption in Spain in 2015 was mainly composed of fossil fuels. The largest sources were petroleum (42.3%), natural gas (19.8%) and coal (11.6%). The remaining 26.3% was accounted for by nuclear energy (12%) and different renewable energy sources (14.3%). Domestic production of primary energy included nuclear (44.8%), solar, wind and geothermal (22.4%), biomass and waste (21.1%), hydropower (7.2%) and fossil fuels (4.5%). Energy statistics Energy plan By 2030, the government plans to have 62 GW of wind power installed, 76 GW of photovoltaic power, 4.8 GW of solar thermal power, 1.4 GW of biomass power, and 22 GW of storage. This should give a 32% decrease in greenhouse gas emissions compared to 1990. The long-term plan is to achieve carbon neutrality before 2050. Energy Sources Fossil fuels Coal In December 2021, Spain had 4.1 GW of coal-fired capacity, although the plan is to phase it out by 2030. Natural gas Algeria is Spain's largest natural gas supplier. Amid the deterioration of relations with Morocco, Algeria decided not to renew the contract of the Maghreb–Europe Gas Pipeline (GME), which expired at midnight on 31 October 2021. From 1 November, natural gas exports to Spain have primarily been transported through the Medgaz pipeline (with the short-term possibility of covering further demand either by expanding Medgaz or by shipping LNG). The demand for natural gas in Spain in 2022 was 364.3 TWh. Gas used for power generation increased 52% from 2021, with household, commercial and industrial demand for gas falling 21% to 226.4 TWh. Renewable energy Renewable energy includes wind, hydro and solar power, biomass and geothermal energy sources. Spain has long been a leader in renewable energy, and recently became the first country in the world to have relied on wind as its top energy source for an entire year. The country aimed to use wind power to supply 40 percent of its electricity consumption by 2020. At the same time, Spain is also developing other renewable sources of energy, particularly solar photovoltaics. Renewables-based power in Spain reached 46.7% of total power consumption in 2021. Solar power In 2013, solar accounted for 3.1 percent of Spain's total electricity, when capacity was 4,638 MW. By 2022, Spain had increased its solar capacity to 19,113 MW. Wind power In 2021, wind power provided 24% of Spain's total installed power generation capacity and 23% of total power generation. In 2023, it was expected to reach 30 GW of capacity. Biomass Biomass provides around 2% of electricity generation capacity. Nuclear energy Spain has seven operating nuclear reactors, and in 2022 they generated 20.25% of the country's electricity. The government's energy and climate plan specifies that installed nuclear capacity will remain at current levels of about 7,100 MW until at least 2025, and will then reduce to just over 3,000 MW from 2030 onwards. Hydro energy In 2021, hydropower provided 17% of Spain's total installed power generation capacity and 11% of total power generation. Drought impacts electricity production from hydro in the summer months. Global warming According to the Energy Information Administration, Spain's CO2 emissions from energy consumption in 2009 were 360 Mt, below those of Italy (450 Mt) and France (429 Mt) and above those of Poland (295 Mt) and the Netherlands (250 Mt). Per-capita emissions were 7.13 tonnes in Spain, 7.01 in Italy, 6.3 in France, 7.43 in Poland, and 14.89 in the Netherlands. See also Electricity sector in Spain Ayoluengo oil field References
lake tanganyika
Lake Tanganyika is an African Great Lake. It is the second-oldest freshwater lake in the world, the second-largest by volume, and the second-deepest, in all cases after Lake Baikal in Siberia. It is the world's longest freshwater lake. The lake is shared among four countries – Tanzania, the Democratic Republic of the Congo (DRC), Burundi, and Zambia – with Tanzania (46%) and the DRC (40%) possessing the majority of the lake. It drains into the Congo River system and ultimately into the Atlantic Ocean. Geography Lake Tanganyika is situated within the Albertine Rift, the western branch of the East African Rift, and is confined by the mountainous walls of the valley. It is the largest rift lake in Africa and the second-largest lake by volume in the world. It is the deepest lake in Africa and holds the greatest volume of fresh water on the continent, accounting for 16% of the world's available fresh water. It extends for 676 km (420 mi) in a general north–south direction and averages 50 km (31 mi) in width. The lake covers 32,900 km2 (12,700 sq mi), with a shoreline of 1,828 km (1,136 mi), a mean depth of 570 m (1,870 ft) and a maximum depth of 1,470 m (4,820 ft) (in the northern basin). It holds an estimated 18,750 km3 (4,500 cu mi). The catchment area of the lake is 231,000 km2 (89,000 sq mi). Two main rivers flow into the lake, as well as numerous smaller rivers and streams (whose lengths are limited by the steep mountains around the lake). The one major outflow is the Lukuga River, which empties into the Congo River drainage. Precipitation and evaporation play a greater role than the rivers. At least 90% of the water influx is from rain falling on the lake's surface, and at least 90% of the water loss is from direct evaporation. The major river flowing into the lake is the Ruzizi River, formed about 10,000 years ago, which enters the north of the lake from Lake Kivu. The Malagarasi River, which is Tanzania's second largest river, enters the east side of Lake Tanganyika. The Malagarasi is older than Lake Tanganyika, and before the lake was formed, it probably was a headwater of the Lualaba River, the main Congo River headstream. The lake has a complex history of changing flow patterns, due to its high altitude, great depth, slow rate of refill, and mountainous location in a turbulently volcanic area that has undergone climate changes. Apparently, it has rarely in the past had an outflow to the sea. It has been described as "practically endorheic" for this reason. The lake's connection to the sea is dependent on a high water level allowing water to overflow out of the lake through the Lukuga River into the Congo. When not overflowing, the lake's exit into the Lukuga River typically is blocked by sand bars and masses of weed, and instead this river depends on its own tributaries, especially the Niemba River, to maintain a flow. The lake may also have at times had different inflows and outflows; inward flows from a higher Lake Rukwa, access to Lake Malawi and an exit route to the Nile have all been proposed to have existed at some point in the lake's history. Lake Tanganyika is an ancient lake, one of only twenty more than a million years old. Its three basins, which in periods with much lower water levels were separate lakes, are of different ages. The central basin began to form 9–12 million years ago (Mya), the northern 7–8 Mya, and the southern 2–4 Mya. Water characteristics The lake's water is alkaline with a pH around 9 at depths of 0–100 m (0–330 ft).
Below this, it is around 8.7, gradually decreasing to 8.3–8.5 in the deepest parts of Tanganyika. A similar pattern can be seen in the electrical conductivity, ranging from about 670 μS/cm in the upper part to 690 μS/cm in the deepest. Surface temperatures generally range from about 24 °C (75 °F) in the southern part of the lake in early August to 28–29 °C (82–84 °F) in the late rainy season in March–April. At depths greater than 400 m (1,300 ft), the temperature is very stable at 23.1–23.4 °C (73.6–74.1 °F). The water has gradually warmed since the 19th century, and this has accelerated with global warming since the 1950s. The lake is stratified, and seasonal mixing generally does not extend beyond depths of 150 m (490 ft). The mixing mainly occurs as upwellings in the south and is wind-driven, but to a lesser extent, up- and downwellings also occur elsewhere in the lake. As a consequence of the stratification, the deep sections contain "fossil water". This also means that the lake has no oxygen (it is anoxic) in its deeper parts, essentially limiting fish and other aerobic organisms to the upper part. Some geographical variations are seen in this limit, but it is typically at depths around 100 m (330 ft) in the northern part of the lake and 240–250 m (790–820 ft) in the south. The oxygen-devoid deepest sections contain high levels of toxic hydrogen sulphide and are essentially lifeless, except for bacteria. Biology Reptiles Lake Tanganyika and associated wetlands are home to Nile crocodiles (including the famous giant Gustave), Zambian hinged terrapins, serrated hinged terrapins, and pan hinged terrapins (the last species not in the lake itself, but in adjacent lagoons). Storm's water cobra, a threatened subspecies of the banded water cobra that feeds mainly on fish, is found only in Lake Tanganyika, where it prefers rocky shores. Cichlid fish The lake holds at least 250 species of cichlid fish, and undescribed species remain. Almost all (98%) of the Tanganyika cichlids are endemic to the lake, and it is thus an important biological resource for the study of speciation in evolution. Some of the endemics do occur slightly into the upper Lukuga River, Lake Tanganyika's outflow, but further spread into the Congo River basin is prevented by physical barriers (the Lukuga has fast-flowing sections with many rapids and waterfalls) and by water chemistry (Tanganyika's water is alkaline, while the Congo's generally is acidic). The cichlids of the African Great Lakes, including Tanganyika, represent the most diverse extent of adaptive radiation in vertebrates. Although Tanganyika has far fewer cichlid species than Lakes Malawi and Victoria, which have both experienced relatively recent explosive species radiations (resulting in many closely related species), its cichlids are the most morphologically and genetically diverse. This is linked to the great age of Tanganyika, as it is far older than the other lakes. Tanganyika has the largest number of endemic cichlid genera of all African lakes. All Tanganyika cichlids are in the subfamily Pseudocrenilabrinae. Of the 10 tribes in this subfamily, half are largely or entirely restricted to the lake (Cyprichromini, Ectodini, Lamprologini, Limnochromini and Tropheini) and another three have species in the lake (Haplochromini, Tilapiini and Tylochromini).
Others have proposed splitting the Tanganyika cichlids into as many as 12–16 tribes (in addition to those previously mentioned, Bathybatini, Benthochromini, Boulengerochromini, Cyphotilapiini, Eretmodini, Greenwoodochromini, Perissodini and Trematocarini). Most Tanganyika cichlids live along the shoreline down to a depth of 100 m (330 ft), but some deep-water species regularly descend to 200 m (660 ft). Trematocara species have exceptionally been found at more than 300 m (980 ft), which is deeper than any other cichlid in the world. Some of the deep-water cichlids (e.g., Bathybates, Gnathochromis, Hemibates and Xenochromis) have been caught in places virtually devoid of oxygen, but how they are able to survive there is unclear. Tanganyika cichlids are generally benthic (found at or near the bottom) and/or coastal. No Tanganyika cichlids are truly pelagic and offshore, except for some of the piscivorous Bathybates. Two of these, B. fasciatus and B. leo, mainly feed on Tanganyika sardines. Tanganyika cichlids differ extensively in ecology and include species that are herbivores, detritivores, planktivores, insectivores, molluscivores, scavengers, scale-eaters and piscivores. These dietary specializations, however, have been shown to be flexible: many species of Tanganyikan cichlid with specialized diets showed opportunistic, episodic exploitation of Stolothrissa tanganicae and Limnothrissa miodon when prey concentrations were unusually high. Their breeding behaviors fall into two main groups: the substrate spawners (often in caves or rock crevices) and the mouthbrooders. Among the endemic species are two of the world's smallest cichlids, Neolamprologus multifasciatus and N. similis (both shell dwellers) at up to 4–5 cm (1.6–2.0 in), and one of the largest, the giant cichlid (Boulengerochromis microlepis), at up to 90 cm (3.0 ft). Many cichlids from Lake Tanganyika, such as species from the genera Altolamprologus, Cyprichromis, Eretmodus, Julidochromis, Lamprologus, Neolamprologus, Tropheus and Xenotilapia, are popular aquarium fish due to their bright colors and patterns and their interesting behaviors. Recreating a Lake Tanganyika biotope to host those cichlids in a habitat similar to their natural environment is also popular in the aquarium hobby. Cichlid tribes in Lake Tanganyika (E = tribe endemic or near-endemic) Other fish Lake Tanganyika is home to more than 80 species of non-cichlid fish, and about 60% of these are endemic. The open waters of the pelagic zone are dominated by four non-cichlid species: two species of "Tanganyika sardine" (Limnothrissa miodon and Stolothrissa tanganicae) form the largest biomass of fish in this zone, and they are important prey for the forktail lates (Lates microlepis) and sleek lates (L. stappersii). Two additional lates are found in the lake, the Tanganyika lates (L. angustifrons) and bigeye lates (L. mariae), but both of these are primarily benthic hunters, although they may also move into open waters. The four lates, all endemic to Tanganyika, have been overfished, and larger individuals are rare today. Among the more unusual fish in the lake are the endemic, facultatively brood-parasitic "cuckoo catfish", including at least Synodontis grandiops and S. multipunctatus. A number of others are very similar (e.g., S. lucipinnis and S. petricola) and have often been confused; it is unclear if they have a similar behavior. The facultative brood parasites often lay their eggs synchronously with mouthbrooding cichlids.
The cichlids pick up the eggs in their mouths as if they were their own. Once the catfish eggs hatch, the young eat the cichlid eggs. Six catfish genera are entirely restricted to the lake basin: Bathybagrus, Dinotopterus, Lophiobagrus, Phyllonemus, Pseudotanganikallabes and Tanganikallabes. Although not endemic at the genus level, six species of Chrysichthys catfish are only found in the Tanganyika basin, where they live both in shallow and relatively deep waters; in the latter habitat they are the primary predators and scavengers. A unique evolutionary radiation in the lake is the 15 species of Mastacembelus spiny eels, all but one endemic to its basin. Although other African Great Lakes have Synodontis catfish, endemic catfish genera and Mastacembelus spiny eels, the relatively high diversity is unique to Tanganyika, which is likely related to its great age. Among the non-endemic fish, some are widespread African species but several are only shared with the Malagarasi and Congo River basins, such as the Congo bichir (Polypterus congicus), goliath tigerfish (Hydrocynus goliath), Citharinus citharus, six-banded distichodus (Distichodus sexfasciatus) and mbu puffer (Tetraodon mbu). Molluscs and crustaceans A total of 83 freshwater snail species (65 endemic) and 11 bivalve species (8 endemic) are known from the lake. Among the endemic bivalves are three monotypic genera: Grandidieria burtoni, Pseudospatha tanganyicensis and Brazzaea anceyi. Many of the snails are unusual for species living in freshwater in having noticeably thickened shells and/or distinct sculpture, features more commonly seen in marine snails. They are referred to as thalassoids, which can be translated as "marine-like". All the Tanganyika thalassoids, which are part of Prosobranchia, are endemic to the lake. Initially they were believed to be related to similar marine snails, but they are now known to be unrelated. Their appearance is now believed to be the result of the highly diverse habitats in Lake Tanganyika and evolutionary pressure from snail-eating fish and, in particular, Platythelphusa crabs. A total of 17 freshwater snail genera are endemic to the lake, such as Hirthia, Lavigeria, Paramelania, Reymondia, Spekia, Stanleya, Tanganyicia and Tiphobia. There are about 30 species of non-thalassoid snails in the lake, but only five of these are endemic, including Ferrissia tanganyicensis and Neothauma tanganyicense. The latter is the largest Tanganyika snail and its shell is often used by small shell-dwelling cichlids. Crustaceans are also highly diverse in Tanganyika, with more than 200 species, of which more than half are endemic. They include 10 species of freshwater crabs (9 Platythelphusa and Potamonautes platynotus; all endemic), at least 11 species of small atyid shrimp (Atyella, Caridella and Limnocaridina), an endemic palaemonid shrimp (Macrobrachium moorei), about 100 ostracods, including many endemics, and several copepods. Among these, Limnocaridina iridinae lives inside the mantle cavity of the unionid mussel Pleiodon spekei, making it one of only two known commensal species of freshwater shrimp (the other is the sponge-living Caridina spongicola from Lake Towuti, Indonesia). Among Rift Valley lakes, Lake Tanganyika far surpasses all others in terms of crustacean and freshwater snail richness (both in total number of species and number of endemics). For example, the only other Rift Valley lakes with endemic freshwater crabs are Lake Kivu and Lake Victoria, with two species each. 
Other invertebrates The diversity of other invertebrate groups in Lake Tanganyika is often not well known, but there are at least 20 described species of leeches (12 endemic), 9 sponges (7 endemic), 6 bryozoa (2 endemic), 11 flatworms (7 endemic), 20 nematodes (7 endemic), 28 annelids (17 endemic) and the small hydrozoan jellyfish Limnocnida tanganyicae. Fishing Lake Tanganyika supports a major fishery, which, depending on the source, provides 25–40% or c. 60% of the animal protein in the diet of the people living in the region. Lake Tanganyika fish can be found exported throughout East Africa. Major commercial fishing began in the mid-1950s and has, together with global warming, had a heavy impact on the fish populations, causing significant declines. In 2016, it was estimated that the total catch was up to 200,000 tonnes. History It is thought that early Homo sapiens were making an impact on the region during the Stone Age. The time period of the Middle Stone Age to Late Stone Age is described as an age of advanced hunter-gatherers. The native people of the area traditionally fished using several methods, most of which involved using a lantern as a lure for fish that are attracted to light. There were three basic forms. The first, called lusenga, used a wide net handled by one person from a canoe. The second used a lift net, dropped deep below the boat from two parallel canoes and then pulled up by both simultaneously. The third, called chiromila, involved three canoes: one canoe was stationary with a lantern, while another held one end of the net and the third circled the stationary one to meet up with the net. The first known Westerners to find the lake were the British explorers Richard Burton and John Speke, in 1858. They located it while searching for the source of the Nile River. Speke continued and found the actual source, Lake Victoria. Later David Livingstone passed by the lake. He noted the name "Liemba" for its southern part, a word probably from the Fipa language. Tanganyika means "stars" in the Luvale language. The lake was the scene of the Battle for Lake Tanganyika during World War I. With the aid of the Graf Goetzen, the Germans had complete control of the lake in the early stages of the war. The ship was used both to ferry cargo and personnel across the lake, and as a base from which to launch surprise attacks on Allied troops. It therefore became essential for the Allied forces to gain control of the lake themselves. Under the command of Lieutenant Commander Geoffrey Spicer-Simson, the British Royal Navy achieved the monumental task of bringing two armed motor boats, HMS Mimi and HMS Toutou, from England to the lake by rail, road and river to Albertville (renamed Kalemie in 1971) on the western shore of Lake Tanganyika. The two boats waited until December 1915, and mounted a surprise attack on the Germans, capturing the gunboat Kingani. Another German vessel, the Hedwig, was sunk in February 1916, leaving the Götzen as the only German vessel remaining to control the lake. To avoid his prize ship falling into Allied hands, the German commander, Zimmer, scuttled the vessel on July 26, 1916. The vessel was later raised in 1924 and renamed MV Liemba.
triassic–jurassic extinction event
The Triassic–Jurassic (Tr-J) extinction event (TJME), often called the end-Triassic extinction, marks the boundary between the Triassic and Jurassic periods, 201.4 million years ago, and is one of the top five major extinction events of the Phanerozoic eon, profoundly affecting life on land and in the oceans. In the seas, the entire class of conodonts and 23–34% of marine genera disappeared. On land, all archosauromorphs other than crocodylomorphs, pterosaurs, and dinosaurs became extinct; some of the groups which died out were previously abundant, such as aetosaurs, phytosaurs, and rauisuchids. Some remaining non-mammalian therapsids and many of the large temnospondyl amphibians had become extinct prior to the Jurassic as well. However, there is still much uncertainty regarding a connection between the Tr-J boundary and terrestrial vertebrates, due to a lack of terrestrial fossils from the Rhaetian (latest) stage of the Triassic. Plants, crocodylomorphs, dinosaurs, pterosaurs and mammals were left fairly untouched; this allowed the dinosaurs, pterosaurs, and crocodylomorphs to become the dominant land animals for the next 135 million years. Statistical analysis of marine losses at this time suggests that the decrease in diversity was caused more by a decrease in speciation than by an increase in extinctions. Nevertheless, a pronounced turnover in plant spores and a collapse of coral reef communities indicate that an ecological catastrophe did occur at the Triassic-Jurassic boundary. Older hypotheses on the extinction have proposed that gradual climate or sea level change may be the culprit, or perhaps one or more asteroid strikes. However, the most well-supported and widely held theory for the cause of the Tr-J extinction places the blame on the start of volcanic eruptions in the Central Atlantic Magmatic Province (CAMP), which released a large amount of carbon dioxide into Earth's atmosphere, inducing profound global warming along with ocean acidification. Effects This event vacated terrestrial ecological niches, allowing the dinosaurs to assume the dominant roles in the Jurassic period. This event happened in less than 10,000 years and occurred just before Pangaea started to break apart. In the area of Tübingen (Germany), a Triassic–Jurassic bonebed can be found, which is characteristic of this boundary. Marine invertebrates The Triassic-Jurassic extinction completed the transition from the Palaeozoic evolutionary fauna to the Modern evolutionary fauna, a change that began in the aftermath of the end-Guadalupian extinction and continued following the Permian-Triassic extinction event (PTME). Ammonites were affected substantially by the Triassic-Jurassic extinction. Ceratitidans, the most prominent group of ammonites in the Triassic, became extinct at the end of the Rhaetian after having their diversity reduced significantly in the Norian. Other ammonite groups such as the Ammonitina, Lytoceratina, and Phylloceratina diversified from the Early Jurassic onward. Bivalves experienced high extinction rates in the early and middle Rhaetian. The Lilliput effect affected megalodontid bivalves, whereas file shell bivalves experienced the Brobdingnag effect, the reverse of the Lilliput effect, as a result of the mass extinction event. There is some evidence of a bivalve cosmopolitanism event during the mass extinction. 
Gastropod diversity was barely affected at the Triassic-Jurassic boundary, although gastropods gradually suffered numerous losses over the late Norian and Rhaetian, during the leadup to the TJME. Plankton diversity was relatively mildly impacted at the Triassic-Jurassic boundary, although extinction rates among radiolarians rose significantly. Brachiopods declined in diversity at the end of the Triassic before rediversifying in the Sinemurian and Pliensbachian. Conulariids seemingly completely died out at the end of the Triassic. Around 96% of coral genera died out, with integrated corals being especially devastated. Corals practically disappeared from the Tethys Ocean at the end of the Triassic except for its northernmost reaches, resulting in an early Hettangian "coral gap". There is good evidence for a collapse in the reef community, which was likely driven by ocean acidification resulting from CO2 supplied to the atmosphere by the CAMP eruptions. Most evidence points to a relatively fast recovery from the mass extinction. Benthic ecosystems recovered far more rapidly after the TJME than they did after the PTME. British Early Jurassic benthic marine environments display a relatively rapid recovery that began almost immediately after the end of the mass extinction despite numerous relapses into anoxic conditions during the earliest Jurassic. In the Neuquén Basin, recovery began in the late early Hettangian and lasted until a new biodiversity equilibrium in the late Hettangian. Despite recurrent anoxic episodes, large bivalves likewise began to reappear shortly after the extinction event. Siliceous sponges dominated the immediate aftermath interval thanks to the enormous influx of silica into the oceans from the weathering of the CAMP's areally extensive basalts. Some clades recovered more slowly than others, however, as exemplified by corals and their disappearance in the early Hettangian. Marine vertebrates Fish did not suffer a mass extinction at the end of the Triassic. The late Triassic in general did experience a gradual drop in actinopterygian diversity after an evolutionary explosion in the middle Triassic. Though this may have been due to falling sea levels or the Carnian pluvial event, it may instead be a result of sampling bias, considering that middle Triassic fish have been more extensively studied than late Triassic fish. Despite the apparent drop in diversity, neopterygians (which include most modern bony fish) suffered less than more "primitive" actinopterygians, indicating a biological turnover where modern groups of fish started to supplant earlier groups. Conodonts, which were prominent index fossils throughout the Paleozoic and Triassic, finally became extinct at the T-J boundary following declining diversity. Like fish, marine reptiles experienced a substantial drop in diversity between the middle Triassic and the Jurassic. However, their extinction rate at the Triassic–Jurassic boundary was not elevated. The highest extinction rates experienced by Mesozoic marine reptiles actually occurred at the end of the Ladinian stage, which corresponds to the end of the middle Triassic. The only marine reptile families which became extinct at or slightly before the Triassic–Jurassic boundary were the placochelyids (the last family of placodonts), and giant ichthyosaurs such as shastasaurids and shonisaurids. 
Nevertheless, some authors have argued that the end of the Triassic acted as a genetic "bottleneck" for ichthyosaurs, which never regained the level of anatomical diversity and disparity which they possessed during the Triassic. Terrestrial vertebrates One of the earliest pieces of evidence for a late Triassic extinction was a major turnover in terrestrial tetrapods such as amphibians, reptiles, and synapsids. Edwin H. Colbert drew parallels between the system of extinction and adaptation between the Triassic–Jurassic and Cretaceous-Paleogene boundaries. He recognized how dinosaurs, lepidosaurs (lizards and their relatives), and crocodyliforms (crocodilians and their relatives) filled the niches of more ancient groups of amphibians and reptiles which were extinct by the start of the Jurassic. Olsen (1987) estimated that 42% of all terrestrial tetrapods became extinct at the end of the Triassic, based on his studies of faunal changes in the Newark Supergroup of eastern North America. More modern studies have debated whether the turnover in Triassic tetrapods was abrupt at the end of the Triassic, or instead more gradual.During the Triassic, amphibians were mainly represented by large, crocodile-like members of the order Temnospondyli. Although the earliest lissamphibians (modern amphibians like frogs and salamanders) did appear during the Triassic, they would become more common in the Jurassic while the temnospondyls diminished in diversity past the Triassic–Jurassic boundary. Although the decline of temnospondyls did send shockwaves through freshwater ecosystems, it was probably not as abrupt as some authors have suggested. Brachyopoids, for example, survived until the Cretaceous according to new discoveries in the 1990s. Several temnospondyl groups did become extinct near the end of the Triassic despite earlier abundance, but it is uncertain how close their extinctions were to the end of the Triassic. The last known metoposaurids ("Apachesaurus") were from the Redonda Formation, which may have been early Rhaetian or late Norian. Gerrothorax, the last known plagiosaurid, has been found in rocks which are probably (but not certainly) Rhaetian, while a capitosaur humerus was found in Rhaetian-age deposits in 2018. Therefore, plagiosaurids and capitosaurs were likely victims of an extinction at the very end of the Triassic, while most other temnospondyls were already extinct. Terrestrial reptile faunas were dominated by archosauromorphs during the Triassic, particularly phytosaurs and members of Pseudosuchia (the reptile lineage which leads to modern crocodilians). In the early Jurassic and onwards, dinosaurs and pterosaurs became the most common land reptiles, while small reptiles were mostly represented by lepidosauromorphs (such as lizards and tuatara relatives). Among pseudosuchians, only small crocodylomorphs did not become extinct by the end of the Triassic, with both dominant herbivorous subgroups (such as aetosaurs) and carnivorous ones (rauisuchids) having died out. Phytosaurs, drepanosaurs, trilophosaurids, tanystropheids, and procolophonids, which were other common reptiles in the late Triassic, had also become extinct by the start of the Jurassic. However, pinpointing the extinction of these different land reptile groups is difficult, as the last stage of the Triassic (the Rhaetian) and the first stage of the Jurassic (the Hettangian) each have few records of large land animals. 
Some paleontologists have considered only phytosaurs and procolophonids to have become extinct at the Triassic-Jurassic boundary, with other groups having become extinct earlier. However, it is likely that many other groups survived up until the boundary according to British fissure deposits from the Rhaetian. Aetosaurs, kuehneosaurids, drepanosaurs, thecodontosaurids, "saltoposuchids" (like Terrestrisuchus), trilophosaurids, and various non-crocodylomorph pseudosuchians are all examples of Rhaetian reptiles which may have become extinct at the Triassic-Jurassic boundary. Terrestrial plants The extinction event marks a floral turnover as well, with estimates of the percentage of Rhaetian pre-extinction plants being lost ranging from 17% to 73%. Though spore turnovers are observed across the Triassic-Jurassic boundary, the abruptness of this transition and the relative abundances of given spore types both before and after the boundary are highly variable from one region to another, pointing to a global ecological restructuring rather than a mass extinction of plants. Overall, plants suffered minor diversity losses on a global scale as a result of the extinction, but species turnover rates were high and substantial changes occurred in terms of relative abundance and growth distribution among taxa. Evidence from Central Europe suggests that rather than a sharp, very rapid decline followed by an adaptive radiation, a more gradual turnover in both fossil plants and spores with several intermediate stages is observed over the course of the extinction event. Extinction of plant species can in part be explained by the suspected increased carbon dioxide in the atmosphere as a result of CAMP volcanic activity, which would have created photoinhibition and decreased transpiration levels among species with low photosynthetic plasticity, such as the broad leaved Ginkgoales which declined to near extinction across the Tr-J boundary.Ferns and other species with dissected leaves displayed greater adaptability to atmosphere conditions of the extinction event, and in some instances were able to proliferate across the boundary and into the Jurassic. In the Jiyuan Basin of North China, Classopolis content increased drastically in concordance with warming, drying, wildfire activity, enrichments in isotopically light carbon, and an overall reduction in floral diversity. In the Sichuan Basin, relatively cool mixed forests in the late Rhaetian were replaced by hot, arid fernlands during the Triassic-Jurassic transition, which in turn later gave way to a cheirolepid-dominated flora in the Hettangian and Sinemurian. The abundance of ferns in China that were resistant to high levels of aridity increased significantly across the Triassic-Jurassic boundary, though ferns better adapted for moist, humid environments declined, indicating that plants experienced major environmental stress, albeit not an outright mass extinction. In some regions, however, major floral extinctions did occur, with some researchers challenging the hypothesis of there being no significant floral mass extinction on this basis. In the Newark Supergroup of the United States East Coast, about 60% of the diverse monosaccate and bisaccate pollen assemblages disappear at the Tr–J boundary, indicating a major extinction of plant genera. Early Jurassic pollen assemblages are dominated by Corollina, a new genus that took advantage of the empty niches left by the extinction. 
Along the margins of the European Epicontinental Sea and the European shores of the Tethys, coastal and near-coastal mires fell victim to an abrupt sea level rise. These mires were replaced by a pioneering opportunistic flora after an abrupt sea level fall, although its heyday was short-lived and it died out shortly after its rise. In the Eiberg Basin of the Northern Calcareous Alps, there was a very rapid palynomorph turnover. Polyploidy may have been an important factor that mitigated a conifer species' risk of going extinct. Possible causes Gradual climate change Gradual climate change, sea-level fluctuations, or a pulse of oceanic acidification during the late Triassic may have reached a tipping point. However, the effect of such processes on Triassic animal and plant groups is not well understood. The extinctions at the end of the Triassic were initially attributed to gradually changing environments. In his 1958 study recognizing biological turnover between the Triassic and Jurassic, Edwin H. Colbert proposed that this extinction was a result of geological processes decreasing the diversity of land biomes. He considered the Triassic to be a period in which the world experienced a variety of environments, from towering highlands to arid deserts to tropical marshes. In contrast, the Jurassic period was much more uniform both in climate and elevation due to incursions by shallow seas. Later studies noted a clear trend towards increased aridification towards the end of the Triassic. Although high-latitude areas like Greenland and Australia actually became wetter, most of the world experienced more drastic changes in climate as indicated by geological evidence. This evidence includes an increase in carbonate and evaporite deposits (which are most abundant in dry climates) and a decrease in coal deposits (which primarily form in humid environments such as coal forests). In addition, the climate may have become much more seasonal, with long droughts interrupted by severe monsoons. The world gradually got warmer over this time as well; from the late Norian to the Rhaetian, mean annual temperatures rose by 7 to 9 °C. Sea level fall Geological formations in Europe seem to indicate a drop in sea levels in the late Triassic, and then a rise in the early Jurassic. Although falling sea levels have sometimes been considered a culprit for marine extinctions, evidence is inconclusive since many sea level drops in geological history are not correlated with increased extinctions. However, there is still some evidence that marine life was affected by secondary processes related to falling sea levels, such as decreased oxygenation (caused by sluggish circulation), or increased acidification. These processes do not seem to have been worldwide, with the sea level fall observed in European sediments believed to be not global but regional, but they may explain local extinctions in European marine fauna. A pronounced sea level fall in latest Triassic records from Lake Williston in northeastern British Columbia, which was then the northeastern margin of Panthalassa, resulted in an extinction event of infaunal (sediment-dwelling) bivalves, though not epifaunal ones. Extraterrestrial impact Some have hypothesized that an impact from an asteroid or comet caused the Triassic–Jurassic extinction, similar to the extraterrestrial object which was the main factor in the Cretaceous–Paleogene extinction about 66 million years ago, as evidenced by the Chicxulub crater in Mexico. 
However, so far no impact crater of sufficient size has been dated to precisely coincide with the Triassic–Jurassic boundary. Nevertheless, the late Triassic did experience several impacts, including the second-largest confirmed impact in the Mesozoic. The Manicouagan Reservoir in Quebec is one of the most visible large impact craters on Earth, and at 100 km (62 mi) in diameter it is tied with the Eocene Popigai impact structure in Siberia as the fourth-largest impact crater on Earth. Olsen et al. (1987) were the first scientists to link the Manicouagan crater to the Triassic–Jurassic extinction, citing its age, which at the time was roughly considered to be late Triassic. More precise radiometric dating by Hodych & Dunning (1992) has shown that the Manicouagan impact occurred about 214 million years ago, about 13 million years before the Triassic–Jurassic boundary. Therefore, it could not have been responsible for an extinction precisely at the Triassic–Jurassic boundary. Nevertheless, the Manicouagan impact did have a widespread effect on the planet; a 214-million-year-old ejecta blanket of shocked quartz has been found in rock layers as far away as England and Japan. There is still a possibility that the Manicouagan impact was responsible for a small extinction midway through the late Triassic at the Carnian–Norian boundary, although the disputed age of this boundary (and whether an extinction actually occurred in the first place) makes it difficult to correlate the impact with extinction. Onoue et al. (2016) alternatively proposed that the Manicouagan impact was responsible for a marine extinction in the middle of the Norian which affected radiolarians, sponges, conodonts, and Triassic ammonoids. Thus, the Manicouagan impact may have been partially responsible for the gradual decline in the latter two groups which culminated in their extinction at the Triassic–Jurassic boundary. The boundary between the Adamanian and Revueltian land vertebrate faunal zones, which involved extinctions and faunal changes in tetrapods and plants, was possibly also caused by the Manicouagan impact, although discrepancies between magnetochronological and isotopic dating lead to some uncertainty. Other Triassic craters are closer to the Triassic–Jurassic boundary but also much smaller than the Manicouagan reservoir. The eroded Rochechouart impact structure in France has most recently been dated to 201±2 million years ago, but at 25 km (16 mi) across (possibly up to 50 km (30 mi) across originally), it appears to be too small to have affected the ecosystem. The 40 km (25 mi) wide Saint Martin crater in Manitoba has been proposed as a candidate for a possible TJME-causing impact, but it has since been dated to the Carnian. Other putative or confirmed Triassic craters include the 80 km (50 mi) wide Puchezh-Katunki crater in Eastern Russia (though it may be Jurassic in age), the 15 km (9 mi) wide Obolon' crater in Ukraine, and the 9 km (6 mi) wide Red Wing Creek structure in North Dakota. Spray et al. (1998) noted that the Manicouagan, Rochechouart, and Saint Martin craters all seem to lie at the same latitude, and that the Obolon' and Red Wing craters form parallel arcs with the Rochechouart and Saint Martin craters, respectively. Spray and his colleagues hypothesized that the Triassic experienced a "multiple impact event", a large fragmented asteroid or comet which broke up and impacted the Earth in several places at the same time. 
Such an impact has been observed in the present day: Comet Shoemaker–Levy 9 broke apart in 1992 and hit Jupiter in 1994. However, the "multiple impact event" hypothesis for Triassic impact craters has not been well-supported; Kent (1998) noted that the Manicouagan and Rochechouart craters were formed in eras of different magnetic polarity, and radiometric dating of the individual craters has shown that the impacts occurred millions of years apart. Shocked quartz has been found in Rhaetian deposits from the Northern Apennines of Italy, providing possible evidence of an end-Triassic extraterrestrial impact. Certain trace metals indicative of a bolide impact have been found in the late Rhaetian, though not at the Triassic-Jurassic boundary itself; the discoverers of these trace metal anomalies suggest that such a bolide impact could only have been an indirect cause of the TJME. The discovery of seismites two to four metres thick coeval with the carbon isotope fluctuations associated with the TJME has been interpreted as evidence of a possible bolide impact, although no definitive link between these seismites and any impact event has been found. On the other hand, the dissimilarity between the isotopic perturbations characterising the TJME and those characterising the end-Cretaceous mass extinction makes an extraterrestrial impact highly unlikely to have been the cause of the TJME, according to many researchers. Various trace metal ratios, including palladium/iridium, platinum/iridium, and platinum/rhodium, in rocks deposited during the TJME have numerical values very different from what would be expected in an extraterrestrial impact scenario, providing further evidence against this hypothesis. Central Atlantic Magmatic Province The leading and best-evidenced explanation for the TJME is massive volcanic eruptions, specifically from the Central Atlantic Magmatic Province (CAMP), the largest known large igneous province by area, and one of the most voluminous, with its flood basalts extending across parts of southwestern Europe, northwestern Africa, northeastern South America, and southeastern North America. The coincidence and synchrony of CAMP activity and the TJME is indicated by uranium-lead dating, argon-argon dating, and palaeomagnetism. The isotopic composition of fossil soils and marine sediments near the boundary between the Late Triassic and Early Jurassic has been tied to a large negative δ13C excursion. Carbon isotopes of hydrocarbons (n-alkanes) derived from leaf wax and lignin, and total organic carbon from two sections of lake sediments interbedded with the CAMP in eastern North America have shown carbon isotope excursions similar to those found in the mostly marine St. Audrie's Bay section, Somerset, England; the correlation suggests that the end-Triassic extinction event began at the same time in marine and terrestrial environments, slightly before the oldest basalts in eastern North America but simultaneous with the eruption of the oldest flows in Morocco (also suggested by Deenen et al., 2010), with both a critical CO2 greenhouse and a marine biocalcification crisis. Contemporaneous CAMP eruptions, the mass extinction, and the carbon isotopic excursions are recorded in the same places, making the case for a volcanic cause of the mass extinction. The observed negative carbon isotope excursion is lower in some sites that correspond to what was then eastern Panthalassa because of the extreme aridity of western Pangaea limiting weathering and erosion there. 
The negative CIE associated with CAMP volcanism lasted for approximately 20,000 to 40,000 years, or about one or two of Earth's axial precession cycles, although the carbon cycle was so disrupted that it did not stabilise until the Sinemurian. Mercury anomalies from deposits in various parts of the world have further bolstered the volcanic cause hypothesis, as have anomalies from various platinum-group elements. Nickel enrichments are also observed at the Triassic-Jurassic boundary coevally with light carbon enrichments, providing yet more evidence of massive volcanism. Some scientists initially rejected the volcanic eruption theory, because the Newark Supergroup, a section of rock in eastern North America that records the Triassic–Jurassic boundary, contains no ash-fall horizons and its oldest basalt flows were estimated to lie around 10 m above the transition zone. However, updated dating protocols and wider sampling have confirmed that the CAMP eruptions started in Morocco only a few thousand years before the extinction, preceding their onset in Nova Scotia and New Jersey, and that they continued in several more pulses for the next 600,000 years. Volcanic global warming has also been criticised as an explanation because some estimates have found that the resulting rise in atmospheric carbon dioxide amounted to only around 250 ppm, not enough to generate a mass extinction. In addition, at some sites, changes in carbon isotope ratios have been attributed to diagenesis and not any primary environmental changes. Global warming The flood basalts of the CAMP released gigantic quantities of carbon dioxide, a potent greenhouse gas, causing intense global warming. Before the end-Triassic extinction, carbon dioxide levels were around 1,000 ppm as measured by the stomatal index of Lepidopteris ottonis, but this quantity jumped to 1,300 ppm at the onset of the extinction event. During the TJME, carbon dioxide concentrations increased fourfold. The record of CAMP degassing shows several distinct pulses of carbon dioxide immediately following each major pulse of magmatism, at least two of which amount to a doubling of atmospheric CO2. Carbon dioxide was emitted quickly and in enormous quantities compared to other periods of Earth's history; the resulting rise in carbon dioxide levels was one of the most rapid in Earth's entire history. It is estimated that a single volcanic pulse from the large igneous province would have emitted an amount of carbon dioxide roughly equivalent to projected anthropogenic carbon dioxide emissions for the 21st century. In addition, the flood basalts intruded through sediments that were rich in organic matter and combusted it, which led to the degassing of volatiles that further enhanced volcanic warming of the climate. Thermogenic carbon release through such contact metamorphism of carbon-rich deposits is considered a plausible hypothesis that provides a coherent explanation for the magnitude of the negative carbon isotope excursions at the terminus of the Triassic. Global temperatures rose sharply by 3 to 4 °C. In some regions, the temperature rise was as great as 10 °C. The catastrophic dissociation of gas hydrates as a positive feedback resulting from warming, which has been suggested as one possible cause of the PTME, the largest mass extinction of all time, may have exacerbated greenhouse conditions, although others suggest that methane hydrate release was temporally mismatched with the TJME and thus not a cause of it. 
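As a rough, back-of-the-envelope illustration of why CO2 jumps of this size matter, the widely used simplified logarithmic expression for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m2, can be applied to the concentrations quoted above. The sketch below is illustrative only: the climate sensitivity parameter of 0.8 °C per W/m2 is an assumed round value and is not taken from the studies discussed in this article.

import math

def co2_forcing_wm2(c_new_ppm, c_old_ppm):
    # Simplified logarithmic relation for CO2 radiative forcing (Myhre et al. 1998)
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

SENSITIVITY_C_PER_WM2 = 0.8  # assumed equilibrium warming per unit forcing

# 1,000 ppm -> 1,300 ppm jump at the onset of the extinction (stomatal-index estimates above)
onset = co2_forcing_wm2(1300, 1000)
# Fourfold rise over the course of the TJME
fourfold = co2_forcing_wm2(4000, 1000)

print(f"1,000 -> 1,300 ppm: {onset:.1f} W/m2, roughly {onset * SENSITIVITY_C_PER_WM2:.1f} C at equilibrium")
print(f"fourfold CO2 rise:  {fourfold:.1f} W/m2, roughly {fourfold * SENSITIVITY_C_PER_WM2:.1f} C at equilibrium")

Under these assumptions the fourfold rise corresponds to several degrees of equilibrium warming, the same order of magnitude as the 3 to 10 °C increases described above.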
Global cooling Besides the carbon dioxide-driven long-term global warming, CAMP volcanism had shorter-term cooling effects resulting from the emission of sulphur dioxide aerosols. A 2022 study shows that high latitudes had colder climates with evidence of mild glaciation. The authors propose that cold periods ("ice ages") induced by volcanic ejecta clouding the atmosphere might have favoured endothermic animals, with dinosaurs, pterosaurs, and mammals being more capable of enduring these conditions than large pseudosuchians due to insulation. Metal poisoning CAMP volcanism released enormous amounts of toxic mercury. The appearance of high rates of mutagenesis of varying severity in fossil spores during the TJME coincides with mercury anomalies and is thus believed by researchers to have been caused by mercury poisoning. Wildfires The intense, rapid warming is believed to have resulted in increased storminess and lightning activity as a consequence of the more humid climate. The uptick in lightning activity is in turn implicated as a cause of an increase in wildfire activity. The combined presence of charcoal fragments and heightened levels of pyrolytic polycyclic aromatic hydrocarbons in Polish sedimentary facies straddling the Triassic-Jurassic boundary indicates wildfires were extremely commonplace during the earliest Jurassic, immediately after the Triassic-Jurassic transition. Frequent wildfires, combined with increased seismic activity from CAMP emplacement, led to severe soil degradation. Ocean acidification In addition to these climatic effects, oceanic uptake of volcanogenic carbon and sulphur dioxide would have led to a significant decrease of seawater pH known as ocean acidification, which is discussed as a relevant driver of marine extinction. Evidence for ocean acidification as an extinction mechanism comes from the preferential extinction of marine organisms with thick aragonitic skeletons and little biotic control of biocalcification (e.g., corals, hypercalcifying sponges), which resulted in a coral reef collapse and an early Hettangian "coral gap". Extensive fossil remains of malformed calcareous nannoplankton, a common sign of significant drops in pH, have also been extensively reported from the Triassic-Jurassic boundary. Global interruption of carbonate deposition at the Triassic-Jurassic boundary has been cited as additional evidence for catastrophic ocean acidification. Upwardly developing aragonite fans in the shallow subseafloor may also reflect decreased pH; these structures are speculated to have precipitated concomitantly with acidification. Anoxia Anoxia was another mechanism of extinction; the end-Triassic extinction was coeval with an uptick in black shale deposition and a pronounced negative δ238U excursion, indicating a major decrease in marine oxygen availability. An increase in isorenieratane concentrations reveals that populations of green sulphur bacteria, which photosynthesise using hydrogen sulphide instead of water, grew significantly across the Triassic-Jurassic boundary; these findings indicate that euxinia, a form of anoxia defined by not just the absence of dissolved oxygen but high concentrations of hydrogen sulphide, also developed in the oceans. A pronounced shift towards positive sulphur isotope ratios in reduced sulphur species indicates a complete utilisation of sulphate by sulphate-reducing bacteria. 
Evidence of anoxia has been discovered at the Triassic-Jurassic boundary across the world's oceans; the western Tethys, eastern Tethys, and Panthalassa were all affected by a precipitous drop in seawater oxygen, although at a few sites, the TJME was associated with fully oxygenated waters. Positive δ15N excursions have also been interpreted as evidence of anoxia concomitant with increased denitrification in marine sediments in the TJME's aftermath.In northeastern Panthalassa, episodes of anoxia and euxinia were already occurring before the TJME, making its marine ecosystems unstable even before the main crisis began. This early phase of environmental degradation in eastern Panthalassa may have been caused by an early phase of CAMP activity. During the TJME, the rapid warming and increase in continental weathering led to the stagnation of ocean circulation and deoxygenation of seawater in many ocean regions, causing catastrophic marine environmental effects in conjunction with ocean acidification, which was enhanced and exacerbated by widespread photic zone euxinia through organic matter respiration and carbon dioxide release. Off the shores of the Wrangellia Terrane, the onset of photic zone euxinia was preceded by an interval of limited nitrogen availability and increased nitrogen fixation in surface waters while euxinia developed in bottom waters. In what is now northwestern Europe, shallow seas became salinity stratified, enabling easy development of anoxia. The persistence of anoxia into the Hettangian age may have helped delay the recovery of marine life in the extinction's aftermath, and recurrent hydrogen sulphide poisoning likely had the same retarding effect on biotic rediversification. Ozone depletion Research on the role of ozone shield deterioration during the Permian-Triassic mass extinction has suggested that it may have been a factor in the TJME as well. A spike in the abundance of unseparated tetrads of Kraeuselisporites reissingerii has been interpreted as evidence of increased ultraviolet radiation flux resulting from ozone layer damage caused by volcanic aerosols. Comparisons to present climate change The extremely rapid, centuries-long timescale of carbon emissions and global warming caused by pulses of CAMP volcanism has drawn comparisons between the Triassic-Jurassic mass extinction and anthropogenic global warming, currently causing the Holocene extinction. The current rate of carbon dioxide emissions is around 50 gigatonnes per year, hundreds of times faster than during the latest Triassic, although the lack of extremely detailed stratigraphic resolution and pulsed nature of CAMP volcanism means that individual pulses of greenhouse gas emissions likely occurred on comparable timescales to human release of warming gases since the Industrial Revolution. The degassing rate of the first pulse of CAMP volcanism is estimated to have been around half of the rate of modern anthropogenic emissions. Palaeontologists studying the TJME and its impacts warn that a major reduction in humanity's carbon dioxide emissions to slow down climate change is of critical importance for preventing a catastrophe similar to the TJME from befalling the modern biosphere. If human-induced climate change persists as is, predictions can be made as to how various aspects of the biosphere will respond based on records of the TJME. 
For example, current conditions such as increased carbon dioxide levels, ocean acidification, and ocean deoxygenation create a climate for marine life similar to that of the Triassic-Jurassic boundary, so it is commonly assumed that, should these trends continue, modern reef-building taxa and skeletal benthic organisms will be preferentially impacted. References Literature Hodych, J. P.; G. R. Dunning (1992). "Did the Manicouagan impact trigger end-of-Triassic mass extinction?". Geology. 20 (1): 51–54. Bibcode:1992Geo....20...51H. doi:10.1130/0091-7613(1992)020<0051:DTMITE>2.3.CO;2. McElwain, J. C.; D. J. Beerling; F. I. Woodward (27 August 1999). "Fossil Plants and Global Warming at the Triassic–Jurassic Boundary". Science. 285 (5432): 1386–1390. doi:10.1126/science.285.5432.1386. PMID 10464094. McHone, J.G. (2003). "Volatile emissions of Central Atlantic Magmatic Province basalts: Mass assumptions and environmental consequences". In Hames, W.E. et al. (eds.), The Central Atlantic Magmatic Province: Insights from Fragments of Pangea. American Geophysical Union Monograph 136, pp. 241–254. Tanner, L.H.; S.G. Lucas; M.G. Chapman (2004). "Assessing the record and causes of Late Triassic extinctions". Earth-Science Reviews. 65 (1–2): 103–139. Bibcode:2004ESRv...65..103T. doi:10.1016/S0012-8252(03)00082-5. Whiteside, Jessica H.; Paul E. Olsen; Timothy Eglinton; Michael E. Brookfield; Raymond N. Sambrotto (March 22, 2010). "Compound-specific carbon isotopes from Earth's largest flood basalt eruptions directly linked to the end-Triassic mass extinction". Proceedings of the National Academy of Sciences of the United States of America. 107 (15): 6721–5. Bibcode:2010PNAS..107.6721W. doi:10.1073/pnas.1001706107. PMC 2872409. PMID 20308590. Deenen, M.H.L.; M. Ruhl; N.R. Bonis; W. Krijgsman; W. Kuerscher; M. Reitsma; M.J. van Bergen (2010). "A new chronology for the end-Triassic mass extinction". Earth and Planetary Science Letters. 291 (1–4): 113–125. Bibcode:2010E&PSL.291..113D. doi:10.1016/j.epsl.2010.01.003. Hautmann, M. (2012). "Extinction: End-Triassic Mass Extinction". eLS. John Wiley & Sons, Ltd: Chichester. doi:10.1002/9780470015902.a0001655.pub3. ISBN 978-0470016176. S2CID 130434497. Tetsuji Onoue; Honami Sato; Daisuke Yamashita; Minoru Ikehara; Kazutaka Yasukawa; Koichiro Fujinaga; Yasuhiro Kato; Atsushi Matsuoka (2016-07-08). "Bolide impact triggered the Late Triassic extinction event in equatorial Panthalassa". Scientific Reports. 6: 29609. Bibcode:2016NatSR...629609O. doi:10.1038/srep29609. PMC 4937377. PMID 27387863.
ice sheet
In glaciology, an ice sheet, also known as a continental glacier, is a mass of glacial ice that covers surrounding terrain and is greater than 50,000 km2 (19,000 sq mi). The only current ice sheets are in Antarctica and Greenland; during the Last Glacial Period at Last Glacial Maximum, the Laurentide Ice Sheet covered much of North America, the Weichselian ice sheet covered Northern Europe and the Patagonian Ice Sheet covered southern South America. Ice sheets are bigger than ice shelves or alpine glaciers. Masses of ice covering less than 50,000 km2 are termed an ice cap. An ice cap will typically feed a series of glaciers around its periphery. Although the surface is cold, the base of an ice sheet is generally warmer due to geothermal heat. In places, melting occurs and the melt-water lubricates the ice sheet so that it flows more rapidly. This process produces fast-flowing channels in the ice sheet — these are ice streams. The present-day polar ice sheets are relatively young in geological terms. The Antarctic Ice Sheet first formed as a small ice cap (maybe several) in the early Oligocene, 33.9-23.0 Ma, but retreated and advanced many times until the Pliocene, 5.33-2.58Ma, when it came to occupy almost all of Antarctica. The Greenland ice sheet did not develop at all until the late Pliocene, but apparently developed very rapidly with the first continental glaciation. This had the unusual effect of allowing fossils of plants that once grew on present-day Greenland to be much better preserved than with the slowly forming Antarctic ice sheet. Antarctic ice sheet The Antarctic ice sheet is the largest single mass of ice on Earth. It covers an area of almost 14 million km2 (5.4 million mi2) and contains 30 million km3 of ice. Around 90% of the Earth's ice mass is in Antarctica, which, if melted, would cause sea levels to rise by 58 meters (190 feet). The continent-wide average surface temperature trend of Antarctica is positive and significant at >0.05 °C (0.09 °F)/decade since 1957.The Antarctic ice sheet is divided by the Transantarctic Mountains into two unequal sections called the East Antarctic Ice Sheet (EAIS) and the smaller West Antarctic Ice Sheet (WAIS). The EAIS rests on a major land mass, but the bed of the WAIS is, in places, more than 2,500 meters (8,200 feet) below sea level. It would be seabed if the ice sheet were not there. The WAIS is classified as a marine-based ice sheet, meaning that its bed lies below sea level and its edges flow into floating ice shelves. The WAIS is bounded by the Ross Ice Shelf, the Filchner-Ronne Ice Shelf, and outlet glaciers that drain into the Amundsen Sea. This ice sheet is losing mass at an accelerating pace but it is unclear how much it will contribute to the rising sea level. Greenland ice sheet The Greenland ice sheet occupies about 82% of the surface of Greenland, and if melted would cause sea levels to rise by 7.2 meters (24 feet). Estimated changes in the mass of Greenland's ice sheet suggest it is melting at a rate of about 239 cubic kilometres (57 cubic miles) per year. These measurements came from NASA's Gravity Recovery and Climate Experiment satellite, launched in 2002, as reported by BBC News in August 2006. Ice sheet dynamics Ice movement is dominated by the motion of glaciers, whose activity is determined by a number of processes. Their motion is the result of cyclic surges interspersed with longer periods of inactivity, on both hourly and centennial time scales. 
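The ice volumes and sea-level figures quoted above can be related by a simple conversion: spread the water-equivalent ice volume over the area of the global ocean. The sketch below is a minimal illustration assuming round values for ice density and ocean area; it deliberately ignores ice already grounded below sea level, changes in ocean area, and isostatic effects, which is why the naive Antarctic figure comes out above the ~58 m estimate quoted above.

ICE_DENSITY = 917.0       # kg/m3, assumed density of glacial ice
WATER_DENSITY = 1000.0    # kg/m3, fresh water used for simplicity
OCEAN_AREA_KM2 = 3.62e8   # assumed global ocean surface area

def sea_level_equivalent_m(ice_volume_km3):
    # Water-equivalent volume spread evenly over the ocean surface, converted to metres
    water_volume_km3 = ice_volume_km3 * ICE_DENSITY / WATER_DENSITY
    return water_volume_km3 / OCEAN_AREA_KM2 * 1000.0  # km to m

# Antarctic ice sheet, ~30 million km3 of ice (naive result: about 76 m)
print(sea_level_equivalent_m(30e6))
# Greenland's estimated loss of ~239 km3 per year, expressed in mm of sea level per year (~0.6)
print(sea_level_equivalent_m(239) * 1000.0)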
Predicted effects of global warming The Greenland, and possibly the Antarctic, ice sheets have been losing mass recently, because losses by ablation, including from outlet glaciers, exceed accumulation of snowfall. According to the Intergovernmental Panel on Climate Change (IPCC), loss of Antarctic and Greenland ice sheet mass contributed, respectively, about 0.21 ± 0.35 and 0.21 ± 0.07 mm/year to sea level rise between 1993 and 2003. The IPCC projects that ice mass loss from melting of the Greenland ice sheet will continue to outpace accumulation of snowfall. Accumulation of snowfall on the Antarctic ice sheet is projected to outpace losses from melting. However, in the words of the IPCC, "Dynamical processes related to ice flow not included in current models but suggested by recent observations could increase the vulnerability of the ice sheets to warming, increasing future sea level rise. Understanding of these processes is limited and there is no consensus on their magnitude." More research work is therefore required to improve the reliability of predictions of ice-sheet response to global warming. In 2018, scientists discovered channels between the East and West Antarctic ice sheets that may allow melted ice to flow more quickly to the sea. The effects on ice sheets due to increasing temperature may accelerate, but, as documented by the IPCC, the effects are not easily projected accurately and, in the case of the Antarctic, may trigger an accumulation of additional ice mass. If an ice sheet were ablated down to bare ground, less light from the sun would be reflected back into space and more would be absorbed by the land. The Greenland Ice Sheet covers 84% of the island, and the Antarctic Ice Sheet covers approximately 98% of the continent. Due to the significant thickness of these ice sheets, global warming analysis typically focuses on the loss of ice mass from the ice sheets increasing sea level rise, and not on a reduction in the surface area of the ice sheets. Until recently, ice sheets were viewed as inert components of the carbon cycle and were largely disregarded in global models. Research in the past decade has transformed this view, demonstrating the existence of uniquely adapted microbial communities, high rates of biogeochemical and physical weathering in ice sheets, and storage and cycling of organic carbon in excess of 100 billion tonnes, as well as nutrients. Global ice sheet Many, if not most, Solar System objects are icy; icy moons in particular have global ice sheets. Earth might have had a global ice sheet as well in its past, during a phase called snowball Earth. See also Cryosphere Quaternary glaciation Wisconsin glaciation
unstoppable global warming
Unstoppable Global Warming: Every 1,500 Years is a book about climate change, written by Siegfried Fred Singer and Dennis T. Avery, which asserts that natural changes, and not CO2 emissions, are the cause of global warming. Published by Rowman & Littlefield in 2006, the book sold well and was reprinted in an updated edition in 2007. The title refers to the hypothesis of 1,500-year climate cycles in the Holocene first postulated by Gerard C. Bond, mainly based on petrologic tracers of drift ice in the North Atlantic. Synopsis Over sixteen chapters the authors present their view of the natural cycles in the earth's climate and argue that the current warming period is not caused by man-made greenhouse gas emissions. The book begins with the Earth's climate timeline, starting from the formation of the Earth 4.5 billion years ago, and leading up to the Modern Warm Period. The book ends with a chapter titled "The ultimate failure of the Kyoto Protocol", which predicted that the Protocol would be unsuccessful in curtailing emissions. It covers the localised plummeting of emissions associated with the collapse of the Soviet Union and what the book says is Russia's excess of carbon credits, which, the book argues, will be purchased by European nations to offset their rising emissions. Reception The book attracted opposing reactions from climate change deniers and economists on the one hand and from scientists on the other. The Heartland Institute, which is known for its global warming denial, arranged for the distribution of free copies to elected officials. Jay Lehr, an economist and the Heartland Institute's "science director", wrote a favorable review in News Weekly, the newsletter of the Australian political movement National Civic Council, calling the book "truly amazing". Economist Richard W. Rahn in The Washington Times welcomed it as "a wonderful new book". Climatologist Mike Hulme, writing for The Guardian, pointed out that the warming predicted by Bond's cycles is too small to account for the warming actually observed. He said, "Deploying the machinery of scientific method allows us to filter out hypotheses – such as those presented by Singer and Avery – as being plain wrong". David Archer wrote a point-by-point refutation of claims by Avery and Singer on the RealClimate website. See also Bond event Merchants of Doubt
gas duster
A gas duster, also known as tinned wind or compressed air, is a product used for cleaning or dusting electronic equipment and other sensitive devices that cannot be cleaned using water. This type of product is most often packaged as a can that, when a trigger is pressed, blasts a stream of compressed gas through a nozzle at the top. Despite the names "canned air" or "compressed air", the cans do not actually contain air (i.e. do not contain O2 or N2 gases) but rather contain other gases that are compressible into liquids. True liquid air is not practical, as it cannot be stored in metal spray cans due to extreme pressure and temperature requirements. Common duster gases include hydrocarbon alkanes, like butane, propane, and isobutane, and fluorocarbons like 1,1-difluoroethane, 1,1,1-trifluoroethane, or 1,1,1,2-tetrafluoroethane which are used because of their lower flammability. When inhaled, gas duster fumes may produce psychoactive effects and may be harmful to health, sometimes even causing death. History The first patent for a unitary, hand-held compressed air dusting tool was filed in 1930 by E C Brown Co, listing Tappan Dewitt as the product's sole inventor. The patent application describes the product as a Single-unit, i.e. unitary, hand-held apparatus comprising a container and a discharge nozzle attached thereto, in which flow of liquid or other fluent material is produced by the muscular energy of the operator at the moment of use or by an equivalent manipulator independent from the apparatus the spray being effected by a gas or vapour flow from a source where the gas or vapour is not in contact with the liquid or other fluent material to be sprayed, e.g. from a compressible bulb, an air pump or an enclosure surrounding the container designed for spraying particulate material. Uses Canned air can be used for cleaning dust off surfaces such as keyboards, as well as sensitive electronics in which moisture is not desired. When using canned air, it is recommended to not hold the can upside down, as this can result in spraying liquid on to the surface. The liquid, when released from the can, boils at a very low temperature, rapidly cooling any surface it touches. This can cause mild to moderate frostbite on contact with skin, especially if the can is held upside down. Also, the can gets very cold during extended use; holding the can itself can result in cold burns. A dust spray can often be used as a freeze spray. Many gas dusters contain HFC-134a (tetrafluoroethane), which is widely used as a propellant and refrigerant. HFC-134a sold for those purposes is often sold at a higher price, which has led to the practice of using gas dusters as a less expensive source of HFCs for those purposes. Adapters have been built for such purposes, although in most cases, the use of such adapters will void the warranty on the equipment they are used with. One example of this practice is the case of airsoft gas guns, which use HFC-134a as the compressed gas. Several vendors sell "duster adapters" for use with airsoft guns, though it is necessary to add a lubricant when using gas dusters to power airsoft guns. Health and safety Since gas dusters are one of the many inhalants that can be easily abused, many manufacturers have added a bittering agent to deter people from inhaling the product. Some U.S. 
states, as well as the UK, have made laws regarding the abuse of gas dusters, as well as other inhalants, by criminalizing inhalant abuse or making the sale of gas dusters and other inhalants illegal to those under 18. Because of the generic name "canned air", it is mistakenly believed that the can only contains normal air or contains a less harmful substance (such as nitrous oxide, for example). However, the gases actually used, such as difluoroethane, are denser than air. When inhaled, the gas displaces the oxygen in the lungs and removes carbon dioxide from the blood, which can cause the user to suffer from hypoxia. Contrary to popular belief, the majority of the psychoactive effects of these inhalants is not a result of oxygen deprivation. The euphoric feeling produced stems from cellular mechanisms that are dependent on the molecular structure of the specific inhalant, as is the case with all psychoactive drugs. Their exact mechanisms of action have not been well elucidated, but it is hypothesized that they have much in common with that of alcohol. This type of inhalant abuse can cause a plethora of negative effects including brain and nerve damage, paralysis, serious injury, or death. Since gas dusters are often contained in pressure vessels, they are considered explosively volatile. Environmental impacts Global warming Difluoroethane (HFC-152a), trifluoroethane (HFC-143a), and completely non-flammable tetrafluoroethane (HFC-134a) are potent greenhouse gases. According to the Intergovernmental Panel on Climate Change (IPCC), the global warming potentials (GWP) of HFC-152a, HFC-143a, and HFC-134a are 124, 4470, and 1430, respectively. GWP expresses the warming effect of a unit mass of gas relative to the same mass of CO2; 1 kg of HFC-152a is thus equivalent to 124 kg of CO2. Ozone layer depletion Gas dusters sold in many countries are ozone-safe as they use "zero ODP" (zero ozone depletion potential) gases. For example, tetrafluoroethane has insignificant ODP. This is a separate issue from the global warming concern. Alternatives True "air dusters" using ordinary air are also available on the market. These typically have much shorter run times than a chemical duster, but are easily refillable. Both hand pump and electric compressor models have been marketed. The maximum pressure for an aerosol can is typically 10 bar (145 psi) at 20 °C (68 °F). Therefore, a fully compressed air duster will exhaust about 10 times the can volume of air. Recently, electric versions that use only air have become viable alternatives, preferred by many large corporations because they contain no hazardous chemicals, are safe for the environment, do not freeze, and cannot be abused. Another mechanical alternative is a camera air blower. See also List of cleaning products
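To make the global warming potential and can-volume figures discussed above concrete, the short sketch below converts a can's contents into CO2-equivalent emissions and applies Boyle's law to a true compressed-air duster. It is a minimal illustration only: the 300 g fill and 0.5 L can size are assumed example values, not specifications of any real product.

GWP = {"HFC-152a": 124, "HFC-143a": 4470, "HFC-134a": 1430}  # 100-year GWP values quoted above

def co2_equivalent_kg(gas, mass_kg):
    # CO2-equivalent emission if the entire contents are vented
    return GWP[gas] * mass_kg

FILL_KG = 0.3  # assumed 300 g of propellant per can
for gas in GWP:
    print(f"{gas}: {co2_equivalent_kg(gas, FILL_KG):.0f} kg CO2e per can")

# True air duster: Boyle's law, p1*V1 = p2*V2 at constant temperature
CAN_VOLUME_L = 0.5       # assumed can volume
CAN_PRESSURE_BAR = 10.0  # typical maximum pressure quoted above
print(f"Air delivered at about 1 bar: roughly {CAN_VOLUME_L * CAN_PRESSURE_BAR:.0f} litres")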
fluorinert
Fluorinert is the trademarked brand name for the line of electronics coolant liquids sold commercially by 3M. As perfluorinated compounds (PFCs), all Fluorinert variants have an extremely high global warming potential (GWP), so they should be used with caution (see below). It is an electrically insulating, stable fluorocarbon-based fluid, which is used in various cooling applications, mainly for cooling electronics. Different molecular formulations are available with a variety of boiling points, allowing it to be used in "single-phase" applications, where it remains a liquid, or in "two-phase" applications, where the liquid boils to remove additional heat by evaporative cooling. An example of one of the compounds 3M uses is FC-72 (perfluorohexane, C6F14). Perfluorohexane is used for low-temperature heat-transfer applications due to its 56 °C (133 °F) boiling point. Another example is FC-75, perfluoro(2-butyl-tetrahydrofurane). There are 3M fluids that can handle up to 215 °C (419 °F), such as FC-70 (perfluorotripentylamine). Fluorinert is used in situations where air cannot carry away enough heat, or where airflow is so restricted that some sort of forced pumping is required. Toxicity Fluorinert may be harmful if inhaled, and care should be taken to avoid contact with eyes and skin. However, according to the documentation from the manufacturer, no health effects are expected from ingestion of Fluorinert. Usage of fluorinated oils should be limited to closed systems and reduced volumes, since they have a very high global warming potential and a long atmospheric lifetime. Although Fluorinert was intended to be inert, the Lawrence Livermore National Laboratory discovered that the liquid cooling system of their Cray-2 supercomputers decomposed during extended service, producing some highly toxic perfluoroisobutene. Catalytic scrubbers were installed to remove this contaminant. The science-fiction film The Abyss (1989) depicted an experimental liquid-breathing system, in which the use of highly oxygenated Fluorinert enabled a diver to descend to great depths. While several rats were shown actually breathing Fluorinert, scenes depicting actor Ed Harris using the fluid-breathing apparatus were simulated. Global warming potential Fluorinert perfluorotributylamine absorbs infra-red (IR) wavelengths readily and has a long atmospheric lifetime. As such, it has a very high global warming potential (GWP) of ~9,000, and it should be used in closed systems only and carefully managed to minimize emissions. Alternatives Due to Fluorinert's high global warming potential, alternative low global warming potential agents were developed and sold under the brand name Novec Engineered Fluids. There are two types of chemical compounds under the Novec branding for similar industrial applications: Segregated hydrofluoroether (HFE) compounds, including Novec 7000, 7100, 7200, 7300, 7500, and 7700, have a lower global warming potential (GWP) of ~300. Fluoroketone (FK) compounds, including Novec 649 and 774, have a very low global warming potential (GWP) of 1. Novec 649 has similar thermo-physical properties to FC-72, making it a good drop-in replacement for low-temperature heat transfer. See also Immersion cooling Liquid dielectric Novec 649/1230 Hydrofluoroether References External links 3M Fluorinert Electronic Liquids – contains links to Material Safety Data Sheets and Product Information Sheets
extinction event
An extinction event (also known as a mass extinction or biotic crisis) is a widespread and rapid decrease in the biodiversity on Earth. Such an event is identified by a sharp fall in the diversity and abundance of multicellular organisms. It occurs when the rate of extinction increases with respect to the background extinction rate and the rate of speciation. Estimates of the number of major mass extinctions in the last 540 million years range from as few as five to more than twenty. These differences stem from disagreement as to what constitutes a "major" extinction event, and the data chosen to measure past diversity. The "Big Five" mass extinctions In a landmark paper published in 1982, Jack Sepkoski and David M. Raup identified five particular geological intervals with excessive diversity loss. They were originally identified as outliers on a general trend of decreasing extinction rates during the Phanerozoic, but as more stringent statistical tests have been applied to the accumulating data, it has been established that in the current Phanerozoic Eon, multicellular animal life has experienced at least five major and many minor mass extinctions. The "Big Five" cannot be so clearly defined, but rather appear to represent the largest (or some of the largest) of a relatively smooth continuum of extinction events. An earlier event at the end of the Ediacaran, possibly the first, has been speculated, and all are preceded by the presumed far more extensive mass extinction of microbial life during the Oxygen Catastrophe early in the Proterozoic Eon. Ordovician–Silurian extinction events (End Ordovician or O–S): 445–444 Ma, just prior to and at the Ordovician–Silurian transition. Two events occurred that killed off 27% of all families, 57% of all genera and 85% of all species. Together they are ranked by many scientists as the second-largest of the five major extinctions in Earth's history in terms of percentage of genera that became extinct. In May 2020, studies suggested that the causes of the mass extinction were global warming, related to volcanism, and anoxia, and not, as considered earlier, cooling and glaciation. However, this is at odds with numerous previous studies, which have indicated global cooling as the primary driver. Most recently, the deposition of volcanic ash has been suggested to be the trigger for reductions in atmospheric carbon dioxide leading to the glaciation and anoxia observed in the geological record. Late Devonian extinctions: 372–359 Ma, occupying much of the Late Devonian up to the Devonian–Carboniferous transition. The Late Devonian was an interval of high diversity loss, concentrated into two extinction events. The largest extinction was the Kellwasser Event (Frasnian-Famennian, or F-F, 372 Ma), an extinction event at the end of the Frasnian, about midway through the Late Devonian. This extinction annihilated coral reefs and numerous tropical benthic (seabed-living) animals such as jawless fish, brachiopods, and trilobites. Another major extinction was the Hangenberg Event (Devonian-Carboniferous, or D-C, 359 Ma), which brought an end to the Devonian as a whole. This extinction wiped out the armored placoderm fish and nearly led to the extinction of the newly evolved ammonoids. These two closely spaced extinction events collectively eliminated about 19% of all families, 50% of all genera and at least 70% of all species.
Sepkoski and Raup (1982) did not initially consider the Late Devonian extinction interval (Givetian, Frasnian, and Famennian stages) to be statistically significant. Regardless, later studies have affirmed the strong ecological impacts of the Kellwasser and Hangenberg Events. Permian–Triassic extinction event (End Permian): 252 Ma, at the Permian–Triassic transition. The Phanerozoic Eon's largest extinction killed 53% of marine families, 84% of marine genera, about 81% of all marine species and an estimated 70% of terrestrial vertebrate species. This is also the largest known extinction event for insects. The highly successful marine arthropod, the trilobite, became extinct. The evidence regarding plants is less clear, but new taxa became dominant after the extinction. The "Great Dying" had enormous evolutionary significance: On land, it ended the primacy of early synapsids. The recovery of vertebrates took 30 million years, but the vacant niches created the opportunity for archosaurs to become ascendant. In the seas, the percentage of animals that were sessile (unable to move about) dropped from 67% to 50%. The whole late Permian was a difficult time, at least for marine life, even before the P–T boundary extinction. More recent research has indicated that the End-Capitanian extinction event that preceded the "Great Dying" likely constitutes a separate event from the P–T extinction; if so, it would be larger than some of the "Big Five" extinction events, and would perhaps merit a separate place in this list immediately before this one. Triassic–Jurassic extinction event (End Triassic): 201.3 Ma, at the Triassic–Jurassic transition. About 23% of all families, 48% of all genera (20% of marine families and 55% of marine genera) and 70% to 75% of all species became extinct. Most non-dinosaurian archosaurs, most therapsids, and most of the large amphibians were eliminated, leaving dinosaurs with little terrestrial competition. Non-dinosaurian archosaurs continued to dominate aquatic environments, while non-archosaurian diapsids continued to dominate marine environments. The Temnospondyl lineage of large amphibians also survived until the Cretaceous in Australia (e.g., Koolasuchus). Cretaceous–Paleogene extinction event (End Cretaceous, K–Pg extinction, or formerly K–T extinction): 66 Ma, at the Cretaceous (Maastrichtian) – Paleogene (Danian) transition. The event was formerly called the Cretaceous-Tertiary or K–T extinction or K–T boundary; it is now officially named the Cretaceous–Paleogene (or K–Pg) extinction event. About 17% of all families, 50% of all genera and 75% of all species became extinct. In the seas all the ammonites, plesiosaurs and mosasaurs disappeared and the percentage of sessile animals was reduced to about 33%. All non-avian dinosaurs became extinct during that time. The boundary event was severe, with a significant amount of variability in the rate of extinction between and among different clades. Mammals and birds, the former descended from the synapsids and the latter from theropod dinosaurs, emerged as dominant terrestrial animals. Despite the popularization of these five events, there is no definite line separating them from other extinction events; using different methods of calculating an extinction's impact can lead to other events featuring in the top five. Older fossil records are more difficult to interpret. This is because: Older fossils are harder to find, as they are usually buried at a considerable depth. Dating of older fossils is more difficult.
Productive fossil beds are researched more than unproductive ones, therefore leaving certain periods unresearched. Prehistoric environmental events can disturb the deposition process. The preservation of fossils varies on land, but marine fossils tend to be better preserved than their more sought-after land-based counterparts. It has been suggested that the apparent variations in marine biodiversity may actually be an artifact, with abundance estimates directly related to the quantity of rock available for sampling from different time periods. However, statistical analysis shows that this can only account for 50% of the observed pattern, and other evidence such as fungal spikes (geologically rapid increases in fungal abundance) provides reassurance that most widely accepted extinction events are real. A quantification of the rock exposure of Western Europe indicates that many of the minor events for which a biological explanation has been sought are most readily explained by sampling bias. Sixth mass extinction Research completed after the seminal 1982 paper (Sepkoski and Raup) has concluded that a sixth mass extinction event is ongoing due to human activities. Extinctions by severity Extinction events can be tracked by several methods, including geological change, ecological impact, extinction vs. origination (speciation) rates, and most commonly diversity loss among taxonomic units. Most early papers used families as the unit of taxonomy, based on compendiums of marine animal families by Sepkoski (1982, 1992). Later papers by Sepkoski and other authors switched to genera, which are more precise than families and less prone to taxonomic bias or incomplete sampling relative to species. Several major papers have estimated loss or ecological impact for fifteen commonly discussed extinction events; the different methods used by these papers are described in the following section, and the "Big Five" mass extinctions are bolded. Notes: (a) graphed but not discussed by Sepkoski (1996), considered continuous with the Late Devonian mass extinction; (b) at the time considered continuous with the end-Permian mass extinction; (c) includes late Norian time slices; (d) diversity loss of both pulses calculated together; (e) pulses extend over adjacent time slices, calculated separately; (f) considered ecologically significant, but not analyzed directly; (g) excluded due to a lack of consensus on Late Triassic chronology. The study of major extinction events Breakthrough studies in the 1980s–1990s For much of the 20th century, the study of mass extinctions was hampered by insufficient data. Mass extinctions, though acknowledged, were considered mysterious exceptions to the prevailing gradualistic view of prehistory, where slow evolutionary trends define faunal changes. The first breakthrough was published in 1980 by a team led by Luis Alvarez, who discovered trace metal evidence for an asteroid impact at the end of the Cretaceous period. The Alvarez hypothesis for the end-Cretaceous extinction gave mass extinctions, and catastrophic explanations, newfound popular and scientific attention. Another landmark study came in 1982, when a paper written by David M. Raup and Jack Sepkoski was published in the journal Science. This paper, originating from a compendium of extinct marine animal families developed by Sepkoski, identified five peaks of marine family extinctions which stand out against a backdrop of decreasing extinction rates through time.
Four of these peaks were statistically significant: the Ashgillian (end-Ordovician), Late Permian, Norian (end-Triassic), and Maastrichtian (end-Cretaceous). The remaining peak was a broad interval of high extinction smeared over the latter half of the Devonian, with its apex in the Frasnian stage. Through the 1980s, Raup and Sepkoski continued to elaborate and build upon their extinction and origination data, defining a high-resolution biodiversity curve (the "Sepkoski curve") and successive evolutionary faunas with their own patterns of diversification and extinction. Though these interpretations formed a strong basis for subsequent studies of mass extinctions, Raup and Sepkoski also proposed a more controversial idea in 1984: a 26-million-year periodic pattern to mass extinctions. Two teams of astronomers linked this to a hypothetical brown dwarf in the distant reaches of the solar system, inventing the "Nemesis hypothesis", which has been strongly disputed by other astronomers. Around the same time, Sepkoski began to devise a compendium of marine animal genera, which would allow researchers to explore extinction at a finer taxonomic resolution. He began to publish preliminary results of this in-progress study as early as 1986, in a paper which identified 29 extinction intervals of note. By 1992, he also updated his 1982 family compendium, finding minimal changes to the diversity curve despite a decade of new data. In 1996, Sepkoski published another paper which tracked marine genera extinction (in terms of net diversity loss) by stage, similar to his previous work on family extinctions. The paper filtered its sample in three ways: all genera (the entire unfiltered sample size), multiple-interval genera (only those found in more than one stage), and "well-preserved" genera (excluding those from groups with poor or understudied fossil records). Diversity trends in marine animal families were also revised based on his 1992 update. Revived interest in mass extinctions led many other authors to re-evaluate geological events in the context of their effects on life. A 1995 paper by Michael Benton tracked extinction and origination rates among both marine and continental (freshwater & terrestrial) families, identifying 22 extinction intervals and no periodic pattern. Overview books by O.H. Walliser (1996) and A. Hallam and P.B. Wignall (1997) summarized the new extinction research of the previous two decades. One chapter in the former source lists over 60 geological events which could conceivably be considered global extinctions of varying sizes. These texts, and other widely circulated publications in the 1990s, helped to establish the popular image of mass extinctions as a "big five" alongside many smaller extinctions through prehistory. New data on genera: Sepkoski's compendium Though Sepkoski died in 1999, his marine genera compendium was formally published in 2002. This prompted a new wave of studies into the dynamics of mass extinctions. These papers utilized the compendium to track origination rates (the rate that new species appear or speciate) parallel to extinction rates in the context of geological stages or substages. A review and re-analysis of Sepkoski's data by Bambach (2006) identified 18 distinct mass extinction intervals, including 4 large extinctions in the Cambrian. These fit Sepkoski's definition of extinction, as short substages with large diversity loss and overall high extinction rates relative to their surroundings. Bambach et al.
(2004) considered each of the "Big Five" extinction intervals to have a different pattern in the relationship between origination and extinction trends. Moreover, background extinction rates were broadly variable and could be separated into more severe and less severe time intervals. Background extinctions were least severe relative to the origination rate in the middle Ordovician-early Silurian, late Carboniferous-Permian, and Jurassic-recent. This argues that the Late Ordovician, end-Permian, and end-Cretaceous extinctions were statistically significant outliers in biodiversity trends, while the Late Devonian and end-Triassic extinctions occurred in time periods which were already stressed by relatively high extinction and low origination. Computer models run by Foote (2005) determined that abrupt pulses of extinction fit the pattern of prehistoric biodiversity much better than a gradual and continuous background extinction rate with smooth peaks and troughs. This strongly supports the utility of rapid, frequent mass extinctions as a major driver of diversity changes. Pulsed origination events are also supported, though to a lesser degree which is largely dependent on pulsed extinctions. Similarly, Stanley (2007) used extinction and origination data to investigate turnover rates and extinction responses among different evolutionary faunas and taxonomic groups. In contrast to previous authors, his diversity simulations show support for an overall exponential rate of biodiversity growth through the entire Phanerozoic. Tackling biases in the fossil record As data continued to accumulate, some authors began to re-evaluate Sepkoski's sample using methods meant to account for sampling biases. As early as 1982, a paper by Phillip W. Signor and Jere H. Lipps noted that the true sharpness of extinctions was diluted by the incompleteness of the fossil record. This phenomenon, later called the Signor-Lipps effect, notes that a species' true extinction must occur after its last fossil, and that origination must occur before its first fossil. Thus, species which appear to die out just prior to an abrupt extinction event may instead be a victim of the event, despite an apparent gradual decline when looking at the fossil record alone. A model by Foote (2007) found that many geological stages had artificially inflated extinction rates due to Signor-Lipps "backsmearing" from later stages with extinction events. Other biases include the difficulty in assessing taxa with high turnover rates or restricted occurrences, which cannot be directly assessed due to a lack of fine-scale temporal resolution. Many paleontologists opt to assess diversity trends by randomized sampling and rarefaction of fossil abundances rather than raw temporal range data, in order to account for all of these biases. But that solution is influenced by biases related to sample size. One major bias in particular is the "Pull of the recent", the fact that the fossil record (and thus known diversity) generally improves closer to the modern day. This means that biodiversity and abundance for older geological periods may be underestimated from raw data alone. Alroy (2010) attempted to circumvent sample size-related biases in diversity estimates using a method he called "shareholder quorum subsampling" (SQS). In this method, fossils are sampled from a "collection" (such as a time interval) to assess the relative diversity of that collection.
Every time a new species (or other taxon) enters the sample, it brings over all other fossils belonging to that species in the collection (its "share" of the collection). For example, a skewed collection with half its fossils from one species will immediately reach a sample share of 50% if that species is the first to be sampled. This continues, adding up the sample shares until a "coverage" or "quorum" is reached, referring to a pre-set desired sum of share percentages. At that point, the number of species in the sample is counted. A collection with more species is expected to reach a sample quorum with more species, thus accurately comparing the relative diversity change between two collections without relying on the biases inherent to sample size. Alroy also elaborated on three-timer algorithms, which are meant to counteract biases in estimates of extinction and origination rates. A given taxon is a "three-timer" if it can be found before, after, and within a given time interval, and a "two-timer" if it overlaps with a time interval on one side. Counting "three-timers" and "two-timers" on either end of a time interval, and sampling time intervals in sequence, can together be combined into equations to predict extinction and origination with less bias. In subsequent papers, Alroy continued to refine his equations to improve lingering issues with precision and unusual samples. McGhee et al. (2013), a paper which primarily focused on ecological effects of mass extinctions, also published new estimates of extinction severity based on Alroy's methods. Many extinctions were significantly more impactful under these new estimates, though some were less prominent. Stanley (2016) was another paper which attempted to remove two common errors in previous estimates of extinction severity. The first error was the unjustified removal of "singletons", genera unique to only a single time slice. Their removal would mask the influence of groups with high turnover rates or lineages cut short early in their diversification. The second error was the difficulty in distinguishing background extinctions from brief mass extinction events within the same short time interval. To circumvent this issue, background rates of diversity change (extinction/origination) were estimated for stages or substages without mass extinctions, and then assumed to apply to subsequent stages with mass extinctions. For example, the Santonian and Campanian stages were each used to estimate diversity changes in the Maastrichtian prior to the K-Pg mass extinction. Subtracting background extinctions from extinction tallies had the effect of reducing the estimated severity of the six sampled mass extinction events. This effect was stronger for mass extinctions which occurred in periods with high rates of background extinction, like the Devonian.
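The two sampling ideas described above can be illustrated with a minimal Python sketch. It is a toy reconstruction from the verbal descriptions only, with made-up occurrence data and simplified equations; it is not Alroy's published code, and his actual three-timer equations include sampling corrections that are omitted here.

import math
import random

# Toy shareholder quorum subsampling: 'collection' maps species -> number of
# fossil occurrences in one time interval. Fossils are drawn at random; each
# time a new species appears, its "share" (fraction of all occurrences) is
# added, and sampling stops once the quorum is reached.
def sqs_diversity(collection, quorum=0.6, seed=None):
    rng = random.Random(seed)
    fossils = [sp for sp, n in collection.items() for _ in range(n)]
    rng.shuffle(fossils)
    total = len(fossils)
    seen, coverage = set(), 0.0
    for fossil in fossils:
        if fossil not in seen:
            seen.add(fossil)
            coverage += collection[fossil] / total  # this species' share
        if coverage >= quorum:
            break
    return len(seen)  # subsampled diversity of the collection

# Simplified three-timer extinction rate for time bin i: 'ranges' maps each
# taxon to the set of bins in which it is sampled. Taxa found in bins i-1 and i
# crossed into the bin; those also found in i+1 are three-timers that survived.
def three_timer_extinction(ranges, i):
    crossed = sum(1 for bins in ranges.values() if {i - 1, i} <= bins)
    survived = sum(1 for bins in ranges.values() if {i - 1, i, i + 1} <= bins)
    return math.log(crossed / survived) if survived else float("nan")

if __name__ == "__main__":
    collection = {"sp_a": 50, "sp_b": 20, "sp_c": 15, "sp_d": 10, "sp_e": 5}
    print("SQS diversity at a 60% quorum:", sqs_diversity(collection, 0.6, seed=1))
    ranges = {"t1": {0, 1, 2}, "t2": {0, 1}, "t3": {0, 1, 2}, "t4": {1, 2}}
    print("Three-timer extinction rate in bin 1:",
          round(three_timer_extinction(ranges, 1), 3))

Raising the quorum samples each collection more deeply and so recovers more of the rarer taxa, which is the sense in which this approach compares diversity between collections without depending directly on sample size.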
Uncertainty in the Proterozoic and earlier eons Because most diversity and biomass on Earth is microbial, and thus difficult to measure via fossils, extinction events placed on-record are those that affect the easily observed, biologically complex component of the biosphere rather than the total diversity and abundance of life. For this reason, well-documented extinction events are confined to the Phanerozoic eon – with the sole exception of the Oxygen Catastrophe in the Proterozoic – since before the Phanerozoic, all living organisms were either microbial, or if multicellular then soft-bodied. Perhaps due to the absence of a robust microbial fossil record, mass extinctions may only seem to be a mainly Phanerozoic phenomenon, with observable extinction rates merely appearing low before large complex organisms arose. Extinction occurs at an uneven rate. Based on the fossil record, the background rate of extinctions on Earth is about two to five taxonomic families of marine animals every million years. Marine fossils are mostly used to measure extinction rates because of their superior fossil record and stratigraphic range compared to land animals. The Oxygen Catastrophe, which occurred around 2.45 billion years ago in the Paleoproterozoic, is plausible as the first-ever major extinction event. It was perhaps also the worst ever in some sense, but the Earth's ecology just before that time is so poorly understood, and the concept of prokaryote genera so different from genera of complex life, that it would be difficult to meaningfully compare it to any of the "Big Five" even if Paleoproterozoic life were better known. Since the Cambrian explosion, five further major mass extinctions have significantly exceeded the background extinction rate. The most recent and best-known, the Cretaceous–Paleogene extinction event, which occurred approximately 66 Ma (million years ago), was a large-scale mass extinction of animal and plant species in a geologically short period of time. In addition to the five major Phanerozoic mass extinctions, there are numerous lesser ones, and the ongoing mass extinction caused by human activity is sometimes called the sixth mass extinction. Evolutionary importance Mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the newly dominant group is "superior" to the old but usually because an extinction event eliminates the old, dominant group and makes way for the new one, a process known as adaptive radiation. For example, mammaliaformes ("almost mammals") and then mammals existed throughout the reign of the dinosaurs, but could not compete in the large terrestrial vertebrate niches that dinosaurs monopolized. The end-Cretaceous mass extinction removed the non-avian dinosaurs and made it possible for mammals to expand into the large terrestrial vertebrate niches. The dinosaurs themselves had been beneficiaries of a previous mass extinction, the end-Triassic, which eliminated most of their chief rivals, the crurotarsans. Another point of view put forward in the Escalation hypothesis predicts that species in ecological niches with more organism-to-organism conflict will be less likely to survive extinctions. This is because the very traits that keep a species numerous and viable under fairly static conditions become a burden once population levels fall among competing organisms during the dynamics of an extinction event. Furthermore, many groups that survive mass extinctions do not recover in numbers or diversity; many of these go into long-term decline and are often referred to as "Dead Clades Walking".
However, clades that survive for a considerable period of time after a mass extinction, and which were reduced to only a few species, are likely to have experienced a rebound effect called the "push of the past". Darwin was firmly of the opinion that biotic interactions, such as competition for food and space – the 'struggle for existence' – were of considerably greater importance in promoting evolution and extinction than changes in the physical environment. He expressed this in The Origin of Species: "Species are produced and exterminated by slowly acting causes ... and the most important of all causes of organic change is one which is almost independent of altered ... physical conditions, namely the mutual relation of organism to organism – the improvement of one organism entailing the improvement or extermination of others". Patterns in frequency Various authors have suggested that extinction events occurred periodically, every 26 to 30 million years, or that diversity fluctuates episodically about every 62 million years. Various ideas, mostly regarding astronomical influences, attempt to explain the supposed pattern, including the presence of a hypothetical companion star to the Sun, oscillations in the galactic plane, or passage through the Milky Way's spiral arms. However, other authors have concluded that the data on marine mass extinctions do not fit with the idea that mass extinctions are periodic, or that ecosystems gradually build up to a point at which a mass extinction is inevitable. Many of the proposed correlations have been argued to be spurious or lacking statistical significance. Others have argued that there is strong evidence supporting periodicity in a variety of records, and additional evidence in the form of coincident periodic variation in nonbiological geochemical variables such as strontium isotopes, flood basalts, anoxic events, orogenies, and evaporite deposition. One explanation for this proposed cycle is carbon storage and release by oceanic crust, which exchanges carbon between the atmosphere and mantle. Mass extinctions are thought to result when a long-term stress is compounded by a short-term shock. Over the course of the Phanerozoic, individual taxa appear to have become less likely to suffer extinction, which may reflect more robust food webs, as well as fewer extinction-prone species, and other factors such as continental distribution. However, even after accounting for sampling bias, there does appear to be a gradual decrease in extinction and origination rates during the Phanerozoic. This may represent the fact that groups with higher turnover rates are more likely to become extinct by chance; or it may be an artefact of taxonomy: families tend to become more speciose, therefore less prone to extinction, over time; and larger taxonomic groups (by definition) appear earlier in geological time. It has also been suggested that the oceans have gradually become more hospitable to life over the last 500 million years, and thus less vulnerable to mass extinctions, but susceptibility to extinction at a taxonomic level does not appear to make mass extinctions more or less probable. Causes There is still debate about the causes of all mass extinctions. In general, large extinctions may result when a biosphere under long-term stress undergoes a short-term shock. An underlying mechanism appears to be present in the correlation of extinction and origination rates to diversity.
High diversity leads to a persistent increase in extinction rate; low diversity to a persistent increase in origination rate. These presumably ecologically controlled relationships likely amplify smaller perturbations (asteroid impacts, etc.) to produce the global effects observed. Identifying causes of specific mass extinctions A good theory for a particular mass extinction should: explain all of the losses, not just focus on a few groups (such as dinosaurs); explain why particular groups of organisms died out and why others survived; provide mechanisms that are strong enough to cause a mass extinction but not a total extinction; be based on events or processes that can be shown to have happened, not just inferred from the extinction. It may be necessary to consider combinations of causes. For example, the marine aspect of the end-Cretaceous extinction appears to have been caused by several processes that partially overlapped in time and may have had different levels of significance in different parts of the world. Arens and West (2006) proposed a "press/pulse" model in which mass extinctions generally require two types of cause: long-term pressure on the ecosystem ("press") and a sudden catastrophe ("pulse") towards the end of the period of pressure. Their statistical analysis of marine extinction rates throughout the Phanerozoic suggested that neither long-term pressure alone nor a catastrophe alone was sufficient to cause a significant increase in the extinction rate. Most widely supported explanations MacLeod (2001) summarized the relationship between mass extinctions and events that are most often cited as causes of mass extinctions, using data from Courtillot, Jaeger & Yang et al. (1996), Hallam (1992) and Grieve & Pesonen (1992): Flood basalt events (giant volcanic eruptions): 11 occurrences, all associated with significant extinctions. But Wignall (2001) concluded that only five of the major extinctions coincided with flood basalt eruptions and that the main phase of extinctions started before the eruptions. Sea-level falls: 12, of which seven were associated with significant extinctions. Asteroid impacts: one large impact is associated with a mass extinction, that is, the Cretaceous–Paleogene extinction event; there have been many smaller impacts but they are not associated with significant extinctions, or cannot be dated precisely enough. The impact that created the Siljan Ring either was just before the Late Devonian Extinction or coincided with it. The most commonly suggested causes of mass extinctions are listed below. Flood basalt events The formation of large igneous provinces by flood basalt events could have: produced dust and particulate aerosols, which inhibited photosynthesis and thus caused food chains to collapse both on land and at sea; emitted sulfur oxides that were precipitated as acid rain and poisoned many organisms, contributing further to the collapse of food chains; emitted carbon dioxide and thus possibly caused sustained global warming once the dust and particulate aerosols dissipated. Flood basalt events occur as pulses of activity punctuated by dormant periods. As a result, they are likely to cause the climate to oscillate between cooling and warming, but with an overall trend towards warming, as the carbon dioxide they emit can stay in the atmosphere for hundreds of years. Flood basalt events have been implicated as the cause of many major extinction events.
It is speculated that massive volcanism caused or contributed to the Kellwasser Event, the End-Guadalupian Extinction Event, the End-Permian Extinction Event, the Smithian-Spathian Extinction, the Triassic-Jurassic Extinction Event, the Toarcian Oceanic Anoxic Event, the Cenomanian-Turonian Oceanic Anoxic Event, the Cretaceous-Palaeogene Extinction Event, and the Palaeocene-Eocene Thermal Maximum. The correlation between gigantic volcanic events expressed in the large igneous provinces and mass extinctions was shown for the last 260 million years. Recently, this possible correlation was extended across the whole Phanerozoic Eon. Sea-level fall These are often clearly marked by worldwide sequences of contemporaneous sediments that show all or part of a transition from sea-bed to tidal zone to beach to dry land – and where there is no evidence that the rocks in the relevant areas were raised by geological processes such as orogeny. Sea-level falls could reduce the continental shelf area (the most productive part of the oceans) sufficiently to cause a marine mass extinction, and could disrupt weather patterns enough to cause extinctions on land. But sea-level falls are very probably the result of other events, such as sustained global cooling or the sinking of the mid-ocean ridges. Sea-level falls are associated with most of the mass extinctions, including all of the "Big Five" – End-Ordovician, Late Devonian, End-Permian, End-Triassic, and End-Cretaceous – along with the more recently recognised Capitanian mass extinction of comparable severity to the Big Five. A 2008 study, published in the journal Nature, established a relationship between the speed of mass extinction events and changes in sea level and sediment. The study suggests changes in ocean environments related to sea level exert a driving influence on rates of extinction, and generally determine the composition of life in the oceans. Extraterrestrial threats Impact events The impact of a sufficiently large asteroid or comet could have caused food chains to collapse both on land and at sea by producing dust and particulate aerosols and thus inhibiting photosynthesis. Impacts on sulfur-rich rocks could have emitted sulfur oxides precipitating as poisonous acid rain, contributing further to the collapse of food chains. Such impacts could also have caused megatsunamis and/or global forest fires. Most paleontologists now agree that an asteroid did hit the Earth about 66 Ma, but there is lingering dispute whether the impact was the sole cause of the Cretaceous–Paleogene extinction event. Nonetheless, in October 2019, researchers reported that the Cretaceous Chicxulub asteroid impact that resulted in the extinction of non-avian dinosaurs 66 Ma also rapidly acidified the oceans, producing ecological collapse and long-lasting effects on the climate, and was a key reason for the end-Cretaceous mass extinction. The Permian-Triassic extinction event has also been hypothesised to have been caused by an asteroid impact that formed the Araguainha crater, due to the estimated date of the crater's formation overlapping with the end-Permian extinction event. However, this hypothesis has been widely challenged, with the impact hypothesis being rejected by most researchers. According to the Shiva hypothesis, the Earth is subject to increased asteroid impacts about once every 27 million years because of the Sun's passage through the plane of the Milky Way galaxy, thus causing extinction events at 27-million-year intervals.
Some evidence for this hypothesis has emerged in both marine and non-marine contexts. Alternatively, the Sun's passage through the higher-density spiral arms of the galaxy could coincide with mass extinction on Earth, perhaps due to increased impact events. However, a reanalysis of the effects of the Sun's transit through the spiral structure based on maps of the spiral structure of the Milky Way in CO molecular line emission has failed to find a correlation. A nearby nova, supernova or gamma ray burst A nearby gamma-ray burst (less than 6000 light-years away) would be powerful enough to destroy the Earth's ozone layer, leaving organisms vulnerable to ultraviolet radiation from the Sun. Gamma ray bursts are fairly rare, occurring only a few times in a given galaxy per million years. It has been suggested that a gamma ray burst caused the End-Ordovician extinction, while a supernova has been proposed as the cause of the Hangenberg event. Global cooling Sustained and significant global cooling could kill many polar and temperate species and force others to migrate towards the equator; reduce the area available for tropical species; often make the Earth's climate more arid on average, mainly by locking up more of the planet's water in ice and snow. The glaciation cycles of the current ice age are believed to have had only a very mild impact on biodiversity, so the mere existence of a significant cooling is not sufficient on its own to explain a mass extinction. It has been suggested that global cooling caused or contributed to the End-Ordovician, Permian–Triassic, Late Devonian extinctions, and possibly others. Sustained global cooling is distinguished from the temporary climatic effects of flood basalt events or impacts. Global warming This would have the opposite effects: expand the area available for tropical species; kill temperate species or force them to migrate towards the poles; possibly cause severe extinctions of polar species; often make the Earth's climate wetter on average, mainly by melting ice and snow and thus increasing the volume of the water cycle. It might also cause anoxic events in the oceans (see below). Global warming as a cause of mass extinction is supported by several recent studies. The most dramatic example of sustained warming is the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. It has also been suggested to have caused the Triassic–Jurassic extinction event, during which 20% of all marine families became extinct. Furthermore, the Permian–Triassic extinction event has been suggested to have been caused by warming. Clathrate gun hypothesis Clathrates are composites in which a lattice of one substance forms a cage around another. Methane clathrates (in which water molecules are the cage) form on continental shelves. These clathrates are likely to break up rapidly and release the methane if the temperature rises quickly or the pressure on them drops quickly—for example in response to sudden global warming or a sudden drop in sea level or even earthquakes. Methane is a much more powerful greenhouse gas than carbon dioxide, so a methane eruption ("clathrate gun") could cause rapid global warming or make it much more severe if the eruption was itself caused by global warming.
The most likely signature of such a methane eruption would be a sudden decrease in the ratio of carbon-13 to carbon-12 in sediments, since methane clathrates are low in carbon-13; but the change would have to be very large, as other events can also reduce the percentage of carbon-13. It has been suggested that "clathrate gun" methane eruptions were involved in the end-Permian extinction ("the Great Dying") and in the Paleocene–Eocene Thermal Maximum, which was associated with one of the smaller mass extinctions. Anoxic events Anoxic events are situations in which the middle and even the upper layers of the ocean become deficient or totally lacking in oxygen. Their causes are complex and controversial, but all known instances are associated with severe and sustained global warming, mostly caused by sustained massive volcanism. It has been suggested that anoxic events caused or contributed to the Ordovician–Silurian, late Devonian, Capitanian, Permian–Triassic, and Triassic–Jurassic extinctions, as well as a number of lesser extinctions (such as the Ireviken, Lundgreni, Mulde, Lau, Smithian-Spathian, Toarcian, and Cenomanian–Turonian events). On the other hand, there are widespread black shale beds from the mid-Cretaceous that indicate anoxic events but are not associated with mass extinctions. The drop in bio-availability of essential trace elements (in particular selenium) to potentially lethal lows has been shown to coincide with, and likely have contributed to, at least three mass extinction events in the oceans, that is, at the end of the Ordovician, during the Middle and Late Devonian, and at the end of the Triassic. During periods of low oxygen concentrations, very soluble selenate (Se6+) is converted into much less soluble selenide (Se2-), elemental Se and organo-selenium complexes. Bio-availability of selenium during these extinction events dropped to about 1% of the current oceanic concentration, a level that has been proven lethal to many extant organisms. The British oceanologist and atmospheric scientist Andrew Watson explained that, while the Holocene epoch exhibits many processes reminiscent of those that have contributed to past anoxic events, full-scale ocean anoxia would take "thousands of years to develop". Hydrogen sulfide emissions from the seas Kump, Pavlov and Arthur (2005) have proposed that during the Permian–Triassic extinction event the warming also upset the oceanic balance between photosynthesising plankton and deep-water sulfate-reducing bacteria, causing massive emissions of hydrogen sulfide, which poisoned life on both land and sea and severely weakened the ozone layer, exposing much of the life that still remained to fatal levels of UV radiation. Oceanic overturn Oceanic overturn is a disruption of thermohaline circulation that lets surface water (which is more saline than deep water because of evaporation) sink straight down, bringing anoxic deep water to the surface and therefore killing most of the oxygen-breathing organisms that inhabit the surface and middle depths. It may occur either at the beginning or the end of a glaciation, although an overturn at the start of a glaciation is more dangerous because the preceding warm period will have created a larger volume of anoxic water. Unlike other oceanic catastrophes such as regressions (sea-level falls) and anoxic events, overturns do not leave easily identified "signatures" in rocks and are theoretical consequences of researchers' conclusions about other climatic and marine events.
It has been suggested that oceanic overturn caused or contributed to the late Devonian and Permian–Triassic extinctions. Geomagnetic reversal One theory is that periods of increased geomagnetic reversals will weaken Earth's magnetic field long enough to expose the atmosphere to the solar wind, causing oxygen ions to escape the atmosphere at a rate increased by three to four orders of magnitude, resulting in a disastrous decrease in oxygen. Plate tectonics Movement of the continents into some configurations can cause or contribute to extinctions in several ways: by initiating or ending ice ages; by changing ocean and wind currents and thus altering climate; by opening seaways or land bridges that expose previously isolated species to competition for which they are poorly adapted (for example, the extinction of most of South America's native ungulates and all of its large metatherians after the creation of a land bridge between North and South America). Occasionally continental drift creates a super-continent that includes the vast majority of Earth's land area, which in addition to the effects listed above is likely to reduce the total area of continental shelf (the most species-rich part of the ocean) and produce a vast, arid continental interior that may have extreme seasonal variations. Another theory is that the creation of the super-continent Pangaea contributed to the End-Permian mass extinction. Pangaea was almost fully formed at the transition from mid-Permian to late-Permian, and marine genus diversity shows a level of extinction starting at that time, which might have qualified for inclusion in the "Big Five" if it were not overshadowed by the "Great Dying" at the end of the Permian. Other hypotheses Many other hypotheses have been proposed, such as the spread of a new disease, or simple out-competition following an especially successful biological innovation. But all have been rejected, usually for one of the following reasons: they require events or processes for which there is no evidence; they assume mechanisms that are contrary to the available evidence; they are based on other theories that have been rejected or superseded. Scientists have been concerned that human activities could cause more plants and animals to become extinct than at any point in the past. Along with human-made changes in climate (see above), some of these extinctions could be caused by overhunting, overfishing, invasive species, or habitat loss. A study published in May 2017 in Proceedings of the National Academy of Sciences argued that a "biological annihilation" akin to a sixth mass extinction event is underway as a result of anthropogenic causes, such as over-population and over-consumption. The study suggested that as much as 50% of the animal individuals that once lived on Earth are already extinct, threatening the basis for human existence too. Future biosphere extinction/sterilization The eventual warming and expanding of the Sun, combined with the eventual decline of atmospheric carbon dioxide, could cause an even greater mass extinction, having the potential to wipe out even microbes (in other words, the Earth would be completely sterilized): rising global temperatures caused by the expanding Sun would gradually increase the rate of weathering, which would in turn remove more and more CO2 from the atmosphere.
When CO2 levels get too low (perhaps at 50 ppm), most plant life will die out, although simpler plants like grasses and mosses can survive much longer, until CO2 levels drop to 10 ppm. With all photosynthetic organisms gone, atmospheric oxygen can no longer be replenished, and it is eventually removed by chemical reactions in the atmosphere, perhaps from volcanic eruptions. Eventually the loss of oxygen will cause all remaining aerobic life to die out via asphyxiation, leaving behind only simple anaerobic prokaryotes. When the Sun becomes 10% brighter in about a billion years, Earth will suffer a moist greenhouse effect resulting in its oceans boiling away, while the Earth's liquid outer core cools due to the inner core's expansion, causing the Earth's magnetic field to shut down. In the absence of a magnetic field, charged particles from the Sun will deplete the atmosphere and further increase the Earth's temperature to an average of around 420 K (147 °C, 296 °F) in 2.8 billion years, causing the last remaining life on Earth to die out. This is the most extreme instance of a climate-caused extinction event. Since this will only happen late in the Sun's life, it would represent the final mass extinction in Earth's history (albeit a very long extinction event). Effects and recovery The effects of mass extinction events varied widely. After a major extinction event, usually only weedy species survive due to their ability to live in diverse habitats. Later, species diversify and occupy empty niches. Generally, it takes millions of years for biodiversity to recover after extinction events. In the most severe mass extinctions it may take 15 to 30 million years. The worst Phanerozoic event, the Permian–Triassic extinction, devastated life on Earth, killing over 90% of species. Life seemed to recover quickly after the P-T extinction, but this was mostly in the form of disaster taxa, such as the hardy Lystrosaurus. The most recent research indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to successive waves of extinction that inhibited recovery, as well as prolonged environmental stress that continued into the Early Triassic. Recent research indicates that recovery did not begin until the start of the mid-Triassic, four to six million years after the extinction; and some writers estimate that the recovery was not complete until 30 million years after the P-T extinction, that is, in the late Triassic. Subsequent to the P-T extinction, there was an increase in provincialization, with species occupying smaller ranges – perhaps removing incumbents from niches and setting the stage for an eventual rediversification. The effects of mass extinctions on plants are somewhat harder to quantify, given the biases inherent in the plant fossil record. Some mass extinctions (such as the end-Permian) were equally catastrophic for plants, whereas others, such as the end-Devonian, did not affect the flora. See also Footnotes References Further reading External links "Calculate the effects of an impact". Lunar and Planetary Laboratory. Tucson, AZ: University of Arizona. "Species Alliance". – nonprofit organization producing a documentary about mass extinction titled "Call of Life: Facing the Mass Extinction" "Interstellar dust cloud-induced extinction theory". space.com. 4 March 2005. "Sepkoski's Global Genus Database of Marine Animals". Geology.
University of Wisconsin. – Calculate extinction rates for yourself!
polar bear
The polar bear (Ursus maritimus) is a large bear native to the Arctic and nearby areas. It is closely related to the brown bear, and the two species can interbreed. The polar bear is the largest extant species of bear and land carnivore, with adult males weighing 300–800 kg (660–1,760 lb). The species is sexually dimorphic, as adult females are much smaller. The polar bear is white- or yellowish-furred with black skin and a thick layer of fat. It is more slender than the brown bear, with a narrower skull, longer neck and lower shoulder hump. Its teeth are sharper and more adapted to cutting meat. The paws are large and allow the bear to walk on ice and paddle in the water. Polar bears are both terrestrial and pagophilic (ice-living) and are considered to be marine mammals due to their dependence on marine ecosystems. They prefer the annual sea ice but live on land when the ice melts in the summer. They are mostly carnivorous and specialized for preying on seals, particularly ringed seals. Such prey is typically taken by ambush; the bear may stalk its prey on the ice or in the water, but will also wait at a breathing hole or ice edge for prey to swim by. The bear primarily feeds on the seal's energy-rich blubber. Other prey include walruses, beluga whales and some terrestrial animals. Polar bears are usually solitary but can be found in groups when on land. During the breeding season, male bears guard females and defend them from rivals. Mothers give birth to cubs in maternity dens during the winter. Young stay with their mother for up to two and a half years. The polar bear is considered to be a vulnerable species by the International Union for Conservation of Nature (IUCN) with an estimated total population of 22,000 to 31,000 individuals. Its biggest threats are climate change, pollution and energy development. Climate change has caused a decline in sea ice, giving the polar bear less access to its favoured prey and increasing the risk of malnutrition and starvation. Less sea ice also means that the bears must spend more time on land, increasing conflicts with people. Polar bears have been hunted, both by native and non-native peoples, for their coats, meat and other items. They have been kept in captivity in zoos and circuses and are prevalent in art, folklore, religion and modern culture. Naming The polar bear was given its common name by Thomas Pennant in A Synopsis of Quadrupeds (1771). It was known as the "white bear" in Europe between the 13th and 18th centuries, as well as "ice bear", "sea bear" and "Greenland bear". The Norse referred to it as isbjørn ("ice bear") and hvitebjørn ("white bear"). The bear is called nanook by the Inuit. The Netsilik cultures additionally have different names for bears based on certain factors, such as sex and age: these include adult males (anguraq), single adult females (tattaq), gestating females (arnaluk), newborns (hagliaqtug), large adolescents (namiaq) and dormant bears (apitiliit). The scientific name Ursus maritimus is Latin for "sea bear". Taxonomy Carl Linnaeus classified the polar bear as a type of brown bear (Ursus arctos), labelling it as Ursus maritimus albus-major, articus in the 1758 edition of his work Systema Naturae. Constantine John Phipps formally described the polar bear as a distinct species, Ursus maritimus, in 1774, following his 1773 voyage towards the North Pole. Due to its adaptations to a marine environment, some taxonomists like Theodore Knottnerus-Meyer have placed the polar bear in its own genus, Thalarctos.
However, Ursus is widely considered to be the valid genus for the species based on the fossil record and the fact that it can breed with the brown bear. Different subspecies have been proposed, including Ursus maritimus maritimus and U. m. marinus. However, these are not supported and the polar bear is considered to be monotypic. One possible fossil subspecies, U. m. tyrannus, was posited in 1964 by Björn Kurtén, who reconstructed the subspecies from a single fragment of an ulna which was approximately 20 percent larger than expected for a polar bear. However, re-evaluation in the 21st century has indicated that the fragment likely comes from a giant brown bear. Evolution The polar bear is one of eight extant species in the bear family Ursidae and of six extant species in the subfamily Ursinae. Ursine bears may have originated around 5 million years ago and show extensive hybridization of species in their lineage. The relationships among these species have been reconstructed in a cladogram based on a 2017 genetic study. Fossils of polar bears are uncommon. The oldest known fossil is a 130,000- to 110,000-year-old jaw bone, found on Prince Charles Foreland, Norway, in 2004. Scientists in the 20th century surmised that polar bears directly descended from a population of brown bears, possibly in eastern Siberia or Alaska. Mitochondrial DNA studies in the 1990s and 2000s supported the status of the polar bear as a derivative of the brown bear, finding that some brown bear populations were more closely related to polar bears than to other brown bears, particularly the ABC Islands bears of Southeast Alaska. A 2010 study estimated that the polar bear lineage split from other brown bears around 150,000 years ago. More extensive genetic studies have refuted the idea that polar bears are directly descended from brown bears and found that the two species are separate sister lineages. The genetic similarities between polar bears and some brown bears were found to be the result of interbreeding. A 2012 study estimated the split between polar and brown bears as occurring around 600,000 years ago. A 2022 study estimated the divergence as occurring even earlier at over one million years ago. Glaciation events over hundreds of thousands of years led to both the origin of polar bears and their subsequent interactions and hybridizations with brown bears. Studies in 2011 and 2012 concluded that gene flow went from brown bears to polar bears during hybridization. In particular, a 2011 study concluded that living polar bear populations derived their maternal lines from now-extinct Irish brown bears. Later studies have clarified that gene flow went from polar to brown bears rather than the reverse. Up to 9 percent of the genome of ABC bears was transferred from polar bears, while Irish bears had up to 21.5 percent polar bear origin. Mass hybridization between the two species appears to have stopped around 200,000 years ago. Modern hybrids are relatively rare in the wild. Analysis of the number of variations of gene copies in polar bears compared with brown bears and American black bears shows distinct adaptations. Polar bears have a less diverse array of olfactory receptor genes, a result of there being fewer odours in their Arctic habitat. With its carnivorous, high-fat diet, the species has fewer copies of the gene involved in making amylase, an enzyme that breaks down starch, and more selection for genes for fatty acid breakdown and a more efficient circulatory system.
The polar bear's thicker coat is the result of more copies of genes involved in keratin-creating proteins. Characteristics The polar bear is the largest living species of bear and land carnivore, though some brown bear subspecies like the Kodiak bear can rival it in size. Males are generally 200–250 cm (6.6–8.2 ft) long with a weight of 300–800 kg (660–1,760 lb). Females are smaller at 180–200 cm (5.9–6.6 ft) with a weight of 150–300 kg (330–660 lb). Sexual dimorphism in the species is particularly high compared with most other mammals. Male polar bears also have proportionally larger heads than females. The weight of polar bears fluctuates during the year, as they can bulk up on fat and increase their mass by 50 percent. A fattened, pregnant female can weigh as much as 500 kg (1,100 lb). Adults may stand 130–160 cm (4.3–5.2 ft) tall at the shoulder. The tail is 76–126 mm (3.0–5.0 in) long. The largest polar bear on record, reportedly weighing 1,002 kg (2,209 lb), was a male shot at Kotzebue Sound in northwestern Alaska in 1960.Compared with the brown bear, this species has a more slender build, with a narrower, flatter and smaller skull, a longer neck, and a lower shoulder hump. The snout profile is curved, resembling a "Roman nose". They have 34–42 teeth including 12 incisors, 4 canines, 8–16 premolars and 10 molars. The teeth are adapted for a more carnivorous diet than that of the brown bear, having longer, sharper and more spaced out canines, and smaller, more pointed cheek teeth (premolars and molars). The species has a large space or diastema between the canines and cheek teeth, which may allow it to better bite into prey. Since it normally preys on animals much smaller than it, the polar bear does not have a particularly strong bite. Polar bears have large paws, with the front paws being broader than the back. The feet are hairier than in other bear species, providing warmth and friction when stepping on snow and sea ice. The claws are small but sharp and hooked and are used both to snatch prey and climb onto ice. The coat consists of dense underfur around 5 cm (2.0 in) long and guard hairs around 15 cm (5.9 in) long. Males have long hairs on their forelegs, which is thought to signal their fitness to females. The outer surface of the hairs has a scaly appearance, and the guard hairs are hollow, which allows the animals to trap heat and float in the water. The transparent guard hairs forward scatter ultraviolet light between the underfur and the skin, leading to a cycle of absorption and re-emission, keeping them warm. The fur appears white due to the backscatter of incident light and the absence of pigment. Polar bears gain a yellowish colouration as they are exposed more to the sun. This is reversed after they moult. It can also be grayish or brownish. Their light fur provides camouflage in their snowy environment. After emerging from the water, the bear can easily shake itself dry before freezing since the hairs are resistant to tangling when wet. The skin, including the nose and lips, is black and absorbs heat. Polar bears have a 5–10 cm (2.0–3.9 in) thick layer of fat underneath the skin, which provides both warmth and energy. Polar bears maintain their core body temperature at about 36.9 °C (98 °F). Overheating is countered by a layer of highly vascularized striated muscle tissue and finely controlled blood vessels. 
Bears also cool off by entering the water.The eyes of a polar bear are close to the top of the head, which may allow them to stay out of the water when the animal is swimming at the surface. They are relatively small, which may be an adaption against blowing snow and snow blindness. Polar bears are dichromats, and lack the cone cells for seeing green. They have many rod cells which allow them to see at night. The ears are small, allowing them to retain heat and not get frostbitten. They can hear best at frequencies of 11.2–22.5 kHz, a wider frequency range than expected given that their prey mostly makes low-frequency sounds. The nasal concha creates a large surface area, so more warm air can move through the nasal passages. Their olfactory system is also large and adapted for smelling prey over vast distances. The animal has reniculate kidneys which filter out the salt in their food. Distribution and habitat Polar bears inhabit the Arctic and adjacent areas. Their range includes Greenland, Canada, Alaska, Russia and the Svalbard Archipelago of Norway. Polar bears have been recorded 25 km (16 mi) from the North Pole. The southern limits of their range include James Bay and Newfoundland and Labrador in Canada and St. Matthew Island and the Pribilof Islands of Alaska. They are not permanent residents of Iceland but have been recorded visiting there if they can reach it via sea ice. Due to minimal human encroachment on the bears' remote habitat, they can still be found in much of their original range, more so than any other large land carnivore.Polar bears have been divided into at least 18 subpopulations labelled East Greenland (ES), Barents Sea (BS), Kara Sea (KS), Laptev Sea (LVS), Chukchi Sea (CS), northern and southern Beaufort Sea (SBS and NBS), Viscount Melville (VM), M'Clintock Channel (MC), Gulf of Boothia (GB), Lancaster Sound (LS), Norwegian Bay (NB), Kane Basin (KB), Baffin Bay (BB), Davis Strait (DS), Foxe Basin (FB) and the western and southern Hudson Bay (WHB and SHB) populations. Bears in and around the Queen Elizabeth Islands have been proposed as a subpopulation but this is not universally accepted. A 2022 study has suggested that the bears in southeast Greenland should be considered a different subpopulation based on their geographic isolation and genetics. Polar bear populations can also be divided into four gene clusters: Southern Canadian, Canadian Archipelago, Western Basin (northwestern Canada west to the Russian Far East) and Eastern Basin (Greenland east to Siberia).The polar bear is dependent enough on the ocean to be considered a marine mammal. It is pagophilic and mainly inhabits annual sea ice covering continental shelves and between islands of archipelagos. These areas, known as the "Arctic Ring of Life", have high biological productivity. The species tends to frequent areas where sea ice meets water, such as polynyas and leads, to hunt the seals that make up most of its diet. Polar bears travel in response to changes in ice cover throughout the year. They are forced onto land in summer when the sea ice disappears. Terrestrial habitats used by polar bears include forests, mountains, rocky areas, lakeshores and creeks. In the Chukchi and Beaufort seas, where the sea ice breaks off and floats north during the summer, polar bears generally stay on the ice, though a large portion of the population (15–40%) has been observed spending all summer on land since the 1980s. 
Some areas have thick multiyear ice that does not completely melt and the bears can stay on all year, though this type of ice has fewer seals and allows for less productivity in the water. Behaviour and ecology Polar bears may travel areas as small as 3,500 km2 (1,400 sq mi) to as large as 38,000 km2 (15,000 sq mi) in a year, while drifting ice allows them to move further. Depending on ice conditions, a bear can travel an average of 12 km (7.5 mi) per day. These movements are powered by their energy-rich diet. Polar bears move by walking and galloping and do not trot. Walking bears tilt their front paws towards each other. They can run at estimated speeds of up to 40 km/h (25 mph) but typically move at around 5.5 km/h (3.4 mph). Polar bears are also capable swimmers and can swim at up to 6 km/h (3.7 mph). One study found they can swim for an average of 3.4 days at a time and travel an average of 154.2 km (95.8 mi). They can dive for as long as three minutes. When swimming, the broad front paws do the paddling, while the hind legs play a role in steering and diving. Most polar bears are active year-round. Hibernation occurs only among pregnant females. Non-hibernating bears typically have a normal 24-hour cycle even during days of all darkness or all sunlight, though cycles less than a day are more common during the former. The species is generally diurnal, being most active early in the day. Polar bears sleep close to eight hours a day on average. They will sleep in various positions, including curled up, sitting up, lying on one side, on the back with limbs spread, or on the belly with the rump elevated. On sea ice, polar bears snooze at pressure ridges where they dig on the sheltered side and lie down. After a snowstorm, a bear may rest under the snow for hours or days. On land, the bears may dig a resting spot on gravel or sand beaches. They will also sleep on rocky outcrops. In mountainous areas on the coast, mothers and subadults will sleep on slopes where they can better spot another bear coming. Adult males are less at risk from other bears and can sleep nearly anywhere. Social life Polar bears are typically solitary, aside from mothers with cubs and mating pairs. On land, they are found closer together and gather around food resources. Adult males, in particular, are more tolerant of each other in land environments and outside the breeding season. They have been recorded forming stable "alliances", travelling, resting and playing together. A dominant hierarchy exists among polar bears with the largest mature males ranking at the top. Adult females outrank subadults and adolescents and younger males outrank females of the same age. In addition, cubs with their mothers outrank those on their own. Females with dependent offspring tend to stay away from males, but are sometimes associated with other female–offspring units, creating "composite families".Polar bears are generally quiet but can produce various sounds. Chuffing, a soft pulsing call, is made by mother bears presumably to keep in contact with their young. During the breeding season, adult males will chuff at potential mates. Unlike other animals where chuffing is passed through the nostrils, in polar bears it is emitted through a partially open mouth. Cubs will cry for attention and produce humming noises while nursing. Teeth chops, jaw pops, blows, huffs, moans, growls and roars are heard in more hostile encounters. A polar bear visually communicates with its eyes, ears, nose and lips. 
Chemical communication can also be important: bears secrete their scent from their foot pads into their tracks, allowing individuals to keep track of one another. Diet and hunting The polar bear is a hypercarnivore, and the most carnivorous species of bear. It is an apex predator of the Arctic, preying on ice-living seals and consuming their energy-rich blubber. The most commonly taken species is the ringed seal, but they also prey on bearded seals and harp seals. Ringed seals are ideal prey as they are abundant and small enough to be overpowered by even small bears. Bearded seal adults are larger and are more likely to break free from an attacking bear, hence adult male bears are more successful in hunting them. Less common prey are hooded seals, spotted seals, ribbon seals and the more temperate-living harbour seals. Polar bears, mostly adult males, will occasionally hunt walruses, both on land and ice, though they mainly target the young, as adults are too large and formidable, with their thick skin and long tusks. Besides seals, bears will prey on cetacean species such as beluga whales and narwhals, as well as reindeer, birds and their eggs, fish and marine invertebrates. They rarely eat plant material as their digestive system is too specialized for animal matter, though they have been recorded eating berries, moss, grass and seaweed. In their southern range, especially near Hudson Bay and James Bay, polar bears endure all summer without sea ice to hunt from and must subsist more on terrestrial foods. Fat reserves allow polar bears to survive for months without eating. Cannibalism is known to occur in the species.Polar bears hunt their prey in several different ways. When a bear spots a seal hauling out on the sea ice, it slowly stalks it with the head and neck lowered, possibly to make its dark nose and eyes less noticeable. As it gets closer, the bear crouches more and eventually charges at a high speed, attempting to catch the seal before it can escape into its ice hole. Some stalking bears need to move through water; traversing through water cavities in the ice when approaching the seal or swimming towards a seal on an ice floe. The polar bear can stay underwater with its nose exposed. When it gets close enough, the animal lunges from the water to attack.During a limited time in spring, polar bears will search for ringed seal pups in their birth lairs underneath the ice. Once a bear catches the scent of a hiding pup and pinpoints its location, it approaches the den quietly to not alert it. It uses its front feet to smash through the ice and then pokes its head in to catch the pup before it can escape. A ringed seal's lair can be more than 1 m (3.3 ft) below the surface of the ice and thus more massive bears are better equipped for breaking in. Some bears may simply stay still near a breathing hole or other spot near the water and wait for prey to come by. This can last hours and when a seal surfaces the bear will try to pull it out with its paws and claws. This tactic is the primary hunting method from winter to early spring. Bears hunt walrus groups by provoking them into stampeding and then look for young that have been crushed or separated from their mothers during the turmoil. There are reports of bears trying to kill or injure walruses by throwing rocks and pieces of ice on them. Belugas and narwhals are vulnerable to bear attacks when they are stranded in shallow water or stuck in isolated breathing holes in the ice. 
When stalking reindeer, polar bears will hide in vegetation before an ambush. On some occasions, bears may try to catch prey in open water, swimming underneath a seal or aquatic bird. Seals in particular, however, are more agile than bears in the water. Polar bears rely on raw power when trying to kill their prey, and will employ bites or paw swipes. They have the strength to pull a mid-sized seal out of the water or haul a beluga carcass for quite some distance. Polar bears only occasionally store food for later—burying it under snow—and only in the short term.Arctic foxes routinely follow polar bears and scavenge scraps from their kills. The bears usually tolerate them but will charge a fox that gets too close when they are feeding. Polar bears themselves will scavenge. Subadult bears will eat remains left behind by others. Females with cubs often abandon a carcass when they see an adult male approaching, though are less likely to if they have not eaten in a long time. Whale carcasses are a valuable food source, particularly on land and after the sea ice melts, and attract several bears. In one area in northeastern Alaska, polar bears have been recorded competing with grizzly bears for whale carcasses. Despite their smaller size, grizzlies are more aggressive and polar bears are likely to yield to them in confrontations. Polar bears will also scavenge at garbage dumps during ice-free periods. Reproduction and development Polar bear mating takes place on the sea ice and during spring, mostly between March and May. Males search for females in estrus and often travel in twisting paths which reduces the chances of them encountering other males while still allowing them to find females. The movements of females remain linear and they travel more widely. The mating system can be labelled as female-defence polygyny, serial monogamy or promiscuity.Upon finding a female, a male will try to isolate and guard her. Courtship can be somewhat aggressive, and a male will pursue a female if she tries to run away. It can take days for the male to mate with the female which induces ovulation. After their first copulation, the couple bond. Undisturbed polar bear pairings typically last around two weeks during which they will sleep together and mate multiple times. Competition for mates can be intense and this has led to sexual selection for bigger males. Polar bear males often have scars from fighting. A male and female that have already bonded will flee together when another male arrives. A female mates with multiple males in a season and a single litter can have more than one father. When the mating season ends, the female will build up more fat reserves to sustain both herself and her young. Sometime between August and October, the female constructs and enters a maternity den for winter. Depending on the area, maternity dens can be found in sea ice just off the coastline or further inland and may be dug underneath snow, earth or a combination of both. The inside of these shelters can be around 1.5 m (4.9 ft) wide with a ceiling height of 1.2 m (3.9 ft) while the entrance may be 2.1 m (6.9 ft) long and 1.2 m (3.9 ft) wide. The temperature of a den can be much higher than the outside. Females hibernate and give birth to their cubs in the dens. Hibernating bears fast and internally recycle bodily waste. Polar bears experience delayed implantation and the fertilized embryo does not start development until the fall, between mid-September and mid-October. 
With delayed implantation, gestation in the species lasts seven to nine months but actual pregnancy is only two months.Mother polar bears typically give birth to two cubs per litter. As with other bear species, newborn polar bears are tiny and altricial. The newborns have woolly hair and pink skin, with a weight of around 600 g (21 oz). Their eyes remain closed for a month. The mother's fatty milk fuels their growth, and the cubs are kept warm both by the mother's body heat and the den. The mother emerges from the den between late February and early April, and her cubs are well-developed and capable of walking with her. At this time they weigh 10–15 kilograms (22–33 lb). A polar bear family stays near the dens for roughly two weeks; during this time the cubs will move and play around while the mother mostly rests. They eventually head out on the sea ice. Cubs under a year old stay close to their mother. When she hunts, they stay still and watch until she calls them back. Observing and imitating the mother helps the cubs hone their hunting skills. After their first year they become more independent and explore. At around two years old, they are capable of hunting on their own. The young suckle their mother as she is lying on her side or sitting on her rump. A lactating female cannot conceive and give birth, and cubs are weaned between two and two-and-a-half years. She may simply leave her weaned young or they may be chased away by a courting male. Polar bears reach sexual maturity at around four years for females and six years for males. Females reach their adult size at 4 or 5 years of age while males are fully grown at twice that age. Mortality Polar bears can live up to 30 years. The bear's long lifespan and ability to consistently produce young offsets cub deaths in a population. Some cubs die in the dens or the womb if the female is not in good condition. Nevertheless, the female has a chance to produce a surviving litter the next spring if she can eat better in the coming year. Cubs will eventually starve if their mothers cannot kill enough prey. Cubs also face threats from wolves and adult male bears. Males kill cubs to bring their mother back into estrus but also kill young outside the breeding season for food. A female and her cubs can flee from the slower male. If the male can get close to a cub, the mother may try to fight him off, sometimes at the cost of her life.Subadult bears, who are independent but not quite mature, have a particularly rough time as they are not as successful hunters as adults. Even when they do succeed, their kill will likely be stolen by a larger bear. Hence subadults have to scavenge and are often underweight and at risk of starvation. At adulthood, polar bears have a high survival rate, though adult males suffer injuries from fights over mates. Polar bears are especially susceptible to Trichinella, a parasitic roundworm they contract through cannibalism. Conservation status In 2015, the IUCN Red List categorized the polar bear as vulnerable due to a "decline in area of occupancy, extent of occurrence and/or quality of habitat". It estimated the total population to be between 22,000 to 31,000, and the current population trend is unknown. 
Threats to polar bear populations include climate change, pollution and energy development.In 2021, the IUCN/SSC Polar Bear Specialist Group labelled four subpopulations (Barents and Chukchi Sea, Foxe Basin and Gulf of Boothia) as "likely stable", two (Kane Basin and M'Clintock Channel) as "likely increased" and three (Southern Beaufort Sea, Southern and Western Hudson Bay) as "likely decreased" over specific periods between the 1980s and 2010s. The remaining ten did not have enough data. A 2008 study predicted two-thirds of the world's polar bears may disappear by 2050, based on the reduction of sea ice, and only one population would likely survive in 50 years. A 2016 study projected a likely decline in polar bear numbers of more than 30 percent over three generations. The study concluded that declines of more than 50 percent are much less likely. A 2012 review suggested that polar bears may become regionally extinct in southern areas by 2050 if trends continue, leaving the Canadian Archipelago and northern Greenland as strongholds.The key danger from climate change is malnutrition or starvation due to habitat loss. Polar bears hunt seals on the sea ice, and rising temperatures cause the ice to melt earlier in the year, driving the bears to shore before they have built sufficient fat reserves to survive the period of scarce food in the late summer and early fall. Thinner sea ice tends to break more easily, which makes it more difficult for polar bears to access seals. Insufficient nourishment leads to lower reproductive rates in adult females and lower survival rates in cubs and juvenile bears. Lack of access to seals also causes bears to find food on land which increases the risk of conflict with humans. Reduction in sea ice cover also forces bears to swim longer distances, which further depletes their energy stores and occasionally leads to drowning. Increased ice mobility may result in less stable sites for dens or longer distances for mothers travelling to and from dens on land. Thawing of permafrost would lead to more fire-prone roofs for bears denning underground. Less snow may affect insulation while more rain could cause more cave-ins. The maximum corticosteroid-binding capacity of corticosteroid-binding globulin in polar bear serum correlates with stress in polar bears, and this has increased with climate warming. Disease-causing bacteria and parasites would flourish more readily in a warmer climate.Oil and gas development also affects polar bear habitat. The Chukchi Sea Planning Area of northwestern Alaska, which has had many drilling leases, was found to be an important site for non-denning female bears. Oil spills are also a risk. A 2018 study found that ten percent or less of prime bear habitat in the Chukchi Sea is vulnerable to a potential spill, but a spill at full reach could impact nearly 40 percent of the polar bear population. Polar bears accumulate high levels of persistent organic pollutants such as polychlorinated biphenyl (PCBs) and chlorinated pesticides, due to their position at the top of the ecological pyramid. Many of these chemicals have been internationally banned due to the recognition of their harm to the environment. Traces of them have slowly dwindled in polar bears but persist and have even increased in some populations.Polar bears receive some legal protection in all the countries they inhabit. 
The species has been labelled as threatened under the US Endangered Species Act since 2008, while the Committee on the Status of Endangered Wildlife in Canada listed it as of 'Special concern' since 1991. In 1973, the Agreement on the Conservation of Polar Bears was signed by all five nations with polar bear populations, Canada, Denmark (of which Greenland is an autonomous territory), Russia (then USSR), Norway and the US. This banned most harvesting of polar bears, allowed indigenous hunting using traditional methods, and promoted the preservation of bear habitat. The Convention on International Trade in Endangered Species of Wild Fauna lists the species under Appendix II, which allows regulated trade. Relationship with humans Polar bears have coexisted and interacted with circumpolar peoples for millennia. "White bears" are mentioned as commercial items in the Japanese book Nihon Shoki in the seventh century. It is not clear if these were polar bears or white-coloured brown bears. During the Middle Ages, Europeans considered white bears to be a novelty and were more familiar with brown- and black-coloured bears. An early written account of the polar bear in its natural environment is found in the 13th-century anonymous Norwegian text Konungs skuggsjá, which mentions that "the white bear of Greenland wanders most of the time on the ice of the sea, hunting seals and whales and feeding on them" and says the bear is "as skillful a swimmer as any seal or whale". Over the next centuries, several European explorers would mention polar bears and describe their habits. Such accounts became more accurate after the Enlightenment, and both living and dead specimens were brought back. Nevertheless, some fanciful reports continued, including the idea that polar bears cover their noses during hunts. A relatively accurate drawing of a polar bear is found in Henry Ellis's work A Voyage to Hudson's Bay (1748). Polar bears were formally classified as a species by Constantine Phipps after his 1773 voyage to the Arctic. Accompanying him was a young Horatio Nelson, who was said to have wanted to get a polar bear coat for his father but failed in his hunt. In his 1785 edition of Histoire Naturelle, Comte de Buffon mentions and depicts a "sea bear", clearly a polar bear, and "land bears", likely brown and black bears. This helped promote ideas about speciation. Buffon also mentioned a "white bear of the forest", possibly a Kermode bear. Exploitation Polar bears were hunted as early as 8,000 years ago, as indicated by archaeological remains at Zhokhov Island in the East Siberian Sea. The oldest graphic depiction of a polar bear shows it being hunted by a man with three dogs. This rock art was among several petroglyphs found at Pegtymel in Siberia and dates from the fifth to eighth centuries. Before access to firearms, native people used lances, bows and arrows and hunted in groups accompanied by dogs. Though hunting typically took place on foot, some people killed swimming bears from boats with a harpoon. Polar bears were sometimes killed in their dens. Killing a polar bear was considered a rite of passage for boys in some cultures. Native people respected the animal and hunts were subject to strict rituals. Bears were harvested for the fur, meat, fat, tendons, bones and teeth. The fur was worn and slept on, while the bones and teeth were made into tools. For the Netsilik, the individual who finally killed the bear had the right to its fur while the meat was passed to all in the party. 
Some people kept the cubs of slain bears. Norsemen in Greenland traded polar bear furs in the Middle Ages. Russia traded polar bear products as early as 1556, with Novaya Zemlya and Franz Josef Land being important commercial centres. Large-scale hunting of bears at Svalbard occurred since at least the 18th century, when no less than 150 bears were killed each year by Russian explorers. In the next century, more Norwegians were harvesting the bears on the island. From the 1870s to the 1970s, around 22,000 of the animals were hunted in total. Over 150,000 polar bears in total were either killed or captured in Russia and Svalbard, from the 18th to the 20th century. In the Canadian Arctic, bears were harvested by commercial whalers especially if they could not get enough whales. The Hudson's Bay Company is estimated to have sold 15,000 polar bear coats between the late 19th century and early 20th century. In the mid-20th century, countries began to regulate polar bear harvesting, culminating in the 1973 agreement.Polar bear meat was commonly eaten as rations by explorers and sailors in the Arctic. Its taste and texture have been described both positively and negatively. Some have called it too coarse with a powerful smell, while others praised it as a "royal dish". The liver was known for being too toxic to eat. This is due to the accumulation of vitamin A from their prey. Polar bear fat was also used in lamps when other fuel was unavailable. Polar bear rugs were almost ubiquitous on the floors of Norwegian churches by the 13th and 14th centuries. In more modern times, classical Hollywood actors would pose on bearskin rugs, notably Marilyn Monroe. Such images often had sexual connotations. Conflicts When the sea ice melts, polar bears, particularly subadults, conflict with humans over resources on land. They are attracted to the smell of human-made foods, particularly at garbage dumps and may be shot when they encroach on private property. In Churchill, Manitoba, local authorities maintain a "polar bear jail" where nuisance bears are held until the sea ice freezes again. Climate change has increased conflicts between the two species. Over 50 polar bears swarmed a town in Novaya Zemlya in February 2019, leading local authorities to declare a state of emergency.From 1870 to 2014, there were an estimated 73 polar bear attacks on humans, which led to 20 deaths. The majority of attacks were by hungry males, typically subadults, while female attacks were usually in defence of the young. In comparison to brown and American black bears, attacks by polar bears were more often near and around where humans lived. This may be due to the bears getting desperate for food and thus more likely to seek out human settlements. As with the other two bear species, polar bears are unlikely to target more than two people at once. Though popularly thought of as the most dangerous bear, the polar bear is no more aggressive to humans than other species. Captivity The polar bear was a particularly sought-after species for exotic animal collectors due to being relatively rare and remote living, and its reputation as a ferocious beast. It is one of the few marine mammals that can reproduce well in captivity. They were originally kept only by royals and elites. The Tower of London got a polar bear as early as 1252 under King Henry III. In 1609, James VI and I of Scotland, England and Ireland were given two polar bear cubs by the sailor Jonas Poole, who got them during a trip to Svalbard. 
At the end of the 17th century, Frederick I of Prussia housed polar bears in menageries with other wild animals. He had their claws and canines removed to perform mock fights. Around 1726, Catherine I of Russia gifted two polar bears to Augustus II the Strong of Poland, who desired them for his animal collection. Later, polar bears were displayed to the public in zoos and circuses. In early 19th century, the species was exhibited at the Exeter Exchange in London, as well as menageries in Vienna and Paris. The first zoo in North America to exhibit a polar bear was the Philadelphia Zoo in 1859.Polar bear exhibits were innovated by Carl Hagenbeck, who replaced cages and pits with settings that mimicked the animal's natural environment. In 1907, he revealed a complex panoramic structure at the Tierpark Hagenbeck Zoo in Hamburg consisting of exhibits made of artificial snow and ice separated by moats. Different polar animals were displayed on each platform, giving the illusion of them living together. Starting in 1975, Hellabrunn Zoo in Munich housed its polar bears in an exhibit which consisted of a glass barrier, a house, concrete platforms mimicking ice floes and a large pool. Inside the house were maternity dens, and rooms for the staff to prepare and store the food. The exhibit was connected to an outdoor yard for extra room. Similar naturalistic and "immersive" exhibits were opened in the early 21st century, such as the "Arctic Ring of Life" at the Detroit Zoo and Ontario's Cochrane Polar Bear Habitat. Many zoos in Europe and North America have stopped keeping polar bears due to the size and costs of their complex exhibits. In North America, the population of polar bears in zoos reached its zenith in 1975 with 229 animals and declined in the 21st century. Polar bears have been trained to perform in circuses. Bears in general, being large, powerful, easy to train and human-like in form, were widespread in circuses, and the white coat of polar bears made them particularly attractive. Circuses helped change the polar bear's image from a fearsome monster to something more comical. Performing polar bears were used in 1888 by Circus Krone in Germany and later in 1904 by the Bostock and Wombwell Menagerie in England. Circus director Wilhelm Hagenbeck trained up to 75 polar bears to slide into a large tank through a chute. He began performing with them in 1908 and they had a particularly well-received show at the Hippodrome in London. Other circus tricks performed by polar bears involved tightropes, balls, roller skates and motorcycles. One of the most famous polar bear trainers in the second half of the twentieth century was the East German Ursula Böttcher, whose small stature contrasted with that of the large bears. Starting in the late 20th century, most polar bear acts were retired and the use of these bears for the circus is now prohibited in the US.Several captive polar bears gained celebrity status in the late 20th and early 21st century, notably Knut of the Berlin Zoological Garden, who was rejected by his mother and had to be hand-reared by zookeepers. Another bear, Binky of the Alaska Zoo in Anchorage, became famous for attacking two visitors who got too close. Captive polar bears may pace back and forth, a stereotypical behaviour. In one study, they were recorded to have spent 14 percent of their days pacing. Gus of the Central Park Zoo was prescribed Prozac by a therapist for constantly swimming in his pool. 
To reduce stereotypical behaviours, zookeepers provide the bears with enrichment items to trigger their play behaviour. Zoo polar bears may appear green due to algae concentrations. Cultural significance Polar bears have prominent roles in Inuit culture and religion. The deity Torngarsuk is sometimes imagined as a giant polar bear. He resides underneath the sea floor in an underworld of the dead and has power over sea creatures. Kalaallit shamans would worship him through singing and dancing and were expected to be taken by him to the sea and consumed if he considered them worthy. Polar bears were also associated with the goddess Nuliajuk who was responsible for their creation, along with other sea creatures. It is believed that shamans could reach the Moon or the bottom of the ocean by riding on a guardian spirit in the form of a polar bear. Some folklore involves people turning into or disguising themselves as polar bears by donning their skins or the reverse, with polar bears removing their skins. In Inuit astronomy, the Pleiades star cluster is conceived of as a polar bear trapped by dogs while Orion's Belt, the Hyades and Aldebaran represent hunters, dogs and a wounded bear respectively.Nordic folklore and literature have also featured polar bears. In The Tale of Auðun of the West Fjords, written around 1275, a poor man named Auðun spends all his money on a polar bear in Greenland, but ends up wealthy after giving the bear to the king of Denmark. In the 14th-century manuscript Hauksbók, a man named Odd kills and eats a polar bear that killed his father and brother. In the story of The Grimsey Man and the Bear, a mother bear nurses and rescues a farmer stuck on an ice floe and is repaid with sheep meat. 18th-century Icelandic writings mention the legend of a "polar bear king" known as the bjarndýrakóngur. This beast was depicted as a polar bear with "ruddy cheeks" and a unicorn-like horn, which glows in the dark. The king could understand when humans talk and was considered to be very astute. Two Norwegian fairy tales, "East of the Sun and West of the Moon" and "White-Bear-King-Valemon", involve white bears turning into men and seducing women.Drawings of polar bears have been featured on maps of the northern regions. Possibly the earliest depictions of a polar bear on a map is the Swedish Carta marina of 1539, which has a white bear on Iceland or "Islandia". A 1544 map of North America includes two polar bears near Quebec. Notable paintings featuring polar bears include François-Auguste Biard's Fighting Polar Bears (1839) and Edwin Landseer's Man Proposes, God Disposes (1864). Polar bears have also been filmed for cinema. An Inuit polar bear hunt was shot for the 1932 documentary Igloo, while the 1974 film The White Dawn filmed a simulated stabbing of a trained bear for a scene. In the film The Big Show (1961), two characters are killed by a circus polar bear. The scenes were shot using animal trainers instead of the actors. In modern literature, polar bears have been characters in both children's fiction, like Hans Beer's Little Polar Bear and the Whales and Sakiasi Qaunaq's The Orphan and the Polar Bear, and fantasy novels, like Philip Pullman's His Dark Materials series. In radio, Mel Blanc provided the vocals for Jack Benny's pet polar bear Carmichael on The Jack Benny Program. 
The polar bear is featured on flags and coats of arms, like the coat of arms of Greenland, and in many advertisements, notably for Coca-Cola since 1922. As charismatic megafauna, polar bears have been used to raise awareness of the dangers of climate change. Aurora the polar bear is a giant marionette created by Greenpeace for climate protests. The World Wide Fund for Nature has sold plush polar bears as part of its "Arctic Home" campaign. Photographs of polar bears have been featured in National Geographic and Time magazines, including ones of them standing on ice floes, while the climate change documentary and advocacy film An Inconvenient Truth (2006) includes an animated bear swimming. Automobile manufacturer Nissan used a polar bear in one of its commercials, hugging a man for using an electric car. To make a statement about global warming, in 2009 a Copenhagen ice statue of a polar bear with a bronze skeleton was purposely left to melt in the sun.
atmospheric methane
Atmospheric methane is the methane present in Earth's atmosphere. The concentration of atmospheric methane is increasing due to methane emissions, and is causing climate change. Methane is one of the most potent greenhouse gases. Methane's radiative forcing (RF) of climate is direct, and it is the second largest contributor to human-caused climate forcing in the historical period. Methane is a major source of water vapour in the stratosphere through oxidation, and that water vapour adds about 15% to methane's radiative forcing effect. The global warming potential (GWP) for methane is about 84 in terms of its impact over a 20-year timeframe. That means it traps 84 times more heat per mass unit than carbon dioxide (CO2), and 105 times as much when aerosol interactions are taken into account. Since the beginning of the Industrial Revolution (around 1750) the atmospheric methane concentration has increased by about 160%, with the increase being overwhelmingly caused by human activity. Since 1750 methane has contributed 3% of GHG emissions in terms of mass but is responsible for approximately 23% of radiative or climate forcing. By 2019, the global methane concentration had risen from 722 parts per billion (ppb) in pre-industrial times to 1866 ppb, an increase by a factor of about 2.6 and the highest value in at least 800,000 years. Methane increases the amount of ozone (O3) in the troposphere (which extends from the Earth's surface up to roughly 4 miles (6.4 km) to 12 miles (19 km)) and also in the stratosphere (from the top of the troposphere to about 31 miles (50 km) above the Earth's surface). Both water vapour and ozone are greenhouse gases, which in turn adds to climate warming. Role in climate change Methane in the Earth's atmosphere is a powerful greenhouse gas with a global warming potential (GWP) 84 times greater than CO2 in a 20-year time frame. Radiative or climate forcing is the scientific concept used to measure the human impact on the climate in watts per square metre (W/m²). It refers to the "difference between solar irradiance absorbed by the Earth and energy radiated back to space". The direct radiative greenhouse gas forcing effect of methane relative to 1750 was estimated at 0.5 W/m² in the IPCC's 2007 Climate Change Synthesis Report. In their May 21, 2021 173-page Global Methane Assessment, the UN Environment Programme (UNEP) and the Climate and Clean Air Coalition (CCAC) said that their "understanding of methane's effect on radiative forcing" improved with research by teams led by M. Etminan in 2016 and William Collins in 2018, which resulted in an "upward revision" since the 2014 IPCC Fifth Assessment Report (AR5). The "improved understanding" means that prior estimates of the "overall societal impact of methane emissions" were likely underestimates. Etminan et al. published their new calculations for methane's radiative forcing (RF) in a 2016 Geophysical Research Letters article which incorporated the shortwave bands of CH4 in measuring forcing, not used in previous, simpler IPCC methods. Their new RF calculations, which significantly revised the well-mixed greenhouse gas (WMGHG) forcings cited in earlier IPCC reports by including the shortwave forcing component of CH4, resulted in estimates that were approximately 20–25% higher.
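To make the GWP figures quoted above concrete, the short sketch below (not part of the source material) converts a hypothetical methane emission into carbon dioxide equivalent (CO2e) by multiplying the emitted mass by the GWP for a chosen time horizon. The emission quantity and the helper function name are illustrative assumptions; the 100-year GWP of about 28 is the value discussed just below.

def co2_equivalent(methane_tonnes, gwp):
    # Express a methane emission as tonnes of CO2-equivalent for a given GWP.
    return methane_tonnes * gwp

emission_t = 1000  # hypothetical methane emission, in tonnes (illustrative)
print(co2_equivalent(emission_t, gwp=84))  # 20-year horizon: 84,000 t CO2e
print(co2_equivalent(emission_t, gwp=28))  # 100-year horizon: 28,000 t CO2e

With these values, the same tonne of methane counts for roughly three times more warming when judged over 20 years than over 100 years, which is why the choice of time horizon matters for methane policy.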
Collins et al. said that CH4 mitigation that reduces atmospheric methane by the end of the century could "make a substantial difference to the feasibility of achieving the Paris climate targets" and would provide more "allowable carbon emissions to 2100". Methane is a strong GHG with a global warming potential 84 times greater than CO2 in a 20-year time frame. Methane is less persistent, however, and its effect tails off to about 28 times that of CO2 over a 100-year time frame. In addition to the direct heating effect and the normal feedbacks, methane breaks down into carbon dioxide and water. This water is often produced above the tropopause, where little water usually reaches. Ramanathan (1988) notes that both water and ice clouds, when formed at cold lower stratospheric temperatures, are extremely efficient in enhancing the atmospheric greenhouse effect. He also notes that there is a distinct possibility that large increases in future methane may lead to a surface warming that increases nonlinearly with the methane concentration. Mitigation efforts to reduce short-lived climate pollutants such as methane and black carbon would help combat "near-term climate change" and would support the Sustainable Development Goals. Sources Any process that results in the production of methane and its release into the atmosphere can be considered a "source". The known sources of methane are predominantly located near the Earth's surface. Two main processes are responsible for methane production: methanogenesis, in which microorganisms anaerobically convert organic compounds into methane and which is widespread in aquatic ecosystems, and digestion in ruminant animals. Other natural sources include melting permafrost, wetlands, plants, and methane clathrates. Measurement techniques Methane has traditionally been measured using gas chromatography. Gas chromatography is a type of chromatography used for separating or analyzing chemical compounds. It is generally less expensive than more advanced methods, but it is more time- and labor-intensive. Spectroscopic methods have become the preferred approach for atmospheric gas measurements because of their sensitivity and precision; they are also the only way of remotely sensing atmospheric gases. Infrared spectroscopy covers a large spectrum of techniques, one of which detects gases based on absorption spectroscopy. Various spectroscopic techniques are used, including differential optical absorption spectroscopy, laser-induced fluorescence and Fourier-transform infrared spectroscopy. In 2011, cavity ring-down spectroscopy was the most widely used infrared absorption technique for detecting methane. It is a form of laser absorption spectroscopy which determines the mole fraction to the order of parts per trillion. Global monitoring CH4 has been measured directly in the environment since the 1970s. The Earth's atmospheric methane concentration has increased by about 160% since preindustrial levels in the mid-18th century. Long-term atmospheric measurements of methane by NOAA show that methane has nearly tripled in concentration since pre-industrial times (around 1750). In 1991 and 1998 the growth rate of methane suddenly roughly doubled compared with previous years. The June 15, 1991 eruption of Mount Pinatubo, measuring VEI-6, was the second-largest terrestrial eruption of the 20th century.
In 2007 it was reported that unprecedented warm temperatures in 1998—the warmest year since surface records began—could have induced elevated methane emissions, along with an increase in wetland and rice field emissions and the amount of biomass burning. Data from 2007 suggested methane concentrations were beginning to rise again. This was confirmed in 2010 when a study showed methane levels were on the rise for the three years 2007 to 2009. After a decade of near-zero growth in methane levels, "globally averaged atmospheric methane increased by [approximately] 7 nmol/mol per year during 2007 and 2008. During the first half of 2009, globally averaged atmospheric CH4 was [approximately] 7 nmol/mol greater than it was in 2008, suggesting that the increase will continue in 2009." From 2015 to 2019 sharp rises in levels of atmospheric methane were recorded. In 2010, methane levels in the Arctic were measured at 1850 nmol/mol, which is over twice as high as at any time in the last 400,000 years. According to the IPCC AR5, concentrations have continued to increase since 2011. After 2014 the increase accelerated, and by 2017 it reached 1,850 parts per billion (ppb). The annual average for methane (CH4) was 1866 ppb in 2019, and scientists reported with "very high confidence" that concentrations of CH4 were higher than at any time in at least 800,000 years. The largest annual increase occurred in 2021, with concentrations reaching a record 260% of pre-industrial levels, the increase being overwhelmingly caused by human activity. In 2013, IPCC scientists said with "very high confidence" that concentrations of atmospheric CH4 exceeded pre-industrial levels by about 150%, which represented "levels unprecedented in at least the last 800,000 years". The globally averaged concentration of methane in Earth's atmosphere increased by about 150% from 722 ± 25 ppb in 1750 to 1803.1 ± 0.6 ppb in 2011. As of 2016, methane contributed a radiative forcing of 0.62 W/m² (±14%), or about 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases. The atmospheric methane concentration has continued to increase since 2011, reaching an average global concentration of 1911.8 ± 0.6 ppb as of 2022; a short calculation at the end of this article illustrates the average growth rate implied by the 2011 and 2022 values. The May 2021 peak was 1891.6 ppb, while the April 2022 peak was 1909.4 ppb, a 0.9% increase. The Global Carbon Project consortium produces the Global Methane Budget. Working with over fifty international research institutions and 100 stations globally, it updates the methane budget every few years. In 2013, the balance between sources and sinks of methane was not yet fully understood, and scientists were unable to explain why the atmospheric concentration of methane had temporarily ceased to increase. The focus on the role of methane in anthropogenic climate change has become more relevant since the mid-2010s. Natural sinks or removal of atmospheric methane The amount of methane in the atmosphere is the result of a balance between the production of methane at the Earth's surface—its source—and the destruction or removal of methane, mainly in the atmosphere—its sink—in an atmospheric chemical process. Another major natural sink is oxidation by methanotrophic (methane-consuming) bacteria in Earth's soils. NASA computer model simulations from 2005, calculated with the data available at that time, illustrate how methane is destroyed as it rises.
As air rises in the tropics, methane is carried upwards through the troposphere—the lowest portion of Earth's atmosphere, which extends from the surface up to roughly 4 miles (6.4 km) to 12 miles (19 km) depending on latitude—into the lower stratosphere, where the ozone layer lies, and then into the upper portion of the stratosphere. This atmospheric chemical process is the most effective methane sink, as it removes about 90% of atmospheric methane. This global destruction of atmospheric methane mainly occurs in the troposphere. Methane molecules react with hydroxyl radicals (OH)—the "major chemical scavenger in the troposphere" that "controls the atmospheric lifetime of most gases in the troposphere". Through this CH4 oxidation process, atmospheric methane is destroyed and water vapor and carbon dioxide are produced; the initiating step is CH4 + OH → CH3 + H2O, after which the methyl radical is oxidized further, ultimately to CO2. While this decreases the concentration of methane in the atmosphere, it also adds to radiative forcing because the water vapor and carbon dioxide produced are themselves greenhouse gases. The additional water vapor in the stratosphere caused by CH4 oxidation adds approximately 15% to methane's radiative forcing effect. By the 1980s, analysis of the global warming problem had been transformed by including methane and other non-CO2 trace gases—CFCs, N2O, and O3—rather than focusing primarily on carbon dioxide. Both water and ice clouds, when formed at cold lower stratospheric temperatures, have a significant impact by increasing the atmospheric greenhouse effect, and large increases in future methane could lead to a surface warming that increases nonlinearly with the methane concentration. Methane also affects the degradation of the ozone layer, a region of the lower stratosphere from about 15 to 35 kilometers (9 to 22 mi) above Earth, just above the troposphere. NASA researchers said in 2001 that this process was enhanced by global warming, because warmer air holds more water vapor than colder air, so the amount of water vapor in the atmosphere increases as it is warmed by the greenhouse effect. Their climate models, based on data available at that time, indicated that carbon dioxide and methane enhance the transport of water into the stratosphere. Atmospheric methane could last about 120 years in the stratosphere until it is eventually destroyed through the hydroxyl radical oxidation process. Mean lifespan As of 2001, the mean lifespan of methane in the atmosphere was estimated at 9.6 years. However, increasing emissions of methane over time reduce the concentration of the hydroxyl radical in the atmosphere. With less OH to react with, the lifespan of methane could also increase, resulting in greater concentrations of atmospheric methane. By 2013, methane's mean lifetime in the atmosphere was estimated to be twelve years. The reaction of methane with chlorine atoms acts as a primary sink of Cl atoms and is a primary source of hydrochloric acid (HCl) in the stratosphere: CH4 + Cl → CH3 + HCl. The HCl produced in this reaction leads to catalytic ozone destruction in the stratosphere. Methanotrophs in soils Soils act as a major sink for atmospheric methane through the methanotrophic bacteria that reside within them. This occurs with two different types of bacteria. "High capacity-low affinity" methanotrophic bacteria grow in areas of high methane concentration, such as waterlogged soils in wetlands and other moist environments.
And in areas of low methane concentration, "low capacity-high affinity" methanotrophic bacteria make use of the methane in the atmosphere to grow, rather than relying on methane in their immediate environment.Forest soils act as good sinks for atmospheric methane because soils are optimally moist for methanotroph activity, and the movement of gases between soil and atmosphere (soil diffusivity) is high. With a lower water table, any methane in the soil has to make it past the methanotrophic bacteria before it can reach the atmosphere. Wetland soils, however, are often sources of atmospheric methane rather than sinks because the water table is much higher, and the methane can be diffused fairly easily into the air without having to compete with the soil's methanotrophs. Methanotrophic bacteria in soils – Methanotrophic bacteria that reside within soil use methane as a source of carbon in methane oxidation. Methane oxidation allows methanotrophic bacteria to use methane as a source of energy, reacting methane with oxygen and as a result producing carbon dioxide and water. CH4 + 2O2 → CO2 + 2H2O Removal technologies Various approaches have been suggested to actively remove methane from the atmosphere. In 2019, researchers proposed a technique for removing methane from the atmosphere using zeolite. Each molecule of methane would be converted into CO2, which has a far smaller impact on climate (99% less). Replacing all atmospheric methane with CO2 would reduce total greenhouse gas warming by approximately one-sixth.Zeolite is a crystalline material with a porous molecular structure. Powerful fans could push air through reactors of zeolite and catalysts to absorb the methane. The reactor could then be heated to form and release CO2. Because of methane's higher GWP, at a carbon price of $500/ton removing one ton of methane would earn $12,000.In 2021, Methane Action proposed adding iron to seawater sprays from ship smokestacks. The group claimed that an amount equal to approximately 10% of the iron dust that already reaches the atmosphere could readily restore methane to pre-industrial levels.Another approach is to apply titanium dioxide paint to large surfaces. Methane concentrations in the geologic past From 1996 to 2004, researchers in the European Project for Ice Coring in Antarctica (EPICA) project were able to drill and analyze gases trapped in the ice cores in Antarctica to reconstruct GHG concentrations in the atmosphere over the past 800,000 years". They found that prior to approximately 900,000 years ago, the cycle of ice ages followed by relatively short warm periods lasted about 40,000 years, but by 800,000 years ago the time interval changed dramatically to cycles that lasted 100,000 years. 
There were low values of GHG in ice ages, and high values during the warm periods. A 2016 EPA compilation of paleoclimate records shows methane concentrations over time based on analysis of ice-core gas bubbles and atmospheric measurements from EPICA Dome C, Antarctica (approximately 797,446 BCE to 1937 CE), Law Dome, Antarctica (approximately 1008 CE to 1980 CE), Cape Grim, Australia (1985 CE to 2015 CE), Mauna Loa, Hawaii (1984 CE to 2015 CE) and the Shetland Islands, Scotland (1993 CE to 2001 CE). The massive and rapid release of large volumes of methane gas from methane clathrates in sea-floor sediments into the atmosphere has been suggested as a possible cause of rapid global warming events in the Earth's distant past, such as the Paleocene–Eocene Thermal Maximum and the Great Dying. In 2001, scientists from NASA's Goddard Institute for Space Studies and Columbia University's Center for Climate Systems Research presented research at the annual meeting of the American Geophysical Union (AGU) confirming that greenhouse gases other than carbon dioxide were important factors in climate change. They offered a theory on the 100,000-year-long Paleocene–Eocene Thermal Maximum that occurred approximately 55 million years ago, positing that there was a vast release of methane that had previously been kept stable through "cold temperatures and high pressure...beneath the ocean floor". This methane release into the atmosphere resulted in the warming of the Earth. A 2009 article in the journal Science confirmed NASA research indicating that the contribution of methane to global warming had previously been underestimated. Early in the Earth's history, carbon dioxide and methane likely produced a greenhouse effect. The carbon dioxide would have been produced by volcanoes and the methane by early microbes. During this time, Earth's earliest life appeared. According to a 2003 article in the journal Geology, these ancient bacteria added to the methane concentration by converting hydrogen and carbon dioxide into methane and water. Oxygen did not become a major part of the atmosphere until photosynthetic organisms evolved later in Earth's history. With no oxygen, methane stayed in the atmosphere longer and at higher concentrations than it does today.
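As a back-of-the-envelope illustration of the recent growth rate mentioned in the monitoring section above, the short sketch below (not part of the source) uses the globally averaged concentrations quoted there, about 1803.1 ppb in 2011 and 1911.8 ppb in 2022, to estimate the implied average annual increase; the variable names are arbitrary.

c_2011, c_2022 = 1803.1, 1911.8  # globally averaged CH4 in ppb, values quoted above
years = 2022 - 2011
growth = (c_2022 - c_2011) / years
print(f"average growth: {growth:.1f} ppb per year")                        # about 9.9 ppb/yr
print(f"relative increase 2011-2022: {100 * (c_2022 / c_2011 - 1):.1f}%")  # about 6.0%

An average of roughly 10 ppb per year over 2011 to 2022 is markedly faster than the near-zero growth of the early 2000s noted earlier.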
cloud feedback
Cloud feedback is a type of climate change feedback that has been difficult to quantify in contemporary climate models. It can affect the magnitude of internally generated climate variability or the magnitude of climate change resulting from external radiative forcings. Cloud representations vary among global climate models, and small changes in cloud cover have a large impact on the climate. Global warming is expected to change the distribution and type of clouds. Seen from below, clouds emit infrared radiation back to the surface, and so exert a warming effect; seen from above, clouds reflect sunlight and emit infrared radiation to space, and so exert a cooling effect. Differences in planetary boundary layer cloud modeling schemes can lead to large differences in derived values of climate sensitivity. A model that decreases boundary layer clouds in response to global warming has a climate sensitivity twice that of a model that does not include this feedback. However, satellite data show that cloud optical thickness actually increases with increasing temperature. Whether the net effect is warming or cooling depends on details such as the type and altitude of the cloud; details that are difficult to represent in climate models. The closely related effective climate sensitivity has increased substantially in the latest generation of global climate models. Differences in the physical representation of clouds in models drive this enhanced sensitivity relative to the previous generation of models. Mechanisms Role as contributor to climate sensitivity Changes in cloud cover are one of several contributors to climate change and climate sensitivity. Current understanding in climate models When the IPCC began to produce its Sixth Assessment Report, many climate models began to show a higher climate sensitivity. The estimates for equilibrium climate sensitivity changed from 3.2 °C to 3.7 °C, and the estimates for the transient climate response from 1.8 °C to 2.0 °C. That is probably because of better understanding of the role of clouds and aerosols. In preparation for the 2021 IPCC Sixth Assessment Report, a new generation of climate models has been developed by scientific groups around the world. The average estimated climate sensitivity has increased in Coupled Model Intercomparison Project Phase 6 (CMIP6) compared to the previous generation, with values spanning 1.8 to 5.6 °C (3.2 to 10.1 °F) across 27 global climate models and exceeding 4.5 °C (8.1 °F) in 10 of them. The cause of the increased equilibrium climate sensitivity (ECS) lies mainly in improved modelling of clouds. Temperature rises are now believed to cause sharper decreases in the number of low clouds, and fewer low clouds mean more sunlight is absorbed by the planet and less reflected to space. Models with the highest ECS values, however, are not consistent with observed warming. A 2019 simulation predicts that if greenhouse gases reach three times the current level of atmospheric carbon dioxide, stratocumulus clouds could abruptly disperse, contributing to additional global warming. Relationship with other feedbacks In addition to how clouds themselves will respond to increased temperatures, other feedbacks affect cloud properties and formation. The amount and vertical distribution of water vapor are closely linked to the formation of clouds. Ice crystals have been shown to largely influence the amount of water vapor.
Water vapor in the subtropical upper troposphere has been linked to the convection of water vapor and ice. Changes in subtropical humidity could provide a negative feedback that decreases the amount of water vapor, which in turn would act to mediate global climate transitions. Changes in cloud cover are closely coupled with other feedbacks, including the water vapor feedback and ice–albedo feedback. Changing climate is expected to alter the relationship between cloud ice and supercooled cloud water, which in turn would influence the microphysics of the cloud and result in changes in its radiative properties. Climate models suggest that a warming will increase fractional cloudiness. The albedo of increased cloudiness cools the climate, resulting in a negative feedback, while the reflection of infrared radiation by clouds warms the climate, resulting in a positive feedback. Increasing temperatures in the polar regions are expected to increase the amount of low-level clouds, whose stratification prevents the convection of moisture to upper levels. This feedback would partially cancel the increased surface warming due to the cloudiness. However, this negative feedback has less effect than the positive feedbacks: the response of the upper atmosphere more than cancels the cooling, so rising CO2 concentrations ultimately reinforce the positive feedback as more CO2 enters the system. See also Fixed anvil temperature hypothesis
george c. marshall institute
The George C. Marshall Institute (GMI) was a nonprofit conservative think tank in the United States. It was established in 1984 with a focus on science and public policy issues, initially concentrating on defense policy. Starting in the late 1980s, the institute advocated for views in line with environmental skepticism, most notably climate change denial. The think tank received extensive financial support from the fossil fuel industry. Though the institute officially closed in 2015, the climate-denialist CO2 Coalition is viewed as its immediate successor. GMI's defense research was absorbed by the Center for Strategic and International Studies. History The George C. Marshall Institute was founded in 1984 by Frederick Seitz (former President of the United States National Academy of Sciences), Robert Jastrow (founder of NASA's Goddard Institute for Space Studies), and William Nierenberg (former director of the Scripps Institution of Oceanography). The institute's primary aim, initially, was to play a role in defense policy debates, defending Ronald Reagan's Strategic Defense Initiative (SDI, or "Star Wars"). In particular, it sought to defend SDI "from attack by the Union of Concerned Scientists, and in particular by the equally prominent physicists Hans Bethe, Richard Garwin, and astronomer Carl Sagan." The institute argued that the Soviet Union was a military threat. A 1987 article by Jastrow argued that in five years the Soviet Union would be so powerful that it would be able to achieve world domination without firing a shot. When the Cold War instead ended in the 1991 collapse of the Soviet Union, the institute shifted from an emphasis on defense to a focus on environmental skepticism, including global warming denial. The institute's shift to environmental skepticism began with the publication of a report on global warming by William Nierenberg. During the 1988 United States presidential election, George H. W. Bush had pledged to meet the "greenhouse effect with the White House effect." Nierenberg's report, which blamed global warming on solar activity, had a large impact on the incoming Bush presidency, strengthening those in it opposed to environmental regulation. In 1990 the institute's founders (Jastrow, Nierenberg and Seitz) published a book on climate change. The appointment of David Allan Bromley as presidential science advisor, however, saw Bush sign the United Nations Framework Convention on Climate Change in 1992, despite some opposition from within his administration. In 1994, the institute published a paper by its then chairman, Frederick Seitz, titled Global warming and ozone hole controversies: A challenge to scientific judgment. Seitz questioned the view that CFCs "are the greatest threat to the ozone layer". In the same paper, commenting on the dangers of secondary inhalation of tobacco smoke, he concluded "there is no good scientific evidence that passive inhalation is truly dangerous under normal circumstances." In 2012, the institute took over the responsibility for running the Missilethreat.com website from the Claremont Institute. Missilethreat.com aims to inform the American people of missile threats, thereby encouraging the deployment of a ballistic missile defense system. Since the closure of the institute, the Missilethreat.com website has been maintained by the Center for Strategic and International Studies. Publications Politicizing Science: The Alchemy of Policymaking is a book by the George C. Marshall Institute, edited by Michael Gough.
The book, published in 2003, encourages a disinterested objectivity on the part of scientists and policymakers: Ideally, the scientists or analysts who generate estimates of harm that may result from a risk would consider all the relevant facts and alternative interpretations of the data, and remain skeptical about tentative conclusions. Ideally, too, the agency officials and politicians, who have to enact a regulatory program, would consider its costs and benefits, ensure that it will do more good than harm, and remain open to options to stop or change the regulation in situations where the underlying science is tentative. Global warming Starting in 1989, GMI was involved in what it termed "a critical examination of the scientific basis for global climate change policy." This was described by Sharon Begley as a "central cog in the denial machine" in a 2007 Newsweek cover story on climate change denial. In Requiem for a Species, Clive Hamilton is critical of the Marshall Institute and contends that the conservative backlash against global warming research was led by three prominent physicists—Frederick Seitz, Robert Jastrow, and William Nierenberg, who founded the institute in 1984. According to Hamilton, by the 1990s the Marshall Institute's main activity was attacking climate science. Naomi Oreskes and Erik M. Conway reach a similar conclusion in Merchants of Doubt (2010), where they identified a few contrarian scientists associated with conservative think-tanks who fought the scientific consensus and spread confusion and doubt about global warming. The book Climate Change: An Encyclopedia of Science and History, noting that GMI received funding from the automobile and fossil fuel industries and espoused "a mix of conservative, neoliberal, and libertarian ideological positions", states that GMI has "supported authors opposed to the hypothesis of anthropogenic warming and proposed mitigation policies ... stressing the free-market and the dangers of government regulation, which they said would hurt the US economy." GMI was one of only a few conservative environmental-policy think tanks to have natural scientists on staff. Noted climate change deniers Sallie Baliunas and (until his death in 2008) Frederick Seitz (a past president of the National Academy of Sciences from 1962 to 1969) served on its board of directors. Patrick Michaels was a visiting scientist and Stephen McIntyre, Willie Soon and Ross McKitrick were contributing writers. Richard Lindzen served on the institute's Science Advisory Board. In February 2005, GMI co-sponsored a congressional briefing at which Senator James Inhofe praised Michael Crichton's novel State of Fear and attacked the "hockey stick graph". William O'Keefe, chief executive officer of the Marshall Institute, questioned the methods used by advocates of new government restrictions to combat global warming: "We have never said that global warming isn't real. No self-respecting think tank would accept money to support preconceived notions. We make sure what we are saying is both scientifically and analytically defensible." Accusation of conflict of interest Matthew B. Crawford was appointed executive director of GMI in September 2001. He left GMI after five months, saying that the institute was "fonder of some facts than others". He contended a conflict of interest existed in the funding of the institute. In Shop Class as Soulcraft, he wrote about the institute that "the trappings of scholarship were used to put a scientific cover on positions arrived at otherwise.
These positions served various interests, ideological or material. For example, part of my job consisted of making arguments about global warming that just happened to coincide with the positions taken by the oil companies that funded the think tank." In 1998, Jeffrey Salmon, then executive director of GMI, helped develop the American Petroleum Institute's strategy of stressing the uncertainty of climate science. Naomi Oreskes states that the institute, in order to resist and delay regulation, lobbied politically to create a false public perception of scientific uncertainty over the negative effects of second-hand smoke, the carcinogenic nature of tobacco smoking, the existence of acid rain, and the evidence connecting CFCs and ozone depletion. Funding sources Exxon-Mobil was a funder of GMI until it pulled funding from it and several similar organizations in 2008. From 1998 to 2008, the institute received a total of $715,000 in funding from Exxon-Mobil. See also Americans for Prosperity, Cato Institute, The Heartland Institute, Manhattan Institute
ice cap
In glaciology, an ice cap is a mass of ice that covers less than 50,000 km2 (19,000 sq mi) of land area (usually covering a highland area). Larger ice masses covering more than 50,000 km2 (19,000 sq mi) are termed ice sheets. Description Ice caps are not constrained by topographical features (i.e., they will lie over the top of mountains). By contrast, ice masses of similar size that are constrained by topographical features are known as ice fields. The dome of an ice cap is usually centred on the highest point of a massif. Ice flows away from this high point (the ice divide) towards the ice cap's periphery. Ice caps significantly affect the geomorphology of the area they occupy. Plastic moulding, gouging and other glacial erosional features become present upon the glacier's retreat. Many lakes, such as the Great Lakes in North America, as well as numerous valleys have been formed by glacial action over hundreds of thousands of years. The Antarctic and Greenland contain 99% of the ice volume on Earth, about 33 million cubic kilometres (7.9 million cubic miles) of total ice mass. Formation Ice caps are formed when snow is deposited during the cold season but does not completely melt during the hot season. Over time, the snow builds up and becomes dense, well-bonded snow known as perennial firn. Finally, the air passages between snow particles close off, and the firn transforms into ice. The shape of an ice cap is determined by the landscape it lies on, as melting patterns can vary with terrain. For example, the lower portions of an ice cap are forced to flow outwards under the weight of the entire ice cap and will follow the downward slopes of the land. Global warming Ice caps have been used as indicators of global warming, as increasing temperatures cause ice caps to melt and lose mass faster than they accumulate mass. Ice cap size can be monitored through different remote-sensing methods such as aircraft and satellite data. Ice caps accumulate snow on their upper surfaces, and ablate snow on their lower surfaces. An ice cap in equilibrium accumulates and ablates snow at the same rate. The accumulation area ratio (AAR) is the ratio between the accumulation area and the total area of the ice cap, which is used to indicate the health of the glacier. Depending on their shape and mass, healthy glaciers in equilibrium typically have an AAR of approximately 0.4 to 0.8. The AAR is affected by environmental conditions such as temperature and precipitation. Data from 86 mountain glaciers and ice caps show that over the long term, the AAR of glaciers has been about 0.57. In contrast, data from the most recent years of 1997–2006 yield an AAR of only 0.44. In other words, glaciers and ice caps are accumulating less snow and are out of equilibrium, causing melting and contributing to sea level rise. Assuming the climate continues to be in the same state as it was in 2006, it is estimated that ice caps will contribute a 95 ± 29 mm rise in global sea levels until they reach equilibrium. However, environmental conditions have worsened and are predicted to continue to worsen in the future. Given that the rate of melting will accelerate, and by using mathematical models to predict future climate patterns, the actual contribution of ice caps to rising sea levels is expected to be more than double the initial estimates.
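For illustration, the accumulation area ratio can be computed directly from the two areas it compares. The following minimal Python sketch uses hypothetical area figures (the 1,000 km2 and 440 km2 values are invented for the example); only the 0.57 long-term mean is taken from the figures above:

```python
def accumulation_area_ratio(accumulation_area_km2, total_area_km2):
    """AAR = accumulation area divided by the total area of the glacier or ice cap."""
    if total_area_km2 <= 0:
        raise ValueError("total area must be positive")
    return accumulation_area_km2 / total_area_km2

LONG_TERM_AAR = 0.57  # long-term average for 86 glaciers and ice caps, as quoted above

# Hypothetical ice cap of 1,000 km2 with 440 km2 still accumulating snow.
aar = accumulation_area_ratio(440.0, 1000.0)
print(f"AAR = {aar:.2f}")  # 0.44, matching the 1997-2006 average quoted above
if aar < LONG_TERM_AAR:
    print("Accumulation area is below its long-term share: the ice cap is out of balance.")
```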
Variants High-latitude regions covered in ice, though strictly not ice caps (since they exceed the maximum area specified in the definition above), are called polar ice caps; the usage of this designation is widespread in the mass media and arguably recognized by experts. Vatnajökull is an example of an ice cap in Iceland. Plateau glaciers are glaciers that overlie a generally flat highland area. Usually, the ice overflows as hanging glaciers in the lower parts of the edges. An example is Biscayarfonna in Svalbard.
energy in portugal
Energy in Portugal describes energy and electricity production, consumption and import in Portugal. Energy policy of Portugal describes the country's energy-related politics in more detail. Electricity sector in Portugal is the main article on electricity in Portugal. In 2000, 85% of energy was imported. In 2021 the last coal-fired power station closed and renewable energy was expanded to fill the gap. A target of being carbon neutral by 2050 has been set. Energy statistics Energy plans Portugal aims to be climate neutral by 2050 and to cover 80% of its electricity consumption with renewables by 2030. Portugal has also developed a hydrogen strategy to decrease natural gas imports and reduce greenhouse gas emissions by 2030. Energy sources Fossil fuels Coal The Sines power plant (hard coal) started operation in 1985–1989. According to WWF, its CO2 emissions were among the dirtiest in Portugal in 2007. That coal power plant went offline in January 2021, and the one remaining coal power plant in the country closed at 7:15 on 19 November 2021. Natural gas The Maghreb–Europe Gas Pipeline (MEG) is a natural gas pipeline from Algeria through Morocco to Andalusia, Spain. Portugal has the Sines LNG import terminal to facilitate gas imports. There are three LNG storage tanks with a total capacity of 390,000 cbm and a regasification capacity of 5.6 mtpa. In 2021 Portugal imported 2.8 billion cubic meters of LNG from Nigeria, almost 50% of the country's gas imports for the year. Renewable energy Renewable energy includes wind, solar, biomass and geothermal energy sources. Energy from renewable sources has been increasing in Portugal since 2000 and has been given a boost by the 2030 renewable energy target. Solar power Portugal supported and increased solar electricity (photovoltaic power) and solar thermal energy (solar heating) during 2006–2010. Portugal was 9th in solar heating in the EU and 8th in solar power based on total volume in 2010. The largest solar farm in Europe is being built in Santiago do Cacém near Sines, creating up to 2,500 jobs, mostly local; it is due to be completed in 2025 with a generating capacity of 1.2 GW. Wind power In 2023, plans for the first floating offshore wind farm were announced. Biomass Biomass provides around 8% of electricity generation capacity. Hydro power Portugal has also been using water power to generate electricity. In the 2010s, a local company, Wave Roller, installed many devices along the coast to make use of the water power. In 2021, 36% of Portugal's total installed power generation capacity and 23% of total power generation came from hydro power. Drought can seriously reduce hydro energy generation in the summer months. Nuclear power Portugal does not produce any electricity from nuclear sources. Transport The sustainable strategy has been a shift from individual to collective transport within the Lisbon Metropolitan Area (Metro Lisbon (ML), collective buses, Companhia Carris de ferro de Lisboa). Global warming According to the Energy Information Administration, the CO2 emissions from energy consumption of Portugal were 56.5 Mt in 2009, slightly over Bangladesh with 160 million people and Finland with 5.3 million people. The emissions per capita were (tonnes): Portugal 5.58, India 1.38, China 5.83, Europe 7.14, Russia 11.23, North America 14.19, Singapore 34.59 and United Arab Emirates 40.31. See also Renewable energy in Portugal
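The per-capita figures quoted above follow from dividing national emissions by population. The short Python sketch below illustrates the arithmetic; the population value is an approximate 2009 figure assumed for the example and is not taken from the source:

```python
# Per-capita emissions (tonnes per person) = national emissions (Mt) * 1e6 / population.
emissions_mt_2009 = 56.5      # Mt CO2 from energy consumption in 2009, from the text
population_2009 = 10.1e6      # approximate population of Portugal in 2009 (assumed)

tonnes_per_capita = emissions_mt_2009 * 1e6 / population_2009
print(f"{tonnes_per_capita:.2f} t CO2 per person")  # about 5.6, close to the 5.58 quoted above
```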
arctic sea ice decline
Sea ice in the Arctic has declined in recent decades in area and volume due to climate change. It has been melting more in summer than it refreezes in winter. Global warming, caused by greenhouse gas forcing, is responsible for the decline in Arctic sea ice. The decline of sea ice in the Arctic has been accelerating during the early twenty-first century, with a decline rate of 4.7% per decade (it has declined over 50% since the first satellite records). It is also thought that summertime sea ice will cease to exist sometime during the 21st century. The region is at its warmest in at least 4,000 years and the Arctic-wide melt season has lengthened at a rate of five days per decade (from 1979 to 2013), dominated by a later autumn freeze-up. The IPCC Sixth Assessment Report (2021) stated that Arctic sea ice area will likely drop below 1 million km2 in at least some Septembers before 2050. In September 2020, the US National Snow and Ice Data Center reported that the Arctic sea ice in 2020 had melted to an area of 3.74 million km2, its second-smallest area since records began in 1979. Sea ice loss is one of the main drivers of Arctic amplification, the phenomenon that the Arctic warms faster than the rest of the world under climate change. It is hypothesized that sea ice decline also makes the jet stream weaker, which would cause more persistent and extreme weather in mid-latitudes. Shipping is more often possible in the Arctic now, and expected to increase further. Both the disappearance of sea ice and the resulting possibility of more human activity in the Arctic Ocean pose a risk to local wildlife such as polar bears. Definitions The Arctic Ocean is the mass of water positioned approximately above latitude 65° N. Arctic sea ice refers to the area of the Arctic Ocean covered by ice. The Arctic sea ice minimum is the day in a given year when Arctic sea ice reaches its smallest extent, occurring at the end of the summer melting season, normally during September. The Arctic sea ice maximum is the day of a year when Arctic sea ice reaches its largest extent near the end of the Arctic cold season, normally during March. Typical data visualizations for Arctic sea ice include average monthly measurements or graphs for the annual minimum or maximum extent. Sea ice extent is defined as the area with at least 15% of sea ice cover; it is more often used as a metric than simple total sea ice area. This metric is used to address uncertainty in distinguishing open sea water from melted water on top of solid ice, which satellite detection methods have difficulty differentiating. This is primarily an issue in summer months. Observations A 2007 study found the decline to be "faster than forecasted" by model simulations. A 2011 study suggested that it could be reconciled by internal variability enhancing the greenhouse gas-forced sea ice decline over the last few decades. A 2012 study, with a newer set of simulations, also projected rates of retreat that were somewhat less than those actually observed. Satellite era Observation with satellites shows that Arctic sea ice area, extent, and volume have been in decline for a few decades. The amount of multi-year sea ice in the Arctic has declined considerably in recent decades. In 1988, ice that was at least 4 years old accounted for 26% of the Arctic's sea ice.
By 2013, ice that age was only 7% of all Arctic sea ice. Scientists measured sixteen-foot (five-meter) wave heights during a storm in the Beaufort Sea from mid-August until late October 2012. This is a new phenomenon for the region, since a permanent sea ice cover normally prevents wave formation. Wave action breaks up sea ice, and thus could become a feedback mechanism, driving sea ice decline. For January 2016, the satellite-based data showed the lowest overall Arctic sea ice extent of any January since records began in 1979. Bob Henson from Wunderground noted: Hand in hand with the skimpy ice cover, temperatures across the Arctic have been extraordinarily warm for midwinter. Just before New Year's, a slug of mild air pushed temperatures above freezing to within 200 miles of the North Pole. That warm pulse quickly dissipated, but it was followed by a series of intense North Atlantic cyclones that sent very mild air poleward, in tandem with a strongly negative Arctic oscillation during the first three weeks of the month. January 2016's remarkable phase transition of the Arctic oscillation was driven by a rapid tropospheric warming in the Arctic, a pattern that appears to have increased, surpassing the so-called stratospheric sudden warming. The record-low extent of Arctic Ocean ice cover, set in 2012, was 1.31 million square miles (3.387 million square kilometers). This replaced the previous record set on September 18, 2007, at 1.61 million square miles (4.16 million square kilometers). The minimum extent on September 18, 2019 was 1.60 million square miles (4.153 million square kilometers). A 2018 study of the thickness of sea ice found a decrease of 66% or 2.0 m over the last six decades and a shift from permanent ice to largely seasonal ice cover. Future ice loss An "ice-free" Arctic Ocean, sometimes referred to as a "Blue Ocean Event", is often defined as "having less than 1 million square kilometers of sea ice", because it is very difficult to melt the thick ice around the Canadian Arctic Archipelago. The IPCC AR5 defines "nearly ice-free conditions" as a sea ice extent of less than one million km2 for at least five consecutive years. Estimating the exact year when the Arctic Ocean will become "ice-free" is very difficult, due to the large role of interannual variability in sea ice trends. In Overland and Wang (2013), the authors investigated three different ways of predicting future sea ice levels. They noted that the average of all models used in 2013 was decades behind the observations, and only the subset of models with the most aggressive ice loss was able to match the observations. However, the authors cautioned that there is no guarantee those models would continue to match the observations, and hence that their estimate of ice-free conditions first appearing in the 2040s may still be flawed. Thus, they advocated for the use of expert judgement in addition to models to help predict ice-free Arctic events, but they noted that expert judgement could also be done in two different ways: directly extrapolating ice loss trends (which would suggest an ice-free Arctic in 2020) or assuming a slower decline trend punctuated by the occasional "big melt" seasons (such as those of 2007 and 2012), which pushes back the date to 2028 or further into the 2030s, depending on the starting assumptions about the timing and the extent of the next "big melt". Consequently, there has been a recent history of competing projections from climate models and from individual experts.
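The extent metric defined under Definitions above, together with the one-million-km2 threshold for an "ice-free" Arctic, can be illustrated with a minimal Python sketch. The grid cells below are invented toy values, not satellite data:

```python
# Sea ice extent: the total area of grid cells with at least 15% ice concentration.
CONCENTRATION_CUTOFF = 0.15
ICE_FREE_THRESHOLD_KM2 = 1_000_000  # common definition of an "ice-free" Arctic Ocean

def sea_ice_extent(cells):
    """cells: iterable of (cell_area_km2, ice_concentration between 0 and 1)."""
    return sum(area for area, concentration in cells if concentration >= CONCENTRATION_CUTOFF)

# Toy grid of four cells; areas and concentrations are illustrative only.
toy_grid = [(600_000, 0.80), (600_000, 0.40), (600_000, 0.10), (600_000, 0.00)]
extent = sea_ice_extent(toy_grid)
print(f"extent = {extent:,} km2")  # 1,200,000 km2
print("ice-free" if extent < ICE_FREE_THRESHOLD_KM2 else "not ice-free")
```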
Climate models A 2006 paper examined projections from the Community Climate System Model and predicted "near ice-free September conditions by 2040". A 2009 paper from Muyin Wang and James E. Overland applied observational constraints to the projections from six CMIP3 climate models and estimated a nearly ice-free Arctic Ocean around September 2037, with a chance it could happen as early as 2028. In 2012, this pair of researchers repeated the exercise with CMIP5 models and found that under the highest-emission scenario in CMIP5, Representative Concentration Pathway 8.5, ice-free September first occurs between 14 and 36 years after the baseline year of 2007, with a median of 28 years (i.e. around 2035). In 2009, a study using 18 CMIP3 climate models found that they project an ice-free Arctic a little before 2100 under a scenario of medium future greenhouse gas emissions. In 2012, a different team used CMIP5 models and their moderate emission scenario, RCP 4.5 (which represents somewhat lower emissions than the scenario in CMIP3), and found that while their mean estimate avoids an ice-free Arctic before the end of the century, ice-free conditions in 2045 were within one standard deviation of the mean. In 2013, a study compared projections from the best-performing subset of CMIP5 models with the output from all 30 models after it was constrained by the historical ice conditions, and found good agreement between these approaches. Altogether, it projected ice-free September between 2054 and 2058 under RCP 8.5, while under RCP 4.5, Arctic ice gets very close to the ice-free threshold in the 2060s, but does not cross it by the end of the century, and stays at an extent of 1.7 million km2. In 2014, the IPCC Fifth Assessment Report indicated a risk of ice-free summer around 2050 under the scenario of highest possible emissions. The Third U.S. National Climate Assessment (NCA), released May 6, 2014, reported that the Arctic Ocean is expected to be ice-free in summer before mid-century. Models that best match historical trends project a nearly ice-free Arctic in the summer by the 2030s. In 2021, the IPCC Sixth Assessment Report assessed that there is "high confidence" that the Arctic Ocean will likely become practically ice-free in September before the year 2050 under all SSP scenarios. A paper published in 2021 shows that the CMIP6 models which perform the best at simulating Arctic sea ice trends project the first ice-free conditions around 2035 under SSP5-8.5, which is the scenario of continually accelerating greenhouse gas emissions. By weighting multiple CMIP6 projections, the first year of an ice-free Arctic is likely to occur during 2040–2072 under the SSP3-7.0 scenario. Impacts on the physical environment Global climate change Arctic sea ice maintains the cool temperature of the polar regions and it has an important albedo effect on the climate. Its bright shiny surface reflects sunlight during the Arctic summer; the dark ocean surface exposed by the melting ice absorbs more sunlight and becomes warmer, which increases the total ocean heat content and helps to drive further sea ice loss during the melting season, as well as potentially delaying its recovery during the polar night. Arctic ice decline between 1979 and 2011 is estimated to have been responsible for as much radiative forcing as a quarter of CO2 emissions over the same period, which is equivalent to around 10% of the cumulative CO2 increase since the start of the Industrial Revolution.
When compared to the other greenhouse gases, it has had the same impact as the cumulative increase in nitrous oxide, and nearly half of the cumulative increase in methane concentrations. The effect of Arctic sea ice decline on global warming will intensify in the future as more and more ice is lost. This feedback has been accounted for by all CMIP5 and CMIP6 models, and it is included in all warming projections they make, such as the estimated warming by 2100 under each Representative Concentration Pathway and Shared Socioeconomic Pathway. They are also capable of resolving the second-order effects of sea ice loss, such as the effect on lapse rate feedback, the changes in water vapor concentrations and regional cloud feedbacks. Ice-free summer vs. ice-free winter In 2021, the IPCC Sixth Assessment Report said with high confidence that there is no hysteresis and no tipping point in the loss of Arctic summer sea ice. This can be explained by the increased influence of stabilizing feedbacks compared to the ice albedo feedback. Specifically, thinner sea ice leads to increased heat loss in the winter, creating a negative feedback loop. This counteracts the positive ice albedo feedback. As such, sea ice would recover during the winter even from a true ice-free summer, and if the next Arctic summer is less warm, it may avoid another ice-free episode until another similarly warm year down the line. However, higher levels of global warming would delay the recovery from ice-free episodes and make them occur more often and earlier in the summer. A 2018 paper estimated that an ice-free September would occur once in every 40 years under a global warming of 1.5 degrees Celsius, but once in every 8 years under 2 degrees and once in every 1.5 years under 3 degrees. Very high levels of global warming could eventually prevent Arctic sea ice from reforming during the Arctic winter. This is known as an ice-free winter, and it ultimately amounts to a total loss of Arctic ice throughout the year. A 2022 assessment found that unlike an ice-free summer, it may represent an irreversible tipping point. It estimated that it is most likely to occur at around 6.3 degrees Celsius, though it could potentially occur as early as 4.5 °C or as late as 8.7 °C. Relative to today's climate, an ice-free winter would add 0.6 degrees, with a regional warming between 0.6 and 1.2 degrees. Amplified Arctic warming Arctic amplification and its acceleration are strongly tied to declining Arctic sea ice: modelling studies show that strong Arctic amplification only occurs during the months when significant sea ice loss occurs, and that it largely disappears when the simulated ice cover is held fixed. Conversely, the high stability of ice cover in Antarctica, where the thickness of the East Antarctic ice sheet allows it to rise nearly 4 km above the sea level, means that this continent has not experienced any net warming over the past seven decades: ice loss in the Antarctic and its contribution to sea level rise is instead driven entirely by the warming of the Southern Ocean, which had absorbed 35–43% of the total heat taken up by all oceans between 1970 and 2017. Impacts on extreme weather Barents Sea ice The Barents Sea is the fastest-warming part of the Arctic, and some assessments now treat Barents sea ice as a separate tipping point from the rest of the Arctic sea ice, suggesting that it could permanently disappear once global warming exceeds 1.5 degrees.
This rapid warming also makes it easier than in any other area to detect potential connections between the state of sea ice and weather conditions elsewhere. The first study proposing a connection between floating ice decline in the Barents Sea and the neighbouring Kara Sea (together, the BKS region) and more intense winters in Europe was published in 2010, and there has been extensive research into this subject since then. For instance, a 2019 paper holds BKS ice decline responsible for 44% of the 1995–2014 central Eurasian cooling trend, far more than indicated by the models, while another study from that year suggests that the decline in BKS ice reduces snow cover in northern Eurasia but increases it in central Europe. There are also potential links to summer precipitation: a connection has been proposed between the reduced BKS ice extent in November–December and greater June rainfall over South China. One paper even identified a connection between Kara Sea ice extent and the ice cover of Lake Qinghai on the Tibetan Plateau. However, BKS ice research is often subject to the same uncertainty as the broader research into Arctic amplification/whole-Arctic sea ice loss and the jet stream, and is often challenged by the same data. Nevertheless, the most recent research still finds connections which are statistically robust, yet non-linear in nature: two separate studies published in 2021 indicate that while autumn BKS ice loss results in cooler Eurasian winters, ice loss during winter makes Eurasian winters warmer: as BKS ice loss accelerates, the risk of more severe Eurasian winter extremes diminishes while heatwave risk in the spring and summer is magnified. Other possible impacts on weather In 2019, it was proposed that the reduced sea ice around Greenland in autumn affects snow cover during the Eurasian winter, and this intensifies the Korean summer monsoon and indirectly affects the Indian summer monsoon. Research published in 2021 suggested that autumn ice loss in the East Siberian Sea, Chukchi Sea and Beaufort Sea can affect spring Eurasian temperature. Autumn sea ice decline of one standard deviation in that region would reduce mean spring temperature over central Russia by nearly 0.8 °C, while increasing the probability of cold anomalies by nearly a third. Atmospheric chemistry Cracks in sea ice can expose the food chain to greater amounts of atmospheric mercury. A 2015 study concluded that Arctic sea ice decline accelerates methane emissions from the Arctic tundra, with the emissions for 2005–2010 being around 1.7 million tonnes higher than they would have been with the sea ice at 1981–1990 levels. One of the researchers noted, "The expectation is that with further sea ice decline, temperatures in the Arctic will continue to rise, and so will methane emissions from northern wetlands." Shipping Economic implications of ice-free summers and the decline in Arctic ice volumes include a greater number of journeys across Arctic Ocean shipping lanes during the year. This number has grown from 0 in 1979 to 400–500 along the Bering Strait and >40 along the Northern Sea Route in 2013. Traffic through the Arctic Ocean is likely to increase further. An early study by James Hansen and colleagues suggested in 1981 that a warming of 5 to 10 °C, which they expected as the range of Arctic temperature change corresponding to doubled CO2 concentrations, could open the Northwest Passage.
A 2016 study concludes that Arctic warming and sea ice decline will lead to "remarkable shifts in trade flows between Asia and Europe, diversion of trade within Europe, heavy shipping traffic in the Arctic and a substantial drop in Suez traffic. Projected shifts in trade also imply substantial pressure on an already threatened Arctic ecosystem." In August 2017, the first ship traversed the Northern Sea Route without the use of ice-breakers. Also in 2017, the Finnish icebreaker MSV Nordica set a record for the earliest crossing of the Northwest Passage. According to the New York Times, this forebodes more shipping through the Arctic, as the sea ice melts and makes shipping easier. A 2016 report by the Copenhagen Business School found that large-scale trans-Arctic shipping will become economically viable by 2040. Impacts on wildlife The decline of Arctic sea ice will provide humans with access to previously remote coastal zones. As a result, this will lead to an undesirable effect on terrestrial ecosystems and put marine species at risk. Sea ice decline has been linked to boreal forest decline in North America and is assumed to culminate with an intensifying wildfire regime in this region. The annual net primary production of the Eastern Bering Sea was enhanced by 40–50% through phytoplankton blooms during warm years of early sea ice retreat. Polar bears are turning to alternative food sources because Arctic sea ice melts earlier and freezes later each year. As a result, they have less time to hunt their historically preferred prey of seal pups, and must spend more time on land and hunt other animals. Consequently, their diet is less nutritious, which leads to reduced body size and reproduction, indicating a population decline in polar bears. The Arctic refuge is the polar bears' main denning habitat, and the melting Arctic sea ice is putting the species there at risk. There are only about 900 bears in the Arctic refuge national conservation area. As Arctic ice decays, microorganisms produce substances with various effects on melting and stability. Certain types of bacteria in rotten ice pores produce polymer-like substances, which may influence the physical properties of the ice. A team from the University of Washington studying this phenomenon hypothesizes that the polymers may provide a stabilizing effect to the ice. However, other scientists have found algae and other microorganisms help create a substance, cryoconite, or create other pigments that increase rotting and increase the growth of the microorganisms. References Sources: IPCC (2021). Masson-Delmotte, V.; Zhai, P.; Pirani, A.; Connors, S. L.; et al. (eds.). Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. Fox-Kemper, Baylor; Hewitt, Helene T.; Xiao, Cunde; Aðalgeirsdóttir, Guðfinna; et al. (2021). "Chapter 9: Ocean, cryosphere, and sea level change". IPCC AR6 WG1 2021.
porpita porpita
Porpita porpita, or the blue button, is a marine organism consisting of a colony of hydroids found in the warmer, tropical and sub-tropical waters of the Pacific, Atlantic, and Indian oceans, as well as the Mediterranean Sea and eastern Arabian Sea. It was first identified by Carl Linnaeus in 1758, under the basionym Medusa porpita. Its genus, Porpita, is one of the two genera under the suborder Chondrophora, which is a group of cnidarians that also includes Velella. The chondrophores are similar to the better-known siphonophores, which include the Portuguese man o' war, or Physalia physalis. Although it is superficially similar to a jellyfish, each apparent individual is actually a colony of hydrozoan polyps. The taxonomic class, Hydrozoa, falls under the phylum Cnidaria, which includes anemones, corals, and jellyfish, which explains their similar appearances. Description The blue button can grow up to 30 mm in diameter, lives on the surface of the sea, and consists of two main parts: the float and the hydroid colony. The hard, golden-brown float is round, almost flat, and about one inch wide. The float organ is responsible for the organism's vertical movement and also contains pores that are able to communicate with other P. porpita organisms as well as its surroundings. The polyps of the hydroid colony can range from bright blue-turquoise to yellow and resemble the tentacles of a jellyfish. Each strand has numerous branchlets, each of which terminates in knobs of stinging cells called nematocysts. The blue button has a single mouth located beneath the float, which is used for both the intake of prey and the expulsion of wastes. The mouth is surrounded by a ring of gonozooids and dactylozooids. Tentacles are only found on the dactylozooids, which are located furthest away from the mouth, towards the outer part of the hydroid colony. Habitat and feeding The blue button is a part of the neustonic food web, which covers the organisms that inhabit the region on or near the surface of the ocean. This is because it is a passive drifter, which means that it relies on water currents and wind to carry it through the ocean. It is preyed on by the sea slug Glaucus atlanticus (sea swallow or blue dragon), violet sea-snails of the genus Janthina, and the other blue dragon, Glaucus marginatus. Unlike Velella, which prefers a passive diet, Porpita will hunt active prey such as crustaceans and fish. It competes with other drifters for food, and mainly feeds on copepods and crustacean larvae. Commensalism with a fish Young Carangoides malabaricus, also known as the 'Malabar trevally', have been shown to take shelter underneath the floats of Porpita porpita. When removed from its host, the fish will panic. These juvenile fish also appear to show preference for a particular siphonophore. When two pairs of Porpita porpita and Carangoides malabaricus are separated by species, then returned to the same tank, each fish will return to its respective partner. Effects of global warming The blue button sting is not powerful but may cause slight irritation to human skin. However, in recent years, it has been hypothesized that due to global warming, Porpita pacifica (another name for the species) colonies have begun appearing in larger numbers along coastlines in Japan, and the first case of contact dermatitis from this species was recorded.
A sudden increase in the abundance of Porpita porpita has also been observed in a separate study of its populations in the Ionian and Adriatic seas, possibly also due to rising temperatures throughout the oceans.
utqiagvik, alaska
Utqiagvik (UUT-kee-AH-vik; Inupiaq: Utqiaġvik, IPA: [utqe.ɑʁvik]), formerly known as Barrow (BARR-oh), is the borough seat and largest city of the North Slope Borough in the U.S. state of Alaska. Located north of the Arctic Circle, it is one of the northernmost cities and towns in the world and the northernmost in the United States, with nearby Point Barrow being the country's northernmost land. Utqiagvik's population was 4,927 at the 2020 census, an increase from 4,212 in 2010. It is the 12th-most populated city in Alaska. Name The location has been home to the Iñupiat, an indigenous Inuit ethnic group, for more than 1,500 years. The city's Iñupiaq name refers to a place for gathering wild roots. It is derived from the Iñupiat word utqiq, also used for Claytonia tuberosa ("Eskimo potato"). The name was first recorded by European explorers in 1853 as "Ot-ki-a-wing" by Commander Rochfort Maguire, Royal Navy. John Simpson's native map dated 1855 has the name "Otkiawik", which was later misprinted on a British Admiralty chart as "Otkiovik." The former name Barrow was derived from Point Barrow, and was originally a general designation, because non-native Alaskan residents found it easier to pronounce than the Inupiat name. Point Barrow was named after Sir John Barrow of the British Admiralty by explorer Frederick William Beechey in 1825. A post office was established in 1901, helping the name "Barrow" to become dominant. In an October 4, 2016, referendum, city voters narrowly approved changing its name to Utqiaġvik, which became official on December 1. City Council member Qaiyaan Harcharek said the name change supports the use of the Iñupiaq language and is part of a decolonization process. Another recorded Iñupiaq name is Ukpiaġvik (IPA: [ukpi.ɑʁvik]), which comes from ukpik "snowy owl" and is translated as "the place where snowy owls are hunted". A spelling which is a variant of this name was adopted by the Ukpeaġvik Iñupiat Corporation when it was established in 1973. History Prehistory to the 20th century Archaeological sites in the area indicate the Iñupiat lived around Utqiagvik as far back as 500 AD. Remains of 16 sod dwelling mounds, from the Birnirk culture of about 800, can be seen on the shore of the Arctic Ocean. Located on a slight rise above the high-water mark, they are at risk of being lost to erosion. Bill Streever, who chairs the North Slope Science Initiative's Science Technical Advisory Panel, wrote in his 2009 book Cold: Adventures in the World's Frozen Places: Barrow, like most communities in Alaska, looks temporary, like a pioneer settlement. It is not. Barrow is among the oldest permanent settlements in the United States. Hundreds of years before the European Arctic explorers showed up... Barrow was more or less where it is now, a natural hunting place at the base of a peninsula that pokes out into the Beaufort Sea... Yankee whalers sailed here, learning about the bowhead whale from Iñupiat hunters... Later, the military came, setting up a radar station, and in 1947 a science center was founded at Barrow. British Royal Navy officers came to the area to explore and map the Arctic coastline of North America. The US acquired Alaska in 1867. The United States Army established a meteorological and magnetic research station at Utqiagvik in 1881. In 1888, a Presbyterian church was built by United States missionaries at Utqiagvik. The church is still in use today. In 1889, a whaling supply and rescue station was built.
It is the oldest wood-frame building in Utqiagvik and is listed on the National Register of Historic Places. The rescue station was converted for use in 1896 as the retail Cape Smythe Whaling and Trading Station. In the late 20th century, the building was used as Brower's Cafe. 20th century to the present A United States Post Office was opened in 1901. In 1935, famous humorist Will Rogers and pilot Wiley Post made an unplanned stop at Walakpa Bay, 15 mi (24 km) south of Utqiagvik, en route to the city. As they took off again, their plane stalled and plunged into a river, killing them both. Two memorials have been erected at the location, which is now called the Rogers-Post Site. Another memorial is located in Utqiagvik, where the airport was renamed as the Wiley Post–Will Rogers Memorial Airport in their honor. In 1940, the indigenous Iñupiat organized as the Native Village of Barrow Iñupiat Traditional Government (previously, Native Village of Barrow), which is a federally recognized Alaska Native Iñupiat "tribal entity", as listed by the US Bureau of Indian Affairs around 2003. They wrote a constitution and by-laws, under the provisions of the Indian Reorganization Act (IRA) of 1934. An IRA corporation was also created. Utqiagvik was incorporated as a first-class city under the name Barrow in 1958. Natural gas lines were brought to the town in 1965, eliminating traditional heating sources such as whale blubber. The Barrow Duck-In was a civil disobedience event that occurred in the spring of 1961. The residents of the North Slope were the only Native people to vote on acceptance of the Alaska Native Claims Settlement Act; they rejected it. The act was passed in December 1971, and despite their opposition, became law. The Ukpeaġvik Iñupiat Corporation is the for-profit village corporation established under the act. In 1972, the North Slope Borough was established. The borough has built sanitation facilities, water and electrical utilities, roads, and fire departments, and established health and educational services in Utqiagvik and the villages of the North Slope with millions of dollars in new revenues from the settlement and later oil revenues. In 1986, the North Slope Borough created the North Slope Higher Education Center. Renamed Iḷisaġvik College, it is an accredited two-year college providing education which is based on the Iñupiat culture and the needs of the North Slope Borough. The Tuzzy Consortium Library, in the Iñupiat Heritage Center, serves the communities of the North Slope Borough and functions as the academic library for Iḷisaġvik College. The library was named after Evelyn Tuzroyluk Higbee, an important leader in the community. Utqiagvik, like many communities in Alaska, has enacted a "damp" law, prohibiting the sale of alcoholic beverages. However, the import, possession, and consumption of such beverages is still allowed. In 1988, Utqiagvik became the center of worldwide media attention when three California gray whales became trapped in the ice offshore. After a two-week rescue effort (Operation Breakthrough), a Soviet icebreaker freed two of the whales. Journalist Tom Rose details the rescue and the media frenzy that accompanied it in his 1989 book Freeing The Whales. The movie Big Miracle is based on the rescue and was released on February 3, 2012. Geography Utqiagvik is roughly 1,300 mi (2,100 km) south of the North Pole.
Only 2.6% of the Earth's surface lies as far from the equator as Utqiagvik or farther. According to the United States Census Bureau, the city has a total area of 21 sq mi (54 km2), of which 3 sq mi (7.8 km2) are covered by water (14% of the total area). The predominant land type in Utqiagvik is tundra, which is formed over a permafrost layer that is as much as 1,300 ft (400 m) deep. Utqiagvik is surrounded by the National Petroleum Reserve–Alaska. The city of Utqiagvik has three sections, which can be classified as south, central, and north; they are known to residents as Utqiagvik, Browerville, and NARL, respectively. The southernmost of the sections, known historically as the "Barrow side", is the oldest and second-largest of the three; it serves as downtown. This area includes Wiley Post–Will Rogers Memorial Airport, Barrow High School, North Slope Borough School District, and Fred Ipalook Elementary School, as well as restaurants, hotels, the police station, the Utqiagvik City Hall, a Wells Fargo bank, and numerous houses. The central section is the largest of the three and is called Browerville. This has traditionally been a residential area for the City of Utqiagvik, but in recent years, many businesses have opened or moved to this area. Browerville is separated from the south section by a series of lagoons, with two connecting dirt roads. This area, in addition to the houses, includes Tuzzy Consortium Library, the US Post Office, Eben Hopson Middle School, Samuel Simmonds Memorial Hospital, the Iñupiat Heritage Center, two grocery stores, one hotel, and two restaurants. The north section is the smallest and most isolated of the three sections, known to the residents as NARL because it was originally the site of the Naval Arctic Research Lab. It is connected to the central section only by Stevenson Street, which is a two-lane dirt road. The NARL facility was transferred by the federal government to the North Slope Borough, which adapted it for use as Iḷisaġvik College. This area also includes a small broadcasting station, which is run by the college students. An ancient crater about 5.0 mi (8 km) across, Avak, is situated near Utqiagvik. Climate Owing to its location 330 mi (530 km) north of the Arctic Circle, Utqiagvik's climate is cold and dry, classified as a tundra climate (Köppen ET). Winter weather can be extremely dangerous because of the combination of cold and wind, while summers are cool even at their warmest. Weather observations are available for Utqiagvik dating back to the late 19th century. The National Oceanic and Atmospheric Administration Climate Monitoring Lab operates in Utqiagvik. The United States Department of Energy has a climate observation site in Utqiagvik as part of its Atmospheric Radiation Measurement Climate Research Facility. Despite the extreme northern location, temperatures at Utqiagvik are moderated by the surrounding topography. The Arctic Ocean is on three sides, and flat tundra stretches some 200 mi (320 km) to the south. No wind barriers or protected valleys exist where dense cold air can settle or form temperature inversions in the lower atmosphere, as commonly happens in the interior between the Brooks and the Alaska ranges. Utqiagvik has the lowest average temperatures of cities in Alaska. Although Utqiagvik rarely records the lowest temperatures statewide during cold waves, extremely low wind chill and "white out" conditions from blowing snow are very common.
Temperatures remain below freezing from early October through late May and below 0 °F (−18 °C) from December through March. The high temperature reaches or tops the freezing point on an average of only 136 days per year, and 92 days have a maximum at or below 0 °F (−18 °C). Freezing temperatures and snowfall can occur during any month of the year. Regarding precipitation, Utqiagvik has a desert climate, and averages less than 6 in (150 mm) "rainfall equivalent" per year. One inch of rain is estimated to have the same water content as 12 in (30 cm) of snow. According to 1981–2010 normals, this includes 37 in (94 cm) of snow, compared to 99 in (250 cm) for Kuujjuaq in Nunavik, Quebec, or 87 in (220 cm) and 69 in (180 cm) for much warmer Juneau and Kodiak, Alaska, respectively. Even Sable Island, at around 44 degrees latitude and under the influence of the Gulf Stream, received 44 in (110 cm), or 20 percent more snowfall than Utqiagvik. Snowfall in Utqiagvik has increased in recent years, with an average annual snowfall of 46 in (120 cm) according to the more recent 1991–2020 normals. The first snow (defined as snow that will not melt until next spring) generally falls during the first week of October, when temperatures cease to rise above freezing during the day. October is usually the month with the heaviest snowfall, with measurable amounts occurring on over half the days and a 1991–2020 normal total accumulation of 10.3 in (26 cm). By the end of October, the amount of daylight is around 6 hours. When the sun sets on November 18, it stays below the horizon until January 23, resulting in a polar night that lasts for about 66 days. When the polar night starts, about 6 hours of civil twilight occur, with the amount decreasing each day during the first half of the polar night. On the winter solstice (around December 21 or December 22), civil twilight in Utqiagvik lasts for a mere 3 hours. After this, the amount of civil twilight increases each day to around 6 hours at the end of the polar night. Particularly cold weather usually begins in January, and February is generally the coldest month, averaging −11.9 °F (−24.4 °C). By March 1, the sun is up for 9 hours, and temperatures begin to warm, though winds are usually higher. Starting on March 23, true night (as a phase of the day) no longer occurs; there is only daylight and twilight until the start of the midnight sun in May. This is also true from the end of the midnight sun at the beginning of August to September 22. April brings less extreme temperatures, with an average of 4.0 °F (−15.6 °C), and on April 1, the sun is up for more than 14 hours. By May 1, the sun is up for 19 hours, and by May 10 or 11 (depending on the year's relationship to the nearest leap year) the sun will stay above the horizon for the entire day. This phenomenon is known as the midnight sun. The sun does not set for 83 days, until either August 1 or 2 (again depending on the year's relationship to the nearest leap year). In May, the temperatures are much warmer, averaging 22.7 °F (−5.2 °C). On June 6, the daily mean temperature rises above freezing, and the normal daily mean temperature remains above freezing until September 21. July is the warmest month of the year, with a normal mean temperature of 41.7 °F (5.4 °C). Beginning in mid-July, the Arctic Ocean is relatively ice-free, and remains so until late October.
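As a rough cross-check of the precipitation figures above, the 12-to-1 snow-to-liquid rule of thumb can be applied to the 1991–2020 snowfall normal in a short Python sketch. This is only an approximation, since the real ratio varies with snow density:

```python
# Liquid-water equivalent of the annual snowfall, using the 12 in snow ~ 1 in rain rule above.
annual_snowfall_in = 46        # 1991-2020 normal snowfall, from the text
snow_to_liquid_ratio = 12      # rule of thumb; actual snow density varies

liquid_equivalent_in = annual_snowfall_in / snow_to_liquid_ratio
print(f"~{liquid_equivalent_in:.1f} in of water")  # about 3.8 in, consistent with the
                                                   # under-6-in annual "rainfall equivalent"
```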
The highest temperature recorded in Utqiagvik was 79 °F (26 °C) on July 13, 1993, while the lowest was −56 °F (−49 °C) on February 3, 1924; the highest minimum was 56 °F (13 °C) on August 5, 2023, while the lowest maximum was −47 °F (−44 °C) on January 3, 1975. On average during the 1991 to 2020 reference period, the coldest winter maximum was −29 °F (−34 °C) and the warmest summer minimum was 47 °F (8 °C). Utqiagvik records an average of 26 days per year where the high reaches at least 50 °F (10 °C). Temperatures above 60 °F (16 °C) are rare, but have been recorded in most years. Even in July and August, the low falls to or below the freezing mark on an average of 18 days. In addition to its low temperatures and polar night, Utqiagvik is one of the cloudiest places on Earth. Owing to the prevailing easterly winds off the Arctic Ocean, it is completely overcast slightly more than 50% of the year. It is at least 70% overcast some 62% of the time. Cloud types are mainly low stratus and fog; cumulus forms are rare. Peak cloudiness occurs in August and September when the ocean is ice-free. Dense fog occurs an average of 65 days per year, mostly in the summer months. Ice fog is very common during the winter months, especially when the temperature drops below −30 °F (−34 °C). Variation of wind speed during the year is limited, with the fall months being the windiest. Extreme winds from 40 to 60 mph (64 to 97 km/h) have been recorded in every month. Winds average 12 mph (19 km/h) and are typically from the east. Consequences of global warming The Arctic region is warming three times as fast as the global average, forcing major adjustments to life on the North Slope, including hunting and whaling practices and patterns of habitation that developed over the prior millennium. Thinner sea ice makes it more dangerous for springtime whalers to land struck bowhead whales on the offshore ice. Caribou habitat is also affected, while thawing soil threatens homes and municipal and commercial structures. The city's infrastructure, particularly water, sanitation, power, and road stability, is endangered. The shoreline is rapidly eroding and has been encroaching on buildings for decades. According to Dr. Harold Wanless of the University of Miami, a continued rise in sea level as a consequence of global warming is inevitable, meaning the existence of Utqiagvik at its current location is doomed in the geologically short term. Smoothed data from NOAA show that Utqiagvik has warmed by more than 11 °F (6.1 °C) since 1976. On December 5, 2022, Utqiagvik broke its previous record for the warmest winter temperature, hitting 40 °F (4 °C). Demographics The town first appeared in census records on the 1880 U.S. Census as the unincorporated Inuit village of "Ootiwakh". All 225 of its residents were Inuit. In 1890, the community and surrounding area were returned as the "Cape Smythe Settlements", which included the refuge and whaling stations, the Pengnok, Utkeavie, and Kugaru (Inaru) River villages, four other camps, and the whaling steamer Balaena. Of the 246 residents, 189 were Natives, 46 were White, one was Asian, and 10 were of other races. This did not include nearby Point Barrow, which was a separate community. In 1900, it was again reported as "Cape Smythe Settlements". In 1910, it was first reported as Barrow, a name it kept in every successive census through 2010. It formally incorporated in 1959. The native name of Utqiagvik was adopted in 2016 and appeared on the 2020 census. As of the 2010 United States Census, 4,212 people were living in the city.
The racial makeup of the city was 60.5% Alaskan Native, 16.2% White, 0.9% African American, 8.9% Asian, 2.3% Pacific Islander, and 8.1% from two or more races; 3.1% were Hispanic or Latino of any race. As of the census of 2000, there were 4,683 people, 1,399 households, and 976 families living in the city. The population density was 249.0 inhabitants per square mile (96.1/km2). There were 1,620 housing units at an average density of 88.1 per square mile (34.0/km2). The racial makeup of the city was 57.2% Alaska Native, 21.8% White, 9.4% Asian, 1.0% African American, 1.4% Pacific Islander, 0.7% from other races, and 8.5% from two or more races. Hispanics or Latinos of any race were 3.3% of the population. Of the 1,399 households, 56.5% had children under 18 living with them, 45.2% were married couples living together, 14.8% had a female householder with no husband present, and 28.0% were not families; 23.0% of all households were made up of individuals, and 1.8% had someone living alone who was 65 or older. The average household size was 3.35, and the average family size was 4.80. In Utqiagvik, the age distribution was 27.7% under 18, 13.3% from 18 to 24, 31.6% from 25 to 44, 19.4% from 45 to 64, and 3.4% who were 65 or older. The median age was 29 years. For every 100 females, there were 107.1 males. For every 100 females age 18 and over, there were 109.5 males. The median income for a household in the city was $63,094.09, and the median income for a family was $68,223. Males had a median income of $51,959 versus $46,382 for females. The per capita income for the city was $22,902. About 7.7% of families and 8.6% of the population were below the poverty line, including 7.2% of those under 18 and 13.1% of those 65 and older. As of December 2022, the town's website stated: "The largest city in the North Slope Borough, Utqiagvik has 4,429 residents, of which approximately 61% are Iñupiat Eskimo." Economy Utqiagvik is the economic center of the North Slope Borough, which is the city's primary employer. Many businesses provide support services to oil field operations. State and federal agencies are also employers. The midnight sun has attracted tourism, and arts and crafts provide some cash income. Because transporting food to the city is very expensive, many residents continue to rely upon subsistence food sources. Whale, seal, polar bear, walrus, waterfowl, caribou, and fish are harvested from the coast or nearby rivers and lakes. Utqiagvik is the headquarters of the Arctic Slope Regional Corporation, one of the Alaska Native corporations set up following the Alaska Native Claims Settlement Act in 1971 to manage revenues and invest in development for their people in the region. Politics The city is the center of the North Slope Borough, and has been a swing city for presidential elections. A substantial number of third-party voters have existed from time to time. Arts and culture Special events Kivgiq, the Messenger Feast, is "officially" held every two or three years in late January or early February, at the discretion of the North Slope Borough mayor, though in more recent times it has been held almost every year. Kivgiq is an international event that attracts visitors from around the Arctic Circle. Piuraagiaqta, the Spring Festival, celebrates breaking a path in the ice for boats to hunt whales. Held in mid-April, it includes many outdoor activities. Nalukataq, the Blanket Toss Celebration, is held on multiple days beginning in the third week of June to celebrate each successful spring whale hunt.
July 4, Independence Day, in Utqiagvik is time for Eskimo games, such as the two-foot high kick and ear pull, with the winners going on to compete at the World Eskimo Indian Olympics. Whaling generally happens during the second week of October. Qitik, Eskimo Games, also known as Christmas Games, are held from December 26 through January 1. Depictions in popular culture Singer-songwriter John Denver visited the town for his 1979 television special Alaska, The American Child.The ABC TV special “The Night They Saved Christmas” was filmed here, and first aired December 13, 1984.Fran Tate, a local restaurant owner, was a frequent guest by telephone on a Chicago radio program, the Steve and Johnnie Show on WGN, during the 1990s. She also appeared on the Tonight Show with Johnny Carson.The town is the setting for a series of horror comic books titled 30 Days of Night. A commercially successful film, named after and based upon the comic, was released on October 19, 2007, followed by a straight-to-video sequel on July 23, 2010.Karl Pilkington is sent to the town in the second season of An Idiot Abroad.On the Ice, a film released in 2011 about teenagers dealing with a tragic accidental death, was filmed entirely in the town, with locals acting in most roles.Big Miracle, a 2012 film starring Drew Barrymore, is based on the true story of whales trapped under ice near Point Barrow, and features scenes in and characters from the town.Stephen Fry visited the town and its people during the last segment of his documentary Stephen Fry in America.In 2015, the NFL Network began an eight-part documentary series focusing on the Barrow High School Whalers football team. Sports Football On August 19, 2006, the Whalers of Barrow High School played the first official football game in the Arctic against Delta Junction High School. Barrow High School recorded its first win two weeks later; the coaches and players celebrated the historic win by jumping into the Arctic Ocean, just 100 yd (91 m) from the makeshift dirt field. On August 17, 2007, the Whalers football team played their first game of the season on their new artificial-turf field. The historic game which was attended by former Miami Dolphins player Larry Csonka, was the first live Internet broadcast of a sporting event in the United States from north of the Arctic Circle.Since the team's formation, it has gathered a record of 33–24, and most recently, the team reached the semifinal round of the Alaskan State Small School Football Championship.In 2017, The Barrow High School football team won their first ever state championship with a win against the Homer Mariners 20–14. Basketball In 2015, the Barrow High School boys' basketball team won the Alaska Class 3A State Championship with a 50–40 victory over two-time defending state champion, Monroe Catholic. The Whalers' team was led by 5-star recruit Kamaka Hepa. As a 6'7" freshman he was regarded as one of the top basketball recruits in the country. He was ranked as the #21 ranked basketball recruit in the country by ESPN for the class of 2018. Hepa transferred to Jefferson High School in Portland, Oregon, for his junior year. By October 2017, at 6'8" tall, he had committed to go to the University of Texas.The Whalers' boys' basketball team finished the 2014–2015 season with a 24–3 record, the highest win percentage in school history. Guard Travis Adams was a standout as well. Coach Jeremy Arnhart's teams won 186 games in 10 seasons. 
In 2015, the Barrow High School girls' team also easily won the ACS tournament. Education Utqiagvik is served by the North Slope Borough School District. The schools serving the city are Ipalook Elementary School, Hopson Middle School, Barrow High School, and an alternative learning center known as the Kiita Learning Community. Iḷisaġvik College, a two-year college and the only tribal college in Alaska, is located in Utqiagvik. The school offers associate's degrees in accounting, allied health, business and management, construction technology, dental health therapy, Indigenous education, information technology, Iñupiaq studies, liberal arts, and office management. It also offers a bachelor's degree in business administration. The school additionally offers adult education courses for GED preparation and certificates in various programs. Local students may attend the University of Alaska Fairbanks and other colleges in Alaska and in other states. Media Newspaper The Arctic Sounder is a newspaper published weekly by Alaska Media, LLC, which covers news of interest to the North Slope Borough, which includes Utqiagvik, and the Northwest Arctic Borough, which includes Kotzebue in northwestern Alaska. Radio KBRW (AM)/KBRW-FM broadcasts in Utqiagvik on 680 kHz AM and 91.9 MHz FM. KBRW is also broadcast via FM repeaters in all of the North Slope Borough villages, from Kaktovik to Point Hope. Infrastructure Transportation The roads in Utqiagvik are unpaved due to the permafrost, and no roads connect the city to the rest of Alaska. Utqiagvik is served by Alaska Airlines with passenger jet service at the Wiley Post–Will Rogers Memorial Airport from Anchorage and Fairbanks. New service from Era Aviation between Fairbanks and Anchorage began on June 1, 2009. Freight arrives by air cargo year round and by ocean-going marine barges during the annual summer sealift. Utqiagvik is the transportation hub for the North Slope Borough's Arctic coastal villages. Multiple jet aircraft, with service from Deadhorse (Prudhoe Bay), Fairbanks, and Anchorage, provide daily mail, cargo, and passenger services, which connect with smaller single- and twin-engined general aviation aircraft that provide regular service to other villages, from Kaktovik in the east to Point Hope in the west. The town is also served by several radio taxi services, most using small four-wheel drive vehicles. Health care Samuel Simmonds Memorial Hospital, located in the City of Utqiagvik, is the primary healthcare facility for the North Slope region of Alaska. Individuals in need of medical care in the city are able to access the hospital by road. Because no roads lead in or out of Utqiagvik, though, individuals in surrounding communities and towns (including Point Hope, Prudhoe Bay, and Wainwright) must be airlifted in by plane, helicopter, or air ambulance. The facility operates continuously, and is the northernmost hospital or medical facility in the United States. Notable people Sadie Neakok (1916–2004), first female magistrate in Alaska Eben Hopson (1922–1980), former member of the Alaska Senate Harry Brower Sr.
(1924–1992), whaling captain, community leader John Nusunginya (1927–1981), former member of the Alaska House of Representatives Edna Ahgeak MacLean (born 1944), linguist, educator, and former President of Iḷisaġvik College Tara Sweeney (born 1973), former Assistant Secretary at the United States Department of the Interior Morgan Kibby (born 1984), actress, singer, songwriter Josiah Patkotak (born 1994), former member of the Alaska House of Representatives, mayor of the North Slope Borough Kamaka Hepa (born 2000), college basketball player for the Texas Longhorns and Hawaii Rainbow Warriors See also Notes References Further reading Dekin, Albert A. Jr. (June 1987). "Sealed in Time". National Geographic. Vol. 171, no. 6. pp. 824–836. ISSN 0027-9358. OCLC 643483454. National Science Foundation Barrow area cartography The Papers of Palmer W. Roberts on Eskimos at Point Barrow at Dartmouth College Library The Papers of Albert Dekin on the Recovered Remains of the Barrow Inuit Population at Dartmouth College Library The Papers of Charles D. Brower, Postmaster of Barrow at Dartmouth College Library External links Official website Utqiagvik Sea Ice Webcam Utqiagvik, Alaska at Curlie Iñupiat Heritage Center (IHC) - Official museum website CAC (Civil Applications Committee)/USGS Global Fiducials Program web page containing scientific description and interactive map viewer featuring declassified high-resolution time-series imagery Barrow, Alaska Visitor's Guide July 1993 weather record Barrow land development Archived July 26, 2017, at the Wayback Machine
global warming taxes
In response to widespread concerns about a general increase in the temperature of the Earth's climate, a number of tax jurisdictions have proposed or imposed global warming taxes intended to generate revenues to mitigate the effects of the human activities contributing to global warming or to discourage such activities. History The idea of using taxes to fix problems, rather than merely raise government revenue, has a long history. The British economist Arthur Pigou advocated such corrective taxes to deal with pollution in the early 20th century. In his honor, economics textbooks now call them “Pigovian taxes.” Using a Pigovian tax to address global warming is also an old idea. It was proposed as far back as 1992 by Martin S. Feldstein, once chief economist to Ronald Reagan, on the editorial page of The Wall Street Journal. Jurisdictions Imposing Global Warming Taxes California The San Francisco Bay Area Air Quality Management District has proposed a carbon dioxide emission fee that would charge businesses 4.2 cents for every metric ton of carbon dioxide released. The fee, estimated to raise $1.2 million annually from businesses in the nine-county San Francisco Bay Area region, would take effect on July 1, 2008; in most cases it is not intended to deter greenhouse gas production, but rather to pay for the cost of monitoring greenhouse gases. California Assembly Bill 2358 would take the existing $34 vehicle registration fee and add up to an extra $25 based on the unladen weight of the vehicle. A second tax of up to $25 would be added based on the level of carbon dioxide emissions. Together, the new fees would more than double the total possible registration fee to $84. California Assembly Bill 2558 would give the Los Angeles County Metropolitan Transportation Authority, which operates bus and light rail service, the authority to impose either a three-percent sales tax on gas (11 cents per gallon at current prices) or a $90 hike in vehicle registration taxes. Funds raised from motorists would then be placed in a Climate Change Mitigation and Adaptation Fund that would be used on "public transit projects and programs."
permian–triassic extinction event
The Permian–Triassic (P–T, P–Tr) extinction event (PTME; also known as the Late Permian extinction event, the Latest Permian extinction event, the End-Permian extinction event, and colloquially as the Great Dying) occurred approximately 251.9 million years ago and forms the boundary between the Permian and Triassic geologic periods, and with them the Paleozoic and Mesozoic eras respectively. It is the Earth's most severe known extinction event, with the extinction of 57% of biological families, 83% of genera, 81% of marine species and 70% of terrestrial vertebrate species. It is also the largest known mass extinction of insects. It is the largest of the "Big Five" mass extinctions of the Phanerozoic. There is evidence for one to three distinct pulses, or phases, of extinction. The precise causes of the Great Dying remain unknown. The scientific consensus is that the main cause of extinction was the flood basalt volcanic eruptions that created the Siberian Traps, which released sulfur dioxide and carbon dioxide, resulting in euxinia, elevating global temperatures, and acidifying the oceans. The level of atmospheric carbon dioxide rose from around 400 ppm to 2,500 ppm, with approximately 3,900 to 12,000 gigatonnes of carbon being added to the ocean-atmosphere system during this period. Important proposed contributing factors include the emission of much additional carbon dioxide from the thermal decomposition of hydrocarbon deposits, including oil and coal, triggered by the eruptions; emissions of methane from the gasification of methane clathrates; emissions of methane possibly by novel methanogenic microorganisms nourished by minerals dispersed in the eruptions; an extraterrestrial impact creating the Araguainha crater and consequent seismic release of methane; and the destruction of the ozone layer and increase in harmful solar radiation. Dating Previously, it was thought that rock sequences spanning the Permian–Triassic boundary were too few and contained too many gaps for scientists to reliably determine its details. However, it is now possible to date the extinction with millennial precision. U–Pb zircon dates from five volcanic ash beds from the Global Stratotype Section and Point for the Permian–Triassic boundary at Meishan, China, establish a high-resolution age model for the extinction – allowing exploration of the links between global environmental perturbation, carbon cycle disruption, mass extinction, and recovery at millennial timescales. The first appearance of the conodont Hindeodus parvus has been used to delineate the Permian–Triassic boundary. The extinction occurred between 251.941 ± 0.037 and 251.880 ± 0.031 million years ago, a duration of 60 ± 48 thousand years. A large, abrupt global decrease in δ13C, the ratio of the stable isotope carbon-13 to that of carbon-12, coincides with this extinction, and is sometimes used to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating. The negative carbon isotope excursion's magnitude was about 4–7‰, and it lasted for approximately 500 kyr, though estimating its exact value is challenging due to diagenetic alteration of many sedimentary facies spanning the boundary. Further evidence for environmental change around the Permian–Triassic boundary suggests an 8 °C (14 °F) rise in temperature, and an increase in CO2 levels to 2,500 ppm (for comparison, the concentration immediately before the Industrial Revolution was 280 ppm, and the amount today is about 415 ppm).
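The quoted duration follows from differencing the two boundary ages given above. The sketch below reproduces it, combining the reported uncertainties in quadrature; this is an assumed, simplified treatment of independent errors for illustration, not necessarily the method used in the original geochronological study.

```python
import math

# Difference of the two U-Pb boundary ages quoted above, with uncertainties
# combined in quadrature (assumed independent errors).
onset_ma, onset_err = 251.941, 0.037   # onset of extinction, million years ago
end_ma, end_err = 251.880, 0.031       # end of extinction, million years ago

duration_kyr = (onset_ma - end_ma) * 1000
uncertainty_kyr = math.sqrt(onset_err**2 + end_err**2) * 1000
print(f"{duration_kyr:.0f} ± {uncertainty_kyr:.0f} kyr")  # ~61 ± 48 kyr, matching the quoted 60 ± 48 kyr
```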
There is also evidence of increased ultraviolet radiation reaching the earth, causing the mutation of plant spores.It has been suggested that the Permian–Triassic boundary is associated with a sharp increase in the abundance of marine and terrestrial fungi, caused by the sharp increase in the amount of dead plants and animals fed upon by the fungi. This "fungal spike" has been used by some paleontologists to identify a lithological sequence as being on or very close to the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating or have a lack of suitable index fossils. However, even the proposers of the fungal spike hypothesis pointed out that "fungal spikes" may have been a repeating phenomenon created by the post-extinction ecosystem during the earliest Triassic. The very idea of a fungal spike has been criticized on several grounds, including: Reduviasporonites, the most common supposed fungal spore, may be a fossilized alga; the spike did not appear worldwide; and in many places it did not fall on the Permian–Triassic boundary. The Reduviasporonites may even represent a transition to a lake-dominated Triassic world rather than an earliest Triassic zone of death and decay in some terrestrial fossil beds. Newer chemical evidence agrees better with a fungal origin for Reduviasporonites, diluting these critiques.Uncertainty exists regarding the duration of the overall extinction and about the timing and duration of various groups' extinctions within the greater process. Some evidence suggests that there were multiple extinction pulses or that the extinction was long and spread out over a few million years, with a sharp peak in the last million years of the Permian. Statistical analyses of some highly fossiliferous strata in Meishan, Zhejiang Province in southeastern China, suggest that the main extinction was clustered around one peak, while a study of the Liangfengya section found evidence of two extinction waves, MEH-1 and MEH-2, which varied in their causes, and a study of the Shangsi section showed two extinction pulses with different causes too. Recent research shows that different groups became extinct at different times; for example, while difficult to date absolutely, ostracod and brachiopod extinctions were separated by around 670,000 to 1.17 million years. Palaeoenvironmental analysis of Lopingian strata in the Bowen Basin of Queensland indicates numerous intermittent periods of marine environmental stress from the middle to late Lopingian leading up to the end-Permian extinction proper, supporting aspects of the gradualist hypothesis. Additionally, the decline in marine species richness and the structural collapse of marine ecosystems may have been decoupled as well, with the former preceding the latter by about 61,000 years according to one study.Whether the terrestrial and marine extinctions were synchronous or asynchronous is another point of controversy. Evidence from a well-preserved sequence in east Greenland suggests that the terrestrial and marine extinctions began simultaneously. In this sequence, the decline of animal life is concentrated in a period approximately 10,000 to 60,000 years long, with plants taking an additional several hundred thousand years to show the full impact of the event. Many sedimentary sequences from South China show synchronous terrestrial and marine extinctions. Research in the Sydney Basin of the PTME’s duration and course also supports a synchronous occurrence of the terrestrial and marine biotic collapses. 
Other scientists believe the terrestrial mass extinction began between 60 and 370 thousand years before the onset of the marine mass extinction. Chemostratigraphic analysis from sections in Finnmark and Trøndelag shows the terrestrial floral turnover occurred before the large negative δ13C shift during the marine extinction. Dating of the boundary between the Dicynodon and Lystrosaurus assemblage zones in the Karoo Basin indicates that the terrestrial extinction occurred earlier than the marine extinction. The Sunjiagou Formation of South China also records a terrestrial ecosystem demise predating the marine crisis.Studies of the timing and causes of the Permian-Triassic extinction are complicated by the often-overlooked Capitanian extinction (also called the Guadalupian extinction), just one of perhaps two mass extinctions in the late Permian that closely preceded the Permian-Triassic event. In short, when the Permian-Triassic starts it is difficult to know whether the end-Capitanian had finished, depending on the factor considered. Many of the extinctions once dated to the Permian-Triassic boundary have more recently been redated to the end-Capitanian. Further, it is unclear whether some species who survived the prior extinction(s) had recovered well enough for their final demise in the Permian-Triassic event to be considered separate from Capitanian event. A minority point of view considers the sequence of environmental disasters to have effectively constituted a single, prolonged extinction event, perhaps depending on which species is considered. This older theory, still supported in some recent papers, proposes that there were two major extinction pulses 9.4 million years apart, separated by a period of extinctions that were less extensive, but still well above the background level, and that the final extinction killed off only about 80% of marine species alive at that time, whereas the other losses occurred during the first pulse or the interval between pulses. According to this theory, one of these extinction pulses occurred at the end of the Guadalupian epoch of the Permian. For example, all dinocephalian genera died out at the end of the Guadalupian, as did the Verbeekinidae, a family of large-size fusuline foraminifera. The impact of the end-Guadalupian extinction on marine organisms appears to have varied between locations and between taxonomic groups – brachiopods and corals had severe losses. Extinction patterns Marine organisms Marine invertebrates suffered the greatest losses during the P–Tr extinction. Evidence of this was found in samples from south China sections at the P–Tr boundary. Here, 286 out of 329 marine invertebrate genera disappear within the final two sedimentary zones containing conodonts from the Permian. The decrease in diversity was probably caused by a sharp increase in extinctions, rather than a decrease in speciation.The extinction primarily affected organisms with calcium carbonate skeletons, especially those reliant on stable CO2 levels to produce their skeletons. These organisms were susceptible to the effects of the ocean acidification that resulted from increased atmospheric CO2. There is also evidence that endemism was a strong risk factor influencing a taxon's likelihood of extinction. Bivalve taxa that were endemic and localised to a specific region were more likely to go extinct than cosmopolitan taxa. 
There was little latitudinal difference in the survival rates of taxa.Among benthic organisms the extinction event multiplied background extinction rates, and therefore caused maximum species loss to taxa that had a high background extinction rate (by implication, taxa with a high turnover). The extinction rate of marine organisms was catastrophic. Bioturbators were extremely severely affected, as evidenced by the loss of the sedimentary mixed layer in many marine facies during the end-Permian extinction.Surviving marine invertebrate groups included articulate brachiopods (those with a hinge), which had undergone a slow decline in numbers since the P–Tr extinction; the Ceratitida order of ammonites; and crinoids ("sea lilies"), which very nearly became extinct but later became abundant and diverse. The groups with the highest survival rates generally had active control of circulation, elaborate gas exchange mechanisms, and light calcification; more heavily calcified organisms with simpler breathing apparatuses suffered the greatest loss of species diversity. In the case of the brachiopods, at least, surviving taxa were generally small, rare members of a formerly diverse community.The ammonoids, which had been in a long-term decline for the 30 million years since the Roadian (middle Permian), suffered a selective extinction pulse 10 million years before the main event, at the end of the Capitanian stage. In this preliminary extinction, which greatly reduced disparity, or the range of different ecological guilds, environmental factors were apparently responsible. Diversity and disparity fell further until the P–Tr boundary; the extinction here (P–Tr) was non-selective, consistent with a catastrophic initiator. During the Triassic, diversity rose rapidly, but disparity remained low. The range of morphospace occupied by the ammonoids, that is, their range of possible forms, shapes or structures, became more restricted as the Permian progressed. A few million years into the Triassic, the original range of ammonoid structures was once again reoccupied, but the parameters were now shared differently among clades.Ostracods experienced prolonged diversity perturbations during the Changhsingian before the PTME proper, when immense proportions of them abruptly vanished. At least 74% of ostracods died out during the PTME itself.Deep water sponges suffered a significant diversity loss and exhibited a decrease in spicule size over the course of the PTME. Shallow water sponges were affected much less strongly; they experienced an increase in spicule size and much lower loss of morphological diversity compared to their deep water counterparts.Foraminifera suffered a severe bottleneck in diversity. Approximately 93% of latest Permian foraminifera became extinct, with 50% of the clade Textulariina, 92% of Lagenida, 96% of Fusulinida, and 100% of Miliolida disappearing. The reason why lagenides survived while fusulinoidean fusulinides went completely extinct may have been due to the greater range of environmental tolerance and greater geographic distribution of the former compared to the latter.Cladodontomorph sharks likely survived the extinction because of their ability to survive in refugia in the deep oceans. This hypothesis is based on the discovery of Early Cretaceous cladodontomorphs in deep, outer shelf environments. 
Ichthyosaurs, which are believed to have evolved immediately before the PTME, were also PTME survivors.The Lilliput effect, the phenomenon of dwarfing of species during and immediately following a mass extinction event, has been observed across the Permian-Triassic boundary, notably occurring in foraminifera, brachiopods, bivalves, and ostracods. Though gastropods that survived the cataclysm were smaller in size than those that did not, it remains debated whether the Lilliput effect truly took hold among gastropods. Some gastropod taxa, termed "Gulliver gastropods", ballooned in size during and immediately following the mass extinction, exemplifying the Lilliput effect's opposite, which has been dubbed the Brobdingnag effect. Terrestrial invertebrates The Permian had great diversity in insect and other invertebrate species, including the largest insects ever to have existed. The end-Permian is the largest known mass extinction of insects; according to some sources, it may well be the only mass extinction to significantly affect insect diversity. Eight or nine insect orders became extinct and ten more were greatly reduced in diversity. Palaeodictyopteroids (insects with piercing and sucking mouthparts) began to decline during the mid-Permian; these extinctions have been linked to a change in flora. The greatest decline occurred in the Late Permian and was probably not directly caused by weather-related floral transitions. Terrestrial plants The geological record of terrestrial plants is sparse and based mostly on pollen and spore studies. Floral changes across the Permian-Triassic boundary are highly variable depending on the location and preservation quality of any given site. Plants are relatively immune to mass extinction, with the impact of all the major mass extinctions "insignificant" at a family level. Even the reduction observed in species diversity (of 50%) may be mostly due to taphonomic processes. However, a massive rearrangement of ecosystems does occur, with plant abundances and distributions changing profoundly and all the forests virtually disappearing. The dominant floral groups changed, with many groups of land plants entering abrupt decline, such as Cordaites (gymnosperms) and Glossopteris (seed ferns). The severity of plant extinction has been disputed.The Glossopteris-dominated flora that characterised high-latitude Gondwana collapsed in Australia around 370,000 years before the Permian-Triassic boundary, with this flora's collapse being less constrained in western Gondwana but still likely occurring a few hundred thousand years before the boundary.Palynological or pollen studies from East Greenland of sedimentary rock strata laid down during the extinction period indicate dense gymnosperm woodlands before the event. At the same time that marine invertebrate macrofauna declined, these large woodlands died out and were followed by a rise in diversity of smaller herbaceous plants including Lycopodiophyta, both Selaginellales and Isoetales.The Cordaites flora, which dominated the Angaran floristic realm corresponding to Siberia, collapsed over the course of the extinction. 
In the Kuznetsk Basin, the aridity-induced extinction of the region's humid-adapted forest flora dominated by cordaitaleans occurred approximately 252.76 Ma, around 820,000 years before the end-Permian extinction in South China, suggesting that the end-Permian biotic catastrophe may have started earlier on land and that the ecological crisis may have been more gradual and asynchronous on land compared to its more abrupt onset in the marine realm. In North China, the transition between the Upper Shihhotse and Sunjiagou Formations and their lateral equivalents marked a very large extinction of plants in the region. Those plant genera that did not go extinct still experienced a great reduction in their geographic range. Following this transition, coal swamps vanished. The North Chinese floral extinction correlates with the decline of the Gigantopteris flora of South China. In South China, the subtropical Cathaysian gigantopterid-dominated rainforests abruptly collapsed. The floral extinction in South China is associated with bacterial blooms in soil and nearby lacustrine ecosystems, with soil erosion resulting from the die-off of plants being the likely cause of these blooms. Wildfires too likely played a role in the fall of Gigantopteris. A conifer flora in what is now Jordan, known from fossils near the Dead Sea, showed unusual stability over the Permian–Triassic transition, and appears to have been only minimally affected by the crisis. Terrestrial vertebrates The terrestrial vertebrate extinction occurred rapidly, taking 50,000 years or less. Aridification induced by global warming was the chief culprit behind terrestrial vertebrate extinctions. There is enough evidence to indicate that over two thirds of terrestrial labyrinthodont amphibian, sauropsid ("reptile") and therapsid ("proto-mammal") taxa became extinct. Large herbivores suffered the heaviest losses. All Permian anapsid reptiles died out except the procolophonids (although testudines have morphologically anapsid skulls, they are now thought to have evolved separately from diapsid ancestors). Pelycosaurs died out before the end of the Permian. Too few Permian diapsid fossils have been found to support any conclusion about the effect of the Permian extinction on diapsids (the "reptile" group from which lizards, snakes, crocodilians, and dinosaurs (including birds) evolved). The groups that survived suffered extremely heavy losses of species, and some terrestrial vertebrate groups very nearly became extinct at the end of the Permian. Some of the surviving groups did not persist for long past this period, but others that barely survived went on to produce diverse and long-lasting lineages. However, it took 30 million years for the terrestrial vertebrate fauna to fully recover both numerically and ecologically. It is difficult to analyze extinction and survival rates of land organisms in detail because few terrestrial fossil beds span the Permian–Triassic boundary. The best-known record of vertebrate changes across the Permian–Triassic boundary occurs in the Karoo Supergroup of South Africa, but statistical analyses have so far not produced clear conclusions. One study of the Karoo Basin found that 69% of terrestrial vertebrates went extinct over 300,000 years leading up to the Permian–Triassic boundary, followed by a minor extinction pulse involving four taxa that survived the previous extinction interval. Another study of latest Permian vertebrates in the Karoo Basin found that 54% of them went extinct due to the PTME.
Biotic recovery In the wake of the extinction event, the ecological structure of present-day biosphere evolved from the stock of surviving taxa. In the sea, the "Palaeozoic evolutionary fauna" declined while the "modern evolutionary fauna" achieved greater dominance; the Permian-Triassic mass extinction marked a key turning point in this ecological shift that began after the Capitanian mass extinction and culminated in the Late Jurassic. Typical taxa of shelly benthic faunas were now bivalves, snails, sea urchins and Malacostraca, whereas bony fishes and marine reptiles diversified in the pelagic zone. On land, dinosaurs and mammals arose in the course of the Triassic. The profound change in the taxonomic composition was partly a result of the selectivity of the extinction event, which affected some taxa (e.g., brachiopods) more severely than others (e.g., bivalves). However, recovery was also differential between taxa. Some survivors became extinct some million years after the extinction event without having rediversified (dead clade walking, e.g. the snail family Bellerophontidae), whereas others rose to dominance over geologic times (e.g., bivalves). Marine ecosystems A cosmopolitanism event began immediately after the end-Permian extinction event. Marine post-extinction faunas were mostly species-poor and were dominated by few disaster taxa such as the bivalves Claraia, Unionites, Eumorphotis, and Promyalina, the foraminifera Rectocornuspira kalhori and Earlandia, and the inarticulate brachiopod Lingularia. Their guild diversity was also low.The speed of recovery from the extinction is disputed. Some scientists estimate that it took 10 million years (until the Middle Triassic) due to the severity of the extinction. However, studies in Bear Lake County, near Paris, Idaho, and nearby sites in Idaho and Nevada showed a relatively quick rebound in a localized Early Triassic marine ecosystem (Paris biota), taking around 1.3 million years to recover, while an unusually diverse and complex ichnobiota is known from Italy less than a million years after the end-Permian extinction. Additionally, the complex Guiyang biota found near Guiyang, China also indicates life thrived in some places just a million years after the mass extinction, as does a fossil assemblage known as the Shanggan fauna found in Shanggan, China and a gastropod fauna from the Al Jil Formation of Oman. Regional differences in the pace of biotic recovery suggest that the impact of the extinction may have been felt less severely in some areas than others, with differential environmental stress and instability being the source of the variance. In addition, it has been proposed that although overall taxonomic diversity rebounded rapidly, functional ecological diversity took much longer to return to its pre-extinction levels; one study concluded that marine ecological recovery was still ongoing 50 million years after the extinction, during the latest Triassic, even though taxonomic diversity had rebounded in a tenth of that time.The pace and timing of recovery also differed based on clade and mode of life. Seafloor communities maintained a comparatively low diversity until the end of the Early Triassic, approximately 4 million years after the extinction event. Epifaunal benthos took longer to recover than infaunal benthos. 
This slow recovery stands in remarkable contrast with the quick recovery seen in nektonic organisms such as ammonoids, which exceeded pre-extinction diversities already two million years after the crisis, and conodonts, which diversified considerably over the first two million years of the Early Triassic.Recent work suggests that the pace of recovery was intrinsically driven by the intensity of competition among species, which drives rates of niche differentiation and speciation. That recovery was slow in the Early Triassic can be explained by low levels of biological competition due to the paucity of taxonomic diversity, and that biotic recovery explosively accelerated in the Anisian can be explained by niche crowding, a phenomenon that would have drastically increased competition, becoming prevalent by the Anisian. Biodiversity rise thus behaved as a positive feedback loop enhancing itself as it took off in the Spathian and Anisian. Accordingly, low levels of interspecific competition in seafloor communities that are dominated by primary consumers correspond to slow rates of diversification and high levels of interspecific competition among nektonic secondary and tertiary consumers to high diversification rates. Other explanations state that life was delayed in its recovery because grim conditions returned periodically over the course of the Early Triassic, causing further extinction events, such as the Smithian-Spathian boundary extinction. Continual episodes of extremely hot climatic conditions during the Early Triassic have been held responsible for the delayed recovery of oceanic life, in particular skeletonised taxa that are most vulnerable to high carbon dioxide concentrations. The relative delay in the recovery of benthic organisms has been attributed to widespread anoxia, but high abundances of benthic species contradict this explanation. A 2019 study attributed the dissimilarity of recovery times between different ecological communities to differences in local environmental stress during the biotic recovery interval, with regions experiencing persistent environmental stress post-extinction recovering more slowly, supporting the view that recurrent environmental calamities were culpable for retarded biotic recovery. Recurrent Early Triassic environmental stresses also acted as a ceiling limiting the maximum ecological complexity of marine ecosystems until the Spathian. Recovery biotas appear to have been ecologically uneven and unstable into the Anisian, making them vulnerable to environmental stresses.Whereas most marine communities were fully recovered by the Middle Triassic, global marine diversity reached pre-extinction values no earlier than the Middle Jurassic, approximately 75 million years after the extinction event. Prior to the extinction, about two-thirds of marine animals were sessile and attached to the seafloor. During the Mesozoic, only about half of the marine animals were sessile while the rest were free-living. Analysis of marine fossils from the period indicated a decrease in the abundance of sessile epifaunal suspension feeders such as brachiopods and sea lilies and an increase in more complex mobile species such as snails, sea urchins and crabs. Before the Permian mass extinction event, both complex and simple marine ecosystems were equally common. After the recovery from the mass extinction, the complex communities outnumbered the simple communities by nearly three to one, and the increase in predation pressure led to the Mesozoic Marine Revolution. 
Bivalves rapidly recolonised many marine environments in the wake of the catastrophe. Bivalves were fairly rare before the P–Tr extinction but became numerous and diverse in the Triassic, taking over niches that were filled primarily by brachiopods before the mass extinction event, and one group, the rudist clams, became the Mesozoic's main reef-builders. The success of bivalves in the aftermath of the extinction event may have been a function of them possessing greater resilience to environmental stress compared to the brachiopods that they competed with. The rise of bivalves to taxonomic and ecological dominance over brachiopods was not synchronous, however, and brachiopods retained an outsized ecological dominance into the Middle Triassic even as bivalves eclipsed them in taxonomic diversity. Some researchers think the change was attributable not only to the end-Permian extinction but also the ecological restructuring that began as a result of the Capitanian extinction. Infaunal habits in bivalves became more common after the PTME.Linguliform brachiopods were commonplace immediately after the extinction event, their abundance having been essentially unaffected by the crisis. Adaptations for oxygen-poor and warm environments, such as increased lophophoral cavity surface, shell width/length ratio, and shell miniaturisation, are observed in post-extinction linguliforms. The surviving brachiopod fauna was very low in diversity and exhibited no provincialism whatsoever. Brachiopods began their recovery around 250.1 ± 0.3 Ma, as marked by the appearance of the genus Meishanorhynchia, believed to be the first of the progenitor brachiopods that evolved after the mass extinction. Major brachiopod rediversification only began in the late Spathian and Anisian in conjunction with the decline of widespread anoxia and extreme heat and the expansion of more habitable climatic zones. Brachiopod taxa during the Anisian recovery interval were only phylogenetically related to Late Permian brachiopods at a familial taxonomic level or higher; the ecology of brachiopods had radically changed from before in the mass extinction's aftermath.Ostracods were extremely rare during the basalmost Early Triassic. Taxa associated with microbialites were disproportionately represented among ostracod survivors. Ostracod recovery began in the Spathian. Despite high taxonomic turnover, the ecological life modes of Early Triassic ostracods remained rather similar to those of pre-PTME ostracods.Crinoids ("sea lilies") suffered a selective extinction, resulting in a decrease in the variety of their forms. Though cladistic analyses suggest the beginning of their recovery to have taken place in the Induan, the recovery of their diversity as measured by fossil evidence was far less brisk, showing up in the late Ladinian. Their adaptive radiation after the extinction event resulted in forms possessing flexible arms becoming widespread; motility, predominantly a response to predation pressure, also became far more prevalent. Though their taxonomic diversity remained relatively low, crinoids regained much of their ecological dominance by the Middle Triassic epoch.Microbial reefs predominated across shallow seas for a short time during the earliest Triassic. Microbial-metazoan reefs appeared very early in the Early Triassic; and they dominated many surviving communities across the recovery from the mass extinction. 
Microbialite deposits appear to have declined in the early Griesbachian synchronously with a significant sea level drop that occurred then. Metazoan-built reefs reemerged during the Olenekian, mainly composed of sponge biostromes and bivalve buildups. Keratose sponges were particularly important components of Early Triassic microbial-metazoan reef communities. "Tubiphytes"-dominated reefs appeared at the end of the Olenekian, representing the earliest platform-margin reefs of the Triassic, though they did not become abundant until the late Anisian, when reefs' species richness increased. The first scleractinian corals appeared in the late Anisian as well, although they would not become the dominant reef builders until the end of the Triassic period. Bryozoans, after sponges, were the most numerous organisms in Tethyan reefs during the Anisian. Metazoan reefs became common again during the Anisian because the oceans cooled down then from their overheated state during the Early Triassic. Microbially induced sedimentary structures (MISS) from the earliest Triassic have been found to be associated with abundant opportunistic bivalves and vertical burrows, and it is likely that post-extinction microbial mats played a vital, indispensable role in the survival and recovery of various bioturbating organisms. Ichnocoenoses show that marine ecosystems recovered to pre-extinction levels of ecological complexity by the late Olenekian. Anisian ichnocoenoses show slightly lower diversity than Spathian ichnocoenoses, although this was likely a taphonomic consequence of increased and deeper bioturbation erasing evidence of shallower bioturbation. Ichnological evidence suggests that recovery and recolonisation of marine environments may have taken place by way of outward dispersal from refugia that suffered relatively mild perturbations and whose local biotas were less strongly affected by the mass extinction compared to the rest of the world's oceans. Although complex bioturbation patterns were rare in the Early Triassic, likely reflecting the inhospitability of many shallow water environments in the extinction's wake, complex ecosystem engineering managed to persist locally in some places, and may have spread from there after harsh conditions across the global ocean were ameliorated over time. Wave-dominated shoreface settings (WDSS) are believed to have served as refugium environments because they appear to have been unusually diverse in the mass extinction's aftermath. Terrestrial plants The proto-recovery of terrestrial floras took place from a few tens of thousands of years after the end-Permian extinction to around 350,000 years after it, with the exact timeline varying by region. Furthermore, severe extinction pulses continued to occur after the Permian–Triassic boundary, causing additional floral turnovers. Gymnosperms recovered within a few thousand years after the Permian–Triassic boundary, but around 500,000 years after it, the dominant gymnosperm genera were replaced by lycophytes – extant lycophytes are recolonizers of disturbed areas – during an extinction pulse at the Griesbachian-Dienerian boundary. The particular post-extinction dominance of lycophytes, which were well adapted for coastal environments, can be explained in part by global marine transgressions during the Early Triassic. The worldwide recovery of gymnosperm forests took approximately 4–5 million years.
However, this trend of prolonged lycophyte dominance during the Early Triassic was not universal, as evidenced by the much more rapid recovery of gymnosperms in certain regions, and floral recovery likely did not follow a congruent, globally universal trend but instead varied by region according to local environmental conditions. In East Greenland, lycophytes replaced gymnosperms as the dominant plants. Later, other groups of gymnosperms again became dominant but again suffered major die-offs. These cyclical flora shifts occurred a few times over the course of the extinction period and afterward. These fluctuations of the dominant flora between woody and herbaceous taxa indicate chronic environmental stress resulting in a loss of most large woodland plant species. The successions and extinctions of plant communities do not coincide with the shift in δ13C values but occurred many years after. In what is now the Barents Sea off the coast of Norway, the post-extinction flora was dominated by pteridophytes and lycopods, which were suited for primary succession and recolonisation of devastated areas, although gymnosperms made a rapid recovery, with the lycopod-dominated flora not persisting across most of the Early Triassic as postulated in other regions. In Europe and North China, the interval of recovery was dominated by the lycopsid Pleuromeia, an opportunistic pioneer plant that filled ecological vacancies until other plants were able to expand out of refugia and recolonise the land. Conifers became common by the early Anisian, while pteridosperms and cycadophytes only fully recovered by the late Anisian. During the survival phase in the terrestrial extinction's immediate aftermath, from the latest Changhsingian to the Griesbachian, South China was dominated by opportunistic lycophytes. Low-lying herbaceous vegetation dominated by the isoetalean Tomiostrobus was ubiquitous following the collapse of the earlier gigantopterid-dominated forests. In contrast to the highly biodiverse gigantopterid rainforests, the post-extinction landscape of South China was near-barren and had vastly lower diversity. Plant survivors of the PTME in South China experienced extremely high rates of mutagenesis induced by heavy metal poisoning. From the late Griesbachian to the Smithian, conifers and ferns began to rediversify. After the Smithian, the opportunistic lycophyte flora declined, as the newly radiating conifer and fern species permanently replaced the lycophytes as the dominant components of South China's flora. In Tibet, the early Dienerian Endosporites papillatus–Pinuspollenites thoracatus assemblages closely resemble late Changhsingian Tibetan floras, suggesting that the widespread, dominant latest Permian flora resurged easily after the PTME. However, in the late Dienerian, a major shift towards assemblages dominated by cavate trilete spores took place, heralding widespread deforestation and a rapid change to hotter, more humid conditions. Quillworts and spike mosses dominated Tibetan flora for about a million years after this shift. In Pakistan, then the northern margin of Gondwana, the flora was rich in lycopods associated with conifers and pteridosperms.
Floral turnovers continued to occur due to repeated perturbations arising from recurrent volcanic activity until terrestrial ecosystems stabilised around 2.1 Myr after the PTME. In southwestern Gondwana, the post-extinction flora was dominated by bennettitaleans and cycads, with members of Peltaspermales, Ginkgoales, and Umkomasiales being less common constituents of this flora. Around the Induan-Olenekian boundary, as palaeocommunities recovered, a new Dicroidium flora was established, in which Umkomasiales continued to be prominent and in which Equisetales and Cycadales were subordinate forms. The Dicroidium flora further diversified in the Anisian to its peak, wherein Umkomasiales and Ginkgoales constituted most of the tree canopy and Peltaspermales, Petriellales, Cycadales, Umkomasiales, Gnetales, Equisetales, and Dipteridaceae dominated the understory. Coal gap No coal deposits are known from the Early Triassic, and those in the Middle Triassic are thin and low-grade. This "coal gap" has been explained in many ways. It has been suggested that new, more aggressive fungi, insects, and vertebrates evolved and killed vast numbers of trees. However, these decomposers themselves suffered heavy losses of species during the extinction and are not considered a likely cause of the coal gap. It could simply be that all coal-forming plants were rendered extinct by the P–Tr extinction and that it took 10 million years for a new suite of plants to adapt to the moist, acid conditions of peat bogs. Abiotic factors (factors not caused by organisms), such as decreased rainfall or increased input of clastic sediments, may also be to blame. On the other hand, the lack of coal may simply reflect the scarcity of all known sediments from the Early Triassic. Coal-producing ecosystems, rather than disappearing, may have moved to areas where we have no sedimentary record for the Early Triassic. For example, in eastern Australia a cold climate had been the norm for a long period, with a peat mire ecosystem adapted to these conditions. Approximately 95% of these peat-producing plants went locally extinct at the P–Tr boundary; coal deposits in Australia and Antarctica disappear significantly before the P–Tr boundary. Terrestrial vertebrates Land vertebrates took an unusually long time to recover from the P–Tr extinction; palaeontologist Michael Benton estimated the recovery was not complete until 30 million years after the extinction, i.e. not until the Late Triassic, when the first dinosaurs had risen from bipedal archosaurian ancestors and the first mammals from small cynodont ancestors. A tetrapod gap may have existed from the Induan until the early Spathian between ~30 °N and ~40 °S due to extreme heat making these low latitudes uninhabitable for these animals. During the hottest phases of this interval, the gap would have spanned an even greater latitudinal range. East-central Pangaea, with its relatively wet climate, served as a dispersal corridor for PTME survivors during their Early Triassic recolonisation of the supercontinent. In North China, tetrapod body fossils and ichnofossils are extremely rare in Induan facies, but become more abundant in the Olenekian and Anisian, showing a biotic recovery of tetrapods synchronous with the decreasing aridity during the Olenekian and Anisian. In Russia, even after 15 Myr of recovery, during which ecosystems were rebuilt and remodelled, many terrestrial vertebrate guilds were absent, including small insectivores, small piscivores, large herbivores, and apex predators.
Coprolitic evidence indicates that freshwater food webs had recovered by the early Ladinian, with a lacustrine coprolite assemblage from the Ordos Basin of China providing evidence of a trophically multileveled ecosystem containing at least six different trophic levels. The highest trophic levels were filled by vertebrate predators. Overall, terrestrial faunas after the extinction event tended to be more variable and heterogeneous across space than those of the Late Permian, which exhibited less provincialism, being much more geographically homogeneous.
Synapsids
Lystrosaurus, a pig-sized herbivorous dicynodont therapsid, constituted as much as 90% of some earliest Triassic land vertebrate faunas, although some recent evidence has called into question its status as a post-PTME disaster taxon. The evolutionary success of Lystrosaurus in the aftermath of the PTME is believed to be attributable to the dicynodont taxon's grouping behaviour and tolerance for extreme and highly variable climatic conditions. Other likely factors behind the success of Lystrosaurus included the extremely fast growth rate exhibited by the genus, along with its early onset of sexual maturity. Antarctica may have served as a refuge for dicynodonts during the PTME, from which surviving dicynodonts spread in its aftermath. Ichnological evidence from the earliest Triassic of the Karoo Basin shows dicynodonts were abundant in the immediate aftermath of the biotic crisis. Smaller carnivorous cynodont therapsids also survived, a group that included the ancestors of mammals. As with dicynodonts, selective pressures favoured endothermic epicynodonts. Therocephalians likewise survived; burrowing may have been a key adaptation that helped them make it through the PTME. In the Karoo region of southern Africa, the therocephalians Tetracynodon, Moschorhinus and Ictidosuchoides survived, but do not appear to have been abundant in the Triassic. Early Triassic therocephalians were mostly survivors of the PTME rather than newly evolved taxa that originated during the evolutionary radiation in its aftermath. Both therocephalians and cynodonts, known collectively as eutheriodonts, decreased in body size from the Late Permian to the Early Triassic. This decrease in body size has been interpreted as an example of the Lilliput effect.
Sauropsids
Archosaurs (which included the ancestors of dinosaurs and crocodilians) were initially rarer than therapsids, but they began to displace therapsids in the mid-Triassic. Olenekian tooth fossil assemblages from the Karoo Basin indicate that archosauromorphs were already highly diverse by this point in time, though not very ecologically specialised. In the mid to late Triassic, the dinosaurs evolved from one group of archosaurs, and went on to dominate terrestrial ecosystems during the Jurassic and Cretaceous. This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliform successors to live as small, mainly nocturnal insectivores; nocturnal life probably forced at least the mammaliforms to develop fur, better hearing and higher metabolic rates, while losing part of the differential color-sensitive retinal receptors that reptilians and birds preserved. Archosaurs also experienced an increase in metabolic rates over time during the Early Triassic.
Archosaur dominance would in turn end due to the Cretaceous–Paleogene extinction event, after which both birds (the only extant dinosaurs) and mammals (the only extant synapsids) would diversify and share the world.
Temnospondyls
Temnospondyl amphibians made a quick recovery; the appearance in the fossil record of so many temnospondyl clades suggests they may have been ideally suited as pioneer species that recolonised decimated ecosystems. During the Induan, tupilakosaurids in particular thrived as disaster taxa, though they gave way to other temnospondyls as ecosystems recovered. Temnospondyls were reduced in size during the Induan, but their body size rebounded to pre-PTME levels during the Olenekian. Mastodonsaurus and trematosaurians were the main aquatic and semiaquatic predators during most of the Triassic, some preying on tetrapods and others on fish.
Terrestrial invertebrates
Most fossil insect groups found after the Permian–Triassic boundary differ significantly from those before: of Paleozoic insect groups, only the Glosselytrodea, Miomoptera, and Protorthoptera have been discovered in deposits from after the extinction. The caloneurodeans, monurans, paleodictyopteroids, protelytropterans, and protodonates became extinct by the end of the Permian. Though Triassic insects are very different from those of the Permian, a gap in the insect fossil record spans approximately 15 million years from the late Permian to early Triassic. In well-documented Late Triassic deposits, fossils overwhelmingly consist of modern fossil insect groups.
Microbially induced sedimentary structures (MISS) dominated North Chinese terrestrial fossil assemblages in the Early Triassic. In Arctic Canada as well, MISS became a common occurrence following the Permian-Triassic extinction. The prevalence of MISS in many Early Triassic rocks shows that microbial mats were an important feature of post-extinction ecosystems that were denuded of bioturbators that would have otherwise prevented their widespread occurrence. The disappearance of MISS later in the Early Triassic likely indicated a greater recovery of terrestrial ecosystems and specifically a return of prevalent bioturbation.
Hypotheses about cause
Pinpointing the exact causes of the Permian–Triassic extinction event is difficult, mostly because it occurred over 250 million years ago, and since then much of the evidence that would have pointed to the cause has been destroyed or is concealed deep within the Earth under many layers of rock. The sea floor is completely recycled every 200 million years or so by the ongoing processes of plate tectonics and seafloor spreading, leaving no useful indications beneath the ocean. Yet scientists have gathered significant evidence for causes, and several mechanisms have been proposed. The proposals include both catastrophic and gradual processes (similar to those theorized for the Cretaceous–Paleogene extinction event). The catastrophic group includes one or more large bolide impact events, increased volcanism, and sudden release of methane from the seafloor, either due to dissociation of methane hydrate deposits or metabolism of organic carbon deposits by methanogenic microbes.
The gradual group includes sea level change, increasing hypoxia, and increasing aridity.
Any hypothesis about the cause must explain the selectivity of the event, which affected organisms with calcium carbonate skeletons most severely; the long period (4 to 6 million years) before recovery started, and the minimal extent of biological mineralization (despite inorganic carbonates being deposited) once the recovery began.
Volcanism
Siberian Traps
The flood basalt eruptions that produced the Siberian Traps constituted one of the largest known volcanic events on Earth and covered over 2,000,000 square kilometres (770,000 sq mi) with lava (roughly the size of Saudi Arabia). Such a vast areal extent of the flood basalts may have contributed to their exceptionally catastrophic impact. The dates of the Siberian Traps eruptions and of the extinction event are in good agreement.
The timeline of the extinction event strongly indicates it was caused by events in the large igneous province of the Siberian Traps. A study of the Norilsk and Maymecha-Kotuy regions of the northern Siberian platform indicates that volcanic activity occurred during a small number of high intensity pulses that exuded enormous volumes of magma, as opposed to flows emplaced at regular intervals.
The rate of carbon dioxide release from the Siberian Traps represented one of the most rapid rises of carbon dioxide levels in the geologic record, with the rate of carbon dioxide emissions being estimated by one study to be five times faster than the rate during the already catastrophic Capitanian mass extinction event, which occurred as a result of the activity of the Emeishan Traps in southwestern China at the end of the Middle Permian. Carbon dioxide levels prior to and after the eruptions are poorly constrained, but may have jumped from between 500 and 4,000 ppm prior to the extinction event to around 8,000 ppm after the extinction according to one estimate.
Another study estimated pre-PTME carbon dioxide levels at 400 ppm that then rose to around 2,500 ppm during the extinction event, with approximately 3,900 to 12,000 gigatonnes of carbon being added to the ocean-atmosphere system. As carbon dioxide levels shot up, extreme temperature rise would have followed, though some evidence suggests a lag of 12,000 to 128,000 years between the rise in volcanic carbon dioxide emissions and global warming. During the latest Permian, before the PTME, global average surface temperatures were about 18.2 °C. Global temperatures shot up to as much as 35 °C, and this hyperthermal condition may have lasted as long as 500,000 years. Air temperatures at Gondwana's high southern latitudes experienced a warming of ~10–14 °C. According to oxygen isotope shifts from conodont apatite in South China, low latitude surface water temperatures rose by about 8 °C. In Iran, tropical SSTs were between 27 and 33 °C during the Changhsingian but jumped to over 35 °C during the PTME.
So much carbon dioxide was released that inorganic carbon sinks were overwhelmed and depleted, enabling the extremely high carbon dioxide concentrations to persist in the atmosphere for much longer than would have otherwise been possible. The position and alignment of Pangaea at the time made the inorganic carbon cycle very inefficient at returning volcanically emitted carbon back to the lithosphere and thus contributed to the exceptional lethality of carbon dioxide emissions during the PTME.
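As a rough consistency check on the figures quoted above, the conversion of an added carbon mass to an atmospheric CO2 mixing ratio, and the number of CO2 doublings implied by a rise from roughly 400 ppm to 2,500 ppm, can be worked through with simple arithmetic. The sketch below is illustrative only: it assumes the commonly used conversion of roughly 2.13 GtC per ppm of atmospheric CO2 and ignores ocean uptake (which in reality absorbs much of the released carbon), so it gives an upper bound rather than a reconstruction of any published estimate.

```python
import math

GTC_PER_PPM = 2.13  # assumed conversion: ~2.13 GtC of carbon per 1 ppm of atmospheric CO2

def ppm_rise_if_all_airborne(added_gtc: float) -> float:
    """Upper-bound ppm increase if every tonne of released carbon stayed in the atmosphere."""
    return added_gtc / GTC_PER_PPM

def co2_doublings(ppm_start: float, ppm_end: float) -> float:
    """Number of CO2 doublings between two concentrations."""
    return math.log2(ppm_end / ppm_start)

# Figures quoted in the text: 3,900-12,000 GtC released; ~400 ppm rising to ~2,500 ppm.
for gtc in (3_900, 12_000):
    print(f"{gtc} GtC -> at most ~{ppm_rise_if_all_airborne(gtc):,.0f} ppm if fully airborne")

doublings = co2_doublings(400, 2_500)
warming = 35.0 - 18.2  # global mean temperature change quoted in the text (deg C)
print(f"400 -> 2,500 ppm is ~{doublings:.1f} doublings, or ~{warming / doublings:.1f} deg C "
      "of warming per doubling if the quoted temperatures are taken at face value")
```

Taken at face value, the quoted concentration and temperature figures imply on the order of 2.6 doublings of CO2 and several degrees of warming per doubling; the numbers are only meant to illustrate the scale of the perturbation, not to constrain climate sensitivity.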
In a 2020 paper, scientists reconstructed the mechanisms that led to the extinction event in a biogeochemical model, showed the consequences of the greenhouse effect on the marine environment, and concluded that the mass extinction can be traced back to volcanic CO2 emissions. Further evidence based on paired coronene-mercury spikes for a volcanic combustion cause of the mass extinction has also been found. The synchronicity of geographically disparate mercury anomalies with the environmental enrichment in isotopically light carbon confirms a common volcanogenic cause for these mercury spikes.
The Siberian Traps had unusual features that made them even more dangerous. The Siberian lithosphere is significantly enriched in halogens, whose properties render them extremely destructive to the ozone layer, and evidence from subcontinental lithospheric xenoliths indicates that as much as 70% of the halogen content was released into the atmosphere from sections of the lithosphere intruded into by the Siberian Traps. Around 18 teratonnes of hydrochloric acid were emitted by the Siberian Traps. The Siberian Traps eruptions released sulphur-rich volatiles that caused dust clouds and the formation of acid aerosols, which would have blocked out sunlight and thus disrupted photosynthesis both on land and in the photic zone of the ocean, causing food chains to collapse. These volcanic outbursts of sulphur also induced brief but severe global cooling that interrupted the broader trend of rapid global warming, leading to glacio-eustatic sea level fall.
The eruptions may also have caused acid rain as the aerosols washed out of the atmosphere. That may have killed land plants, as well as mollusks and planktonic organisms that had calcium carbonate shells. Pure flood basalts produce fluid, low-viscosity lava, and do not hurl debris into the atmosphere. It appears, however, that 20% of the output of the Siberian Traps eruptions was pyroclastic (consisted of ash and other debris thrown high into the atmosphere), increasing the short-term cooling effect. When all of the dust and ash clouds and aerosols washed out of the atmosphere, the excess carbon dioxide emitted by the Siberian Traps would have remained and global warming would have proceeded without any mitigating effects.
The Siberian Traps are underlain by thick sequences of Early to Middle Paleozoic carbonate and evaporite deposits, as well as Carboniferous-Permian coal-bearing clastic rocks. When heated, such as by igneous intrusions, these rocks are capable of emitting large amounts of greenhouse and toxic gases. The unique setting of the Siberian Traps over these deposits is likely the reason for the severity of the extinction. The basalt lava erupted or intruded into carbonate rocks and into sediments that were in the process of forming large coal beds, both of which would have emitted large amounts of carbon dioxide, leading to stronger global warming after the dust and aerosols settled. The timing of the change of the Siberian Traps from flood basalt-dominated emplacement to sill-dominated emplacement, the latter of which would have liberated the largest amounts of trapped hydrocarbon deposits, coincides with the onset of the main phase of the mass extinction and is linked to a major negative δ13C excursion. Venting of coal-derived methane was not the only mechanism of carbon release; there is also extensive evidence of explosive combustion of coal and discharge of coal-fly ash. A 2011 study led by Stephen E.
Grasby reported evidence that volcanism caused massive coal beds to ignite, possibly releasing more than 3 trillion tons of carbon. The team found ash deposits in deep rock layers near what is now the Buchanan Lake Formation. According to their article, "coal ash dispersed by the explosive Siberian Trap eruption would be expected to have an associated release of toxic elements in impacted water bodies where fly ash slurries developed. ... Mafic megascale eruptions are long-lived events that would allow significant build-up of global ash clouds." In a statement, Grasby said, "In addition to these volcanoes causing fires through coal, the ash it spewed was highly toxic and was released in the land and water, potentially contributing to the worst extinction event in earth history." However, some researchers propose that these supposed fly ashes were actually the result of wildfires instead, and were not related to massive coal combustion by intrusive volcanism. A 2013 study led by Q.Y. Yang reported that the total amounts of important volatiles emitted from the Siberian Traps consisted of 8.5 × 10⁷ Tg CO2, 4.4 × 10⁶ Tg CO, 7.0 × 10⁶ Tg H2S, and 6.8 × 10⁷ Tg SO2. The data support a popular notion that the end-Permian mass extinction on the Earth was caused by the emission of enormous amounts of volatiles from the Siberian Traps into the atmosphere.
Mercury anomalies corresponding to the time of Siberian Traps activity have been found in many geographically disparate sites, evidencing that these volcanic eruptions released significant quantities of toxic mercury into the atmosphere and ocean, causing even further large-scale die-offs of terrestrial and marine life. A series of surges in mercury emissions raised environmental mercury concentrations to levels orders of magnitude above normal background levels and caused intervals of extreme environmental toxicity, each lasting for over a thousand years, in both terrestrial and marine ecosystems. Mutagenesis in surviving plants after the PTME coeval with mercury and copper loading confirms the existence of volcanically induced heavy metal toxicity. Enhanced bioproductivity may have sequestered mercury and acted as a mitigating factor that ameliorated mercury poisoning to an extent. Immense volumes of nickel aerosols were also released by Siberian Traps volcanic activity, further contributing to metal poisoning. Cobalt and arsenic emissions from the Siberian Traps caused still further environmental stress. A major volcanogenic influx of isotopically light zinc from the Siberian Traps has also been recorded.
The devastation wrought by the Siberian Traps did not end following the Permian-Triassic boundary. Stable carbon isotope fluctuations suggest that massive Siberian Traps activity recurred many times over the course of the Early Triassic; this episodic return of severe volcanism caused further extinction events during the epoch. Additionally, enhanced reverse weathering and depletion of siliceous carbon sinks enabled extreme warmth to persist for much longer than would have been expected had the excess carbon dioxide been sequestered by silicate rock. The decline in biological silicate deposition resulting from the mass extinction of siliceous organisms acted as a positive feedback loop wherein mass death of marine life exacerbated and prolonged extreme hothouse conditions.
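For readers unused to the teragram figures quoted earlier in this section (from the 2013 study led by Q.Y. Yang), the totals can be converted to the gigatonne units used elsewhere in this article (1 Tg = 0.001 Gt) and, for CO2, to an equivalent mass of carbon. The sketch below only performs those unit conversions; the 12/44 molar-mass ratio for carbon in CO2 is standard chemistry, and the comparison is illustrative rather than taken from the cited study.

```python
# Volatile totals quoted in the text (teragrams), converted to gigatonnes.
# 1 Tg = 1e-3 Gt; for CO2, carbon mass = CO2 mass * 12/44 (molar masses of C and CO2).
totals_tg = {
    "CO2": 8.5e7,
    "CO": 4.4e6,
    "H2S": 7.0e6,
    "SO2": 6.8e7,
}

for gas, tg in totals_tg.items():
    gt = tg * 1e-3
    print(f"{gas}: {tg:.1e} Tg = {gt:,.0f} Gt")

co2_gt = totals_tg["CO2"] * 1e-3
carbon_gt = co2_gt * 12 / 44
print(f"The CO2 total corresponds to roughly {carbon_gt:,.0f} GtC of carbon")
```

The CO2 figure alone works out to tens of thousands of gigatonnes of carbon, which conveys why emissions on this scale are treated as capable of overwhelming the carbon cycle.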
Choiyoi Silicic Large Igneous Province
A second flood basalt event that emplaced what is now known as the Choiyoi Silicic Large Igneous Province in southwestern Gondwana between around 286 Ma and 247 Ma has also been suggested as a possible extinction mechanism. Being about 1,300,000 cubic kilometres in volume and 1,680,000 square kilometres in area, this flood basalt event was approximately 40% the size of the Siberian Traps and thus may have been a significant additional factor explaining the severity of the end-Permian extinction. Specifically, this flood basalt has been implicated in the regional demise of the Gondwanan Glossopteris flora.
Indochina-South China subduction-zone volcanic arc
Mercury anomalies preceding the end-Permian extinction have been discovered in what was then the boundary between the South China Craton and the Indochinese plate, which was home to a subduction zone and a corresponding volcanic arc. Hafnium isotopes from syndepositional magmatic zircons found in ash beds created by this pulse of volcanic activity confirm its origin in subduction-zone volcanism rather than large igneous province activity. The enrichment of copper samples from these deposits in isotopically light copper provides additional confirmation of the felsic nature of this volcanism and that its origin was not a large igneous province. This volcanism has been speculated to have caused local episodes of biotic stress among radiolarians, sponges, and brachiopods that took place over the 60,000 years preceding the end-Permian marine extinction, as well as an ammonoid crisis manifested in their decreased morphological complexity and size and their increased rate of turnover that began in the lower C. yini biozone, around 200,000 years prior to the end-Permian extinction.
Methane clathrate gasification
Methane clathrates, also known as methane hydrates, consist of methane molecules trapped in cages of water molecules. The methane, produced by methanogens (microscopic single-celled organisms), has a 13C ⁄ 12C ratio about 6.0% below normal (δ13C −6.0%). At the right combination of pressure and temperature, the methane is trapped in clathrates fairly close to the surface of permafrost and, in much larger quantities, on continental shelves and the deeper seabed close to them. Oceanic methane hydrates are usually found buried in sediments where the seawater is at least 300 m (980 ft) deep. They can be found up to about 2,000 m (6,600 ft) below the sea floor, but usually only about 1,100 m (3,600 ft) below the sea floor.
The release of methane from the clathrates has been considered as a cause because scientists have found worldwide evidence of a swift decrease of about 1% in the 13C ⁄ 12C isotope ratio in carbonate rocks from the end-Permian. This is the first, largest, and most rapid of a series of negative and positive excursions (decreases and increases in 13C ⁄ 12C ratio) that continues until the isotope ratio abruptly stabilised in the middle Triassic, followed soon afterwards by the recovery of calcifying life forms (organisms that use calcium carbonate to build hard parts such as shells).
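The reasoning developed below, that ordinary volcanic CO2 is too isotopically heavy to account for the observed drop in the 13C ⁄ 12C ratio without implausibly large eruptions, whereas 12C-enriched methane can produce a comparable shift with far less mass, rests on a simple two-component isotope mixing calculation. The sketch below illustrates that calculation in its usual linearised delta-notation form; the reservoir size, end-member δ13C values, and the 5,000 GtC perturbation are round illustrative numbers chosen for the example, not figures taken from this article.

```python
def mixed_delta(reservoir_mass, reservoir_delta, added_mass, added_delta):
    """Linearised two-component isotope mass balance in delta (permil) notation."""
    total = reservoir_mass + added_mass
    return (reservoir_mass * reservoir_delta + added_mass * added_delta) / total

# Illustrative round numbers (assumptions, not values from the text):
# a 40,000 GtC exchangeable ocean-atmosphere carbon reservoir at d13C = 0 permil,
# perturbed by 5,000 GtC of carbon from two candidate sources.
reservoir, reservoir_d13c, added = 40_000, 0.0, 5_000
for source, d13c in [("volcanic CO2", -5.0), ("clathrate-derived methane", -60.0)]:
    shift = mixed_delta(reservoir, reservoir_d13c, added, d13c) - reservoir_d13c
    print(f"{added:,} GtC of {source} (d13C = {d13c} permil) shifts the reservoir by ~{shift:.1f} permil")
```

With these assumed numbers, the same mass of added carbon shifts the reservoir roughly an order of magnitude further when it comes from methane than when it comes from ordinary volcanic CO2, which is the crux of the argument weighed in the following paragraphs.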
While a variety of factors may have contributed to this drop in the 13C ⁄ 12C ratio, a 2002 review found most of them to be insufficient to account fully for the observed amount:
Gases from volcanic eruptions have a 13C ⁄ 12C ratio about 0.5 to 0.8% below standard (δ13C about −0.5 to −0.8%), but an assessment made in 1995 concluded that the amount required to produce a reduction of about 1.0% worldwide requires eruptions greater by orders of magnitude than any for which evidence has been found. (However, this analysis addressed only CO2 produced by the magma itself, not from interactions with carbon bearing sediments, as later proposed.)
A reduction in organic activity would extract 12C more slowly from the environment and leave more of it to be incorporated into sediments, thus reducing the 13C ⁄ 12C ratio. Biochemical processes preferentially use the lighter isotopes since chemical reactions are ultimately driven by electromagnetic forces between atoms and lighter isotopes respond more quickly to these forces, but a study of a smaller drop of 0.3 to 0.4% in 13C ⁄ 12C (δ13C −3 to −4 ‰) at the Paleocene-Eocene Thermal Maximum (PETM) concluded that even transferring all the organic carbon (in organisms, soils, and dissolved in the ocean) into sediments would be insufficient: even such a large burial of material rich in 12C would not have produced the 'smaller' drop in the 13C ⁄ 12C ratio of the rocks around the PETM.
Buried sedimentary organic matter has a 13C ⁄ 12C ratio 2.0 to 2.5% below normal (δ13C −2.0 to −2.5%). Theoretically, if the sea level fell sharply, shallow marine sediments would be exposed to oxidation. But 6,500–8,400 gigatons (1 gigaton = 10⁹ metric tons) of organic carbon would have to be oxidized and returned to the ocean-atmosphere system within less than a few hundred thousand years to reduce the 13C ⁄ 12C ratio by 1.0%, which is not thought to be a realistic possibility. Moreover, sea levels were rising rather than falling at the time of the extinction.
Rather than a sudden decline in sea level, intermittent periods of ocean-bottom hyperoxia and anoxia (high-oxygen and low- or zero-oxygen conditions) may have caused the 13C ⁄ 12C ratio fluctuations in the Early Triassic; and global anoxia may have been responsible for the end-Permian blip. The continents of the end-Permian and early Triassic were more clustered in the tropics than they are now, and large tropical rivers would have dumped sediment into smaller, partially enclosed ocean basins at low latitudes. Such conditions favor oxic and anoxic episodes; oxic/anoxic conditions would result in a rapid release/burial, respectively, of large amounts of organic carbon, which has a low 13C ⁄ 12C ratio because biochemical processes use the lighter isotopes more. That or another organic-based reason may have been responsible for both that and a late Proterozoic/Cambrian pattern of fluctuating 13C ⁄ 12C ratios.
Before the roasting of carbonate sediments by volcanism was considered, the only proposed mechanism sufficient to cause a global 1% reduction in the 13C ⁄ 12C ratio was the release of methane from methane clathrates. Carbon-cycle models confirm that it would have had enough effect to produce the observed reduction. It was also suggested that a large-scale release of methane and other greenhouse gases from the ocean into the atmosphere was connected to the anoxic events and euxinic (i.e.
sulfidic) events at the time, with the exact mechanism compared to the 1986 Lake Nyos disaster.
The area covered by lava from the Siberian Traps eruptions is about twice as large as was originally thought, and most of the additional area was shallow sea at the time. The seabed probably contained methane hydrate deposits, and the lava caused the deposits to dissociate, releasing vast quantities of methane. A vast release of methane might cause significant global warming since methane is a very powerful greenhouse gas. Strong evidence suggests that global temperatures increased by about 6 °C (10.8 °F) near the equator and therefore by more at higher latitudes: a sharp decrease in oxygen isotope ratios (18O ⁄ 16O); the extinction of Glossopteris flora (Glossopteris and plants that grew in the same areas), which needed a cold climate, with its replacement by floras typical of lower paleolatitudes.
However, the pattern of isotope shifts expected to result from a massive release of methane does not match the patterns seen throughout the Early Triassic. Not only would such a cause require the release of five times as much methane as postulated for the PETM, but it would also have to have been reburied at an unrealistically high rate to account for the rapid increases in the 13C ⁄ 12C ratio (episodes of high positive δ13C) throughout the early Triassic before it was released several times again. The latest research suggests that greenhouse gas release during the extinction event was dominated by volcanic carbon dioxide, and while methane release had to have contributed, isotopic signatures show that thermogenic methane released from the Siberian Traps consistently played a larger role than methane from clathrates and any other biogenic sources such as wetlands during the event. Adding to the evidence against methane clathrate release as the central driver of warming, the main rapid warming event is also associated with marine transgression rather than regression; the former would not normally have initiated methane release, which would have instead required a decrease in pressure, something that would be generated by a retreat of shallow seas.
Hypercapnia and acidification
Marine organisms are more sensitive to changes in CO2 (carbon dioxide) levels than terrestrial organisms for a variety of reasons. CO2 is 28 times more soluble in water than is oxygen. Marine animals normally function with lower concentrations of CO2 in their bodies than land animals, as the removal of CO2 in air-breathing animals is impeded by the need for the gas to pass through the respiratory system's membranes (lungs' alveolus, tracheae, and the like), even though CO2 diffuses more easily than oxygen. In marine organisms, relatively modest but sustained increases in CO2 concentrations hamper the synthesis of proteins, reduce fertilization rates, and produce deformities in calcareous hard parts. Higher concentrations of CO2 also result in decreased activity levels in many active marine animals, hindering their ability to obtain food. An analysis of marine fossils from the Permian's final Changhsingian stage found that marine organisms with a low tolerance for hypercapnia (high concentration of carbon dioxide) had high extinction rates, and the most tolerant organisms had very slight losses.
The most vulnerable marine organisms were those that produced calcareous hard parts (from calcium carbonate) and had low metabolic rates and weak respiratory systems, notably calcareous sponges, rugose and tabulate corals, calcite-depositing brachiopods, bryozoans, and echinoderms; about 81% of such genera became extinct. Close relatives without calcareous hard parts suffered only minor losses, such as sea anemones, from which modern corals evolved. Animals with high metabolic rates, well-developed respiratory systems, and non-calcareous hard parts had negligible losses except for conodonts, in which 33% of genera died out. This pattern is also consistent with what is known about the effects of hypoxia, a shortage but not total absence of oxygen. However, hypoxia cannot have been the only killing mechanism for marine organisms. Nearly all of the continental shelf waters would have had to become severely hypoxic to account for the magnitude of the extinction, but such a catastrophe would make it difficult to explain the very selective pattern of the extinction. Mathematical models of the Late Permian and Early Triassic atmospheres show a significant but protracted decline in atmospheric oxygen levels, with no acceleration near the P–Tr boundary. Minimum atmospheric oxygen levels in the Early Triassic were never less than present-day levels and so the decline in oxygen levels does not match the temporal pattern of the extinction.
In addition, an increase in CO2 concentration is inevitably linked to ocean acidification, consistent with the preferential extinction of heavily calcified taxa and other signals in the rock record that suggest a more acidic ocean. The decrease in ocean pH is calculated to be up to 0.7 units. Ocean acidification was most extreme at mid-latitudes, and the major marine transgression associated with the end-Permian extinction is believed to have devastated shallow shelf communities in conjunction with anoxia. Evidence from paralic facies spanning the Permian-Triassic boundary in western Guizhou and eastern Yunnan, however, shows a local marine transgression dominated by carbonate deposition, suggesting that ocean acidification did not occur across the entire globe and was likely limited to certain regions of the world's oceans. One study, published in Scientific Reports, concluded that widespread ocean acidification, if it did occur, was not intense enough to impede calcification and only occurred during the beginning of the extinction event. The persistence of highly elevated carbon dioxide concentrations in the atmosphere during the Early Triassic would have impeded the recovery of biocalcifying organisms after the PTME.
Acidity generated by increased carbon dioxide concentrations in soil and sulphur dioxide dissolution in rainwater was also a kill mechanism on land. Increasingly acidic rainwater raised the acidity of forest soils and thereby caused increased soil erosion, evidenced by the increased influx of terrestrially derived organic sediments found in marine sedimentary deposits during the end-Permian extinction. Further evidence of an increase in soil acidity comes from elevated Ba/Sr ratios in earliest Triassic soils. A positive feedback loop further enhancing and prolonging soil acidification may have resulted from the decline of infaunal invertebrates like tubificids and chironomids, which remove acid metabolites from the soil.
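The decrease in ocean pH of up to 0.7 units quoted above can be translated into a change in hydrogen ion concentration using the definition pH = −log10[H+], so each unit of pH decline corresponds to a tenfold increase in [H+]; the same logarithmic relation applies to the soil pH drops discussed in the surrounding paragraphs. The sketch below works through that conversion. The relation itself is standard chemistry, while the pre-event seawater pH of about 8.2 used for illustration is an assumption, not a value given in the article.

```python
# pH is defined as -log10 of the hydrogen ion activity, so a drop of x pH units
# multiplies [H+] by 10**x. The starting pH of 8.2 is an illustrative assumption.
ph_before = 8.2
for ph_drop in (0.1, 0.3, 0.7):
    factor = 10 ** ph_drop
    h_before = 10 ** -ph_before
    h_after = 10 ** -(ph_before - ph_drop)
    print(f"pH {ph_before} -> {ph_before - ph_drop:.1f}: "
          f"[H+] rises from {h_before:.2e} to {h_after:.2e} mol/L (x{factor:.1f})")
```

A 0.7 unit drop therefore corresponds to roughly a fivefold increase in hydrogen ion concentration, which helps convey why a change that sounds small in pH units is chemically severe for calcifying organisms.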
The increased abundance of vermiculitic clays in Shansi, South China coinciding with the Permian-Triassic boundary strongly suggests a sharp drop in soil pH causally related to volcanogenic emissions of carbon dioxide and sulphur dioxide. Hopane anomalies have also been interpreted as evidence of acidic soils and peats. As with many other environmental stressors, acidity on land episodically persisted well into the Triassic, stunting the recovery of terrestrial ecosystems.
Anoxia and euxinia
Evidence for widespread ocean anoxia (severe deficiency of oxygen) and euxinia (presence of hydrogen sulfide) is found from the Late Permian to the Early Triassic. Throughout most of the Tethys and Panthalassic Oceans, evidence for anoxia appears at the extinction event, including small pyrite framboids, negative uranium isotope excursions, negative nitrogen isotope excursions, relatively positive carbon isotope ratios in polycyclic aromatic hydrocarbons, high thorium/uranium ratios, positive cerium enrichments, and fine laminations in sediments. However, evidence for anoxia precedes the extinction at some other sites, including Spiti, India, Shangsi, China, Meishan, China, Opal Creek, Alberta, and Kap Stosch, Greenland. Biogeochemical evidence also points to the presence of euxinia during the PTME. Biomarkers for green sulfur bacteria, such as isorenieratane, the diagenetic product of isorenieratene, are widely used as indicators of photic zone euxinia because green sulfur bacteria require both sunlight and hydrogen sulfide to survive. Their abundance in sediments from the P–T boundary indicates euxinic conditions were present even in the shallow waters of the photic zone. Negative mercury isotope excursions further bolster evidence for extensive euxinia during the PTME. The disproportionate extinction of high-latitude marine species provides further evidence for oxygen depletion as a killing mechanism; low-latitude species living in warmer, less oxygenated waters are naturally better adapted to lower levels of oxygen and are able to migrate to higher latitudes during periods of global warming, whereas high-latitude organisms are unable to escape from warming, hypoxic waters at the poles. Evidence of a lag between volcanic mercury inputs and biotic turnovers provides further support for anoxia and euxinia as the key killing mechanism, because extinctions would be expected to be synchronous with volcanic mercury discharge if volcanism and hypercapnia were the primary drivers of extinction.
The sequence of events leading to anoxic oceans may have been triggered by carbon dioxide emissions from the eruption of the Siberian Traps. In that scenario, warming from the enhanced greenhouse effect would reduce the solubility of oxygen in seawater, causing the concentration of oxygen to decline. Increased coastal evaporation would have caused the formation of warm saline bottom water (WSBW) depleted in oxygen and nutrients, which spread across the world through the deep oceans. The influx of WSBW caused thermal expansion of water that raised sea levels, bringing anoxic waters onto shallow shelves and enhancing the formation of WSBW in a positive feedback loop. The flux of terrigenous material into the oceans increased as a result of soil erosion, which would have facilitated increased eutrophication. Increased chemical weathering of the continents due to warming and the acceleration of the water cycle would increase the riverine flux of nutrients to the ocean.
Increased phosphate levels would have supported greater primary productivity in the surface oceans. The increase in organic matter production would have caused more organic matter to sink into the deep ocean, where its respiration would further decrease oxygen concentrations. Once anoxia became established, it would have been sustained by a positive feedback loop because deep water anoxia tends to increase the recycling efficiency of phosphate, leading to even higher productivity. Along the Panthalassan coast of South China, oxygen decline was also driven by large-scale upwelling of deep water enriched in various nutrients, causing this region of the ocean to be hit by especially severe anoxia. Convective overturn helped facilitate the expansion of anoxia throughout the water column. A severe anoxic event at the end of the Permian would have allowed sulfate-reducing bacteria to thrive, causing the production of large amounts of hydrogen sulfide in the anoxic ocean, turning it euxinic.
This spread of toxic, oxygen-depleted water would have devastated marine life, causing widespread die-offs. Models of ocean chemistry suggest that anoxia and euxinia were closely associated with hypercapnia (high levels of carbon dioxide). This suggests that poisoning from hydrogen sulfide, anoxia, and hypercapnia acted together as a killing mechanism. Hypercapnia best explains the selectivity of the extinction, but anoxia and euxinia probably contributed to the high mortality of the event. The persistence of anoxia through the Early Triassic may explain the slow recovery of marine life and low levels of biodiversity after the extinction, particularly that of benthic organisms. Reexpansions of oxygen-minimum zones did not cease until the late Spathian, periodically setting back and restarting the biotic recovery process. Some sections show a rather quick return to oxic water column conditions, however, so for how long anoxia persisted remains debated. The volatility of the Early Triassic sulphur cycle suggests marine life continued to face returns of euxinia as well.
Some scientists have challenged the anoxia hypothesis on the grounds that long-lasting anoxic conditions could not have been supported if Late Permian thermohaline ocean circulation conformed to the "thermal mode" characterised by cooling at high latitudes. Anoxia may have persisted under a "haline mode" in which circulation was driven by subtropical evaporation, although the "haline mode" is highly unstable and was unlikely to have represented Late Permian oceanic circulation.
Oxygen depletion via extensive microbial blooms also played a role in the biological collapse of not just marine ecosystems but freshwater ones as well. Persistent lack of oxygen after the extinction event itself helped delay biotic recovery for much of the Early Triassic epoch.
Aridification
Increasing continental aridity, a trend well underway even before the PTME as a result of the coalescence of the supercontinent Pangaea, was drastically exacerbated by terminal Permian volcanism and global warming. The combination of global warming and drying generated an increased incidence of wildfires. Tropical coastal swamp floras such as those in South China may have been very detrimentally impacted by the increase in wildfires, though it is ultimately unclear if an increase in wildfires played a role in driving taxa to extinction.
Aridification trends varied widely in their tempo and regional impact.
Analysis of the fossil river deposits of the floodplains of the Karoo Basin indicates a shift from meandering to braided river patterns, indicating a very abrupt drying of the climate. The climate change may have taken as little as 100,000 years, prompting the extinction of the unique Glossopteris flora and its associated herbivores, followed by the carnivorous guild. A pattern of aridity-induced extinctions that progressively ascended up the food chain has been deduced from Karoo Basin biostratigraphy. Evidence for aridification in the Karoo across the Permian-Triassic boundary is not, however, universal, as some palaeosol evidence indicates a wettening of the local climate during the transition between the two geologic periods. Evidence from the Sydney Basin of eastern Australia, on the other hand, suggests that the expansion of semi-arid and arid climatic belts across Pangaea was not immediate but was instead a gradual, prolonged process. Apart from the disappearance of peatlands, there was little evidence of significant sedimentological changes in depositional style across the Permian-Triassic boundary. Instead, a modest shift to amplified seasonality and hotter summers is suggested by palaeoclimatological models based on weathering proxies from the region's Late Permian and Early Triassic deposits. In the Kuznetsk Basin of southwestern Siberia, an increase in aridity led to the demise of the humid-adapted cordaites forests in the region a few hundred thousand years before the Permian-Triassic boundary. Drying of this basin has been attributed to a broader poleward shift of drier, more arid climates during the late Changhsingian before the more abrupt main phase of the extinction at the Permian-Triassic boundary that disproportionately affected tropical and subtropical species.
The persistence of hyperaridity varied regionally as well. In the North China Basin, highly arid climatic conditions are recorded during the latest Permian, near the Permian-Triassic boundary, with a swing towards increased precipitation during the Early Triassic, the latter likely assisting biotic recovery following the mass extinction. Elsewhere, such as in the Karoo Basin, episodes of dry climate recurred regularly in the Early Triassic, with profound effects on terrestrial tetrapods.
The types and diversity of ichnofossils in a locality have been used as an indicator of aridity. Nurra, an ichnofossil site on the island of Sardinia, shows evidence of major drought-related stress among crustaceans. Whereas the Permian subnetwork at Nurra displays extensive horizontal backfilled traces and high ichnodiversity, the Early Triassic subnetwork is characterised by an absence of backfilled traces, an ichnological sign of aridification.
Ozone depletion
A collapse of the atmospheric ozone shield has been invoked as an explanation for the mass extinction, particularly that of terrestrial plants. Ozone production may have been reduced by 60–70%, increasing the flux of ultraviolet radiation by 400% at equatorial latitudes and 5,000% at polar latitudes. The hypothesis has the advantage of explaining the mass extinction of plants, which would have added to the methane levels and should otherwise have thrived in an atmosphere with a high level of carbon dioxide. Fossil spores from the end-Permian further support the theory; many spores show deformities that could have been caused by ultraviolet radiation, which would have been more intense after hydrogen sulfide emissions weakened the ozone layer.
Malformed plant spores from the time of the extinction event show an increase in ultraviolet B absorbing compounds, confirming that increased ultraviolet radiation played a role in the environmental catastrophe and excluding other possible causes of mutagenesis, such as heavy metal toxicity, in these mutated spores.
Multiple mechanisms could have reduced the ozone shield and rendered it ineffective. Computer modelling shows high atmospheric methane levels are associated with ozone shield decline and may have contributed to its reduction during the PTME. Volcanic emissions of sulphate aerosols into the stratosphere would have dealt significant destruction to the ozone layer. As mentioned previously, the rocks in the region where the Siberian Traps were emplaced are extremely rich in halogens. The intrusion of Siberian Traps volcanism into deposits rich in organohalogens, such as methyl bromide and methyl chloride, would have been another source of ozone destruction. An uptick in wildfires, a natural source of methyl chloride, would have had still further deleterious effects on the atmospheric ozone shield. Upwelling of euxinic water may have released massive hydrogen sulphide emissions into the atmosphere, which would have poisoned terrestrial plants and animals and severely weakened the ozone layer, exposing much of the life that remained to fatal levels of UV radiation. Indeed, biomarker evidence for anaerobic photosynthesis by Chlorobiaceae (green sulfur bacteria) from the Late Permian into the Early Triassic indicates that hydrogen sulphide did upwell into shallow waters because these bacteria are restricted to the photic zone and use sulfide as an electron donor.
Asteroid impact
Evidence that an impact event may have caused the Cretaceous–Paleogene extinction has led to speculation that similar impacts may have been the cause of other extinction events, including the P–Tr extinction, and thus to a search for evidence of impacts at the times of other extinctions, such as large impact craters of the appropriate age. However, suggestions that an asteroid impact was the trigger of the Permian-Triassic extinction are now largely rejected.
Reported evidence for an impact event from the P–Tr boundary level includes rare grains of shocked quartz in Australia and Antarctica; fullerenes trapping extraterrestrial noble gases; meteorite fragments in Antarctica; and grains rich in iron, nickel, and silicon, which may have been created by an impact. However, the accuracy of most of these claims has been challenged. For example, quartz from Graphite Peak in Antarctica, once considered "shocked", has been re-examined by optical and transmission electron microscopy. The observed features were concluded to be due not to shock, but rather to plastic deformation, consistent with formation in a tectonic environment such as volcanism. Iridium levels in many sites straddling the Permian-Triassic boundaries are not anomalous, providing evidence against an extraterrestrial impact as the PTME's cause.
An impact crater on the seafloor would be evidence of a possible cause of the P–Tr extinction, but such a crater would by now have disappeared. As 70% of the Earth's surface is currently sea, an asteroid or comet fragment is now perhaps more than twice as likely to hit the ocean as it is to hit land. However, Earth's oldest ocean-floor crust is only 200 million years old as it is continually being destroyed and renewed by spreading and subduction.
Furthermore, craters produced by very large impacts may be masked by extensive flood basalting from below after the crust is punctured or weakened. Yet, subduction should not be entirely accepted as an explanation for the lack of evidence: as with the K-T event, an ejecta blanket stratum rich in siderophilic elements (such as iridium) would be expected in formations from the time. A large impact might have triggered other mechanisms of extinction described above, such as the Siberian Traps eruptions at either an impact site or the antipode of an impact site. The abruptness of an impact also explains why more species did not rapidly evolve to survive, as would be expected if the Permian–Triassic event had been slower and less global than a meteorite impact. Bolide impact claims have been criticised on the grounds that they are unnecessary as explanations for the extinctions, and they do not fit the known data compatible with a protracted extinction spanning thousands of years. Additionally, many sites spanning the Permian-Triassic boundary display a complete lack of evidence of an impact event.
Possible impact sites
Possible impact craters proposed as the site of an impact causing the P–Tr extinction include the 250 km (160 mi) Bedout structure off the northwest coast of Australia and the hypothesized 480 km (300 mi) Wilkes Land crater of East Antarctica. An impact has not been proved in either case, and the idea has been widely criticized. The Wilkes Land geophysical feature is of very uncertain age, possibly later than the Permian–Triassic extinction.
Another impact hypothesis postulates that the impact event which formed the Araguainha crater, whose formation has been dated to 254.7 ± 2.5 million years ago, a possible temporal range overlapping with the end-Permian extinction, precipitated the mass extinction. The impact occurred around extensive deposits of oil shale in the shallow marine Paraná–Karoo Basin, whose perturbation by the seismicity resulting from impact likely discharged about 1.6 teratonnes of methane into Earth's atmosphere, compounding the already rapid warming caused by hydrocarbon release due to the Siberian Traps. The large earthquakes generated by the impact would have additionally generated massive tsunamis across much of the globe. Despite this, most palaeontologists reject the impact as being a significant driver of the extinction, citing the relatively low energy (equivalent to 10⁵ to 10⁶ megatons of TNT, around two orders of magnitude lower than the impact energy believed to be required to induce mass extinctions) released by the impact.
A 2017 paper noted the discovery of a circular gravity anomaly near the Falkland Islands which might correspond to an impact crater with a diameter of 250 km (160 mi), as supported by seismic and magnetic evidence. Estimates for the age of the structure range up to 250 million years old. This would be substantially larger than the well-known 180 km (110 mi) Chicxulub impact crater associated with a later extinction. However, Dave McCarthy and colleagues from the British Geological Survey illustrated that the gravity anomaly is not circular and also that the seismic data presented by Rocca, Rampino and Baez Presser did not cross the proposed crater or provide any evidence for an impact crater.
Methanogens
A hypothesis published in 2014 posits that a genus of anaerobic methanogenic archaea known as Methanosarcina was responsible for the event.
Three lines of evidence suggest that these microbes acquired a new metabolic pathway via gene transfer at about that time, enabling them to efficiently metabolize acetate into methane. That would have led to their exponential reproduction, allowing them to rapidly consume vast deposits of organic carbon that had accumulated in the marine sediment. The result would have been a sharp buildup of methane and carbon dioxide in the Earth's oceans and atmosphere, in a manner that may be consistent with the 13C/12C isotopic record. Massive volcanism facilitated this process by releasing large amounts of nickel, a scarce metal which is a cofactor for enzymes involved in producing methane. Chemostratigraphic analysis of Permian-Triassic boundary sediments in Chaotian demonstrates that a methanogenic burst could be responsible for some percentage of the carbon isotopic fluctuations. On the other hand, in the canonical Meishan sections, the nickel concentration increases somewhat after the δ13C concentrations have begun to fall.
Supercontinent Pangaea
In the mid-Permian (during the Kungurian age of the Permian's Cisuralian epoch), Earth's major continental plates joined, forming a supercontinent called Pangaea, which was surrounded by the superocean, Panthalassa. Oceanic circulation and atmospheric weather patterns during the mid-Permian produced seasonal monsoons near the coasts and an arid climate in the vast continental interior.
As the supercontinent formed, the ecologically diverse and productive coastal areas shrank. The elimination of shallow aquatic environments exposed formerly protected organisms of the rich continental shelves to increased environmental volatility. Pangaea's formation depleted marine life at near catastrophic rates. However, Pangaea's effect on land extinctions is thought to have been smaller. In fact, the advance of the therapsids and increase in their diversity is attributed to the late Permian, when Pangaea's global effect was thought to have peaked. While Pangaea's formation initiated a long period of marine extinction, its impact on the "Great Dying" and the end of the Permian is uncertain.
Interstellar dust
John Gribbin argues that the Solar System last passed through a spiral arm of the Milky Way around 250 million years ago and that the resultant dusty gas clouds may have caused a dimming of the Sun, which combined with the effect of Pangaea to produce an ice age.
Comparison to present global warming
The PTME has been compared to the current anthropogenic global warming situation and Holocene extinction due to sharing the common characteristic of rapid rates of carbon dioxide release. Though the current rate of greenhouse gas emissions is more than an order of magnitude greater than the rate measured over the course of the PTME, the discharge of greenhouse gases during the PTME is poorly constrained geochronologically and was most likely pulsed and constrained to a few key, short intervals, rather than continuously occurring at a constant rate for the whole extinction interval; the rate of carbon release within these intervals was likely to have been similar in timing to modern anthropogenic emissions. As they did during the PTME, oceans in the present day are experiencing drops in pH and in oxygen levels, prompting further comparisons between modern anthropogenic ecological conditions and the PTME. Another biocalcification event similar in its effects on modern marine ecosystems is predicted to occur if carbon dioxide levels continue to rise.
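The claim above, that the modern emission rate exceeds the average rate over the PTME by more than an order of magnitude, can be illustrated with back-of-the-envelope arithmetic. The sketch below uses the 3,900–12,000 GtC release range quoted earlier in this article, a purely assumed emission window of tens of thousands of years (the article stresses that the timing is poorly constrained and was probably pulsed), and a round figure of about 10 GtC per year for present-day anthropogenic emissions, which is an assumption for the example rather than a sourced value.

```python
# Back-of-the-envelope comparison of average carbon release rates (GtC per year).
# All inputs are illustrative assumptions except the 3,900-12,000 GtC range quoted above.
ptme_release_gtc = (3_900, 12_000)        # total release range quoted in the article
ptme_duration_years = (20_000, 100_000)   # assumed emission window; poorly constrained in reality
modern_rate = 10.0                        # assumed present-day anthropogenic emissions, GtC/yr

slowest = ptme_release_gtc[0] / ptme_duration_years[1]   # smallest release over the longest window
fastest = ptme_release_gtc[1] / ptme_duration_years[0]   # largest release over the shortest window
print(f"PTME average rate: ~{slowest:.2f} to ~{fastest:.2f} GtC/yr")
print(f"Modern rate is ~{modern_rate / fastest:.0f}x to ~{modern_rate / slowest:.0f}x the PTME average")
```

Under these assumptions the modern rate comes out at roughly twenty to a few hundred times the PTME average, consistent with the "more than an order of magnitude" comparison made in the text, while leaving open the possibility that individual PTME pulses were far faster than the long-term average.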
The similarities between the two extinction events have led to warnings from geologists about the urgent need for reducing carbon dioxide emissions if an event similar to the PTME is to be prevented from occurring.
See also
Carbon dioxide
Extinction event
Climate change
List of possible impact structures on Earth
Silurian hypothesis
Further reading
Huang, Yuangeng; Chen, Zhong-Qiang; et al. (2023). "The stability and collapse of marine ecosystems during the Permian-Triassic mass extinction". Current Biology. 33 (6): 1059–1070.e4. doi:10.1016/j.cub.2023.02.007. PMID 36841237. S2CID 257186215.
Mays, Chris; McLoughlin, Stephen; et al. (September 17, 2021). "Lethal microbial blooms delayed freshwater ecosystem recovery following the end-Permian extinction". Nature Communications. 12 (5511): 5511. Bibcode:2021NatCo..12.5511M. doi:10.1038/s41467-021-25711-3. PMC 8448769. PMID 34535650.
Over, Jess (editor), Understanding Late Devonian and Permian–Triassic Biotic and Climatic Events (Volume 20 in series Developments in Palaeontology and Stratigraphy, 2006). The state of the inquiry into the extinction events.
Sweet, Walter C. (editor), Permo–Triassic Events in the Eastern Tethys: Stratigraphy, Classification and Relations with the Western Tethys (in series World and Regional Geology).
External links
"Siberian Traps". Retrieved 2011-04-30.
"Big Bang In Antarctica: Killer Crater Found Under Ice". Retrieved 2011-04-30.
"Global Warming Led To Atmospheric Hydrogen Sulfide And Permian Extinction". Retrieved 2011-04-30.
Morrison, D. "Did an Impact Trigger the Permian-Triassic Extinction?". NASA. Archived from the original on 2011-06-10. Retrieved 2011-04-30.
"Permian Extinction Event". Retrieved 2011-04-30.
Ogden, DE; Sleep, NH (2012). "Explosive eruption of coal and basalt and the end-Permian mass extinction". Proceedings of the National Academy of Sciences of the United States of America. 109 (1): 59–62. Bibcode:2012PNAS..109...59O. doi:10.1073/pnas.1118675109. PMC 3252959. PMID 22184229.
"BBC Radio 4 In Our Time discussion of the Permian-Triassic boundary". Retrieved 2012-02-01. Podcast available.
Zimmer, Carl (2018-12-07). "The Planet Has Seen Sudden Warming Before. It Wiped Out Almost Everything". The New York Times. Retrieved 2018-12-10.
"The Great Dying: Earth's largest-ever mass extinction is a warning for humanity". CBS News. Retrieved 2021-03-05.
2 degree climate target
The two degree target is the international climate policy goal of limiting global warming to less than two degrees Celsius by 2100 compared to pre-industrial levels. It is an integral part of the Paris climate agreement. This objective is a political determination based on scientific knowledge concerning the probable consequences of global warming, and dates from the Copenhagen Conference in 2009. It is criticized as insufficient, because even a warming of two degrees will have serious consequences for humans and the environment, as demonstrated in particular by the IPCC Special Report on the consequences of global warming of 1.5 °C.
See also
Anthropocene
Climate target
Keeling Curve
Eco-sufficiency
350.org
350.org is an international environmental organization addressing the climate crisis. Its stated goal is to end the use of fossil fuels and transition to renewable energy by building a global, grassroots movement.
The 350 in the name stands for 350 parts per million (ppm) of carbon dioxide (CO2), which has been identified as a safe upper limit to avoid a climate tipping point. By the end of 2007, the year 350.org was founded, atmospheric CO2 had already exceeded this threshold, reaching 383 ppm CO2; as of July 2022, the concentration had reached 421 ppm CO2, a level 50% higher than pre-industrial levels.
Through online campaigns, grassroots organizing, mass public actions, and collaboration with an extensive network of partner groups and organizations, 350.org mobilized thousands of volunteer organizers in over 188 countries. It was one of the many organizers of the September 2019 Global Climate Strike, which evolved from the Fridays for Future movement.
Campaigns
350.org runs a variety of campaigns, from the local to the global scale.
Fossil fuel divestment
The fossil fuel divestment campaign, also known as "Fossil Free", borrows activist tactics from other social movements, notably the successful campaign for disinvestment from South Africa over apartheid. From its inception in 2012 through October 2021, over 1,500 institutions with more than US$40.43 trillion in assets under management had committed to divest from fossil fuels.
350.org explains that the reasoning behind this campaign is simple: "If it is wrong to wreck the climate, then it is wrong to profit from that wreckage." 350.org states their demand as the following: "We want institutions to immediately freeze any new investment in fossil fuel companies and divest from direct ownership and any commingled funds that include fossil-fuel public equities and corporate bonds." The campaign has grown from colleges and universities around the United States to now include other kinds of public and private institutions, such as the City of New York, major Japanese banks, development banks, religious institutions, and more. Campaigns for divestment are active and growing around the world. From 2013 to 2020, Australian members built a network of local groups across the country advocating for institutions to divest.
Keystone XL pipeline
350.org named the Keystone XL pipeline as a critical issue and turning point for the environmental movement, as well as for then-President Barack Obama's legacy. NASA climatologist James Hansen labeled the Keystone XL pipeline as "game over" for the planet and called the amount of carbon stored in Canadian bitumen sands a "fuse to the largest carbon bomb on the planet."
350.org cited oil spills along the proposed pipeline route, which would pass near Texas' Carrizo-Wilcox Aquifer, which supplies drinking water to more than 12 million people, as one important reason to reject the pipeline. They argued that it could also pose a danger to the Ogallala Aquifer, the largest aquifer in western North America, which supplies drinking water and irrigation to millions of people and agricultural businesses.
350.org has opposed the economic argument that has been made by proponents of the pipeline, arguing that Keystone XL would create only a few thousand temporary jobs during construction. The State Department estimated that ultimately the pipeline would create 35 permanent jobs.
Additionally, the Natural Resources Defense Council (NRDC) has said that the Keystone XL pipeline will increase gas prices instead of lowering them as oil industry proponents claimed. The NRDC's study also rebutted the claim that the pipeline will lead to energy independence because the pipeline would carry tar sands from Canada to Texas for export to the global market.Partly due to efforts from 350.org and other organizations, President Obama officially rejected the building of Keystone XL on November 6, 2015. This marked the end of a seven-year review of the pipeline. Speaking on the decision, Bill McKibben said, "President Obama is the first world leader to reject a project because of its effect on the climate. That gives him new stature as an environmental leader, and it eloquently confirms the five years and millions of hours of work that people of every kind put into this fight."In response, proponent TC Energy filed a US$15 billion lawsuit under NAFTA's Chapter 11.On January 24, 2017, President Donald Trump took action intended to permit the pipeline's completion, whereupon TC Energy suspended their NAFTA Chapter 11 action.On January 18, 2018, TransCanada Pipelines (now TC PipeLines) announced they had secured commitments from oil companies to ship 500,000 barrels (79,000 m3) of dilbit per day for 20 years, meeting the threshold to make the project economically viable.On January 20, 2021, President Joe Biden revoked the permit for the pipeline on his first day in office. On June 9, 2021, the project was abandoned by TC Energy. In its coverage of the abandonment, The Wall Street Journal highlighted the role of 350.org in the project's failure. Fossil Fuel bans Local campaigns in jurisdictions around the world have passed laws limiting or banning fossil fuel production. These include 410 municipal bans for fracking in Brazil and two state bans: Santa Catarina and Paraná. International Day of Climate Action An "International Day of Climate Action" on October 24, 2009, was organized by 350.org to influence the delegates going to the United Nations Framework Convention on Climate Change meeting in December 2009 (COP15). This was the first global campaign ever organized around a scientific data point. The actions organized by 350.org included gigantic depictions of the number "350", walks, marches, rallies, teach-ins, bike rides, sing-a-thons, carbon-free dinners, retrofitting houses to save energy, tree plantings, mass dives at the Great Barrier Reef, solar-cooked bake-outs, church bell ringings, underwater cabinet meetings (Maldives), and armband distributions to athletes. The organization reported that over 5,200 synchronized demonstrations occurred in 181 countries on the day. The group reports that they organized the world's "most widespread day of political action" on Saturday, October 24, 2009, reporting 5,245 actions in 181 countries. Global Work Party As a follow-up to 2009's International Day of Climate Action, 350.org and the 10:10 Climate Campaign joined forces to help coordinate another global day of action, which occurred on October 10, 2010. The 2010 campaign was focused on concrete actions that can be taken locally to help combat climate change. Actions from tree-plantings to solar panel installations to huge electricity service-provider switching parties occurred in almost every country around the world. 
Connect the dots The organization's efforts continued into 2012 with a planned May 5 worldwide series of rallies under the slogan "Connect the Dots," to draw attention to the links between climate change and extreme weather. Per the 350.org website, the day is called Climate Impacts Day. Global Power Shift Phase 1 of Global Power Shift was a convergence in Istanbul, Turkey in June 2013 of about five hundred climate organizers from 135 countries. Stated objectives included sharing and developing skills to organize movements, building upon existing plans to organize in-country Power Shift events after the kickoff event in Turkey, building political alignment and a clear theory of change, sharing experiences from different countries, formulating strategies to overcome challenges, and building relationships to strengthen regional and international cooperation and collaboration. Phase 2 of Global Power Shift involved the organizers who were in Turkey in June 2013 bringing home what they learned to organize summits, events, and mobilizations. Summer Heat 350.org launched the Summer Heat campaign in the summer of 2013, a wave of mass mobilizations across the USA. Summer Heat actions took place at eleven locations: Richmond, California; Vancouver, Washington; Green River, Utah; Albuquerque, New Mexico; Houston, Texas; St. Ignace, Michigan; Warren, Ohio; Washington, D.C.; Camp David, Maryland; Somerset, Massachusetts; and Sebago Lake, Maine. Participants included grassroots organizers, labor unions, farmers, ranchers, environmental justice groups, and others. The slogan used for the Summer Heat campaign was "As The Temperature Rises, So Do We." People's Climate March 350.org helped organize the People's Climate March, which took place on September 21, 2014; 2,000 events took place around the world. Global Climate Strike 350.org was one of the leading organizers of the Global Climate Strike, September 20–27, 2019. Strike actions were planned in more than 150 countries. Supported by a broad coalition of NGOs, unions, and social movements, the strikes were inspired by the school strikes of the Fridays for Future movement. A digital climate strike was also supported, calling on websites to shut down or "go green" and redirect visitors to coverage of the physical mobilizations. The aim of the Global Climate Strike was to draw attention to the climate emergency and to put pressure on politicians, the media, and the fossil fuel industry. The strikes were intended as a prelude to a permanent mass mobilization. Over 7.6 million people across 185 countries participated, making the Global Climate Strike the largest climate mobilization in history.
The organization created and distributed a time-lapse video showing the recent retreat of Mendenhall Glacier in Alaska, graphically depicting the impacts of warming climates. Do The Math movie The Do The Math movie is a 42-minute documentary film about the rising movement to change the terrifying math of the climate crisis and challenge the fossil fuel industry. The math revolves around these three numbers: to stay below 2 degrees Celsius of global warming we can emit only 565 more gigatons of carbon dioxide versus the 2,795 gigatons held in proven reserves by fossil fuel corporations. This warming rise was agreed to in the 2009 Copenhagen Summit as a limit. NASA scientist James Hansen says "2 degrees of warming is actually a prescription for long-term disaster." Rise: From One Island to Another poem "Rise: From One Island to Another" is a poem and video project that showcases the impacts of sea level rise and the ways the climate crisis spans across national borders. The poem is written by two islanders, Kathy Jetñil-Kijiner from the Marshall Islands and Aka Niviâna from Greenland. Through their poetry, they draw connections between their realities of melting glaciers and rising sea levels. The video provides viewers with "a glimpse at how large, and yet so small and interdependent our world is." 350.org founder, Bill McKibben, writes that climate change "science is uncontroversial. But science alone can’t make change, because it appeals only to the hemisphere of the brain that values logic and reason. We’re also creatures of emotion, intuition, spark." "Rise" seeks to overcome this challenge by appealing to human emotion to inspire social change and climate action, showing us that "the destruction of one’s homeland is the inevitable destruction of the other’s." "Rise" was created in 2018. The "Rise" film project team included photographer and photojournalist Dan Lin, freelance filmmaker Nick Stone, visual storyteller Rob Lau, and filmmaker Oz Go. Origins 350.org was founded by American environmentalist Bill McKibben and a group of students from Middlebury College in Vermont. Their 2007 "Step It Up" campaign involved 1,400 demonstrations at famous sites across the United States. McKibben credits these activities with making Hillary Clinton and Barack Obama change their energy policies during the 2008 United States presidential campaign. Starting in 2008, 350.org built upon the "Step It Up" campaign and made it into a global organization. McKibben is an American environmentalist and writer who wrote one of the first books on global warming for the general public, and frequently writes about climate change, alternative energy, and the need for more localized economies. As of 2022, McKibben was a senior advisor to 350.org and May Boeve is the Executive Director. Rajendra Pachauri, the UN's "top climate scientist" and leader of the Intergovernmental Panel on Climate Change (IPCC), has come out, as have others, in favor of reducing atmospheric concentrations of carbon dioxide to 350 ppm. McKibben called news of Pachauri's embrace of the 350ppm target "amazing". Some media have indicated that Pachauri's endorsement of the 350 ppm target was a victory for 350.org's activism.The organization had a lift in prominence after McKibben appeared on The Colbert Report television show on Monday August 17, 2009. McKibben promotes the organization on speaking tours and by writing articles about it for many major newspapers and media, such as the Los Angeles Times and The Guardian. 
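The "Do the Math" figures quoted above lend themselves to a quick arithmetic check. The sketch below (in Python, with the 565- and 2,795-gigaton figures taken from the text and the variable names purely illustrative) computes how many times proven reserves exceed the 2 °C budget and what share of those reserves would therefore have to stay unburned.

```python
# Carbon budget arithmetic from the "Do the Math" figures quoted above.
# Both numbers come from the text; everything else is simple arithmetic.

budget_gt = 565      # gigatons of CO2 that can still be emitted to stay below 2 degrees C
reserves_gt = 2795   # gigatons of CO2 embodied in proven fossil fuel reserves

ratio = reserves_gt / budget_gt                     # how many times reserves exceed the budget
unburnable_fraction = 1 - budget_gt / reserves_gt   # share of reserves that must stay in the ground

print(f"Reserves exceed the 2 degree budget by a factor of {ratio:.1f}")   # ~4.9
print(f"Implied unburnable share of reserves: {unburnable_fraction:.0%}")  # ~80%
```

The implied unburnable share of roughly 80% follows directly from the two numbers cited in the film; it is not an additional claim.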
In 2012 the organization was presented with the 2012 Katerva Award for Behavioural Change. Science of 350 NASA climate scientist James Hansen contended that any atmospheric concentration of CO2 above 350 parts per million was unsafe. Hansen opined in 2009 that "if humanity wishes to preserve a planet similar to that on which civilization developed and to which life on Earth is adapted, paleoclimate evidence and ongoing climate change suggest that CO2 will need to be reduced from its current 400 ppm to at most 350 ppm, but likely less than that." Hansen has also noted that nuclear energy is a viable solution for lowering CO2 in the atmosphere, a position at odds with 350.org's. Carbon dioxide, the main greenhouse gas, rose by 2.6 parts per million to 396 ppm in 2013 from the previous year (annual global averages). In May 2013, two independent teams of scientists measuring CO2 near the summit of Mauna Loa in Hawaii recorded that the amount of carbon dioxide in the atmosphere exceeded 400 parts per million, probably for the first time in more than 3 million years of Earth history. It crossed 415 ppm in May 2019 and the amount continues to rise. 2 °C (3.6 °F) was agreed upon during the 2009 Copenhagen Accord as a limit for global temperature rise. In the 2015 Paris Agreement, 1.5 °C of warming was introduced as a limit, reflecting the significant difference in impacts between 2 °C and 1.5 °C, especially for climate-vulnerable areas. This was reaffirmed in the 2018 report by the Intergovernmental Panel on Climate Change, where the world's leading scientists urged action to limit warming to 1.5 °C. In order to stay below a 2 °C increase, scientists have estimated that humans can emit roughly 565 more gigatons of carbon dioxide into the atmosphere. Fossil-fuel companies have about 2,795 gigatons of carbon dioxide embodied in their proven coal, oil, and gas reserves, which is the amount of fossil fuel they are currently planning to burn. 2,795 gigatons is roughly five times the 565-gigaton limit that would keep Earth under a global temperature increase of 2 degrees Celsius, a level that is itself already unsafe according to the latest science. Membership 350.org claims alliance with 300 organizations around the world. Many notable figures have publicly allied themselves with the organization or its goal to spread the movement, including Archbishop Desmond Tutu, Alex Steffen, Bianca Jagger, David Suzuki, and Colin Beavan. 1Sky merged into 350.org in 2011. See also 10:10 (climate change campaign) 2010 United Nations Climate Change Conference Air pollution reduction efforts Climate change mitigation Climate change policy of the United States Climate Reality Project Conservation (ethic) Criticism of non-governmental organizations Environmental movement Individual and political action on climate change List of environmental issues NGO-ization Politics of global warming Stern Review References External links Official website Check the current level of CO2 in the Earth's atmosphere "350 and counting". Haaretz. Retrieved September 15, 2009.
black carbon
Chemically, black carbon (BC) is a component of fine particulate matter (PM ≤ 2.5 µm in aerodynamic diameter). Black carbon consists of pure carbon in several linked forms. It is formed through the incomplete combustion of fossil fuels, biofuel, and biomass, and is one of the main types of particle in both anthropogenic and naturally occurring soot. Black carbon causes human morbidity and premature mortality. Because of these human health impacts, many countries have worked to reduce their emissions, making black carbon from anthropogenic sources a comparatively easy pollutant to abate. In climatology, black carbon is a climate forcing agent contributing to global warming. Black carbon warms the Earth by absorbing sunlight and heating the atmosphere, by reducing albedo when deposited on snow and ice (direct effects), and indirectly through interaction with clouds, with a total forcing of about 1.1 W/m2. Black carbon stays in the atmosphere for only several days to weeks, whereas potent greenhouse gases have much longer atmospheric lifetimes; carbon dioxide (CO2), for example, has an atmospheric lifetime of more than 100 years. The IPCC and other climate researchers have posited that reducing black carbon is one of the easiest ways to slow down short-term global warming. The term black carbon is also used in soil sciences and geology, referring either to deposited atmospheric black carbon or to directly incorporated black carbon from vegetation fires. Especially in the tropics, black carbon in soils significantly contributes to fertility as it is able to absorb important plant nutrients. Overview Michael Faraday recognized that soot was composed of carbon and that it was produced by the incomplete combustion of carbon-containing fuels. The term black carbon was coined by Serbian physicist Tihomir Novakov, referred to as "the godfather of black carbon studies" by James Hansen, in the 1970s. Smoke or soot was the first pollutant to be recognized as having a significant environmental impact, yet it was one of the last to be studied by the contemporary atmospheric research community. Soot is composed of a complex mixture of organic compounds, which are weakly absorbing in the visible spectral region, and a highly absorbing black component which is variously called "elemental", "graphitic" or "black carbon". The term elemental carbon has been used in conjunction with thermal and wet chemical determinations, and the term graphitic carbon suggests the presence of graphite-like micro-crystalline structures in soot as evidenced by Raman spectroscopy. The term black carbon is used to imply that this soot component is primarily responsible for the absorption of visible light. The term black carbon is sometimes used as a synonym for both the elemental and graphitic components of soot. It can be measured using different types of devices based on absorption or dispersion of a light beam, or derived from noise measurements. Early mitigation attempts The disastrous effects of coal pollution on human health and mortality in the early 1950s in London led to the UK Clean Air Act 1956. This act led to dramatic reductions of soot concentrations in the United Kingdom, which were followed by similar reductions in US cities like Pittsburgh and St. Louis. These reductions were largely achieved by the decreased use of soft coal for domestic heating, by switching either to "smokeless" coals or to other forms of fuel, such as fuel oil and natural gas.
The steady reduction of smoke pollution in the industrial cities of Europe and United States caused a shift in research emphasis away from soot emissions and the almost complete neglect of black carbon as a significant aerosol constituent, at least in the United States. In the 1970s, however, a series of studies substantially changed this picture and demonstrated that black carbon as well as the organic soot components continued to be a large component in urban aerosols across the United States and Europe which led to improved controls of these emissions. In the less-developed regions of the world where there were limited or no controls on soot emissions the air quality continued to degrade as the population increased. It was not generally realized until many years later that from the perspective of global effects the emissions from these regions were extremely important. Influence on Earth's atmosphere Most of the developments mentioned above relate to air quality in urban atmospheres. The first indications of the role of black carbon in a larger, global context came from studies of the Arctic Haze phenomena. Black carbon was identified in the Arctic haze aerosols and in the Arctic snow.In general, aerosol particles can affect the radiation balance leading to a cooling or heating effect with the magnitude and sign of the temperature change largely dependent on aerosol optical properties, aerosol concentrations, and the albedo of the underlying surface. A purely scattering aerosol will reflect energy that would normally be absorbed by the earth-atmosphere system back to space and leads to a cooling effect. As one adds an absorbing component to the aerosol, it can lead to a heating of the earth-atmosphere system if the reflectivity of the underlying surface is sufficiently high. Early studies of the effects of aerosols on atmospheric radiative transfer on a global scale assumed a dominantly scattering aerosol with only a small absorbing component, since this appears to be a good representation of naturally occurring aerosols. However, as discussed above, urban aerosols have a large black carbon component and if these particles can be transported on a global scale then one would expect a heating effect over surfaces with a high surface albedo like snow or ice. Furthermore, if these particles are deposited in the snow an additional heating effect would occur due to reductions in the surface albedo. Measuring and modeling spatial distribution Levels of Black carbon are most often determined based on the modification of the optical properties of a fiber filter by deposited particles. Either filter transmittance, filter reflectance or a combination of transmittance and reflectance is measured. Aethalometers are frequently used devices that optically detect the changing absorption of light transmitted through a filter ticket. The USEPA Environmental Technology Verification program evaluated both the aethalometer and also the Sunset Laboratory thermal-optical analyzer. A multiangle absorption photometer takes into account both transmitted and reflected light. Alternative methods rely on satellite based measurements of optical depth for large areas or more recently on spectral noise analysis for very local concentrations.In the late 1970s and early 1980s surprisingly large ground level concentrations of black carbon were observed throughout the western Arctic. Modeling studies indicated that they could lead to heating over polar ice. 
One of the major uncertainties in modeling the effects of the Arctic haze on the solar radiation balance was limited knowledge of the vertical distributions of black carbon. During 1983 and 1984, as part of the NOAA AGASP program, the first measurements of such distributions in the Arctic atmosphere were obtained with an aethalometer, which had the capability of measuring black carbon on a real-time basis. These measurements showed substantial concentrations of black carbon throughout the western Arctic troposphere, including the North Pole. The vertical profiles showed either a strongly layered structure or an almost uniform distribution up to eight kilometers, with concentrations within layers as large as those found at ground level in typical mid-latitude urban areas in the United States. The absorption optical depths associated with these vertical profiles were large, as evidenced by a vertical profile over the Norwegian Arctic where absorption optical depths of 0.023 and 0.052 were calculated for external and internal mixtures, respectively, of black carbon with the other aerosol components. Optical depths of these magnitudes lead to a substantial change in the solar radiation balance over the highly reflecting Arctic snow surface during the March–April time frame of these measurements. Modeling of the Arctic aerosol, assuming an absorption optical depth of 0.021 (close to the average of the internal and external mixtures for the AGASP flights) under cloud-free conditions, indicated a significant heating effect. These heating effects were viewed at the time as potentially one of the major causes of Arctic warming trends, as described in the archives of the Dept. of Energy's Basic Energy Sciences Accomplishments. Presence in soils Black carbon typically accounts for 1 to 6%, and in some cases up to 60%, of the total organic carbon stored in soils. Especially in tropical soils, black carbon serves as a reservoir for nutrients. Experiments showed that soils without high amounts of black carbon are significantly less fertile than soils that contain black carbon. An example of this increased soil fertility is the Terra preta soils of central Amazonia, which are presumably human-made by pre-Columbian native populations. Terra preta soils have on average three times higher soil organic matter (SOM) content, higher nutrient levels, and a better nutrient retention capacity than surrounding infertile soils. In this context, the slash-and-burn agricultural practice used in tropical regions enhances productivity not only by releasing nutrients from the burned vegetation but also by adding black carbon to the soil. Nonetheless, for sustainable management, a slash-and-char practice would be better in order to prevent high emissions of CO2 and volatile black carbon. Furthermore, the positive effects of this type of agriculture are counteracted if it is used over large patches, so that vegetation no longer prevents soil erosion. Presence in waters Soluble and colloidal black carbon retained on the landscape from wildfires can make its way to groundwater. On a global scale, the flow of black carbon into fresh and salt water bodies approximates the rate of wildfire black carbon production. Emission sources By region Developed countries were once the primary source of black carbon emissions, but this began to change in the 1950s with the adoption of pollution control technologies in those countries. Whereas the United States emits about 21% of the world's CO2, it emits 6.1% of the world's soot.
The European Union and United States might further reduce their black carbon emissions by accelerating implementation of black carbon regulations that currently take effect in 2015 or 2020 and by supporting the adoption of pending International Maritime Organization (IMO) regulations. Existing regulations also could be expanded to increase the use of clean diesel and clean coal technologies and to develop second-generation technologies. Today, the majority of black carbon emissions are from developing countries and this trend is expected to increase. The largest sources of black carbon are Asia, Latin America, and Africa. China and India together account for 25–35% of global black carbon emissions. Black carbon emissions from China doubled from 2000 to 2006. Existing and well-tested technologies used by developed countries, such as clean diesel and clean coal, could be transferred to developing countries to reduce their emissions.Black carbon emissions are highest in and around major source regions. This results in regional hotspots of atmospheric solar heating due to black carbon. Hotspot areas include: the Indo-Gangetic plains of India eastern China most of Southeast Asia and Indonesia equatorial regions of Africa Mexico and Central America most of Brazil and Peru in South America.Approximately three billion people live in these hotspots. By source Approximately 20% of black carbon is emitted from burning biofuels, 40% from fossil fuels, and 40% from open biomass burning. Similar estimates of the sources of black carbon emissions are as follows: 42% Open biomass burning. (forest and savanna burning) 18% Residential biomass burned with traditional technologies. 14% Diesel engines for transportation. 10% Diesel engines for industrial use. 10% Industrial processes and power generation, usually from smaller boilers. 6% Residential coal burned with traditional technologies.Black carbon sources vary by region. For example, the majority of soot emissions in South Asia are due to biomass cooking, whereas in East Asia, coal combustion for residential and industrial uses plays a larger role. In Western Europe, traffic seems to be the most important source since high concentrations coincide with proximity to major roads or participation to (motorized) traffic.Fossil fuel and biomass soot have significantly greater amounts of black carbon than climate-cooling aerosols and particulate matter, making reductions of these sources particularly powerful mitigation strategies. For example, emissions from the diesel engines and marine vessels contain higher levels of black carbon compared to other sources. Regulating black carbon emissions from diesel engines and marine vessels therefore presents a significant opportunity to reduce black carbon's global warming impact.Biomass burning emits greater amounts of climate-cooling aerosols and particulate matter than black carbon, resulting in short-term cooling. However, over the long-term, biomass burning may cause a net warming when CO2 emissions and deforestation are considered. Reducing biomass emissions would therefore reduce global warming in the long-term and provide co-benefits of reduced air pollution, CO2 emissions, and deforestation. It has been estimated that by switching to slash-and-char from slash-and-burn agriculture, which turns biomass into ash using open fires that release black carbon and GHGs, 12% of anthropogenic carbon emissions caused by land use change could be reduced annually, which is approximately 0.66 Gt CO2-eq. 
per year, or 2% of all annual global CO2-eq emissions. In a research study published in June 2022, atmospheric scientist Christopher Maloney and his colleagues noted that rocket launches release tiny particles called aerosols into the stratosphere and increase ozone layer loss. They used a climate model to determine the impact of the black carbon coming out of rocket engine nozzles. Using various scenarios for the growing number of rocket launches, they found that over the next few decades launches could expel from 1–10 gigagrams of black carbon per year at the lower end to 30–100 gigagrams per year at the extreme end. In another study published in June 2022, researchers used a 3D model to study the impact of rocket launches and reentry. They determined that black carbon particles emitted by rockets produce a warming effect almost 500 times greater than that of black carbon from other sources. Impacts Black carbon is a form of ultrafine particulate matter which, when released in the air, causes premature human mortality and disability. In addition, atmospheric black carbon changes the radiative energy balance of the climate system in a way that raises air and surface temperatures, causing a variety of detrimental environmental impacts on humans, on agriculture, and on plant and animal ecosystems. Public health impacts Particulate matter is the most harmful to public health of all air pollutants in Europe. Black carbon particulate matter contains very fine carcinogens and is therefore particularly harmful. It is estimated that from 640,000 to 4,900,000 premature human deaths could be prevented every year by utilizing available mitigation measures to reduce black carbon in the atmosphere. Humans are exposed to black carbon by inhalation of air in the immediate vicinity of local sources. Important indoor sources include candles and biomass burning, whereas traffic and occasionally forest fires are the major outdoor sources of black carbon exposure. Concentrations of black carbon decrease sharply with increasing distance from (traffic) sources, which makes it an atypical component of particulate matter. This makes it difficult to estimate the exposure of populations. For particulate matter, epidemiological studies have traditionally relied on single fixed-site measurements or inferred residential concentrations. Recent studies have shown that as much black carbon is inhaled in traffic and at other locations as at the home address. Despite the fact that a large portion of the exposure occurs as short peaks of high concentrations, it is unclear how to define peaks and determine their frequency and health impact. High peak concentrations are encountered during car driving. High in-vehicle concentrations of black carbon have been associated with driving during rush hours, on highways, and in dense traffic. Even relatively low exposure concentrations of black carbon have a direct effect on the lung function of adults and an inflammatory effect on the respiratory system of children. A recent study found no effect of black carbon on blood pressure when combined with physical activity. The public health benefits of reducing the amount of soot and other particulate matter have been recognized for years. However, high concentrations persist in industrializing areas in Asia and in urban areas in the West such as Chicago. The WHO estimates that air pollution causes nearly two million premature deaths per year. By reducing black carbon, a primary component of fine particulate matter, the health risks from air pollution will decline.
In fact, public health concerns have given rise to many efforts to reduce such emissions, for example from diesel vehicles and cooking stoves. Climate impacts Direct effect Black carbon particles directly absorb sunlight and reduce the planetary albedo when suspended in the atmosphere. Semi-direct effect Black carbon absorbs incoming solar radiation, perturbs the temperature structure of the atmosphere, and influences cloud cover. It may either increase or decrease cloud cover under different conditions. Snow/ice albedo effect When deposited on high-albedo surfaces like ice and snow, black carbon particles reduce the total surface albedo available to reflect solar energy back into space. A small initial snow albedo reduction may have a large forcing because of a positive feedback: reduced snow albedo increases surface temperature, and the increased surface temperature decreases the snow cover and further decreases surface albedo. Indirect effect Black carbon may also indirectly cause changes in the absorption or reflection of solar radiation through changes in the properties and behavior of clouds. Research scheduled for publication in 2013 shows black carbon plays a role second only to carbon dioxide in climate change. The effects are complex, resulting from a variety of factors, but due to the short life of black carbon in the atmosphere, about a week compared to carbon dioxide, which lasts centuries, control of black carbon offers possible opportunities for slowing, or even reversing, climate change. Radiative forcing Estimates of black carbon's globally averaged direct radiative forcing vary from the IPCC's estimate of +0.34 watts per square meter (W/m2) ± 0.25 to a more recent estimate by V. Ramanathan and G. Carmichael of 0.9 W/m2. The IPCC also estimated the globally averaged snow albedo effect of black carbon at +0.1 ± 0.1 W/m2. Based on the IPCC estimate, it would be reasonable to conclude that the combined direct and indirect snow albedo effects for black carbon rank it as the third largest contributor to globally averaged positive radiative forcing since the pre-industrial period. In comparison, the more recent direct radiative forcing estimate by Ramanathan and Carmichael would lead one to conclude that black carbon has contributed the second largest globally averaged radiative forcing after carbon dioxide (CO2), and that the radiative forcing of black carbon is "as much as 55% of the CO2 forcing and is larger than the forcing due to the other greenhouse gasses (GHGs) such as CH4, CFCs, N2O, or tropospheric ozone." Table 1: Estimates of Black Carbon Radiative Forcing, by Effect Table 2: Estimated Climate Forcings (W/m2) Effects on Arctic ice and Himalayan glaciers According to the IPCC, "the presence of black carbon over highly reflective surfaces, such as snow and ice, or clouds, may cause a significant positive radiative forcing." The IPCC also notes that emissions from biomass burning, which usually have a negative forcing, have a positive forcing over snow fields in areas such as the Himalayas. A 2013 study quantified that gas flares contributed over 40% of the black carbon deposited in the Arctic. According to Charles Zender, black carbon is a significant contributor to Arctic ice-melt, and reducing such emissions may be "the most efficient way to mitigate Arctic warming that we know of". The "climate forcing due to snow/ice albedo change is of the order of 1.0 W/m2 at middle- and high-latitude land areas in the Northern Hemisphere and over the Arctic Ocean."
The "soot effect on snow albedo may be responsible for a quarter of observed global warming." "Soot deposition increases surface melt on ice masses, and the meltwater spurs multiple radiative and dynamical feedback processes that accelerate ice disintegration," according to NASA scientists James Hansen and Larissa Nazarenko. As a result of this feedback process, "BC on snow warms the planet about three times more than an equal forcing of CO2." When black carbon concentrations in the Arctic increase during the winter and spring due to Arctic Haze, surface temperatures increase by 0.5 °C. Black carbon emissions also significantly contribute to Arctic ice-melt, which is critical because "nothing in climate is more aptly described as a 'tipping point' than the 0 °C boundary that separates frozen from liquid water—the bright, reflective snow and ice from the dark, heat-absorbing ocean."Black carbon emissions from northern Eurasia, North America, and Asia have the greatest absolute impact on Arctic warming. However, black carbon emissions actually occurring within the Arctic have a disproportionately larger impact per particle on Arctic warming than emissions originating elsewhere. As Arctic ice melts and shipping activity increases, emissions originating within the Arctic are expected to rise.In some regions, such as the Himalayas, the impact of black carbon on melting snowpack and glaciers may be equal to that of CO2. Warmer air resulting from the presence of black carbon in South and East Asia over the Himalayas contributes to a warming of approximately 0.6 °C. An "analysis of temperature trends on the Tibetan side of the Himalayas reveals warming in excess of 1 °C." A summer aerosol sampling on a glacier saddle of Mt. Everest (Qomolangma) in 2003 showed industrially induced sulfate from South Asia may cross over the highly elevated Himalaya. This indicated BC in South Asia could also have the same transport mode. And such kind of signal might have been detected in at a black carbon monitoring site in the hinterland of Tibet. Snow sampling and measurement suggested black carbon deposited in some Himalayan glaciers may reduce the surface albedo by 0.01–0.02. Black carbon record based on a shallow ice core drilled from the East Rongbuk glacier showed a dramatic increasing trend of black carbon concentrations in the ice stratigraphy since the 1990s, and simulated average radiative forcing caused by black carbon was nearly 2 W/m2 in 2002. This large warming trend is the proposed causal factor for the accelerating retreat of Himalayan glaciers, which threatens fresh water supplies and food security in China and India. A general darkening trend in the mid-Himalaya glaciers revealed by MODIS data since 2000 could be partially attributed to black carbon and light absorbing impurities like dust in the springtime, which was later extended to the whole Hindu Kush-Kararoram-Himalaya glaciers research finding a widespread darkening trend of -0.001 yr−1 over the period of 2000–2011. The most rapid decrease in albedo (more negative than -0.0015 yr−1) occurred in the altitudes over 5500 m above sea level. Global warming In its 2007 report, the IPCC estimated for the first time the direct radiative forcing of black carbon from fossil fuel emissions at + 0.2 W/m2, and the radiative forcing of black carbon through its effect on the surface albedo of snow and ice at an additional + 0.1 W/m2. 
More recent studies and public testimony by many of the same scientists cited in the IPCC's report estimate that emissions from black carbon are the second-largest contributor to global warming after carbon dioxide emissions, and that reducing these emissions may be the fastest strategy for slowing climate change. Since 1950, many countries have significantly reduced black carbon emissions, especially from fossil fuel sources, primarily to improve public health through improved air quality, and "technology exists for a drastic reduction of fossil fuel related BC" throughout the world. Because black carbon remains in the atmosphere only for a few weeks, reducing black carbon emissions would reduce warming within weeks and may be the fastest means of slowing climate change in the near term. Control of black carbon, particularly from fossil-fuel and biofuel sources, is very likely to be the fastest method of slowing global warming in the immediate future, and major cuts in black carbon emissions could slow the effects of climate change for a decade or two. Reducing black carbon emissions could help keep the climate system from passing the tipping points for abrupt climate changes, including significant sea-level rise from the melting of Greenland and/or Antarctic ice sheets. According to one estimate, "emissions of black carbon are the second strongest contribution to current global warming, after carbon dioxide emissions", with black carbon's combined climate forcing calculated at 1.0–1.2 W/m2, which "is as much as 55% of the CO2 forcing and is larger than the forcing due to the other [GHGs] such as CH4, CFCs, N2O or tropospheric ozone." Other scientists estimate the total magnitude of black carbon's forcing at between +0.2 and 1.1 W/m2, with varying ranges due to uncertainties. (See Table 1.) This compares with the IPCC's climate forcing estimates of 1.66 W/m2 for CO2 and 0.48 W/m2 for CH4. (See Table 2.) In addition, black carbon forcing is two to three times as effective in raising temperatures in the Northern Hemisphere and the Arctic as equivalent forcing values of CO2. Jacobson calculates that reducing fossil fuel and biofuel soot particles would eliminate about 40% of the net observed global warming. (See Figure 1.) In addition to black carbon, fossil fuel and biofuel soot contain aerosols and particulate matter that cool the planet by reflecting the sun's radiation away from the Earth. When the aerosols and particulate matter are accounted for, fossil fuel and biofuel soot are increasing temperatures by about 0.35 °C. Black carbon alone is estimated to have a 20-year Global Warming Potential (GWP) of 4,470, and a 100-year GWP of 1,055–2,240. Fossil fuel soot, as a result of mixing with cooling aerosols and particulate matter, has a lower 20-year GWP of 2,530, and a 100-year GWP of 840–1,280. The Integrated Assessment of Black Carbon and Tropospheric Ozone published in 2011 by the United Nations Environment Programme and World Meteorological Organization calculates that cutting black carbon, along with tropospheric ozone and its precursor, methane, can reduce the rate of global warming by half and the rate of warming in the Arctic by two-thirds, in combination with CO2 cuts. By trimming "peak warming", such cuts can keep the current global temperature rise below 1.5 ˚C for 30 years and below 2 ˚C for 60 years, in combination with CO2 cuts. (FN: UNEP-WMO 2011.)
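As a rough consistency check of the forcing and GWP figures cited in this section, the following sketch recomputes black carbon's forcing as a fraction of the IPCC's CO2 and CH4 forcings and converts an illustrative emission cut into CO2-equivalents using the quoted 20-year GWP. The forcing values and the GWP come from the text; the 1 Tg example reduction is a hypothetical quantity chosen only to show the conversion.

```python
# Rough consistency check of the radiative forcing and GWP figures quoted above.
# All forcing values (W/m2) and the GWP are taken from the text;
# the 1 Tg emission cut is a hypothetical, illustrative quantity.

co2_forcing = 1.66   # IPCC estimate for CO2
ch4_forcing = 0.48   # IPCC estimate for CH4
bc_forcing = 0.9     # Ramanathan and Carmichael's estimate for black carbon

print(f"BC forcing is about {bc_forcing / co2_forcing:.0%} of the CO2 forcing")  # ~54%, i.e. "as much as 55%"
print(f"BC forcing exceeds the CH4 forcing: {bc_forcing > ch4_forcing}")         # True

# CO2-equivalent of an illustrative 1 Tg cut in black carbon emissions,
# using the 20-year GWP of 4,470 quoted in the text (1 Tg of BC ~ 4,470 Tg of CO2-eq).
gwp_20yr_bc = 4470
cut_tg = 1.0
print(f"A {cut_tg:g} Tg cut in BC is roughly {cut_tg * gwp_20yr_bc:,.0f} Tg CO2-eq over 20 years")
```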
These projections are summarized in Table 1 on page 9 of the UNEP-WMO report. The reduction of CO2 as well as SLCFs could keep global temperature rise under 1.5 ˚C through 2030, and below 2 ˚C through 2070, assuming CO2 is also cut; see the graph on page 12 of the UNEP-WMO report. Control technologies Ramanathan notes that "developed nations have reduced their black carbon emissions from fossil fuel sources by a factor of 5 or more since 1950. Thus, the technology exists for a drastic reduction of fossil fuel related black carbon." Jacobson believes that "[g]iven proper conditions and incentives, [soot] polluting technologies can be quickly phased out. In some small-scale applications (such as domestic cooking in developing countries), health and convenience will drive such a transition when affordable, reliable alternatives are available. For other sources, such as vehicles or coal boilers, regulatory approaches may be required to nudge either the transition to existing technology or the development of new technology." Hansen states that "technology is within reach that could greatly reduce soot, restoring snow albedo to near pristine values, while having multiple other benefits for climate, human health, agricultural productivity, and environmental aesthetics. Already soot emissions from coal are decreasing in many regions with transition from small users to power plants with scrubbers." Jacobson suggests converting "[U.S.] vehicles from fossil fuel to electric, plug-in-hybrid, or hydrogen fuel cell vehicles, where the electricity or hydrogen is produced by a renewable energy source, such as wind, solar, geothermal, hydroelectric, wave, or tidal power. Such a conversion would eliminate 160 Gg/yr (24%) of U.S. (or 1.5% of world) fossil-fuel soot and about 26% of U.S. (or 5.5% of world) carbon dioxide." According to Jacobson's estimates, this proposal would reduce soot and CO2 emissions by 1.63 GtCO2-eq. per year. He notes, however, "that the elimination of hydrocarbons and nitrogen oxides would also eliminate some cooling particles, reducing the net benefit by at most, half, but improving human health," a substantial reduction for one policy in one country. For diesel vehicles in particular, there are several effective technologies available. Newer, more efficient diesel particulate filters (DPFs), or traps, can eliminate over 90% of black carbon emissions, but these devices require ultra-low sulfur diesel fuel (ULSD). To ensure compliance with new particulate rules for new on-road and non-road vehicles in the U.S., the EPA first required a nationwide shift to ULSD, which allowed DPFs to be used in diesel vehicles in order to meet the standards. Because of recent EPA regulations, black carbon emissions from diesel vehicles are expected to decline about 70 percent from 2001 to 2020. Overall, BC emissions in the United States are projected to decline by 42 percent from 2001 to 2020. By the time the full fleet is subject to these rules, the EPA estimates that over 239,000 tons of particulate matter will be reduced annually. Outside of the US, diesel oxidation catalysts are often available, and DPFs will become available as ULSD is more widely commercialized. Another technology for reducing black carbon emissions from diesel engines is to shift fuels to compressed natural gas.
In New Delhi, India, a Supreme Court-ordered shift to compressed natural gas for all public transport vehicles, including buses, taxis, and rickshaws, resulted in a climate benefit, "largely because of the dramatic reduction of black carbon emissions from the diesel bus engines." Overall, the fuel switch for the vehicles reduced black carbon emissions enough to produce a 10 percent net reduction in CO2-eq., and perhaps as much as 30 percent. The main gains were from diesel bus engines, whose CO2-eq. emissions were reduced 20 percent. According to a study examining these emissions reductions, "there is a significant potential for emissions reductions through the [UNFCCC] Clean Development for such fuel switching projects." Technologies are also in development to reduce some of the 133,000 metric tons of particulate matter emitted each year from ships. Ocean vessels use diesel engines, and particulate filters similar to those in use for land vehicles are now being tested on them. As with current particulate filters, these too would require the ships to use ULSD, but if comparable emissions reductions are attainable, up to 120,000 metric tons of particulate emissions could be eliminated each year from international shipping. That is, if particulate filters could be shown to reduce black carbon emissions from ships by 90 percent, as they do for land vehicles, 120,000 metric tons of today's 133,000 metric tons of emissions would be prevented. Other efforts can reduce the amount of black carbon emissions from ships simply by decreasing the amount of fuel the ships use. By traveling at slower speeds or by using shore-side electricity when at port instead of running the ship's diesel engines for electric power, ships can save fuel and reduce emissions. Reynolds and Kandlikar estimate that the shift to compressed natural gas for public transport in New Delhi ordered by the Supreme Court reduced climate emissions by 10 to 30%. Ramanathan estimates that "providing alternative energy-efficient and smoke-free cookers and introducing transferring technology for reducing soot emissions from coal combustion in small industries could have major impacts on the radiative forcing due to soot." Specifically, the impact of replacing biofuel cooking with black carbon-free cookers (solar, bio, and natural gas) in South and East Asia is dramatic: over South Asia, a 70 to 80% reduction in black carbon heating; and in East Asia, a 20 to 40% reduction. Biodegradation Condensed aromatic ring structures indicate black carbon degradation in soil. Saprophytic fungi are being researched for their potential role in the degradation of black carbon. Policy options Many countries have existing national laws to regulate black carbon emissions, including laws that address particulate emissions. Some examples include: banning or regulating slash-and-burn clearing of forests and savannas; requiring shore-based power/electrification of ships at port, regulating idling at terminals, and mandating fuel standards for ships seeking to dock at port; requiring regular vehicle emissions tests, retirement, or retrofitting (e.g.
adding particulate traps), including penalties for failing to meet air quality emissions standards, and heightened penalties for on-the-road "super-emitting" vehicles; banning or regulating the sale of certain fuels and/or requiring the use of cleaner fuels for certain uses; limiting the use of chimneys and other forms of biomass burning in urban and non-urban areas; requiring permits to operate industrial, power generating, and oil refining facilities, and periodic permit renewal and/or modification of equipment; and requiring filtering technology and high-temperature combustion (e.g. supercritical coal) for existing power generation plants, and regulating annual emissions from power generation plants. The International Network for Environmental Compliance & Enforcement issued a Climate Compliance Alert on Black Carbon in 2008, which cited reduction of black carbon as a cost-effective way to address a major cause of global warming. See also Nuclear winter Asian brown cloud Global dimming Peat bog Environmental impact of the coal industry Diesel exhaust References Further reading Stone, R. S.; Sharma, S.; Herber, A.; Eleftheriadis, K.; Nelson, D. W. (10 June 2014). "A characterization of Arctic aerosols on the basis of aerosol optical depth and black carbon measurements". Elementa: Science of the Anthropocene. 2: 000027. Bibcode:2014EleSA...2.0027S. doi:10.12952/journal.elementa.000027. External links Integrated Assessment of Black Carbon and Tropospheric Ozone Archived 2012-10-20 at the Wayback Machine, 2012, United Nations Environment Programme. Why Black Carbon and Ozone Also Matter, in September/October 2009 Foreign Affairs with Veerabhadran Ramanathan and Jessica Seddon Wallack. The Climate Threat We Can Beat, in May/June 2012 Foreign Affairs with David G. Victor, Charles F. Kennel, Veerabhadran Ramanathan UCSD Researchers: Where International Climate Policy Has Failed, Grassroots Efforts Can Succeed; Control of greenhouse agents other than CO2 needs to reach the local level, according to a new Foreign Affairs essay, April 26, 2012, University of California, San Diego
precipitation
In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls from clouds due to gravitational pull. The main forms of precipitation include drizzle, rain, sleet, snow, ice pellets, graupel and hail. Precipitation occurs when a portion of the atmosphere becomes saturated with water vapor (reaching 100% relative humidity), so that the water condenses and "precipitates" or falls. Thus, fog and mist are not precipitation but colloids, because the water vapor does not condense sufficiently to precipitate. Two processes, possibly acting together, can lead to air becoming saturated: cooling the air or adding water vapor to the air. Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called showers. Moisture that is lifted or otherwise forced to rise over a layer of sub-freezing air at the surface may be condensed into clouds and rain. This process is typically active when freezing rain occurs. A stationary front is often present near the area of freezing rain and serves as the focus for forcing and rising air. Provided there is necessary and sufficient atmospheric moisture content, the moisture within the rising air will condense into clouds, namely nimbostratus and cumulonimbus if significant precipitation is involved. Eventually, the cloud droplets will grow large enough to form raindrops and descend toward the Earth where they will freeze on contact with exposed objects. Where relatively warm water bodies are present, for example due to water evaporation from lakes, lake-effect snowfall becomes a concern downwind of the warm lakes within the cold cyclonic flow around the backside of extratropical cyclones. Lake-effect snowfall can be locally heavy. Thundersnow is possible within a cyclone's comma head and within lake effect precipitation bands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. Most precipitation occurs within the tropics and is caused by convection. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah regions. Precipitation is a major component of the water cycle, and is responsible for depositing fresh water on the planet. Approximately 505,000 cubic kilometres (121,000 cu mi) of water falls as precipitation each year: 398,000 cubic kilometres (95,000 cu mi) over oceans and 107,000 cubic kilometres (26,000 cu mi) over land. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in), but over land it is only 715 millimetres (28.1 in). Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes. Global warming is already causing changes to weather, increasing precipitation in some geographies and reducing it in others, resulting in additional extreme weather. Precipitation may occur on other celestial bodies. Saturn's largest satellite, Titan, hosts methane precipitation as a slow-falling drizzle, which has been observed as rain puddles at its equator and polar regions. Types Precipitation is a major component of the water cycle, and is responsible for depositing most of the fresh water on the planet.
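The globally averaged depths quoted in the introduction (and repeated below) follow directly from dividing the annual precipitation volumes by surface area, as the short sketch below reproduces. The precipitation volumes are taken from the text; the surface areas (roughly 5.1 × 10^8 km² for the whole Earth and 1.49 × 10^8 km² for land) are standard reference values assumed here rather than figures from this article.

```python
# Check of the globally averaged precipitation depths quoted above.
# Precipitation volumes come from the text; surface areas are standard
# reference values (assumed here, not stated in the article).

KM_TO_MM = 1_000_000          # 1 km = 1,000,000 mm

global_volume_km3 = 505_000   # total annual precipitation
land_volume_km3 = 107_000     # annual precipitation over land

earth_area_km2 = 5.1e8        # assumed total surface area of Earth
land_area_km2 = 1.49e8        # assumed land surface area

global_depth_mm = global_volume_km3 / earth_area_km2 * KM_TO_MM
land_depth_mm = land_volume_km3 / land_area_km2 * KM_TO_MM

print(f"Global average: {global_depth_mm:.0f} mm/yr")  # ~990 mm, matching the quoted figure
print(f"Land average:   {land_depth_mm:.0f} mm/yr")    # ~720 mm, close to the quoted 715 mm
```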
Approximately 505,000 km3 (121,000 cu mi) of water falls as precipitation each year, 398,000 km3 (95,000 cu mi) of it over the oceans. Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres (39 in). Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Mixtures of different types of precipitation, including types in different categories, can fall simultaneously. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact within a subfreezing air mass is called "freezing rain" or "freezing drizzle". Frozen forms of precipitation include snow, ice needles, ice pellets, hail, and graupel. Measurement Liquid precipitation Rainfall (including drizzle and rain) is usually measured using a rain gauge and expressed in units of millimeters (mm) of height or depth. Equivalently, it can be expressed as a physical quantity with dimension of volume of water per collection area, in units of liters per square meter (L/m2); as 1 L = 1 dm3 = 1 mm·m2, the units of area (m2) cancel out, resulting in simply "mm". This also corresponds to an area density expressed in kg/m2, if one assumes that 1 liter of water has a mass of 1 kg (water density), which is acceptable for most practical purposes. The corresponding English unit used is usually inches. In Australia before metrication, rainfall was also measured in "points", each of which was defined as one-hundredth of an inch. Solid precipitation A snow gauge is usually used to measure the amount of solid precipitation. Snowfall is usually measured in centimeters by letting snow fall into a container and then measuring the height. The snow can then optionally be melted to obtain a water equivalent measurement in millimeters, as for liquid precipitation. The relationship between snow height and water equivalent depends on the water content of the snow; the water equivalent can thus only provide a rough estimate of snow depth. Other forms of solid precipitation, such as snow pellets and hail or even sleet (rain and snow mixed), can also be melted and measured as their respective water equivalents, usually expressed in millimeters as for liquid precipitation. Air becomes saturated Cooling air to its dew point The dew point is the temperature to which a parcel of air must be cooled in order to become saturated, at which point (unless super-saturation occurs) the water vapor condenses to water. Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. The cloud condensation nuclei concentration will determine the cloud microphysics. An elevated portion of a frontal zone forces broad areas of lift, which form cloud decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions. There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling.
Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation. Adding moisture to the air The main ways water vapor is added to the air are: wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains. Forms of precipitation Raindrops Coalescence occurs when water droplets fuse to create larger water droplets, or when water droplets freeze onto an ice crystal, which is known as the Bergeron process. The fall rate of very small droplets is negligible, hence clouds do not fall out of the sky; precipitation will only occur when these coalesce into larger drops. Droplets of different sizes fall at different terminal velocities, causing collisions that produce larger droplets; turbulence further enhances this collision process. As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain. Raindrops have sizes ranging from about 0.1 to 9 millimetres (0.004 to 0.35 in) in mean diameter, above which they tend to break up. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Contrary to the cartoon pictures of raindrops, their shape does not resemble a teardrop. Intensity and duration of rainfall are usually inversely related, i.e., high intensity storms are likely to be of short duration and low intensity storms can have a long duration. Rain drops associated with melting hail tend to be larger than other rain drops. The METAR code for rain is RA, while the coding for rain showers is SHRA. Ice pellets Ice pellets or sleet are a form of precipitation consisting of small, translucent balls of ice. Ice pellets are usually (but not always) smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is PL. Ice pellets form when a layer of above-freezing air exists with sub-freezing air both above and below. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too shallow, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season, but can occasionally be found behind a passing cold front. 
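The vertical temperature profile described above can be turned into a rough rule of thumb for anticipating precipitation type. The following Python sketch is purely illustrative: the function name, thresholds, and layer-depth rule are simplifying assumptions made for this example rather than an operational forecasting method, but it captures the reasoning that a warm layer aloft over a deep sub-freezing layer favors ice pellets, while a shallow cold layer favors freezing rain.

# Illustrative only: classify likely precipitation type from a simplified
# vertical temperature profile, following the layer reasoning in the text.
def classify_precip_type(temps_aloft_c, surface_temp_c):
    """temps_aloft_c: temperatures (deg C) listed from cloud level down to just above the surface."""
    warm_levels = [i for i, t in enumerate(temps_aloft_c) if t > 0.0]

    if not warm_levels:
        return "snow"                     # never melts on the way down
    if surface_temp_c > 0.0 and temps_aloft_c[-1] > 0.0:
        return "rain"                     # melts and stays melted

    # Snow melted in a warm layer aloft falls into sub-freezing air below:
    # a deep cold layer lets the drops re-freeze (ice pellets), a shallow one
    # does not (freezing rain on contact with the surface).
    lowest_warm = max(warm_levels)
    cold_levels_below = sum(1 for t in temps_aloft_c[lowest_warm + 1:] if t < 0.0)
    return "ice pellets" if cold_levels_below >= 2 else "freezing rain"

# Example: warm layer aloft over a deep sub-freezing layer near the ground
print(classify_precip_type([-8.0, 2.0, 3.0, -4.0, -6.0], surface_temp_c=-3.0))  # ice pellets
print(classify_precip_type([-8.0, 2.0, 3.0, -1.0], surface_temp_c=-1.0))        # freezing rain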
Hail Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted again. Hail has a diameter of 5 millimetres (0.20 in) or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least 6.4 millimetres (0.25 in). GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil. Stones just larger than golf ball-sized are one of the most frequently reported hail sizes. Hailstones can grow to 15 centimetres (6 in) and weigh more than 500 grams (1 lb). In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud. Snowflakes Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. Once a droplet has frozen, it grows in the supersaturated environment. Because water droplets are more numerous than the ice crystals the crystals are able to grow to hundreds of micrometers in size at the expense of the water droplets. This process is known as the Wegener–Bergeron–Findeisen process. The corresponding depletion of water vapor causes the droplets to evaporate, meaning that the ice crystals grow at the droplets' expense. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Guinness World Records list the world's largest snowflakes as those of January 1887 at Fort Keogh, Montana; allegedly one measured 38 cm (15 in) wide. The exact details of the sticking mechanism remain a subject of research. Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections mean that the crystals often appear white in color due to diffuse reflection of the whole spectrum of light by the small ice particles. The shape of the snowflake is determined broadly by the temperature and humidity at which it is formed. Rarely, at a temperature of around −2 °C (28 °F), snowflakes can form in threefold symmetry—triangular snowflakes. The most common snow particles are visibly irregular, although near-perfect snowflakes may be more common in pictures because they are more visually appealing. No two snowflakes are alike, as they grow at different rates and in different patterns depending on the changing temperature and humidity within the atmosphere through which they fall on their way to the ground. The METAR code for snow is SN, while snow showers are coded SHSN. Diamond dust Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching −40 °C (−40 °F) due to air with slightly higher moisture from aloft mixing with colder, surface-based air. They are made of simple ice crystals, hexagonal in shape. The METAR identifier for diamond dust within international hourly weather reports is IC. 
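The METAR abbreviations scattered through the sections above can be collected into a single lookup. The short Python sketch below only gathers the codes mentioned in this article and adds a hypothetical decode helper for the optional "-"/"+" intensity prefixes; real METAR reports also carry descriptor and obscuration groups that are not handled here.

# Lookup of the METAR precipitation codes mentioned in the text.
METAR_PRECIP = {
    "RA": "rain",
    "SHRA": "rain showers",
    "PL": "ice pellets",
    "GR": "hail (diameter of at least 6.4 mm)",
    "GS": "small hail / snow pellets",
    "SN": "snow",
    "SHSN": "snow showers",
    "IC": "ice crystals (diamond dust)",
}

def decode(code: str) -> str:
    # Optional leading "-" (light) or "+" (heavy) intensity marker
    intensity = {"-": "light ", "+": "heavy "}.get(code[0], "")
    key = code.lstrip("+-")
    return intensity + METAR_PRECIP.get(key, "unknown phenomenon")

print(decode("-SHRA"))  # light rain showers
print(decode("+GR"))    # heavy hail (diameter of at least 6.4 mm)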
Occult deposition Occult deposition occurs when mist or air that is highly saturated with water vapour interacts with the leaves of trees or shrubs it passes over. Causes Frontal activity Stratiform or dynamic precipitation occurs as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as over surface cold fronts, and over and ahead of warm fronts. Similar ascent is seen around tropical cyclones outside of the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. Precipitation may occur on celestial bodies other than Earth. When it gets cold, Mars has precipitation that most likely takes the form of ice needles, rather than rain or snow. Convection Convective rain, or showery precipitation, occurs from convective clouds, e.g. cumulonimbus or cumulus congestus. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts. Convective precipitation mostly consist of mesoscale convective systems and they produce torrential rainfalls with thunderstorms, wind damages, and other forms of severe weather events. Orographic effects Orographic precipitation occurs on the windward (upwind) side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming, leeward side where a rain shadow is observed.In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it has the second-highest average annual rainfall on Earth, with 12,000 millimetres (460 in). Storm systems affect the state with heavy rains between October and March. Local climates vary considerably on each island due to their topography, divisible into windward (Koʻolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover.In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America forming the Great Basin and Mojave Deserts. Similarly, in Asia, the Himalaya mountains create an obstacle to monsoons which leads to extremely high precipitation on the southern side and lower precipitation levels on the northern side. 
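The rain shadow mechanism described above (moist air cooling as it is forced up the windward slope, then warming by compression as it descends the lee slope) can be illustrated with a back-of-the-envelope calculation. The lapse rates, the 125 m per degree rule of thumb for the lifting condensation level, and the function below are idealized teaching values and assumptions, not a full thermodynamic treatment.

# Sketch of why the leeward side ends up warmer and drier: the parcel cools
# at the dry adiabatic rate until it saturates, then at a slower moist rate
# while condensation removes moisture, and descends the lee slope dry.
DRY_LAPSE = 9.8    # deg C per km
MOIST_LAPSE = 6.0  # deg C per km (typical mid-range value)

def lee_side_temperature(t_surface_c, dew_point_c, ridge_height_km):
    # Lifting condensation level, rough rule of thumb: ~125 m per deg C of T - Td spread
    lcl_km = min(0.125 * (t_surface_c - dew_point_c), ridge_height_km)
    t_ridge = (t_surface_c
               - DRY_LAPSE * lcl_km
               - MOIST_LAPSE * (ridge_height_km - lcl_km))
    # Descent on the lee side is unsaturated, so warming follows the dry rate
    return t_ridge + DRY_LAPSE * ridge_height_km

# Moist Pacific air (20 C, dew point 14 C) crossing a 3 km ridge
print(round(lee_side_temperature(20.0, 14.0, 3.0), 1))  # ~28.6 C: noticeably warmer and drier than it started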
Snow Extratropical cyclones can bring cold and dangerous conditions with heavy rain and snow with winds exceeding 119 km/h (74 mph), (sometimes referred to as windstorms in Europe). The band of precipitation that is associated with their warm front is often extensive, forced by weak upward vertical motion of air over the frontal boundary which condenses as it cools and produces precipitation within an elongated band, which is wide and stratiform, meaning falling out of nimbostratus clouds. When moist air tries to dislodge an arctic air mass, overrunning snow can result within the poleward side of the elongated precipitation band. In the Northern Hemisphere, poleward is towards the North Pole, or north. Within the Southern Hemisphere, poleward is towards the South Pole, or south. Southwest of extratropical cyclones, curved cyclonic flow bringing cold air across the relatively warm water bodies can lead to narrow lake-effect snow bands. Those bands bring strong localized snowfall which can be understood as follows: Large water bodies such as lakes efficiently store heat that results in significant temperature differences (larger than 13 °C or 23 °F) between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds (see satellite picture) which produce snow showers. The temperature decrease with height and cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the deeper the clouds get, and the greater the precipitation rate becomes.In mountainous areas, heavy snowfall accumulates when air is forced to ascend the mountains and squeeze out precipitation along their windward slopes, which in cold conditions, falls in the form of snow. Because of the ruggedness of terrain, forecasting the location of heavy snowfall remains a significant challenge. Within the tropics The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the intertropical convergence zone or monsoon trough move poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly. Soil nutrients diminish and erosion increases. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. 
Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counterclockwise (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage. Large-scale geographical distribution On the large scale, the highest precipitation amounts outside topography fall in the tropics, closely tied to the Intertropical Convergence Zone, itself the ascending branch of the Hadley cell. Mountainous locales near the equator in Colombia are amongst the wettest places on Earth. North and south of this are regions of descending air that form subtropical ridges where precipitation is low; the land surface underneath these ridges is usually arid, and these regions make up most of the Earth's deserts. An exception to this rule is in Hawaii, where upslope flow due to the trade winds lead to one of the wettest locations on Earth. Otherwise, the flow of the Westerlies into the Rocky Mountains lead to the wettest, and at elevation snowiest, locations within North America. In Asia during the wet season, the flow of moist air into the Himalayas leads to some of the greatest rainfall amounts measured on Earth in northeast India. Measurement The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in 100 mm (3.9 in) plastic and 200 mm (7.9 in) metal varieties. The inner cylinder is filled by 25 mm (0.98 in) of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to 0.25 mm (0.0098 in) resolution, while metal gauges require use of a stick designed with the appropriate 0.25 mm (0.0098 in) markings. After the inner cylinder is filled, the amount inside is discarded, then filled with the remaining rainfall in the outer cylinder until all the fluid in the outer cylinder is gone, adding to the overall total until the outer cylinder is empty. These gauges are used in the winter by removing the funnel and inner cylinder and allowing snow and freezing rain to collect inside the outer cylinder. Some add anti-freeze to their gauge so they do not have to melt the snow or ice that falls into the gauge. Once the snowfall/ice is finished accumulating, or as 300 mm (12 in) is approached, one can either bring it inside to melt, or use lukewarm water to fill the inner cylinder with in order to melt the frozen precipitation in the outer cylinder, keeping track of the warm fluid added, which is subsequently subtracted from the overall total once all the ice/snow is melted.Other types of gauges include the popular wedge gauge (the cheapest rain gauge and most fragile), the tipping bucket rain gauge, and the weighing rain gauge. The wedge and tipping bucket gauges have problems with snow. Attempts to compensate for snow/ice by warming the tipping bucket meet with limited success, since snow may sublimate if the gauge is kept much above freezing. 
Weighing gauges with antifreeze should do fine with snow, but again, the funnel needs to be removed before the event begins. For those looking to measure rainfall the most inexpensively, a can that is cylindrical with straight sides will act as a rain gauge if left out in the open, but its accuracy will depend on what ruler is used to measure the rain with. Any of the above rain gauges can be made at home, with enough know-how.When a precipitation measurement is made, various networks exist across the United States and elsewhere where rainfall measurements can be submitted through the Internet, such as CoCoRAHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather office will likely be interested in the measurement. Hydrometeor definition A concept used in precipitation measurement is the hydrometeor. Any particulates of liquid or solid water in the atmosphere are known as hydrometeors. Formations due to condensation, such as clouds, haze, fog, and mist, are composed of hydrometeors. All precipitation types are made up of hydrometeors by definition, including virga, which is precipitation which evaporates before reaching the ground. Particles blown from the Earth's surface by wind, such as blowing snow and blowing sea spray, are also hydrometeors, as are hail and snow. Satellite estimates Although surface precipitation gauges are considered the standard for measuring precipitation, there are many areas in which their use is not feasible. This includes the vast expanses of ocean and remote land areas. In other cases, social, technical or administrative issues prevent the dissemination of gauge observations. As a result, the modern global record of precipitation largely depends on satellite observations.Satellite sensors work by remotely sensing precipitation—recording various parts of the electromagnetic spectrum that theory and practice show are related to the occurrence and intensity of precipitation. The sensors are almost exclusively passive, recording what they see, similar to a camera, in contrast to active sensors (radar, lidar) that send out a signal and detect its impact on the area being observed. Satellite sensors now in practical use for precipitation fall into two categories. Thermal infrared (IR) sensors record a channel around 11 micron wavelength and primarily give information about cloud tops. Due to the typical structure of the atmosphere, cloud-top temperatures are approximately inversely related to cloud-top heights, meaning colder clouds almost always occur at higher altitudes. Further, cloud tops with a lot of small-scale variation are likely to be more vigorous than smooth-topped clouds. Various mathematical schemes, or algorithms, use these and other properties to estimate precipitation from the IR data.The second category of sensor channels is in the microwave part of the electromagnetic spectrum. The frequencies in use range from about 10 gigahertz to a few hundred GHz. Channels up to about 37 GHz primarily provide information on the liquid hydrometeors (rain and drizzle) in the lower parts of clouds, with larger amounts of liquid emitting higher amounts of microwave radiant energy. Channels above 37 GHz display emission signals, but are dominated by the action of solid hydrometeors (snow, graupel, etc.) to scatter microwave radiant energy. Satellites such as the Tropical Rainfall Measuring Mission (TRMM) and the Global Precipitation Measurement (GPM) mission employ microwave sensors to form precipitation estimates. 
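As a purely illustrative sketch of the IR approach described above, the function below maps a cloud-top brightness temperature to a rain rate, with colder (higher, more vigorous) tops assigned heavier rain. The 240 K threshold and the coefficients are hypothetical placeholders chosen only to show the shape of such a relationship; they are not taken from any operational satellite algorithm.

import math

def ir_rain_rate(cloud_top_temp_k: float) -> float:
    """Map an ~11-micron brightness temperature (K) to an illustrative rain rate (mm/h)."""
    if cloud_top_temp_k >= 240.0:        # warm tops: assume little or no rain
        return 0.0
    # Exponential increase as cloud tops get colder than the threshold
    return 0.5 * math.exp(0.08 * (240.0 - cloud_top_temp_k))

for t in (250, 235, 220, 205):
    print(t, "K ->", round(ir_rain_rate(t), 1), "mm/h")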
Additional sensor channels and products have been demonstrated to provide additional useful information including visible channels, additional IR channels, water vapor channels and atmospheric sounding retrievals. However, most precipitation data sets in current use do not employ these data sources. Satellite data sets The IR estimates have rather low skill at short time and space scales, but are available very frequently (15 minutes or more often) from satellites in geosynchronous Earth orbit. IR works best in cases of deep, vigorous convection—such as the tropics—and becomes progressively less useful in areas where stratiform (layered) precipitation dominates, especially in mid- and high-latitude regions. The more-direct physical connection between hydrometeors and microwave channels gives the microwave estimates greater skill on short time and space scales than is true for IR. However, microwave sensors fly only on low Earth orbit satellites, and there are few enough of them that the average time between observations exceeds three hours. This several-hour interval is insufficient to adequately document precipitation because of the transient nature of most precipitation systems as well as the inability of a single satellite to appropriately capture the typical daily cycle of precipitation at a given location. Since the late 1990s, several algorithms have been developed to combine precipitation data from multiple satellites' sensors, seeking to emphasize the strengths and minimize the weaknesses of the individual input data sets. The goal is to provide "best" estimates of precipitation on a uniform time/space grid, usually for as much of the globe as possible. In some cases the long-term homogeneity of the dataset is emphasized, which is the Climate Data Record standard. In other cases, the goal is producing the best instantaneous satellite estimate, which is the High Resolution Precipitation Product approach. In either case, of course, the less-emphasized goal is also considered desirable. One key result of the multi-satellite studies is that including even a small amount of surface gauge data is very useful for controlling the biases that are endemic to satellite estimates. The difficulties in using gauge data are that 1) their availability is limited, as noted above, and 2) the best analyses of gauge data take two months or more after the observation time to undergo the necessary transmission, assembly, processing and quality control. Thus, precipitation estimates that include gauge data tend to be produced further after the observation time than the no-gauge estimates. As a result, while estimates that include gauge data may provide a more accurate depiction of the "true" precipitation, they are generally not suited for real- or near-real-time applications. The work described has resulted in a variety of datasets possessing different formats, time/space grids, periods of record and regions of coverage, input datasets, and analysis procedures, as well as many different forms of dataset version designators. In many cases, one of the modern multi-satellite data sets is the best choice for general use. Return period The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historical data for the location. 
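A return period translates directly into probabilities, as the 1 in 10 year and 1 in 100 year storm descriptions that follow illustrate in prose. The short calculation below assumes the standard simplification that events in different years are independent; the function names are only illustrative.

def annual_probability(return_period_years: float) -> float:
    """Chance of the event being equalled or exceeded in any single year."""
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years: float, n_years: int) -> float:
    """Chance of at least one such event occurring over a span of n_years."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** n_years

print(annual_probability(100))               # 0.01: a "1 in 100 year" storm has a 1% chance each year
print(round(prob_at_least_one(100, 30), 2))  # ~0.26 chance of at least one in any 30-year period
print(round(prob_at_least_one(100, 100), 2)) # ~0.63: not a certainty even over a full century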
The term 1 in 10 year storm describes a rainfall event which is rare and is only likely to occur once every 10 years, so it has a 10 percent likelihood any given year. The rainfall will be greater and the flooding will be worse than the worst storm expected in any single year. The term 1 in 100 year storm describes a rainfall event which is extremely rare and which will occur with a likelihood of only once in a century, so has a 1 percent likelihood in any given year. The rainfall will be extreme and the flooding worse than in a 1 in 10 year event. As with all probability events, it is possible though unlikely to have two "1 in 100 Year Storms" in a single year. Uneven pattern of precipitation A significant portion of the annual precipitation in any particular place (no weather stations in Africa or South America were considered) falls on only a few days, typically about 50% during the 12 days with the most precipitation. Role in Köppen climate classification The Köppen classification depends on average monthly values of temperature and precipitation. The most commonly used form of the Köppen classification has five primary types labeled A through E. Specifically, the primary types are A, tropical; B, dry; C, mild mid-latitude; D, cold mid-latitude; and E, polar. The five primary classifications can be further divided into secondary classifications such as rain forest, monsoon, tropical savanna, humid subtropical, humid continental, oceanic climate, Mediterranean climate, steppe, subarctic climate, tundra, polar ice cap, and desert. Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between 1,750 and 2,000 mm (69 and 79 in). A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between 750 and 1,270 mm (30 and 50 in) a year. They are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. The humid subtropical climate zone is where winter rainfall (and sometimes snowfall) is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° from the equator. An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year-round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of western and southern Australia, in southwestern South Africa and in parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold with continuous permafrost and little precipitation. Effect on agriculture Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive, so rain (being the most effective means of watering) is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. 
Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year to survive. In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season. Changes due to global warming Increasing temperatures tend to increase evaporation which leads to more precipitation. Precipitation has generally increased over land north of 30°N from 1900 to 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. In 2018, a study assessing changes in precipitation across spatial scales using a high-resolution global precipitation dataset of over 33+ years, concluded that "While there are regional trends, there is no evidence of increase in precipitation at the global scale in response to the observed global warming."Each region of the world is going to have changes in precipitation due to their unique conditions. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts—especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation, more evaporation, or both). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1% per century since 1900, with the greatest increases within the East North Central climate region (11.6% per century) and the South (11.1%). Hawaii was the only region to show a decrease (−9.25%). Changes due to urban heat island The urban heat island warms cities 0.6 to 5.6 °C (1.1 to 10.1 °F) above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater between 32 and 64 kilometres (20 and 40 mi) downwind of cities, compared with upwind. Some cities induce a total precipitation increase of 51%. Forecasting The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time period over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200 and 1800 GMT. 
Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or in the lowest levels of the atmosphere, which decreases with height. QPF can be generated on a quantitative, forecasting amounts, or a qualitative, forecasting the probability of a specific amount, basis. Radar imagery forecasting techniques show higher skill than model forecasts within six to seven hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast. See also List of meteorology topics Basic precipitation Bioprecipitation, the concept of rain-making bacteria. Mango showers, pre-monsoon showers in the Indian states of Karnataka and Kerala that help in the ripening of mangoes. Sunshower, an unusual meteorological phenomenon in which rain falls while the sun is shining. Wintry showers, an informal meteorological term for various mixtures of rain, freezing rain, sleet and snow. References External links Current global map of predicted precipitation for the next three hours Global Precipitation Climatology Centre GPCC
methanogenesis
Methanogenesis or biomethanation is the formation of methane coupled to energy conservation by microbes known as methanogens. Organisms capable of producing methane for energy conservation have been identified only from the domain Archaea, a group phylogenetically distinct from both eukaryotes and bacteria, although many live in close association with anaerobic bacteria. The production of methane is an important and widespread form of microbial metabolism. In anoxic environments, it is the final step in the decomposition of biomass. Methanogenesis is responsible for significant amounts of natural gas accumulations, the remainder being thermogenic. Biochemistry Methanogenesis in microbes is a form of anaerobic respiration. Methanogens do not use oxygen to respire; in fact, oxygen inhibits the growth of methanogens. The terminal electron acceptor in methanogenesis is not oxygen, but carbon. The two best described pathways involve the use of acetic acid or inorganic carbon dioxide as terminal electron acceptors: CO2 + 4 H2 → CH4 + 2 H2O and CH3COOH → CH4 + CO2. During anaerobic respiration of carbohydrates, H2 and acetate are formed in a ratio of 2:1 or lower, so H2 contributes only c. 33% to methanogenesis, with acetate contributing the greater proportion. In some circumstances, for instance in the rumen, where acetate is largely absorbed into the bloodstream of the host, the contribution of H2 to methanogenesis is greater. However, depending on pH and temperature, methanogenesis has been shown to use carbon from other small organic compounds, such as formic acid (formate), methanol, methylamines, tetramethylammonium, dimethyl sulfide, and methanethiol. The catabolism of the methyl compounds is mediated by methyl transferases to give methyl coenzyme M. Proposed mechanism The biochemistry of methanogenesis involves the following coenzymes and cofactors: F420, coenzyme B, coenzyme M, methanofuran, and methanopterin. The mechanism for the conversion of the CH3–S bond into methane involves a ternary complex of methyl coenzyme M and coenzyme B fitted into a channel terminated by the axial site on nickel of the cofactor F430. One proposed mechanism invokes electron transfer from Ni(I) (to give Ni(II)), which initiates formation of CH4. Coupling of the coenzyme M thiyl radical (RS•) with HS coenzyme B releases a proton and re-reduces Ni(II) by one electron, regenerating Ni(I). Reverse methanogenesis Some organisms can oxidize methane, functionally reversing the process of methanogenesis, also referred to as the anaerobic oxidation of methane (AOM). Organisms performing AOM have been found in multiple marine and freshwater environments including methane seeps, hydrothermal vents, coastal sediments and sulfate-methane transition zones. These organisms may accomplish reverse methanogenesis using a nickel-containing protein similar to methyl-coenzyme M reductase used by methanogenic archaea. Reverse methanogenesis occurs according to the reaction: SO42− + CH4 → HCO3− + HS− + H2O Importance in carbon cycle Methanogenesis is the final step in the decay of organic matter. During the decay process, electron acceptors (such as oxygen, ferric iron, sulfate, and nitrate) become depleted, while hydrogen (H2) and carbon dioxide accumulate. Light organics produced by fermentation also accumulate. During advanced stages of organic decay, all electron acceptors become depleted except carbon dioxide. Carbon dioxide is a product of most catabolic processes, so it is not depleted like other potential electron acceptors. 
Only methanogenesis and fermentation can occur in the absence of electron acceptors other than carbon. Fermentation only allows the breakdown of larger organic compounds, and produces small organic compounds. Methanogenesis effectively removes the semi-final products of decay: hydrogen, small organics, and carbon dioxide. Without methanogenesis, a great deal of carbon (in the form of fermentation products) would accumulate in anaerobic environments. Natural occurrence In ruminants Enteric fermentation occurs in the gut of some animals, especially ruminants. In the rumen, anaerobic organisms, including methanogens, digest cellulose into forms nutritious to the animal. Without these microorganisms, animals such as cattle would not be able to consume grasses. The useful products of methanogenesis are absorbed by the gut, but methane is released from the animal mainly by belching (eructation). The average cow emits around 250 liters of methane per day. In this way, ruminants contribute about 25% of anthropogenic methane emissions. One method of methane production control in ruminants is by feeding them 3-nitrooxypropanol. In humans Some humans produce flatus that contains methane. In one study of the feces of nine adults, five of the samples contained archaea capable of producing methane. Similar results are found in samples of gas obtained from within the rectum. Even among humans whose flatus does contain methane, the amount is in the range of 10% or less of the total amount of gas. In plants Many experiments have suggested that leaf tissues of living plants emit methane. Other research has indicated that the plants are not actually generating methane; they are just absorbing methane from the soil and then emitting it through their leaf tissues. In soils Methanogens are observed in anoxic soil environments, contributing to the degradation of organic matter. This organic matter may be placed by humans through landfill, buried as sediment on the bottom of lakes or oceans as sediments, and as residual organic matter from sediments that have formed into sedimentary rocks. In Earth's crust Methanogens are a notable part of the microbial communities in continental and marine deep biosphere. Role in global warming Atmospheric methane is an important greenhouse gas with a global warming potential 25 times greater than carbon dioxide (averaged over 100 years), and methanogenesis in livestock and the decay of organic material is thus a considerable contributor to global warming. It may not be a net contributor in the sense that it works on organic material which used up atmospheric carbon dioxide when it was created, but its overall effect is to convert the carbon dioxide into methane which is a much more potent greenhouse gas. Methanogenesis can also be beneficially exploited, to treat organic waste, to produce useful compounds, and the methane can be collected and used as biogas, a fuel. It is the primary pathway whereby most organic matter disposed of via landfill is broken down. Extra-terrestrial life The presence of atmospheric methane has a role in the scientific search for extra-terrestrial life. The justification is that on an astronomical timescale, methane in the atmosphere of an Earth-like celestial body will quickly dissipate, and that its presence on such a planet or moon therefore indicates that something is replenishing it. If methane is detected (by using a spectrometer for example) this may indicate that life is, or recently was, present. 
This was debated when methane was discovered in the Martian atmosphere by M.J. Mumma of NASA's Goddard Space Flight Center, and verified by the Mars Express Orbiter (2004) and in Titan's atmosphere by the Huygens probe (2005). This debate was furthered with the discovery of transient 'spikes' of methane on Mars by the Curiosity Rover. It is argued that atmospheric methane can come from volcanoes or other fissures in the planet's crust and that without an isotopic signature, the origin or source may be difficult to identify. On 13 April 2017, NASA confirmed that the dive of the Cassini orbiter spacecraft on 28 October 2015 discovered an Enceladus plume which has all the ingredients for methanogenesis-based life forms to feed on. Previous results, published in March 2015, suggested hot water is interacting with rock beneath the sea of Enceladus; the new finding supported that conclusion, and adds that the rock appears to be reacting chemically. From these observations scientists have determined that nearly 98 percent of the gas in the plume is water, about 1 percent is hydrogen, and the rest is a mixture of other molecules including carbon dioxide, methane and ammonia. See also Aerobic methane production Anaerobic digestion Anaerobic oxidation of methane Electromethanogenesis Hydrogen cycle Methanotroph Mootral == References ==
ice–albedo feedback
Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, so it reflects far more solar energy back to space than other types of land cover or open water. Ice–albedo feedback plays an important role in global climate change. For instance, at higher latitudes, warmer temperatures melt the ice sheets; if warming decreases the ice cover and the area is replaced by water or land, the albedo decreases. This increases the amount of solar energy absorbed, leading to more warming, so the change in albedo reinforces the initial change in ice area. In the geologically recent past, the ice–albedo positive feedback has played a major role in the advances and retreats of the Pleistocene (~2.6 Ma to ~10 ka ago) ice sheets. Conversely, cooler temperatures increase ice cover, which increases albedo, leading to more cooling. Significance Current Snow– and ice–albedo feedback have a substantial effect on regional temperatures. In particular, the presence of ice cover makes the North Pole and the South Pole colder than they would have been without it. Consequently, recent Arctic sea ice decline is one of the primary factors behind the Arctic warming nearly four times faster than the global average since 1979 (the year when continuous satellite readings of the Arctic sea ice began), in a phenomenon known as Arctic amplification. Modelling studies show that strong Arctic amplification only occurs during the months when significant sea ice loss occurs, and that it largely disappears when the simulated ice cover is held fixed. Conversely, the high stability of ice cover in Antarctica, where the thickness of the East Antarctic ice sheet allows it to rise nearly 4 km above the sea level, means that this continent has not experienced any net warming over the past seven decades: ice loss in the Antarctic and its contribution to sea level rise is instead driven entirely by the warming of the Southern Ocean, which had absorbed 35–43% of the total heat taken up by all oceans between 1970 and 2017. Ice–albedo feedback also has a smaller, but still notable, effect on global temperatures. Arctic ice decline between 1979 and 2011 is estimated to have been responsible for 0.21 watts per square meter (W/m2) of radiative forcing, which is equivalent to a quarter of radiative forcing from CO2 increases over the same period. When compared to cumulative increases in greenhouse gas radiative forcing since the start of the Industrial Revolution, it is equivalent to the estimated 2019 radiative forcing from nitrous oxide (0.21 W/m2), nearly half of 2019 radiative forcing from methane (0.54 W/m2) and 10% of the cumulative CO2 increase (2.16 W/m2). Future The impact of ice-albedo feedback on temperature will intensify in the future as the Arctic sea ice decline is projected to become more pronounced, with a likely near-complete loss of sea ice cover (falling below 1 million km2) at the end of the Arctic summer in September at least once before 2050 under all climate change scenarios, and around 2025 under the scenario of continually accelerating greenhouse gas emissions. 
Since September marks the end of the Arctic summer, it also represents the nadir of sea ice cover in the present climate, with an annual recovery process beginning in the Arctic winter. Consecutive ice-free Septembers are considered highly unlikely in the near future, but their frequency will increase with greater levels of global warming: a 2018 paper estimated that an ice-free September would occur once in every 40 years under a warming of 1.5 °C (2.7 °F), but once in every 8 years under 2 °C (3.6 °F) and once in every 1.5 years under 3 °C (5.4 °F). This means that the loss of Arctic sea ice during September or earlier in the summer would not be irreversible, and in the scenarios where global warming begins to reverse, its annual frequency would begin to go down as well. As such, it is not considered one of the tipping points in the climate system. Notably, while the loss of sea ice cover in September would be a historic event with significant implications for Arctic wildlife like polar bears, its impact on the ice-albedo feedback is relatively limited, as the total amount of solar energy received by the Arctic in September is already very low. On the other hand, even a relatively small reduction in June sea ice extent would have a far greater effect, since June represents the peak of the Arctic summer and the most intense transfer of solar energy. CMIP5 models estimate that a total loss of Arctic sea ice cover from June to September would increase the global temperatures by 0.19 °C (0.34 °F), with a range of 0.16–0.21 °C, while the regional temperatures would increase by over 1.5 °C (2.7 °F). This estimate includes not just the ice-albedo feedback itself, but also its second-order effects such the impact of such sea ice loss on lapse rate feedback, the changes in water vapor concentrations and regional cloud feedbacks. Since these calculations are already part of every CMIP5 and CMIP6 model, they are also included in their warming projections under every climate change pathway, and do not represent a source of "additional" warming on top of their existing projections. Very high levels of global warming could prevent Arctic sea ice from reforming during the Arctic winter. Unlike an ice-free summer, this ice-free Arctic winter may represent an irreversible tipping point. It is most likely to occur at around 6.3 °C (11.3 °F), though it could potentially occur as early as 4.5 °C (8.1 °F) or as late as 8.7 °C (15.7 °F). While the Arctic sea ice would be gone for an entire year, it would only have an impact on the ice-albedo feedback during the months where sunlight is received by the Arctic - i.e. from March to September. The difference between this total loss of sea ice and its 1979 state is equivalent to a trillion tons of CO2 emissions - around 40% of the 2.39 trillion tons of cumulative emissions between 1850 and 2019, although around a quarter of this impact has already happened with the current sea ice loss. Relative to now, an ice-free winter would have a global warming impact of 0.6 °C (1.1 °F), with a regional warming between 0.6 °C (1.1 °F) and 1.2 °C (2.2 °F). Ice–albedo feedback also exists with the other large ice masses on the Earth's surface, such as mountain glaciers, Greenland ice sheet, West Antarctic and East Antarctic ice sheet. However, their large-scale melt is expected to take centuries or even millennia, and any loss in area between now and 2100 will be negligible. 
Thus, climate change models do not include them in their projections of 21st century climate change: experiments where they model their disappearance indicate that the total loss of the Greenland Ice Sheet adds 0.13 °C (0.23 °F) to global warming (with a range of 0.04–0.06 °C), while the loss of the West Antarctic Ice Sheet adds 0.05 °C (0.090 °F) (0.04–0.06 °C), and the loss of mountain glaciers adds 0.08 °C (0.14 °F) (0.07–0.09 °C). Since the East Antarctic ice sheet would not be at risk of complete disappearance until the very high global warming of 5–10 °C (9.0–18.0 °F) is reached, and since its total melting is expected to take a minimum of 10,000 years to disappear entirely even then, it is rarely considered in such assessments. If it does happen, the maximum impact on global temperature is expected to be around 0.06 °C (0.11 °F). Total loss of the Greenland ice sheet would increase regional temperatures in the Arctic by between 0.5 °C (0.90 °F) and 3 °C (5.4 °F), while the regional temperature in Antarctica is likely to go up by 1 °C (1.8 °F) after the loss of the West Antarctic ice sheet and 2 °C (3.6 °F) after the loss of the East Antarctic ice sheet. Snowball Earth The runaway ice–albedo feedback was also important for the Snowball Earth. Geological evidence show glaciers near the equator, and models have suggested the ice–albedo feedback played a role. As more ice formed, more of the incoming solar radiation was reflected back into space, causing temperatures on Earth to drop. Whether the Earth was a complete solid snowball (completely frozen over), or a slush ball with a thin equatorial band of water still remains debated, but the ice–albedo feedback mechanism remains important for both cases. Ice–albedo feedback on exoplanets On Earth, the climate is heavily influenced by interactions with solar radiation and feedback processes. One might expect exoplanets around other stars to also experience feedback processes caused by stellar radiation that affect the climate of the world. In modeling the climates of other planets, studies have shown that the ice–albedo feedback is much stronger on terrestrial planets that are orbiting stars (see: stellar classification) that have a high near-ultraviolet radiation. See also Climate change feedback Climate sensitivity Dark Snow Project Polar amplification Polar see-saw – phenomenon where temperature variations at each of Earth's poles may not be in phase Soil carbon feedback References External links Turton, Steve (3 June 2021). "Why is the Arctic warming faster than other parts of the world? Scientists explain". WEForum.org. World Economic Forum. Archived from the original on 3 June 2021.
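The feedback described in this article is often illustrated with a zero-dimensional energy-balance model in which planetary albedo depends on temperature. The Python sketch below uses idealized teaching values (an albedo ramp between 250 K and 290 K and an effective emissivity standing in for the greenhouse effect); it is not a climate model, but it reproduces the qualitative point, relevant to the Snowball Earth discussion above, that a warm low-albedo state and a cold high-albedo state can each be self-sustaining.

# Zero-dimensional energy-balance sketch of the ice-albedo feedback.
# Albedo ramp and effective emissivity are assumed teaching values only.
SOLAR = 1361.0      # W/m^2, solar constant
SIGMA = 5.67e-8     # W m^-2 K^-4, Stefan-Boltzmann constant
EMISSIVITY = 0.58   # effective emissivity standing in for the greenhouse effect (assumed)

def albedo(temp_k: float) -> float:
    """Ice-covered planet below 250 K, ice-free above 290 K, linear ramp between."""
    if temp_k <= 250.0:
        return 0.7
    if temp_k >= 290.0:
        return 0.3
    return 0.7 - 0.4 * (temp_k - 250.0) / 40.0

def equilibrium_temperature(t_start_k: float, n_iter: int = 100) -> float:
    """Iterate absorbed = emitted: EMISSIVITY * SIGMA * T^4 = (SOLAR / 4) * (1 - albedo(T))."""
    t = t_start_k
    for _ in range(n_iter):
        absorbed = (SOLAR / 4.0) * (1.0 - albedo(t))
        t = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
    return t

# Starting warm keeps the planet on the low-albedo branch; starting cold keeps
# it on the high-albedo branch: the feedback sustains whichever state it begins in.
print(round(equilibrium_temperature(300.0), 1))  # ~292 K, warm branch
print(round(equilibrium_temperature(230.0), 1))  # ~236 K, ice-covered branch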
desflurane
Desflurane (1,2,2,2-tetrafluoroethyl difluoromethyl ether) is a highly fluorinated methyl ethyl ether used for maintenance of general anesthesia. Like halothane, enflurane, and isoflurane, it is a racemic mixture of (R) and (S) optical isomers (enantiomers). Together with sevoflurane, it is gradually replacing isoflurane for human use, except in economically undeveloped areas, where its high cost precludes its use. It has the most rapid onset and offset of the volatile anesthetic drugs used for general anesthesia due to its low solubility in blood. Some drawbacks of desflurane are its low potency, its pungency and its high cost (though at low flow fresh gas rates, the cost difference between desflurane and isoflurane appears to be insignificant). It may cause tachycardia and airway irritability when administered at concentrations greater than 10% by volume. Due to this airway irritability, desflurane is infrequently used to induce anesthesia via inhalation techniques. Though it vaporizes very readily, it is a liquid at room temperature. Anaesthetic machines are fitted with a specialized anaesthetic vaporiser unit that heats liquid desflurane to a constant temperature. This enables the agent to be available at a constant vapor pressure, negating the effects fluctuating ambient temperatures would otherwise have on its concentration imparted into the fresh gas flow of the anesthesia machine. Desflurane, along with enflurane and to a lesser extent isoflurane, has been shown to react with the carbon dioxide absorbent in anesthesia circuits to produce detectable levels of carbon monoxide through degradation of the anesthetic agent. The CO2 absorbent Baralyme, when dried, is most culpable for the production of carbon monoxide from desflurane degradation, although it is also seen with soda lime absorbent as well. Dry conditions in the carbon dioxide absorbent are conducive to this phenomenon, such as those resulting from high fresh gas flows. Pharmacology As of 2005 the exact mechanism of the action of general anaesthetics has not been delineated. Desflurane is known to act as a positive allosteric modulator of the GABAA and glycine receptors, and as a negative allosteric modulator of the nicotinic acetylcholine receptor, as well as affecting other ligand-gated ion channels. Stereochemistry Desflurane medications are a racemate of two enantiomers. Physical properties Physiologic effects Desflurane induces a dose dependent reduction in blood pressure due to reduced systemic vascular resistance. However, rapid increases in desflurane may induce a transient sympathetic response secondary to catecholamine release. Even though it is highly pungent, it is still a bronchodilator. It reduces the ventilatory response to hypoxia and hypercapnia. Like sevoflurane, desflurane vasodilatory properties also cause it to increase intracranial pressure and cerebral blood flow. However, it reduces cerebral metabolic rate. It also promotes muscle relaxation and potentiate neuromuscular blockade at a greater level than sevoflurane. Contraindications It is contraindicated for induction of general anesthesia in the non-intubated pediatric population due to the high risk of laryngospasm. It should not be used in patients with known or suspected susceptibility to malignant hyperthermia. It is also contraindicated in patients with elevated intracranial pressure. Global-warming potential Desflurane is a greenhouse gas. 
The twenty-year global-warming potential, GWP(20), for desflurane is 3714, meaning that one tonne of desflurane emitted is equivalent to 3714 tonnes of carbon dioxide in the atmosphere, much higher than sevoflurane or isoflurane. In addition to global warming potentials, drug potency and fresh gas flow rates must be considered for meaningful comparisons between anesthetic gases. When a steady state hourly amount of anesthetic necessary for 1 minimum alveolar concentration (MAC) at 2 liters per minute (LPM) for Sevoflurane, and 1 LPM for Desflurane and Isoflurane is weighted by the GWP, the clinically relevant quantities of each anesthetic can then be compared. On a per-MAC-hour basis, the total life cycle GHG impact of desflurane is more than 20 times higher than Isoflurane and Sevoflurane (1 minimal alveolar concentration-hour). One paper finds anesthesia gases used globally contribute the equivalent of 1 million cars to global warming. This estimate is commonly cited as a reason to neglect pollution prevention by anesthesiologists. However, this is problematic. This estimate is extrapolated from only one U.S. institution's anesthetic practices, and this institution uses virtually no Desflurane. Researchers neglected to include nitrous oxide in their calculations, and reported an erroneous average of 17 kg CO2e per anesthetic. However, institutions that utilize some Desflurane and account for nitrous oxide have reported an average of 175–220 kg CO2e per anesthetic. Sulbaek-Anderson's group therefore likely underestimated the total worldwide contribution of inhaled anesthetics, and yet still advocates for inhaled anesthetic emissions prevention.In March 2023, Scotland became the first country to ban its use due to its environmental impact. References Further reading External links "Desflurane". Drug Information Portal. U.S. National Library of Medicine.
blue dasher
The blue dasher (Pachydiplax longipennis) is an insect of the skimmer family. It is the only species in the genus Pachydiplax. It is very common and widely distributed through North America and into the Bahamas. Although the species name longipennis means "long wings", their wings are not substantially longer than those of related species. Females do, however, have a short abdomen that makes the wings appear longer in comparison. The blue dasher grows to 25–43 millimetres (0.98–1.69 in) long. The males are easy to recognize with their vibrant blue color, yellow-striped thorax, and metallic green eyes. Females are somewhat less colorful than the male, an example of sexual dimorphism. While they have a matching yellow-striped thorax, their abdomen has a distinct brown and yellow striping that sets them apart from the male, along with contrasting red eyes. Both sexes develop a frosted color with age. Pachydiplax longipennis exhibits aggression while finding mates and foraging, and the species is not under any conservation threat. Distribution and habitat Pachydiplax longipennis is a commonly spotted dragonfly species in the United States, and this species is found in many types of habitats. These habitats generally consist of some kind of body of water, like a stream, river, or lake. This species has now been spotted in lower portions of Canada (Ottawa), and it is suggested that climate change is allowing for a broadening of this species' distribution. Dispersal Dispersal of this species is linked to territorial behavior. Males of this species exhibit extreme territorial behavior, often leading to repercussions for smaller males. Smaller males tend to be driven away from breeding grounds by larger males, resulting in these smaller males dispersing to other areas. Researchers believe that this method of dispersal could be important in further studies of population genetics and gene flow of this species. Wing coloration also varies with the range of this species, indicating that dispersal location and wing coloration are connected. Populations of P. longipennis occurring in hotter regions tend to lack the darker wing coloration present in populations in cooler regions. This darker wing coloration can help with thermoregulation, flight performance, and territory securement. Thus, temperature has a large effect on the evolution of this species' wing coloration across its dispersal range. Habitat Blue dashers live near still, calm bodies of water, such as ponds, marshes, slow-moving waterways, and ditches, in warm areas typically at low elevations. The adults roost in trees at night. Diet and feeding These dragonflies, like others of their infraorder, are carnivorous, and are capable of eating hundreds of insects every day, including mosquito and mayfly larvae. The adult dragonfly will eat nearly any flying insect, such as a moth or fly. Nymphs have a diet that includes other aquatic larvae, small fish, and tadpoles. These dragonflies are known to be voracious predators, consuming up to 10% of their body weight each day in food. The blue dasher hunts by keeping still and waiting for suitable prey to come within range. When it does, the dragonfly darts from its position to catch it. The foraging behavior of this dragonfly is influenced by different factors, such as external temperature, prey availability, and perch position. P. longipennis tends to forage on small prey, which differs from the unselective foraging behavior of other Odonata species. 
This species also moves to different foraging sites frequently, meaning it does not stay in one place for long while searching for food. P. longipennis also exhibits aggressive behavior when foraging for food. Both males and females take part in this aggression when looking for prey. P. longipennis will engage in this behavior towards individuals of the same species and individuals of other species, but males tend to fight (and win) more often than females. Researchers suggest that the more successful an individual is at using aggression, the more likely it is to gain a better perch and thus increase its chance of finding prey. Life history Pachydiplax longipennis larvae exhibit asynchronous emergence, meaning that the larvae do not emerge at the same time as one another. Based on its general time of emergence, this species is still classified as a summer species. The larvae of this species often vary greatly in size due to generational overlap of groups. This generational overlap is created by some groups producing one brood and other groups producing two broods in a breeding season. The timing of P. longipennis larvae emergence has also been linked to the presence of its predator, Anax junius. Research has shown that if larvae are in their peak physical state, then they have a higher likelihood of emerging in the presence of their predator, as opposed to weaker larvae, which are more likely to emerge in the absence of the predator. Cannibalism also poses a threat, and the stronger larvae emerge earlier when this threat is high. Conservation and global warming This species is of low conservation concern. However, P. longipennis and all other dragonflies are indicators of a healthy ecosystem. As wetlands and other dragonfly habitats decrease due to habitat destruction, so do the populations of dragonflies. Therefore, dragonflies are at the forefront of conservation movements. In regard to global warming, studies have shown that increasing temperature has an effect on larval emergence time and survival. Larvae under the conditions predicted for 100 years in the future emerge significantly earlier, and their survival rate is much lower, indicating possible effects of global warming on this dragonfly. References External links Media related to Pachydiplax longipennis at Wikimedia Commons Citizen science observations for Blue dasher at iNaturalist Species Pachydiplax longipennis - Blue Dasher - Bug Guide https://bugguide.net/node/view/598
marine cloud brightening
Marine cloud brightening, also known as marine cloud seeding and marine cloud engineering, is a proposed solar radiation management climate engineering technique that would make clouds brighter, reflecting a small fraction of incoming sunlight back into space in order to offset anthropogenic global warming. Along with stratospheric aerosol injection, it is one of the two solar radiation management methods that may most feasibly have a substantial climate impact. The intention is that increasing the Earth's albedo, in combination with greenhouse gas emissions reduction, carbon dioxide removal, and adaptation, would reduce climate change and its risks to people and the environment. If implemented, the cooling effect is expected to be felt rapidly and to be reversible on fairly short time scales. However, technical barriers remain to large-scale marine cloud brightening. There are also risks with such modification of complex climate systems. Basic principles Marine cloud brightening is based on phenomena that are currently observed in the climate system. Today, emission particles mix with clouds in the atmosphere and increase the amount of sunlight they reflect, reducing warming. This 'cooling' effect is estimated at between 0.5 and 1.5 °C, and is one of the most important uncertainties in climate science. Marine cloud brightening proposes to generate a similar effect using benign material (e.g. sea salt) delivered to clouds that are most susceptible to these effects (marine stratocumulus). Most clouds are quite reflective, redirecting incoming solar radiation back into space. Increasing clouds' albedo would increase the portion of incoming solar radiation that is reflected, in turn cooling the planet. Clouds consist of water droplets, and clouds with smaller droplets are more reflective (because of the Twomey effect). Cloud condensation nuclei are necessary for water droplet formation. The central idea underlying marine cloud brightening is to add aerosols to atmospheric locations where clouds form. These would then act as cloud condensation nuclei, increasing the cloud albedo. The marine environment has a deficit of cloud condensation nuclei due to lower levels of dust and pollution at sea, so marine cloud brightening would be more effective over the ocean than over land. In fact, marine cloud brightening on a small scale already occurs unintentionally due to the aerosols in ships' exhaust, leaving ship tracks. Changes to shipping regulations enacted by the United Nations' International Maritime Organization (IMO) to reduce certain aerosols are hypothesized to be leading to reduced cloud cover and increased oceanic warming, providing additional support for the potential effectiveness of marine cloud brightening at modifying ocean temperature. Different cloud regimes are likely to have differing susceptibility to brightening strategies, with marine stratocumulus clouds (low, layered clouds over ocean regions) most sensitive to aerosol changes. These marine stratocumulus clouds are thus typically proposed as the most suitable target. They are common over the cooler regions of subtropical and midlatitude oceans, where their coverage can exceed 50% in the annual mean. The leading possible source of additional cloud condensation nuclei is salt from seawater, although there are others. Even though the importance of aerosols for the formation of clouds is, in general, well understood, many uncertainties remain. 
In fact, the latest IPCC report considers aerosol-cloud interactions as one of the current major challenges in climate modeling in general. In particular, the number of droplets does not increase proportionally when more aerosols are present and can even decrease. Extrapolating the effects of particles on clouds observed on the microphysical scale to the regional, climatically relevant scale, is not straightforward. Climatic impacts Reduction in global warming The modeling evidence of the global climatic effects of marine cloud brightening remains limited. Current modeling research indicates that marine cloud brightening could substantially cool the planet. One study estimated that it could produce 3.7 W/m2 of globally averaged negative forcing. This would counteract the warming caused by a doubling of the preindustrial atmospheric carbon dioxide concentration, or an estimated 3 degrees Celsius, although models have indicated less capacity. A 2020 study found a substantial increase in cloud reflectivity from shipping in southeast Atlantic basin, suggesting that a regional-scale test of MCB in stratocumulus‐dominated regions could be successful.The climatic impacts of marine cloud brightening would be rapidly responsive and reversible. If the brightening activity were to change in intensity, or stop altogether, then the clouds' brightness would respond within a few days to weeks, as the cloud condensation nuclei particles precipitate naturally.Again unlike stratospheric aerosol injection, marine cloud brightening might be able to be used regionally, albeit in a limited manner. Marine stratocumulus clouds are common in particular regions, specifically the eastern Pacific Ocean and the eastern South Atlantic Ocean. A typical finding among simulation studies was a persistent cooling of the Pacific, similar to the “La Niña” phenomenon, and, despite the localized nature of the albedo change, an increase in polar sea ice. Recent studies aim at making simulation findings derived from different models comparable. Side effects There is some potential for changes to precipitation patterns and amplitude, although modeling suggests that the changes are likely less than those for stratospheric aerosol injection and considerably smaller than for unabated anthropogenic global warming. Research Marine cloud brightening was originally suggested by John Latham in 1990.Because clouds remain a major source of uncertainty in climate change, some research projects into cloud reflectivity in the general climate change context have provided insight into marine cloud brightening specifically. For example, one project released smoke behind ships in the Pacific Ocean and monitored the particulates' impact on clouds. Although this was done in order to better understand clouds and climate change, the research has implications for marine cloud brightening. A research coalition called the Marine Cloud Brightening Project was formed in order to coordinate research activities. Its proposed program includes modeling, field experiments, technology development and policy research to study cloud-aerosol effects and marine cloud brightening. The proposed program currently serves as a model for process-level (environmentally benign) experimental programs in the atmosphere. Formed in 2009 by Kelly Wanser with support from Ken Caldeira, the project is now housed at the University of Washington. Its co-principals are Robert Wood, Thomas Ackerman, Philip Rasch, Sean Garner (PARC), and Kelly Wanser (Silver Lining). 
The project is managed by Sarah Doherty. The shipping industry may have been carrying out an unintentional experiment in marine cloud brightening: the aerosols in ship emissions may have kept global temperatures as much as 0.25 °C lower than they would otherwise have been. Marine cloud brightening is being examined as a way to shade and cool coral reefs such as the Great Barrier Reef. Proposed methods The leading proposed method for marine cloud brightening is to generate a fine mist of salt from seawater, and to deliver it into targeted banks of marine stratocumulus clouds from ships traversing the ocean. This requires technology that can generate optimally-sized (~100 nm) sea-salt particles and deliver them at sufficient force and scale to penetrate low-lying marine clouds. The resulting spray mist must then be delivered continuously into target clouds over the ocean. In the earliest published studies, John Latham and Stephen Salter proposed a fleet of around 1500 unmanned Rotor ships, or Flettner ships, that would spray mist created from seawater into the air. The vessels would spray sea water droplets at a rate of approximately 50 cubic meters per second over a large portion of Earth's ocean surface. The power for the rotors and the ship could be generated from underwater turbines. Salter and colleagues proposed using active hydrofoils with controlled pitch for power.[1] Subsequent researchers determined that transport efficiency was only relevant for use at scale, and that for research requirements, standard ships could be used for transport. (Some researchers considered aircraft as an option, but concluded that it would be too costly.) Droplet generation and delivery technology is critical to progress, and technology research has been focused on solving this challenging problem. Other methods were proposed and discounted, including: Lofting small droplets of seawater into the air through ocean foams; when bubbles in the foams burst, they loft small droplets of seawater. Using a piezoelectric transducer to create Faraday waves at a free surface; if the waves are steep enough, droplets of sea water will be thrown from the crests and the resulting salt particles can enter the clouds, but a significant amount of energy is required. Electrostatic atomization of seawater drops; this technique would utilize mobile spray platforms that move to adjust to changing weather conditions, and these too could be on unmanned ships. Using engine or smoke emissions as a source for CCN. Paraffin oil particles have also been proposed, though their viability has been discounted. Costs The costs of marine cloud brightening remain largely unknown. One academic paper implied annual costs of approximately 50 to 100 million UK pounds (roughly 75 to 150 million US dollars). A report of the US National Academies suggested roughly five billion US dollars annually for a large deployment program (reducing radiative forcing by 5 W/m2). Governance Marine cloud brightening would be governed primarily by international law because it would likely take place outside of countries' territorial waters, and because it would affect the environment of other countries and of the oceans. For the most part, the international law governing solar radiation management in general would apply. 
For example, according to customary international law, if a country were to conduct or approve a marine cloud brightening activity that would pose significant risk of harm to the environments of other countries or of the oceans, then that country would be obligated to minimize this risk pursuant to a due diligence standard. In this, the country would need to require authorization for the activity (if it were to be conducted by a private actor), perform a prior environmental impact assessment, notify and cooperate with potentially affected countries, inform the public, and develop plans for a possible emergency. Marine cloud brightening activities would be further governed by the international law of the sea, and particularly by the United Nations Convention on the Law of the Sea (UNCLOS). Parties to the UNCLOS are obligated to "protect and preserve the marine environment," including by preventing, reducing, and controlling pollution of the marine environment from any source. The "marine environment" is not defined but is widely interpreted as including the ocean's water, lifeforms, and the air above. "Pollution of the marine environment" is defined in a way that includes global warming and greenhouse gases. The UNCLOS could thus be interpreted as obligating the involved Parties to use methods such as marine cloud brightening if these were found to be effective and environmentally benign. Whether marine cloud brightening itself could be such pollution of the marine environment is unclear. At the same time, in combating pollution, Parties are "not to transfer, directly or indirectly, damage or hazards from one area to another or transform one type of pollution into another." If marine cloud brightening were found to cause damage or hazards, the UNCLOS could prohibit it. If marine cloud brightening activities were to be "marine scientific research" (also an undefined term), then UNCLOS Parties have a right to conduct the research, subject to some qualifications. Like all other ships, those that would conduct marine cloud brightening must bear the flag of the country that has given them permission to do so and to which the ship has a genuine link, even if the ship is unmanned or automated. The flag state must exercise its jurisdiction over those ships. The legal implications would depend on, among other things, whether the activity were to occur in territorial waters, an exclusive economic zone (EEZ), or the high seas, and whether the activity was scientific research or not. Coastal states would need to approve any marine cloud brightening activities in their territorial waters. In the EEZ, the ship must comply with the coastal state's laws and regulations. It appears that a state conducting marine cloud brightening activities in another state's EEZ would not need the latter's permission, unless the activity were marine scientific research. In that case, the coastal state should grant permission in normal circumstances. States would be generally free to conduct marine cloud brightening activities on the high seas, provided that this is done with "due regard" for other states' interests. There is some legal uncertainty regarding unmanned or automated ships. Advantages and disadvantages Marine cloud brightening appears to have most of the advantages and disadvantages of solar radiation management in general. 
For example, it presently appears to be inexpensive relative to the costs of suffering climate change damages or of greenhouse gas emissions abatement, fast acting, and reversible in its direct climatic effects. Some advantages and disadvantages are specific to it, relative to other proposed solar radiation management techniques. Compared with other proposed solar radiation management methods, such as stratospheric aerosol injection, marine cloud brightening may be able to be partially localized in its effects. This could, for example, be used to stabilize the West Antarctic Ice Sheet. Furthermore, marine cloud brightening, as it is currently envisioned, would use only natural substances (sea water and wind), instead of introducing human-made substances into the environment. See also Climate engineering Solar radiation management Stratospheric sulfate aerosols (geoengineering) Cirrus cloud thinning == References ==
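To put the magnitudes discussed in this article in context, the sketch below combines two standard textbook approximations: the simplified CO2 forcing expression ΔF ≈ 5.35 ln(C/C0) W/m2, which yields roughly 3.7 W/m2 for a doubling of CO2, and a logarithmic estimate of how cloud albedo responds to an increase in droplet number concentration (one common form of the Twomey effect). Neither relation is taken from the specific studies cited above, and the cloud albedo and droplet concentrations used are illustrative assumptions.

```python
import math

def co2_forcing_wm2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified CO2 radiative forcing: delta_F ~ 5.35 * ln(C / C0) W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def twomey_albedo_change(cloud_albedo: float, n_old: float, n_new: float) -> float:
    """Approximate cloud albedo change for a droplet-number increase (Twomey effect):
    d(albedo)/d(ln N) ~ albedo * (1 - albedo) / 3."""
    return cloud_albedo * (1.0 - cloud_albedo) / 3.0 * math.log(n_new / n_old)

# Forcing to be offset: a doubling of preindustrial CO2 (280 -> 560 ppm).
print(f"Forcing from doubled CO2: {co2_forcing_wm2(560):.2f} W/m^2")   # ~3.7 W/m^2

# Illustrative brightening: a stratocumulus deck of albedo 0.5 whose droplet
# number concentration is doubled (assumed 100 -> 200 per cm^3).
print(f"Cloud albedo increase:   {twomey_albedo_change(0.5, 100, 200):.3f}")
```

Whether such a local albedo increase translates into the globally averaged negative forcing quoted earlier depends on how much of the ocean is covered by susceptible cloud, which is exactly what the modeling studies discussed in this article try to estimate.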
turpan water system
The Turpan water system or Turfan karez system (Uyghur: كارىز, romanized: kariz) in Turpan, located in the Turpan Depression, Xinjiang, China, is a vertical tunnel system adapted by the Uyghur people. The word karez means "well" in the local Uyghur language. Turpan has the Turpan Karez Paradise museum (a Protected Area of the People's Republic of China) dedicated to demonstrating its karez water system, as well as exhibiting other historical artifacts. Turpan's karez well system was crucial in Turpan's development as an important oasis stopover on the ancient Silk Road skirting the barren and hostile Taklamakan Desert. Turpan owes its prosperity to the water provided by its karez well system. Description Turpan's karez water system is made up of a horizontal series of vertically dug wells that are then linked by underground water canals to collect water from the watershed surface runoff from the base of the Tian Shan Mountains and the nearby Flaming Mountains. The canals channel the water to the surface, taking advantage of the current provided by the gravity of the downward slope of the Turpan Depression. The canals are mostly underground, both to reduce water evaporation and to allow the gently sloping channels to carry water over long distances by gravity alone. The system has wells, dams and underground canals built to store the water and control the amount of water flow. Vertical wells are dug at various points to tap into the groundwater flowing down sloping land from the source, the mountain runoff. The water is then channeled through underground canals dug from the bottom of one well to the next well and then to the desired destination. Turpan's karez irrigation system of special connected wells is believed to be of indigenous origin in China, perhaps combined with technology arriving from more western regions. In Xinjiang, the greatest number of karez wells are in the Turpan Depression, where today there remain over 1100 karez wells and channels having a total length of over 5,000 kilometres (3,100 mi). The local geography makes karez wells practical for agricultural irrigation and other uses. Turpan is located in the second deepest geographical depression in the world, with over 4,000 km2 (1,500 sq mi) of land below sea level and with soil that forms a sturdy basin. Water naturally flows down from the nearby mountains during the rainy season in an underground current to the low depression basin under the desert. The Turpan summer is very hot and dry with periods of wind and blowing sand. Importance Ample water was crucial to Turpan, so that the oasis city could service the many caravans on the Silk Route resting there near a route skirting the Taklamakan Desert. The caravans included merchant traders and missionaries with their armed escorts, animals including camels, sometimes numbering into the thousands, along with camel drivers, agents and other personnel, all of whom might stay for a week or more. The caravans needed pastures for their animals, resting facilities, trading bazaars for conducting business and replenishment of food and water. Potential UNESCO World Heritage Site Karez wells in the Turfan area are on the UNESCO World Heritage Sites Tentative List for China. Threatened by global warming There are 20,000 glaciers in Xinjiang – nearly half of all the glaciers in China. The water from the glaciers via the underground channels has provided a stable water source year round, independent of season, for thousands of years. 
But since the 1950s, Xinjiang's glaciers have retreated by between 21 and 27 percent due to global warming, threatening the agricultural productivity of the region. See also Qanat – Water management system using underground channels Taklamakan Desert – Desert in Xinjiang, China Tarim Basin – Endorheic basin in Xinjiang, China Cities along the Silk Road References External links Satellite map showing deep basin from Google Link to Silk Road map Turpan – Ancient Stop on the Silk Road Karez close to Turfan
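A rough way to see why a karez channel eventually emerges at the surface far from its mother well is to compare the slope of the ground with the much gentler gradient of the tunnel. The sketch below is a purely illustrative geometry calculation; the well depth, ground slope, and channel gradient are assumed values for demonstration, not measurements of the Turpan system.

```python
# Illustrative karez geometry: the tunnel starts at the bottom of the deep
# "mother well" near the mountains and drops more gently than the ground
# surface, so it eventually meets the surface downslope at the outlet.
# All numbers are assumptions chosen only to illustrate the idea.

mother_well_depth_m = 30.0    # depth from the surface to the water table at the first well
ground_slope = 1 / 100        # ground falls 1 m for every 100 m toward the depression
channel_gradient = 1 / 1000   # tunnel falls 1 m for every 1000 m (gentle, gravity-fed flow)

# The tunnel closes the vertical gap at a rate equal to the difference in slopes.
distance_to_outlet_m = mother_well_depth_m / (ground_slope - channel_gradient)
print(f"Tunnel reaches the surface roughly {distance_to_outlet_m / 1000:.1f} km downslope")
# With these assumed values: 30 / (0.01 - 0.001) = about 3.3 km
```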
asian brown cloud
The Indian Ocean brown cloud or Asian brown cloud is a layer of air pollution that recurrently covers parts of South Asia, namely the northern Indian Ocean, India, and Pakistan. Viewed from satellite photos, the cloud appears as a giant brown stain hanging in the air over much of the Indian subcontinent and the Indian Ocean every year between October and February, possibly also during earlier and later months. The term was coined in reports from the UNEP Indian Ocean Experiment (INDOEX). It was found to originate mostly due to farmers burning stubble in Punjab and to lesser extent Haryana and Uttar Pradesh. The debilitating air quality in Delhi is also due to the stubble burning in Punjab.The term atmospheric brown cloud is used for a more generic context not specific to the Asian region. Causes The Asian brown cloud is created by a range of airborne particles and pollutants from combustion (e.g., woodfires, cars, and factories), biomass burning and industrial processes with incomplete burning. The cloud is associated with the winter monsoon (October/November to February/March) during which there is no rain to wash pollutants from the air. Observations This pollution layer was observed during the Indian Ocean Experiment (INDOEX) intensive field observation in 1999 and described in the UNEP impact assessment study published 2002. Scientists in India claimed that the Asian Brown cloud is not something specific to Asia. Subsequently, when the United Nations Environment Programme (UNEP) organized a follow-up international project, the subject of study was renamed the Atmospheric Brown Cloud with focus on Asia. The cloud was also reported by NASA in 2004 and 2007.Although aerosol particles are generally associated with a global cooling effect, recent studies have shown that they can actually have a warming effect in certain regions such as the Himalayas. Impacts Health problems One major impact is on health. A 2002 study indicated nearly two million people die each year, in Asia alone, from conditions related to the brown cloud. Regional weather A second assessment study was published in 2008. It highlighted regional concerns regarding: Changes of rainfall patterns with the Asian monsoon, as well as a delaying of the start of the Asian monsoon, by several weeks. The observed weakening Indian monsoon and in China northern drought and southern flooding is influenced by the clouds. Increase in rainfall over the Australian Top End and Kimberley regions. A CSIRO study has found that by displacing the thermal equator southwards via cooling of the air over East Asia, the monsoon which brings most of the rain to these regions has been intensified and displaced southward. Retreat of the Hindu Kush-Himalayan glaciers and snow packs. The cause is attributed to rising air temperatures that are more pronounced in elevated regions, a combined warming effect of greenhouse gases and the Asian Brown Cloud. Also deposition of black carbon decreases the reflection and exacerbates the retreat. Asian glacial melting could lead to water shortages and floods for the hundreds of millions of people who live downstream. Decrease of crop harvests. Elevated concentrations of surface ozone are likely to affect crop yields negatively. The impact is crop specific. 
Cyclone intensity in Arabian Sea A 2011 study found that pollution is making Arabian Sea cyclones more intense, as the atmospheric brown clouds have been producing weaker wind patterns, reducing the wind shear that historically prevented cyclones in the Arabian Sea from becoming major storms. This phenomenon was found responsible for the formation of stronger storms in 2007 and 2010 that were the first recorded storms to enter the Gulf of Oman. Global warming and dimming The 2008 report also addressed the global concern of warming and concluded that the brown clouds have masked 20 to 80 percent of greenhouse gas forcing in the past century. The report suggested that air pollution regulations can have large amplifying effects on global warming. Another major impact is on the polar ice caps. Black carbon (soot) in the Asian Brown Cloud may be reflecting sunlight and dimming Earth below, but it is warming other places by absorbing incoming radiation and warming the atmosphere and whatever it touches. Black carbon is three times more effective than carbon dioxide (the most common greenhouse gas) at melting polar ice and snow. Black carbon in snow causes about three times the temperature change caused by carbon dioxide in the atmosphere. On snow, even at concentrations below five parts per billion, dark carbon triggers melting, and may be responsible for as much as 94 percent of Arctic warming. See also Asian Dust Arctic haze Air pollution in India Chemical Equator 1997, 2006, 2009, 2013 Southeast Asian haze References Further reading Ramanathan, V.; Crutzen, P. J. (2003). "New Directions: Atmospheric Brown "Clouds"". Atmospheric Environment. 37 (28): 4033–4035. Bibcode:2003AtmEn..37.4033R. doi:10.1016/S1352-2310(03)00536-3. Silva-Send, Nilmini (2007). Preventing regional air pollution in Asia: the potential role of the European Convention on Long Range Transboundary Air Pollution in Asian regions. University of Kiel, Kiel, Germany. OCLC 262737812. External links Bray, Marianne (2002) "'Asian Brown Cloud' poses global threat" CNN, from WebArchive
meltwater
Meltwater (or melt water) is water released by the melting of snow or ice, including glacial ice, tabular icebergs and ice shelves over oceans. Meltwater is often found during early spring when snow packs and frozen rivers melt with rising temperatures, and in the ablation zone of glaciers where snow cover is diminishing. Meltwater can be produced during volcanic eruptions, in a similar way to that in which the more dangerous lahars form. It can also be produced by the heat generated by the flow itself. When meltwater pools on the surface rather than flowing, it forms melt ponds. As the weather gets colder, meltwater will often re-freeze. Meltwater can also collect or melt under the ice's surface. These pools of water, known as subglacial lakes, can form due to geothermal heat and friction. Melt ponds may also form above and below Arctic sea ice, decreasing its albedo and causing the formation of thin underwater ice layers or false bottoms. Water source Meltwater is water that melts off of glaciers or snow. It then flows into a river or congregates on the surface forming a melt pond, which may re-freeze. It may also collect under ice or frozen ground. Meltwater provides drinking water for a large proportion of the world's population, as well as providing water for irrigation and hydroelectric plants. This meltwater can originate from seasonal snowfall, or from the melting of more permanent glaciers. Climate change threatens snowfall and is shrinking the volume of glaciers. Some cities around the world have large lakes that collect snow melt to supplement water supply. Others have artificial reservoirs that collect water from rivers, which receive large influxes of meltwater from their higher elevation tributaries. Leftover water eventually flows into the oceans, causing sea levels to rise. Snow melt hundreds of miles away can contribute to river replenishment. Snowfall can also replenish groundwater in a highly variable process. Cities that indirectly source water from meltwater include Melbourne, Canberra, Los Angeles, and Las Vegas, among others. In North America, 78% of meltwater flows west of the Continental Divide, and 22% flows east of the Continental Divide. Agriculture in Wyoming and Alberta relies on water sources made more stable during the growing season by glacial meltwater. The Tian Shan region in China once had such significant glacial runoff that it was known as the "Green Labyrinth", but it has faced significant reduction in glacier volume from 1964 to 2004 and become more arid, already impacting the sustainability of water sources. In tropical regions, there is much seasonal variability in the flow of mountainous rivers, and glacial meltwater provides a buffer for this variability, providing more water security year-round, but this is threatened by climate change and aridification. Cities that rely heavily on glacial meltwater include La Paz and El Alto in Bolivia, which draw about 30% of their water from it. Changes in glacial meltwater are a concern in more remote highland regions of the Andes, where the proportion of water from glacial melt is much greater than at lower elevations. In parts of the Bolivian Andes, surface water contributions from glaciers are as high as 31-65% in the wet season and 39-71% in the dry season. Glacial meltwater Glacial meltwater comes from glacial melt due to external forces or by pressure and geothermal heat. Often, there will be rivers flowing through glaciers into lakes. 
These brilliantly blue lakes get their color from "rock flour", sediment that has been transported through the rivers to the lakes. This sediment comes from rocks grinding together underneath the glacier. The fine powder is then suspended in the water and absorbs and scatters varying colors of sunlight, giving a milky turquoise appearance. Meltwater also acts as a lubricant in the basal sliding of glaciers. GPS measurements of ice flow have revealed that glacial movement is greatest in summer when the meltwater levels are highest.Glacial meltwater can also affect important fisheries, such as in Kenai River, Alaska. Rapid changes Meltwater can be an indication of abrupt climate change. An instance of a large meltwater body is the case of the region of a tributary of Bindschadler Ice Stream, West Antarctica where rapid vertical motion of the ice sheet surface has suggested shifting of a subglacial water body.It can also destabilize glacial lakes leading to sudden floods, and destabilize snowpack causing avalanches. Dammed glacial meltwater from a moraine-dammed lake that is released suddenly can result in the floods, such as those that created the granite chasms in Purgatory Chasm State Reservation. Global warming In a report published in June 2007, the United Nations Environment Programme estimated that global warming could lead to 40% of the world population being affected by the loss of glaciers, snow and the associated meltwater in Asia. The predicted trend of glacial melt signifies seasonal climate extremes in these regions of Asia. Historically Meltwater pulse 1A was a prominent feature of the last deglaciation and took place 14.7-14.2 thousand years ago.The snow of glaciers in the central Andes melted rapidly due to a heatwave, increasing the proportion of darker-coloured mountains. With alpine glacier volume in decline, much of the environment is affected. These black particles are recognized for their propensity to change the albedo – or reflectance – of a glacier. Pollution particles affect albedo by preventing sun energy from bouncing off a glacier's white, gleaming surface and instead absorbing the heat, causing the glacier to melt. See also Extreme Ice Survey Groundwater Kryal Moulin (geology) Snowmelt Surface water False bottom (sea ice) In the media June 4, 2007, BBC: UN warning over global ice loss References External links United Nations Environment Program: Global Outlook for Ice and Snow Archived 2007-06-08 at the Wayback Machine
the coming global superstorm
The Coming Global Superstorm (ISBN 0-671-04190-8) is a 1999 book by Art Bell and Whitley Strieber, which warns that global warming might produce sudden and catastrophic climate change. Thesis First, the Gulf Stream and North Atlantic drift would generate a cordon of warm water around the North Pole, which in turn holds in a frozen mass of Arctic air. Second, if the North Atlantic drift were to shut down, that barrier would fail, releasing a flood of frozen air into the Northern Hemisphere, causing a sudden and drastic temperature shift. The book discusses a possible cause of the failure of the Gulf Stream: the melting of the polar ice caps could drastically affect the ocean salinity of the North Atlantic drift by dumping a large quantity of freshwater into the world's oceans. Bell and Strieber contend that such destabilizations have occurred before, and cite seemingly impossible engineering feats by ancient civilizations which must have been catastrophically destroyed since they do not appear in the historical record. Among their examples are the archaeological ruins of Nan Madol, which the book claims were built with exacting tolerances and extremely heavy basalt materials, necessitating a high degree of technical competency. Since no such society exists in the modern record, or even in legend, the society must have been destroyed by dramatic means. While other explanations besides a global meteorological event are possible, a correlating evidence set is presented in the woolly mammoth. Strieber and Bell assert that since mammoths have been found preserved with food still in their mouths and undigested in their stomachs, these animals must have been killed quickly, in otherwise normal conditions. They were preserved so well by quick freezing, which is taken as evidence of a rapid onset of a global blizzard or similar event. In popular culture Interspersed with the analytical parts of the book are a series of interlinked short fictional scenarios, written in italics, describing what might transpire today if a destabilization of the North Atlantic Current were to occur. The fictional accounts of "current events" as the meteorological situation deteriorates provided background for, and is the source material of, the 2004 science fiction film The Day After Tomorrow. Indeed, some events from the book are portrayed in the film with little modification, such as the failure of the Gulf Stream which freezes over large portions of the northern hemisphere including New York City. See also Arnold Federbush, author of Ice!, a 1978 novel with similar themes References External links What Is A Global Superstorm Archived website
global warming pollution reduction act of 2007
The Global Warming Pollution Reduction Act of 2007 (S. 309) was a bill proposed to amend the 1963 Clean Air Act with the aim of reducing emissions of carbon dioxide (CO2). A U.S. Senator, Bernie Sanders (I-VT), introduced the resolution in the 110th United States Congress on January 16, 2007. The bill was referred to the Senate Committee on Environment and Public Works but was not enacted into law. The act was intended to increase performance standards for electricity generation and motor vehicles, and included provisions for an optional emissions cap and trade system. This system would have begun in 2010 with the goal of reducing greenhouse gas emissions by 15 percent before 2020 and 83 percent before 2050. This bill would have provided funding for research and development of carbon sequestration initiatives as well as other projects. It would have also set emissions standards for new vehicles and a renewable fuels requirement for gasoline beginning in 2016, established energy efficiency and renewable portfolio standards beginning in 2008 and low-carbon electricity generation standards beginning in 2016 for electric utilities, and would have required periodic evaluations by the National Academy of Sciences to determine whether emissions targets were adequate. Background Senate Bill 309 (S. 309) proposed to amend the Clean Air Act of 1970 to include carbon dioxide emissions as a regulated pollutant in the U.S. It established a regulatory framework to nationally regulate carbon dioxide emissions through a set of programs, regulations, and market-based incentives. The Environmental Protection Agency (EPA) would have been directed to implement and enforce the provisions of the bill. Bernie Sanders (I-VT) and Barbara Boxer (D-CA) proposed S. 309 in January 2007, which aimed to incrementally reduce U.S. carbon dioxide emissions from the highest polluting sectors: transportation and electric generation. The bill's goal was to reduce emissions by 80 percent below 1990 levels by 2050. Act Overview Proposed solution of S. 309 The bill proposed a list of requirements to reduce CO2 emissions through the following programs: Vehicle emissions standards: The bill set emission standards so that vehicles could not emit more carbon dioxide than the specified limits (205 g/mile and 332 g/mile for automobiles, and 405 g/mile for vehicles over 8,500 pounds). This approach was considered effective at reducing carbon dioxide emissions, even though the cost to the automobile industry could be high. Emissions standards for electric generation units: Approximately 52 percent of energy in the United States comes from coal, which is identified as the main source of carbon dioxide emissions. The bill may have also required the development of new technology. Low carbon generation requirement: Electricity generators would have been required to produce a specific amount of low carbon energy. These requirements could have been achieved by several techniques: by generating or purchasing low-carbon electric energy, or purchasing credits pursuant to the Low-Carbon Generation Credit Trading Program. Renewable portfolio standard: A key benefit of this approach would have been the development of renewable energy technology and a market force for clean energy generation. However, monitoring the standard may have proven difficult, and states that rely more heavily on traditional fossil fuel production could have distorted the market. 
Research and development: The program had three goals: develop an advanced system to monitor global warming pollution, create a baseline reference for future global warming pollution, and start an international exchange of information to expand measurements. Research is an important component of this bill, which would have advanced the state of knowledge and technology of clean energy production. Global warming and Defenders of Wildlife The main concern of scientists and Defenders of Wildlife was global warming issues. The world is threatened by rising sea levels, melting ice, droughts, and the loss of species. As a consequence, the reduction of global warming pollutants was proposed as a solution to the problem, and Senators Bernie Sanders and Barbara Boxer introduced the Global Warming Pollution Reduction Act of 2007. It was designed to implement an emissions reduction strategy that would avoid the most catastrophic consequences of global warming. In particular, the legislation would have mandated an increase in energy efficiency, which would be expected to reduce air pollution and oil consumption while increasing employment in the renewable energy sector. The bill also identified targets for pollution reduction. A focus on those targets was expected to be helpful in maintaining the worldwide average atmospheric temperature below a dangerous "tipping point", beyond which climate change would be unavoidable. Current emission standards Emission standards set specific limits on the amount of pollutants that can be released into the environment. In the U.S., emission standards are managed by the EPA. Under SEC. 707 of S. 309, the average global warming pollution emissions of a vehicle: cannot exceed 205 g CO2/mile for vehicles that have a gross weight of at most 8,500 pounds and a loaded weight of at most 3,750 pounds; cannot exceed 332 g CO2/mile for vehicles that have a gross weight of at most 8,500 pounds and a loaded weight of more than 3,750 pounds, and for medium-duty passenger vehicles; cannot exceed 405 g CO2/mile for vehicles that have a gross weight between 8,501 pounds and 10,000 pounds and are not medium-duty passenger vehicles. == References ==
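The weight-class limits listed above lend themselves to a small classification routine. The sketch below is a hypothetical illustration of how the SEC. 707 per-vehicle limits could be checked in code; the function names and the simplified handling of medium-duty passenger vehicles and loaded weights are assumptions for demonstration, not language from the bill.

```python
# Hypothetical check of the SEC. 707 per-vehicle CO2 limits (g CO2/mile), simplified.

def co2_limit_g_per_mile(gross_lb: float, loaded_lb: float,
                         medium_duty_passenger: bool) -> float | None:
    """Return the applicable CO2 limit in g/mile, or None if no listed class applies."""
    if medium_duty_passenger:
        return 332.0
    if gross_lb <= 8500 and loaded_lb <= 3750:
        return 205.0
    if gross_lb <= 8500:                 # loaded weight above 3,750 lb (assumed reading)
        return 332.0
    if 8501 <= gross_lb <= 10000:
        return 405.0
    return None                          # heavier vehicles fall outside these classes

def is_compliant(emissions_g_per_mile: float, gross_lb: float, loaded_lb: float,
                 medium_duty_passenger: bool = False) -> bool:
    limit = co2_limit_g_per_mile(gross_lb, loaded_lb, medium_duty_passenger)
    return limit is not None and emissions_g_per_mile <= limit

# Example: a light car emitting 250 g CO2/mile exceeds the 205 g/mile limit.
print(is_compliant(250, gross_lb=4500, loaded_lb=3000))  # False
```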
global warming tour
The Global Warming Tour, by American hard rock band Aerosmith, included 82 concert performances across North America, Oceania, Asia, Latin America, and Europe. "It's something so magical," remarked Steven Tyler. "Other people see it. We don't, because we're in the middle of it. This is Aerosmith, man. The second we get up there on stage, it's insane."Prior to the first leg, the band played a private event for Walmart shareholders. The first leg included 23 performances and lasted from late May through early August 2012. The second leg included 14 performances in November and December 2012. Before the second leg, the band performed a brief set at the iHeartRadio Music Festival in mid September. Also prior to the second leg, to promote their new album in early November, the band played three nationally televised performances in New York City and did a special performance in front of their old Boston apartment. The first two legs were held primarily in indoor arenas, with a couple outdoor shows and a few festivals on the first leg, including three in eastern Canada and Milwaukee's Summerfest. The third leg of the tour ran from late April to mid May 2013 and saw Aerosmith playing their first shows in Australia since 1990, as well as their first shows in New Zealand and the Philippines. On May 30, the band played at the "Boston Strong" charity concert for victims of the Boston Marathon bombing. In July 2013, the band played at the Greenbrier Classic in West Virginia and at Foxwoods Resort Casino in Connecticut. In August 2013, the band performed four concerts in Japan, but their first shows in China and Taiwan were cancelled due to poor ticket sales. The band performed in August at the Harley-Davidson 110th anniversary concert series in Milwaukee. Concerts were planned for Latin America in September and October, including their first shows in Uruguay, Guatemala and El Salvador. Cheap Trick opened all dates on the first two legs bar a few festivals. The Dead Dasies, featuring Jon Stevens and Richard Fortus, opened the Australia/New Zealand leg. In Argentina and Brazil, Aerosmith toured with Whitesnake, including the Personal Fest in Buenos Aires and at the Monsters of Rock in São Paulo. The tour promoted Music from Another Dimension!, released on November 6, 2012. In addition to hits and choice album cuts, the band performed four new songs from the album, three of them regularly ("Oh Yeah", "Legendary Child", and "Lover Alot"). On May 5, 2013, the band announced they had cancelled their first show in Jakarta due to safety concerns.In 2014, Aerosmith played 17 concerts across Europe from May 14 to July 2. The Let Rock Rule Tour was scheduled to follow in July, August, and September 2014 and see Aerosmith play several dates in North America. This tour featured Slash (with Myles Kennedy and the Conspirators) as the opening act. Full details of the tour were announced on April 8, 2014. On May 14, 2014, the band announced that they had cancelled their concert in Istanbul after Turkey declared a three-day mourning for the victims of Soma mine disaster. The July 2, 2014 concert in Kyiv was cancelled due to civil unrest in Ukraine. Stage setup The stage was very close to the design of past tours. The main stage, occupying one end of the venue had the classic Aerosmith logo painted on top and two small platforms off to each side. Kramer's drums were at the back, Hamilton and Whitford were on the left side, and Perry was on the right side. 
The back-up musicians were at the back-left of the stage behind a stack of amps, the order usually went Melanie Taylor (backing vocals), Mindi Abair (saxophone and backing vocals) and Russ Irwin (keyboards and backing vocals). In the middle of the main stage was the catwalk, which ran through nine rows at each venue. At the end of the catwalk was a B-stage, which ran through the tenth row to the sixteenth row. Around the entire stage was a half-meter wide barricade that contained security and a few select fans. Performance The show would start with a video playing on the main video screen that was reminiscent of the original opening from The Outer Limits. Near the end of the video, smoke would arise from the end of the B-stage and from the main stage. When the video finished, Kramer, Whitford and Hamilton would kick into the opening song while Tyler and Perry would rise from a trapdoor at the end of the B-stage. For the first song, Tyler and Perry would stay on the B-stage and the rest of the band would stay on the main stage. After the first song finished, Hamilton, Whitford, Tyler and Perry would all go where they pleased. At "What It Takes", Tyler would look for an attractive girl to sing the opening lines, like in most tours. At the encore which was "Dream On" at every date, smoke would again appear from the end of the B-stage and Tyler on a white piano would appear from the same trapdoor as before. The piano had a few blocks beside it used as stairs by Perry and Tyler, as they would walk on top of the piano. They followed with a second encore – "Train Kept A-Rollin'" – and at a few venues, a third encore was even played, either "Mama Kin" or "Chip Away the Stone". When the encores wrapped up, a few cannons fired silver confetti into the audience. After the confetti storm, Tyler would introduce the back-up musicians (Taylor, Abair and Irwin) and the members of Aerosmith. Finally, he would hand the microphone off to Perry, who would introduce Tyler. After the introductions, the band would walk out with "Mannish Boy" by Muddy Waters playing over the speakers. Top 200 North American Tours 2012: #23 Total Gross: US $31 million Total Attendance: 306,475 (33 concerts) Tour dates Notes Trivia Aerosmith premiered six new songs on this tour: "Oh Yeah", "Legendary Child", "What Could Have Been Love", "Lover Alot", "Freedom Fighter" and, in excerpted form only, "Street Jesus". Whitford's sons Harrison and Graham guested on "Last Child" at, respectively, the American Airlines Center in Dallas and Madison Square Garden in New York City. At the Hollywood Bowl, instead of Tyler and Perry appearing from a trapdoor at the end of the catwalk, Stan Lee introduced the band and Aerosmith simply walked onstage. Actor Johnny Depp played rhythm guitar and one solo on "Train Kept A-Rollin'" in Los Angeles at the Hollywood Bowl, and guested on "Come Together" and "Stop Messin' Around" at the Staples Center. Sean Lennon, son of John, added vocals on "Come Together" at Madison Square Garden in New York City. "Lick and a Promise" got its first play in 24 years on United States soil, the last time being at the Pacific Coast Amphitheater in Costa Mesa on September 15, 1988, on the Permanent Vacation Tour. Aerosmith performed a clip of "Woman of the World" in Atlanta at the Phillips Center. The song hadn't been played anywhere since 1974. Aerosmith had to reschedule their show in Bristow, Virginia, because of reported voice problems by Tyler. Originally scheduled for July 3, it was moved to August 12. 
A difficult political situation in Ukraine meant the show in Kyiv was moved from May 21 to July 2, 2014. Jesse Sky Kramer joined his father on percussion for many songs at every show. This is the first tour where Aerosmith utilised back-up singers other than Russ Irwin. Tyler joined Kramer on drums during "Combination" at every date. At a few shows, saxophonist and back-up singer Mindi Abair jammed with Kramer during his drum solo. On some nights Whitford played guitar and provided backing vocals on "Ain't That a Shame" or "Surrender" with opening act Cheap Trick. On other nights, Tyler would run out during "I Want You to Want Me", play with Trick bassist Tom Petersson and guitarist Rick Nielsen, help sing the chorus, then quickly exit. Former Guns N' Roses guitarist Izzy Stradlin guested on "Mama Kin" at the Staples Center in Los Angeles. Russ Irwin, Aerosmith keyboardist, occasionally performed the "Abbey Road Medley" (originally by The Beatles) with Cheap Trick during their set. At the Sunrise, Florida, show on 12/09, longtime Aerosmith collaborator Richie Supa guested on "Chip Away the Stone". On the 11/27 Toronto show, Aerosmith performed "Red House" as an homage to Jimi Hendrix on what would have been his seventieth birthday. Before the Melbourne show, Hamilton suffered a chest infection and was flown home. David Hull from the Joe Perry Project was flown in from the United States to play in his place. Hamilton was set to return with the band after the Australian dates. "Throughout tonight's two-hour show," noted Classic Rock of June 27's Toronto fixture, "the singer heaps generous praise on his bandmates, and name-checks (rather than hip-checks) Perry so many times that it borders on idolatry… Intoxicating stuff." List of songs played Setlist Personnel AerosmithSteven Tyler – lead vocals, harmonica, percussion, piano Joe Perry – guitar, backing vocals, lead vocals, pedal steel guitar, talkbox Brad Whitford – guitar Tom Hamilton – bass Joey Kramer – drums, percussionAdditional musiciansRuss Irwin – keyboards, backing vocals, percussion, guitar (until 2014-04-08) Mindi Abair – saxophones (First leg) Melanie Taylor – backing vocals, percussion (First leg) Jesse Sky Kramer – percussion David Hull – bass (after Tom leaves due to sickness) Buck Johnson – keyboards, backing vocals (after Russ Irwin quits the band beginning on 2014-05-17) == References ==
the noun project
The Noun Project is a website that aggregates and catalogs symbols that are created and uploaded by graphic designers around the world. Based in Los Angeles, the project functions both as a resource for people in search of typographic symbols and a design history of the genre. History The Noun Project was co-founded by Sofya Polyakov, Edward Boatman, and Scott Thomas and is headed by Polyakov. Boatman recalled his frustration while working at an architectural firm at the lack of a central repository for common icons, "things such as airplanes, bicycles and people." That idea morphed into a broader platform for visual communication. The site was launched on Kickstarter in December 2010, which raised more than $14,000 in donations, with symbols from the National Park Service and other sources whose content was in the public domain. Site design was by the firm Simple.Honest.Work, with mentoring from the Designer Fund.The Noun Project has generated interest and new symbols by hosting a series of "Iconathons", the first of which was held in the summer of 2011. The sessions typically run five hours and include graphic designers, content experts, and interested volunteers, all working in small groups that focus on a specific issue, such as democracy, transportation or nutrition. The idea for the event came from Chacha Sikes, who was at the time a fellow at Code for America. Operation Contributors come from around the world. A 2012 New York Times story profiled one of them: Luis Prado, a graphic designer at the Washington State Department of Natural Resources, who uploaded 83 icons he had created for his agency, including a pruning saw, a logging truck and a candidate symbol for global warming, which he created when he could not find one online.The site has four stylistic guidelines: include only the essential characteristics of the idea conveyed, maintain a consistent design style, favor an industrial look over a hand-drawn one, and avoid conveying personal opinions, feelings and beliefs. Contributors select a public domain mark or a Creative Commons attribution license, which enables others to use the symbol with attribution, free of charge. The attribution requirement can be waived upon payment of a nominal fee, which is split between the artist and The Noun Project. The founders envisioned the site as being primarily useful for designers and architects, but the range of users includes people with autism and amyotrophic lateral sclerosis, who sometimes favor a visual language, as well as business professionals incorporating the symbols into presentations. References External links Official site
albedo
Albedo (; from Latin albedo 'whiteness') is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation). Surface albedo is defined as the ratio of radiosity Je to the irradiance Ee (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun). While bi-hemispherical reflectance is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages. Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation). Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, therefore it reflects far more solar energy back to space than the other types of land area or open water. Ice–albedo feedback plays an important role in global climate change.Albedo is an important concept in climatology, astronomy, and environmental management. The average albedo of the Earth from the upper atmosphere, its planetary albedo, is 30–35% because of cloud cover, but widely varies locally across the surface because of different geological and environmental features. Terrestrial albedo Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds. Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance (e.g.,). These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. 
The BRDF can facilitate translations of observations of reflectance into albedo. Earth's average surface temperature due to its albedo and the greenhouse effect is currently about 15 °C (59 °F). If Earth were frozen entirely (and hence more reflective), the average temperature of the planet would drop below −40 °C (−40 °F). If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop to about 0 °C (32 °F). In contrast, if the entire Earth was covered by water – a so-called ocean planet – the average temperature on the planet would rise to almost 27 °C (81 °F). In 2021, scientists reported that Earth dimmed by ~0.5% over two decades (1998–2017) as measured by earthshine using modern photometric techniques. This dimming may have been co-caused by climate change, and because a less reflective Earth absorbs more solar energy, it may in turn add substantially to global warming. However, the link to climate change has not been explored to date and it is unclear whether or not this represents an ongoing trend. White-sky, black-sky, and blue-sky albedo For land surfaces, it has been shown that the albedo at a particular solar zenith angle $\theta_i$ can be approximated by the proportionate sum of two terms: the directional-hemispherical reflectance at that solar zenith angle, $\bar{\alpha}(\theta_i)$, sometimes referred to as black-sky albedo, and the bi-hemispherical reflectance, $\bar{\bar{\alpha}}$, sometimes referred to as white-sky albedo. With $1 - D$ being the proportion of direct radiation from a given solar angle, and $D$ being the proportion of diffuse illumination, the actual albedo $\alpha$ (also called blue-sky albedo) can then be given as: $\alpha = (1 - D)\,\bar{\alpha}(\theta_i) + D\,\bar{\bar{\alpha}}$. This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface. Human activities Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. As per Campra et al., human impacts on "the physical properties of the land surface can perturb the climate by altering the Earth's radiative energy balance" even on a small scale or when undetected by satellites. The tens of thousands of hectares of greenhouses in Almería, Spain, form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface area temperature of the high-albedo area, although changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change. It has been found that urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate the urban heat island effect. Ouyang et al. 
Ouyang et al. estimated that, on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions." Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy, known as passive daytime radiative cooling (PDRC), to mitigate energy crises and global warming. Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved. Examples of terrestrial albedo effects Illumination Albedo is not directly dependent on illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). That said, albedo and illumination both vary by latitude. Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics. Insolation effects The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes. Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern because Arctic ice and snow have been melting at higher rates due to higher temperatures, creating darker regions of open water or bare ground that reflect less heat back into space. This feedback loop results in a reduced albedo effect. Climate and weather Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather. The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds. Albedo–temperature feedback When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight, leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting.
When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting and thus reducing the albedo further, resulting in still more heating. Snow Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (the ice–albedo positive feedback). Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming.Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets.The dynamical nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. Because of this, in order to reduce the error of energy estimates, it is important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value for albedo over broad regions. Small-scale effects Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing. Solar photovoltaic effects Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the effects of a spectrally responsive albedo are illustrated by the differences between the spectrally weighted albedo of solar photovoltaic technology based on hydrogenated amorphous silicon (a-Si:H) and crystalline silicon (c-Si)-based compared to traditional spectral-integrated albedo predictions. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells where rear surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis on the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops and residential pitched-roof applications. Trees Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. 
For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: the climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo). In the case of evergreen forests with seasonal snow cover, albedo reduction may be great enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as a strong greenhouse gas, and can increase albedo when it condenses into clouds. Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate. Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit. In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than those of nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy. Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and the (cooling) effect of carbon sequestration from planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming. Water Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations. At the scale of the wavelength of light even wavy water is always smooth so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle. Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light.
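As a concrete illustration of the Fresnel behaviour described above, the sketch below evaluates the Fresnel equations for unpolarized light at a smooth air–water interface, taking the refractive index of water as about 1.33 (a standard assumption). It shows the very low reflectivity near normal incidence and the steep rise toward grazing incidence; waviness, foam, and absorption are ignored.

```python
import numpy as np

def fresnel_unpolarized(theta_deg, n1=1.0, n2=1.33):
    """Unpolarized Fresnel reflectance at a smooth interface (air -> water by default).
    theta_deg is the angle of incidence measured from the surface normal."""
    ti = np.radians(theta_deg)
    tt = np.arcsin(np.clip(n1 * np.sin(ti) / n2, -1.0, 1.0))        # Snell's law
    rs = ((n1 * np.cos(ti) - n2 * np.cos(tt)) / (n1 * np.cos(ti) + n2 * np.cos(tt))) ** 2
    rp = ((n1 * np.cos(tt) - n2 * np.cos(ti)) / (n1 * np.cos(tt) + n2 * np.cos(ti))) ** 2
    return 0.5 * (rs + rp)                                           # average of s and p polarizations

for angle in (0, 30, 60, 80, 89):
    print(f"incidence {angle:2d} deg from normal: R = {fresnel_unpolarized(angle):.3f}")
```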
Note that white caps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' ice exhibits Fresnel reflection. Snow on top of this sea ice increases the albedo to 0.9. Clouds Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. "On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth."Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during Iraqi occupation showed that temperatures under the burning oil fires were as much as 10 °C (18 °F) colder than temperatures several miles away under clear skies. Aerosol effects Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain. As per Spracklen et al. the effects are: Aerosol direct effect. Aerosols directly scatter and absorb radiation. The scattering of radiation causes atmospheric cooling, whereas absorption can cause atmospheric warming. Aerosol indirect effect. Aerosols modify the properties of clouds through a subset of the aerosol population called cloud condensation nuclei. Increased nuclei concentrations lead to increased cloud droplet number concentrations, which in turn leads to increased cloud albedo, increased light scattering and radiative cooling (first indirect effect), but also leads to reduced precipitation efficiency and increased lifetime of the cloud (second indirect effect). In extremely polluted cities like Delhi, aerosol pollutants influence local weather and induce an urban cool island effect during the day. Black carbon Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m−2, with a range +0.1 to +0.4 W m−2. Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo. Astronomical albedo In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved. Optical or visual albedo The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. 
For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids. Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds. The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies. Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion. In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation. An important relationship between an object's astronomical (geometric) albedo, absolute magnitude and diameter is given by: A = \left(\frac{1329 \times 10^{-H/5}}{D}\right)^{2}, where A is the astronomical albedo, D is the diameter in kilometers, and H is the absolute magnitude.
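In practice this relationship is often inverted to estimate a body's diameter when its absolute magnitude is known and an albedo can be assumed. A minimal sketch follows; the H value and the two albedos are assumptions chosen only for illustration.

```python
import math

def diameter_km(geometric_albedo: float, absolute_magnitude: float) -> float:
    """D = 1329 km * 10^(-H/5) / sqrt(A), the inverse of A = (1329 * 10^(-H/5) / D)^2."""
    return 1329.0 * 10.0 ** (-absolute_magnitude / 5.0) / math.sqrt(geometric_albedo)

# Illustrative assumption: an object with absolute magnitude H = 15, compared for a dark,
# comet-like albedo of 0.04 and a brighter, stony albedo of 0.25.
for albedo in (0.04, 0.25):
    print(f"A = {albedo:.2f} -> D ~ {diameter_km(albedo, 15.0):.1f} km")
```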
Radar albedo In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, \sigma_{OC}, \sigma_{SC}, or \sigma_{T} (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (perfect reflector) at the same distance as the target that would return the same echo power. Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering. For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo): \hat{\sigma}_{\text{OC}} = \frac{\sigma_{\text{OC}}}{\pi r^{2}}, where the denominator is the effective cross-sectional area of the target object with mean radius r. A smooth metallic sphere would have \hat{\sigma}_{\text{OC}} = 1. Radar albedos of Solar System objects The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references. Relationship to surface bulk density In the event that most of the echo is from first surface reflections (\hat{\sigma}_{\text{OC}} < 0.1 or so), the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (aka reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths, the radar wavelength typically being at the decimeter scale) using the following empirical relationships: \rho = \begin{cases} 3.20\ \text{g cm}^{-3}\,\ln\left(\dfrac{1+\sqrt{0.83\,\hat{\sigma}_{\text{OC}}}}{1-\sqrt{0.83\,\hat{\sigma}_{\text{OC}}}}\right) & \text{for } \hat{\sigma}_{\text{OC}} \leq 0.07 \\ (6.944\,\hat{\sigma}_{\text{OC}} + 1.083)\ \text{g cm}^{-3} & \text{for } \hat{\sigma}_{\text{OC}} > 0.07 \end{cases}. History The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria. See also References External links Albedo Project Albedo – Encyclopedia of Earth NASA MODIS BRDF/albedo product site Ocean surface albedo look-up-table Surface albedo derived from Meteosat observations A discussion of Lunar albedos Reflectivity of metals (chart)
american association of petroleum geologists
The American Association of Petroleum Geologists (AAPG) is one of the world's largest professional geological societies with more than 40,000 members across 129 countries as of 2021. The AAPG works to "advance the science of geology, especially as it relates to petroleum, natural gas, other subsurface fluids, and mineral resources; to promote the technology of exploring for, finding, and producing these materials in an economically and environmentally sound manner; and to advance the professional well-being of its members." The AAPG was founded in 1917 and is headquartered in Tulsa, Oklahoma; currently almost one-third of its members live outside the United States. Over the years, the activities of the AAPG have broadened so that they bring together not just geology but also geophysics, geochemistry, engineering, and innovative analytics to enable the more efficient and environmentally-friendly approaches to the development of all earth-based energy sources. New transformative technologies, such as the ability to better characterize reservoirs through imaging and the integration of multiple data sources, are coupled with concerns about the environment. Members and affiliated societies are very much involved in preserving the quality of groundwater, dealing responsibly with produced water, and understanding the mechanisms of induced seismicity. In addition to subsurface investigations, the society supports mapping of the surface and the use of new technologies (UAVs, drones, big data analytics), with the goals of advancing the science and understanding of geological processes. AAPG publishes the AAPG Explorer magazine and AAPG Bulletin scientific journal, and co-publishes a scientific journal with the Society of Exploration Geophysicists called Interpretation. The organization holds an annual meeting including a technical conference and exhibition, sponsors other conferences and continuing education for members around the world such as ongoing Geosciences Technology Workshops, and provides various other services to its members. The organization also includes divisions focused on particular aspects of the profession. These include the Division of Environmental Geosciences, Division of Professional Affairs, and the Energy and Minerals Division. The association membership has included Harrison "Jack" Schmitt, a U.S. astronaut who walked on the Moon. Awards At its annual conventions and international conferences AAPG recognizes the distinguished contributions in the field of petroleum geosciences with various awards, including the Sidney Powers Memorial Award, Michel T. Halbouty Outstanding Leadership Award, Grover E. Murray Memorial Distinguished Educator Award, Wallace Pratt Memorial Award, and Ziad Rafiq Beydoun Memorial Award. The AAPG IBA award is given immediately following the IBA competition that is held at that year's annual convention. AAPG IBA (Imperial Barrel Award) AAPG promotes student involvement in the profession by holding an annual Imperial Barrel Award competition where geoscience graduate students are encouraged to explore a career in the energy industry. In this global competition, university teams analyze a dataset (geology, geophysics, land, production infrastructure, and other relevant materials) and deliver their results in a 25-minute presentation. 
The students' presentations are judged by industry experts, providing the students a real-world, career-development experience.IBA offers students and their faculty advisor a chance to win accolades for themselves and cash prizes for their schools, and winning teams travel free to the annual AAPG convention to network with both future colleagues and future employers. Correlation of Stratigraphic Units of North America The Correlation of Stratigraphic Units of North America (COSUNA) was a project of the AAPG which resulted in the publication of sixteen correlation charts depicting modern concepts of the stratigraphy of North America. Pioneering positions The AAPG has supported the investigation of the earth, and over the years, ideas about how oil is formed have changed. In the 1960s, the AAPG supported the then-revolutionary idea of plate tectonics (vs. isostasy), and looked at plate tectonics as a key to the evolution of basins, and thus the formation of oil and gas. An example is Tanya Atwater's work on plate tectonics. As a whole, women geoscientists have played an important role in the AAPG's 100-year history as scientists and leaders. Since that time, the AAPG has worked closely with scientific organizations such as the USGS to apply new scientific breakthroughs to the generation, migration, and entrapment of oil and gas. The results have brought new understanding of ultra-deepwater reservoirs (such as off the coast of Brazil). Further understanding about kerogen typing and natural fracture development led to a better understanding of shale resources, and contributed to the "shale revolution." In addition, the AAPG has looked closely at the role of independent oil companies in the roll-out of new technologies used in new types of plays such as shales.The AAPG has supported geomechanics in order to be able to predict pore pressure and avoid drilling hazards. The AAPG has also been supportive of investigations having to do with the impact of policies of disposing of produced water by injecting it into deep formations. Workshops and forums have been held since 2009 (Geosciences Technology Workshops) to analyze the problems and to discuss solutions. They have been held throughout the world, and are documented through the presentations. The presentations from the workshops have been made available for free via the AAPG's open access online journal, Search and Discovery. Global warming controversy In 2006, the AAPG was criticized for selecting author Michael Crichton for their Journalism Award for Jurassic Park and "for his recent science-based thriller State of Fear", in which Crichton exposed his rejection of scientific evidence for anthropogenic global warming. Daniel P. Schrag, a geochemist who directs the Harvard University Center for the Environment, called the award "a total embarrassment" that he said "reflects the politics of the oil industry and a lack of professionalism" on the association's part. The AAPG's award for journalism lauded "notable journalistic achievement, in any medium, which contributes to public understanding of geology, energy resources or the technology of oil and gas exploration." The name of the journalism award has since been changed to the "Geosciences in the Media" Award.The criticism drew attention to the AAPG's 1999 position statement formally rejecting the likelihood of human influence on recent climate. 
The Council of the American Quaternary Association wrote in a criticism of the award that the "AAPG stands alone among scientific societies in its denial of human-induced effects on global warming."As recently as March 2007, articles in the newsletter of the AAPG Division of Professional Affairs stated that "the data does not support human activity as the cause of global warming" and characterize the Intergovernmental Panel on Climate Change reports as "wildly distorted and politicized." 2007 AAPG revised position Acknowledging that the association's previous policy statement on Climate Change was "not supported by a significant number of our members and prospective members", AAPG's formal stance was reviewed and changed in July 2007. The new statement formally accepts human activity as at least one contributor to carbon dioxide increase, but does not confirm its link to climate change, saying its members are "divided on the degree of influence that anthropogenic CO2 has" on climate. AAPG also stated support for "research to narrow probabilistic ranges on the effect of anthropogenic CO2 on global climate."AAPG also withdrew its earlier criticism of other scientific organizations and research stating, "Certain climate simulation models predict that the warming trend will continue, as reported through NAS, AGU, AAAS, and AMS. AAPG respects these scientific opinions but wants to add that the current climate warming projections could fall within well-documented natural variations in past climate and observed temperature data. These data do not necessarily support the maximum case scenarios forecast in some models." Affiliated organizations Organizations may request affiliation with AAPG if they meet a set of criteria including goals compatible with those of AAPG; membership of at least 60% professional geologists with degrees; dissemination of scientific information through publications or meetings; and membership not restricted by region. Pittsburgh Association of Petroleum Geologists Pittsburgh Geological Society Canadian Society of Petroleum Geologists Pacific Section of AAPG (PSAAPG) See also Betty Ann Elliott Randi Martinsen Fred Meissner List of geoscience organizations Society of Exploration Geophysicists Society of Petroleum Engineers Denise M. Stone References External links Official website
bølling–allerød warming
The Bølling–Allerød interstadial (Danish: [ˈpøle̝ŋ ˈæləˌʁœðˀ]), also called the Late Glacial Interstadial, was an abrupt warm and moist interstadial period that occurred during the final stages of the Last Glacial Period. This warm period ran from 14,690 to 12,890 years before the present (BP). It began with the end of the cold period known as the Oldest Dryas, and ended abruptly with the onset of the Younger Dryas, a cold period that reduced temperatures back to near-glacial levels within a decade.In some regions, a cold period known as the Older Dryas can be detected in the middle of the Bølling–Allerød interstadial. In these regions the period is divided into the Bølling oscillation, which peaked around 14,500 BP, and the Allerød oscillation, which peaked closer to 13,000 BP.Estimates of CO2 rise are 20–35 ppmv within 200 years, a rate less than 29–50% compared to the anthropogenic global warming signal from the past 50 years, and with a radiative forcing of 0.59–0.75 W m−2. A previously unidentified contributor to atmospheric CO2 was the expansion of Antarctic Intermediate Water, which is poor at sequestering the gas. History In 1901, the Danish geologists Nikolaj Hartz (1867–1937) and Vilhelm Milthers (1865–1962) provided evidence for climatic warming during the last glacial period, sourced from a clay-pit near Allerød, Denmark. Effects It has been postulated that teleconnections, oceanic and atmospheric processes, on different timescales, connect both hemispheres during abrupt climate change. The Bølling–Allerød was almost completely synchronous across the Northern Hemisphere.The Meltwater pulse 1A event coincides with or closely follows the abrupt onset of the Bølling–Allerød (BA), when global sea level rose about 16 m during this event at rates of 26–53 mm/yr.Records obtained from the Gulf of Alaska show abrupt sea-surface warming of about 3 °C (in less than 90 years), matching ice-core records that register this transition as occurring within decades.Scientists from the Center for Arctic Gas Hydrate (CAGE), Environment and Climate at the University of Tromsø, published a study in June 2017, describing over a hundred ocean sediment craters, some 3,000 meters wide and up to 300 meters deep, formed due to explosive eruptions, attributed to destabilizing methane hydrates, following ice-sheet retreat during the last glacial period, around 12,000 years ago, a few centuries after the Bølling–Allerød warming. These areas around the Barents Sea still seep methane today, and still-existing bulges with methane reservoirs could eventually have the same fate. Ice-sheet retreat A 2017 study attributed the second Weichselian Icelandic ice sheet collapse, onshore (est. net wastage 221 Gt a−1 over 750 years) and similar to today's Greenland rates of mass loss, to atmospheric Bølling–Allerød warming. The study's authors noted: Geothermal conditions impart a significant control on the ice sheet's transient response, particularly during phases of rapid retreat. Insights from this study suggests that large sectors of contemporary ice sheets overlying geothermally active regions, such as Siple Coast, Antarctica, and northeastern Greenland, have the potential to experience rapid phases of mass loss and deglaciation once initial retreat is initiated.The melting of the glaciers of Hardangerfjord began during this interstadial. Boknafjord had already begun to deglaciate before the onset of the Bølling–Allerød interstadial. 
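For context on the CO2 and radiative-forcing figures quoted near the start of this article, a widely used simplified expression relates a CO2 change to forcing as ΔF ≈ 5.35 ln(C/C0) W m−2. The sketch below applies it to a 20–35 ppmv rise on top of an assumed late-glacial baseline of roughly 230 ppmv; the baseline is an assumption for illustration, and the results are only of the same order as the 0.59–0.75 W m−2 range cited above.

```python
import math

def co2_forcing_w_m2(c_new_ppm: float, c_ref_ppm: float) -> float:
    """Simplified logarithmic CO2 radiative forcing (Myhre et al.-style expression)."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

baseline_ppm = 230.0  # assumed late-glacial CO2 concentration, for illustration only
for rise_ppm in (20.0, 35.0):
    forcing = co2_forcing_w_m2(baseline_ppm + rise_ppm, baseline_ppm)
    print(f"+{rise_ppm:.0f} ppmv -> ~{forcing:.2f} W m^-2")
```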
Flora Ice uncovered large parts of north Europe and temperate forests covered Europe from N 29° to 41° latitude. Pioneer vegetation, such as Salix polaris and Dryas octopetala, began to grow in regions that were previously too cold to support these plants. Later, mixed evergreen and deciduous forests prevailed in Eurasia, more deciduous toward the south, just as today. Birch, aspen, spruce, pine, larch and juniper were to be found extensively, mixed with Quercus and Corylus. Poaceae was to be found in more open regions. Fauna During this time late Pleistocene animals spread northward from refugia in the three peninsulas, Iberian Peninsula, Italy and the Balkans. Geneticists can identify the general location by studying degrees of consanguinity in the modern animals of Europe. Many animal species were able to move into regions far more northerly than they could have survived in during the preceding colder periods. Reindeer, horse, saiga, antelope, bison, woolly mammoth and woolly rhinoceros were attested, and were hunted by early man. In the alpine regions ibex and chamois were hunted. Throughout the forest were red deer. Smaller animals, such as fox, wolf, hare and squirrel also appear. Salmon was fished. When this interstadial period ended, with the onset of the Younger Dryas, many of these species were forced to migrate south or become regionally extinct. In the Great Barrier Reef, the Bølling–Allerød warming is associated with a substantial accumulation of calcium carbonate. Causes Ocean circulation In recent years research tied the Bølling–Allerød warming to the release of heat from warm waters originating from the deep North Atlantic Ocean, possibly triggered by a strengthening of the Atlantic meridional overturning circulation (AMOC) at the time.Study results which would help to explain the abruptness of the Bølling–Allerød warming, based on observations and simulations, found that 3°–5 °C Ocean warming occurred at intermediate depths in the North Atlantic over several millennia during Heinrich stadial 1 (HS1). The authors postulated that this warm salty water (WSW) layer, situated beneath the colder surface freshwater in the North Atlantic, generated ocean convective available potential energy (OCAPE) over decades at the end of HS1. According to fluid modelling, at one point the accumulation of OCAPE was released abruptly (c. 1 month) into kinetic energy of thermobaric cabbeling convection (TCC), resulting in the warmer salty waters getting to the surface and subsequently warming of c. 2 °C sea surface warming. Volcanism Isostatic rebound in response to glacier retreat (unloading) and an increase in local salinity (i.e., δ18Osw) have been attributed to increased volcanic activity at the onset of Bølling–Allerød. The association with the interval of intense volcanic activity hints at an interaction between climate and volcanism, with one possible route being enhanced short-term melting of glaciers via albedo changes from particle fallout on glacier surfaces. Human cultures Humans reentered the forests of Europe in search of big game to hunt. Some theories suggest that humans as well as the changing climate were the driving force that led many of these species to extinction. Their cultures were the last of the Late Upper Palaeolithic. Magdalenian hunters moved up the Loire into the Paris Basin. In the drainage basin of the Dordogne, the Perigordian prevailed. The Epigravettian dominated Italy. In the north, the Hamburgian and Federmesser cultures are found. 
The Lyngby, Bromme, Ahrensburg and Swiderian were also attested in Europe at this time. To the south and far east the Neolithic had already begun. In the Middle East, the pre-agricultural Natufian settled around the Eastern Mediterranean coast to exploit wild cereals, such as emmer and two-row barley. In the Allerød they would begin to domesticate these plants. See also Abrupt climate change African humid period Antarctic Cold Reversal Dansgaard–Oeschger event Hiawatha Glacier Ice age Sources External links Sensitivity and rapidity of vegetational response to abrupt climate change Climate change clues revealed by ice sheet collapse
cornwall alliance
The Cornwall Alliance for the Stewardship of Creation is a conservative Christian public policy group that claims that a free-market approach to care for the environment is sufficient, and is critical of much of the current environmental movement. The Alliance is "engaged in a wide range of antienvironmental activities" and denies man-made global warming.Originally called the "Interfaith Stewardship Alliance", it was founded in 2005 in reaction to the efforts of evangelical leaders, such as Rick Warren, to fight global warming. The name Cornwall came from the 2000 Cornwall Declaration. The organization's views on the environment have been strongly influenced by the wise use movement of the 1980s and 1990s. It is a member of the Atlas Network and also has ties to The Heritage Foundation. Cornwall Declaration In 2000, a statement called the Cornwall Declaration on Environmental Stewardship was put forward and has been signed by over 1,500 clergy, theologians and others according to Cornwall Alliance. Signatories include prominent American religious individuals from the Roman Catholic, Jewish, and Evangelical worlds such as Charles Colson, James Dobson, Rabbi Jacob Neusner, R. C. Sproul, Richard John Neuhaus, and D. James Kennedy.The declaration states that human beings should be regarded as "producers and stewards" rather than "consumers and polluters". It states: The Cornwall Declaration further sets forth an articulate and Biblically-grounded set of beliefs and aspirations in which God can be glorified through a world in which "human beings care wisely and humbly for all creatures" and "widespread economic freedom…makes sound ecological stewardship available to ever greater numbers." The declaration expresses concern over what it calls "unfounded or undue concerns" of environmentalists such as "fears of destructive manmade global warming, overpopulation, and rampant species loss". Evangelical Declaration on Global Warming In July 2006, the Cornwall Alliance published an open letter in response to Christian leaders of the Evangelical Climate Initiative who had, in February of the same year, expressed concern over man-made global warming, urging legislators to consider a cap-and-trade system, promoting new technology and reducing carbon emissions from the burning of fossil fuels. Advisory board member Wayne Grudem was quoted in reply saying, "It does not seem likely to me that God would set up the world to work in such a way that human beings would eventually destroy the earth by doing such ordinary and morally good and necessary things as breathing, building a fire to cook or keep warm, burning fuel to travel, or using energy for a refrigerator to preserve food."The missive was accompanied by "Call to Truth, Prudence and the Protection of the Poor", a paper discussing the theology, science and economics of climate change, denying that dangerous anthropogenic global warming was taking place at all, and describing mandatory emission reduction as a "draconian measure" that would deprive people of cheap energy and hurt the poor. The letter was endorsed by over 170 individuals, including atmospheric physicist Richard Lindzen, palaeontologist Robert M. Carter and former Energy & Environment journal editor Sonja Boehmer-Christiansen.On December 2, 2009, the Cornwall Alliance issued a statement called "An Evangelical Declaration on Global Warming", in which they declare in list form both "What We Believe" and "What We Deny". 
The first point from each list is; We believe Earth and its ecosystems – created by God’s intelligent design and infinite power and sustained by His faithful providence – are robust, resilient, self-regulating, and self-correcting, admirably suited for human flourishing, and displaying His glory. Earth’s climate system is no exception. Recent global warming is one of many natural cycles of warming and cooling in geologic history.We deny that Earth and its ecosystems are the fragile and unstable products of chance, and particularly that Earth’s climate system is vulnerable to dangerous alteration because of minuscule changes in atmospheric chemistry. Recent warming was neither abnormally large nor abnormally rapid. There is no convincing scientific evidence that human contribution to greenhouse gases is causing dangerous global warming. Prominent signatories of the declaration include climate scientist Roy Spencer, climatologist David Legates, meteorologist Joseph D'Aleo, television meteorologist James Spann, and Neil Frank, former director of the National Hurricane Center.According to social scientists Riley Dunlap and Aaron McCright the declaration "was laden with denialist claims and designed to counteract progressive Christians’ efforts to generate support for dealing with climate change".Along with the "Evangelical Declaration", Cornwall Alliance issued "A Renewed Call to Truth, Prudence, and Protection of the Poor". The executive summary of their document states, The world is in the grip of an idea: that burning fossil fuels to provide affordable, abundant energy is causing global warming that will be so dangerous that we must stop it by reducing our use of fossil fuels, no matter the cost. Is that idea true? We believe not. We believe that idea – we'll call it "global warming alarmism" – fails the tests of theology, science, and economics. Criticism The website Skeptical Science has published criticism of the Cornwall Alliance. Critics of the Cornwall Alliance have accused the organization of being a "front group for fossil fuel special interests," citing its strong ties to the Committee for a Constructive Tomorrow, which in the past was funded by oil industry giants such as Exxon-Mobil and Chevron. See also Christianity and environmentalism Evangelical environmentalism The Heritage Foundation References External links Official website "Cornwall Alliance Internal Revenue Service filings". ProPublica Nonprofit Explorer.
la niña
La Niña ( lə NEEN-yə, Spanish: [la ˈniɲa]; lit. 'The Girl') is an oceanic and atmospheric phenomenon that is the colder counterpart of El Niño, as part of the broader El Niño–Southern Oscillation (ENSO) climate pattern. The name La Niña originates from Spanish for "the girl", by analogy to El Niño, meaning "the boy". In the past, it was also called an anti-El Niño and El Viejo, meaning "the old man." During a La Niña period, the sea surface temperature across the eastern equatorial part of the central Pacific Ocean will be lower than normal by 3–5 °C (5.4–9 °F). An appearance of La Niña often persists for longer than five months. El Niño and La Niña can be indicators of weather changes across the globe. Atlantic and Pacific hurricanes can have different characteristics due to lower or higher wind shear and cooler or warmer sea surface temperatures. Background La Niña is a complex weather pattern that occurs every few years as a result of variations in ocean temperatures in the equatorial band of the Pacific Ocean. The phenomenon occurs as strong winds blow warm water at the ocean's surface away from South America, across the Pacific Ocean towards Indonesia. As this warm water moves west, cold water from the deep sea rises to the surface near South America; La Niña is considered to be the cold phase of the broader El Niño–Southern Oscillation (ENSO) weather phenomenon, as well as the opposite of the El Niño weather pattern. The movement of so much heat across a quarter of the planet, and particularly in the form of temperature at the ocean surface, can have a significant effect on weather across the entire planet. Tropical instability waves visible on sea surface temperature maps, showing a tongue of colder water, are often present during neutral or La Niña conditions. La Niña events have been observed for hundreds of years, and occurred on a regular basis during the early parts of both the 17th and 19th centuries. Since the start of the 20th century, La Niña events have occurred during the following years: Impacts on the global climate La Niña impacts the global climate and disrupts normal weather patterns, which can lead to intense storms in some places and droughts in others. Regional impacts Observations of La Niña events since 1950 show that impacts associated with La Niña events depend on what season it is. However, while certain events and impacts are expected to occur during these periods, it is not certain or guaranteed that they will occur. Africa La Niña results in wetter-than-normal conditions in southern Africa from December to February, and drier-than-normal conditions over equatorial east Africa over the same period. Asia During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat in China. In March 2008, La Niña caused a drop in sea surface temperatures over Southeast Asia by 2 °C (3.6 °F). It also caused heavy rains over Malaysia, the Philippines, and Indonesia. Australia Across most of the continent, El Niño and La Niña have more impact on climate variability than any other factor. There is a strong correlation between the strength of La Niña and rainfall: the greater the sea surface temperature and Southern Oscillation difference from normal, the larger the rainfall change.
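Operationally, La Niña conditions are commonly identified from sea surface temperature anomalies in the central-eastern equatorial Pacific (the Niño 3.4 region), for example when a three-month running mean stays at or below a threshold of about −0.5 °C for several consecutive overlapping seasons. The sketch below is a simplified version of that bookkeeping with made-up anomaly values; it is not NOAA's official Oceanic Niño Index implementation.

```python
def running_3month_means(anomalies):
    """Overlapping 3-month means of monthly Nino 3.4 SST anomalies (deg C)."""
    return [sum(anomalies[i:i + 3]) / 3.0 for i in range(len(anomalies) - 2)]

def is_la_nina(anomalies, threshold=-0.5, min_consecutive=5):
    """True if at least `min_consecutive` consecutive overlapping 3-month means
    sit at or below the threshold (a common, simplified La Nina criterion)."""
    run = 0
    for mean in running_3month_means(anomalies):
        run = run + 1 if mean <= threshold else 0
        if run >= min_consecutive:
            return True
    return False

# Made-up monthly anomalies for one year (deg C), cooling through the second half:
example = [0.1, -0.2, -0.4, -0.6, -0.7, -0.9, -1.1, -1.0, -0.9, -0.8, -0.6, -0.5]
print(is_la_nina(example))  # True for this illustrative series
```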
North America La Niña causes mostly the opposite effects of El Niño: above-average precipitation across the northern Midwest, the northern Rockies, Northern California, and the Pacific Northwest's southern and eastern regions. Meanwhile, precipitation in the southwestern and southeastern states, as well as southern California, is below average. This also allows for the development of many stronger-than-average hurricanes in the Atlantic and fewer in the Pacific. The synoptic condition for Tehuantepecer winds is associated with a high-pressure system forming in the Sierra Madre of Mexico in the wake of an advancing cold front, which causes winds to accelerate through the Isthmus of Tehuantepec. Tehuantepecers primarily occur during the cold season months for the region in the wake of cold fronts, between October and February, with a summer maximum in July caused by the westward extension of the Azores–Bermuda high-pressure system. Wind magnitude is weaker during La Niña years than El Niño years, due to the less frequent cold frontal incursions during La Niña winters; the winds' effects can last from a few hours to six days. Between 1942 and 1957, La Niña had an impact that caused isotope changes in the plants of Baja California. In Canada, La Niña will, in general, cause a cooler, snowier winter, such as the near-record-breaking amounts of snow recorded in the La Niña winter of 2007–2008 in eastern Canada. In the spring of 2022, La Niña caused above-average precipitation and below-average temperatures in the state of Oregon. April was one of the wettest months on record, and La Niña effects, while less severe, were expected to continue into the summer. South America During a time of La Niña, drought plagues the coastal regions of Peru and Chile. From December to February, northern Brazil is wetter than normal. La Niña causes higher than normal rainfall in the central Andes, which in turn causes catastrophic flooding on the Llanos de Mojos of Beni Department, Bolivia. Such flooding is documented from 1853, 1865, 1872, 1873, 1886, 1895, 1896, 1907, 1921, 1928, 1929 and 1931. Diversity The ‘traditional’ or conventional La Niña is called an Eastern Pacific (EP) La Niña; it involves temperature anomalies in the eastern Pacific. However, aside from differences in diagnostic criteria, non-traditional La Niñas were observed in the last two decades, in which the usual place of the temperature anomaly (Niño 1 and 2) is not affected, but rather an anomaly arises in the central Pacific (Niño 3.4). The phenomenon is called Central Pacific (CP) La Niña, dateline La Niña (because the anomaly arises near the dateline), or La Niña "Modoki" ("Modoki" is Japanese for "alternate / meta / similar-but-different"). These "flavors" of ENSO are in addition to EP and CP types, leading some scientists to argue that ENSO is a continuum of phenomena, often with hybrid types. The effects of the CP La Niña similarly contrast with those of the EP La Niña – it strongly tends to increase rainfall over northwestern Australia and the northern Murray–Darling basin, rather than over the east as in a conventional La Niña.
Also, La Niña Modoki increases the frequency of cyclonic storms over the Bay of Bengal, but decreases the occurrence of severe storms in the Indian Ocean overall, with the Arabian Sea becoming severely non-conducive to tropical cyclone formation. Recent years when La Niña Modoki events occurred include 1973–1974, 1975–1976, 1983–1984, 1988–1989, 1998–1999, 2000–2001, 2008–2009, 2010–2011, and 2016–2017. The recent discovery of ENSO Modoki has some scientists believing it to be linked to global warming. However, comprehensive satellite data go back only to 1979. Generally, there is no scientific consensus on how or if climate change may affect ENSO. There is also a scientific debate on the very existence of this "new" ENSO. A number of studies dispute the reality of this statistical distinction or its increasing occurrence, or both, either arguing the reliable record is too short to detect such a distinction, finding no distinction or trend using other statistical approaches, or that other types should be distinguished, such as standard and extreme ENSO. See also 2000 Mozambique flood (attributed to La Niña) 2010 Pakistan floods (attributed to La Niña) 2010–2011 Queensland floods (attributed to La Niña) 2010–2012 La Niña event 2010–2011 Southern Africa floods (attributed to La Niña) 2010–2013 Southern United States and Mexico drought (attributed to La Niña) 2011 East Africa drought (attributed to La Niña) 2020 Atlantic hurricane season (unprecedented severity fueled by La Niña) 2021 New South Wales floods (severity fueled by La Niña) March 2022 Suriname flooding (attributed to La Niña) 2023 Auckland Anniversary Weekend floods (attributed to La Niña) Ocean dynamical thermostat Walker circulation Footnotes References External links "Current map of sea surface temperature anomalies in the Pacific Ocean". earth.nullschool.net. "Southern Oscillation diagnostic discussion". Climate Prediction Center (CPC). National Oceanic and Atmospheric Administration. "ENSO Outlook – An alert system for the El Niño–Southern Oscillation". Australian Bureau of Meteorology.
post-glacial rebound
Post-glacial rebound (also called isostatic rebound or crustal rebound) is the rise of land masses after the removal of the huge weight of ice sheets during the last glacial period, which had caused isostatic depression. Post-glacial rebound and isostatic depression are phases of glacial isostasy (glacial isostatic adjustment, glacioisostasy), the deformation of the Earth's crust in response to changes in ice mass distribution. The direct raising effects of post-glacial rebound are readily apparent in parts of Northern Eurasia, Northern America, Patagonia, and Antarctica. However, through the processes of ocean siphoning and continental levering, the effects of post-glacial rebound on sea level are felt globally far from the locations of current and former ice sheets. Overview During the last glacial period, much of northern Europe, Asia, North America, Greenland and Antarctica was covered by ice sheets, which reached up to three kilometres thick during the glacial maximum about 20,000 years ago. The enormous weight of this ice caused the surface of the Earth's crust to deform and warp downward, forcing the viscoelastic mantle material to flow away from the loaded region. At the end of each glacial period when the glaciers retreated, the removal of this weight led to slow (and still ongoing) uplift or rebound of the land and the return flow of mantle material back under the deglaciated area. Due to the extreme viscosity of the mantle, it will take many thousands of years for the land to reach an equilibrium level. The uplift has taken place in two distinct stages. The initial uplift following deglaciation was almost immediate due to the elastic response of the crust as the ice load was removed. After this elastic phase, uplift proceeded by slow viscous flow at an exponentially decreasing rate. Today, typical uplift rates are of the order of 1 cm/year or less. In northern Europe, this is clearly shown by the GPS data obtained by the BIFROST GPS network; for example in Finland, the total area of the country is growing by about seven square kilometers per year. Studies suggest that rebound will continue for at least another 10,000 years. The total uplift from the end of deglaciation depends on the local ice load and could be several hundred metres near the centre of rebound. Recently, the term "post-glacial rebound" is gradually being replaced by the term "glacial isostatic adjustment". This is in recognition that the response of the Earth to glacial loading and unloading is not limited to the upward rebound movement, but also involves downward land movement, horizontal crustal motion, changes in global sea levels and the Earth's gravity field, induced earthquakes, and changes in the Earth's rotation. Another alternate term is "glacial isostasy", because the uplift near the centre of rebound is due to the tendency towards the restoration of isostatic equilibrium (as in the case of isostasy of mountains). Unfortunately, that term gives the wrong impression that isostatic equilibrium is somehow reached, so by appending "adjustment" at the end, the motion of restoration is emphasized. Effects Post-glacial rebound produces measurable effects on vertical crustal motion, global sea levels, horizontal crustal motion, gravity field, Earth's rotation, crustal stress, and earthquakes. Studies of glacial rebound give us information about the flow law of mantle rocks, which is important to the study of mantle convection, plate tectonics and the thermal evolution of the Earth. 
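The two-stage response described above, a quick elastic rebound followed by slow viscous uplift at an exponentially decreasing rate, is often approximated with a simple relaxation model for back-of-the-envelope purposes. The sketch below assumes illustrative values for the uplift remaining at the end of deglaciation and for the relaxation time; these numbers are placeholders, not results from the GPS studies cited in this article.

```python
import math

def uplift_rate_mm_per_yr(remaining_uplift_m: float, tau_yr: float, t_yr: float) -> float:
    """Viscous relaxation: remaining uplift U(t) = U0 * exp(-t/tau), so the
    uplift rate is (U0/tau) * exp(-t/tau), converted here to mm per year."""
    return (remaining_uplift_m / tau_yr) * math.exp(-t_yr / tau_yr) * 1000.0

# Assumed, illustrative values: ~50 m of uplift still to come at the end of
# deglaciation, relaxing with a characteristic time of ~5000 years.
U0_m, tau_yr = 50.0, 5000.0
for t in (0, 5_000, 10_000):
    print(f"{t:>6} yr after deglaciation: ~{uplift_rate_mm_per_yr(U0_m, tau_yr, t):.1f} mm/yr")
```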
Such studies also give insight into past ice sheet history, which is important to glaciology, paleoclimate, and changes in global sea level. Understanding postglacial rebound is also important to our ability to monitor recent global change. Vertical crustal motion Erratic boulders, U-shaped valleys, drumlins, eskers, kettle lakes, and bedrock striations are among the common signatures of the Ice Age. In addition, post-glacial rebound has caused numerous significant changes to coastlines and landscapes over the last several thousand years, and the effects continue to be significant. In Sweden, Lake Mälaren was formerly an arm of the Baltic Sea, but uplift eventually cut it off and led to its becoming a freshwater lake in about the 12th century, at the time when Stockholm was founded at its outlet. Marine seashells found in Lake Ontario sediments imply a similar event in prehistoric times. Other pronounced effects can be seen on the island of Öland, Sweden, which has little topographic relief due to the presence of the very level Stora Alvaret. The rising land has caused the Iron Age settlement area to recede from the Baltic Sea, making the present day villages on the west coast set back unexpectedly far from the shore. These effects are quite dramatic at the village of Alby, for example, where the Iron Age inhabitants were known to subsist on substantial coastal fishing. As a result of post-glacial rebound, the Gulf of Bothnia is predicted to eventually close up at Kvarken in more than 2,000 years. The Kvarken is a UNESCO World Natural Heritage Site, selected as a "type area" illustrating the effects of post-glacial rebound and the Holocene glacial retreat. In several other Nordic ports, like Tornio and Pori (formerly at Ulvila), the harbour has had to be relocated several times. Place names in the coastal regions also illustrate the rising land: there are inland places named 'island', 'skerry', 'rock', 'point' and 'sound'. For example, Oulunsalo "island of Oulujoki" is a peninsula, with inland names such as Koivukari "Birch Rock", Santaniemi "Sandy Cape", and Salmioja "the brook of the Sound". In Great Britain, glaciation affected Scotland but not southern England, and the post-glacial rebound of northern Great Britain (up to 10 cm per century) is causing a corresponding downward movement of the southern half of the island (up to 5 cm per century). This will eventually lead to an increased risk of floods in southern England and south-western Ireland. Since the glacial isostatic adjustment process causes the land to move relative to the sea, ancient shorelines are found to lie above present day sea level in areas that were once glaciated. On the other hand, places in the peripheral bulge area, which was uplifted during glaciation, now begin to subside. Therefore, ancient beaches are found below present day sea level in the bulge area. The "relative sea level data", which consists of height and age measurements of the ancient beaches around the world, tells us that glacial isostatic adjustment proceeded at a higher rate near the end of deglaciation than today. The present-day uplift motion in northern Europe is also monitored by a GPS network called BIFROST. Results of GPS data show a peak rate of about 11 mm/year in the north part of the Gulf of Bothnia, but this uplift rate decreases away from the centre of rebound and becomes negative outside the former ice margin. In the near field outside the former ice margin, the land sinks relative to the sea.
Such subsidence is occurring along the east coast of the United States, where ancient beaches are found submerged below present-day sea level and Florida is expected to be submerged in the future. GPS data in North America also confirm that land uplift becomes subsidence outside the former ice margin. Global sea levels To form the ice sheets of the last Ice Age, water from the oceans evaporated, condensed as snow and was deposited as ice in high latitudes. Thus global sea level fell during glaciation. The ice sheets at the last glacial maximum were so massive that global sea level fell by about 120 metres. Thus continental shelves were exposed and many islands became connected with the continents through dry land. This was the case between the British Isles and Europe (Doggerland), and between Taiwan, the Indonesian islands and Asia (Sundaland). A land bridge also existed between Siberia and Alaska that allowed the migration of people and animals during the last glacial maximum. The fall in sea level also affected the circulation of ocean currents and thus had an important impact on climate during the glacial maximum. During deglaciation, the melted ice water returns to the oceans, so sea level rises again. However, geological records of sea level changes show that the redistribution of the melted ice water is not the same everywhere in the oceans; depending upon the location, the rise in sea level at one site may be more than that at another. This is due to the gravitational attraction between the mass of the melted water and other masses, such as the remaining ice sheets, glaciers, water masses and mantle rocks, and to the changes in centrifugal potential caused by Earth's variable rotation. Horizontal crustal motion Accompanying the vertical motion is horizontal motion of the crust. The BIFROST GPS network shows that the motion diverges from the centre of rebound; however, the largest horizontal velocities are found near the former ice margin. The situation in North America is less certain, owing to the sparse distribution of GPS stations in northern Canada, which is rather inaccessible. Tilt The combination of horizontal and vertical motion changes the tilt of the surface. That is, locations farther north (closer to the centre of rebound) rise faster, an effect that becomes apparent in lakes. The bottoms of the lakes gradually tilt away from the direction of the former ice maximum, such that lake shores on the side of the maximum (typically north) recede and the opposite (southern) shores sink. This causes the formation of new rapids and rivers. For example, Lake Pielinen in Finland, which is large (90 × 30 km) and oriented perpendicularly to the former ice margin, originally drained through an outlet in the middle of the lake near Nunnanlahti to Lake Höytiäinen. The change of tilt caused Pielinen to burst through the Uimaharju esker at the southwestern end of the lake, creating a new river (Pielisjoki) that runs via Lake Pyhäselkä to Lake Saimaa and onwards to the sea. The effects are similar to those on seashores, but occur above sea level. Tilting of land will also affect the flow of water in lakes and rivers in the future, and is therefore important for water resource management planning. In Sweden, Lake Sommen's outlet in the northwest has a rebound rate of 2.36 mm/a, while at Svanaviken in the east it is 2.05 mm/a. This means the lake is slowly being tilted and its southeastern shores drowned.
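The lake-tilting effect can be quantified directly from the two uplift rates quoted above for Lake Sommen. The sketch below only does the arithmetic on those published rates; the projection over a thousand years simply assumes the present rates stay constant, which is an idealization.

```python
uplift_nw_mm_per_yr = 2.36   # rebound at the northwestern outlet
uplift_se_mm_per_yr = 2.05   # rebound at Svanaviken in the southeast

differential = uplift_nw_mm_per_yr - uplift_se_mm_per_yr   # mm of extra rise per year at the outlet
per_millennium_cm = differential * 1000 / 10.0             # accumulated over 1,000 years, in cm

print(f"differential uplift : {differential:.2f} mm/yr")
print(f"over 1,000 years    : {per_millennium_cm:.0f} cm of extra rise at the outlet")
# The northwestern end rises roughly 31 cm more per millennium than the southeastern end,
# so water is slowly displaced toward the southeastern shore, drowning it.
```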
Gravity field Ice, water, and mantle rocks have mass, and as they move around they exert a gravitational pull on other masses. Thus, the gravity field, which is sensitive to all mass on the surface and within the Earth, is affected by the redistribution of ice and meltwater on the surface of the Earth and by the flow of mantle rocks within. Today, more than 6,000 years after the last deglaciation terminated, the flow of mantle material back to the glaciated areas causes the overall shape of the Earth to become less oblate. This change in the topography of Earth's surface affects the long-wavelength components of the gravity field. The changing gravity field can be detected by repeated land measurements with absolute gravimeters and, more recently, by the GRACE satellite mission. The change in the long-wavelength components of Earth's gravity field also perturbs the orbital motion of satellites and has been detected in the motion of the LAGEOS satellites. Vertical datum The vertical datum is a reference surface for altitude measurement and plays vital roles in many human activities, including land surveying and the construction of buildings and bridges. Since postglacial rebound continuously deforms the crustal surface and the gravitational field, the vertical datum needs to be redefined repeatedly through time. State of stress, intraplate earthquakes and volcanism According to the theory of plate tectonics, plate–plate interaction results in earthquakes near plate boundaries. However, large earthquakes are found in intraplate environments such as eastern Canada (up to M7) and northern Europe (up to M5), far away from present-day plate boundaries. An important intraplate earthquake was the magnitude 8 New Madrid earthquake that occurred in the mid-continental US in 1811. Glacial loads provided more than 30 MPa of vertical stress in northern Canada and more than 20 MPa in northern Europe during the glacial maximum. This vertical stress is supported by the mantle and the flexure of the lithosphere. Since the mantle and the lithosphere continuously respond to the changing ice and water loads, the state of stress at any location continuously changes in time. The changes in the orientation of the state of stress are recorded in the postglacial faults of southeastern Canada. When the postglacial faults formed at the end of deglaciation 9,000 years ago, the horizontal principal stress orientation was almost perpendicular to the former ice margin, but today the orientation is northeast–southwest, along the direction of seafloor spreading at the Mid-Atlantic Ridge. This shows that the stress due to postglacial rebound played an important role at the time of deglaciation, but has gradually relaxed so that tectonic stress has become more dominant today. According to the Mohr–Coulomb theory of rock failure, large glacial loads generally suppress earthquakes, but rapid deglaciation promotes them. According to Wu & Hasegawa, the rebound stress that is available to trigger earthquakes today is of the order of 1 MPa. This stress level is not large enough to rupture intact rocks but is large enough to reactivate pre-existing faults that are close to failure. Thus, both postglacial rebound and past tectonics play important roles in today's intraplate earthquakes in eastern Canada and the southeastern US; rebound stress could have triggered the intraplate earthquakes in eastern Canada and may have played some role in triggering earthquakes in the eastern US, including the New Madrid earthquakes of 1811.
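The Mohr–Coulomb argument above can be illustrated with a simple strength check: a stress perturbation of about 1 MPa is far below the shear stress needed to break intact rock, but it can tip a pre-stressed fault that is already close to its frictional limit. The numbers below (cohesion, friction coefficient, ambient stresses) are generic, textbook-style values chosen only for illustration, not figures from this article.

```python
def coulomb_strength(cohesion_mpa: float, friction: float, normal_stress_mpa: float) -> float:
    """Shear stress needed for failure under the Mohr-Coulomb criterion."""
    return cohesion_mpa + friction * normal_stress_mpa

# Illustrative, assumed values (not from the article):
friction = 0.6          # typical rock friction coefficient
normal_stress = 50.0    # MPa, effective normal stress at a few km depth
intact_cohesion = 30.0  # MPa, intact crystalline rock
fault_cohesion = 0.0    # MPa, a pre-existing fault has negligible cohesion

ambient_shear = 29.5    # MPa, assumed tectonic shear stress already on the fault
rebound_stress = 1.0    # MPa, order of the available rebound stress quoted above

for label, cohesion in [("intact rock", intact_cohesion), ("pre-existing fault", fault_cohesion)]:
    strength = coulomb_strength(cohesion, friction, normal_stress)
    total_shear = ambient_shear + rebound_stress
    verdict = "failure" if total_shear >= strength else "stable"
    print(f"{label:18s}: strength {strength:5.1f} MPa, shear {total_shear:5.1f} MPa -> {verdict}")
```

With these assumed numbers, the intact rock needs about 60 MPa of shear stress to break, while the near-critical fault fails once the extra 1 MPa of rebound stress is added, which is the qualitative point made in the text.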
The stress situation in northern Europe today is further complicated by current tectonic activity nearby and by coastal loading and weakening. Increasing pressure due to the weight of the ice during glaciation may have suppressed melt generation and volcanic activity below Iceland and Greenland. On the other hand, decreasing pressure due to deglaciation can increase melt production and volcanic activity by 20–30 times. Recent global warming Recent global warming has caused mountain glaciers and the ice sheets in Greenland and Antarctica to melt and global sea level to rise. Therefore, monitoring sea level rise and the mass balance of ice sheets and glaciers gives a better understanding of global warming. The recent rise in sea level has been monitored by tide gauges and satellite altimetry (e.g. TOPEX/Poseidon). As well as the addition of meltwater from glaciers and ice sheets, recent sea level changes are affected by the thermal expansion of sea water due to global warming, by sea level change due to deglaciation of the last glacial maximum (postglacial sea level change), by deformation of the land and ocean floor, and by other factors. Thus, to understand global warming from sea level change, one must be able to separate all these factors, especially postglacial rebound, since it is one of the leading factors. Mass changes of ice sheets can be monitored by measuring changes in the ice surface height, the deformation of the ground below, and the changes in the gravity field over the ice sheet. Thus the ICESat and GRACE satellite missions and GPS are useful for this purpose. However, glacial isostatic adjustment beneath the ice sheets affects ground deformation and the gravity field measured today, so understanding glacial isostatic adjustment is important in monitoring recent global warming. One of the possible impacts of rebound triggered by global warming may be more volcanic activity in previously ice-capped areas such as Iceland and Greenland. It may also trigger intraplate earthquakes near the ice margins of Greenland and Antarctica. Unusually rapid (up to 4.1 cm/year) present glacial isostatic rebound due to recent ice mass losses in the Amundsen Sea embayment region of Antarctica, coupled with low regional mantle viscosity, is predicted to provide a modest stabilizing influence on marine ice sheet instability in West Antarctica, but likely not to a sufficient degree to arrest it. Applications The speed and amount of postglacial rebound are determined by two factors: the viscosity or rheology (i.e., the flow behaviour) of the mantle, and the ice loading and unloading histories on the surface of the Earth. The viscosity of the mantle is important in understanding mantle convection, plate tectonics, dynamical processes within the Earth, and the Earth's thermal state and thermal evolution. However, viscosity is difficult to measure directly: creep experiments on mantle rocks at natural strain rates would take thousands of years, and the required ambient temperature and pressure conditions cannot easily be maintained for long enough. Observations of postglacial rebound therefore provide a natural experiment for measuring mantle rheology. Modelling of glacial isostatic adjustment addresses the question of how viscosity changes in the radial and lateral directions and whether the flow law is linear, nonlinear, or a composite rheology.
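One classic way in which rebound observations constrain mantle viscosity is through the characteristic relaxation time of a broad surface load: for a uniform viscous half-space the textbook result is τ = 4πη / (ρ g λ), where η is the viscosity, ρ the mantle density, g gravity, and λ the wavelength of the load. This is a standard half-space approximation, not the modelling approach of the studies discussed above; the numerical values below are assumed round numbers used only to show the order of magnitude.

```python
import math

def relaxation_time_yr(viscosity_pa_s: float, wavelength_m: float,
                       density_kg_m3: float = 3300.0, gravity_m_s2: float = 9.8) -> float:
    """Viscous half-space relaxation time tau = 4*pi*eta / (rho * g * lambda), in years."""
    seconds = 4.0 * math.pi * viscosity_pa_s / (density_kg_m3 * gravity_m_s2 * wavelength_m)
    return seconds / (365.25 * 24 * 3600)

# Assumed round numbers: a Haskell-type mantle viscosity of 1e21 Pa s and a
# Fennoscandia-scale load wavelength of roughly 3,000 km.
eta = 1.0e21
wavelength = 3.0e6
print(f"relaxation time ~ {relaxation_time_yr(eta, wavelength):,.0f} years")
```

With these inputs the relaxation time comes out at roughly four thousand years; reading the argument in reverse, a rebound history decaying on that timescale points to a mantle viscosity of the order of 10^21 Pa s, which is how such observations act as a natural viscosity experiment.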
Mantle viscosity may additionally be estimated using seismic tomography, where seismic velocity is used as a proxy observable. Ice thickness histories are useful in the study of paleoclimatology, glaciology and paleo-oceanography. Ice thickness histories are traditionally deduced from three types of information. First, sea level data at stable sites far away from the centres of deglaciation give an estimate of how much water entered the oceans or, equivalently, how much ice was locked up at the glacial maximum. Second, the locations and dates of terminal moraines tell us the areal extent and retreat history of past ice sheets. The physics of glaciers gives the theoretical profile of ice sheets at equilibrium and indicates that the thickness and horizontal extent of equilibrium ice sheets are closely related to their basal conditions; thus the volume of ice locked up is proportional to their instantaneous area. Finally, the heights of ancient beaches in the sea level data and observed land uplift rates (e.g. from GPS or VLBI) can be used to constrain the local ice thickness. A popular ice model deduced this way is the ICE-5G model. Because the response of the Earth to changes in ice height is slow, it cannot record rapid fluctuations or surges of ice sheets, so the ice sheet profiles deduced this way only give the "average height" over a thousand years or so. Glacial isostatic adjustment also plays an important role in understanding recent global warming and climate change. Discovery Before the eighteenth century, it was thought in Sweden that sea levels were falling. On the initiative of Anders Celsius, a number of marks were made in rock at different locations along the Swedish coast. In 1765 it was possible to conclude that it was not a lowering of sea levels but an uneven rise of land. In 1865 Thomas Jamieson proposed that the rise of land was connected with the ice age, which had first been recognized in 1837. The theory was accepted after Gerard De Geer's investigations of old shorelines in Scandinavia, published in 1890. Legal implications In areas where rising land is observed, it is necessary to define the exact limits of property. In Finland, the "new land" is legally the property of the owner of the water area, not of any landowner on the shore. Therefore, if the owner of the land wishes to build a pier over the "new land", they need the permission of the owner of the (former) water area. The landowner of the shore may redeem the new land at market price. Usually the owner of the water area is the partition unit of the landowners of the shores, a collective holding corporation. Formulation: sea-level equation The sea-level equation (SLE) is a linear integral equation that describes the sea-level variations associated with post-glacial rebound. The basic idea of the SLE dates back to 1888, when Woodward published his pioneering work on the form and position of mean sea level; it was later refined by Platzman and Farrell in the context of the study of ocean tides. In the words of Wu and Peltier, the solution of the SLE yields the space- and time-dependent change of ocean bathymetry which is required to keep the gravitational potential of the sea surface constant for a specific deglaciation chronology and viscoelastic earth model. The SLE theory was then developed further by authors such as Mitrovica & Peltier, Mitrovica et al., and Spada & Stocchi.
In its simplest form, the SLE reads

$$S = N - U,$$

where $S$ is the sea-level change, $N$ is the sea surface variation as seen from Earth's centre of mass, and $U$ is the vertical displacement. In a more explicit form the SLE can be written as follows:

$$S(\theta,\lambda,t) = \frac{\rho_i}{\gamma}\,G_s \otimes_i I + \frac{\rho_w}{\gamma}\,G_s \otimes_o S + S^E - \frac{\rho_i}{\gamma}\,\overline{G_s \otimes_i I} - \frac{\rho_w}{\gamma}\,\overline{G_s \otimes_o S},$$

where $\theta$ is colatitude and $\lambda$ is longitude, $t$ is time, $\rho_i$ and $\rho_w$ are the densities of ice and water, respectively, $\gamma$ is the reference surface gravity, $G_s = G_s(h,k)$ is the sea-level Green's function (dependent upon the $h$ and $k$ viscoelastic load-deformation coefficients, or LDCs), $I = I(\theta,\lambda,t)$ is the ice thickness variation, $S^E = S^E(t)$ represents the eustatic term (i.e. the ocean-averaged value of $S$), $\otimes_i$ and $\otimes_o$ denote spatio-temporal convolutions over the ice- and ocean-covered regions, and the overbar indicates an average over the surface of the oceans that ensures mass conservation. See also Holocene glacial retreat – Global deglaciation starting about 19,000 years ago and accelerating about 15,000 years ago Raised beach, also known as marine terrace – Emergent coastal landform Physical impacts of climate change Stress (mechanics) – Physical quantity that expresses internal forces in a continuous material Isostatic depression – The opposite of isostatic rebound
antarctic peninsula
The Antarctic Peninsula, known as O'Higgins Land in Chile and Tierra de San Martín in Argentina, and originally as Graham Land in the United Kingdom and the Palmer Peninsula in the United States, is the northernmost part of mainland Antarctica. The Antarctic Peninsula is part of the larger peninsula of West Antarctica, protruding 1,300 km (810 miles) from a line between Cape Adams (Weddell Sea) and a point on the mainland south of the Eklund Islands. Beneath the ice sheet that covers it, the Antarctic Peninsula consists of a string of bedrock islands; these are separated by deep channels whose bottoms lie at depths considerably below current sea level. They are joined by a grounded ice sheet. Tierra del Fuego, the southernmost tip of South America, is about 1,000 km (620 miles) away across the Drake Passage. The Antarctic Peninsula is 522,000 square kilometres (202,000 sq mi) in area and 80% ice-covered. The marine ecosystem around the western continental shelf of the Antarctic Peninsula (WAP) has been subjected to rapid climate change. Over the past 50 years, the warm, moist maritime climate of the northern WAP has shifted south, increasingly displacing the once dominant cold, dry continental Antarctic climate. This regional warming has caused multi-level responses in the marine ecosystem, such as increased heat transport, decreased sea ice extent and duration, local declines in ice-dependent Adélie penguins, increases in ice-tolerant gentoo and chinstrap penguins, alterations in phytoplankton and zooplankton community composition, and changes in krill recruitment, abundance and availability to predators. The Antarctic Peninsula is currently dotted with numerous research stations, and nations have made multiple claims of sovereignty. The peninsula is part of disputed and overlapping claims by Argentina, Chile, and the United Kingdom. None of these claims have international recognition and, under the Antarctic Treaty System, the respective countries do not attempt to enforce their claims. The British claim, however, is recognised by Australia, France, New Zealand, and Norway. Argentina has the most bases and personnel stationed on the peninsula. History Discovery and naming The most likely first sighting of the Antarctic Peninsula, and therefore also of the whole Antarctic mainland, was on 27 January 1820 by an expedition of the Imperial Russian Navy led by Fabian Gottlieb von Bellingshausen, but the party did not recognize what they took to be an icefield covered by small hillocks as the mainland. Three days later, on 30 January 1820, Edward Bransfield and William Smith, with a British expedition, were the first to chart part of the Antarctic Peninsula. This area was later to be called Trinity Peninsula and is the extreme northeast portion of the peninsula. The next confirmed sighting was in 1832 by John Biscoe, a British explorer, who named the northern part of the Antarctic Peninsula Graham Land. The identity of the first European to land on the continent is also disputed. A 19th-century seal hunter, John Davis, was almost certainly the first, but sealers were secretive about their movements and their logbooks were deliberately unreliable, to protect any new sealing grounds from competition. Between 1901 and 1904, Otto Nordenskjöld led the Swedish Antarctic Expedition, one of the first expeditions to explore parts of Antarctica. They landed on the Antarctic Peninsula in February 1902, aboard the ship Antarctic, which sank not far from the peninsula.
All crew were rescued by an Argentine ship. The British Graham Land expedition of 1934–1937 carried out aerial surveys using a de Havilland Fox Moth aircraft and concluded that Graham Land was not an archipelago but a peninsula. Agreement on the name "Antarctic Peninsula" by the Advisory Committee on Antarctic Names and the UK Antarctic Place-Names Committee in 1964 resolved a long-standing difference over the use of the United States' name "Palmer Peninsula" or the British name "Graham Land" for this geographic feature. This dispute was resolved by making Graham Land the part of the Antarctic Peninsula northward of a line between Cape Jeremy and Cape Agassiz, and Palmer Land the part southward of that line, which is roughly 69° S. Palmer Land is named for the United States seal hunter Nathaniel Palmer. The Chilean name for the feature, O'Higgins Land, is in honor of Bernardo O'Higgins, the Chilean patriot and Antarctic visionary. Most other Spanish-speaking countries call it la Península Antártica, though Argentina also officially refers to it as Tierra de San Martín; as of 2018, Argentina has more bases and personnel on the peninsula than any other nation. Other portions of the peninsula are named by and after the various expeditions that discovered them, including the Bowman, Black, Danco, Davis, English, Fallières, Nordenskjöld, Loubet, and Wilkins Coasts. Research stations The first Antarctic research stations were established during World War II by a British military operation, Operation Tabarin. The 1950s saw a marked increase in the number of research bases as Britain, Chile and Argentina competed to make claims over the same area. Meteorology and geology were the primary research subjects. Since the peninsula has the mildest climate in Antarctica, the highest concentration of research stations on the continent can be found there, or on the many nearby islands, and it is the part of Antarctica most often visited by tour vessels and yachts. Occupied bases include Base General Bernardo O'Higgins Riquelme, Bellingshausen Station, Carlini Base, Comandante Ferraz Antarctic Station, Palmer Station, Rothera Research Station, and San Martín Base. The Antarctic Peninsula also has many abandoned scientific and military bases. Argentina's Esperanza Base was the birthplace of Emilio Marcos Palma, the first person to be born in Antarctica. Oil spill The grounding of the Argentine ship ARA Bahía Paraíso and the subsequent 170,000 US gal (640,000 L; 140,000 imp gal) oil spill occurred near the Antarctic Peninsula in 1989. Geology Antarctica was once part of the Gondwana supercontinent. Outcrops from this time include Ordovician and Devonian granites and gneiss found in the Scar Inlet and Joerg Peninsula, while the Carboniferous–Triassic Trinity Peninsula Group comprises sedimentary rocks that outcrop in Hope Bay and Prince Gustav Channel. Ring of Fire volcanic rocks erupted in the Jurassic, with the breakup of Gondwana, and outcrop in eastern Graham Land as volcanic ash deposits. Volcanism along western Graham Land dates from the Cretaceous to present times, and outcrops are found along the Gerlache Strait, the Lemaire Channel, the Argentine Islands, and Adelaide Island. These rocks in western Graham Land include andesite lavas and granite derived from the magma, and indicate that Graham Land was a continuation of the Andes. This line of volcanoes is associated with subduction of the Phoenix Plate.
Metamorphism associated with this subduction is evident in the Scotia Metamorphic Complex, which outcrops on Elephant Island and on Clarence and Smith Islands of the South Shetland Islands. The Drake Passage opened about 30 Ma as Antarctica separated from South America. The South Shetland Islands separated from Graham Land about 4 Ma as a volcanic rift formed within the Bransfield Strait. Three dormant submarine volcanoes along this rift are The Axe, Three Sisters, and Orca; Deception Island is an active volcano at the southern end of this rift zone. Notable fossil locations include the Late Jurassic to Early Cretaceous Fossil Bluff Group of Alexander Island, Early Cretaceous sediments in Byers Peninsula on Livingston Island, and the sediments on Seymour Island, which record the end-Cretaceous extinction. Geography The peninsula is very mountainous, with peaks rising to about 2,800 m (9,200 ft) and higher. Notable peaks on the peninsula include Deschanel Peak and Mounts Castro, Coman, Gilbert, Jackson, William, Owen, Scott, and Hope, the last of which is the highest point at 3,239 m (10,627 ft). These mountains are considered to be a continuation of the Andes of South America, with a submarine spine or ridge connecting the two. This is the basis for the position advanced by Chile and Argentina for their territorial claims. The Scotia Arc is the island arc system that links the mountains of the Antarctic Peninsula to those of Tierra del Fuego. There are various volcanoes in the islands around the Antarctic Peninsula. This volcanism is related to extensional tectonics in the Bransfield Rift to the west and the Larsen Rift to the east. The landscape of the peninsula is typical Antarctic tundra. The peninsula has a sharp elevation gradient, with glaciers flowing into the Larsen Ice Shelf, which experienced significant breakup in 2002. Other ice shelves on the peninsula include the George VI, Wilkins, Wordie and Bach Ice Shelves. The Filchner-Ronne Ice Shelf lies to the east of the peninsula. Islands along the peninsula are mostly ice-covered and connected to the land by pack ice. Separating the peninsula from nearby islands are the Antarctic Sound, Erebus and Terror Gulf, George VI Sound, Gerlache Strait and the Lemaire Channel. The Lemaire Channel is a popular destination for tourist cruise ships that visit Antarctica. Further to the west lies the Bellingshausen Sea, and to the north the Scotia Sea. The Antarctic Peninsula and Cape Horn create a funneling effect, which channels the winds into the relatively narrow Drake Passage. Hope Bay, at 63°23′S 057°00′W, is near the northern extremity of the peninsula, Prime Head, at 63°13′S. Near the tip at Hope Bay is Sheppard Point. The part of the peninsula extending northeastwards from a line connecting Cape Kater to Cape Longing is called the Trinity Peninsula. Brown Bluff, a rare tuya, and Sheppard Nunatak are also found here. The Airy, Seller, Fleming and Prospect Glaciers form the Forster Ice Piedmont along the west coast of the peninsula. Charlotte Bay, Hughes Bay and Marguerite Bay are all on the west coast as well. On the east coast is the Athene Glacier; the Arctowski and Åkerlundh Nunataks are both just off the east coast. A number of smaller peninsulas extend from the main Antarctic Peninsula, including the Hollick-Kenyon Peninsula and Prehn Peninsula at the base of the Antarctic Peninsula. Also located here are the Scaife Mountains. The Eternity Range is found in the middle of the peninsula.
Other geographical features include the Avery Plateau and the twin towers of Una Peaks. Climate Because the Antarctic Peninsula, which reaches north of the Antarctic Circle, is the most northerly part of Antarctica, it has the mildest climate on the continent. Its temperatures are warmest in January, averaging 1 to 2 °C (34 to 36 °F), and coldest in June, averaging −15 to −20 °C (5 to −4 °F). Its west coast from the tip of the Antarctic Peninsula south to 68° S, which has a maritime Antarctic climate, is the mildest part of the Antarctic Peninsula. Within this part of the Antarctic Peninsula, temperatures exceed 0 °C (32 °F) for 3 or 4 months during the summer, and rarely fall below −10 °C (14 °F) during the winter. Farther south along the west coast and along the northeast coast of the peninsula, mean monthly temperatures exceed 0 °C (32 °F) for only one or two months of summer and average around −15 °C (5 °F) in winter. The east coast of the Antarctic Peninsula south of 63° S is generally much colder, with mean temperatures exceeding 0 °C (32 °F) for at most one month of summer, and winter mean temperatures ranging from −5 to −25 °C (23 to −13 °F). The colder temperatures of the southeastern, Weddell Sea side of the Antarctic Peninsula are reflected in the persistence of the ice shelves that cling to the eastern side. Precipitation varies greatly within the Antarctic Peninsula. From the tip of the Antarctic Peninsula to 68° S, precipitation averages 35–50 cm (14–20 in) per year; a good portion of this falls as rain during the summer, precipitation occurs on two-thirds of the days of the year, and there is little seasonal variation in amounts. Between about 68° S and 63° S on the west coast of the Antarctic Peninsula and along its northeast coast, precipitation is 35 cm (14 in) or less, with occasional rain. Along the east coast of the Antarctic Peninsula south of 63° S, precipitation ranges from 10 to 15 cm (3.9 to 5.9 in). In comparison, the subantarctic islands have precipitation of 100–200 cm (39–79 in) per year, and the dry interior of Antarctica is a virtual desert with only 10 cm (3.9 in) of precipitation per year. Climate change Because of issues concerning global climate change, the Antarctic Peninsula and adjacent parts of the Weddell Sea and its Pacific continental shelf have been the subject of intensive geologic, paleontologic, and paleoclimatic research by interdisciplinary and multinational groups over the last several decades. The combined study of the glaciology of its ice sheet and the paleontology, sedimentology, stratigraphy, structural geology, and volcanology of glacial and nonglacial deposits of the Antarctic Peninsula has allowed the reconstruction of its paleoclimatology and prehistoric ice sheet fluctuations over the last 100 million years. This research shows the dramatic changes in climate which have occurred within this region since it reached its approximate position within the Antarctic Circle during the Cretaceous Period. The Fossil Bluff Group, which outcrops within Alexander Island, provides a detailed record, including paleosols and fossil plants, of Middle Cretaceous (Albian) terrestrial climates. The sediments that form the Fossil Bluff Group accumulated within a volcanic island arc, which now forms the bedrock backbone of the Antarctic Peninsula, in prehistoric floodplains and deltas and offshore as submarine fans and other marine sediments. As reflected in the plant fossils, paleosols, and climate models, the climate was warm, humid, and seasonally dry.
According to climate models, the summers were dry and the winters were wet. The rivers were perennial and subject to intermittent flooding as the result of heavy rainfall. Warm high-latitude climates reached a peak during the Cretaceous Thermal Maximum in the mid-Late Cretaceous. Plant fossils found within the Late Cretaceous (Coniacian and Santonian-early Campanian) strata of the Hidden Lake and Santa Maria formations, which outcrop within James Ross, Seymour, and adjacent islands, indicate that this emergent volcanic island arc enjoyed warm temperate or subtropical climates with adequate moisture for growth and without extended periods of below-freezing winter temperatures. After the peak warmth of the Cretaceous Thermal Maximum, the climate, both regionally and globally, appears to have cooled, as seen in the Antarctic fossil wood record. Later, warm high-latitude climates returned to the Antarctic Peninsula region during the Paleocene and early Eocene, as reflected in fossil plants. Abundant plant and marine fossils from Paleogene marine sediments that outcrop on Seymour Island indicate the presence of a cool and moist high-latitude environment during the early Eocene. Detailed studies of the paleontology, sedimentology, and stratigraphy of glacial and nonglacial deposits within the Antarctic Peninsula and adjacent parts of the Weddell Sea and its Pacific continental shelf have found that the region became progressively glaciated as the climate of Antarctica dramatically and progressively cooled during the last 37 million years. This progressive cooling was contemporaneous with a reduction in atmospheric CO2 concentrations. During this climatic cooling, the Antarctic Peninsula was probably the last region of Antarctica to have been fully glaciated. Within the Antarctic Peninsula, mountain glaciation was initiated during the latest Eocene, about 37–34 Ma. The transition from temperate, alpine glaciation to a dynamic ice sheet occurred about 12.8 Ma. The Antarctic Peninsula itself formed as the bedrock islands underlying it were overridden and joined by an ice sheet in the early Pliocene, about 5.3–3.6 Ma. During the Quaternary period, the size of the West Antarctic Ice Sheet has fluctuated in response to glacial–interglacial cycles. During glacial epochs, this ice sheet was significantly thicker than it is currently and extended to the edge of the continental shelves. During interglacial epochs, the West Antarctic Ice Sheet was thinner than during glacial epochs and its margins lay significantly inland of the continental margins. During the Last Glacial Maximum, about 20,000 to 18,000 years ago, the ice sheet covering the Antarctic Peninsula was significantly thicker than it is now. Except for a few isolated nunataks, the Antarctic Peninsula and its associated islands were completely buried by the ice sheet. In addition, the ice sheet extended past the present shoreline onto the Pacific outer continental shelf and completely filled the Weddell Sea up to the continental margin with grounded ice. The deglaciation of the Antarctic Peninsula largely occurred between 18,000 and 6,000 years ago as an interglacial climate was established in the region. It began about 18,000 to 14,000 years ago with the retreat of the ice sheet from the Pacific outer continental shelf and the continental margin within the Weddell Sea. Within the Weddell Sea, the transition from grounded ice to a floating ice shelf occurred about 10,000 years ago.
The deglaciation of some locations within the Antarctic Peninsula continued until 4,000 to 3,000 years ago. Within the Antarctic Peninsula, an interglacial climatic optimum occurred about 3,000 to 5,000 years ago. After the climatic optimum, a distinct climatic cooling, which lasted until historic times, occurred. The Antarctic Peninsula is a part of the world that is experiencing extraordinary warming. In each of the last five decades, average temperatures on the Antarctic Peninsula have risen by 0.5 °C (0.9 °F). Ice mass loss on the peninsula occurred at a rate of 60 billion tons per year in 2006, with the greatest change occurring in the northern tip of the peninsula. Seven ice shelves along the Antarctic Peninsula have retreated or disintegrated in the last two decades. Research by the United States Geological Survey has revealed that every ice front on the southern half of the peninsula experienced a retreat between 1947 and 2009. According to a study by the British Antarctic Survey, glaciers on the peninsula are not only retreating but also increasing their flow rate as a result of increased buoyancy in the lower parts of the glaciers. Professor David Vaughan has described the disintegration of the Wilkins Ice Shelf as the latest evidence of rapid warming in the area. The Intergovernmental Panel on Climate Change has been unable to determine the greatest potential effect on sea level rise that glaciers in the region may cause. Flora and fauna The coasts of the peninsula have the mildest climate in Antarctica, and moss- and lichen-covered rocks are free of snow during the summer months, although the weather is still intensely cold and the growing season very short. The plant life today is mainly mosses, lichens and algae adapted to this harsh environment, with lichens preferring the wetter areas of the rocky landscape. The most common lichens are Usnea and Bryoria species. Antarctica's two flowering plant species, the Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis), are found on the northern and western parts of the Antarctic Peninsula, including offshore islands, where the climate is relatively mild. Lagotellerie Island in Marguerite Bay is an example of this habitat. Xanthoria elegans and Caloplaca are visible crustose lichens seen on coastal rocks. Antarctic krill are found in the seas surrounding the peninsula and the rest of the continent. The crabeater seal spends most of its life in the same waters feeding on krill. The bald notothen is a cryopelagic fish that lives in sub-zero water temperatures around the peninsula. Vocalizations of the sei whale can be heard emanating from the waters surrounding the Antarctic Peninsula. Whales include the Antarctic minke whale, dwarf minke whale, and killer whale. The animals of Antarctica live on food they find in the sea—not on land—and include seabirds, seals and penguins. The seals include the leopard seal (Hydrurga leptonyx), Weddell seal (Leptonychotes weddellii), the huge southern elephant seal (Mirounga leonina), and the crabeater seal (Lobodon carcinophagus). Penguin species found on the peninsula, especially near the tip and on surrounding islands, include the chinstrap penguin, emperor penguin, gentoo penguin and the Adélie penguin. Petermann Island hosts the world's southernmost colony of gentoo penguins. The exposed rock on the island is one of many locations on the peninsula that provide good habitat for rookeries. The penguins return each year and may reach populations of more than ten thousand.
Of these, the most common on the Antarctic Peninsula are the chinstrap and gentoo; the only breeding colony of emperor penguins in West Antarctica is an isolated population on the Dion Islands, in Marguerite Bay on the west coast of the peninsula. Most emperor penguins breed in East Antarctica. Seabirds of the Southern Ocean and West Antarctica found on the peninsula include: southern fulmar (Fulmarus glacialoides), the scavenging southern giant petrel (Macronectes giganteus), Cape petrel (Daption capense), snow petrel (Pagodroma nivea), the small Wilson's storm-petrel (Oceanites oceanicus), imperial shag (Phalacrocorax atriceps), snowy sheathbill (Chionis alba), the large south polar skua (Catharacta maccormicki), brown skua (Catharacta lönnbergi), kelp gull (Larus dominicanus), and Antarctic tern (Sterna vittata). The imperial shag is a cormorant which is native to many sub-Antarctic islands, the Antarctic Peninsula and southern South America. Also present are the Antarctic petrel, Antarctic shag, king penguin, macaroni penguin, and Arctic tern. Threats and preservation Although this very remote part of the world has never been inhabited and is protected by the Antarctic Treaty System, which bans industrial development, waste disposal and nuclear testing, there is still a threat to these fragile ecosystems from increasing tourism, primarily on cruises across the Southern Ocean from the port of Ushuaia, Argentina. Paleoflora and paleofauna A rich record of fossil leaves, wood, pollen, and flowers demonstrates that flowering plants thrived in subtropical climates within the volcanic island arcs that occupied the Antarctic Peninsula region during the Cretaceous and very early Paleogene periods. The analysis of fossil leaves and flowers indicates that semitropical woodlands, composed of ancestors of plants that live in the tropics today, thrived within this region during a global thermal maximum, with summer temperatures that averaged 20 °C (68 °F). The oldest fossil plants come from the middle Cretaceous (Albian) Fossil Bluff Group, which outcrops along the edge of Alexander Island. These fossils reveal that at this time the forests consisted of large conifers, with mosses and ferns in the undergrowth. The paleosols, in which the trees are rooted, have physical characteristics indicative of modern soils that form under seasonally dry climates with periodic high rainfall. Younger Cretaceous strata, which outcrop within James Ross, Seymour, and adjacent islands, contain fossil plants of Late Cretaceous angiosperms with leaf morphotypes similar to those of living families such as Sterculiaceae, Lauraceae, Winteraceae, Cunoniaceae, and Myrtaceae. They indicate that the emergent parts of the volcanic island arc, the eroded roots of which now form the central part of the Antarctic Peninsula, were covered by either warm temperate or subtropical forests. These fossil plants are indicative of tropical and subtropical forest at high paleolatitudes during the Middle and Late Cretaceous, which grew in climates without extended periods of below-freezing winter temperatures and with adequate moisture for growth. The Cretaceous strata of James Ross Island also yielded the dinosaur genus Antarctopelta, the first dinosaur fossil to be found in Antarctica. Paleogene and early Eocene marine sediments that outcrop on Seymour Island contain plant-rich horizons.
The fossil plants are dominated by permineralized branches of conifers and compressions of angiosperm leaves, and are found within carbonate concretions. These Seymour Island fossils date to about 51.5–49.5 Ma and are dominated by leaves, cone scales, and leafy branches of Araucarian conifers, very similar in all respects to living Araucaria araucana (monkey puzzle) from Chile. They suggest that the adjacent parts of the prehistoric Antarctic Peninsula were covered by forests that grew in a cool and moist, high-latitude environment during the early Eocene. During the Cenozoic climatic cooling, the Antarctic Peninsula was the last region of Antarctica to have been fully glaciated, according to current research. As a result, this region was probably the last refugium for plants and animals that had inhabited Antarctica after it separated from the Gondwana supercontinent. Analysis of paleontologic, stratigraphic, and sedimentologic data acquired from the study of drill cores and seismic data collected during the Shallow Drilling on the Antarctic Continental Shelf (SHALDRIL) project and other projects, and from fossil collections and rock outcrops within Alexander, James Ross, King George, Seymour, and the South Shetland Islands, has yielded a record of the changes in terrestrial vegetation that occurred within the Antarctic Peninsula over the course of the past 37 million years. This research found that vegetation within the Antarctic Peninsula changed in response to a progressive climatic cooling that started with the initiation of mountain glaciation in the latest Eocene, about 37–34 Ma. The cooling was contemporaneous with glaciation elsewhere in Antarctica and a reduction in atmospheric CO2 concentrations. Initially, during the Eocene, this climatic cooling resulted in a decrease in the diversity of the angiosperm-dominated vegetation that inhabited the northern Antarctic Peninsula. During the Oligocene, about 34–23 Ma, these woodlands were replaced by a mosaic of southern beech (Nothofagus) and conifer-dominated woodlands and tundra as the climate continued to cool. By the middle Miocene, 16–11.6 Ma, a tundra landscape had completely replaced any remaining woodlands, and woodlands became extirpated from the Antarctic Peninsula and all of Antarctica. A tundra landscape probably persisted until about 12.8 Ma, when the transition from temperate, alpine glaciation to a dynamic ice sheet occurred. Eventually, in the early Pliocene, about 5.3–3.6 Ma, the Antarctic Peninsula was overridden by an ice sheet, which has persisted without interruption to this day. See also Argentina–Chile relations § Border issues Brategg Bank Geology of the Antarctic Peninsula Hope Bay incident Instituto Antártico Argentino Rendezvous Rocks Summit Ridge
global north and global south
The concept of Global North and Global South (or North–South divide in a global context) is used to describe a grouping of countries along socio-economic and political lines. The Global South is a term that broadly comprises countries in the regions of Africa, Latin America and the Caribbean, Asia (without Israel, Japan, and South Korea), and Oceania (without Australia and New Zealand), according to the United Nations Conference on Trade and Development (UNCTAD). Most of the countries in the Global South are characterized by low income, dense population, poor infrastructure, and often political or cultural marginalization. The Global South forms one side of the divide; on the other is the Global North (broadly comprising Northern America and Europe, Israel, Japan and South Korea, as well as Australia and New Zealand, according to the UNCTAD). As such, the terms Global North and Global South do not refer to the cardinal directions of north and south, as many of the Global South countries are geographically located in the Northern Hemisphere. Countries that are developed are considered Global North countries, while those that are developing are considered Global South countries. The term as used by governmental and developmental organizations was first introduced as a more open and value-free alternative to "Third World" and similarly potentially "valuing" terms like developing countries. Countries of the Global South have been described as newly industrialized or as being in the process of industrializing, and are frequently current or former subjects of colonialism. The Global North generally correlates with the Western world, while the South largely corresponds with the developing countries and the Eastern world. The two groups are often defined in terms of their differing levels of wealth, economic development, income inequality, democracy, and political and economic freedom, as defined by freedom indices. States that are generally seen as part of the Global North tend to be wealthier and less unequal; they are developed countries which export technologically advanced manufactured products. Southern states are generally poorer developing countries with younger, more fragile democracies that are heavily dependent on primary-sector exports; many of the Southern states also share a common history of past colonialism under Northern states. Nevertheless, some scholars have suggested that the gap of inequality between the North and the South is narrowing due to globalization, while other scholars have disputed this position, suggesting that the Global South has actually gotten poorer, relative to the North, since globalization. South–South cooperation (SSC) has increased to "challenge the political and economic dominance of the North." This cooperation has become a popular political and economic concept following the geographical migration of manufacturing and production activity from the North to the Global South, and the diplomatic actions of several states like China. These contemporary economic trends have "enhanced the historical potential of economic growth and industrialization in the Global South," which has renewed targeted SSC efforts to "loosen the strictures imposed during the colonial era and transcend the boundaries of postwar political and economic geography." Definition The terms are not strictly geographical, and are not "an image of the world divided by the equator, separating richer countries from their poorer counterparts."
Rather, geography should be more readily understood as economic and migratory, the world understood through the "wider context of globalization or global capitalism." Generally, the Global North is not an exclusively geographical category; it broadly comprises Northern America and Europe, Israel, Japan and South Korea, as well as Australia and New Zealand, according to the UNCTAD. The Global South broadly comprises Africa, Latin America and the Caribbean, Asia without Israel, Japan, and South Korea, and Oceania without Australia and New Zealand, also according to the UNCTAD. Some, such as Australian sociologists Fran Collyer and Raewyn Connell, have argued that Australia and New Zealand are marginalized in similar ways to other Global South countries, due to their geographical isolation and location in the Southern Hemisphere. The Global South is generally seen as home to Brazil, India, Pakistan, Indonesia and China, which, along with Nigeria and Mexico, are the largest Southern states in terms of land area and population. The overwhelming majority of the Global South countries are located in or near the tropics. The term Global North is often used interchangeably with developed countries. Likewise, the term Global South is often used interchangeably with developing countries. Development of the terms Carl Oglesby used the term "global south" in 1969, writing in the Catholic journal Commonweal in a special issue on the Vietnam War. Oglesby argued that centuries of northern "dominance over the global south […] [has] converged […] to produce an intolerable social order." The term gained appeal throughout the second half of the 20th century, and its use rapidly accelerated in the early 21st century. It appeared in fewer than two dozen publications in 2004, but in hundreds of publications by 2013. The emergence of the new term meant looking at the troubled realities of its predecessors, i.e. "Third World" or "Developing World"; the term "Global South", in contrast, was intended to be less hierarchical. The idea of categorizing countries by their economic and developmental status began during the Cold War with the classifications of East and West. The Soviet Union and China represented the East, and the United States and its allies represented the West. The term Third World came into parlance in the second half of the twentieth century. It originated in a 1952 article by Alfred Sauvy entitled "Trois Mondes, Une Planète" ("Three Worlds, One Planet"). Early definitions of the Third World emphasized its exclusion from the East–West conflict of the Cold War as well as the ex-colonial status and poverty of the peoples it comprised. Efforts to mobilize the Third World as an autonomous political entity were undertaken. The 1955 Bandung Conference was an early meeting of Third World states in which an alternative to alignment with either the Eastern or Western Blocs was promoted. Following this, the first Non-Aligned Summit was organized in 1961. Contemporaneously, a mode of economic criticism which separated the world economy into "core" and "periphery" was developed and given expression in a project for political reform which "moved the terms 'North' and 'South' into the international political lexicon." In 1973, the pursuit of a New International Economic Order which was to be negotiated between the North and South was initiated at the Non-Aligned Summit held in Algiers.
Also in 1973, the oil embargo initiated by Arab OPEC countries as a result of the Yom Kippur War caused an increase in world oil prices, with prices continuing to rise throughout the decade. This contributed to a worldwide recession which resulted in industrialized nations increasing economically protectionist policies and contributing less aid to the less developed countries of the South. The slack was taken up by Western banks, which provided substantial loans to Third World countries. However, many of these countries were not able to pay back their debt, which led the IMF to extend further loans to them on the condition that they undertake certain liberalizing reforms. This policy, which came to be known as structural adjustment and was institutionalized by International Financial Institutions (IFIs) and Western governments, represented a break from the Keynesian approach to foreign aid which had been the norm since the end of the Second World War. After 1987, reports on the negative social impacts that structural adjustment policies had had on affected developing nations led IFIs to supplement structural adjustment policies with targeted anti-poverty projects. Following the end of the Cold War and the break-up of the Soviet Union, some Second World countries joined the First World, and others joined the Third World. A new and simpler classification was needed, and use of the terms "North" and "South" became more widespread. Brandt Line The Brandt Line is a visual depiction of the north–south divide, proposed by former West German Chancellor Willy Brandt in the 1980s in the report titled North-South: A Programme for Survival, later known as the Brandt Report. This line divides the world at a latitude of approximately 30° North, passing between the United States and Mexico, north of Africa and the Middle East, climbing north over China and Mongolia, then dipping south to include Japan, Australia, and New Zealand in the "Rich North". As of 2023, the Brandt Line has been criticised as outdated, yet it is still regarded as a helpful way to visualise global inequalities. Uses of the term Global South Global South "emerged in part to aid countries in the southern hemisphere to work in collaboration on political, economic, social, environmental, cultural, and technical issues." This is called South–South cooperation (SSC), a "political and economical term that refers to the long-term goal of pursuing world economic changes that mutually benefit countries in the Global South and lead to greater solidarity among the disadvantaged in the world system." The hope is that countries within the Global South will "assist each other in social, political, and economical development, radically altering the world system to reflect their interests and not just the interests of the Global North in the process." It is guided by the principles of "respect for national sovereignty, national ownership, independence, equality, non-conditionality, non-interference in domestic affairs, and mutual benefit." Countries using this model of South–South cooperation see it as a "mutually beneficial relationship that spreads knowledge, skills, expertise and resources to address their development challenges such as high population pressure, poverty, hunger, disease, environmental deterioration, conflict and natural disasters."
These countries also work together to deal with "cross border issues such as environmental protection, HIV/AIDS," and the movement of capital and labor. Social psychiatrist Vincenzo Di Nicola has applied the Global South as a bridge between the critiques of globalization and the gaps and limitations of the Global Mental Health Movement, invoking Boaventura de Sousa Santos' notion of "epistemologies of the South" to create a new epistemology for social psychiatry. Defining development Being categorized as part of the "North" implies development as opposed to belonging to the "South", which implies a lack thereof. According to N. Oluwafemi Mimiko, the South lacks the right technology, it is politically unstable, its economies are divided, and its foreign exchange earnings depend on primary product exports to the North, along with the fluctuation of prices. The low level of control it exercises over imports and exports condemns the South to conform to the 'imperialist' system. The South's lack of development and the high level of development of the North deepen the inequality between them and leave the South a source of raw material for the developed countries. The North becomes synonymous with economic development and industrialization, while the South represents the previously colonized countries which are in need of help in the form of international aid agendas. In order to understand how this divide occurs, a definition of "development" itself is needed. Northern countries are using most of the Earth's resources, and most of them are high-entropy fossil fuels. Reducing emission rates of toxic substances is central to the debate on sustainable development, but this can negatively affect economic growth. The Dictionary of Human Geography defines development as "processes of social change or [a change] to class and state projects to transform national economies". This definition entails an understanding of economic development which is imperative when trying to understand the North–South divide. Economic development is a measure of progress in a specific economy. It refers to advancements in technology, a transition from an economy based largely on agriculture to one based on industry, and an improvement in living standards. Other factors included in the conceptualization of what a developed country is include life expectancy and the levels of education, poverty and employment in that country. Furthermore, in Regionalism Across the North-South Divide: State Strategies and Globalization, Jean Grugel states that the three factors that direct the economic development of states within the Global South are "élite behaviour within and between nation states, integration and cooperation within 'geographic' areas, and the resulting position of states and regions within the global world market and related political economic hierarchy." Theories explaining the divide The development disparity between the North and the South has sometimes been explained in historical terms. Dependency theory looks back on the patterns of colonial relations which persisted between the North and South and emphasizes how colonized territories tended to be impoverished by those relations.
Theorists of this school maintain that the economies of ex-colonial states remain oriented towards serving external rather than internal demand, and that development regimes undertaken in this context have tended to reproduce in underdeveloped countries the pronounced class hierarchies found in industrialized countries while maintaining higher levels of poverty. Dependency theory is closely intertwined with Latin American Structuralism, the only school of development economics emerging from the Global South to be affiliated with a national research institute and to receive support from national banks and finance ministries. The Structuralists defined dependency as the inability of a nation's economy to complete the cycle of capital accumulation without reliance on an outside economy. More specifically, peripheral nations were perceived as primary resource exporters reliant on core economies for manufactured goods. This led structuralists to advocate for import-substitution industrialization policies which aimed to replace manufactured imports with domestically made products. New Economic Geography explains development disparities in terms of the physical organization of industry, arguing that firms tend to cluster in order to benefit from economies of scale and increase productivity, which ultimately leads to an increase in wages. The North has more firm clustering than the South, making its industries more competitive. It is argued that only when wages in the North reach a certain level will it become more profitable for firms to operate in the South, allowing clustering to begin. Associated theories The term Global South has many theories associated with it. Since many of the countries that are considered to be a part of the Global South were first colonized by Global North countries, they are at a disadvantage in developing as quickly. Dependency theorists suggest that information has a top-down approach and first goes to the Global North before countries in the Global South receive it. Although many of these countries rely on political or economic help, this also opens up opportunity for information to develop Western bias and create an academic dependency. Meneleo Litonjua describes the reasoning behind distinctive problems of dependency theory as "the basic context of poverty and underdevelopment of Third World/Global South countries was not their traditionalism, but the dominance-dependence relationship between rich and poor, powerful and weak countries." What brought about much of this dependency was the push to become modernized. After World War II, the U.S. made an effort to assist developing countries financially in an attempt to pull them out of poverty. Modernization theory "sought to remake the Global South in the image and likeliness of the First World/Global North." In other terms, "societies can be fast-tracked to modernization by 'importing' Western technical capital, forms of organization, and science and technology to developing countries." With this ideology, as long as countries follow in Western ways, they can develop more quickly. After modernization attempts took place, theorists started to question the effects through post-development perspectives. Postdevelopment theorists try to explain that not all developing countries need to be following Western ways but instead should create their own development plans. 
This means that "societies at the local level should be allowed to pursue their own development path as they perceive it without the influences of global capital and other modern choices, and thus a rejection of the entire paradigm from Eurocentric model and the advocation of new ways of thinking about the non-Western societies." The goals of postdevelopment was to reject development rather than reform by choosing to embrace non-Western ways. Challenges The accuracy of the North–South divide has been challenged on a number of grounds. Firstly, differences in the political, economic and demographic make-up of countries tend to complicate the idea of a monolithic South. Globalization has also challenged the notion of two distinct economic spheres. Following the liberalization of post-Mao China initiated in 1978, growing regional cooperation between the national economies of Asia has led to the growing decentralization of the North as the main economic power. The economic status of the South has also been fractured. As of 2015, all but roughly the bottom 60 nations of the Global South were thought to be gaining on the North in terms of income, diversification, and participation in the world market.However, other scholars, notably Jason Hickel and Robert Wade have suggested that the Global South is not rising economically, and that global inequality between the North and South has risen since globalization. Hickel has suggested that the exchange of resources between the South and the North is substantially unbalanced in favor of the North, with Global North countries extracting a windfall of over 240 trillion dollars from the Global South in 2015. This figure outstrips the amount of financial aid given to Global South by a factor of 30.Globalization has largely displaced the North–South divide as the theoretical underpinning of the development efforts of international institutions such as the IMF, World Bank, WTO, and various United Nations affiliated agencies, though these groups differ in their perceptions of the relationship between globalization and inequality. Yet some remain critical of the accuracy of globalization as a model of the world economy, emphasizing the enduring centrality of nation-states in world politics and the prominence of regional trade relations. Lately, there have been efforts to integrate the Global South more meaningfully into the world economic order. The divide between the North and South challenges international environmental cooperation. The economic differences between North and South have created dispute over the scientific evidence and data regarding global warming and what needs to be done about it, as the South do not trust Northern data and cannot afford the technology to be able to produce their own. In addition to these disputes, there are serious divisions over responsibility, who pays, and the possibility for the South to catch up. This is becoming an ever-growing issue with the emergence of rising powers, imploding these three divisions just listed and making them progressively blurry. Multiplicity of actors, such as governments, businesses, and NGO's all influence any positive activity that can be taken into preventing further global warming problems with the Global North and Global South divide, contributing to the severity of said actors. Disputes between Northern countries governments and Southern countries governments has led to a break down in international discussions with governments from either side disagreeing with each other. 
Addressing most environmental problems requires international cooperation, and the North–South divide contributes to stagnation over any form of implementation and enforcement, which remains a key issue. Debates over the term As the term developed, many scholars preferred using the Global South over its predecessors, such as "developing countries" and "Third World". Leigh Anne Duck, co-editor of Global South, argued that the term is better suited to resisting "hegemonic forces that threaten the autonomy and development of these countries." The Global South / Global North distinction has been preferred to the older developed / developing dichotomy as it does not imply a hierarchy. Alvaro Mendez, co-founder of the London School of Economics and Political Science's Global South Unit, has applauded the empowering aspects of the term. In an article, Discussion on Global South, Mendez discusses emerging economies in nations like China, India and Brazil. It is predicted that by 2030, 80% of the world's middle-class population will be living in developing countries. The popularity of the term "marks a shift from a central focus on development and cultural difference" and recognizes the importance of geopolitical relations. Critics of this usage often argue that it is a vague blanket term. Others have argued that the term, its usage, and its subsequent consequences mainly benefit those from the upper classes of countries within the Global South, who stand "to profit from the political and economic reality [of] expanding south-south relations." According to scholar Anne Garland Mahler, this nation-based understanding of the Global South is regarded as an appropriation of a concept that has deeper roots in Cold War radical political thought. In this political usage, the Global South is employed in a more geographically fluid way, referring to "spaces and peoples negatively impacted by contemporary capitalist globalization." In other words, "there are economic Souths in the geographic North and Norths in the geographic South." Through this geographically fluid definition, another meaning is attributed to the Global South where it refers to a global political community that is formed when the world's "Souths" recognize one another and view their conditions as shared. The geographical boundaries of the Global South remain a source of debate. Some scholars agree that the term is not a "static concept". Others have argued against "grouping together a large variety of countries and regions into one category [because it] tends to obscure specific (historical) relationships between different countries and/or regions", and the power imbalances within these relationships. This "may obscure wealth differences within countries – and, therefore, similarities between the wealthy in the Global South and Global North, as well as the dire situation the poor may face all around the world." Future development Some economists have argued that international free trade and unhindered capital flows across countries could lead to a contraction in the North–South divide. In this case, more equal trade and flow of capital would allow the possibility for developing countries to further develop economically. As some countries in the South experience rapid development, there is evidence that those states are developing high levels of South–South aid. 
Brazil, in particular, has been noted for its high levels of aid ($1 billion annually, ahead of many traditional donors) and the ability to use its own experiences to provide high levels of expertise and knowledge transfer. This has been described as a "global model in waiting". The United Nations has also established its role in diminishing the divide between North and South through the Millennium Development Goals, all of which were to be achieved by 2015. These goals sought to eradicate extreme poverty and hunger, achieve global universal education and healthcare, promote gender equality and empower women, reduce child mortality, improve maternal health, combat HIV/AIDS, malaria, and other diseases, ensure environmental sustainability, and develop a global partnership for development. These were replaced in 2015 by 17 Sustainable Development Goals (SDGs). The SDGs, set in 2015 by the United Nations General Assembly and intended to be achieved by 2030, are part of a UN Resolution called "The 2030 Agenda". Society and culture Digital and technological divide The global digital divide is often characterized as corresponding to the north–south divide; however, Internet use, and especially broadband access, is now soaring in Asia compared with other continents. This phenomenon is partially explained by the ability of many countries in Asia to leapfrog older Internet technology and infrastructure, coupled with booming economies which allow vastly more people to get online. Media representation See also BRICS, CIVETS, MINT, VISTA East–West dichotomy First World Golden billion Group of 77 Inglehart–Welzel cultural map of the world International Solar Alliance Non-Aligned Movement North–South Centre, an institution of the Council of Europe, awarding the North–South Prize North–South model, in economic theory North–South Summit, the only North–South summit ever held, with 22 heads of state and government taking part Northern and southern China Three-world model World-systems theory Global majority, roughly corresponding to Global South peoples Notes References External links Share The World's Resources: The Brandt Commission Report, a 1980 report by a commission led by Willy Brandt that popularized the terminology Brandt 21 Forum, a recreation of the original commission with an updated report (information on original commission at site)
cement clinker
Cement clinker is a solid material produced in the manufacture of Portland cement as an intermediary product. Clinker occurs as lumps or nodules, usually 3 millimetres (0.12 in) to 25 millimetres (0.98 in) in diameter. It is produced by sintering (fusing together without melting to the point of liquefaction) limestone and aluminosilicate materials such as clay during the cement kiln stage. Composition and preparation Portland clinker essentially consists of four minerals: two calcium silicates, alite (Ca3SiO5) and belite (Ca2SiO4), along with tricalcium aluminate (Ca3Al2O6) and calcium aluminoferrite (Ca2(Al,Fe)2O5). These main mineral phases are produced by heating clays and limestone at high temperature. The major raw material for clinker-making is usually limestone mixed with a second material containing clay as a source of alumino-silicate. An impure limestone containing clay or silicon dioxide (SiO2) can be used. The calcium carbonate (CaCO3) content of these limestones can be as low as 80% by weight. During the calcination process that occurs in the production of clinker, limestone is decomposed into lime (calcium oxide), which is incorporated into the final clinker product, and carbon dioxide, which is typically released into the atmosphere. The second raw materials (materials in the rawmix other than limestone) depend on the purity of the limestone. Some of the second raw materials used are: clay, shale, sand, iron ore, bauxite, fly ash and slag. Portland cement clinker is made by heating a homogeneous mixture of raw materials in a rotary kiln at high temperature. The products of the chemical reaction aggregate together at their sintering temperature, about 1,450 °C (2,640 °F). Aluminium oxide and iron oxide are present only as a flux to reduce the sintering temperature and contribute little to the cement strength. For special cements, such as low heat (LH) and sulfate resistant (SR) types, it is necessary to limit the amount of tricalcium aluminate formed. The clinker and its hydration reactions are characterized and studied in detail by many techniques, including calorimetry, strength development, X-ray diffraction, scanning electron microscopy and atomic force microscopy. Uses Portland cement clinker (abbreviated k in the European norms) is ground to a fine powder and used as the binder in many cement products. A small amount of gypsum (less than 5 wt.%) must be added to avoid the flash setting of the tricalcium aluminate (Ca3Al2O6), the most reactive mineral phase (exothermic hydration reaction) in Portland clinker. It may also be combined with other active ingredients or cement additions to produce other types of cement including, following the European EN 197-1 standard: CEM I: pure Portland clinker (Ordinary Portland Cement, OPC) CEM II: composite cements with a limited addition of limestone filler or blast furnace slag (BFS) CEM III: BFS-OPC blast furnace cements CEM IV: pozzolanic cements CEM V: composite cements (with large additions of BFS, fly ashes, or silica fume)Clinker is one of the ingredients of an artificial rock imitating limestone and called pulhamite after its inventor, James Pulham (1820–1898). Other ingredients were Portland cement and sand. Pulhamite can be extremely convincing and was popular in creating natural looking rock gardens in the nineteenth century. Clinker, if stored in dry conditions, can be kept for several months without appreciable loss of quality. 
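The calcination chemistry described above (CaCO3 → CaO + CO2) implies a substantial "process" CO2 emission per tonne of clinker, before any kiln fuel is even burned. A minimal, illustrative sketch of that stoichiometry is shown below; the assumed CaO fraction of clinker is an assumption for illustration, not a figure taken from this article.

```python
# Illustrative back-of-the-envelope estimate of process CO2 from clinker calcination.
# Assumption (not from the article): clinker is roughly 65% CaO by mass, all of it
# coming from the calcination of calcium carbonate: CaCO3 -> CaO + CO2.

M_CAO = 56.08   # molar mass of CaO, g/mol
M_CO2 = 44.01   # molar mass of CO2, g/mol

def process_co2_per_tonne_clinker(cao_fraction: float = 0.65) -> float:
    """Return kg of CO2 released by calcination per tonne of clinker produced."""
    cao_kg = cao_fraction * 1000.0        # kg of CaO in one tonne of clinker
    return cao_kg * (M_CO2 / M_CAO)       # each mole of CaO frees one mole of CO2

if __name__ == "__main__":
    print(f"~{process_co2_per_tonne_clinker():.0f} kg CO2 per tonne of clinker")
    # ~510 kg CO2/t from calcination alone, before fuel combustion in the kiln.
```

Under these assumptions, roughly half a tonne of CO2 is released per tonne of clinker from the chemistry alone, which is consistent with the large share of cement-related emissions attributed to clinker manufacture later in this article.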
Because clinker keeps well in dry storage and can be easily handled by ordinary mineral handling equipment, it is internationally traded in large quantities. Cement manufacturers purchasing clinker usually grind it as an addition to their own clinker at their cement plants. Manufacturers also ship clinker to grinding plants in areas where cement-making raw materials are not available. Clinker grinding aids Gypsum is added to clinker primarily as an additive preventing the flash setting of the cement, but it is also very effective in facilitating the grinding of clinker by preventing agglomeration and coating of the powder on the surface of the balls and mill wall. Organic compounds are also often added as grinding aids to avoid powder agglomeration. Triethanolamine (TEA) is commonly used at 0.1 wt. % and has proved to be very effective. Other additives are sometimes used, such as ethylene glycol, oleic acid, and dodecyl-benzene sulfonate. Clinker minerals hydration Upon addition of water, clinker minerals react to form different types of hydrates and "set" (harden) as the hydrated cement paste becomes concrete. The calcium silicate hydrates (C-S-H) (hydrates of alite and belite minerals) represent the main "glue" components of the concrete. After initial setting, the concrete continues to harden and to develop its mechanical strength. The first 28 days are the most critical for the hardening. Strictly speaking, the concrete does not dry; rather, it sets and hardens. Cement is a hydraulic binder whose hydration requires water; it can set perfectly well under water. Water is essential to its hardening, and water losses must be avoided at an early age to avoid the development of cracks. Young concrete is protected against desiccation (evaporation of unreacted water). Traditional methods for preventing desiccation involve covering the product with wet burlap or use of plastic sheeting. For larger projects, such as highways, the surface is sprayed with a solution of curing compound that leaves a water-impermeable coating. Contribution to global warming As of 2018, cement production accounted for about 8% of all carbon emissions worldwide, contributing substantially to global warming. Most of those emissions were produced in the clinker manufacturing process. See also Clinker (waste) Environmental impact of concrete == References ==
season creep
In phenology, season creep refers to observed changes in the timing of the seasons, such as earlier indications of spring widely observed in temperate areas across the Northern Hemisphere. Phenological records analyzed by climate scientists have shown significant temporal trends in the observed time of seasonal events, from the end of the 20th century and continuing into the 21st century. In Europe, season creep has been associated with the arrival of spring moving up by approximately one week in a recent 30-year period. Other studies have put the rate of season creep measured by plant phenology in the range of 2–3 days per decade advancement in spring, and 0.3–1.6 days per decade delay in autumn, over the past 30–80 years. Observable changes in nature related to season creep include birds laying their eggs earlier and buds appearing on some trees in late winter. In addition to advanced budding, flowering trees have been blooming earlier, for example the culturally important cherry blossoms in Japan and in Washington, D.C. Northern hardwood forests have been trending toward leafing out sooner, and retaining their green canopies longer. The agricultural growing season has also expanded by 10–20 days over the last few decades. The effects of season creep have been noted by non-scientists as well, including gardeners who have advanced their spring planting times, and experimented with plantings of less hardy warmer climate varieties of non-native plants. While summer growing seasons are expanding, winters are getting warmer and shorter, resulting in reduced winter ice cover on bodies of water, earlier ice-out, earlier melt water flows, and earlier spring lake level peaks. Some spring events, or "phenophases", have become intermittent or unobservable; for example, bodies of water that once froze regularly most winters now freeze less frequently, and formerly migratory birds are now seen year-round in some areas. Relationship to global warming The full impact of global warming is forecast to happen in the future, but climate scientists have cited season creep as an easily observable effect of climate change that has already occurred and continues to occur. A large systematic phenological examination of data on 542 plant species in 21 European countries from 1971 to 2000 showed that 78% of all leafing, flowering, and fruiting records advanced while only 3% were significantly delayed, and these observations were consistent with measurements of observed warming. Similar changes in the phenology of plants and animals are occurring across marine, freshwater, and terrestrial groups studied, and these changes are also consistent with the expected impact of global warming. While phenology fairly consistently points to an earlier spring across temperate regions of North America, a recent comprehensive study of the subarctic showed greater variability in the timing of green-up, with some areas advancing, and some having no discernible trend over a recent 44-year period. Another 40-year phenological study in China found greater warming over that period in the more northerly sites studied, with sites experiencing cooling mostly in the south, indicating that the temperature variation with latitude is decreasing there. 
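The per-decade phenological rates and the 10–20 day growing-season expansion quoted above are mutually consistent, as the short illustrative sketch below shows. The specific mid-range values used (2.5 days per decade of earlier spring, 1.0 day per decade of later autumn, over four decades) are assumptions chosen for illustration, not figures reported by any single study cited here.

```python
# Illustrative check of how per-decade phenological trends compound into a change
# in growing-season length, assuming the trends are roughly linear over time.

def growing_season_change(spring_advance_per_decade: float,
                          autumn_delay_per_decade: float,
                          decades: float) -> float:
    """Days added to the growing season after the given number of decades."""
    return (spring_advance_per_decade + autumn_delay_per_decade) * decades

if __name__ == "__main__":
    # Assumed mid-range values: 2.5 d/decade earlier spring, 1.0 d/decade later
    # autumn, over four decades.
    print(growing_season_change(2.5, 1.0, 4))   # -> 14.0 days, within the 10-20 day range
```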
The Chinese study also confirmed that season creep was correlated with warming, but the effect is non-linear: phenophases advanced less with greater warming and were delayed more with greater cooling. Shorter winters and longer growing seasons may appear to be a benefit to society from global warming, but the effects of advanced phenophases may also have serious consequences for human populations. Modeling of snowmelt predicted that warming of 3 to 5 °C in the Western United States could cause snowmelt-driven runoff to occur as much as two months earlier, with profound effects on hydroelectricity, land use, agriculture, and water management. Since 1980, earlier snowmelt and the accompanying warming have also been linked to an increase in the length and severity of the wildfire season there. Season creep may also have adverse effects on plant species. Earlier flowering could occur before pollinators such as honey bees become active, which would have negative consequences for pollination and reproduction. Shorter and warmer winters may affect other environmental adaptations including cold hardening of trees, which could result in frost damage during more severe winters. Etymology Season creep was included in the 9th edition of the Collins English Dictionary published in London on June 4, 2007. The term was popularized in the media after the report titled "Season Creep: How Global Warming Is Already Affecting The World Around Us" was published by the American environmental organization Clear the Air on March 21, 2006. In the "Season Creep" report, Jonathan Banks, Policy Director for Clear the Air, introduced the term as follows: While to some, an early arrival of spring may sound good, an imbalance in the ecosystem can wreak havoc. Natural processes like flowers blooming, birds nesting, insects emerging, and ice melting are triggered in large part by temperature. As temperatures increase globally, the delicately balanced system begins to fall into ecological disarray. We call this season creep. See also Orbital forcing Climate change and birds Phenological mismatch Other uses The term "season creep" has been applied in other contexts as well: In professional sports, season creep refers to lengthening of the playing season, especially the extension of the MLB season to 162 games. In college athletics, season creep refers to longer periods athletes spend training in their sport. In American politics, campaign season creep refers to the need for candidates to start fundraising activities sooner. In retailing, holiday season creep, also known as Christmas creep, refers to the earlier appearance of Christmas-themed merchandising, extending the holiday shopping season. == References ==
ghost forest
Ghost forests are areas of dead trees in former forests, typically in coastal regions where rising sea levels or tectonic shifts have altered the height of a land mass. Forests located near the coast or estuaries may also be at risk of dying through saltwater poisoning, if invading seawater reduces the amount of freshwater that deciduous trees receive for sustenance. By looking at the stratigraphic record, it is possible to reconstruct the series of events that leads to the creation of a ghost forest at a convergent plate boundary: orogenic uplift, followed by earthquakes that cause subsidence and tsunamis, altering the coast. Formations Sea level changes When there is a change in sea level, coastal regions may become inundated with sea water. This can alter coastal areas and kill large areas of trees, leaving behind what is called a "ghost forest." This type of ghost forest may develop in a variety of environments and in many different places. In the southern US, coastal marshes are expanding into dry wooded areas, killing trees and leaving behind areas of dead trees called snags. Regions of the US at or below sea level are more susceptible to tides. Coastal features affected by changing sea levels are indirectly affected by climate change. With global sea-level rise, the coastlines in the southern US are being altered, leaving behind salt marshes filled with dead and dying trees in some areas. Tectonic activity Ghost forests can also result from tectonic activity. In the Pacific Northwest, there is a large, active subduction zone called the Cascadia Subduction Zone. Here, there is a convergent plate boundary where the Gorda plate, the Juan de Fuca plate, and the Explorer plate are being subducted underneath the North American plate. As these plates attempt to slide past one another, they often become stuck. For several hundred years the plates remain locked in place and tension builds. As a result of this tension, there is orogenic uplift. This is where the tension building between two converging plates gets translated into the vertical uplift of the mountains on the coast. Orogenic uplift is usually associated with earthquakes and mountain building. But then, every 500+ years, there is a large earthquake in the Cascadia Subduction Zone and all that built-up tension is released. The release of this tension results in what is called subsidence. With subsidence, the once-elevated coastline drops several meters, to below sea level. Here, sea level has not changed, but the coastline has been deformed, making it susceptible to tides. Areas of the coastline can be inundated with sea water, creating marshes and leaving behind ghost forests. The Cascadia event was documented by geologist Brian Atwater in his book, The Orphan Tsunami. Gathering evidence from both the trees and the ground, he determined that the earthquake and tsunami had occurred sometime between 1680 and 1720, but he could not pinpoint the exact date. Japanese scientists, who had extensive records of tsunamis dating back to 684 AD, read the report, and told Atwater that they knew the date and even the precise time: January 26, 1700 at 9:00 p.m. Several hours after the earthquake, tsunami waves had crossed the ocean and wiped out a fishing village. The Japanese were baffled because there was no earthquake anywhere near Japan to account for the tsunami. The ghost forest in Washington thus provided the evidence for its origin. 
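The "several hours" between the Cascadia earthquake and the tsunami's arrival in Japan can be checked with the standard shallow-water approximation, in which a tsunami's speed depends only on ocean depth. The sketch below is a rough, illustrative estimate; the assumed mean Pacific depth and the Cascadia-to-Japan distance are approximations of my own, not values given in this article.

```python
import math

# Illustrative estimate of trans-Pacific tsunami travel time using the
# shallow-water approximation: v = sqrt(g * d).
# Assumptions: mean Pacific depth ~4,000 m; Cascadia-to-Japan distance ~7,500 km.

g = 9.81                 # gravitational acceleration, m/s^2
depth_m = 4000.0         # assumed mean ocean depth along the path
distance_km = 7500.0     # assumed great-circle distance to Japan

speed_ms = math.sqrt(g * depth_m)               # ~198 m/s, roughly 710 km/h
travel_time_h = distance_km * 1000 / speed_ms / 3600

print(f"speed ~{speed_ms * 3.6:.0f} km/h, travel time ~{travel_time_h:.0f} hours")
# ~10 hours: consistent with an orphan tsunami reaching Japan some hours after a
# 9:00 p.m. (local time) earthquake on the Cascadia coast on January 26, 1700.
```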
Tsunamis In addition to subsidence, large earthquakes can also cause tsunamis. It is possible to determine that ghost forests in the Pacific Northwest were created by earthquakes and subsidence by looking at the stratigraphic record. Digging into the earth adjacent to a ghost forest exposes layers of sediment that reveal its stratigraphy. Layers rich in organic material can indicate where the old forest floor was located prior to subsidence. On top of this layer there will often be a large sandy deposit. This layer represents the tsunami event, when the coast was flooded with sea water carrying sandy sediment. Superimposed on top of the tsunami deposit will be a muddy deposit, representative of an area subjected to ocean tides. Global warming The mountain pine beetle, Dendroctonus ponderosae Hopkins, is a significant ecological force at the landscape level. The majority of the beetle's life cycle is spent as larvae feeding in the phloem tissue (inner bark) of host pine trees. This feeding activity eventually girdles and kills successfully attacked trees. Mismanaged forests have resulted in increased mountain pine beetle activity. These direct and indirect effects potentially have devastating consequences for whitebark and other high-elevation pines. Examples Odiorne Point in Rye, New Hampshire, USA, where a transatlantic cable was laid in 1874, and at extreme low tides on nearby Jenness Beach. Neskowin Ghost Forest See also Neskowin Ghost Forest == References ==
the discovery of global warming
The Discovery of Global Warming is a book by physicist and historian Spencer R. Weart published in 2003; revised and updated edition, 2008. It traces the history of scientific discoveries that led to the current scientific opinion on climate change. It has been translated into Spanish, Japanese, Italian, Arabic, Chinese and Korean. Reviews Christie, Maureen (March–April 2004). "Hot Topic". American Scientist. Ehrlich, Robert (April 2004). "Heat exchange: the global warming debate mixes daunting complexity with high political stakes, a toxic brew that continues to test dispassionate science". Natural History. Maurellis, Ahilleas (May 2004). "Warming to the story of climate change". Physics World. 17 (5): 46. doi:10.1088/2058-7058/17/5/38. Schneider, Stephen H. (15 January 2004). "Warning of warming". Nature. 427 (6971): 197–8. Bibcode:2004Natur.427..197S. doi:10.1038/427197a. Revkin, Andrew C. (5 October 2003). "Living in the Greenhouse". New York Times. Crowley, Thomas J. (30 April 2004). "Something Warm, Something New". Science. 304 (5671): 685–6. doi:10.1126/science.1094781. S2CID 140638683. Oppenheimer, Michael (2004). "The Discovery of Global Warming. By Spencer R. Weart". Environmental History. 9 (2): 327–8. doi:10.2307/3986102. JSTOR 3986102. See also History of climate change science Global warming portal External links Online expanded and updated version of The Discovery of Global Warming
geography of greenland
Greenland is located between the Arctic Ocean and the North Atlantic Ocean, northeast of Canada and northwest of Iceland. The territory comprises the island of Greenland—the largest island in the world—and more than a hundred other smaller islands (see alphabetic list). Greenland has a 1.2 kilometre (0.75 mi) long border with Canada on Hans Island. A sparse population is confined to small settlements along certain sectors of the coast. Greenland possesses the world's second-largest ice sheet. Greenland sits atop the Greenland plate, a subplate of the North American plate. The Greenland craton is made up of some of the oldest rocks on the face of the earth. The Isua greenstone belt in southwestern Greenland contains the oldest known rocks on Earth, dated at 3.7–3.8 billion years old. The vegetation is generally sparse, with the only patch of forested land being found in Nanortalik Municipality in the extreme south near Cape Farewell. The climate is arctic to subarctic, with cool summers and cold winters. The terrain is mostly a flat but gradually sloping icecap that covers all land except for a narrow, mountainous, barren, rocky coast. The lowest elevation is sea level and the highest elevation is the summit of Gunnbjørn Fjeld, the highest point in the Arctic at 3,694 meters (12,119 ft). The northernmost point of the island of Greenland is Cape Morris Jesup, discovered by Admiral Robert Peary in 1900. Natural resources include zinc, lead, iron ore, coal, molybdenum, gold, platinum, uranium, hydropower and fish. Area Total area: 2,166,086 km2 Land area: 2,166,086 km2 (410,449 km2 ice-free, 1,755,637 km2 ice-covered) Maritime claims: Territorial sea: 3 nautical miles (5.6 km; 3.5 mi) Exclusive fishing zone: 200 nautical miles (370.4 km; 230.2 mi) Land use Arable land: approximately 6%; some land is used to grow silage. Permanent crops: Approximately 0% Other: 100% (2012 est.) The total population comprises around 56,000 inhabitants, of whom approximately 18,000 live in the capital, Nuuk. Natural hazards Continuous ice sheet covers 84% of the country; the rest is permafrost. Environment – current issues Protection of the Arctic environment, climate change, pollution of the food chain, excessive hunting of endangered species (walrus, polar bears, narwhal, beluga whale and several sea birds). Climate Greenland's climate is a tundra climate on and near the coasts and an ice cap climate in inland areas. It typically has short, cool summers and long, moderately cold winters. Due to Gulf Stream influences, Greenland's winter temperatures are very mild for its latitude. In Nuuk, the capital, average winter temperatures are only −9 °C (16 °F). In comparison, the average winter temperatures for Iqaluit, Nunavut, Canada, are around −27 °C (−17 °F). Conversely, summer temperatures are very low, with an average high around 10 °C (50 °F). This is too low to sustain trees, and the land is treeless tundra. On the Greenland ice sheet, the temperature is far below freezing throughout the year, and record high temperatures have peaked only slightly above freezing. The record high temperature at Summit Camp is 2.2 °C (36.0 °F). In the far south of Greenland, there is a very small forest in the Qinngua Valley, due to summer temperatures being barely high enough to sustain trees. There are mountains over 1,500 metres (4,900 ft) high surrounding the valley, which protect it from cold, fast winds travelling across the ice sheet. It is the only natural forest in Greenland, but is only 15 kilometres (9.3 mi) long. 
Climate change The Greenland ice sheet is 3 kilometers (1.9 mi) thick and broad enough to blanket an area the size of Mexico. The ice is so massive that its weight presses the bedrock of Greenland below sea level and is so all-concealing that not until recently did scientists discover Greenland's Grand Canyon or the possibility that Greenland might actually be three islands.If the ice melted, the interior bedrock below sea level would be covered by water. It is not clear whether this water would be at sea level or a lake above sea level. If it would be at sea level it could connect to the sea at Ilulissat Icefjord, in Baffin Bay and near Nordostrundingen, creating three large islands. But it is most likely that it would be a lake with one drain. It is thought that before the last Ice Age, Greenland had mountainous edges and a lowland (and probably very dry) center which drained to the sea via one big river flowing out westwards, past where Disko Island is now.There is concern about sea level rise caused by ice loss (melt and glaciers falling into the sea) on Greenland. Between 1997 and 2003 ice loss was 68–92 km3/a (16–22 cu mi/a), compared to about 60 km3/a (14 cu mi/a) for 1993/4–1998/9. Half of the increase was from higher summer melting, with the rest caused by the movements of some glaciers exceeding the speeds needed to balance upstream snow accumulation. A complete loss of ice on Greenland would cause a sea level rise of as much as 6.40 meters (21.0 ft). Researchers at NASA's Jet Propulsion Laboratory and the University of Kansas reported in February 2006 that the glaciers are melting twice as fast as they were five years ago. By 2005, Greenland was beginning to lose more ice volume than anyone expected – an annual loss of up to 52 cubic miles or 217 cubic kilometres per year, according to more recent satellite gravity measurements released by JPL. The increased ice loss may be partially offset by increased snow accumulation due to increased precipitation. Between 1991 and 2006, monitoring of the weather at one location (Swiss Camp) found that the average winter temperature had risen almost 10 °F (5.6 °C). Recently, Greenland's three largest outlet glaciers have started moving faster, satellite data show. These are the Jakobshavn Isbræ at Ilulissat on the western edge of Greenland, and the Kangerdlugssuaq and Helheim glaciers on the eastern edge of Greenland. The two latter accelerated greatly during the years 2004–2005, but returned to pre-2004 velocities in 2006. The accelerating ice flow has been accompanied by a dramatic increase in seismic activity. In March 2006, researchers at Harvard University and the Lamont–Doherty Earth Observatory at Columbia University reported that the glaciers now generate swarms of earthquakes up to magnitude 5.0.The retreat of Greenland's ice is revealing islands that were thought to be part of the mainland. In September 2005 Dennis Schmitt discovered an island 400 miles (644 km) north of the Arctic Circle in eastern Greenland which he named Uunartoq Qeqertaq, Inuit for "warming island". Future projections In the Arctic, temperatures are rising faster than anywhere else in the world. Greenland is losing 200 billion tonnes of ice per year. Research suggests that this could increase the sea levels' rise by 30 centimeters by the end of the century. These projections have the possibility of changing as satellite data only dates back to 40 years ago. 
Because the satellite record is so short, researchers must also compare old photographs of glaciers with ones taken today to assess how Greenland's ice is likely to change. Temperature extremes Highest temperatures Lowest temperatures Topography The ice sheet covering Greenland varies significantly in elevation across the landmass, rising dramatically between the coastline at sea level and the east-central interior, where elevations reach 3,200 meters (10,500 ft). The coastlines are rocky and predominantly barren with fjords. Numerous small islands spread from the central to the southern coastlines. Greenland's mountain ranges are partially or completely buried by ice. The highest mountains are in the Watkins Range, which runs along the eastern coast. Greenland's highest mountain is Gunnbjørn Fjeld with a height of 3,700 meters (12,139 ft). Scientists discovered an asteroid impact crater in the northwestern region of Greenland, buried underneath the ice sheet. At a size larger than Washington, D.C., it is the first impact crater found beneath one of Earth's ice sheets. Extreme points This is a list of the extreme points of Greenland, the points that are farther north, south, east or west than any other location. Territory of Greenland Northernmost point — Kaffeklubben Island (83°40'N) – the northernmost permanent land in the world. There are also some shifting gravel bars that lie north of Kaffeklubben, the most northerly ever found being at 83°42'N. Southernmost point — unnamed islet 2.3 km south of Cape Farewell, Egger Island (59°44'N) Westernmost point — Nordvestø, Carey Islands (73°10'W) Easternmost point — Nordostrundingen, Greenland (11°19'W) Highest point — Gunnbjørn Fjeld, 3,694 meters (12,119 ft) Mainland Greenland Northernmost point — Cape Morris Jesup (83°39'N) Southernmost point — Peninsula near Tasiusaq, Kujalleq (59°58'26.4"N) Westernmost point — Cape Alexander (73°08'W) Easternmost point — Nordostrundingen, Greenland (11°19'W) Highest point — Gunnbjørn Fjeld, 3,694 meters (12,119 ft). Towns Greenland has 17 towns – settlements with more than 500 inhabitants. Nuuk is the largest town – and the capital – with roughly one third of the country's urban population. Sisimiut, with approximately 5,500 inhabitants, is the second-largest town, while Ilulissat is number three with around 5,000 inhabitants. History of exploration Gallery See also List of mountain peaks of Greenland List of mountain ranges of Greenland Greenland's Grand Canyon Climate change adaptation in Greenland References External links www.geus.dk Geological map of Greenland from the Geological Survey of Denmark and Greenland (GEUS). "Times Atlas reviews Greenland map accuracy after climate change row" – The Guardian, 22 September 2011
environmental activism of al gore
Al Gore is an American politician and environmentalist. He was vice president of the United States from 1993 to 2001, the Democratic Party's presidential nominee in 2000, and the co-recipient of the 2007 Nobel Peace Prize with the Intergovernmental Panel on Climate Change. He has been involved with the environmental activist movement for a number of decades and has been fully involved in it since he left the vice presidency in 2001. Childhood Gore stated in an interview for The New York Times that his interest in environmentalism began when he was a teenager: As I was entering high school, my mother was reading Silent Spring and the dinner table conversation was about pesticides and the environment ... The year I graduated from college the momentum was building for Earth Day. After that, as I was entering divinity school, the Club of Rome report came out and the limits to growth was a main issue. Politics Congress Gore has been involved with environmental work for a number of decades. In 1976, at 28, after joining the United States House of Representatives, Gore held the "first congressional hearings on the climate change, and co-sponsor[ed] hearings on toxic waste and global warming". He continued to speak on the topic throughout the 1980s and was known as one of the Atari Democrats, later called the "Democrats' Greens, politicians who see issues like clean air, clean water and global warming as the key to future victories for their party". In 1989, while still a Senator, Gore published an editorial in The Washington Post, in which he argued: Humankind has suddenly entered into a brand new relationship with the planet Earth. The world's forests are being destroyed; an enormous hole is opening in the ozone layer. Living species are dying at an unprecedented rate. In 1990, Senator Gore presided over a three-day conference with legislators from over 42 countries, which sought to create a Global Marshall Plan, "under which industrial nations would help less developed countries grow economically while still protecting the environment". The Concord Monitor says that Gore "was one of the first politicians to grasp the seriousness of climate change and to call for a reduction in emissions of carbon dioxide and other greenhouse gases". Vice presidency: 1993–2001 As Vice President, Gore was involved in a number of initiatives related to the environment. He launched the GLOBE program on Earth Day 1994, an education and science activity that, according to Forbes, "made extensive use of the Internet to increase student awareness of their environment". In the late 1990s, Gore strongly pushed for the passage of the Kyoto Protocol, which called for a reduction in greenhouse gas emissions. He was opposed by the Senate, which passed unanimously (95–0) the Byrd–Hagel Resolution (S. Res. 98), which stated the sense of the Senate was that the United States should not be a signatory to any protocol that did not include binding targets and timetables for developing as well as industrialized nations or "would result in serious harm to the economy of the United States". On November 12, 1998, Gore symbolically signed the protocol. Both Gore and Senator Joseph Lieberman indicated that the protocol would not be acted upon in the Senate until there was participation by the developing nations. The Clinton Administration never submitted the protocol to the Senate for ratification. In 1998, Gore became associated with Digital Earth. 
He also began promoting a NASA satellite that would provide a constant view of Earth, marking the first time such an image would have been made since The Blue Marble photo from the 1972 Apollo 17 mission. The "Triana" satellite would have been permanently stationed at the L1 Lagrangian point, 1.5 million km away. This satellite would allow the measurement of the earth's changing reflectivity (albedo) due to melting ice caps, but the project was put on hold by George W. Bush's administration. The satellite was finally launched in 2015 as the Deep Space Climate Observatory. 2001–present Generation Investment Management In 2004, Gore co-launched Generation Investment Management, a company for which he serves as Chair. The company was "a new London fund management firm that plans to create environment-friendly portfolios. Generation Investment will manage assets of institutional investors, such as pension funds, foundations and endowments, as well as those of 'high net worth individuals,' from offices in London and Washington, D.C." The fund's filed accounts showed profits in 2017 of £248.5m, with assets of £14.2bn. Turnover at the London-based operation was £293m with distributed profits of £193m to the firm's 32 members, one of the senior staff receiving £41m (The Sunday Times (UK), September 16, 2018). We Can Solve It Gore and The Alliance for Climate Protection created the We Can Solve It organization, a web-based campaign with multiple television advertisements focused on spreading awareness of the climate crisis (global warming) and petitioning for more press attention on the crisis, more government action to help the environment, and, as its ultimate goal, an end to global warming. Although focused mostly on the United States and Americans, it is an international petition and effort. It has gathered over one million signatures. Lectures and conferences In recent years, Gore has remained busy traveling the world speaking and participating in events mainly aimed at global warming awareness and prevention. His keynote presentation on global warming has received standing ovations, and he has presented it at least 1,000 times according to his monologue in An Inconvenient Truth. His speaking fee is $100,000. Gore's global warming presentations in several major cities have sometimes been associated with exceptionally severe cold weather, a juxtaposition since dubbed "the Gore Effect." Gore is a vocal proponent of carbon neutrality, buying a carbon offset each time he travels by aircraft. Gore and his family drive hybrid vehicles. In An Inconvenient Truth, Gore calls on people to conserve energy. In 2007, Al Gore was the main non-official representative for the United States at the United Nations Climate Change Conference in Bali, a series of discussions intended to continue where the Kyoto Protocol would leave off when it expired in 2012. He used a famous World War II poem written by Pastor Martin Niemöller to describe how the international community is eerily accomplishing nothing in the face of the greatest crisis in human history. 
He ended the speech using his famous tag line: "However, political will is a renewable resource." During Global Warming Awareness Month, on February 9, 2007, Al Gore and Richard Branson announced the Virgin Earth Challenge, a competition offering a $25 million prize for the first person or organization to produce a viable design that results in the removal of atmospheric greenhouse gases. A public lecture at the University of Toronto on February 21, 2007, on the topic of global warming, led to a crash of the ticket sales website within minutes of opening. In March 2008, Gore gave a talk via videoconferencing in order to promote this technology as a means, he argued, of fighting global warming. On July 17, 2008, Gore gave a speech at the DAR Constitution Hall in Washington, D.C., in which he called for a move by the United States towards replacing its dependence upon "carbon-based fuels" with green energy within the next ten years. Gore stated: "When President John F. Kennedy challenged our nation to land a man on the moon and bring him back safely in 10 years, many people doubted we could accomplish that goal. But 8 years and 2 months later, Neil Armstrong and Buzz Aldrin walked on the surface of the moon." Some criticized his plan. According to the BBC, "Robby Diamond, president of a bipartisan think tank called Securing America's Future Energy, said weaning the nation off fossil fuels could not be done in a decade. 'The country is not going to be able to go cold turkey ... We have a hundred years of infrastructure with trillions of dollars of investment that is not simply going to be made obsolete.'" Repower America On July 21, 2008, Gore used a speech to challenge the United States to commit to producing all of its electricity from renewable sources such as solar and wind power within 10 years. In this speech, organized through Gore's Alliance for Climate Protection, Gore said that our dangerous over-reliance on carbon-based fuels is at the core of all three of the economic, environmental and national security crises, and that our democracy has become sclerotic at a time when these crises require bold policy solutions. Center for Resource Solutions supports Al Gore's Repower America goal. Civil disobedience to stop coal plants On September 24, 2008, Gore made the following statements in a speech given at the Clinton Global Initiative: "If you're a young person looking at the future of this planet and looking at what is being done right now, and not done, I believe we have reached the stage where it is time for civil disobedience to prevent the construction of new coal plants that do not have carbon capture and sequestration." These remarks were similar to ones he'd made the previous year: "I can't understand why there aren't rings of young people blocking bulldozers," Mr. Gore said, "and preventing them from constructing coal-fired power plants." Climate Reality Project In March 2010 two nonprofit organizations founded by Al Gore, the Alliance for Climate Protection and the Climate Project, joined together, and in July 2011 the combined organization was renamed the Climate Reality Project. In February 2012 the Climate Reality Project organized an expedition to the Antarctic with "civic and business leaders, activists and concerned citizens from many countries". Vegan In 2013, Gore became a vegan. 
He had earlier said that "it's absolutely correct that the growing meat intensity of diets across the world is one of the issues connected to this global crisis -- not only because of the [carbon dioxide] involved, but also because of the water consumed in the process" and some speculate that his adoption of the new diet is related to his environmentalist stance. In a 2014 interview, Gore said "Over a year ago I changed my diet to a vegan diet, really just to experiment to see what it was like. ... I felt better, so I've continued with it and I'm likely to continue it for the rest of my life." Rampal power plant In a plenary session of the 47th annual meeting of the World Economic Forum (WEF) in Davos, Switzerland, on January 18, 2017, Al Gore urged Prime Minister of Bangladesh Sheikh Hasina to stop building the coal-powered Rampal Power Station close to the Sundarbans, the world's largest mangrove forest. Climate and Health Summit A "Climate and Health Summit", which was originally to be held by the Centers for Disease Control and Prevention, was cancelled without warning in late January 2017. A few days later, Gore revived the summit, which he went on to hold without the CDC. Environmental criticism Four main environmental criticisms have been leveled at Gore: (1) he has an alleged conflict of interest from his role as both an investor in green-technology companies and as an advocate of taxpayer-funded green-technology subsidies, (2) he allegedly makes erroneous scientific claims, (3) he consumes excessive amounts of energy, and (4) he allegedly refuses to debate others on the subject of global warming. In reference to Gore's alleged conflict of interest, some critics have labeled Gore a "carbon billionaire." In response to these criticisms, Gore stated that it is "certainly not true" that he is a "carbon billionaire" and that he is "proud to put my money where my mouth is for the past 30 years. And though that is not the majority of my business activities, I absolutely believe in investing in accordance with my beliefs and my values." Gore was challenged on this topic by Tennessee Congresswoman Marsha Blackburn, who asked him: "The legislation that we are discussing here today, is that something that you are going to personally benefit from?" Gore responded by stating: "I believe that the transition to a green economy is good for our economy and good for all of us, and I have invested in it." Gore also added that all earnings from his investments have gone to the Alliance for Climate Protection and that "If you believe that the reason I have been working on this issue for 30 years is because of greed, you don't know me." Finally, Gore told Blackburn: "Do you think there is something wrong with being active in business in this country ... I am proud of it. I am proud of it." Criticisms of Gore's allegedly erroneous scientific statements tend to focus on a British High Court ruling which found that Gore's Inconvenient Truth documentary contained nine significant errors. Several of these, such as the statement that climate change was a main cause of coral reef bleaching, and that polar bears were drowning due to habitat loss as a result of ice-cap melting, have been subsequently backed up by stronger evidence than the court was able to locate at the time. 
The court's broad conclusion, nevertheless, was that "many of the claims made by the film were fully backed up by the weight of science." Gore has also been the subject of criticism for his personal use of energy, including his ownership of multiple large homes. The Tennessee Center for Policy Research (TCPR) has twice criticized Gore for electricity consumption in his Tennessee home. In February 2007, TCPR stated that its analysis of records from the Nashville Electric Service indicated that the Gore household uses "20 times as much electricity as the average household nationwide." In reporting on TCPR's claims, MSNBC's Countdown With Keith Olbermann noted that the house has twenty rooms and home offices and that the "green power switch" installed increased their electric bill while decreasing overall carbon pollution. Philosopher A. C. Grayling also defended Al Gore, arguing that Gore's personal lifestyle does nothing to impugn his message and that Gore's critics have committed the ad hominem fallacy. A few months later, the Associated Press reported on December 13, 2007, that Gore "has completed a host of improvements to make the home more energy efficient, and a building-industry group has praised the house as one of the nation's most environmentally friendly ... 'Short of tearing it down and starting anew, I don't know how it could have been rated any higher,' said Kim Shinn of the non-profit U.S. Green Building Council, which gave the house its second-highest rating for sustainable design." Gore was criticized by the TCPR again in June 2008, after the group obtained his public utility bills from the Nashville Electric Service and compared "electricity consumption between the 12 months before June 2007, when it says he installed his new technology, and the year since then." According to their analysis, the Gores consumed 10% more energy in the year since their home received its eco-friendly modifications. TCPR also argued that, while the "average American household consumes 11,040 kWh in an entire year," the Gore residence "uses an average of 17,768 kWh per month – 1,638 kWh more energy per month than before the renovations." Gore's spokeswoman Kalee Kreider countered the claim by stating that the Gores' "utility bills have gone down 40 percent since the green retrofit" and that "the three-year renovation on the home wasn't complete until November, so it's a bit early to attempt a before-and-after comparison." She also noted that TCPR did not include Gore's gas bill in their analysis (which they had done the previous year) and that the gas "bill has gone down 90 percent ... And when the Gores do power up, they pay for renewable resources, like wind and solar power or methane gas." Media Matters for America also discussed the fact that "100 percent of the electricity in his home comes from green power" and quoted the Tennessee Valley Authority as stating that "[a]lthough no source of energy is impact-free, renewable resources create less waste and pollution." In August 2017, it was reported that over the past year, Gore used enough electric energy to power the typical American household for over 21 years, as per a report issued by the National Center for Public Policy Research. Reportedly, Gore consumed 230,889 kilowatt hours (kWh) at his Nashville residence alone. Additionally, Gore owns two other residences – a penthouse in San Francisco and a farmhouse in Carthage, Tennessee – making his carbon footprint even larger than what was reported. 
Gore's Nashville home actually classifies as an 'energy hog' under standards developed by Energy Vanguard. Some have argued that Gore refuses to debate the topic of global warming. Bjørn Lomborg, a key figure in the climate-change denier movement, asked him to debate the topic at a conference in California. Gore replied that he would not: "The scientific community has gone through this chapter and verse. We have long since passed the time when we should pretend this is a 'on the one hand, on the other hand' issue," he said. "It's not a matter of theory or conjecture, for goodness sake." Books, film, television, and live performances An Inconvenient Truth Gore starred in the documentary film An Inconvenient Truth, released on May 24, 2006. The film documents the evidence for anthropogenic global warming and warns of the consequences of people not making immediate changes to their behavior. It is the fourth-highest-grossing documentary in U.S. history. After An Inconvenient Truth was nominated for an Academy Award, Donna Brazile (Gore's campaign chairwoman from the 2000 campaign) speculated that Gore might announce a possible presidential candidacy for the 2008 election. During a speech on January 31, 2007, at Moravian College, Brazile stated, "Wait till Oscar night, I tell people: 'I'm dating. I haven't fallen in love yet. On Oscar night, if Al Gore has slimmed down 25 or 30 pounds, Lord knows.'" During the award ceremony, Gore and actor Leonardo DiCaprio shared the stage to speak about the "greening" of the ceremony itself. Gore began to give a speech that appeared to be leading up to an announcement that he would run for president. However, background music drowned him out and he was escorted offstage, implying that it was a rehearsed gag, which he later acknowledged. After the film won the 2007 Academy Award for Documentary Feature, the Oscar was awarded to director Davis Guggenheim, who asked Gore to join him and other members of the crew on stage. Gore then gave a brief speech, saying, "My fellow Americans, people all over the world, we need to solve the climate crisis. It's not a political issue; it's a moral issue. We have everything we need to get started, with the possible exception of the will to act. That's a renewable resource. Let's renew it." The official documentary film website is aptly named climatecrisis.net. At the 2017 Sundance Film Festival, Gore released An Inconvenient Sequel: Truth to Power, a sequel to his 2006 film, An Inconvenient Truth, which documents his continuing efforts to battle climate change. Books Gore wrote Earth in the Balance (which was published in 1992) while his six-year-old son Albert was recovering from a serious accident. It became the first book written by a sitting Senator to make The New York Times Best Seller list since John F. Kennedy's Profiles in Courage. Gore also published the book An Inconvenient Truth: The Planetary Emergency of Global Warming and What We Can Do About It, which became a bestseller. In reference to the use of nuclear power to mitigate global warming, Gore has stated, "Nuclear energy is not the panacea for tackling global warming." In July 2017, Gore published An Inconvenient Sequel: Truth to Power: Your Action Handbook to Learn the Science, Find Your Voice, and Help Solve the Climate Crisis, concurrent with his film An Inconvenient Sequel: Truth to Power. Futurama Gore appeared in Matt Groening's Futurama as himself and his own head in a jar in episodes related to environmentalism. 
Gore also reprised the role in the 2007 film, Futurama: Bender's Big Score. Gore had earlier offered to appear in the 2000 season finale of Futurama, "Anthology of Interest I". In this episode, Gore led his team of "Vice Presidential Action Rangers" in their goal to protect the space-time continuum. In 2002, Gore appeared in the episode "Crimes of the Hot". In addition, Gore used a short clip from Futurama to explain how global warming works in his presentations as well as in An Inconvenient Truth. An internet promo for An Inconvenient Truth titled A Terrifying Message From Al Gore was also produced by Groening and David X. Cohen, creators of Futurama, starring Gore and Bender (John DiMaggio). Live Earth On July 7, 2007, Live Earth benefit concerts were held around the world in an effort to raise awareness about climate change. The event was the brainchild of Gore and Kevin Wall of Save Our Selves. On July 21, 2007, Gore announced he was teaming with actress Cameron Diaz for a TV climate contest, 60 Seconds to Save the Earth, to gain people's support in solving the climate crisis. 2007 Nobel Peace Prize and India Gore was awarded the 2007 Nobel Peace Prize, which was shared with the Intergovernmental Panel on Climate Change, headed by Rajendra K. Pachauri (Delhi, India). The award was given "for their efforts to build up and disseminate greater knowledge about man-made climate change, and to lay the foundations for the measures that are needed to counteract such change" on October 12, 2007. Gore made the following statement after receiving the prize: I am deeply honored to receive the Nobel Peace Prize. This award is even more meaningful because I have the honor of sharing it with the Intergovernmental Panel on Climate Change—the world's pre-eminent scientific body devoted to improving our understanding of the climate crisis—a group whose members have worked tirelessly and selflessly for many years. We face a true planetary emergency. The climate crisis is not a political issue, it is a moral and spiritual challenge to all of humanity. It is also our greatest opportunity to lift global consciousness to a higher level. My wife, Tipper, and I will donate 100 percent of the proceeds of the award to the Alliance for Climate Protection, a bipartisan non-profit organization that is devoted to changing public opinion in the U.S. and around the world about the urgency of solving the climate crisis. Gore and Pachauri accepted the Nobel Peace Prize for 2007 in Oslo, Norway on December 10, 2007. In the Nobel Lecture he delivered on December 10, 2007, in Oslo, before the Royal Highnesses of Norway, the members of the Norwegian Nobel Committee, and the other guests attending the prize-giving ceremony, he made this surprising statement: Last September 21, as the Northern Hemisphere tilted away from the sun, scientists reported with unprecedented distress that the North Polar ice cap is "falling off a cliff." One study estimated that it could be completely gone during summer in less than 22 years. Another new study, to be presented by U.S. Navy researchers later this week, warns it could happen in as little as 7 years. In a talk given during March 2008 in Delhi, Gore argued that India, as a leader in information technology, is in a particularly strong position to also lead the way in climate change. This talk coincided with the release of two children's books by Gore jointly published with the India Habitat Centre. 
Selected honors and awards 2008 Dan David Prize: "Social Responsibility with Particular Emphasis on the Environment." 2008 The Gore resolution (HJR712) passed by the Tennessee House of Representatives which honors Gore's "efforts to curb global warming." 2007 Gothenburg Prize for Sustainable Development 2007 Nobel Peace Prize with the Intergovernmental Panel on Climate Change (IPCC) (environment) 2007 International Academy of Television Arts and Sciences: Founders Award for Current TV and for work in the area of global warming 2007 Prince of Asturias Award in Spain (environment) 2007 The Sir David Attenborough Award for Excellence in Nature Filmmaking (environment) 2006 Quill Awards: History/current events/politics, An Inconvenient Truth Selected publications Glossary Al Gore uses the terms: Climate crisis (global warming/climate change). Climate refugee Energy tsunami (a loss of access to foreign oil). Megafire Further reading Kirk, Andrew G. Counterculture Green: The Whole Earth Catalog and American Environmentalism. Lawrence: Univ. of Kansas Press, 2007. See also Climate Reality Project List of environmental philosophers World Resources Institute Board of Directors Biosketch for Al Gore Notes External links WASHINGTON TALK; Greening of Democrats: An 80's Mix of Idealism And Shrewd Politics The New York Times, June 14, 1989 Global warming is a planetary emergency: Al Gore - India Today interview with Gore Al Gore Speech Accepting 2007 Nobel Peace Prize Video: Gore Speaks at TED Conference — 15 ways to avert a climate crisis
space sunshade
A space sunshade or sunshield is a parasol that diverts or otherwise reduces some of the Sun's radiation, preventing it from hitting a spacecraft or planet and thereby reducing its insolation, which results in reduced heating. Light can be diverted by different methods. The concept of constructing a sunshade as a method of climate engineering dates back to proposals made by the physicist Hermann Oberth in 1923, 1929, 1957 and 1978. Space mirrors in orbit around the Earth with a diameter of 100 to 300 km, as designed by Hermann Oberth, are intended to focus sunlight on individual regions of the Earth's surface or deflect it into space so that the solar radiation is weakened in a specifically controlled manner for individual regions on the Earth's surface. First proposed in 1989, another space sunshade concept involves putting a large occulting disc, or technology of equivalent purpose, between the Earth and Sun. A sunshade is of particular interest as a climate engineering method for mitigating global warming through solar radiation management. Heightened interest in such projects reflects the concern that internationally negotiated reductions in carbon emissions may be insufficient to stem climate change. Sunshades could also be used to produce space solar power, acting as solar power satellites. Proposed shade designs include a single-piece shade and a shade made by a great number of small objects. Most such proposals contemplate a blocking element at the Sun-Earth L1 Lagrangian point. Modern proposals are based on some form of distributed sunshade composed of lightweight transparent elements or inflatable "space bubbles" manufactured in space to reduce the cost of launching massive objects to space. Designs for planetary sunshade Cloud of small spacecraft One proposed sunshade would be composed of 16 trillion small disks at the Sun-Earth L1 Lagrangian point, 1.5 million kilometers from Earth and between it and the Sun. Each disk is proposed to have a 0.6-meter diameter and a thickness of about 5 micrometers. The mass of each disk would be about a gram, adding up to a total of almost 20 million tonnes. Such a group of small sunshades that blocks 2% of the sunlight, deflecting it off into space, would be enough to halt global warming. If 100 tonnes of disks were launched to low Earth orbit every day, it would take 550 years to launch all of them. The individual autonomous flyers building up the cloud of sunshades are proposed not to reflect the sunlight but rather to be transparent lenses, deflecting the light slightly so it does not hit Earth. This minimizes the effect of solar radiation pressure on the units, requiring less effort to hold them in place at the L1 point. An optical prototype has been constructed by Roger Angel with funding from NIAC. The remaining solar pressure and the fact that the L1 point is one of unstable equilibrium, easily disturbed by the wobble of the Earth due to gravitational effects from the Moon, requires the small autonomous flyers to be capable of maneuvering themselves to hold position. A suggested solution is to place mirrors capable of rotation on the surface of the flyers. 
By using the solar radiation pressure on the mirrors as solar sails and tilting them in the right direction, the flyer will be capable of altering its speed and direction to keep in position. Such a group of sunshades would need to occupy an area of about 3.8 million square kilometers if placed at the L1 point. It would still take years to launch enough of the disks into orbit to have any effect, which means a long lead time. Roger Angel of the University of Arizona presented the idea for a sunshade at the U.S. National Academy of Sciences in April 2006 and won a NASA Institute for Advanced Concepts grant for further research in July 2006. Creating this sunshade in space was estimated to cost in excess of US$130 billion over 20 years, with an estimated lifetime of 50–100 years. This led Professor Angel to conclude that "the sunshade is no substitute for developing renewable energy, the only permanent solution. A similar massive level of technological innovation and financial investment could ensure that. But if the planet gets into an abrupt climate crisis that can only be fixed by cooling, it would be good to be ready with some shading solutions that have been worked out." Lightweight solutions and "Space bubbles" A more recent design was proposed by Olivia Borgue and Andreas M. Hein in 2022: a distributed sunshade with a mass on the order of 100,000 tons, composed of ultra-thin polymeric films and SiO2 nanotubes. The authors estimated that launching such a mass would require 399 yearly launches of a vehicle such as SpaceX Starship for 10 years. A 2022 concept by MIT Senseable City Lab proposes using thin-film structures ("space bubbles") manufactured in outer space to solve the problem of launching the required mass to space. MIT scientists led by Carlo Ratti believe deflecting 1.8 percent of solar radiation can fully reverse climate change. The full raft of inflatable bubbles would be roughly the size of Brazil and include a control system to regulate its distance from the Sun and optimise its effects. The shell of the thin-film bubbles would be made of silicon, tested in outer space-like conditions at a pressure of 0.0028 atm and at −50 degrees Celsius. They plan to investigate low vapor-pressure materials to rapidly inflate the bubbles, such as a silicon-based melt or a graphene-reinforced ionic liquid. One Fresnel lens Several authors have proposed dispersing light before it reaches the Earth by putting a very large lens in space, perhaps at the L1 point between the Earth and the Sun. This plan was proposed in 1989 by J. T. Early. His design involved making a large glass occulter (2,000 km across) from lunar material and placing it at the L1 point. Issues included the large amount of material needed to make the disc and also the energy needed to launch it to its orbit. In 2004, physicist and science fiction author Gregory Benford calculated that a concave rotating Fresnel lens 1000 kilometres across, yet only a few millimeters thick, floating in space at the L1 point, would reduce the solar energy reaching the Earth by approximately 0.5% to 1%. The cost of such a lens has been disputed. At a science fiction convention in 2004, Benford estimated that it would cost about US$10 billion up front, and another $10 billion in supportive cost during its lifespan. 
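The headline figures for the disk-cloud design described above are easy to sanity-check with a few lines of arithmetic. The Python sketch below is purely illustrative and uses only the numbers quoted in this article (16 trillion disks, a total mass of almost 20 million tonnes, and a launch rate of 100 tonnes per day).

# Illustrative check of the disk-cloud sunshade figures quoted above.
n_disks = 16e12            # "16 trillion small disks"
total_mass_tonnes = 20e6   # "almost 20 million tonnes" (stated total)
launch_rate_tpd = 100      # "100 tonnes of disks ... every day"

mass_per_disk_g = total_mass_tonnes * 1e6 / n_disks          # grams per disk
launch_years = total_mass_tonnes / launch_rate_tpd / 365.25  # days -> years

print(f"mass per disk: {mass_per_disk_g:.2f} g")   # ~1.25 g, i.e. "about a gram"
print(f"launch time: {launch_years:.0f} years")    # ~548, roughly the quoted 550 years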
One diffraction grating A similar approach involves placing a very large diffraction grating (thin wire mesh) in space, perhaps at the L1 point between the Earth and the Sun. A proposal for a 3,000 ton diffraction mesh was made in 1997 by Edward Teller, Lowell Wood, and Roderick Hyde, although in 2002 these same authors argued for blocking solar radiation in the stratosphere rather than in orbit, given then-current space launch technologies. Spacecraft sunshades The James Webb Space Telescope (JWST) infrared telescope has a layered sunshade to keep the telescope cold. For spacecraft approaching the Sun, the sunshade is usually called a heatshield. Notable spacecraft designs with heatshields include: Messenger, launched 2004, orbited Mercury until 2015, had a ceramic cloth sunshade Parker Solar Probe (was Solar Probe Plus), launched 2018 (carbon, carbon-foam, carbon sandwich heatshield) Solar Orbiter, launched Feb 2020 BepiColombo, to orbit Mercury, with Optical Solar Reflectors (acting as a sunshade) on the Planetary Orbiter component. See also Starshade – Proposed occulter Space mirror (geoengineering) – Artificial satellites designed to change the amount of solar radiation that impacts Earth Space-based solar power – Concept of collecting solar power in outer space and distributing it to Earth Sunshield (JWST) – Main cooling system for the infrared observatory Spacecraft thermal control – Process of keeping all parts of a spacecraft within acceptable temperature ranges Solar sail – Space propulsion method using Sun radiation References External links Marchis, Franck; Sánchez, Joan-Pau; McInnes, Colin R. (2015). "Optimal Sunshade Configurations for Space-Based Geoengineering near the Sun-Earth L1 Point". PLOS ONE. 10 (8): e0136648. Bibcode:2015PLoSO..1036648S. doi:10.1371/journal.pone.0136648. ISSN 1932-6203. PMC 4550401. PMID 26309047.
breakthrough institute
The Breakthrough Institute is an environmental research center located in Berkeley, California. Founded in 2007 by Michael Shellenberger and Ted Nordhaus, the institute is aligned with ecomodernist philosophy. It advocates for an embrace of modernization and technological development (including nuclear power and carbon capture) in order to address environmental challenges, proposing urbanization, agricultural intensification, nuclear power, aquaculture, and desalination as processes with the potential to reduce human demands on the environment and allow more room for non-human species. Since its inception, environmental scientists and academics have criticized Breakthrough's environmental positions. Popular press reception of Breakthrough's environmental ideas and policy has been mixed. Organization, funding and people The Breakthrough Institute is registered as a 501(c)(3) nonprofit organization and is supported by various public institutions and individuals. Breakthrough's executive director is Ted Nordhaus. Others associated with Breakthrough include former National Review executive editor Reihan Salam, journalist Gwyneth Cravens, political scientist Roger A. Pielke Jr., sociologist Steve Fuller, and environmentalist Stewart Brand. Nordhaus and Shellenberger have written on subjects ranging from positive treatment of nuclear energy and shale gas to critiques of the planetary boundaries hypothesis. The Breakthrough Institute has argued that climate policy should be focused on higher levels of public funding on technology innovation to "make clean energy cheap", and has been critical of climate policies such as cap and trade and carbon pricing. Programs and philosophy Breakthrough Institute maintains programs in energy, conservation, and food. Their website states that the energy research is "focused on making clean energy cheap through technology innovation to deal with both global warming and energy poverty." The conservation work "seeks to offer pragmatic new frameworks and tools for navigating" the challenges of the Anthropocene, offering up nuclear energy, synthetic fertilizers, and genetically modified foods as solutions. Jonathan Symons, Senior Lecturer at Macquarie University, Australia and Breakthrough affiliate, has written an extensive survey of the Breakthrough Institute and its philosophy. He argues that ecomodernism is aligned with the IPCC's position that new technologies are crucial to cutting carbon emissions. Criticism Scholars such as Professor of American and Environmental Studies Julie Sze and environmental humanist Michael Ziser criticize Breakthrough's philosophy as one that believes "community-based environmental justice poses a threat to the smooth operation of a highly capitalized, global-scale Environmentalism." Further, Environmental and Art Historian TJ Demos has argued that Breakthrough's ideas present "nothing more than a bad utopian fantasy" that function to support the oil and gas industry and work as "an apology for nuclear energy." Journalist Paul D. Thacker alleged that the Breakthrough Institute is an example of a think tank which lacks intellectual rigour, promoting contrarianist reasoning and cherry-picking evidence. The institute has also been criticized for promoting industrial agriculture and processed foodstuffs while also accepting donations from the Nathan Cummings Foundation, whose board members have financial ties to processed food companies that rely heavily on industrial agriculture. 
After an IRS complaint about potential improper use of 501(c)(3) status, the Institute no longer lists the Nathan Cummings Foundation as a donor. However, as Thacker has noted, the institute's funding remains largely opaque. Climate scientist Michael E. Mann also questions the motives of the Breakthrough Institute. According to Mann, the self-declared mission of the BTI is to look for a breakthrough to solve the climate problem. However, Mann states that the BTI "appears to be opposed to anything - be it a price on carbon or incentives for renewable energy - that would have a meaningful impact." He notes that the BTI "remains curiously preoccupied with opposing advocates for meaningful climate action and is coincidentally linked to natural gas interests" and criticises the BTI for advocating "continued exploitation of fossil fuels." Mann also observes that the BTI on the one hand seems to be "very pessimistic" about renewable energy, while on the other hand "they are extreme techno-optimists" regarding geoengineering. Publications "The Death of Environmentalism: Global Warming Politics in a Post-Environmental World" In 2004, Breakthrough founders Ted Nordhaus and Michael Shellenberger coauthored the essay "The Death of Environmentalism: Global Warming Politics in a Post-Environmental World." The paper argued that environmentalism is incapable of dealing with climate change and should "die" so that a new politics can be born. The paper was criticized by members of the mainstream environmental movement. Former Sierra Club Executive Director Carl Pope called the essay "unclear, unfair and divisive." He said it contained multiple factual errors and misinterpretations. However, former Sierra Club President Adam Werbach praised the authors' arguments. Former Greenpeace Executive Director John Passacantando said in 2005, referring to both Shellenberger and his coauthor Ted Nordhaus, "These guys laid out some fascinating data, but they put it in this over-the-top language and did it in this in-your-face way." Michel Gelobter and other environmental experts and academics wrote The Soul of Environmentalism: Rediscovering transformational politics in the 21st century in response, criticizing "Death" for demanding increased technological innovation rather than addressing the systemic concerns of people of color. Matthew Yglesias wrote in The New York Times that "Nordhaus and Shellenberger persuasively argue, environmentalists must stop congratulating themselves for their own willingness to confront inconvenient truths and must focus on building a politics of shared hope rather than relying on a politics of fear," adding that the paper "is more convincing in its case for a change in rhetoric." Break Through: From the Death of Environmentalism to the Politics of Possibility In 2007, Nordhaus and Shellenberger published their book Break Through: From the Death of Environmentalism to the Politics of Possibility. The book argues for a "post-environmental" politics that abandons the environmentalist focus on nature protection for a new focus on technological innovation to create a new, stronger U.S. economy. The Wall Street Journal wrote that, "If heeded, Nordhaus and Shellenberger's call for an optimistic outlook—embracing economic dynamism and creative potential—will surely do more for the environment than any U.N. report or Nobel Prize." 
NPR's science correspondent Richard Harris listed Break Through on his "recommended reading list" for climate change. However, Julie Sze and Michael Ziser argued that Break Through continued the trend Gelobter pointed out, related to the authors' commitment to technological innovation and economic growth instead of focusing on systemic inequalities that create environmental injustices. Specifically, Sze and Ziser argue that Nordhaus and Shellenberger's "evident relish in their notoriety as the 'sexy' cosmopolitan 'bad boys' of environmentalism (their own words) introduces some doubt about their sincerity and reliability." The authors asserted that Shellenberger's work fails "to incorporate the aims of environmental justice while actively trading on suspect political tropes," such as blaming China and other nations as large-scale polluters so that the United States may begin and continue nationalistic technology-based research-and-development environmentalism, while continuing to emit more greenhouse gases than most other nations. In turn, Shellenberger and Nordhaus seek to move away from proven environmental justice tactics, "calling for a moratorium" on "community organizing." Such technology-based "approaches like those of Nordhaus and Shellenberger miss entirely" the "structural environmental injustice" that natural disasters like Hurricane Katrina make visible. Joseph Romm, a former US Department of Energy official now with the Center for American Progress, argued that "Pollution limits are far, far more important than R&D for what really matters -- reducing greenhouse-gas emissions and driving clean technologies into the marketplace." Environmental journalist David Roberts, writing in Grist, stated that while the BTI and its founders garner much attention, their policy is lacking, and ultimately they "receive a degree of press coverage that wildly exceeds their intellectual contributions." Reviewers for the San Francisco Chronicle, the American Prospect, and the Harvard Law Review argued that a critical reevaluation of green politics was unwarranted because global warming had become a high-profile issue and the Democratic Congress was preparing to act. "An Ecomodernist Manifesto" In April 2015, "An Ecomodernist Manifesto" was issued by John Asafu-Adjaye, Linus Blomqvist, Stewart Brand, Barry Brook, Ruth DeFries, Erle Ellis, Christopher Foreman, David Keith, Martin Lewis, Mark Lynas, Ted Nordhaus, Roger A. Pielke, Jr., Rachel Pritzker, Joyashree Roy, Mark Sagoff, Michael Shellenberger, Robert Stone, and Peter Teague. It proposed dropping the goal of "sustainable development" and replacing it with a strategy to shrink humanity's footprint by using natural resources more intensively through technological innovation. The authors argue that economic development is necessary to preserve the environment. According to The New Yorker, "most of the criticism of [the Manifesto] was more about tone than content. The manifesto's basic arguments, after all, are hardly radical. To wit: technology, thoughtfully applied, can reduce the suffering, human and otherwise, caused by climate change; ideology, stubbornly upheld, can accomplish the opposite." At The New York Times, Eduardo Porter wrote approvingly of ecomodernism's alternative approach to sustainable development. In an article titled "Manifesto Calls for an End to 'People Are Bad' Environmentalism", Slate's Eric Holthaus wrote "It's inclusive, it's exciting, and it gives environmentalists something to fight for for a change." 
The science journal Nature published an editorial on the manifesto. The Manifesto was met with critiques similar to Gelobter's evaluation of "Death" and Sze and Ziser's analysis of Break Through. Environmental historian Jeremy Caradonna and environmental economist Richard B. Norgaard led a group of environmental scholars in a critique, arguing that ecomodernism as presented in the Manifesto "violates everything we know about ecosystems, energy, population, and natural resources," and "Far from being an ecological statement of principles, the Manifesto merely rehashes the naïve belief that technology will save us and that human ingenuity can never fail." Further, "The Manifesto suffers from factual errors and misleading statements." T.J. Demos agreed with Caradonna, and wrote in 2017 that "What is additionally striking about the Ecomodernist document, beyond its factual weaknesses and ecological falsehoods, is that there is no mention of social justice or democratic politics," and "no acknowledgement of the fact that big technologies like nuclear reinforce centralized power, the military-industrial complex, and the inequalities of corporate globalization." Breakthrough Journal In 2011, Breakthrough published the first issue of the Breakthrough Journal, which aims to "modernize political thought for the 21st century". The New Republic called Breakthrough Journal "among the most complete efforts to provide a fresh answer to" the question of how to modernize liberal thought, and the National Review called it "the most promising effort at self-criticism by our liberal cousins in a long time". References External links Breakthrough Institute
medieval warm period
The Medieval Warm Period (MWP), also known as the Medieval Climate Optimum or the Medieval Climatic Anomaly, was a time of warm climate in the North Atlantic region that lasted from c. 950 to c. 1250. Climate proxy records show peak warmth occurred at different times for different regions, which indicates that the MWP was not a globally uniform event. Some refer to the MWP as the Medieval Climatic Anomaly to emphasize that climatic effects other than temperature were also important. The MWP was followed by a regionally cooler period in the North Atlantic and elsewhere, which is sometimes called the Little Ice Age (LIA). Possible causes of the MWP include increased solar activity, decreased volcanic activity, and changes in ocean circulation. Modelling evidence has shown that natural variability is insufficient on its own to explain the MWP and that an external forcing had to be one of the causes. Research The MWP is generally thought to have occurred from c. 950 – c. 1250, during the European Middle Ages. Some researchers divide the MWP into two phases: MWP-I, which began around 450 AD and ended around 900 AD, and MWP-II, which lasted from 1000 AD to 1300 AD; MWP-I is called the early Medieval Warm Period while MWP-II is called the conventional Medieval Warm Period. In 1965, Hubert Lamb, one of the first paleoclimatologists, published research based on data from botany, historical document research, and meteorology, combined with records indicating prevailing temperature and rainfall in England around c. 1200 and around c. 1600. He proposed, "Evidence has been accumulating in many fields of investigation pointing to a notably warm climate in many parts of the world, that lasted a few centuries around c. 1000 – c. 1200 AD, and was followed by a decline of temperature levels till between c. 1500 and c. 1700 the coldest phase since the last ice age occurred." The era of warmer temperatures became known as the Medieval Warm Period and the subsequent cold period the Little Ice Age (LIA). However, the view that the MWP was a global event was challenged by other researchers. The IPCC First Assessment Report of 1990 discussed the "Medieval Warm Period around 1000 AD (which may not have been global) and the Little Ice Age which ended only in the middle to late nineteenth century." It stated that temperatures in the "late tenth to early thirteenth centuries (about AD 950–1250) appear to have been exceptionally warm in western Europe, Iceland and Greenland." The IPCC Third Assessment Report from 2001 summarized newer research: "evidence does not support globally synchronous periods of anomalous cold or warmth over this time frame, and the conventional terms of 'Little Ice Age' and 'Medieval Warm Period' are chiefly documented in describing northern hemisphere trends in hemispheric or global mean temperature changes in past centuries." Global temperature records taken from ice cores, tree rings, and lake deposits have shown that, during the MWP, the Earth may have been slightly cooler globally (by 0.03 °C) than in the early and mid-20th century. Palaeoclimatologists developing region-specific climate reconstructions of past centuries conventionally label their coldest interval as "LIA" and their warmest interval as the "MWP". Others follow the convention, and when a significant climate event is found in the "LIA" or "MWP" timeframes, they associate their events to the period. 
Some "MWP" events are thus wet events or cold events, rather than strictly warm events, particularly in central Antarctica, where climate patterns that are opposite to those of the North Atlantic have been noticed. Global climate during the Medieval Warm Period The nature and extent of the MWP has been marked by long-standing controversy over whether it was a global or regional event. In 2019, by using an extended proxy data set, the Pages-2k consortium confirmed that the Medieval Climate Anomaly was not a globally synchronous event. The warmest 51-year period within the MWP did not occur at the same time in different regions. They argue for a regional instead of global framing of climate variability in the preindustrial Common Era to aid in understanding. North Atlantic Lloyd D. Keigwin's 1996 study of radiocarbon-dated box core data from marine sediments in the Sargasso Sea found that its sea surface temperature was approximately 1 °C (1.8 °F) cooler approximately 400 years ago, during the LIA, and 1700 years ago and was approximately 1 °C warmer 1000 years ago, during the MWP.Using sediment samples from Puerto Rico, the Gulf Coast, and the Atlantic Coast from Florida to New England, Mann et al. (2009) found consistent evidence of a peak in North Atlantic tropical cyclone activity during the MWP, which was followed by a subsequent lull in activity. Iceland Iceland was first settled between about 865 and 930, during a time believed to be warm enough for sailing and farming. By retrieval and isotope analysis of marine cores and from examination of mollusc growth patterns from Iceland, Patterson et al. reconstructed a stable oxygen (δ18 O) and carbon (δ13 C) isotope record at a decadal resolution from the Roman Warm Period to the MWP and the LIA. Patterson et al. conclude that the summer temperature stayed high but winter temperature decreased after the initial settlement of Iceland. Greenland The 2009 Mann et al. study found warmth exceeding 1961–1990 levels in southern Greenland and parts of North America during the MWP, which the study defines as from 950 to 1250, with warmth in some regions exceeding temperatures of the 1990–2010 period. Much of the Northern Hemisphere showed a significant cooling during the LIA, which the study defines as from 1400 to 1700, but Labrador and isolated parts of the United States appeared to be approximately as warm as during the 1961–1990 period. Greenlandic winter oxygen isotope data from the MWP display a strong correlation with the North Atlantic Oscillation (NAO).The Norse colonization of the Americas has been associated with warmer periods. The common theory is that Norsemen took advantage of ice-free seas to colonize areas in Greenland and other outlying lands of the far north. However, a study from Columbia University suggests that Greenland was not colonized in warmer weather, but the warming effect in fact lasted for only very briefly. c. 1000AD, the climate was sufficiently warm for the Vikings to journey to Newfoundland and to establish a short-lived outpost there. In around 985, Vikings founded the Eastern and Western Settlements, both near the southern tip of Greenland. In the colony's early stages, they kept cattle, sheep, and goats, with around a quarter of their diet from seafood. After the climate became colder and stormier around 1250, their diet steadily shifted towards ocean sources. By around 1300, seal hunting provided over three quarters of their food. 
By 1350, there was reduced demand for their exports, and trade with Europe fell away. The last document from the settlements dates from 1412, and over the following decades, the remaining Europeans left in what seems to have been a gradual withdrawal, which was caused mainly by economic factors such as increased availability of farms in Scandinavian countries. Europe Substantial glacial retreat in southern Europe was experienced during the MWP. While several smaller glaciers experienced complete deglaciation, larger glaciers in the region survived and now provide insight into the region's climate history. In addition to warming-induced glacial melt, sedimentary records reveal a period of increased flooding, coinciding with the MWP, in eastern Europe that is attributed to enhanced precipitation from a positive-phase NAO. Other impacts of climate change can be less apparent, such as a changing landscape. Preceding the MWP, a coastal region in western Sardinia was abandoned by the Romans. The coastal area was able to expand substantially into the lagoon without the influence of human populations and a high stand during the MWP. When human populations returned to the region, they encountered a land altered by climate change and had to reestablish ports. Other regions North America In Chesapeake Bay (now in Maryland and Virginia, United States), researchers found large temperature excursions (changes from the mean temperature of that time) during the MWP (about 950–1250) and the Little Ice Age (about 1400–1700, with cold periods persisting into the early 20th century), which are possibly related to changes in the strength of North Atlantic thermohaline circulation. Sediments in Piermont Marsh of the lower Hudson Valley show a dry MWP from 800 to 1300. In the Hammock River marsh in Connecticut, salt marshes extended 15 km farther westward than they do in the present due to higher sea levels. Prolonged droughts affected many parts of what is now the Western United States, especially eastern California and the west of the Great Basin. Alaska experienced three intervals of comparable warmth: 1–300, 850–1200, and since 1800. Knowledge of the MWP in North America has been useful in dating occupancy periods of certain Native American habitation sites, especially in arid parts of the Western United States. Aridity was more prevalent in the southeastern United States during the MWP than the following LIA, but only slightly; this difference may be statistically insignificant. Droughts in the MWP may have impacted Native American settlements also in the Eastern United States, such as at Cahokia. Review of more recent archaeological research shows that as the search for signs of unusual cultural changes has broadened, some of the early patterns (such as violence and health problems) have been found to be more complicated and regionally varied than had been previously thought. Other patterns, such as settlement disruption, deterioration of long-distance trade, and population movements, have been further corroborated. Africa The climate in equatorial eastern Africa has alternated between being drier than today and relatively wet. The climate was drier during the MWP (1000–1270). Off the coast of Africa, isotopic analysis of bones from the Canary Islands' inhabitants during the MWP to LIA transition reveals that the region experienced a 5 °C decrease in air temperature. Over this period, the diet of inhabitants did not appreciably change, which suggests they were remarkably resilient to climate change. 
Antarctica The onset of the MWP in the Southern Ocean lagged the MWP's onset in the North Atlantic by approximately 150 years. A sediment core from the eastern Bransfield Basin, in the Antarctic Peninsula, preserves climatic events from both the LIA and the MWP. The authors noted, "The late Holocene records clearly identify Neoglacial events of the LIA and Medieval Warm Period (MWP)." Some Antarctic regions were atypically cold, but others were atypically warm between 1000 and 1200. Pacific Ocean Corals in the tropical Pacific Ocean suggest that relatively cool and dry conditions may have persisted early in the millennium, which is consistent with a La Niña-like configuration of the El Niño-Southern Oscillation patterns. In 2013, a study from three US universities was published in Science magazine and showed that the water temperature in the Pacific Ocean was 0.9 degrees warmer during the MWP than during the LIA and 0.65 degrees warmer than the decades before the study. South America The MWP has been noted in Chile in a 1500-year lake bed sediment core, as well as in the Eastern Cordillera of Ecuador. A reconstruction, based on ice cores, found that the MWP could be distinguished in tropical South America from about 1050 to 1300 and was followed in the 15th century by the LIA. Peak temperatures did not rise to the level of the late 20th century, which was unprecedented in the area during the study period of 1600 years. East Asia Ge et al. studied temperatures in China for the past 2000 years and found high uncertainty prior to the 16th century but good consistency over the last 500 years highlighted by the two cold periods, 1620s–1710s and 1800s–1860s, and the 20th-century warming. They also found that the warming from the 10th to the 14th centuries in some regions might be comparable in magnitude to the warming of the last few decades of the 20th century, which was unprecedented within the past 500 years. Generally, a warming period was identified in China, coinciding with the MWP, using multi-proxy data for temperature. However, the warming was inconsistent across China. Significant temperature change, from the MWP to the LIA, was found for northeast and central-east China but not for northwest China and the Tibetan Plateau. During the MWP, the East Asian Summer Monsoon (EASM) was the strongest it has been in the past millennium and was highly sensitive to the El Niño-Southern Oscillation (ENSO). The Mu Us Desert witnessed increased moisture in the MWP. Peat cores from peatland in southeast China suggest that changes in the EASM and ENSO are responsible for increased precipitation in the region during the MWP. However, other sites in southern China show aridification rather than humidification during the MWP, indicating that the MWP's influence was highly spatially heterogeneous. Modelling evidence suggests that EASM strength during the MWP was low in early summer but very high during late summer. In far eastern Russia, continental regions experienced severe floods during the MWP while nearby islands experienced less precipitation, leading to a decrease in peatland. Pollen data from this region indicates an expansion of warm-climate vegetation, with an increasing number of broadleaf and a decreasing number of coniferous forests. Adhikari and Kumon (2001), investigating sediments in Lake Nakatsuna, in central Japan, found a warm period from 900 to 1200 that corresponded to the MWP and three cool phases, two of which could be related to the LIA. 
Other research in northeastern Japan showed that there was one warm and humid interval, from 750 to 1200, and two cold and dry intervals, from 1 to 750 and from 1200 to now. South Asia The Indian Summer Monsoon (ISM) was also enhanced during the MWP, with a temperature-driven change to the Atlantic Multi-decadal Oscillation (AMO) bringing more precipitation to India. Vegetation records in Lahaul in Himachal Pradesh confirm a warm and humid MWP from 1,158 to 647 BP. Pollen from Madhya Pradesh dated to the MWP provides further direct evidence for increased monsoonal precipitation. Multi-proxy records from Pookode Lake in Kerala also reflect the warmth of the MWP. Middle East Sea surface temperatures in the Arabian Sea increased during the MWP, owing to a strong monsoon. The Arabian Peninsula, already extremely arid in the present day, was even drier during the MWP. Prolonged drought was a mainstay of the Arabian climate until around 660 BP, when this hyperarid interval was terminated. Oceania There is an extreme scarcity of data from Australia for both the MWP and the LIA. However, evidence from wave-built shingle terraces for a permanently-full Lake Eyre during the 9th and the 10th centuries is consistent with a La Niña-like configuration, but the data are insufficient to show how lake levels varied from year to year or what climatic conditions elsewhere in Australia were like. A 1979 study from the University of Waikato found, "Temperatures derived from an 18O/16O profile through a stalagmite found in a New Zealand cave (40.67°S, 172.43°E) suggested the Medieval Warm Period to have occurred between AD c. 1050 and c. 1400 and to have been 0.75 °C warmer than the Current Warm Period." Further evidence from New Zealand comes from an 1,100-year tree-ring record. See also Classic Maya collapse – Concurrent with the Medieval Warm Period and marked by decades-long droughts Cretaceous Thermal Maximum – Period of climatic warming that reached its peak approximately 90 million years ago Description of the Medieval Warm Period and Little Ice Age in IPCC reports Historical climatology Hockey stick graph (global temperature) – Graph in climate science Holocene climatic optimum – Global warm period around 9,000–5,000 years ago Late Antique Little Ice Age – Northern Hemispheric cooling period Paleoclimatology – Study of changes in ancient climate References Further reading Fagan, Brian (2000). The Little Ice Age: How Climate Made History, 1300–1850. Basic Books. ISBN 978-0-465-02272-4. Fagan, Brian (2009). The Great Warming: Climate Change and the Rise and Fall of Civilizations. Bloomsbury Publishing. ISBN 9781596913929. Lamb, Hubert (1995). Climate, History, and the Modern World: Second Edition. Routledge. Staff members at NOAA Paleoclimatology (19 May 2000). The "Medieval Warm Period". A Paleo Perspective...on Global Warming. NOAA Paleoclimatology. External links HistoricalClimatology.com, further links, resources, and relevant news, updated 2016 Climate History Network The Little Ice Age and Medieval Warm Period at the American Geophysical Union
an inconvenient truth (book)
An Inconvenient Truth: The Planetary Emergency of Global Warming and What We Can Do About It is a 2006 book by Al Gore released in conjunction with the film An Inconvenient Truth. It is published by Rodale Press in Emmaus, Pennsylvania, in the United States. The sequel is Our Choice: A Plan to Solve the Climate Crisis (2009). Summary Based on Gore's lecture tour on the topic of global warming, this book elaborates upon points offered in the film. The publisher of the text states that the book "brings together leading-edge research from top scientists around the world; photographs, charts, and other illustrations; and personal anecdotes and observations to document the fast pace and wide scope of global warming." In a section called "The Politicization of Global Warming", Al Gore stated: As for why so many people still resist what the facts clearly show, I think, in part, the reason is that the truth about the climate crisis is an inconvenient one that means we are going to have to change the way we live our lives. The second part of the statement, beginning "... the reason is that the truth about the climate crisis...", was also highlighted and separated from the main writing in that section. The 2006 edition of the book has neither a table of contents nor an index, but can be summarized as the presentation of various positive and negative causal links. Reception Michiko Kakutani argues in The New York Times that the book's "roots as a slide show are very much in evidence. It does not pretend to grapple with climate change with the sort of minute detail and analysis" given by other books on the topic "and yet as a user-friendly introduction to global warming and a succinct summary of many of the central arguments laid out in those other volumes, "An Inconvenient Truth" is lucid, harrowing and bluntly effective." In 2009, the audiobook version, narrated by Beau Bridges, Cynthia Nixon, and Blair Underwood, won the Grammy Award for Best Spoken Word Album. References External links OnTheIssues.org's book review and excerpts
energy in poland
The Polish energy sector is the fifth largest in Europe. In 2022, the country consumed 13.16 TWh of electricity, importing 3,114 GWh thereof. The Polish energy mix in 2021 was dominated by hard coal (approx. 48%) and lignite (24%). Among renewable sources, wind installations had the highest contribution, at 9%, with other sources playing a smaller part but growing at a faster pace. The plan is to generate at least 50% of electricity from renewable sources by 2040, by which time coal is to cease being used for electricity generation. Poland's 2040 energy plan PEP2040 is a government plan for the Polish fuel and energy sector, which aims for 50% zero-emission sources by 2040. It envisions building offshore wind farms and commissioning a nuclear power plant. The draft was presented in September 2020, aiming to tackle climate change, energy security, and a just transition. Poland is considering 6–9 GW of nuclear power, to be operational by 2040. Energy statistics Fossil fuels Coal In 2009 Poland produced 78 megatonnes (Mt) of hard coal and 57 Mt of brown coal. As of 2020, extraction is becoming increasingly difficult and expensive, and has become uncompetitive and reliant on government subsidies. In September 2020, the government and mining unions agreed on a plan to phase out coal by 2049, with coal used in power generation falling to negligible levels in 2032. Coal and the environment Coal mining has far-reaching effects on local water resources. Coal mining requires large amounts of water. Mining activities have dropped the water level of Lake Ostrowskie by almost two meters in the Kuyavia–Pomerania region and the lakes in the Powidz Landscape Park. According to Poznań's University of Agriculture, the water drainage in the Kleczew brown coal mining areas has formed craters in the area. Statistics from Eurostat show that Poland accounts for 30% of the European Union's annual consumption of coal. Ten coal power stations in Poland and Germany accounted for 13% of the EU's total emissions and 25% of all emissions from the power sector in 2022. Coal and the public In April 2008, five thousand people demonstrated in Kruszwica to protect cultural heritage and the nature reserve at Lake Gopło. This was the first protest of its kind in the country's history. Gopło Millennium Park (Nadgoplański Park Tysiąclecia) is protected by the European Union's Natura 2000 program and includes a major bird sanctuary. The Tomisławice opencast mine (less than 10 kilometers away from the Kruszwica mine) was due to open in 2009. In 2021 demonstrations took place with coal workers protesting against the EU plans to close coal as an energy source and later to close the Turow brown coal mine. Coal and business The Bełchatów Power Station in the Łódź region supplies almost 20% of Poland's energy. It is the largest brown coal power plant in the EU, and also the single biggest source of CO2 emissions in the region. Gas During the April 2022 Russia–European Union gas dispute, Russia cut off natural gas deliveries to Poland after demanding to be paid in Russian rubles during currency disruptions caused by the 2022 Russian invasion of Ukraine. In September 2022 a gas pipeline connecting Poland with Denmark, allowing gas from Norway to be delivered to Poland, was commissioned. Electricity In 2018, 48% of electricity produced in Poland came from hard coal, 29% from brown coal, 13% from renewable sources (mostly wind power) and 7% from natural gas. In parts of 2020, electricity costs in Poland were the highest in Europe. 
Renewable energy Renewable energy includes wind, solar, biomass and geothermal energy sources. A binding European Union resolution, the Renewable Energy Directive 2009, stipulates a 15% renewable energy target for total energy use in Poland by 2020. According to the Polish National Renewable Energy Action Plan, the 2020 figure is set to exceed this target by 0.5% at 15.5% of overall energy use, broken down as 19.1% of total electricity consumption, 17% in the heating and cooling sector, and 10.1% in the transport sector. As of 2014–2015 renewable energy provided around 10% of total primary energy supply in Poland as well as around 13% of total electricity generation. Progress towards targets As of year end 2014 Poland had achieved an 11.45% share of renewable energy use as a percentage of overall energy usage. The overall 2014 share breaks down as 13.95% of the heating and cooling sector, 12.40% of the electricity sector and 5.67% of the transport sector. Sources Biomass and waste As of 2015 biomass and waste was the largest source of renewable energy in Poland, providing an estimated 8.9% of total primary energy supply (TPES) in that year and an estimated 6.1% of electricity generation. In 2019 the installed capacity was 1,142 MW. Solid biomass is the most important source by volume, providing fuel for heat and power plants or consumed directly for industrial or household heat requirements. Biogases are also used in heat and power plants, while waste is mainly used as a fuel in industry. In 2014 0.7 Mtoe of biofuels were used in transport, 81% as biodiesel and 19% as biogasoline, making up 5% of the total energy consumption in the transport sector in 2014. Wind power Wind power is becoming more important, passing 1,000 MW of capacity in 2010 and 5,000 MW in 2015. The Polish NREAP plan of 6,700 MW of wind power by 2020 was almost met, whilst EWEA's 2009 forecast suggests a higher wind capacity of 10–12 GW is possible. Wind power is estimated to have provided 6.6% of total electricity generation in 2015. The total wind power grid-connected capacity in Poland was 8,857.2 MW as of 30 June 2023. Offshore wind In September 2020, the government announced a 130 billion zloty (£26.5 billion) plan to invest in offshore wind. Poland's "Offshore Wind Act" came into force in 2020. The main purpose of the Act is to set the framework for a dedicated subsidy scheme for offshore wind projects. However, it also addresses other relevant issues pertaining to the development and operation of offshore projects. According to the Polish Wind Energy Association (PWEA), offshore wind farms in the Baltic Sea with an overall capacity of 5.9 GW are set to "receive support under a two-sided contract for difference between the investor and the regulator. Awarding support under this formula will be time-limited until the end of June 2021." In a second phase, contracts are planned to be awarded by auctions. The first is to take place in 2025. The PWEA said that support will be available for projects with a total capacity of 2.5 GW in each of the auctions. By 2050, Poland wants 28 GW in the offshore sector, which would make Poland the largest operator of offshore wind in the Baltic Sea. On 1 July 2020 representatives of the Polish government and Polish wind energy industry signed a "Letter of Intent on cooperation for development of offshore wind power in Poland". 
The letter acknowledges the role of offshore wind in meeting the European Union's Green Deal objectives while increasing the security of energy supply and reducing Poland's CO2 emissions. In its National Energy and Climate Plan (NECP), Poland identified offshore wind as one of the key technologies to meet its goals for renewable energy for 2030. Offshore wind has also been described as strategic in the draft of Poland's Energy Policy until 2040. It will help diversify the Polish national power generation structure, which today depends heavily on coal. Hydroelectric power A 2023 study suggested that Poland is currently using only around 15% of its total hydroelectric power capacity. Poland currently has 786 hydroelectric power plants, the vast majority of which (705) are relatively small, generating no more than 1 MW. Many of the smaller power plants are privately owned by small firms and family businesses, with the bigger ones owned by major electricity producers or the state. Solar power In 2019, the Polish government launched a scheme called "Mój Prąd", which is dedicated to supporting the development of prosumer energy, and specifically supporting the segment of photovoltaic (PV) micro-installations. The budget of the program is currently PLN 1.1 billion. As a result, in recent years there has been a significant increase in power in this segment of the energy sector. The total solar photovoltaic (PV) grid-connected capacity in Poland was 14,668.1 MW as of 31 July 2023. Global warming On November 4, 2021, Poland signed the Global Coal to Clean Power Transition Statement. In April 2022 Jacek Sasin, minister for state assets and a deputy prime minister, said that the Russia-Ukraine war made it necessary for Poland to review an earlier energy strategy which assumed the closedown of coal energy. See also Oil industry in Poland Wind power in Poland Solar power in Poland Nuclear energy in Poland PGNiG – Polish state-controlled oil and natural gas company Renewable energy by country References External links European Commission National Renewable Energy Action Plans European Commission renewable energy Progress Reports European Commission National Energy Efficiency Energy Action Plans Report on the Polish power system (PDF 1.44 MB), February 2014
the uninhabitable earth (book)
The Uninhabitable Earth: Life After Warming is a 2019 non-fiction book by David Wallace-Wells about the consequences of global warming. It was inspired by his New York magazine article "The Uninhabitable Earth" (2017). Synopsis The book fleshes out Wallace-Wells' original New York magazine piece in more detail, exploring various possibilities for Earth's future across a spectrum of predicted temperature ranges. Wallace-Wells argues that even with active intervention, the effects of climate change will have catastrophic impacts across multiple spheres: rising sea levels, extreme weather events, extinctions, disease outbreaks, fires, droughts, famines, earthquakes, volcanic eruptions, floods, and increased geopolitical conflict, among other calamities. While the book is not focused on solutions, it recognizes solutions exist to prevent the worst of the damages: "a carbon tax and the political apparatus to aggressively phase out dirty energy; a new approach to agricultural practices and a shift away from beef and dairy in the global diet; and public investment in green energy and carbon capture". Reception The book has been both praised and criticized for its dramatic depictions of future life on Earth. As The Economist stated, "Some readers will find Mr. Wallace-Wells’s outline of possible futures alarmist. He is indeed alarmed. You should be, too." It was also reviewed in The Guardian, The New York Times, and Slate. A review in The Irish Times by John Gibbons was critical of the book's primary focus on the effects of climate change on humans rather than also covering impacts on other species. In The New Climate War, the climatologist Michael Mann dedicates 12 pages to commenting on "The Uninhabitable Earth". About the book, he notably writes that "while some of the blatant errors that marked the original article were largely gone, the pessimistic – and, at times, downright doomist – framing remained, as did exaggerated descriptions that fed the doomist narrative". Television adaptation In January 2020, it was reported that The Uninhabitable Earth would be adapted into an anthology series on HBO Max. Each episode will be about the dangers of climate change. Adam McKay will serve as the executive producer. Publications Wallace-Wells, David (February 19, 2019). The Uninhabitable Earth: Life after Warming. New York, USA: Tim Duggan Books. ISBN 978-0-525-57670-9. Hardcover edition. Wallace-Wells, David (March 17, 2020). The Uninhabitable Earth: Life after Warming. New York, USA: Tim Duggan Books. ISBN 978-0-525-57671-6. Paperback edition. References
el niño
El Niño ( el NEEN-yoh, Spanish: [el ˈniɲo]; lit. 'The Boy') is the warm phase of the El Niño–Southern Oscillation (ENSO). It is associated with a band of warm ocean water that develops in the central and east-central equatorial Pacific (approximately between the International Date Line and 120°W), including the area off the west coast of South America. The ENSO is the cycle of warm and cold sea surface temperature (SST) of the tropical central and eastern Pacific Ocean. El Niño is accompanied by high air pressure in the western Pacific and low air pressure in the eastern Pacific. El Niño phases are known to last close to four years; however, records demonstrate that the cycles have lasted between two and seven years. During the development of El Niño, rainfall develops between September–November. The cool phase of ENSO is La Niña, 'The Girl', with SST in the eastern Pacific below average, and air pressure high in the eastern Pacific and low in the western Pacific. The ENSO cycle, including both El Niño and La Niña, causes global changes in temperature and rainfall.Developing countries that depend on their own agriculture and fishing, particularly those bordering the Pacific Ocean, are usually most affected. In this phase of the Oscillation, the pool of warm water in the Pacific near South America is often at its warmest about Christmas. The original phrase, El Niño de Navidad, arose centuries ago, when Peruvian fishermen named the weather phenomenon after the newborn Christ. Concept Originally, the term El Niño applied to an annual weak warm ocean current that ran southwards along the coast of Peru and Ecuador at about Christmas time. However, over time the term has evolved and now refers to the warm and negative phase of the El Niño–Southern Oscillation and is the warming of the ocean surface or above-average sea surface temperatures in the central and eastern tropical Pacific Ocean. This warming causes a shift in the atmospheric circulation with rainfall becoming reduced over Indonesia, India and northern Australia, while rainfall and tropical cyclone formation increases over the tropical Pacific Ocean. The low-level surface trade winds, which normally blow from east to west along the equator, either weaken or start blowing from the other direction. It is believed that El Niño has occurred for thousands of years. For example, it is thought that El Niño affected the Moche in modern-day Peru. Scientists have also found chemical signatures of warmer sea surface temperatures and increased rainfall caused by El Niño in coral specimens that are around 13,000 years old. Around 1525, when Francisco Pizarro made landfall in Peru, he noted rainfall in the deserts, the first written record of the impacts of El Niño. Modernday research and reanalysis techniques have managed to find at least 26 El Niño events since 1900, with the 1982–83, 1997–98 and 2014–16 events among the strongest on record.Currently, each country has a different threshold for what constitutes an El Niño event, which is tailored to their specific interests. For example, the Australian Bureau of Meteorology looks at the trade winds, Southern Oscillation Index, weather models and sea surface temperatures in the Niño 3 and 3.4 regions, before declaring an El Niño. 
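The operational definitions summarized here and in the following paragraph all reduce to checking whether a running-mean sea surface temperature anomaly stays at or above a threshold for a minimum number of consecutive periods. A minimal sketch of such a check in Python, loosely following the NOAA Oceanic Niño Index convention described below (a +0.5 °C anomaly sustained for several consecutive overlapping three-month seasons); the anomaly series and the five-season requirement used here are illustrative assumptions, not real data or an official algorithm:

```python
# Sketch of an ONI-style El Niño check: compute overlapping 3-month mean SST
# anomalies and test whether the threshold is met for enough consecutive seasons.
# The monthly anomalies below are made-up illustrative values (in °C), not real data.
monthly_anomalies = [0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.6, 0.5, 0.3, 0.1]

THRESHOLD_C = 0.5            # anomaly threshold, as in NOAA's Oceanic Niño Index
MIN_CONSECUTIVE_SEASONS = 5  # "several seasons in a row"; 5 is used here for illustration

def seasonal_means(monthly):
    """Overlapping 3-month running means (one value per 'season')."""
    return [sum(monthly[i:i + 3]) / 3 for i in range(len(monthly) - 2)]

def el_nino_conditions(monthly):
    """True if the threshold is met or exceeded for enough consecutive seasons."""
    run = best = 0
    for season in seasonal_means(monthly):
        run = run + 1 if season >= THRESHOLD_C else 0
        best = max(best, run)
    return best >= MIN_CONSECUTIVE_SEASONS

print(el_nino_conditions(monthly_anomalies))  # True for this illustrative series
```
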
The United States Climate Prediction Center (CPC) and the International Research Institute for Climate and Society (IRI) look at the sea surface temperatures in the Niño 3.4 region and the tropical Pacific atmosphere, and declare an El Niño when NOAA's Oceanic Niño Index is forecast to equal or exceed +0.5 °C (0.90 °F) for several seasons in a row. However, the Japan Meteorological Agency declares that an El Niño event has started when the five-month average sea surface temperature deviation for the Niño 3 region is more than 0.5 °C (0.90 °F) above normal for six consecutive months or longer. The Peruvian government declares that a coastal El Niño is under way if the sea surface temperature deviation in the Niño 1+2 regions equals or exceeds 0.4 °C (0.72 °F) for at least three months. There is no consensus on whether climate change will have any influence on the strength or duration of El Niño events, as research alternately supports El Niño events becoming stronger and weaker, longer and shorter. However, recent scholarship has found that climate change is increasing the frequency of extreme El Niño events. Occurrences El Niño events are thought to have been occurring for thousands of years. For example, it is thought that El Niño affected the Moche in modern-day Peru, who sacrificed humans in order to try to prevent the rains. It is thought that there have been at least 30 El Niño events since 1900, with the 1982–83, 1997–98 and 2014–16 events among the strongest on record. Since 2000, El Niño events have been observed in 2002–03, 2004–05, 2006–07, 2009–10, 2014–16, 2018–19, and beginning in 2023. Major ENSO events were recorded in the years 1790–93, 1828, 1876–78, 1891, 1925–26, 1972–73, 1982–83, 1997–98, and 2014–16. Typically, this anomaly happens at irregular intervals of two to seven years, and lasts nine months to two years. The average period length is five years. When this warming occurs for seven to nine months, it is classified as El Niño "conditions"; when its duration is longer, it is classified as an El Niño "episode". During strong El Niño episodes, a secondary peak in sea surface temperature across the far eastern equatorial Pacific Ocean sometimes follows the initial peak. Cultural history and prehistoric information ENSO conditions have occurred at two- to seven-year intervals for at least the past 300 years, but most of them have been weak. Evidence is also strong for El Niño events during the early Holocene epoch 10,000 years ago. El Niño may have led to the demise of the Moche and other pre-Columbian Peruvian cultures. A recent study suggests a strong El Niño effect between 1789 and 1793 caused poor crop yields in Europe, which in turn helped touch off the French Revolution. The extreme weather produced by El Niño in 1876–77 gave rise to the most deadly famines of the 19th century. The 1876 famine alone in northern China killed up to 13 million people. An early recorded mention of the term "El Niño" to refer to climate occurred in 1892, when Captain Camilo Carrillo told the geographical society congress in Lima that Peruvian sailors named the warm south-flowing current "El Niño" because it was most noticeable around Christmas. Although pre-Columbian societies were certainly aware of the phenomenon, the indigenous names for it have been lost to history. The phenomenon had long been of interest because of its effects on the guano industry and other enterprises that depend on biological productivity of the sea. 
It is recorded that as early as 1822, cartographer Joseph Lartigue, of the French frigate La Clorinde under Baron Mackau, noted the "counter-current" and its usefulness for traveling southward along the Peruvian coast.Charles Todd, in 1888, suggested droughts in India and Australia tended to occur at the same time; Norman Lockyer noted the same in 1904. An El Niño connection with flooding was reported in 1894 by Victor Eguiguren (1852–1919) and in 1895 by Federico Alfonso Pezet (1859–1929). In 1924, Gilbert Walker (for whom the Walker circulation is named) coined the term "Southern Oscillation". He and others (including Norwegian-American meteorologist Jacob Bjerknes) are generally credited with identifying the El Niño effect.The major 1982–83 El Niño led to an upsurge of interest from the scientific community. The period 1990–95 was unusual in that El Niños have rarely occurred in such rapid succession. An especially intense El Niño event in 1998 caused an estimated 16% of the world's reef systems to die. The event temporarily warmed air temperature by 1.5 °C, compared to the usual increase of 0.25 °C associated with El Niño events. Since then, mass coral bleaching has become common worldwide, with all regions having suffered "severe bleaching". Diversity It is thought that there are several different types of El Niño events, with the canonical eastern Pacific and the Modoki central Pacific types being the two that receive the most attention. These different types of El Niño events are classified by where the tropical Pacific sea surface temperature (SST) anomalies are the largest. For example, the strongest sea surface temperature anomalies associated with the canonical eastern Pacific event are located off the coast of South America. The strongest anomalies associated with the Modoki central Pacific event are located near the International Date Line. However, during the duration of a single event, the area with the greatest sea surface temperature anomalies can change.The traditional Niño, also called Eastern Pacific (EP) El Niño, involves temperature anomalies in the Eastern Pacific. However, in the last two decades, atypical El Niños were observed, in which the usual place of the temperature anomaly (Niño 1 and 2) is not affected, but an anomaly arises in the central Pacific (Niño 3.4). The phenomenon is called Central Pacific (CP) El Niño, "dateline" El Niño (because the anomaly arises near the International Date Line), or El Niño "Modoki" (Modoki is Japanese for "similar, but different").The effects of the CP El Niño are different from those of the traditional EP El Niño—e.g., the recently discovered El Niño leads to more frequent Atlantic hurricanes.There is also a scientific debate on the very existence of this "new" ENSO. Indeed, a number of studies dispute the reality of this statistical distinction or its increasing occurrence, or both, either arguing the reliable record is too short to detect such a distinction, finding no distinction or trend using other statistical approaches, or that other types should be distinguished, such as standard and extreme ENSO.The first recorded El Niño that originated in the central Pacific and moved toward the east was in 1986. Recent Central Pacific El Niños happened in 1986–87, 1991–92, 1994–95, 2002–03, 2004–05 and 2009–10. Furthermore, there were "Modoki" events in 1957–59, 1963–64, 1965–66, 1968–70, 1977–78 and 1979–80. Some sources say that the El Niños of 2006-07 and 2014-16 were also Central Pacific El Niños. 
Effects on the global climate El Niño affects the global climate and disrupts normal weather patterns, which as a result can lead to intense storms in some places and droughts in others. Tropical cyclones Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies. Areas west of Japan and Korea tend to experience many fewer September–November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E, which would favor the Japanese archipelago.Within the Atlantic Ocean vertical wind shear is increased, which inhibits tropical cyclone genesis and intensification, by causing the westerly winds in the atmosphere to be stronger. The atmosphere over the Atlantic Ocean can also be drier and more stable during El Niño events, which can also inhibit tropical cyclone genesis and intensification. Within the Eastern Pacific basin: El Niño events contribute to decreased easterly vertical wind shear and favours above-normal hurricane activity. However, the impacts of the ENSO state in this region can vary and are strongly influenced by background climate patterns. The Western Pacific basin experiences a change in the location of where tropical cyclones form during El Niño events, with tropical cyclone formation shifting eastward, without a major change in how many develop each year. As a result of this change, Micronesia is more likely to be affected by tropical cyclones, while China has a decreased risk of being affected by tropical cyclones. A change in the location of where tropical cyclones form also occurs within the Southern Pacific Ocean between 135°E and 120°W, with tropical cyclones more likely to occur within the Southern Pacific basin than the Australian region. As a result of this change tropical cyclones are 50% less likely to make landfall on Queensland, while the risk of a tropical cyclone is elevated for island nations like Niue, French Polynesia, Tonga, Tuvalu, and the Cook Islands. Remote influence on tropical Atlantic Ocean A study of climate records has shown that El Niño events in the equatorial Pacific are generally associated with a warm tropical North Atlantic in the following spring and summer. About half of El Niño events persist sufficiently into the spring months for the Western Hemisphere Warm Pool to become unusually large in summer. Occasionally, El Niño's effect on the Atlantic Walker circulation over South America strengthens the easterly trade winds in the western equatorial Atlantic region. As a result, an unusual cooling may occur in the eastern equatorial Atlantic in spring and summer following El Niño peaks in winter. Cases of El Niño-type events in both oceans simultaneously have been linked to severe famines related to the extended failure of monsoon rains. Regional impacts Observations of El Niño events since 1950 show that impacts associated with El Niño events depend on the time of year. However, while certain events and impacts are expected to occur during events, it is not certain or guaranteed that they will occur. The impacts that generally do occur during most El Niño events include below-average rainfall over Indonesia and northern South America, while above average rainfall occurs in southeastern South America, eastern equatorial Africa, and the southern United States. 
Africa In Africa, East Africa—including Kenya, Tanzania, and the White Nile basin—experiences, in the long rains from March to May, wetter-than-normal conditions. Conditions are also drier than normal from December to February in south-central Africa, mainly in Zambia, Zimbabwe, Mozambique, and Botswana. Antarctica Many ENSO linkages exist in the high southern latitudes around Antarctica. Specifically, El Niño conditions result in high-pressure anomalies over the Amundsen and Bellingshausen Seas, causing reduced sea ice and increased poleward heat fluxes in these sectors, as well as the Ross Sea. The Weddell Sea, conversely, tends to become colder with more sea ice during El Niño. The exact opposite heating and atmospheric pressure anomalies occur during La Niña. This pattern of variability is known as the Antarctic dipole mode, although the Antarctic response to ENSO forcing is not ubiquitous. Asia As warm water spreads from the west Pacific and the Indian Ocean to the east Pacific, it takes the rain with it, causing extensive drought in the western Pacific and rainfall in the normally dry eastern Pacific. Singapore experienced the driest February in 2010 since records began in 1869, with only 6.3 mm of rain falling in the month and temperatures hitting as high as 35 °C on 26 February. The years 1968 and 2005 had the next driest Februaries, when 8.4 mm of rain fell. Australia and the Southern Pacific During El Niño events, the shift in rainfall away from the Western Pacific may mean that rainfall across Australia is reduced. Over the southern part of the continent, warmer than average temperatures can be recorded as weather systems are more mobile and fewer blocking areas of high pressure occur. The onset of the Indo-Australian Monsoon in tropical Australia is delayed by two to six weeks, which as a consequence means that rainfall is reduced over the northern tropics. The risk of a significant bushfire season in south-eastern Australia is higher following an El Niño event, especially when it is combined with a positive Indian Ocean Dipole event. During an El Niño event, New Zealand tends to experience stronger or more frequent westerly winds during their summer, which leads to an elevated risk of drier than normal conditions along the east coast. There is more rain than usual though on New Zealand's West Coast, because of the barrier effect of the North Island mountain ranges and the Southern Alps.Fiji generally experiences drier than normal conditions during an El Niño, which can lead to drought becoming established over the Islands. However, the main impacts on the island nation is felt about a year after the event becomes established. Within the Samoan Islands, below average rainfall and higher than normal temperatures are recorded during El Niño events, which can lead to droughts and forest fires on the islands. Other impacts include a decrease in the sea level, possibility of coral bleaching in the marine environment and an increased risk of a tropical cyclone affecting Samoa. Europe El Niño's effects on Europe are controversial, complex and difficult to analyse, as it is one of several factors that influence the weather over the continent and other factors can overwhelm the signal. North America Over North America, the main temperature and precipitation impacts of El Niño generally occur in the six months between October and March. 
In particular, the majority of Canada generally has milder than normal winters and springs, with the exception of eastern Canada where no significant impacts occur. Within the United States, the impacts generally observed during the six-month period include wetter-than-average conditions along the Gulf Coast between Texas and Florida, while drier conditions are observed in Hawaii, the Ohio Valley, Pacific Northwest and the Rocky Mountains. Historically, El Niño was not understood to affect U.S. weather patterns until Christensen et al. (1981) used entropy minimax pattern discovery based on information theory to advance the science of long-range weather prediction. Previous computer models of weather were based on persistence alone and reliable to only 5–7 days into the future. Long-range forecasting was essentially random. Christensen et al. demonstrated the ability to predict the probability that precipitation will be below or above average with modest but statistically significant skill one, two and even three years into the future. Studies of more recent weather events over California and the southwestern United States indicate that there is a variable relationship between El Niño and above-average precipitation, as it strongly depends on the strength of the El Niño event and other factors. Though it has been historically associated with high rainfall in California, the effects of El Niño depend more strongly on the "flavor" of El Niño than its presence or absence, as only "persistent El Niño" events lead to consistently high rainfall. The synoptic condition for the Tehuano wind, or "Tehuantepecer", is associated with a high-pressure area forming in the Sierra Madre of Mexico in the wake of an advancing cold front, which causes winds to accelerate through the Isthmus of Tehuantepec. Tehuantepecers primarily occur during the cold season months for the region in the wake of cold fronts, between October and February, with a summer maximum in July caused by the westward extension of the Azores High. Wind magnitude is greater during El Niño years than during La Niña years, due to the more frequent cold frontal incursions during El Niño winters. Its effects can last from a few hours to six days. Some El Niño events have also been recorded in the isotope signals of plants, which has helped scientists to study its impact. South America Because El Niño's warm pool feeds thunderstorms above, it creates increased rainfall across the east-central and eastern Pacific Ocean, including several portions of the South American west coast. The effects of El Niño in South America are direct and stronger than in North America. An El Niño is associated with warm and very wet weather between April and October along the coasts of northern Peru and Ecuador, causing major flooding whenever the event is strong or extreme. The effects during the months of February, March, and April may become critical along the west coast of South America. El Niño reduces the upwelling of cold, nutrient-rich water that sustains large fish populations, which in turn sustain abundant sea birds, whose droppings support the fertilizer industry. The reduction in upwelling leads to fish kills off the shore of Peru. The local fishing industry along the affected coastline can suffer during long-lasting El Niño events. The world's largest fishery collapsed due to overfishing during the 1972 El Niño Peruvian anchoveta reduction. 
During the 1982–83 event, jack mackerel and anchoveta populations were reduced, scallops increased in warmer water, but hake followed cooler water down the continental slope, while shrimp and sardines moved southward, so some catches decreased while others increased. Horse mackerel have increased in the region during warm events. Shifting locations and types of fish due to changing conditions create challenges for the fishing industry. Peruvian sardines have moved during El Niño events to Chilean areas. Other conditions provide further complications, such as the government of Chile in 1991 creating restrictions on the fishing areas for self-employed fishermen and industrial fleets. The ENSO variability may contribute to the great success of small, fast-growing species along the Peruvian coast, as periods of low population remove predators in the area. Similar effects benefit migratory birds that travel each spring from predator-rich tropical areas to distant winter-stressed nesting areas. Southern Brazil and northern Argentina also experience wetter than normal conditions, but mainly during the spring and early summer. Central Chile receives a mild winter with large rainfall, and the Peruvian-Bolivian Altiplano is sometimes exposed to unusual winter snowfall events. Drier and hotter weather occurs in parts of the Amazon River Basin, Colombia, and Central America. Galapagos Islands The Galapagos Islands are a chain of volcanic islands nearly 600 miles west of Ecuador, South America. Located in the Eastern Pacific Ocean, these islands are home to a wide diversity of terrestrial and marine species including sharks, birds, iguanas, turtles, penguins, and seals. This robust ecosystem is fueled by the normal trade winds which influence upwelling of cold, nutrient-rich waters to the islands. During an El Niño event, the trade winds weaken and sometimes reverse, blowing from west to east. This causes the equatorial current to weaken, thus raising surface water temperatures and decreasing nutrients in waters surrounding the Galapagos. El Niño causes a trophic cascade which impacts entire ecosystems starting with primary producers and ending with critical animals such as sharks, penguins, and seals. The effects of El Niño can become detrimental to populations that often starve and die off during these years. Rapid evolutionary adaptations are displayed amongst animal groups during El Niño years to mitigate El Niño conditions. Socio-ecological effects for humanity and nature Economic effects When El Niño conditions last for many months, extensive ocean warming and the reduction in easterly trade winds limit upwelling of cold nutrient-rich deep water, and the economic effect on local fishing for an international market can be serious. More generally, El Niño can affect commodity prices and the macroeconomy of different countries. It can constrain the supply of rain-driven agricultural commodities; reduce agricultural output, construction, and services activities; create food-price and generalised inflation; and may trigger social unrest in commodity-dependent poor countries that primarily rely on imported food. 
A University of Cambridge Working Paper shows that while Australia, Chile, Indonesia, India, Japan, New Zealand and South Africa face a short-lived fall in economic activity in response to an El Niño shock, other countries may actually benefit from an El Niño weather shock (either directly or indirectly through positive spillovers from major trading partners), for instance, Argentina, Canada, Mexico and the United States. Furthermore, most countries experience short-run inflationary pressures following an El Niño shock, while global energy and non-fuel commodity prices increase. The IMF estimates a significant El Niño can boost the GDP of the United States by about 0.5% (due largely to lower heating bills) and reduce the GDP of Indonesia by about 1.0%. Health and social impacts Extreme weather conditions related to the El Niño cycle correlate with changes in the incidence of epidemic diseases. For example, the El Niño cycle is associated with increased risks of some of the diseases transmitted by mosquitoes, such as malaria, dengue fever, and Rift Valley fever. Cycles of malaria in India, Venezuela, Brazil, and Colombia have now been linked to El Niño. Outbreaks of another mosquito-transmitted disease, Australian encephalitis (Murray Valley encephalitis—MVE), occur in temperate south-east Australia after heavy rainfall and flooding, which are associated with La Niña events. A severe outbreak of Rift Valley fever occurred after extreme rainfall in north-eastern Kenya and southern Somalia during the 1997–98 El Niño. ENSO conditions have also been related to Kawasaki disease incidence in Japan and the west coast of the United States, via the linkage to tropospheric winds across the north Pacific Ocean. ENSO may be linked to civil conflicts. Scientists at The Earth Institute of Columbia University, having analyzed data from 1950 to 2004, suggest ENSO may have had a role in 21% of all civil conflicts since 1950, with the risk of annual civil conflict doubling from 3% to 6% in countries affected by ENSO during El Niño years relative to La Niña years. Ecological consequences In terrestrial ecosystems, rodent outbreaks were observed in northern Chile and along the Peruvian coastal desert following the 1972–73 El Niño event. Some nocturnal primates (the western tarsier Tarsius bancanus and the slow loris Nycticebus coucang) and the Malayan sun bear (Helarctos malayanus) were locally extirpated or suffered drastic reductions in numbers within burned forests. Lepidoptera outbreaks were documented in Panamá and Costa Rica. During the 1982–83, 1997–98 and 2015–16 ENSO events, large expanses of tropical forest experienced a prolonged dry period that resulted in widespread fires, and drastic changes in forest structure and tree species composition in Amazonian and Bornean forests. These impacts are not restricted to vegetation: declines in insect populations were observed after the extreme drought and severe fires during the 2015–16 El Niño. Declines in habitat-specialist and disturbance-sensitive bird species and in large frugivorous mammals were also observed in Amazonian burned forests, while temporary extirpation of more than 100 lowland butterfly species occurred at a burned forest site in Borneo. Most critically, global mass bleaching events were recorded in 1997–98 and 2015–16, when losses of around 75–99% of live coral were recorded across the world. 
Considerable attention was also given to the collapse of Peruvian and Chilean anchovy populations that led to a severe fishery crisis following the ENSO events in 1972–73, 1982–83, 1997–98 and, more recently, in 2015–16. In particular, increased surface seawater temperatures in 1982–83 also led to the probable extinction of two hydrocoral species in Panamá, and to massive mortality of kelp beds along 600 km of coastline in Chile, from which kelps and associated biodiversity recovered only slowly in the most affected areas, even after 20 years. All these findings underscore the role of ENSO events as a strong climatic force driving ecological changes all around the world – particularly in tropical forests and coral reefs. In seasonally dry tropical forests, which are more drought tolerant, researchers found that El Niño-induced drought increased seedling mortality. In a study published in October 2022, researchers studied seasonally dry tropical forests in a national park in Chiang Mai in Thailand for seven years and observed that El Niño increased seedling mortality even in seasonally dry tropical forests and may impact entire forests in the long run. See also Ocean temperature References Further reading Caviedes, César N. (2001). El Niño in History: Storming Through the Ages. Gainesville: University Press of Florida. ISBN 978-0-8130-2099-0. Fagan, Brian M. (1999). Floods, Famines, and Emperors: El Niño and the Fate of Civilizations. New York: Basic Books. ISBN 978-0-7126-6478-3. Glantz, Michael H. (2001). Currents of change. Cambridge: Cambridge University Press. ISBN 978-0-521-78672-0. Philander, S. George (1990). El Niño, La Niña and the Southern Oscillation. San Diego: Academic Press. ISBN 978-0-12-553235-8. Kuenzer, C.; Zhao, D.; Scipal, K.; Sabel, D.; Naeimi, V.; Bartalis, Z.; Hasenauer, S.; Mehl, H.; Dech, S.; Waganer, W. (2009). "El Niño southern oscillation influences represented in ERS scatterometer-derived soil moisture data". Applied Geography. 29 (4): 463–477. doi:10.1016/j.apgeog.2009.04.004. Li, J.; Xie, S.-P.; Cook, E.R.; Morales, M.; Christie, D.; Johnson, N.; Chen, F.; d'Arrigo, R.; Fowler, A.; Gou, X.; Fang, K. (2013). "El Niño modulations over the past seven centuries" (PDF). Nature Climate Change. 3 (9): 822–826. Bibcode:2013NatCC...3..822L. doi:10.1038/nclimate1936. hdl:10722/189524. External links "Current map of sea surface temperature anomalies in the Pacific Ocean". earth.nullschool.net. "Southern Oscillation diagnostic discussion". Climate Prediction Center (CPC). United States National Oceanic and Atmospheric Administration.
natural refrigerant
Natural refrigerants are naturally occurring substances that serve as refrigerants in refrigeration systems (including refrigerators, HVAC, and air conditioning). They are alternatives to synthetic refrigerants such as chlorofluorocarbon (CFC), hydrochlorofluorocarbon (HCFC), and hydrofluorocarbon (HFC) based refrigerants. Unlike other refrigerants, natural refrigerants can be found in nature and are commercially available thanks to physical industrial processes like fractional distillation, chemical reactions such as the Haber process, and spin-off gases. The most prominent of these include various natural hydrocarbons, carbon dioxide, ammonia, and water. Natural refrigerants are increasingly preferred to their synthetic counterparts in new equipment because they are presumed to be more sustainable. With the current technologies available, almost 75 percent of the refrigeration and air conditioning sector has the potential to be converted to natural refrigerants. Background Synthetic refrigerants have been used in refrigeration systems since the creation of CFCs and HCFCs in 1929. When these refrigerants leak out of systems and into the atmosphere they can harm the ozone layer and contribute to global warming. CFC refrigerants contain carbon, fluorine, and chlorine and become a significant source of inorganic chlorine in the stratosphere after their photolytic decomposition by UV radiation. Released chlorine also becomes active in destroying the ozone layer. HCFCs have shorter atmospheric lifetimes than CFCs due to their addition of hydrogen, but still have adverse effects on the environment from their chlorine elements. HFCs do not contain chlorine and have short atmospheric lives, but still absorb infrared radiation to contribute to the greenhouse effect from their fluorine elements. In 1987 the Montreal Protocol first acknowledged these dangers and banned the use of CFCs by 2010. A 1990 amendment included agreements to phase out the use of HCFCs by 2020 with production and import being eliminated by 2030. HFC refrigerants, which have a negligible impact on the ozone layer, were seen as viable replacements, but these too have a high impact on global warming. The Kigali amendment of 2016 calls for these HFCs to be cut back by 80% over the next 30 years. Natural refrigerants are one of the potential options for replacement of HFCs, and are growing in usage and popularity as a result. The natural refrigerant industry is expected to have a compounded annual growth rate of 8.5% over the next 4 years, and is expected to become a US$2.88 billion industry by 2027. Sustainability metrics Refrigerants are typically evaluated on both their global warming potential (GWP) and ozone depletion potential (ODP). The GWP scale is standardized to carbon dioxide, where a refrigerant's value expresses how many times more heat it traps than the same mass of carbon dioxide over a given period. This is generally measured over a 100-year period. ODP measures the relative impact of a refrigerant on the ozone layer, standardized to R-11, which has a value of 1. GWP and ODP vary greatly among the different refrigerants. CFCs generally have the highest impact, with high GWP and ODP values. HCFCs have similar GWP values and medium ODP values. HFCs again have similar GWP values but a zero ODP value. Natural refrigerants have low to zero GWP values and zero ODP values. Natural refrigerants are therefore gaining increased interest as replacements for HFCs and offer a more sustainable option for refrigeration. 
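Because GWP is defined relative to carbon dioxide, it can be used directly to express a refrigerant leak in CO2-equivalent terms: the leaked mass is multiplied by the refrigerant's 100-year GWP. A minimal sketch in Python; the GWP values below are rough illustrative placeholders, not authoritative figures:

```python
# CO2-equivalent emissions of a refrigerant leak: mass_leaked_kg * GWP (100-year).
# The GWP values below are illustrative round numbers, not authoritative data.
ILLUSTRATIVE_GWP_100 = {
    "R-744 (carbon dioxide)": 1,   # GWP is defined relative to CO2
    "R-717 (ammonia)": 0,          # natural refrigerant, negligible GWP
    "R-290 (propane)": 3,          # natural hydrocarbon, very low GWP
    "R-134a (HFC)": 1400,          # synthetic HFC, high GWP (order of magnitude only)
}

def co2_equivalent_kg(refrigerant: str, mass_leaked_kg: float) -> float:
    """Return the 100-year CO2-equivalent mass of a leak, in kilograms."""
    return mass_leaked_kg * ILLUSTRATIVE_GWP_100[refrigerant]

if __name__ == "__main__":
    for name in ILLUSTRATIVE_GWP_100:
        print(f"1 kg leak of {name} is roughly {co2_equivalent_kg(name, 1.0):.0f} kg CO2e")
```

The contrast in the printed figures illustrates why natural refrigerants with near-zero GWP are attractive replacements for high-GWP HFCs.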
Refrigerants Hydrocarbons as refrigerants Pure hydrocarbon compounds see moderate use in refrigeration. Hydrocarbons are a viable option as refrigerants because, besides providing cooling properties, they are also plentiful and energy efficient. They are rated to be up to 50% more energy efficient than synthetic refrigerants. Hydrocarbons are also environmentally friendly, as they exist in nature and rank low on the global warming potential (GWP) scale. Historically, hydrocarbons have mainly seen use as a refrigerant for industrial chilling and refrigeration, but with the current shift towards natural refrigerants they are starting to see an increase in use in other areas of refrigeration. They are the favored refrigerant of many European countries. Hydrocarbons used as refrigerants include: Methane (CH4) [R-50] Ethane (CH3CH3) [R-170] Propane (CH3CH2CH3) [R-290] Ethylene (CH2CH2) [R-1150] n-butane (CH3CH2CH2CH3) [R-600] Isobutane (CH(CH3)3) [R-600a] Propylene (CH3CHCH2) [R-1270] Pentane (CH3CH2CH2CH2CH3) [R-601] Isopentane (CH(CH3)2CH2CH3) [R-601a] Cyclopentane ((CH2)5) Flammability The main detriment of using hydrocarbons as refrigerants is that they are extremely flammable at higher pressures. In the past, this risk was mitigated by turning hydrocarbons into CFCs, HCFCs, and HFCs, but with the increasing avoidance of such substances, the problem of flammability must be addressed. Refrigeration systems work by pressurizing the refrigerant to a point where it begins to display refrigerant properties, but because pressurizing hydrocarbons carries risk, greater caution is needed regarding the internal pressure. In order for hydrocarbons to combust, there must first be a release of hydrocarbons which mix with the correct proportion of air, and then an ignition source must be present. The range of flammability for hydrocarbons lies between 1 and 10%, and an ignition source must have an energy greater than 0.25 J or a temperature greater than 440 °C. Current safety measures regarding the usage of hydrocarbons are outlined by the Environmental Protection Agency (EPA). EPA guidelines for hydrocarbon usage as a refrigerant include specifically designating pressure ranges for hydrocarbon refrigerant systems, ensuring the removal of potentially fire-starting components from hydrocarbon refrigerant systems such as electrical components prone to sparking, and placing standards on the construction of the systems to ensure a higher level of safety. Installing ventilation such that the concentration in air would be less than the flammability limit and reducing the maximum charge size of the refrigerant are other viable safety measures. Technological advances to reduce the total refrigerant charge amount have been recently obtained using aluminum mini-channel heat exchangers. Applications and uses Hydrocarbon refrigerant markets have been growing as a result of increased concern about the environmental effects of typical synthetic refrigerants. According to ASHRAE, available equipment that utilizes hydrocarbon refrigerant includes the following: Systems with small charges such as domestic refrigerators, freezers, and portable air conditioners Stand-alone commercial refrigeration systems including beverage and ice-cream machines Centralized indirect systems for supermarket refrigeration Transport refrigeration systems for trucks Chillers in the range of 1 kW – 150 kW Carbon dioxide as a refrigerant (R-744) Carbon dioxide has seen extensive use as a refrigerant. 
Carbon dioxide's main advantage as a refrigerant stems from the fact that it is classified as an A1 refrigerant by the EPA, placing it in the least toxic and hazardous category for refrigerants. This makes carbon dioxide a viable refrigerant for systems that are used in areas where a leak could cause exposure. Carbon dioxide sees extensive use in large-scale refrigeration systems, sometimes via a cascade refrigeration system. It is also used sparingly in automotive refrigeration, and is seen as favorable for uses in domestic, commercial and industrial refrigeration and air conditioning systems. Carbon dioxide is also both plentiful and inexpensive. These factors have led to carbon dioxide being used as a refrigerant since 1850, when it was patented for use as a refrigerant in the United Kingdom. Carbon dioxide usage at the time was limited due to the high pressures required for refrigerant properties to manifest, but these pressures can be easily reached and sustained with current pressurization technology. The main concern over the use of carbon dioxide in refrigeration is the increased pressure required for carbon dioxide to act as a refrigerant. Carbon dioxide requires higher pressures to be able to condense within the cooling system, meaning that it has to be pressurized more than the other natural refrigerants. It can require up to 200 atmospheres to achieve adequate pressure for condensation. Refrigerant systems using carbon dioxide need to be built to withstand higher pressures. This prevents old coolant systems from being able to be retrofitted in order to use carbon dioxide. However, if carbon dioxide is used as a part of a cascade refrigeration system, it can be used at lower pressures. Using carbon dioxide in cascade refrigeration systems also means that the aforementioned benefits of availability and low price are applicable for a cascade system. There are also benefits to the increased required pressures. Increased pressures yield higher gas densities, which allow for greater refrigerating effects to be achieved. This makes it ideal for cooling dense loads such as ones found in server rooms. It also allows for carbon dioxide to perform well under cold (-30 to -50 °C) conditions, since there are very small reductions in saturation temperatures for a given pressure drop. Plate freezers and blast freezers have noted improvements in efficiency and freezing time using carbon dioxide. There are also propositions for improved thermodynamic cycles to increase the efficiency of carbon dioxide at higher temperatures. Equipment with carbon dioxide refrigerant is also not necessarily heavier, bulkier, or more dangerous than similar equipment despite its higher working pressures due to reduced refrigerant volume flow rates.When the pressure of carbon dioxide is raised above its critical point of 7.3773MPa it cannot be liquidized. Heat rejection must occur by cooling the dense gas, which creates a situation advantageous to water-heating heat pumps. These are particularly efficient with an incoming cold water supply. Ammonia as a refrigerant (R-717) Ammonia (NH3) used as a refrigerant is anhydrous ammonia, which is at least 99.5% pure ammonia. Water and oil cannot exceed 33 and 2ppm respectively. Ammonia refrigerant is stored in pressurized containers. 
When the pressure is released it undergoes rapid evaporation causing the temperature of the liquid to drop until it reaches its boiling point of -33 °C (-28 °F), which makes it useful in refrigeration systems. Ammonia has been used frequently in industrial refrigeration since it was first used in the compression process in 1872. It is used for its favorable thermodynamic properties, efficiency, and profitability. Ammonia is produced in massive quantities due to the fertilizer industry, making it relatively inexpensive. It has a GWP and ODP of zero, making the climate impact of ammonia leaks negligible. Ammonia is also tolerant of mineral oils and has low sensitivity to small amounts of water in the system. The vaporization heat of ammonia is high and the flow rate low, which requires different technologies to be used than other refrigerants. The low flow rate has historically limited ammonia to larger capacity systems. One of the largest issues with ammonia usage in refrigeration is its toxicity. Ammonia is lethal in certain doses, but proper preparation and emergency protocols can mitigate these risks down to as little as one death per decade, according to the EPA. One reason is ammonia's distinctive smell, which allows humans to detect leaks at concentrations as low as 5 ppm, while its toxic effects begin above 300 ppm. Exposure for up to thirty minutes can also be tolerated without lasting health effects. As a result, much of the hazard with using ammonia as a refrigerant is actually just a matter of public perception. The main focus of safety measures is therefore to avoid fast increases in concentration to a public panic level. Flammability is also not of particular concern, since the flammability range is 15–28%, concentrations that would be detected far in advance. It is classified as 2L by ASHRAE for low flammability. Applications and uses Ammonia-based refrigerant applications can include the following: Thermal storage systems HVAC chillers Process cooling Air conditioning Winter sports District cooling systems Heat pump systems Supermarkets Convenience stores Increasing output efficiencies for power generation facilities. Ammonia is expected to see increased use in HVAC&R industries as more officials become informed of its relative safety. It is already used in large heat pump installations and grocery stores, as well as in projects such as the International Space Station. Similar to carbon dioxide, ammonia can also be used in cascade refrigeration systems in order to improve the efficiency of the refrigeration process. There is an increasing use of cascade refrigeration systems that contain both ammonia and carbon dioxide. Absorption chillers with a water/ammonia mixture are also cost effective in some applications such as combined chilling, heat and power systems. Advancing technology also makes ammonia an increasingly viable option for small-scale systems. Water as a refrigerant (R-718) Water is nontoxic, nonflammable, has a zero GWP and ODP value, and has a low cost. Technical challenges, such as water's high specific volume at low temperatures, high pressure ratios required across the compressor, and high temperatures at the outlet of the compressor, exist as barriers to the use of water and water vapor as a refrigerant. 
Additionally, some applications can find issues with sediment build-up and bacterial growth, although these issues can be minimized with techniques such as adding chemicals to fight the bacteria and softening the water used. Water is commonly used at higher temperatures in lithium-bromide absorption chillers, but the coefficient of performance (COP) in these applications is just one fifth that of typical electric-drive centrifugal chillers. Vapor compression refrigeration cycles are a rare application but do have the potential to yield high COPs due to the thermophysical properties of water. Beyond absorption chillers, water can be used in desiccant dehumidification/evaporative cooling, adsorption chillers and compression chillers. Water has also been proposed to be used in special rotary compressors, although the dimensions and price of these systems can become very large. In typical heat pump systems water can be an ideal refrigerant substance, with some applications yielding COPs that exceed 20. This makes it an obvious choice for industrial applications with temperatures above 80 °C. Water has also been shown to be viable as a refrigerant in ground source heat pumps. Air as a refrigerant Air is free, non-toxic, and does not negatively impact the environment. Air can be used as a refrigerant in air cycle refrigeration systems, which work on the reverse Brayton or Joule cycle. Air is compressed and expanded to create heating and cooling capacities. Originally, reciprocating expanders and compressors were used, which created poor reliability. With the invention of rotary compressors and expanders, the efficiency and reliability of these cycles have improved, and alongside new compact heat exchangers this allows air to compete with more conventional refrigerants. Noble gases as refrigerants The noble gases are rarely used as refrigerants. The primary use of noble gases as refrigerants is in liquid super-coolant experimental systems in laboratories or in superconductors. This specifically applies to liquid helium, which has a boiling point of 4.2 K. They are never used for industrial or home refrigeration. Other natural refrigerants These natural refrigerants are substances that can be used in refrigeration systems but are not used or are only used very rarely because other compounds are available that are either less expensive or easier to handle and contain. Oxygen compounds Diethyl ether/ethyl ether ("ether") (CH3CH2OCH2CH3) [R-610] (from dehydrogenation of ethanol; extremely flammable) Methyl formate (HCOOCH3) [R-611] (from carbonylation of methanol, which usually both come from syngas; or by condensation of methanol; highly flammable) Dimethyl ether (CH3OCH3) [R-E170] (from dehydration of methanol, which comes from syngas, natural gas or from some biofuels; highly flammable, medium toxicity) Nitrogen compounds Methylamine (CH3NH2) [R-630] (from reaction of ammonia and methanol; medium toxicity; controlled substance) Ethylamine (CH3CH2NH2) [R-631] (from reaction of ammonia and ethanol; very toxic) Lubricant In refrigeration systems, oil is used to lubricate parts in the compressor to ensure proper function. In typical operations, some of this lubricant may inadvertently pass into another part of the system. This negatively impacts the heat transfer and frictional characteristics of the refrigerant. In order to avoid this, the lubricant oil needs to be sufficiently compatible and miscible with the refrigerant. 
CFC systems utilize mineral oils; however, HFC systems are not compatible with them and need to rely on ester and polyalkylene-glycol based oils, which are significantly more expensive. Hydrocarbons have significant solubility in standard mineral oils, so very low solubility lubricants are needed. Polyalkylene glycol and polyalphaolefin are typically used in these systems for their low pour point and vapor pressures. Traditional oils cannot be used as lubricants in carbon dioxide systems, since carbon dioxide is a stronger solvent than most HFCs. Polyester oil is specifically designed to be used in carbon dioxide-based systems and also helps to guard against the increased bearing wear and maintenance costs that may result from the higher stresses and pressures in a carbon dioxide system. Ammonia requires lubricants with low operating temperatures and high oxidation resistance, fluidity, and viscosity. Polyalphaolefin, or blends of polyalphaolefin and alkylbenzene, are typically used. References External links Greenpeace, Natural Refrigerants: The Solutions See also List of refrigerants Sustainable automotive air conditioning, also covering the Alliance for CO2 Solutions eurammon, a European non-profit initiative for natural refrigerants
geography and wealth
Geography and wealth have long been perceived as correlated attributes of nations. Scholars such as Jeffrey D. Sachs argue that geography has a key role in the development of a nation's economic growth. For instance, nations situated along coastal regions, or those with access to a nearby water source, tend to be more prosperous and are able to trade with neighboring nations. In addition, countries that have a tropical climate face significant difficulties such as disease, intense weather patterns, and lower agricultural productivity. This thesis is supported by the fact that levels of UV radiation have a negative impact on economic activity. There are a number of studies confirming that spatial development in countries with higher levels of economic development differs from countries with lower levels of development. The correlation between geography and a nation's wealth can be observed by examining a country's GDP (gross domestic product) per capita, which takes into account a nation's economic output and population. The wealthiest nations of the world with the highest standard of living tend to be those at the northern extreme of areas open to human habitation—including Northern Europe, the United States, and Canada. Within prosperous nations, wealth often increases with distance from the equator; for example, the Northeast United States has long been wealthier than its southern counterpart and northern Italy wealthier than southern regions of the country. Even within Africa this effect can be seen, as the nations farthest from the equator are wealthier. In Africa, the wealthiest nations are the three on the southern tip of the continent, South Africa, Botswana, and Namibia, and the countries of North Africa. Similarly, in South America, Argentina, Southern Brazil, Chile, and Uruguay have long been the wealthiest. Within Asia, Indonesia, located on the equator, is among the poorest. Within Central Asia, Kazakhstan is wealthier than other former Soviet Republics which border it to the south, like Uzbekistan. Very often such differences in economic development are linked to the North–South divide. This approach assumes an empirical division of the world into rich northern countries and poor southern countries. In addition, the problem of heterogeneous economic development (between the industrialised north and the agrarian south) also exists within the following countries: South and North Kazakhstan Southern (Osh, Batken) and northern (Bishkek city and Chui) oblasts in Kyrgyzstan. Southern Italy and Padania. Flanders and Wallonia. Catalonia, Basque Country and the rest of Spain. New England and the south-eastern states of the USA. Poland A and B. Researchers at Harvard's Center for International Development found in 2001 that only two tropical economies — Singapore and Hong Kong — are classified as high-income by the World Bank, while all countries within regions zoned as temperate had either middle- or high-income economies. Measurement Most of the recent studies use national gross domestic product per person, as measured by the World Bank and the International Monetary Fund, as the unit of comparison. Intra-national comparisons use their own data, and political divisions such as states or provinces then delineate the study areas. 
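The kind of comparison described in the Measurement paragraph above can be illustrated with a simple computation: pair each country's GDP per person with its absolute latitude and measure the correlation. A minimal sketch in Python; the country figures below are rough illustrative values for demonstration only, not World Bank or IMF data:

```python
# Correlation between absolute latitude and log GDP per capita.
# The figures below are rough illustrative values, not official statistics.
import math

countries = {
    # name: (absolute latitude in degrees, GDP per capita in USD)
    "Norway":    (61, 80000),
    "Canada":    (56, 50000),
    "Italy":     (42, 35000),
    "Mexico":    (23, 10000),
    "Indonesia": ( 2,  4500),
    "Kenya":     ( 1,  2000),
}

lats = [lat for lat, _ in countries.values()]
log_gdps = [math.log(gdp) for _, gdp in countries.values()]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"Correlation of |latitude| with log GDP per capita: {pearson(lats, log_gdps):.2f}")
```

In cross-country studies of this kind, log GDP per capita is typically used rather than the raw figure so that proportional differences in income are weighted comparably across rich and poor countries.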
Explanations Historic One of the first to describe and assess the phenomenon was the French philosopher Montesquieu, who asserted in the 18th century that "cold air constringes (sic) the extremities of the external fibres of the body; this increases their elasticity, and favours the return of the blood from the extreme parts to the heart. It contracts those very fibres; consequently it also increases their force. On the contrary, warm air relaxes and lengthens the extremes of the fibres; of course it diminishes their force and elasticity. People are therefore more vigorous in cold climates."The 19th century historian Henry Thomas Buckle wrote that "climate, soil, food, and the aspects of nature are the primary causes of intellectual progress—the first three indirectly, through determining the accumulation and distribution of wealth, and the last by directly influencing the accumulation and distribution of thought, the imagination being stimulated and the understanding subdued when the phenomena of the external world are sublime and terrible, the understanding being emboldened and the imagination curbed when they are small and feeble." The first industrial revolution marked the beginning of the divergence of different nations. Alex Trew introduces a spatial take-off model that uses data on occupations in 18th century England. The model predicts changes in the spatial distribution of agricultural and manufacturing employment, which are consistent with the 1817 and 1861 data. Thus, one of the historical factors influencing the geographical unevenness of wealth distribution is the degree of industrial development, catalysed by the Industrial Revolution. Climatic differences Physiologist Jared Diamond was inspired to write his Pulitzer Prize-winning work Guns, Germs, and Steel by a question posed by Yali, a New Guinean politician: why were Europeans so much wealthier than his people? In this book, Diamond argues that the Europe-Asia (Eurasia) land mass is particularly favorable for the transition of societies from hunter-gatherer to farming communities. This continent stretches much farther along the same lines of latitude than any of the other continents. Since it is much easier to transfer a domesticated species along the same latitude than it is to move it to a warmer or colder climate, any species developed at a particular latitude will be transferred across the continent in a relatively short amount of time. Thus the inhabitants of the Eurasian continent have had a built-in advantage in terms of earlier development of farming, and a greater range of plants and animals from which to choose. Oded Galor, and Ömer Özak examine existing differences in time preference across countries and regions, using pre-industrial agro-climatic characteristics. The authors conclude that agro-climatic characteristics have a significant impact through culture on economic behaviour, the degree of technology adoption and human capital. Cultural differences One important determinant of unequal wealth in spatial contexts is significant cultural differences, including issues of attitudes to women and the slave trade. Existing research finds that the descendants of societies that traditionally practised plough farming now have less equal gender norms. The evidence is strong and persistent across countries, areas within countries and ethnic groups within areas. This research is particularly relevant in terms of the division of labour and equality as a basic mechanism that contributes to economic development. 
Part of the work is devoted to the analysis of the slave trade as a cultural element of spatial development. Nathan Nunn and Leonard Wantchekon explore differences in levels of trust in Africa. The authors find that people whose ancestors were heavily raided during the slave trade are less trusting today. The authors used several strategies to determine trust and found that the link is causal. Furthermore, it is noted that the rugged terrain in Africa provided protection to those who were raided during the slave trade. Given that the slave trade inhibited subsequent economic development, the rugged terrain in Africa has also had a historically indirect positive effect on income. Another factor determining spatial development is the distance to pre-industrial technological frontiers. It has been noted that proximity to such frontiers can contribute to the emergence of a culture conducive to innovation, knowledge creation and entrepreneurship. Unequal technology development Diamond notes that modern technologies and institutions were designed primarily in a small area of northwestern Europe, a point also supported by Galor's book "The Journey of Humanity: The Origins of Wealth and Inequality". After the Scientific Revolution in Europe in the 16th century, the quality of life increased and wealth began to spread to the middle class. This included agricultural techniques, machines, and medicines. These technologies and models readily spread to areas colonized by Europe which happened to be of similar climate, such as North America and Australia. As these areas also became centres of innovation, this bias was further enhanced. Technologies from automobiles to power lines are more often designed for colder and drier regions, since most of their customers are from these regions. The book Guns, Germs, and Steel goes on to document a feedback effect of technologies being designed for the wealthy, which makes them wealthier and thus more able to fund technological development. Diamond notes that the far north has not always been the wealthiest latitude; until only a few centuries ago, the wealthiest belt stretched from Southern Europe through the Middle East, northern India and southern China, a point also highlighted in Mokyr's book "The Lever of Riches: Technological Creativity and Economic Progress". A dramatic shift in technologies, beginning with ocean-going ships and culminating in the Industrial Revolution, saw the most developed belt move north into northern Europe, China, and the Americas. Northern Russia became a superpower, while southern India became impoverished and colonized. Diamond argues that such dramatic changes demonstrate that the current distribution of wealth is not only due to immutable factors such as climate or race, citing the early emergence of agriculture in ancient Mesopotamia as evidence. Diamond also notes the feedback effect in the growth of medical technology, stating that more research money goes into curing the ailments of northerners. Disease Ticks, mosquitoes, and rats, which continue to be important disease vectors, benefit from warmer temperatures and higher humidities. There has long been a malarial belt spanning the equatorial portions of the globe; the disease is especially deadly to children under the age of five. Notably, in much of sub-Saharan Africa it has been almost impossible for most forms of northern livestock to thrive due to the endemic presence of the tsetse fly.
Bleakley finds empirical evidence that hookworm (ankylostomiasis) eradication in the American South contributed to a significant increase in income that coincided with eradication activities. Jared Diamond has linked domestication of animals in Europe and Asia to the development of diseases that enabled these countries to conquer the inhabitants of other continents. The close association of people in Eurasia with their domesticated animals provided a vector for the rapid transmission of diseases. Inhabitants of lands with few domesticated species were never exposed to the same range of diseases, and so, at least on the American continents, succumbed to diseases introduced from Eurasia. These effects were exhaustively discussed in William McNeill's book Plagues and Peoples. In addition, Ola Olsson and Douglas A. Hibbs argue that geographical and initial biogeographical conditions had a decisive influence on the location and timing of the transition to sedentary agriculture, complex social organisation and, ultimately, modern industrial production. The 2001 Harvard study mentions high infant mortality as another factor; since birth rates usually increase in compensation, women may delay their entry into the workforce to care for their younger children. The education of the surviving children then becomes difficult, perpetuating a cycle of poverty. Bloom, Canning and Fink argue that health in childhood affects productivity in adulthood. Other In "Climate and Scale in Economic Growth," William A. Masters and Margaret S. McMillan of Purdue University and Tufts University hypothesize that the disparity is partially due to the effects of frost in increasing soil fertility. Hernando Zuleta, of the Universidad del Rosario, has proposed that where output fluctuations are more profound, i.e. in areas that experience winter, saving is more pronounced, which leads to the adoption or creation of capital-intensive technologies. Daron Acemoglu, Simon Johnson, and James A. Robinson of MIT argued in 2001 that in places where Europeans faced high mortality rates, they could not settle and were more likely to set up exploitative institutions. These institutions offered no protection for private property or checks and balances against government expropriation. They assert that after controlling for the effect of institutions, countries in Africa or those closer to the equator do not have lower incomes. This work has been disputed by David Albouy, who argues that European mortality rates in the study were badly mismeasured, falsely supporting its conclusion. The authors of "Brain size, cranial morphology, climate, and time machines" assert that colder climates increase brain size, resulting in an intelligence differential. Dao, N.T. and Davila, J. argue that when a number of geographical factors coincide, the economy may become locked in Malthusian stagnation and never develop. Impact of global warming on wealth In a 2006 paper discussing the potential impact of global warming on wealth, John K. Horowitz of the University of Maryland predicted that a 2-degree Fahrenheit (1 °C) temperature increase across all countries would cause a decrease of 2 to 6 percent in world GDP, with a best estimate of around 3.5 percent. The former United Nations Secretary-General Ban Ki-moon had expressed concern that global warming will exacerbate the existing poverty in Africa. There is growing evidence that the inequality between rich and poor people's emissions within countries now exceeds the inequality between countries.
High-emitting individuals have more in common with one another across international borders, regardless of where they live, and personal wealth is noted to explain the sources of emissions better than national wealth does. Singapore In a 2009 interview for New Perspectives Quarterly, Singapore's founding father Lee Kuan Yew attributed the country's success to two things: ethnic tolerance and air conditioning, which he described as an extremely important invention that made civilization possible in the tropics. Lee argued that air conditioning allowed people to work during midday, previously difficult due to the tropical climate, and that he had air conditioners installed where civil servants worked. See also North-South divide Land (economics) List of countries by GDP (nominal) Development geography Measures of national income and output Tropical disease References Further reading Jared Diamond: Guns, Germs, and Steel: The Fates of Human Societies. W.W. Norton & Company, March 1997. ISBN 0-393-03891-2 Iain Hay: Geographies of the Super-rich. Edward Elgar. 2013. ISBN 978-0-85793-568-7 Iain Hay and Jonathan Beaverstock: Handbook on Wealth and the Super-rich. Edward Elgar. ISBN 978-1-78347-403-5 Theil, Henri, and Dongling Chen, "The Equatorial Grand Canyon," De Economist (1995). University of Texas at El Paso. Economic Geography: Lecture 21 Sachs, Jeffrey D., Andrew D. Mellinger, and John L. Gallup. 2001. The Geography of Poverty and Wealth. Scientific American. March 2001. McNeill, William H. Plagues and Peoples. New York: Anchor Books, 1976. ISBN 0-385-12122-9. Reprinted with new preface 1998.
energy in hungary
Energy in Hungary describes energy and electricity production, consumption and import in Hungary. Energy policy of Hungary describes the politics of Hungary related to energy. Statistics Nuclear power Hungary had, in 2017, four operating nuclear power reactors, constructed between 1982 and 1987, at the Paks Nuclear Power Plant. An agreement in 2014 with the EU and an agreement between Hungary and Rosatom may result in an additional two reactors being built for operation in 2030. The cost, estimated at €12.5bn, is being funded mainly by Russia. Oil Hungary relied on Russia for 46% of its oil needs in 2021, a decrease from 80% in 2013. An EU exemption to sanctions imposed following the Russian invasion of Ukraine in 2022 allows Hungary to continue importing oil from Russia until December 2023. MOL Group is an oil and gas group in Hungary. Gas Emfesz is a natural gas distributor in Hungary. Panrusgáz imports natural gas from Russia, mainly from Gazprom. The Arad–Szeged pipeline is a natural gas pipeline from Arad (Romania) to Szeged (Hungary). The Nabucco and South Stream gas pipelines were intended to reach Hungary and continue on to other European countries. The Nabucco gas pipeline was expected to carry 31bn cubic metres of gas annually through a 3,300 km long pipeline running via Turkey, Bulgaria, Romania, Hungary and Austria. The South Stream gas pipeline was expected to carry 63bn cu m of gas from southern Russia to Bulgaria under the Black Sea, and was planned to run via Hungary to central and southern Europe. Both projects were abandoned early in their design phases due to a mix of waning interest, changing priorities and changing geopolitical conditions in the larger Black Sea basin. As of 2022, Hungary relies on Russia for 80% of its natural gas and seeks to continue buying from Gazprom. In October 2023 Bulgaria passed a law taxing Russian gas in transit to Hungary at 20 levs (10.22 euro) per MWh, roughly 20% of the purchase price of the gas, a cost probably payable by Gazprom. Hungary has complained about the tax. Coal The last coal-fired electricity producer, the Matra Power Plant, produced around 9% of Hungary's electricity needs in 2020. It is served by two coal mines, in Visonta and Bükkábrány. The current generator is to shut down in 2025 and be replaced by a CCGT unit. Renewable energy Hungary is a member of the European Union and thus takes part in the EU strategy to increase the share of renewable energy. The EU adopted the 2009 Renewable Energy Directive, which included a 20% renewable energy target by 2020 for the EU. By 2030 wind should produce on average 26–35% of the EU's electricity and save Europe €56 billion a year in avoided fuel costs. Hungary's national forecast is 14.7% renewables in gross energy consumption by 2020, exceeding its 13% binding target by 1.7 percentage points. Hungary is the EU country with the smallest forecast penetration of renewables in electricity demand in 2020, namely only 11% (including 6% biomass and 3% wind power). The forecast includes 400 MW of new wind power capacity between 2010 and 2020. EWEA's 2009 forecast expects Hungary to reach 1.2 GW of installed wind capacity in this time. At the end of 2010, wind power capacity was 295 MW. Wind power No new wind farms have been built since 2012, with a law in 2016 effectively banning wind farms in Hungary by requiring that they be built more than 12 km from any community. Capacity in 2022 was 330 MW.
In 2022 Hungary published a Recovery and Resilience Plan outlining a total of HUF 2,300 billion (ca. EUR 6 billion) for strategic development projects in the energy sector, which may result in additional wind farms being built. Solar power Hungary had 4.8 GW of solar power capacity in 2022, up from 26 MW in 2016. The 2030 National Energy Strategy target is for 6 GW of capacity. Global warming In 2007, emissions of carbon dioxide totalled 53.9 million tonnes, or around 5.4 tonnes per capita, while the EU-27 average was 7.9 tonnes per capita. See also == References ==
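The solar power figures quoted above (26 MW in 2016 growing to 4.8 GW in 2022) imply a very steep compound annual growth rate. A minimal sketch of that arithmetic follows; it assumes both figures are end-of-year capacities six years apart, which is an interpretation of the text rather than something it states.

```python
# Back-of-the-envelope check of the implied growth in Hungarian solar capacity.
# Assumes end-of-year capacities in 2016 and 2022 (six years of growth).
start_mw = 26.0      # reported 2016 capacity, in MW
end_mw = 4800.0      # reported 2022 capacity (4.8 GW), in MW
years = 2022 - 2016

cagr = (end_mw / start_mw) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.0%}")  # roughly 139% per year
```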
1,1-difluoroethane
1,1-Difluoroethane, or DFE, is an organofluorine compound with the chemical formula C2H4F2. This colorless gas is used as a refrigerant, where it is often listed as R-152a (refrigerant-152a) or HFC-152a (hydrofluorocarbon-152a). It is also used as a propellant for aerosol sprays and in gas duster products. As an alternative to chlorofluorocarbons, it has an ozone depletion potential of zero, a lower global warming potential (124) and a shorter atmospheric lifetime (1.4 years). Production 1,1-Difluoroethane is a synthetic substance that is produced by the mercury-catalyzed addition of hydrogen fluoride to acetylene: HCCH + 2 HF → CH3CHF2The intermediate in this process is vinyl fluoride (C2H3F), the monomeric precursor to polyvinyl fluoride. Uses With a relatively low global warming potential (GWP) index of 124 and favorable thermophysical properties, 1,1-difluoroethane has been proposed as an environmentally friendly alternative to R134a. Despite its flammability, R152a also presents operating pressures and volumetric cooling capacity (VCC) similar to R134a so it can be used in large chillers or in more particular applications like heat pipe finned heat exchangers.Furthermore, 1,1-difluoroethane is also commonly used in gas dusters and numerous other retail aerosol products, particularly those subject to stringent volatile organic compound (VOC) requirements. The molecular weight of difluoroethane is 66, making it a useful and convenient tool for detecting vacuum leaks in Gas chromatography–mass spectrometry (GC-MS) systems. The cheap and freely available gas has a molecular weight and fragmentation pattern (base peak 51 m/z in typical EI-MS, major peak at 65 m/z) distinct from anything in air. If mass peaks corresponding to 1,1-difluoroethane are observed immediately after spraying a suspect leak point, leaks may be identified. Safety Difluoroethane is an extremely flammable gas, which decomposes rapidly on heating or burning, producing toxic and irritating fumes, including hydrogen fluoride and carbon monoxide.In a DuPont study, rats were exposed to up to 25,000 ppm (67,485 mg/m3) for six hours daily, five days a week for two years. This has become the no-observed-adverse-effect level for this substance. Prolonged exposure to 1,1-difluoroethane has been linked in humans to the development of coronary disease and angina. Repeated or sufficiently high levels of exposure, particularly purposeful inhalation, can precipitate fatal cardiac arrhythmia. Abuse Difluoroethane is an intoxicant with abuse potential. Fatal overdoses linked to difluoroethane include actress Skye McCole Bartusiak, singer Aaron Carter and wrestler Mike Bell. Bitterants, added voluntarily to some brands to deter purposeful inhalation, are often not legally required; they do not negate or counteract difluoroethane's intoxicating effects. Environmental abundance Most production, use, and emissions of HFC-152a have occurred within Earth's more industrialized and populated northern hemisphere following the substance's introduction in the 1990s. Its concentration in the northern troposphere reached an annual average of about 10 parts per trillion by year 2011. The concentration of HFC-152a in the southern troposphere is about 50% lower due to its removal rate (i.e. lifetime) of about 1.5 years being similar in magnitude to the global atmospheric mixing time of one to two years. See also List of refrigerants IPCC list of greenhouse gases Canned air == References ==
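The leak-detection use described above relies on difluoroethane's molecular weight of 66 and its characteristic fragments at 51 and 65 m/z. The sketch below reproduces that arithmetic from standard atomic weights; the fragment assignments (loss of H giving 65 m/z, loss of CH3 giving 51 m/z) are a conventional EI-MS interpretation offered here for illustration rather than something stated in the text.

```python
# Nominal masses behind the 66, 65 and 51 m/z figures quoted above.
# Atomic weights are standard values; fragment assignments are illustrative.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "F": 18.998}

def mass(formula):
    """Sum of atomic weights for a formula given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

parent = {"C": 2, "H": 4, "F": 2}    # 1,1-difluoroethane, CH3CHF2
loss_h = {"C": 2, "H": 3, "F": 2}    # [M - H]+   -> ~65 m/z
loss_ch3 = {"C": 1, "H": 1, "F": 2}  # [M - CH3]+ (CHF2+) -> ~51 m/z

print(f"C2H4F2 molecular weight: {mass(parent):.1f}")   # ~66.1
print(f"[M - H]+   fragment:     {mass(loss_h):.1f}")   # ~65.0
print(f"[M - CH3]+ fragment:     {mass(loss_ch3):.1f}") # ~51.0
```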
energy in estonia
Energy in Estonia has heavily depended on fossil fuels. Finland and Estonia are two of the last countries in the world still burning peat. Estonia has set a target of 100% of electricity production from renewable sources by 2030. Statistics Energy plan The National Energy and Climate Plan published in 2019 aims to reduce greenhouse gas emissions by 70% by 2030 and by 80% by 2050. Renewable energy must account for at least 42%, with a target of 16 TWh in 2030. The plan was changed in October 2022, when Estonia set a target date of 2030 to generate 100% of its electricity from renewables. Energy types Renewable energy Renewable energy includes wind, solar, biomass and geothermal energy sources. Wind power Wind power had a capacity of 320 MW in 2020; however, investment continues, with the €200m, 255 MW Sopi-Tootsi wind project planned to be operational by 2024. Solar power Solar power has received investment since 2014. In 2022, Estonian solar power plants produced 2,569 gigawatt-hours (GWh) of renewable energy, and 26 million euros were paid in subsidies for electricity produced from solar power that year. Biomass power Biomass provides around 25% of the electrical energy capacity. Fossil fuels Oil-shale Oil-shale powered generators accounted for 70% of electricity generation in Estonia in 2019. The original target was to reduce oil-shale production by 2035, with production ceasing entirely by 2040; this has been changed to ceasing oil-shale production by 2030. Between 2018 and 2022 oil-shale extraction and use fell by 50%. Natural Gas Estonia has the Balticconnector pipeline, which links Estonia with Finland. In April 2022 Estonia reduced gas imports from Russia, and on 29 September 2022 Estonia banned buying natural gas from Russia. Work began on LNG facilities at Paldiski, which were completed in October 2022, and on increasing transmission capacities at existing interconnection points. In December 2022 a floating LNG terminal, which connects to Estonia, became operational in Finland. Electricity Electricity production in Estonia is largely dependent on fossil fuels. In 2007, more than 90% of power was generated from oil shale. The Estonian energy company Eesti Energia owns the largest oil shale-fuelled power plants in the world, the Narva Power Plants. There are two submarine power cables from Finland, with a combined rated power of 1000 MW. Estonia's all-time peak consumption is 1591 MW (in 2021). It was agreed in 2018 that Estonia, Latvia and Lithuania will connect to the European Union's electricity system and desynchronize from the Russian BRELL power system; this is expected to be completed by February 2025. A backup plan, should Russia disconnect the Baltic states before 2025, would enable a connection to the European grid to be completed within 24 hours. Transport sector In February 2013, Estonia had a network of 165 fast chargers for electric cars (for a population of 1.3 million). This grew to 400 in 2022. See also Energy in Latvia Energy in Lithuania References == External links ==
hurricane sandy
Hurricane Sandy (unofficially referred to as Superstorm Sandy) was an extremely large and destructive Category 3 Atlantic hurricane which ravaged the Caribbean and the coastal Mid-Atlantic region of the United States in late October 2012. It was the largest Atlantic hurricane on record as measured by diameter, with tropical-storm-force winds spanning 1,150 miles (1,850 km). The storm inflicted nearly $70 billion (2012 USD) in damage and killed 233 people across eight countries from the Caribbean to Canada. The eighteenth named storm, tenth hurricane, and second major hurricane of the 2012 Atlantic hurricane season, Sandy was a Category 3 storm at its peak intensity when it made landfall in Cuba, though most of the damage it caused was after it became a Category 1-equivalent extratropical cyclone off the coast of the Northeastern United States.Sandy developed from a tropical wave in the western Caribbean Sea on October 22, quickly strengthened, and was upgraded to Tropical Storm Sandy six hours later. Sandy moved slowly northward toward the Greater Antilles and gradually intensified. On October 24, Sandy became a hurricane, made landfall near Kingston, Jamaica, re-emerged a few hours later into the Caribbean Sea and strengthened into a Category 2 hurricane. On October 25, Sandy hit Cuba as a Category 3 hurricane, then weakened to a Category 1 hurricane. Early on October 26, Sandy moved through the Bahamas. On October 27, Sandy briefly weakened to a tropical storm and then restrengthened to a Category 1 hurricane. Early on October 29, Sandy curved west-northwest (the "left turn" or "left hook") and then moved ashore near Brigantine, New Jersey, just to the northeast of Atlantic City, as a post-tropical cyclone with hurricane-force winds. Sandy continued drifting inland for another few days while gradually weakening, until it was absorbed by another approaching extratropical storm on November 2.In Jamaica, winds left 70 percent of residents without electricity, blew roofs off buildings, killed one person, and caused about $100 million (2012 USD) in damage. Sandy's outer bands brought flooding to Haiti, killing at least 54, causing food shortages, and leaving about 200,000 homeless; the hurricane also caused two deaths in the Dominican Republic. In Puerto Rico, one man was swept away by a swollen river. In Cuba, there was extensive coastal flooding and wind damage inland, destroying some 15,000 homes, killing 11, and causing $2 billion (2012 USD) in damage. Sandy caused two deaths and an estimated $700 million (2012 USD) in damage in The Bahamas. In the United States, Hurricane Sandy affected 24 states, including the entire eastern seaboard from Florida to Maine and west across the Appalachian Mountains to Michigan and Wisconsin, with particularly severe damage in New Jersey and New York. Its storm surge hit New York City on October 29, flooding streets, tunnels and subway lines and cutting power in and around the city. Damage in the United States amounted to $65 billion (2012 USD). In Canada, two were killed in Ontario, and the storm caused an estimated $100 million (2012 CAD) in damage throughout Ontario and Quebec. Meteorological history Hurricane Sandy began as a low pressure system which developed sufficient organized convection to be classified as Tropical Depression Eighteen on October 22 south of Kingston, Jamaica. It moved slowly at first due to a ridge to the north. 
Low wind shear and warm waters allowed for strengthening, and the system was named Tropical Storm Sandy late on October 22. Early on October 24, an eye began developing, and it was moving steadily northward due to an approaching trough. Later that day, the National Hurricane Center (NHC) upgraded Sandy to hurricane status about 65 mi (105 km) south of Kingston, Jamaica. At about 1900 UTC that day, Sandy made landfall near Kingston with winds of about 85 mph (137 km/h). Just offshore Cuba, Sandy rapidly intensified to a Category 3 hurricane, with sustained winds at 115 mph (185 km/h) and a minimum central pressure of 954 millibars (28.2 inHg), and at that intensity, Sandy made landfall just west of Santiago de Cuba at 0525 UTC on October 25. Operationally, Sandy was assessed to have peaked as a high-end Category 2 hurricane, with maximum sustained winds of 110 mph (180 km/h). After Sandy exited Cuba, the structure of the storm became disorganized, and it turned to the north-northwest over the Bahamas. By October 27, Sandy was no longer fully tropical, as evidenced by the development of frontal structures in its outer circulation. Despite strong shear, Sandy maintained its convection due to influence from an approaching trough; the same that turned the hurricane to the northeast. After briefly weakening to a tropical storm, Sandy re-intensified into a Category 1 hurricane, and on October 28, an eye began redeveloping. The storm moved around an upper-level low over the eastern United States and also to the southwest of a ridge over Atlantic Canada, turning it to the northwest. Sandy briefly re-intensified to Category 2 intensity on the morning of October 29, around which time it had become an extremely large hurricane, with a record gale-force wind diameter of over 1,150 miles (1,850 km), and an unusually low central barometric pressure of 940 mbar, possibly due to the very large size of the system. This pressure set records for many cities across the Northeastern United States for the lowest pressures ever observed. The convection diminished while the hurricane accelerated toward the New Jersey coast, and the cyclone was no longer tropical by 2100 UTC on October 29. About 2½ hours later, Sandy made landfall near Brigantine, New Jersey, with sustained winds of 80 mph (130 km/h). During the next four days, Sandy's remnants drifted northward and then northeastward over Ontario, before merging with another low pressure area over Eastern Canada on November 2. Forecasts On October 23, 2012, the path of Hurricane Sandy was correctly predicted by the European Centre for Medium-Range Weather Forecasts (ECMWF) headquartered in Reading, England nearly eight days in advance of its striking the American East Coast. The computer model noted that the storm would turn west towards land and strike the New York/New Jersey region on October 29, rather than turn east and head out to the open Atlantic as most hurricanes in this position do. By October 27, four days after the ECMWF made its prediction, the National Weather Service and National Hurricane Center confirmed the path of the hurricane predicted by the European model. The National Weather Service was criticized for not employing its higher-resolution forecast models the way that its European counterpart did. A hardware and software upgrade completed at the end of 2013 enabled the weather service to make predictions more accurate and farther in advance than the technology in 2012 had allowed. 
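The meteorological history above pairs US-customary and metric figures, for example 954 millibars with 28.2 inHg and 115 mph with 185 km/h. A minimal sketch of those unit conversions follows, purely as a sanity check on the paired values already quoted; it introduces no new measurements.

```python
# Unit-conversion check for figures quoted in the meteorological history above.
# Conversion factors are standard; the input values come from the text.
MBAR_TO_INHG = 0.029530  # inches of mercury per millibar
MPH_TO_KMH = 1.609344    # kilometres per hour per mile per hour

for p_mbar in (954, 940):
    print(f"{p_mbar} mbar = {p_mbar * MBAR_TO_INHG:.1f} inHg")

for w_mph in (115, 110, 85, 80):
    print(f"{w_mph} mph = {w_mph * MPH_TO_KMH:.0f} km/h")
```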
Relation to global warming According to NCAR senior climatologist Kevin E. Trenberth, "The answer to the oft-asked question of whether an event is caused by climate change is that it is the wrong question. All weather events are affected by climate change because the environment in which they occur is warmer and moister than it used to be." Although NOAA meteorologist Martin Hoerling attributes Sandy to "little more than the coincidental alignment of a tropical storm with an extratropical storm", Trenberth does agree that the storm was caused by "natural variability" but adds that it was "enhanced by global warming". One factor contributing to the storm's strength was abnormally warm sea surface temperatures offshore the East Coast of the United States—more than 3 °C (5 °F) above normal, to which global warming had contributed 0.6 °C (1 °F). As the temperature of the atmosphere increases, the capacity to hold water increases, leading to stronger storms and higher rainfall amounts.As they move north, Atlantic hurricanes typically are forced east and out to sea by the Prevailing Westerlies. In Sandy's case, this typical pattern was blocked by a ridge of high pressure over Greenland resulting in a negative North Atlantic Oscillation, forming a kink in the jet stream, causing it to double back on itself off the East Coast. Sandy was caught up in this southeasterly flow, taking the storm on an unusual northwest path. The blocking pattern over Greenland also stalled an Arctic front which combined with the cyclone. Mark Fischetti of Scientific American said that the jet stream's unusual shape was caused by the melting of Arctic ice. Trenberth said that while a negative North Atlantic Oscillation and a blocking anticyclone were in place, the null hypothesis remained that this was just the natural variability of weather. Sea level at New York and along the New Jersey coast has increased by nearly a foot (300 mm) over the last hundred years, which contributed to the storm surge. One group of scientists estimated that the anthropogenic (human activity-driven) climate change was responsible for approximately 9 cm of sea level rise in New York, which permitted additional storm surge that caused approximately US$8.1B out of the $60 billion in reported economic damage and to an extension of the flood zone to impact approximately 71,000 more people than would have been the case without it. Harvard geologist Daniel P. Schrag calls Hurricane Sandy's 13-foot (4.0 m) storm surge an example of what will, by mid-century, be the "new norm on the Eastern seaboard". Preparations Caribbean and Bermuda After the storm became a tropical cyclone on October 22, the Government of Jamaica issued a tropical storm watch for the entire island. Early on October 23, the watch was replaced with a tropical storm warning and a hurricane watch was issued. At 1500 UTC, the hurricane watch was upgraded to a hurricane warning, while the tropical storm warning was discontinued. In preparation of the storm, many residents stocked up on supplies and reinforced roofing material. Acting Prime Minister Peter Phillips urged people to take this storm seriously, and also to take care of their neighbors, especially the elderly, children, and disabled. Government officials shut down schools, government buildings, and the airport in Kingston on the day prior to the arrival of Sandy. Meanwhile, numerous and early curfews were put in place to protect residents, properties, and to prevent crime. 
Shortly after Jamaica issued its first watch on October 22, the Government of Haiti issued a tropical storm watch for Haiti. By late October 23, it was modified to a tropical storm warning.The Government of Cuba posted a hurricane watch for the Cuban Provinces of Camagüey, Granma, Guantánamo, Holguín, Las Tunas, and Santiago de Cuba at 1500 UTC on October 23. Only three hours later, the hurricane watch was switched to a hurricane warning. The Government of the Bahamas, at 1500 UTC on October 23, issued a tropical storm watch for several Bahamian islands, including the Acklins, Cat Island, Crooked Island, Exuma, Inagua, Long Cay, Long Island, Mayaguana, Ragged Island, Rum Cay, and San Salvador Island. Later that day, another tropical storm watch was issued for Abaco Islands, Andros Island, the Berry Islands, Bimini, Eleuthera, Grand Bahama, and New Providence. By early on October 24, the tropical storm watch for Cat Island, Exuma, Long Island, Rum Cay, and San Salvador was upgraded to a tropical storm warning.At 1515 UTC on October 26, the Bermuda Weather Service issued a tropical storm watch for Bermuda, reflecting the enormous size of the storm and the anticipated wide-reaching impacts. United States Much of the East Coast of the United States, in Mid-Atlantic and New England regions, had a good chance of receiving gale-force winds, flooding, heavy rain and possibly snow early in the week of October 28 from an unusual hybrid of Hurricane Sandy and a winter storm producing a Fujiwhara effect. Government weather forecasters said there was a 90% chance that the East Coast would be impacted by the storm. Jim Cisco of the Hydrometeorological Prediction Center coined the term "Frankenstorm", as Sandy was expected to merge with a storm front a few days before Halloween. As coverage continued, several media outlets began eschewing this term in favor of "superstorm". Utilities and governments along the East Coast attempted to head off long-term power failures Sandy might cause. Power companies from the Southeast to New England alerted independent contractors to be ready to help repair storm damaged equipment quickly and asked employees to cancel vacations and work longer hours. Researchers from Johns Hopkins University, using a computer model built on power outage data from previous hurricanes, conservatively forecast that 10 million customers along the Eastern Seaboard would lose power from the storm. Through regional offices in Atlanta, Philadelphia, New York City, and Boston, the Federal Emergency Management Agency (FEMA) monitored Sandy, closely coordinating with state and tribal emergency management partners in Florida and the Southeast, Mid-Atlantic, and New England states. President Obama signed emergency declarations on October 28 for several states expected to be impacted by Sandy, allowing them to request federal aid and make additional preparations in advance of the storm. Flight cancellations and travel alerts on the U.S. East Coast were put in place in the Mid-Atlantic and the New England areas. Over 5,000 commercial airline flights scheduled for October 28 and 29 were canceled by the afternoon of October 28 and Amtrak canceled some services through October 29 in preparation for the storm. In addition, the National Guard and U.S. Air Force put as many as 45,000 personnel in at least seven states on alert for possible duty in response to the preparations and aftermath of Sandy. Southeast Florida Schools on the Treasure Coast announced closures for October 26 in anticipation of Sandy. 
A Russian intelligence-gathering ship was allowed to stay in Jacksonville to avoid Sandy; the port is not far from Kings Bay Naval Submarine Base. Carolinas At 0900 UTC on October 26, a tropical storm watch was issued from the mouth of the Savannah River in South Carolina to Oregon Inlet, North Carolina, including Pamlico Sound. Twelve hours later, the portion of the tropical storm watch from the Santee River in South Carolina to Duck, North Carolina, including Pamlico Sound, was upgraded to a warning. Governor of North Carolina Beverly Perdue declared a state of emergency for 38 eastern counties on October 26, which took effect on the following day. By October 29, the state of emergency was extended to 24 counties in western North Carolina, with up to a foot (30 cm) of snow attributed to Sandy anticipated in higher elevations. The National Park Service closed at least five sections of the Blue Ridge Parkway. Mid-Atlantic Virginia On October 26, Governor of Virginia Bob McDonnell declared a state of emergency. The U.S. Navy sent more than twenty-seven ships and forces to sea from Naval Station Norfolk for their protection. Governor McDonnell authorized the National Guard to activate 630 personnel ahead of the storm. Republican Party presidential candidate Mitt Romney canceled campaign appearances scheduled for October 28 in Virginia Beach, Virginia, and New Hampshire October 30 because of Sandy. Vice President Joe Biden canceled his appearance on October 27 in Virginia Beach and an October 29 campaign event in New Hampshire. President Barack Obama canceled a campaign stop with former President Bill Clinton in Virginia scheduled for October 29, as well as a trip to Colorado Springs, Colorado, the next day because of the impending storm. Washington, D.C. On October 26, Mayor of Washington, D.C. Vincent Gray declared a state of emergency, which President Obama signed on October 28. The United States Office of Personnel Management announced federal offices in the Washington, D.C. area would be closed to the public on October 29–30. In addition, Washington D.C. Metro service, both rail and bus, was canceled on October 29 due to expected high winds, the likelihood of widespread power outages, and the closing of the federal government. The Smithsonian Institution closed for the day of October 29. Maryland Governor of Maryland Martin O'Malley declared a state of emergency on October 26. By the following day, Smith Island residents were evacuated with the assistance of the Maryland Natural Resources Police, Dorchester County opened two shelters for those in flood prone areas, and Ocean City initiated Phase I of their Emergency Operations Plan. Baltimore Gas and Electric Co. put workers on standby and made plans to bring in crews from other states. On October 28, President Obama declared an emergency in Maryland and signed an order authorizing the Federal Emergency Management Agency to aid in disaster relief efforts. Also, numerous areas were ordered to be evacuated including part of Ocean City, Worcester County, Wicomico County, and Somerset County. Officials warned that more than a hundred million tons of dirty sediment mixed with tree limbs and debris floating behind Conowingo Dam could eventually pour into the Chesapeake Bay, posing a potential environmental threat.The Maryland Transit Administration canceled all service for October 29 and 30. The cancellations applied to buses, light rail, and Amtrak and MARC train service. 
On October 29, six shelters opened in Baltimore, and early voting was canceled for the day. Maryland Insurance Commissioner Therese M. Goldsmith activated an emergency regulation requiring pharmacies to refill prescriptions regardless of their last refill date. On October 29, the Chesapeake Bay Bridge over the Chesapeake Bay and the Millard E. Tydings Memorial Bridge and Thomas J. Hatem Memorial Bridge over the Susquehanna River were closed to traffic in the midday hours. Delaware On October 28, Governor Markell declared a state of emergency, with coastal areas of Sussex County evacuated. In preparation for the storm, the Delaware Department of Transportation suspended some weekend construction projects, removed traffic cones and barrels from construction sites, and removed several span-wire overhead signs in Sussex County. Delaware Route 1 through Delaware Seashore State Park was closed due to flooding. Delaware roads were closed to the public, except for emergency and essential personnel, and tolls on I-95 and Delaware Route 1 were waived. DART First State transit service was also suspended during the storm. New Jersey Preparations began on October 26, when officials in Cape May County advised residents on barrier islands to evacuate. There was also a voluntary evacuation for Mantoloking, Bay Head, Barnegat Light, Beach Haven, Harvey Cedars, Long Beach, Ship Bottom, and Stafford in Ocean County. Governor of New Jersey Chris Christie ordered all residents of barrier islands from Sandy Hook to Cape May to evacuate and closed Atlantic City casinos. Tolls were suspended on the northbound Garden State Parkway and the westbound Atlantic City Expressway starting at 6 a.m. on October 28. President Obama signed an emergency declaration for New Jersey, allowing the state to request federal funding and other assistance for actions taken before Sandy's landfall.On October 28, Mayor of Hoboken Dawn Zimmer ordered residents of basement and street-level residential units to evacuate, due to possible flooding. On October 29, residents of Logan Township were ordered to evacuate. Jersey Central Power & Light told employees to prepare to work extended shifts. Most schools, colleges and universities were closed October 29 while at least 509 out of 580 school districts were closed October 30. Although tropical storm conditions were inevitable and hurricane-force winds were likely, the National Hurricane Center did not issue any tropical cyclone watches or warnings for New Jersey, because Sandy was forecast to become extratropical before landfall and thus would not be a tropical cyclone. Pennsylvania Preparations in Pennsylvania began when Governor Tom Corbett declared a state of emergency on October 26. Mayor of Philadelphia Michael Nutter asked residents in low-lying areas and neighborhoods prone to flooding to leave their homes by 1800 UTC October 28 and move to safer ground. The Philadelphia International Airport suspended all flight operations for October 29. On October 29, Philadelphia shut down its mass transit system. On October 28, Mayor of Harrisburg Linda D. Thompson declared a state of disaster emergency for the city to go into effect at 5 a.m. October 29. Electric utilities in the state brought in crews and equipment from other states such as New Mexico, Texas, and Oklahoma, to assist with restoration efforts. New York Governor Andrew Cuomo declared a statewide state of emergency and asked for a pre-disaster declaration on October 26, which President Obama signed later that day. 
By October 27, major carriers canceled all flights into and out of JFK, LaGuardia, and Newark-Liberty airports, and Metro North and the Long Island Rail Road suspended service. The Tappan Zee Bridge was closed, and later the Brooklyn Battery Tunnel and Holland Tunnel were also closed. On Long Island, an evacuation was ordered for South Shore, including areas south of Sunrise Highway, north of Route 25A, and in elevations of less than 16 feet (4.9 m) above sea level on the North Shore. In Suffolk County, mandatory evacuations were ordered for residents of Fire Island and six towns. Most schools closed in Nassau and Suffolk counties on October 29. New York City began taking precautions on October 26. Governor Cuomo ordered the closure of MTA and its subway on October 28, and the MTA suspended all subway, bus, and commuter rail service beginning at 2300 UTC. After Hurricane Irene nearly submerged subways and tunnels in 2011, entrances and grates were covered just before Sandy, but were still flooded. PATH train service and stations as well as the Port Authority Bus Terminal were shut down in the early morning hours of October 29.Later on October 28, officials activated the coastal emergency plan, with subway closings and the evacuation of residents in areas hit by Hurricane Irene in 2011. More than 76 evacuation shelters were open around the city. On October 29, Mayor Michael Bloomberg ordered public schools closed and called for a mandatory evacuation of Zone A, which comprised areas near coastlines or waterways. Additionally, 200 National Guard troops were deployed in the city. NYU Langone Medical Center canceled all surgeries and medical procedures, except for emergency procedures. Additionally, one of NYU Langone Medical Center's backup generators failed on October 29, prompting the evacuation of hundreds of patients, including those from the hospital's various intensive care units. U.S. stock trading was suspended for October 29–30. New England Connecticut Governor Dannel Malloy partially activated the state's Emergency Operations Center on October 26 and signed a Declaration of Emergency the next day. On October 28, President Obama approved Connecticut's request for an emergency declaration, and hundreds of National Guard personnel were deployed. On October 29, Governor Malloy ordered road closures for all state highways. Numerous mandatory and partial evacuations were issued in cities across Connecticut. Massachusetts Governor Deval Patrick ordered state offices to be closed October 29 and recommended schools and private businesses close. On October 28, President Obama issued a Pre-Landfall Emergency Declaration for Massachusetts. Several shelters were opened, and many schools were closed. The Massachusetts Bay Transportation Authority shut down all services on the afternoon of October 29. On October 28, Vermont Governor Peter Shumlin, New Hampshire Governor John Lynch, and Maine's Governor Paul LePage all declared states of emergency. Appalachia and the Midwest The National Weather Service issued a storm warning for Lake Huron on October 29 that called for wave heights of 26 feet (7.9 m), and possibly as high as 38 feet (12 m). Lake Michigan waves were expected to reach 19 feet (5.8 m), with a potential of 33 feet (10 m) on October 30. Flood warnings were issued in Chicago on October 29, where wave heights were expected to reach 18 to 23 feet (5.5 to 7.0 m) in Cook County and 25 feet (7.6 m) in northwest Indiana. 
Gale warnings were issued for Lake Michigan and Green Bay in Wisconsin until the morning of October 31, and waves of 33 feet (10 m) in Milwaukee and 20 feet (6.1 m) in Sheboygan were predicted for October 30. The actual waves reached about 20 feet (6.1 m) but were less damaging than expected. The village of Pleasant Prairie, Wisconsin, urged a voluntary evacuation of its lakefront area, though few residents signed up, and little flooding actually occurred. Michigan was impacted by a winter storm system coming in from the west, mixing with cold air streams from the Arctic and colliding with Hurricane Sandy. The forecasts slowed shipping traffic on the Great Lakes, as some vessels sought shelter away from the peak winds, except those on Lake Superior. Detroit-based DTE Energy released 100 contract line workers to assist utilities along the eastern U.S. with storm response, and Consumers Energy did the same with more than a dozen employees and 120 contract employees. Due to the widespread power outages, numerous schools had to close, especially in St. Clair County and areas along Lake Huron north of Metro Detroit. Areas as far west as Ohio's western edge were under a wind advisory. All departing flights at Cleveland Hopkins International Airport were canceled until October 30 at 3 p.m. Governor of West Virginia Earl Ray Tomblin declared a state of emergency ahead of the storm on October 29. Up to 2 to 3 feet (0.61 to 0.91 m) of snow was forecast for mountainous areas of the state. In Great Smoky Mountains National Park, in Tennessee, several inches of snow led to the closure of a major route through the park on Sunday, October 28, and again, after a brief reopening, on Monday, October 29, 2012. Canada The Canadian Hurricane Centre issued its first preliminary statement for Hurricane Sandy on October 25, covering Southern Ontario to the Canadian Maritimes, with the potential for heavy rain and strong winds. On October 29, Environment Canada issued severe wind warnings for the Great Lakes and St. Lawrence Valley corridor, from Southwestern Ontario as far as Quebec City. On October 30, Environment Canada issued storm surge warnings along the mouth of the St. Lawrence River. Rainfall warnings were issued for the Charlevoix region in Quebec, as well as for several counties in New Brunswick and Nova Scotia, where about 2 to 3 inches (51 to 76 mm) of rain was expected. Freezing rain warnings were issued for parts of Northern Ontario. Impact At least 233 people were killed across the United States, the Caribbean, and Canada as a result of the storm. Caribbean Jamaica Jamaica was the first country directly affected by Sandy, which was also the first hurricane to make landfall on the island since Hurricane Gilbert struck in 1988. Trees and power lines were snapped and shanty houses were heavily damaged, both from the winds and flooding rains. More than 100 fishermen were stranded in the outlying Pedro Cays off Jamaica's southern coast. Stones falling from a hillside crushed one man to death as he tried to get into his house in a rural village near Kingston. Six days later, another fatality was recorded when a 27-year-old man died of electrocution while attempting a repair. The country's sole electricity provider, the Jamaica Public Service Company, reported that 70 percent of its customers were without power. More than 1,000 people went to shelters.
Jamaican authorities closed the island's international airports, and police ordered 48-hour curfews in major towns to keep people off the streets and deter looting. Most buildings in the eastern portion of the island lost their roofs. Damage was assessed at approximately $100 million throughout the country. Hispaniola In Haiti, which was still recovering from both the 2010 earthquake and the ongoing cholera outbreak, at least 54 people died, and approximately 200,000 were left homeless as a result of four days of ongoing rain from Hurricane Sandy. Heavy damage occurred in Port-Salut after rivers overflowed their banks. In the capital of Port-au-Prince, streets were flooded by the heavy rains, and it was reported that "the whole south of the country is underwater". Most of the tents and buildings in the city's sprawling refugee camps and the Cité Soleil neighborhood were flooded or leaking, a repeat of what happened earlier in the year during the passage of Hurricane Isaac. Crops were also wiped out by the storm and the country would be making an appeal for emergency aid. Damage in Haiti was estimated at $750 million (2012 USD), making it the costliest tropical cyclone in Haitian history. In the month following Sandy, a resurgence of cholera linked to the storm killed at least 44 people and infected more than 5,000 others.In the neighboring Dominican Republic, two people were killed and 30,000 people evacuated. An employee of CNN estimated 70% of the streets in Santo Domingo were flooded. One person was killed in Juana Díaz, Puerto Rico after being swept away by a swollen river. Cuba At least 55,000 people were evacuated before Hurricane Sandy's arrival. While moving ashore, the storm produced waves up to 29 feet (8.8 meters) and a 6-foot (1.8-meter) storm surge that caused extensive coastal flooding. There was widespread damage, particularly to Santiago de Cuba where 132,733 homes were damaged, of which 15,322 were destroyed and 43,426 lost their roof. Electricity and water services were knocked out, and most of the trees in the city were damaged. Total losses throughout Santiago de Cuba province is estimated as high as $2 billion (2012 USD). Sandy killed 11 people in the country – nine in Santiago de Cuba Province and two in Guantánamo Province; most of the victims were trapped in destroyed houses. This makes Sandy the deadliest hurricane to hit Cuba since 2005, when Hurricane Dennis killed 16 people. Bahamas A NOAA automated station at Settlement Point on Grand Bahama Island reported sustained winds of 49 mph (79 km/h) and a wind gust of 63 mph (101 km/h). One person died from falling off his roof while attempting to fix a window shutter in the Lyford Cay area on New Providence. Another died in the Queen's Cove area on Grand Bahama Island where he drowned after the sea surge trapped him in his apartment. Portions of the Bahamas lost power or cellular service, including an islandwide power outage on Bimini. Five homes were severely damaged near Williams's Town. Overall damage in the Bahamas was about $700 million (2012 USD), with the most severe damage on Cat Island and Exuma where many houses were heavily damaged by wind and storm surge. Bermuda Owing to the sheer size of the storm, Sandy also impacted Bermuda with high winds and heavy rains. On October 28, a weak F0 tornado touched down in Sandys Parish, damaging homes and businesses. During a three-day span, the storm produced 0.98 in (25 mm) of rain at the L.F. Wade International Airport. 
The strongest winds were recorded on October 29: sustained winds reached 37 mph (60 km/h) and gusts peaked at 58 mph (93 km/h), which produced scattered minor damage. United States A total of 24 U.S. states were in some way affected by Sandy. The hurricane caused tens of billions of dollars in damage in the United States, destroyed thousands of homes, left millions without electric service, and caused 71 direct deaths in nine states, including 49 in New York, 10 in New Jersey, 3 in Connecticut, 2 each in Pennsylvania and Maryland, and 1 each in New Hampshire, Virginia and West Virginia. There were also 2 direct deaths from Sandy in U.S. coastal waters in the Atlantic Ocean, about 90 miles (140 km) off the North Carolina coast, which are not counted in the U.S. total. In addition, the storm resulted in 87 indirect deaths. In all, a total of 160 people were killed due to the storm, making Sandy the deadliest hurricane to hit the United States mainland since Hurricane Katrina in 2005 and the deadliest to hit the U.S. East Coast since Hurricane Agnes in 1972.Due to flooding and other storm-related problems, Amtrak canceled all Acela Express, Northeast Regional, Keystone, and Shuttle services for October 29 and 30. More than 13,000 flights were canceled across the U.S. on October 29, and more than 3,500 were called off October 30. From October 27 through early November 1, airlines canceled a total of 19,729 flights, according to FlightAware.On October 31, over 6 million customers were still without power in 15 states and the District of Columbia. The states with the most customers without power were New Jersey with 2,040,195 customers; New York with 1,933,147; Pennsylvania with 852,458; and Connecticut with 486,927. The New York Stock Exchange and Nasdaq reopened on October 31 after a two-day closure for the storm. More than 1,500 FEMA personnel were along the East Coast working to support disaster preparedness and response operations, including search and rescue, situational awareness, communications and logistical support. In addition, 28 teams containing 294 FEMA Corps members were pre-staged to support Sandy responders. Three federal urban search and rescue task forces were positioned in the Mid-Atlantic and ready to deploy as needed. Direct Relief provided medical supplies to community clinics, non-profit health centers, and other groups in areas affected by Hurricane Sandy, and mapped pharmacies, gas stations, and other facilities that remained in the New York City area despite power outages.On November 2, the American Red Cross announced they had 4,000 disaster workers across storm damaged areas, with thousands more en route from other states. Nearly 7,000 people spent the night in emergency shelters across the region.Hurricane Sandy: Coming Together, a live telethon on November 2 that featured rock and pop stars such as Bruce Springsteen, Billy Joel, Jon Bon Jovi, Mary J. Blige, Sting and Christina Aguilera, raised around $23 million for American Red Cross hurricane relief efforts.At the time, the National Hurricane Center ranked Hurricane Sandy the second-costliest U.S. hurricane since 1900 in constant 2010 dollars, and the sixth-costliest after adjusting for inflation, population and property values. Scientists at the University of Utah reported the energy generated by Sandy was equivalent to "small earthquakes between magnitudes 2 and 3". Southeast Florida In South Florida, Sandy lashed the area with rough surf, strong winds, and brief squalls. 
Along the coast of Miami-Dade County, waves reached 10 feet (3.0 m), but may have been as high as 20 feet (6.1 m) in Palm Beach County. In the former county, minor pounding occurred on a few coastal roads. Further north in Broward County, State Road A1A was inundated with sand and water, causing more than a 2 miles (3.2 km) stretch of the road to be closed for the entire weekend. Additionally, coastal flooding extended inland up to 2 blocks in some locations and a few houses in the area suffered water damage. In Manalapan, which is located in southern Palm Beach County, several beachfront homes were threatened by erosion. The Lake Worth Pier was also damaged by rough seas. In Palm Beach County alone, losses reached $14 million. Sandy caused closures and cancellations of some activities at schools in Palm Beach, Broward and Miami-Dade counties. Storm surge from Sandy also caused flooding and beach erosion along coastal areas in South Florida. Gusty winds also impacted South Florida, peaking at 67 mph (108 km/h) in Jupiter and Fowey Rocks Light, which is near Key Biscayne. The storm created power outages across the region, which left many traffic lights out of order.In east-central Florida, damage was minor, though the storm left about 1,000 people without power. Airlines at Miami International Airport canceled more than 20 flights to or from Jamaica or the Bahamas, while some airlines flying from Fort Lauderdale–Hollywood International Airport canceled a total of 13 flights to the islands. The Coast Guard rescued two sea men in Volusia County off New Smyrna Beach on the morning of October 26. Brevard and Volusia Counties schools canceled all extracurricular activities for October 26, including football.Two panther kittens escaped from the White Oak Conservation Center in Nassau County after the hurricane swept a tree into the fence of their enclosure; they were missing for 24 hours before being found in good health. North Carolina On October 28, Governor Bev Perdue declared a state of emergency in 24 western counties, due to snow and strong winds.North Carolina was spared from major damage for the most part (except at the immediate coastline), though winds, rain, and mountain snow affected the state through October 30. Ocracoke and Highway 12 on Hatteras Island were flooded with up to 2 feet (0.6 m) of water, closing part of the highway, while 20 people on a fishing trip were stranded on Portsmouth Island.There were three Hurricane Sandy-related deaths in the state.On October 29, the Coast Guard responded to a distress call from Bounty, which was built for the 1962 movie Mutiny on the Bounty. It was taking on water about 90 miles (140 km) southeast of Cape Hatteras. Sixteen people were on board. The Coast Guard said the 16 people abandoned ship and got into two lifeboats, wearing survival suits and life jackets. The ship sank after the crew got off. The Coast Guard rescued 14 crew members; another was found hours later but was unresponsive and later died. The search for the captain, Robin Walbridge, was suspended on November 1, after efforts lasting more than 90 hours and covering approximately 12,000 square nautical miles (41,100 km2). Mid-Atlantic Virginia On October 29, snow was falling in parts of the state. Gov. Bob McDonnell announced on October 30 that Virginia had been "spared a significant event", but cited concerns about rivers cresting and consequent flooding of major arteries. Virginia was awarded a federal disaster declaration, with Gov. 
McDonnell saying he was "delighted" that President Barack Obama and FEMA were on it immediately. At Sandy's peak, more than 180,000 customers were without power, most of whom were located in Northern Virginia. There were three Hurricane Sandy-related fatalities in the state. Maryland and Washington, D.C. The Supreme Court and the United States Government Office of Personnel Management were closed on October 30, and schools were closed for two days. MARC train and Virginia Railway Express were closed on October 30, and Metro rail and bus service were on Sunday schedule, opening at 2 p.m., until the system closes.At least 100 feet (30 m) of a fishing pier in Ocean City was destroyed. Governor Martin O'Malley said the pier was "half-gone". Due to high winds, the Chesapeake Bay Bridge and the Millard E. Tydings Memorial Bridge on I-95 were closed. During the storm, the Mayor of Salisbury instituted a Civil Emergency and a curfew. Interstate 68 in far western Maryland and northern West Virginia closed due to heavy snow, stranding multiple vehicles and requiring assistance from the National Guard. Redhouse, Maryland received 26 inches (66 cm) of snow and Alpine Lake, West Virginia received 24 inches (61 cm).Workers in Howard County tried to stop a sewage overflow caused by a power outage on October 30. Raw sewage spilled at a rate of 2 million gallons per hour. It was unclear how much sewage had flowed into the Little Patuxent River. Over 311,000 people were left without power as a result of the storm. Delaware By the afternoon of October 29, rainfall at Rehoboth Beach totaled 6.53 inches (166 mm). Other precipitation reports include nearly 7 inches (180 mm) at Indian River Inlet and more than 4 inches (100 mm) in Dover and Bear. At 4 p.m. on October 29, Delmarva Power reported on its website that more than 13,900 customers in Delaware and portions of the Eastern Shore of Maryland had lost electric service as high winds brought down trees and power lines. About 3,500 of those were in New Castle County, 2,900 were in Sussex, and more than 100 were in Kent County. Some residents in Kent and Sussex Counties experienced power outages that lasted up to nearly six hours. At the peak of the storm, more than 45,000 customers in Delaware were without power. The Delaware Memorial Bridge speed limit was reduced to 25 mph (40 km/h) and the two outer lanes in each direction were closed. Officials planned to close the span entirely if sustained winds exceeded 50 mph (80 km/h). A wind gust of 64 mph (103 km/h) was measured at Lewes just before 2:30 p.m. on October 29. Delaware Route 1 was closed due to water inundation between Dewey Beach and Fenwick Island. In Dewey Beach, flood waters were 1 to 2 feet (0.30 to 0.61 m) in depth. Following the impact in Delaware, President Barack Obama declared the entire state a federal disaster area, providing money and agencies for disaster relief in the wake of Hurricane Sandy. New Jersey A 50-foot (15 m) piece of the Atlantic City Boardwalk washed away. Half the city of Hoboken flooded; the city of 50,000 had to evacuate two of its fire stations, the EMS headquarters, and the hospital. With the city cut off from area hospitals and fire suppression mutual aid, the city's Mayor asked for National Guard help. In the early morning of October 30, authorities in Bergen County, New Jersey, evacuated residents after a berm overflowed and flooded several communities. 
Police Chief of Staff Jeanne Baratta said there were up to five feet (1.5 m) of water in the streets of Moonachie and Little Ferry. The state Office of Emergency Management said rescues were undertaken in Carlstadt. Baratta said the three towns had been "devastated" by the flood of water. At the peak of the storm, more than 2,600,000 customers were without power. There were 43 Hurricane Sandy-related deaths in the state of New Jersey. Damage in the state was estimated at $36.8 billion. Pennsylvania Philadelphia Mayor Michael Nutter said the city would have no mass transit operations on any lines October 30. All major highways in and around the city of Philadelphia were closed on October 29 during the hurricane, including Interstate 95, the Blue Route portion of Interstate 476, the Vine Street Expressway, Schuylkill Expressway (I-76), and the Roosevelt Expressway; U.S. Route 1. The highways reopened at 4 a.m. on October 30. The Delaware River Port Authority also closed its major crossings over the Delaware River between Pennsylvania and New Jersey due to high winds, including the Commodore Barry Bridge, the Walt Whitman Bridge, the Benjamin Franklin Bridge and the Betsy Ross Bridge. Trees and powerlines were downed throughout Altoona, and four buildings partially collapsed. More than 1.2 million were left without power. The Pennsylvania Emergency Management Agency reported 14 deaths believed to be related to Sandy. New York New York governor Andrew Cuomo called National Guard members to help in the state. Storm impacts in Upstate New York were much more limited than in New York City; there was some flooding and a few downed trees. Rochester area utilities reported slightly fewer than 19,000 customers without power, in seven counties. In the state as a whole, however, more than 2,000,000 customers were without power at the peak of the storm.Mayor of New York City Michael Bloomberg announced that New York City public schools would be closed on Tuesday, October 30 and Wednesday, October 31, but they remained closed through Friday, November 2. The City University of New York and New York University canceled all classes and campus activities for October 30. The New York Stock Exchange was closed for trading for two days, the first weather closure of the exchange since 1985. It was also the first two-day weather closure since the Great Blizzard of 1888.The East River overflowed its banks, flooding large sections of Lower Manhattan. Battery Park had a water surge of 13.88 ft. Seven subway tunnels under the East River were flooded. The Metropolitan Transportation Authority said that the destruction caused by the storm was the worst disaster in the 108-year history of the New York City subway system. Sea water flooded the Ground Zero construction site including the National September 11 Memorial and Museum. Over 10 billion gallons of raw and partially treated sewage were released by the storm, 94% of which went into waters in and around New York and New Jersey. In addition, a four-story Chelsea building's facade crumbled and collapsed, leaving the interior on full display; however, no one was hurt by the falling masonry. The Atlantic Ocean storm surge also caused considerable flood damage to homes, buildings, roadways, boardwalks and mass transit facilities in low-lying coastal areas of the outer boroughs of Queens, Brooklyn and Staten Island. 
After receiving many complaints that holding the marathon would divert needed resources, Mayor Bloomberg announced late afternoon November 2 that the New York City Marathon had been canceled. The event was to take place on Sunday, November 4. Marathon officials said that they did not plan to reschedule.Gas shortages throughout the region led to an effort by the U.S. federal government to bring in gasoline and set up mobile truck distribution at which people could receive up to 10 gallons of gas, free of charge. This caused lines of up to 20 blocks long and was quickly suspended. On Thursday, November 8, Mayor Bloomberg announced odd-even rationing of gasoline would be in effect beginning November 9 until further notice.On November 26, Governor Cuomo called Sandy "more impactful" than Hurricane Katrina, and estimated costs to New York at $42 billion. Approximately 100,000 residences on Long Island were destroyed or severely damaged, including 2,000 that were rendered uninhabitable. There were 53 Hurricane Sandy-related deaths in the state of New York. In 2016, the hurricane was determined to have been the worst to strike the New York City area since at least 1700. New England Wind gusts to 83 mph were recorded on outer Cape Cod and Buzzards Bay. Nearly 300,000 customers were without power in Massachusetts, and roads and buildings were flooded. Over 100,000 customers lost power in Rhode Island. Most of the damage was along the coastline, where some communities were flooded. Mount Washington, New Hampshire saw the strongest measured wind gust from the storm at 140 mph. Nearly 142,000 customers lost power in the state.The flooding caused by Hurricane Sandy overwhelmed water treatment infrastructure on the northeast coast of the United States. More than 200 wastewater treatment plants and over 80 drinking water facilities along the coast of the Tri-State area had been damaged beyond function, with a statement from Governor Cuomo that damage in New York treatment plants alone could reach $1.1 billion. The resulting damage caused more than 10 billion gallons of raw sewage to be released into New York and New Jersey water sources. This contamination resulted in the shutting down of several drinking-water facilities.The contamination caused by this incident resulted in the EPA issuing a warning that all individuals should avoid coming into contact with the water in Newark Bay and New York Harbor, due to the increased presence of fecal coliform, a bacteria that is associated with human waste. Similar warnings were issued for water sources in both the Westchester and Yonkers areas. Appalachia and Midwest West Virginia Sandy's rain became snow in the Appalachian Mountains, leading to blizzard conditions in some areas, especially West Virginia, when a tongue of dense and heavy Arctic air pushed south through the region. This would normally cause a Nor'easter, prompting some to dub Sandy a "nor'eastercane" or "Frankenstorm". There was 1–3 feet (30–91 cm) of snowfall in 28 of West Virginia's 55 counties. The highest snowfall accumulation was 36 inches (91 cm) near Richwood. Other significant totals include 32 inches (81 cm) in Snowshoe, 29 inches (74 cm) in Quinwood, and 28 inches (71 cm) in Davis, Flat Top, and Huttonsville. By the morning of October 31, there were still 36 roads closed due to downed trees, powerlines, and snow in the road. 
Approximately 271,800 customers lost power during the storm.There were reports of collapsed buildings in several counties due to the sheer weight of the wet, heavy snow. Overall, there were seven fatalities related to Hurricane Sandy and its remnants in West Virginia, including John Rose Sr., the Republican candidate for the state's 47th district in the state legislature, who was killed in the aftermath of the storm by a falling tree limb broken off by the heavy snowfall. Governor Earl Ray Tomblin asked President Obama for a federal disaster declaration, and on October 30, President Obama approved a state of emergency declaration for the state. Ohio Wind gusts at Cleveland Burke Lakefront Airport were reported at 68 miles per hour (109 km/h). On October 30, hundreds of school districts canceled or delayed school across the state with at least 250,000 homes and businesses without power. Damage was reported across the state including the Rock and Roll Hall of Fame which lost parts of its siding. Snow was reported in some parts of eastern Ohio and south of Cleveland. Snow and icy roads also were reported south of Columbus. Michigan The US Department of Energy reported that more than 120,000 customers lost power in Michigan as a result of the storm. The National Weather Service said that waves up to 23 feet (7.0 m) high were reported on southern Lake Huron. Kentucky More than one foot (300 mm) of snow fell in eastern Kentucky as Sandy merged with an Arctic front. Winter warnings in Harlan, Letcher, and Pike County were put into effect until October 31. Tennessee Mount Le Conte, Tennessee, in Great Smoky Mountains National Park, was blanketed with 32 inches (81 cm) of snow, an October record. Canada The remnants of Sandy produced high winds along Lake Huron and Georgian Bay, where gusts were measured at 105 km/h (65 mph). A 121 km/h (75 mph) gust was measured on top of the Bluewater Bridge. One woman died after being hit by a piece of flying debris in Toronto. At least 145,000 customers across Ontario lost power, and a Bluewater Power worker was electrocuted in Sarnia while working to restore power. Around 49,000 homes and businesses lost power in Quebec during the storm, with nearly 40,000 of those in the Laurentides region of the province, as well as more than 4,000 customers in the Eastern Townships and 1,700 customers in Montreal. Hundreds of flights were canceled. Around 14,000 customers in Nova Scotia lost power during the height of the storm. The Insurance Bureau of Canada's preliminary damage estimate was over $100 million for the nation. Aftermath Relief efforts Several media organizations contributed to the immediate relief effort: Disney–ABC Television Group held a "Day of Giving" on Monday, November 5, raising $17 million on their television stations for the American Red Cross and NBC raised $23 million during their Hurricane Sandy: Coming Together telethon the same day. On October 31, 2012, News Corporation donated $1 million to relief efforts in the New York metropolitan area. As of December 2013, the NGO Hurricane Sandy New Jersey Relief Fund had distributed much of the funding raised in New Jersey. On November 6, the United Nations and World Food Programme promised humanitarian aid to at least 500,000 people in Santiago de Cuba.On December 12, 2012, the 12-12-12: The Concert for Sandy Relief took place at Madison Square Garden in New York City. 
Various television channels in the United States and internationally aired the four-hour concert, which was expected to reach over 1 billion people worldwide and featured many famous performers, including Bon Jovi, Eric Clapton, Dave Grohl, Billy Joel, and Alicia Keys. Web sites including Fuse.tv, MTV.com, YouTube, and the sites of AOL and Yahoo! planned to stream the performance. The U.S. Government mobilized several agencies and departments to mitigate the effects of the hurricane in the most afflicted areas. The response to the storm on the part of the government was of particular urgency owing to the possible fallout of a poor response on the part of the Obama administration ahead of the upcoming U.S. presidential election. The President underscored this in a speech in the days following the impact, stating that the government was "not going to tolerate any red tape. We're not going to tolerate any bureaucracy". Anticipating the destruction from the Atlantic storm, states on the U.S. East Coast, especially heavily populated regions such as the New York metropolitan area, began to prepare. As the tropical depression strengthened into a hurricane, the Department of Defense formed Joint Task Force Sandy on October 22, 2012. Gathering humanitarian supplies and disaster recovery equipment, the DOD prepared to carry out DSCA (Defense Support of Civil Authorities) operations across the eastern seaboard. In the aftermath of the calamity, thousands of military personnel provided vital assistance to affected communities. On the first night of the aftermath, 12,000 National Guard members across the East Coast worked to limit the destruction. President Obama directed the Defense Logistics Agency to supply over 5 million gallons of Department of Energy-owned ultra-low sulfur diesel. On December 28, 2012, the United States Senate approved an emergency Hurricane Sandy relief bill to provide $60 billion for U.S. states affected by Sandy, but the House in effect postponed action until the next session, which began January 3, by adjourning without voting on the bill. On January 4, 2013, House leaders pledged to vote on a flood insurance bill and an aid package by January 15. On January 28, the Senate passed the $50.5 billion Sandy aid bill by a count of 62–36, which President Obama signed into law on January 29. In January 2013, The New York Times reported that those affected by the hurricane were still struggling to recover. In June 2013, NY Governor Andrew Cuomo set out to centralize recovery and rebuilding efforts in impacted areas of New York State by establishing the Governor's Office of Storm Recovery (GOSR). He aimed to address communities' most urgent needs, and to identify innovative and enduring solutions to strengthen the State's infrastructure and critical systems. Operating under the umbrella of New York Rising, GOSR utilized approximately $3.8 billion in flexible funding made available by the U.S.
Department of Housing & Urban Development's (HUD) Community Development Block Grant Disaster Recovery (CDBG-DR) program to concentrate aid in four main areas: housing, small business, infrastructure, and community reconstruction. On December 6, 2013, an analysis of Federal Emergency Management Agency data showed that fewer than half of those affected who requested disaster recovery assistance had received any, and a total of 30,000 residents of New York and New Jersey remained displaced. In March 2014, Newsday reported that, 17 months after the hurricane, people displaced from rental units on Long Island faced unique difficulties due to a lack of affordable rental housing and delays in New York State's implementation of housing programs. Close to 9,000 rental units on Long Island were damaged by Hurricane Sandy in October 2012, and by Hurricane Irene and Tropical Storm Lee in 2011, according to the New York State Governor's Office of Storm Recovery (GOSR). New York State officials said that additional assistance would soon be available from HUD's Community Development Block Grant funds via the New York Rising program. On March 15, 2014, a group of those who remained displaced by the hurricane organized a protest at the Nassau Legislative building in Mineola, New York, to raise awareness of their frustration with the timeline for receiving financial assistance from the New York Rising program. In March 2014, the GOSR announced in a press statement that the New York Rising Community Reconstruction Program had distributed more than $280 million in payments to 6,388 homeowners for damage from Hurricane Sandy, Hurricane Irene or Tropical Storm Lee. Every eligible homeowner who had applied by January 20, 2014, had been issued a check for home reconstruction, including over 4,650 Nassau residents for over $201 million and over 1,350 Suffolk residents for over $65 million. The state had also made offers totaling over $293 million to buy out the homes of 709 homeowners. Political impact Hurricane Sandy sparked much political commentary. Many scientists said warming oceans and greater atmospheric moisture were intensifying storms, while rising sea levels were worsening coastal effects. In November 2012, Representative Henry Waxman of California, the top Democrat on the House Energy and Commerce Committee, requested a hearing in the lame duck session on links between climate change and Hurricane Sandy. Some news outlets labeled the storm the October surprise of the 2012 United States presidential election, while Democrats and Republicans accused each other of politicizing the storm. The storm hit the United States one week before its general elections and affected the presidential campaign as well as local and state campaigns in storm-damaged areas. New Jersey Governor Chris Christie, one of Mitt Romney's leading supporters, praised President Barack Obama and his reaction to the hurricane, and toured storm-damaged areas of his state with the president. It was reported at the time that Sandy might affect elections in several states, especially by curtailing early voting. The Economist wrote, "the weather is supposed to clear up well ahead of election day, but the impact could be felt in the turnout of early voters." ABC News predicted this might be offset by a tendency to clear roads and restore power more quickly in urban areas. The storm ignited a debate over whether Republican presidential nominee Mitt Romney had proposed in 2011 to eliminate the Federal Emergency Management Agency (FEMA).
The next day the Romney campaign promised to keep FEMA funded, but did not explain what other parts of the federal budget it would cut to pay for it. Beyond the election, National Defense Magazine said Sandy "might cause a rethinking (in the USA) of how climate change threatens national security". In his news conference on November 14, 2012, President Obama said, "we can't attribute any particular weather event to climate change. What we do know is the temperature around the globe is increasing faster than was predicted even 10 years ago. We do know that the Arctic ice cap is melting faster than was predicted even five years ago. We do know that there have been extraordinarily — there have been an extraordinarily large number of severe weather events here in North America, but also around the globe. And I am a firm believer that climate change is real, that it is impacted by human behavior and carbon emissions. And as a consequence, I think we've got an obligation to future generations to do something about it." On January 30, 2015, days after the U.S. Army Corps of Engineers released a post-Sandy report examining flood risks for 31,200 miles (50,200 km) of the North Atlantic coast, President Obama issued an executive order directing federal agencies and state and local governments drawing federal funds to adopt stricter building and siting standards to reflect scientific projections that future flooding will be more frequent and intense due to climate change. Financial markets impact Power outages and flooding in the area closed the New York Stock Exchange and other financial markets on both October 29 and 30, a weather-related closure that had last happened in 1888. When markets reopened on October 31, investors were relieved that the market closed relatively flat that day. A week later, the National Association of Insurance Commissioners Capital Markets Bureau noted a slight uptick in the market (0.8%) and suggested that the negative economic impact of Hurricane Sandy was offset by the expected positive impacts of rebuilding. Infrastructure impact The destruction of physical infrastructure as a result of Sandy cost impacted states, including New York and New Jersey, tens of billions of dollars. EQECAT, a risk-modeling company that focuses on catastrophes, estimated that impacted regions lost between $30 billion and $50 billion in economic activity. The economic loss was attributed to the massive power outages, liquid fuel shortages, and a near shutdown of the region's transportation system. Energy: Roughly 8.5 million customers were affected by power outages, including many businesses that were hard-pressed to deliver products and services in a timely manner. Breaks in gas lines also caused fires in many locations, prompting explosions and the destruction of a large number of residences. Locating gasoline and diesel fuel proved difficult immediately after Sandy hit, which harmed transportation access for many people. Fuel was difficult to obtain largely because of flooding damage at crucial terminals and harbors in areas of New Jersey bordering the Arthur Kill. The shortage of fuel held up first responders as well as other response and recovery officials. As a result, portable generators sat unused, and long lines formed at fueling stations as people could not tell which stations had lost power and which were operational.
Communications: Telecommunications infrastructure was heavily disrupted, impacting millions of people and thousands of businesses and destabilizing the economy of one of the biggest cities in the world. The Federal Communications Commission (FCC) found that roughly 25% of cell towers across 10 states were out of service at the peak of the storm. Green Infrastructure: Hurricane Sandy's storm surge caused erosion of beaches and dunes, island breaches, and overwash along the coast from New England all the way down to Florida. Flooding along the coast generated substantial erosion of existing natural infrastructure, flooding of wetland habitats, coastal dune destruction or erosion, decimation of coastal lakes, and the creation of new inlets. Transportation: The nation had never witnessed a worse disaster for its public transit systems, including buses, subways, and commuter rail, than when Sandy struck. The morning after the storm hit, on October 30, 2012, more than half of the country's daily public transportation riders were unable to commute due to inoperable service. The New York City subway system was shut down as a precaution two days prior to the storm and remained closed through November 1. During that time, one of the world's largest financial centers experienced immense traffic jams, and those who were able to get to work faced commutes of several hours. Eight New York City subway tunnels were flooded by the seawater surge, which also poured through the Brooklyn-Battery Tunnel, impacting various transportation systems throughout the region. Stormwater Management and Wastewater Treatment Systems: There was a massive failure of wastewater treatment facilities all around the mid-Atlantic coast due to floodwaters, large storm runoff, wind damage, and electricity loss. The region's waterways were hit with billions of gallons of raw and partially treated sewage, adversely affecting the health of the public as well as ocean habitats and other important resources. There was also a public health concern about the threat of contaminated water filling the pipes and wells that supplied potable water to large parts of the region. Large water utility companies experienced power outages, disrupting their ability to provide safe drinking water. Advisories had to be sent out to customers in many parts of New York and New Jersey to warn them that their water was potentially contaminated. The "boil water" advisories were later lifted, however, when it was determined that the water was not contaminated and posed no risk of ill effects. Public Medical Facilities and Schools: A variety of New York City hospitals and other medical facilities, including Bellevue Hospital Center and Coney Island Hospital, were shut down as a result of flooding from the storm. In many parts of the hospitals, there was considerable damage to research, medical, and electrical equipment, which was located on lower floors for ease of access. In New Jersey, medical facilities were also severely affected; in sum, the hospitals in the state reported an estimated $68 million in damage, and a hospital in Hudson County was forced to close due to the extensive damage done by the hurricane. Hurricane Sandy also caused schools to close for about a week on average immediately following the storm. During the period of closure, schools worked to restore the electrical service and operations that had been impaired by the storm.
Insurance fraud claims Thousands of homeowners were denied their flood insurance claims based upon fraudulent engineers' reports, according to the whistleblowing efforts of Andrew Braum, an engineer who claimed that at least 175 of his more than 180 inspections were doctored. As a result, a class-action racketeering lawsuit has been filed against several insurance companies and their contract engineering firms. As of 2015, the Federal Emergency Management Agency planned to review all flood insurance claims. Baby boom New Jersey hospitals saw a spike in births nine months after Sandy, causing some to believe that there was a post-Sandy baby boom. The Monmouth Medical Center saw a 35% jump, and two other hospitals saw 20% increases. An expert stated that post-storm births that year were higher than in past disasters. Name retirement Because of the exceptional damage and deaths caused by the storm in several countries, the name Sandy was later retired by the World Meteorological Organization, and will never be used again for a North Atlantic hurricane. It was replaced with Sara for the 2018 Atlantic hurricane season (though it went unused that season). Media coverage As Hurricane Sandy approached the United States, forecasters and journalists gave it several different unofficial names, at first related to its projected snow content, then to its proximity to Halloween, and eventually to the overall size of the storm. Early nicknames included "Snowicane Sandy" and "Snor'eastercane Sandy". The most popular Halloween-related nickname was "Frankenstorm", coined by Jim Cisco, a forecaster at the Hydrometeorological Prediction Center. CNN banned the use of the term, saying it trivialized the destruction.The severe and widespread damage the storm caused in the United States, as well as its unusual merger with a frontal system, resulted in the nicknaming of the hurricane "Superstorm Sandy" by the media, public officials, and several organizations, including U.S. government agencies. This persisted as the most common nickname well into 2013. The term was also embraced by climate change proponents as a term for the new type of storms caused by global warming, while other writers used the term but maintained that it was too soon to blame the storm on climate change. Meanwhile, Popular Science called it "an imaginary scare-term that exists exclusively for shock value". See also 1938 New England hurricane 1991 Perfect Storm Hurricane Irene (2011) Tropical Storm Fay (2020) Hurricane Ida (2021) Other storms named Sandy Hurricane Sandy IRS tax deduction List of Atlantic hurricane records List of Category 3 Atlantic hurricanes List of Cuba hurricanes List of New Jersey hurricanes List of New York hurricanes Superstorm Timeline of the 2012 Atlantic hurricane season Typhoon Jongdari – A Pacific typhoon in 2018 which executed a similar turn into Japan. References Informational notes Citations Further reading Ian Roulstone; John Norbury (August 1, 2013). "How Math Helped Forecast Hurricane Sandy". Scientific American. 309 (2): 22. doi:10.1038/scientificamerican0813-22. PMID 23923198. 
External links Archived information on Hurricane Sandy from the National Hurricane Center Radar loop of Hurricane Sandy making landfall on YouTube Satellite imagery and data of Hurricane Sandy from NASA Google Crisis Map for Hurricane Sandy Hurricane information and live coverage from The Weather Channel from Weather Underground Superstorm Sandy at The Weather Channel Hurricane Sandy: Guidelines for Providing Assistance by American Radio Relay League Monitoring Storm Tide and Flooding from Hurricane Sandy along the Atlantic Coast of the United States, October 2012 from the United States Geological Survey Recovering from Superstorm Sandy: Rebuilding Our Infrastructure U.S. Senate Hearing, December 20, 2012
technological fix
A technological fix, technical fix, technological shortcut or (techno-)solutionism refers to attempts to use engineering or technology to solve a problem (often created by earlier technological interventions). Some references define a technological fix as an "attempt to repair the harm of a technology by modification of the system", which might involve modification of the machine and/or modification of the procedures for operating and maintaining it; in other words, the fix may modify the basic hardware, the techniques and procedures, or both. Technological fixes are considered inevitable in modern technology: it has been observed that many technologies, although invented and developed to solve certain perceived problems, often create other problems in the process, known as externalities. The technological fix is the idea that all problems can find solutions in better and new technologies. The phrase is now often used dismissively to describe cheap, quick fixes that rely on inappropriate technologies; these fixes often create more problems than they solve, or give people a sense that they have solved the problem. Contemporary context In the contemporary context, "technological fix" is sometimes used to refer to the idea of using data and intelligent algorithms to supplement and improve human decision making, in the hope of ameliorating the bigger problem. One critic, Evgeny Morozov, defines this as "Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized – if only the right algorithms are in place." Morozov has described this perspective as an ideology that is especially prevalent in Silicon Valley, and has labeled it "solutionism". While some criticize this approach as detrimental to efforts to truly solve these problems, others find merit in such technological improvements as complements to existing activist and policy efforts. An example of the criticism is how policy makers may be tempted to think that installing smart energy monitors will help people conserve energy, thus mitigating global warming, rather than focusing on the arduous process of passing laws to tax carbon. Another example is the use of technological tools alone to solve complex sociopolitical crises such as pandemics, or the belief that such crises can be solved through the integration of technical fixes alone. Algorithms The Oxford Languages dictionary defines an algorithm as "a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer." Algorithms are increasingly used as technological fixes in modern society to replace tasks or decision-making by humans, often to reduce labor costs, increase efficiency, or reduce human bias. These solutions are presented as a "quick and flawless way to solve complex real world problems… but technology isn't magic". The use of algorithms as fixes, however, does not address the root causes of these problems. Instead, algorithms are more often used as "band-aid" solutions that may provide temporary relief but do not resolve the underlying issue. Additionally, these fixes tend to come with their own problems, some of which are even more harmful than the original problem.
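The examples that follow describe specific deployed systems. As a generic sketch of the mechanism at work, the short program below uses purely synthetic data and hypothetical numbers (it is not based on any of the tools discussed in this article) to show how a screening rule tuned to reproduce historical outcomes can also reproduce the disparities embedded in those outcomes, even when the underlying need it is meant to measure is identical across groups.

```python
"""Toy illustration (synthetic data, hypothetical numbers): a decision rule
tuned to reproduce historical outcomes also reproduces the disparities baked
into those outcomes, even though underlying need is identical across groups."""
import random

random.seed(0)

def make_person(group):
    # True underlying need is drawn identically for both groups.
    need = random.random()
    # Group B households are more likely to live in areas with heavier
    # historical surveillance/reporting (a proxy feature, not need itself).
    reporting_exposure = random.random() + (0.3 if group == "B" else 0.0)
    # Historical flag: driven partly by need, partly by reporting exposure.
    historically_flagged = (0.6 * need + 0.4 * reporting_exposure) > 0.7
    return {"group": group, "need": need,
            "exposure": reporting_exposure, "flagged": historically_flagged}

history = [make_person(g) for g in ("A", "B") for _ in range(5000)]

# "Train" a one-feature screening rule: choose a score threshold so that the
# rule reproduces the overall historical flag rate.
scores = sorted(0.6 * p["need"] + 0.4 * p["exposure"] for p in history)
base_rate = sum(p["flagged"] for p in history) / len(history)
cut_index = min(int(len(scores) * (1 - base_rate)), len(scores) - 1)
threshold = scores[cut_index]

def model_flags(person):
    return (0.6 * person["need"] + 0.4 * person["exposure"]) > threshold

for g in ("A", "B"):
    members = [p for p in history if p["group"] == g]
    rate = sum(model_flags(p) for p in members) / len(members)
    avg_need = sum(p["need"] for p in members) / len(members)
    print(f"group {g}: mean need {avg_need:.2f}, flag rate {rate:.1%}")
```

In this toy setting the rule flags the more heavily surveilled group at a noticeably higher rate despite the two groups having the same average need, which is the pattern described in the child-welfare example below.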
One example of algorithms as a technological fix for increasing public safety is face recognition software, which has been used by the San Diego County police department and the Pittsburgh police department, among other government security organizations. Face recognition is an example of algorithmic technology that is viewed as potentially having many benefits for its users, such as verifying one's identity in security systems. The system uses biometrics to quantify and map out distinguishing facial features. However, face recognition as a technological fix for safety and security concerns comes with issues of privacy and discrimination. In the case of the face recognition technology used by the San Diego County police department, Black men were falsely accused of crimes after being mistakenly identified by the software. Additionally, San Diego police used the face recognition software on African Americans up to twice as often as on other people. The discrimination perpetuated by the face recognition tool led to a three-year ban on its use starting in 2019. Instead of addressing systemic and historically embedded inequalities among racial groups, the face recognition technology was used to perpetuate discrimination and supported police in doing their jobs unfairly and inaccurately. Another example of algorithms being used as a technological fix is tools to automate decision-making, such as Oregon's Child Welfare Risk Tool and the Pittsburgh Allegheny County Family Screening Tool (AFST). In these cases, algorithms that replace human decision makers have been used to reduce the cost of staffing child welfare case decisions and to eliminate human biases in the decision-making process. However, researchers at Carnegie Mellon University found that the tool discriminates against Black families, who are statistically underserved and have historically lived in lower-income areas. Historical data shaped by these systemic disparities causes the algorithm to flag a greater percentage of children of Black families as high risk than children of White families. By using data based on historical biases, the automated decisions further fuel racial disparities and actually accomplish the opposite of the intended outcomes. Climate change The technological fix for climate change is an example of the use of technology to restore the environment. This can be seen in various strategies, such as geo-engineering and renewable energy. Geo-engineering Geo-engineering has been described as "the artificial modification of Earth's climate systems through two primary ideologies: Solar Radiation Management (SRM) and Carbon Dioxide Removal (CDR)". Different schemes, projects and technologies have been designed to tackle the effects of climate change, usually by removing CO2 from the air, as with Klaus Lackner's prototype CO2-capture device, or by limiting the amount of sunlight that reaches the Earth's surface, for example with space mirrors. However, "critics by contrast claim that geoengineering isn't realistic – and may be a distraction from reducing emissions." It has been argued that geo-engineering is an adaptation to global warming.
It allows TNCs (transnational corporations), individuals, and governments to avoid facing the fact that global warming is a crisis that needs to be dealt with head-on by reducing emissions and implementing green technologies, rather than by developing ways to control the environment that ultimately allow greenhouse gases to continue to be released into the atmosphere. Renewable energy Renewable energy is another example of a technological fix. These technologies seek to reduce greenhouse gas emissions in energy production, directly releasing little to no emissions. They are generally regarded as infinite energy sources, which means they will never run out, unlike fossil fuels such as oil and coal, which are finite sources of energy. Examples of renewable energy include wind turbines, solar energy such as solar panels, and kinetic energy from waves. Renewable energy is regarded as a technological fix for energy insecurity, providing alternatives to fossil fuels. It is also recognised that such technologies will in turn require their own technological fixes. For example, some types of solar energy have local impacts on ambient temperature, which can be a hazard to birdlife. Food famine It has been made explicit within society that the world's population is rapidly increasing, with UNICEF estimating that an average of 353,000 babies are born each day around the world. It is therefore expected that the production of food will not be able to progress and develop quickly enough to keep up with the needs of the growing population. Ester Boserup argued in 1965 that when the human population increases and food production falls short, innovation will take place. This can be seen in the technological development of hydroponics and genetically modified crops. Hydroponics Hydroponics is an example of a technological fix. It demonstrates the ability of humans to recognise a problem within society, such as the lack of food for an increasing population, and to attempt to fix this problem with the development of an innovative technology. Hydroponics is a method of food production that increases productivity in an artificial environment. The soil is replaced by a mineral nutrient solution around the plant roots. Removing the soil allows greater crop yields, as there is less chance of soil-borne disease, and makes it easier to monitor plant growth and mineral concentrations. This innovative technology to yield more food reflects the ability of humans to develop their way out of a problem, illustrating a technological fix. Genetically modified organism Genetically modified organisms (GMOs) reflect the use of technology to innovate our way out of a problem such as the lack of food for a growing population, demonstrating a technological fix. GM crops can create many advantages, such as higher food yields, added vitamins, and increased farm profits. Depending on the modifications, they may also introduce the problem of increasing resistance to pesticides and herbicides, which may inevitably precipitate the need for further fixes in the future. Golden rice Golden rice is one example of a technological fix. It demonstrates the ability of humans to develop and innovate their way out of problems, such as vitamin A deficiency in Taiwan and the Philippines, a condition which the World Health Organization reports affects about 250 million preschool children.
Through the technological development of GM crops, scientists were able to develop golden rice that can be grown in these countries with genetically higher levels of beta-carotene (a precursor of vitamin A). This enables healthier and more fulfilling lives for these individuals and consequently helps to reduce deaths caused by the deficiency. Externalities Externalities refer to the unforeseen or unintended consequences of technology. Everything new and innovative can potentially have negative effects, especially if it is a new area of development. Although technologies are invented and developed to solve certain perceived problems, they often create other problems in the process. Algorithms Evgeny Morozov, a writer and researcher on the social implications of technology, has said, "A new problem-solving infrastructure is new; new types of solutions become possible that weren't possible 15 years ago". The issue with the use of algorithms as technological fixes is that they should not be applied as a one-size-fits-all solution, because each problem comes with its own context and implications. While algorithms can offer solutions, they can also amplify discriminatory harms, especially to already marginalized groups. These externalities include racial bias, gender bias, and disability discrimination. Oftentimes, algorithms are implemented into systems without a clear understanding of whether or not they are an appropriate solution to the problem. In Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management, Min Kyung Lee writes, "...the problem is that industries often incorporate technology whose performance and effectiveness are not yet proven, without careful validation and reflection." Algorithms may offer immediate relief to problems, or an optimistic outlook on the issues at hand, but they can also create more problems that require even more complex solutions. Sometimes, the use of algorithms as a technological fix leaves us asking, "Did anyone ask for this?" and wondering whether the benefits outweigh the harms. These tradeoffs should be rigorously assessed in order to determine whether an algorithm is truly the most appropriate solution. DDT DDT was initially used by the military in World War II to control a range of illnesses, from malaria to bubonic plague and body lice. Due to its effectiveness, DDT was soon adopted as a farm pesticide to help maximise crop yields and thereby cope with the rising population's food demands after the war. The pesticide proved extremely effective at killing bugs and animals on crops and was often referred to as a "wonder-chemical". However, despite being banned for over forty years, we are still facing the externalities of this technology. DDT was found to have major health impacts on both humans and animals, accumulating in their fatty cells, which highlights that technological fixes have their negatives as well as their positives. In humans, DDT exposure has been linked to breast and other cancers, male infertility, miscarriages and low birth weight, developmental delay, and damage to the nervous system and liver. In animals, DDT is toxic to birds when eaten and decreases their reproductive rate by causing eggshell thinning and embryo deaths; it is highly toxic to aquatic animals, affecting various systems including the heart and brain; and it is moderately toxic to amphibians such as frogs, toads, and salamanders.
Immature amphibians are more sensitive to the effects of DDT than adults. Global warming Global warming can be a natural phenomenon that occurs in long (geologic) cycles. However, it has been found that the release of greenhouse gases by industry and traffic is causing the Earth to warm. This is producing environmental externalities such as melting ice caps, shifting biomes, and the extinction of many aquatic species through ocean acidification and changing ocean temperatures. Automobiles Automobiles with internal combustion engines have revolutionised civilisation and technology. However, while the technology was new and innovative, helping to connect places through transport, it was not recognised at the time that burning fossil fuels inside the engines would release pollutants. This is an explicit example of an externality caused by a technological fix, as the problems caused by the technology were not recognised when it was developed. Different types of technological fixes High-tech megaprojects High-tech megaprojects are large-scale and require huge sums of investment to be created. Examples of these high technologies are dams, nuclear power plants, and airports. They often impose externalities on the environment, are highly expensive, and are typically top-down governmental plans. Three Gorges Dam The Three Gorges Dam is an example of a high-tech technological fix. The multi-purpose navigation, hydropower, and flood control scheme was designed to address flooding while providing efficient, clean, renewable hydroelectric power in China. The Three Gorges Dam is the world's largest power station in terms of installed capacity (22,500 MW). The dam is the largest operating hydroelectric facility in terms of annual energy generation, generating 83.7 TWh in 2013 and 98.8 TWh in 2014, while the annual energy generation of the Itaipú Dam in Brazil and Paraguay was 98.6 TWh in 2013 and 87.8 TWh in 2014. It was estimated to have cost over £25 billion. There have been many externalities from this technology, such as the extinction of the Chinese river dolphin, an increase in pollution as the river can no longer 'flush' itself, and the displacement of over 4 million local residents. Intermediate technology Intermediate technology usually refers to small-scale, cheap technologies that are most often seen in developing countries. The capital required to build these technologies is usually low, while the labour input is high. Local expertise can be used to maintain them, making them quick and effective to build and repair. Examples of intermediate technology include water wells, rain barrels and pumpkin tanks. Appropriate technologies Appropriate technology is technology that suits the level of income, skills and needs of the people; it can therefore encompass both high and low technologies. An example can be seen in developing countries that implement technologies suited to their expertise, such as rain barrels and hand pumps. These technologies are low-cost and can be maintained with local skills, making them affordable and efficient. However, implementing rain barrels in a developed country would not be appropriate, as it would not suit the level of technological advancement in those countries. Appropriate technological fixes therefore take into consideration a country's level of development before they are implemented.
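The installed-capacity and annual-generation figures quoted above for the Three Gorges Dam can be related through the capacity factor, the fraction of a plant's theoretical maximum annual output that it actually generates. The sketch below is illustrative arithmetic only; the Itaipú installed capacity of roughly 14,000 MW is supplied here for comparison and is not a figure taken from the text above.

```python
# Illustrative capacity-factor arithmetic using the 2014 generation figures
# quoted above. The Itaipu installed capacity (~14,000 MW) is an added
# assumption, not a figure from this article.
HOURS_PER_YEAR = 8760

def capacity_factor(annual_generation_twh: float, installed_mw: float) -> float:
    """Share of the theoretical maximum annual output actually generated."""
    max_output_twh = installed_mw * HOURS_PER_YEAR / 1e6  # MWh converted to TWh
    return annual_generation_twh / max_output_twh

print(f"Three Gorges, 2014: {capacity_factor(98.8, 22_500):.0%}")  # about 50%
print(f"Itaipu, 2014:       {capacity_factor(87.8, 14_000):.0%}")  # about 72%
```

The comparison helps explain how a dam with far smaller installed capacity can generate a comparable amount of energy in a given year: steadier river flow allows it to run closer to full output for more of the year.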
Concerns Michael and Joyce Huesemann caution against the hubris of large-scale techno-fixes. In the book Techno-Fix: Why Technology Won't Save Us Or the Environment, they argue that the negative unintended consequences of science and technology are inherently unavoidable and unpredictable, that counter-technologies or techno-fixes are no lasting solutions, and that modern technology, in its current context, promotes not sustainability but collapse. Naomi Klein is a prominent opponent of the view that technological fixes alone will solve our problems. She explained her concerns in her book This Changes Everything: Capitalism vs. the Climate, stating that technical fixes for climate change, such as geoengineering, bring significant risks, as "we simply don't know enough about the Earth system to be able to re-engineer it safely". According to her, the proposed technique of dimming the sun's rays with sulphate-spraying helium balloons, intended to mimic the cooling effect that large volcanic eruptions have on the atmosphere, is highly dangerous, yet such schemes will surely be attempted if abrupt climate change gets seriously under way. Such concerns are explored in their complexity in Elizabeth Kolbert's Under a White Sky. Various experts and environmental groups have also voiced concerns about approaches that look to techno-fixes as solutions, warning that these would be "misguided, unjust, profoundly arrogant and endlessly dangerous", and that even the prospect of a technological 'fix' for global warming, however impractical, could lessen political pressure for a real solution. See also Attitudinal fix Structural fix Differential technological development Law of Unintended Consequences Philosophy of technology Social engineering (political science) Technocentrism == References ==
selli event
The Selli Event, also known as OAE1a, was an oceanic anoxic event (OAE) of global scale that occurred during the Aptian stage of the Early Cretaceous, about 120.5 million years ago. The OAE is associated with large igneous province volcanism and an extinction event of marine organisms driven by global warming, ocean acidification, and anoxia. Timing The negative carbon isotope excursion (CIE) representing the onset of OAE1a was rapid, taking only 22,000-47,000 years. The recovery of the global climate from the injection of large amounts of isotopically light carbon lasted for over a million years. The OAE lasted for about 1.1 to 1.3 Myr in total; one high-precision estimate put the length of OAE1a at 1.157 Myr. Causes Global warming OAE1a ensued during a hot climatic interval, with the global average temperature being around 21.5 °C. The Tethys Ocean experienced an increase in humidity at the beginning of OAE1a, while conditions around the Boreal Ocean were initially dry and only humidified later on during the OAE. The increase in global temperatures that caused OAE1a was most likely driven by large igneous province (LIP) volcanism. The negative CIE preceding the OAE is believed to reflect volcanic release of carbon dioxide into the atmosphere and its consequent warming of the Earth. Enrichments in unradiogenic osmium, which is primarily derived from alteration of oceanic crust by hydrothermal volcanism, further bolster volcanism as the driver of OAE1a. Multiple LIPs have been implicated as causes of the rapid global warming responsible for the onset of OAE1a, including the Kerguelen Plateau and the Ontong Java Plateau. The rate of greenhouse gas emissions leading up to OAE1a was relatively slow, causing the anoxic event to generate only a minor extinction event, in contrast to the severe LIP-induced Capitanian, Permian-Triassic, and Triassic-Jurassic mass extinctions and the ongoing Holocene extinction caused in part by anthropogenic greenhouse gas release, each of which was or is characterised by a very high rate of carbon dioxide discharge. Despite a much smaller methane clathrate reservoir relative to the present day, the degassing of methane clathrate deposits may nonetheless have significantly exacerbated volcanic warming. Following OAE1a, δ18O values increased, indicating a drop in temperatures that coincided with a δ13Corg decline. Enhanced phosphorus recycling OAE1a coincided with a peak in a 5-6 Myr periodicity cycle in the accumulation of phosphorus in marine sediments. During such peaks, a short-term positive feedback loop dominated: abundant phosphorus increased biological productivity, which decreased the oxygenation of seawater, which in turn increased the regeneration of phosphorus from marine sediments. This loop was eventually mitigated by a long-term negative feedback in which rising atmospheric oxygen enhanced wildfire activity and diminished phosphorus input into the oceans. An increase in the ratios of organic carbon to reactive phosphorus species and of total nitrogen to reactive phosphorus confirms that leakage of sedimentary phosphorus back into the water column occurred during OAE1a, with this process likely accelerated by the increased global temperatures of the time. Effects Marine productivity increased. The productivity spike was likely driven by an increase in iron availability.
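The carbon, oxygen, and lithium isotope shifts discussed in this article are reported in standard geochemical delta notation. As a brief reference (the general convention, not a formula from any specific study cited here), the carbon isotope value of a sample is expressed relative to a reference standard in parts per thousand (per mil, ‰):

```latex
\delta^{13}\mathrm{C} =
  \left(
    \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
         {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}}
    - 1
  \right) \times 1000
```

A negative excursion in δ13C therefore records the addition of isotopically light carbon, such as volcanic CO2 or methane from clathrates, to the ocean-atmosphere system, consistent with the mechanisms described above; δ18O and δ7Li are defined analogously for oxygen and lithium isotope ratios.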
Increased sulphate flux from volcanism caused an increase in hydrogen sulphide production, which in turn increased phosphorus availability in the water column by inhibiting its burial on the seafloor and enabled the development of anoxia.The large-scale volcanic release of carbon dioxide caused a drop in the pH of seawater at the start of OAE1a, as much of this excess carbon dioxide was absorbed by the ocean and dissolved as carbonic acid. Seawater carbonate-saturation was severely reduced. Ocean acidification began shortly after the negative CIE and lasted for approximately 0.85 Myr.δ7Li measurements indicate an enrichment in isotopically light lithium coeval with the negative CIE, signifying an increase in silicate weathering amidst the volcanically induced global warming of OAE1a. A second negative lithium isotope excursion occurred synchronously with a strontium isotope minimum, demarcating another peak in silicate weathering. This weathering may have buffered the warming effects of large igneous province volcanism and helped to cool the Earth back to its pre-OAE1a state.OAE1a, as with other OAEs, exhibited widespread deposition of black shales rich in organic matter incapable of being decomposed on the seabed, as the anoxic conditions prohibited habitation of most microbial decomposers. Black shale deposition begins during the C6 stage of OAE1a and lasted for around 0.4 Myr.Overall, the biotic effects of OAE1a were comparatively minor relative to other LIP-driven extinction events. Nannoconids that were highly calcified suffered significant decline during OAE1a, likely as a consequence of ocean acidification, although this causal relationship is disputed by other authors. See also Ireviken Event Lundgreni Event Mulde Event Lau Event Šilalė Event Jenkyns Event Paquier Event Amadeus Event Breistroffer Event Bonarelli Event == References ==
temperature record of the last 2,000 years
The temperature record of the last 2,000 years is reconstructed using data from climate proxy records in conjunction with the modern instrumental temperature record, which only covers the last 170 years at a global scale. Large-scale reconstructions covering part or all of the 1st and 2nd millennia have shown that recent temperatures are exceptional: the Intergovernmental Panel on Climate Change Fourth Assessment Report of 2007 concluded that "Average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the past 1,300 years." The curve shown in graphs of these reconstructions is widely known as the hockey stick graph because of the sharp increase in temperatures during the last century. As of 2010 this broad pattern was supported by more than two dozen reconstructions, using various statistical methods and combinations of proxy records, with variations in how flat the pre-20th-century "shaft" appears. Sparseness of proxy records results in considerable uncertainty for earlier periods. Individual proxy records, such as tree ring widths and densities used in dendroclimatology, are calibrated against the instrumental record for the period of overlap. Networks of such records are used to reconstruct past temperatures for regions: tree ring proxies have been used to reconstruct Northern Hemisphere extratropical temperatures (within the tropics trees do not form rings) but are confined to land areas and are scarce in the Southern Hemisphere, which is largely ocean. Wider coverage is provided by multiproxy reconstructions, incorporating proxies such as lake sediments, ice cores and corals, which are found in different regions, and using statistical methods to relate these sparser proxies to the greater numbers of tree ring records. The "Composite Plus Scaling" (CPS) method is widely used for large-scale multiproxy reconstructions of hemispheric or global average temperatures; this is complemented by Climate Field Reconstruction (CFR) methods which show how climate patterns have developed over large spatial areas, making the reconstruction useful for investigating natural variability and long-term oscillations as well as for comparisons with patterns produced by climate models. During the 1,900 years before the 20th century, it is likely that the next warmest period was from 950 to 1100, with peaks at different times in different regions. This has been called the Medieval Warm Period, and some evidence suggests widespread cooler conditions during a period around the 17th century known as the Little Ice Age. In the "hockey stick controversy", climate change deniers have asserted that the Medieval Warm Period was warmer than at present, and have disputed the data and methods of climate reconstructions. Temperature change in the last 2,000 years According to the IPCC Sixth Assessment Report, in the last 170 years humans have caused the global temperature to increase to the highest level in the last 2,000 years. The current multi-century period is the warmest in the past 100,000 years. The temperature in the years 2011-2020 was 1.09 °C higher than in 1850–1900. The temperature on land rose by 1.59 °C, while over the ocean it rose by only 0.88 °C. In 2020 the temperature was 1.2 °C above the pre-industrial level. In September 2023 the temperature was 1.75 °C above the pre-industrial level, and the entire year of 2023 was expected to be about 1.4 °C above it.
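As a rough consistency check on the figures above (an illustrative back-of-the-envelope calculation, not a method or number taken from the IPCC report), the separate land and ocean changes can be combined into a global mean by weighting them by the approximate fractions of the Earth's surface they cover, assuming about 29% land and 71% ocean:

```latex
\Delta T_{\mathrm{global}}
  \approx f_{\mathrm{land}}\,\Delta T_{\mathrm{land}}
        + f_{\mathrm{ocean}}\,\Delta T_{\mathrm{ocean}}
  \approx 0.29 \times 1.59\,^{\circ}\mathrm{C}
        + 0.71 \times 0.88\,^{\circ}\mathrm{C}
  \approx 1.09\,^{\circ}\mathrm{C}
```

which is consistent with the stated global figure of 1.09 °C.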
General techniques and accuracy By far the best observed period is from 1850 to the present day, with coverage improving over time. Over this period the recent instrumental record, mainly based on direct thermometer readings, has approximately global coverage. It shows a general warming in global temperatures. Before this time various proxies must be used. These proxies are less accurate than direct thermometer measurements, have lower temporal resolution, and have less spatial coverage. Their only advantage is that they enable a longer record to be reconstructed. Since the direct temperature record is more accurate than the proxies (indeed, it is needed to calibrate them) it is used when available: i.e., from 1850 onwards. Quantitative methods using proxy data As there are few instrumental records before 1850, temperatures before then must be reconstructed based on proxy methods. One such method, based on principles of dendroclimatology, uses the width and other characteristics of tree rings to infer temperature. The isotopic composition of snow, corals, and stalactites can also be used to infer temperature. Other techniques which have been used include examining records of the time of crop harvests, the treeline in various locations, and other historical records to make inferences about the temperature. These proxy reconstructions are indirect inferences of temperature and thus tend to have greater uncertainty than instrumental data. Most proxy records have to be calibrated against local temperature records during their period of overlap, to estimate the relationship between temperature and the proxy. The longer history of the proxy is then used to reconstruct temperature from earlier periods. Proxy records must be averaged in some fashion if a global or hemispheric record is desired. The "Composite Plus Scaling" (CPS) method is widely used for large-scale multiproxy reconstructions of hemispheric or global average temperatures. This is complemented by Climate Field Reconstruction (CFR) methods which show how climate patterns have developed over large spatial areas. Considerable care must be taken in the averaging process; for example, if a certain region has a large number of tree ring records, a simple average of all the data would strongly over-weight that region, and statistical techniques are used to avoid such over-weighting. In the Mann, Bradley & Hughes 1998 and Mann, Bradley & Hughes 1999 CFR reconstructions, principal components analysis was used to combine some of these regional records before they were globally combined. An important distinction is between so-called 'multi-proxy' reconstructions, which attempt to obtain a global temperature reconstruction by using multiple proxy records distributed over the globe and more regional reconstructions. Usually, the various proxy records are combined arithmetically, in some weighted average. More recently, Osborn and Briffa used a simpler technique, counting the proportion of records that are positive, negative or neutral in any time period. This produces a result in general agreement with the conventional multi-proxy studies. The 2007 IPCC Fourth Assessment Report cited 14 reconstructions, 10 of which covered 1,000 years or longer, to support its conclusion that "Average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the past 1,300 years". 
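To make the composite-plus-scaling approach described above concrete, here is a minimal illustrative sketch in Python (not code from any published reconstruction): each proxy series is standardized over the calibration window, the standardized series are averaged into a composite, and the composite is then rescaled so that its mean and variance match the instrumental record over the period of overlap. The proxy data, series count, and calibration window below are invented for illustration.

```python
import numpy as np

def composite_plus_scaling(proxies, instrumental, calib_slice):
    """Minimal composite-plus-scaling (CPS) sketch.

    proxies      : 2D array, shape (n_series, n_years) of proxy values
    instrumental : 1D array of instrumental temperatures on the same year axis
                   (NaN outside the instrumental era)
    calib_slice  : slice selecting the years of overlap used for calibration
    """
    # Standardize each proxy series over the calibration window,
    # then average them into a single dimensionless composite.
    calib = proxies[:, calib_slice]
    standardized = (proxies - calib.mean(axis=1, keepdims=True)) / calib.std(axis=1, keepdims=True)
    composite = standardized.mean(axis=0)

    # Rescale the composite so its mean and variance match the
    # instrumental record over the calibration period.
    target = instrumental[calib_slice]
    fitted = composite[calib_slice]
    scale = target.std() / fitted.std()
    offset = target.mean() - scale * fitted.mean()
    return scale * composite + offset

# Illustrative use with synthetic data: three fake proxy series, years 1000-2000 CE,
# calibrated against a synthetic instrumental record over 1850-2000.
years = np.arange(1000, 2001)
rng = np.random.default_rng(0)
true_temp = 0.0005 * (years - 1000) + 0.2 * np.sin(years / 60.0)
proxies = np.stack([true_temp * (1 + 0.3 * i) + rng.normal(0, 0.15, years.size) for i in range(3)])
instrumental = np.where(years >= 1850, true_temp + rng.normal(0, 0.05, years.size), np.nan)
reconstruction = composite_plus_scaling(proxies, instrumental, slice(850, 1001))
```

A real reconstruction would additionally weight proxies to avoid over-representing data-rich regions and would verify skill against withheld data, as discussed in the text above.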
Qualitative reconstruction using historical records It is also possible to use historical data such as times of grape harvests, sea-ice-free periods in harbours and diary entries of frost or heatwaves to produce indications of when it was warm or cold in particular regions. These records are harder to calibrate, are often only available sparsely through time, may be available only from developed regions, and are unlikely to come with good error estimates. These historical observations of the same time period show periods of both warming and cooling. Limitations The apparent differences between the quantitative and qualitative approaches are not fully reconciled. The reconstructions mentioned above rely on various assumptions to generate their results. If these assumptions do not hold, the reconstructions would be unreliable. For quantitative reconstructions, the most fundamental assumptions are that proxy records vary with temperature and that non-temperature factors do not confound the results. In the historical records temperature fluctuations may be regional rather than hemispheric in scale. In a letter to Nature, Bradley, Hughes & Mann (2006) pointed to the original title of their 1998 article: Northern Hemisphere temperatures during the past millennium: inferences, uncertainties, and limitations, and pointed out that more widespread high-resolution data are needed before more confident conclusions can be reached and that the uncertainties were the point of the article. History In the 1960s, Hubert Lamb generalised from historical documents and temperature records of central England to propose a Medieval Warm Period in the North Atlantic region, followed by the Little Ice Age. This was discussed in the IPCC First Assessment Report with cautions that the medieval warming might not have been global. The use of proxy indicators to produce quantitative estimates of past temperatures had developed sporadically from the 1930s onwards, and Bradley & Jones 1993 introduced the "Composite Plus Scaling" (CPS) method. Their reconstruction back to 1400 featured in the IPCC Second Assessment Report. The Michael E. Mann, Raymond S. Bradley and Malcolm K. Hughes reconstruction (Mann, Bradley & Hughes 1998, MBH98) showed global patterns of annual surface temperature, and average hemispheric temperatures back to 1400, with an emphasis on uncertainties. Jones et al. 1998 independently produced a CPS reconstruction extending back for a thousand years, and Mann, Bradley & Hughes 1999 (MBH99) used the MBH98 methodology to extend their study back to 1000. The term hockey stick was used by the climatologist Jerry Mahlman to describe the pattern this showed, envisaging a graph that is relatively flat to 1900 as forming an ice hockey stick's "shaft", followed by a sharp increase corresponding to the "blade". A version of the MBH99 graph was featured prominently in the 2001 IPCC Third Assessment Report (TAR), which also drew on Jones et al. 1998 and three other reconstructions to support the conclusion that, in the Northern Hemisphere, the 1990s was likely to have been the warmest decade and 1998 the warmest year during the past 1,000 years.
The graph was featured in publicity, and became a focus of dispute for those opposed to the strengthening scientific consensus that late 20th century warmth was exceptional. In 2003, as lobbying over the 1997 Kyoto Protocol intensified, efforts by the Bush administration to remove climate reconstructions from the first Environmental Protection Agency Report on the Environment, and Jim Inhofe's Senate speech claiming that man-made global warming is a hoax, both drew on the Soon and Baliunas controversy. Later in 2003, Stephen McIntyre and Ross McKitrick published McIntyre & McKitrick 2003 disputing data in the MBH98 paper, but their argument was refuted. In 2004 Hans von Storch said that the MBH98 statistical techniques understated variability, but he erred in saying this undermined the overall graph. In 2005 McIntyre and McKitrick criticised the principal components analysis methodology in MBH98 and MBH99, but Huybers 2005 and Wahl & Ammann 2007 pointed to errors made by McIntyre and McKitrick. The National Research Council North Report in 2006 supported MBH with minor caveats. The Wegman Report supported McIntyre and McKitrick's study, but was subsequently discredited. Arguments against the MBH studies were reintroduced as part of the Climatic Research Unit email controversy, but dismissed by eight independent investigations. The test in science is whether findings can be replicated using different data and methods. More than two dozen reconstructions, using various statistical methods and combinations of proxy records, have supported the broad consensus shown in the original 1998 hockey-stick graph, with variations in how flat the pre-20th century "shaft" appears. The IPCC Fifth Assessment Report (AR5 WG1) of 2013 examined temperature variations during the last two millennia, and concluded that for average annual Northern Hemisphere temperatures, "the period 1983–2012 was very likely the warmest 30-year period of the last 800 years (high confidence) and likely the warmest 30-year period of the last 1400 years (medium confidence)". See also CLIWOC - Climatological database for the world's oceans Dendroclimatology Table of historic and prehistoric climate indicators Notes References External links A collection of various reconstructions of global and local temperature from centuries on up An NOAA collection of individual data records Surface Temperature Reconstructions for the Last 2,000 Years
r-454b
R-454B, also known by the trademarked names Opteon XL41, Solstice 454B, and Puron Advance, is a zeotropic blend of 68.9 percent difluoromethane (R-32), a hydrofluorocarbon, and 31.1 percent 2,3,3,3-tetrafluoropropene (R-1234yf), a hydrofluoroolefin. Because of its reduced global warming potential (GWP), R-454B is intended to be an alternative to refrigerant R-410A in new equipment. R-454B has a GWP of 466, which is 78 percent lower than R-410A's GWP of 2088. R-454B is non-toxic and mildly flammable, with an ASHRAE safety classification of A2L. In the United States, it is expected to be packaged in a container that is red or has a red band on the shoulder or top. History The refrigeration industry has been seeking replacements for R-410A because of its high global warming potential. R-454B, formerly known as DL-5A, has been selected by several manufacturers, including Mitsubishi Electric, Carrier, Johnson Controls, and others. R-454B was developed by and is manufactured by Chemours. Carrier first announced the introduction of R-454B in ducted residential and light commercial packaged refrigeration and air conditioning products in 2018, with R-454B-based product launches starting in 2023. Related refrigerants R-454B is not the only blend of R-32 and R-1234yf to be proposed as a refrigerant. Other blends include R-454A (35 percent R-32, 65 percent R-1234yf) and R-454C (21.5 percent R-32, 78.5 percent R-1234yf). There are also several blends that include a third component. References
center for the study of carbon dioxide and global change
The Center for the Study of Carbon Dioxide and Global Change is a 501(c)(3) non-profit organization based in Tempe, Arizona. It is seen as a front group for the fossil fuel industry, and as promoting climate change denial. The Center produces a weekly online newsletter called CO2Science. The Center was founded and is run by Craig D. Idso, along with Sherwood B. Idso, his father, and Keith E. Idso, his brother. They came from backgrounds in agriculture and climate. According to the Idsos, they became involved in the global warming controversy through their study of earth's temperature sensitivity to radiative perturbations and plant responses to elevated CO2 levels and carbon sequestration. The Center sharply disputes the scientific consensus on climate change shown in IPCC assessment reports, and believes that global warming will be beneficial to mankind. Funding According to IRS records, the ExxonMobil Foundation provided a grant of $15,000 to the center in 2000. Another report states that ExxonMobil has funded an additional $55,000 to the center. ExxonMobil stated it funded, "organizations which research significant policy issues and promote informed discussion on issues of direct relevance to the company. [...] These organizations do not speak on our behalf, nor do we control their views and messages."The center was also funded by Peabody Energy, America’s biggest coalmining company. Reception A December 2009 article in Mother Jones magazine said the Center was a promoter of climate disinformation, from a family prominent in promoting climate change denial. References External links Center for the Study of Carbon Dioxide and Global Change Nongovernmental International Panel on Climate Change "Center for the Study of Carbon Dioxide and Global Change Internal Revenue Service filings". ProPublica Nonprofit Explorer.
hyperthermal event
A hyperthermal event is a sudden warming of the planet on a geologic time scale. The consequences of this type of event are the subject of numerous studies because they can constitute an analogue of current global warming. Hyperthermal events The first event of this type was described in 1991 from a sediment core extracted from an Ocean Drilling Program (ODP) borehole drilled in the Weddell Sea, off Antarctica. This event occurred at the boundary of the Paleocene and Eocene epochs approximately 56 million years ago. It is now called the Paleocene-Eocene Thermal Maximum (PETM). During this event, the temperature of the oceans increased by more than 5 °C in less than 10,000 years. Since this discovery, several other hyperthermal events have been identified in this lower part of the Paleogene geological period: the Dan-C2 event at the beginning of the Danian stage of the Paleocene, about 65.2 million years ago, right at the base of the Cenozoic era; the Danian-Selandian event at the transition between the Danian and Selandian stages of the Paleocene, about 61 million years ago; the two events following the PETM during the Eocene climatic optimum: the Eocene Thermal Maximum 2 (ETM2) about 53.2 million years ago, and the Eocene Thermal Maximum 3 (ETM3) about 52.5 million years ago. But the PETM remains the most studied of these hyperthermal events. Other hyperthermal events occurred at the end of most Quaternary glaciations. Probably the most notable of these is the abrupt warming marking the end of the Younger Dryas, which saw an average annual temperature rise of several degrees in less than a century. Causes While the consequences of these hyperthermal events are now well studied and known, their causes are still debated. Two main, possibly complementary, mechanisms have been proposed for the initiation of these sudden warmings: orbital forcing, with maxima in Earth's long and/or short eccentricity cycles that accentuate seasonal contrasts and lead to global warming; exceptional volcanic activity, especially in the North Atlantic igneous province. Consequences Marine warming due to the PETM is estimated, across all latitudes of the globe, at between 4 and 5 °C for deep ocean waters and between 5 and 9 °C for surface waters. Carbon trapped in clathrates buried in high-latitude sediments is released to the ocean as methane (CH4), which quickly oxidizes to carbon dioxide (CO2). Ocean acidification and carbonate dissolution As a result of the increase in CO2 dissolved in seawater, the oceans are acidifying. This results in a dissolution of the carbonates; global sedimentation becomes essentially clayey. This process takes place in less than 10,000 years, while it takes about 100,000 years for the carbonate sedimentation to return to its pre-PETM level, mainly by CO2 capture through greater silicate weathering on the continents. Disruption of ocean circulations The δ13C carbon isotope ratios of the carbonate shells of benthic foraminifera show a disruption of ocean circulation during the PETM under the effect of global warming. This change took place over a few thousand years. The return to the previous situation, again by negative feedback thanks to the "CO2 pump" of silicate weathering, took about 200,000 years.
Impacts on marine fauna While the benthic foraminifera had gone through the Cretaceous-Tertiary extinction that occurred around 66 million years ago without difficulty, the hyperthermal event of the PETM, 10 million years later, decimated them, with the disappearance of 30 to 50% of existing species. The warming of surface waters also leads to eutrophication of the marine environment, which in turn leads, through positive feedback, to a rapid increase in CO2 emissions. Impacts on terrestrial fauna Mammals, which had diversified greatly after the end-Cretaceous extinction, were strongly affected by the climatic warming of the Paleogene. Temperature increases and the induced climate changes modified the flora and the quantity of forage available to herbivores. A large number of mammal groups thus appeared at the beginning of the Eocene, about 56 million years ago: Artiodactyls, Perissodactyls, Primates, and Hyaenodontidae, among others. Analogies with current global warming Even though the hyperthermal events of the Paleogene appear extremely abrupt on the geologic time scale (a few thousand years for a temperature increase on the order of 5 °C), they still unfolded over much longer durations than those envisaged in current models of anthropogenic global warming. The various studies of hyperthermal events emphasize the positive feedbacks which, once a warming has begun, accelerate it considerably. References See also Paleocene–Eocene Thermal Maximum Eocene climatic optimum
availability cascade
An availability cascade is a self-reinforcing cycle that explains the development of certain kinds of collective beliefs. A novel idea or insight, usually one that seems to explain a complex process in a simple or straightforward manner, gains rapid currency in the popular discourse by its very simplicity and by its apparent insightfulness. Its rising popularity triggers a chain reaction within the social network: individuals adopt the new insight because other people within the network have adopted it, and on its face it seems plausible. The reason for this increased use and popularity of the new idea involves both the availability of the previously obscure term or idea, and the need of individuals using the term or idea to appear to be current with the stated beliefs and ideas of others, regardless of whether they in fact fully believe in the idea that they are expressing. Their need for social acceptance, and the apparent sophistication of the new insight, overwhelm their critical thinking. The idea of the availability cascade was first developed by Timur Kuran and Cass Sunstein as a variation of information cascades mediated by the availability heuristic, with the addition of reputational cascades. The availability cascade concept has been highly influential in finance theory and regulatory research, particular with respect to assessing and regulating risk. Cascade elements Availability cascades occur in a society via public discourse (e.g. the public sphere and the news media) or over social networks—sets of linked actors in one or more of several roles. These actors process incoming information to form their private beliefs according to various rules, both rational and semi-rational. The semi-rational rules include the heuristics, in particular the availability heuristic. The actors then behave and express their public beliefs according to self-interest, which might cause their publicly expressed beliefs to deviate from their privately held beliefs. Kuran and Sunstein emphasize the role of availability entrepreneurs, agents willing to invest resources into promoting a belief in order to derive some personal benefit. Other availability entrepreneurs with opposing interests may wage availability counter-campaigns. Other key roles include journalists and politicians, both of which are subject to economic and reputational pressures, the former in competition in the media, the latter for political status. As resources (e.g. attention and money) are limited, beliefs compete with one another in the "availability market". A given incident and subsequent availability campaign may succeed in raising the availability of one issue at the expense of other issues. Belief formation Dual process theory posits that human reasoning is divided into two systems, often called System 1 and System 2. System 1 is automatic and unconscious; other terms used for it include the implicit system, the experiential system, the associative system, and the heuristic system. System 2 is evolutionarily recent and specific to humans, performing the more slow and sequential thinking. It is also known as the explicit system, the rule-based system, the rational system, or the analytic system. In The Happiness Hypothesis, Jonathan Haidt refers to System 1 and System 2 as the elephant and the rider: while human beings incorporate reason into their beliefs, whether via direct use of facts and logic or their application as a test to hypotheses formed by other means, it is the elephant that is really in charge. 
Cognitive biases Heuristics are simple, efficient rules which people often use to form judgments and make decisions. They are mental shortcuts that replace a complex problem with a simpler one. These rules work well under most circumstances, but they can lead to systematic deviations from logic, probability or rational choice theory. The resulting errors are called "cognitive biases" and many different types have been documented. These have been shown to affect people's choices in situations like valuing a house or deciding the outcome of a legal case. Heuristics usually govern automatic, intuitive judgments but can also be used as deliberate mental strategies when working from limited information. While seemingly irrational, the cognitive biases may be interpreted as the result of bounded rationality, with human beings making decisions while economizing time and effort. Kuran and Sunstein describe the availability heuristic as more fundamental than the other heuristics: besides being important in its own right, it enables and amplifies the others, including framing, representativeness, anchoring, and reference points. Availability heuristic Even educated human beings are notoriously poor at thinking statistically. The availability heuristic, first identified by Daniel Kahneman and Amos Tversky, is a mental shortcut that occurs when people judge the probability of events by how easy it is to think of examples. The availability heuristic operates on the notion that, "if you can think of it, it must be important." Availability can be influenced by the emotional power of examples and by their perceived frequency; while personal, first-hand incidents are more available than those that happened to others, availability can be skewed by the media. In his book Thinking, Fast and Slow, Kahneman cites the examples of celebrity divorces and airplane crashes; both are more often reported by the media, and thus tend to be exaggerated in perceived frequency. Examples An important class of judgments is those concerning risk: the expectation of harm to result from a given threat, a function of the threat's likelihood and impact. Changes in perceived risk result in risk compensation—correspondingly more or less mitigation, including precautionary measures and support for regulation. Kuran and Sunstein offer three examples of availability cascades—Love Canal, the Alar scare, and TWA Flight 800—in which a spreading public panic led to growing calls for increasingly expensive government action to deal with risks that turned out later to be grossly exaggerated. Others have used the term "culture of fear" to refer to the habitual achieving of goals via such fear appeals, notably in the case of the threat of terrorism. Disease threats In the early years of the HIV/AIDS epidemic, many believed that the disease received less attention than warranted, in part due to the stigma attached to its sufferers. Since that time advocates— availability entrepreneurs that include LGBT activists and conservative Surgeon General of the United States C. Everett Koop—have succeeded in raising awareness to achieve significant funding. Similarly, awareness and funding for breast cancer and prostate cancer are high, thanks in part to the availability of these diseases. Other prevalent diseases competing for funding but lacking the availability of HIV/AIDS or cancer include lupus, sickle-cell anemia, and tuberculosis. Vaccination scares The MMR vaccine controversy was an example of an unwarranted health scare. 
It was triggered by the publication in 1998 of a paper in the medical journal The Lancet which presented apparent evidence that autism spectrum disorders could be caused by the MMR vaccine, an immunization against measles, mumps and rubella. In 2004, investigations by Sunday Times journalist Brian Deer revealed that the lead author of the article, Andrew Wakefield, had multiple undeclared conflicts of interest, had manipulated evidence, and had broken other ethical codes. The Lancet paper was partially retracted in 2004 and fully retracted in 2010, and Wakefield was found guilty of professional misconduct. The scientific consensus is that no evidence links the vaccine to the development of autism, and that the vaccine's benefits greatly outweigh its risks. The claims in Wakefield's 1998 The Lancet article were widely reported; vaccination rates in the UK and Ireland dropped sharply, which was followed by significantly increased incidence of measles and mumps, resulting in deaths and severe and permanent injuries. Reaction to vaccine controversies has contributed to a significant increase in preventable diseases including measles and pertussis (whooping cough), which in 2011 experienced its worst outbreak in 70 years as a result of reduced vaccination rates. Concerns about immunization safety often follow a pattern: some investigators suggest that a medical condition is an adverse effect of vaccination; a premature announcement is made of the alleged adverse effect; the initial study is not reproduced by other groups; and finally, it takes several years to regain public confidence in the vaccine. Global warming Extreme weather events provide opportunities to raise the availability of global warming. In the United States, the mass media devoted little coverage to global warming until the drought of 1988, and the testimony of James E. Hansen to the United States Senate, which explicitly attributed "the abnormally hot weather plaguing our nation" to global warming. The global warming controversy has attracted availability entrepreneurs on both sides, e.g. the book Merchants of Doubt claiming that scientific consensus had long ago been reached, and climatologist Patrick Michaels providing the denialist viewpoint. Gun violence The media inclination to sensationalism results in a tendency to devote disproportionate coverage to sympathetic victims (e.g. missing white woman syndrome), terrifying assailants (e.g. Media coverage of the Virginia Tech massacre), and incidents with multiple victims. Although half the victims of gun violence in the United States are black, generally young urban black males, media coverage and public awareness spike after suburban school shootings, as do calls for stricter gun control laws. International adoption scandals International adoption scandals receive disproportionate attention in the countries of adoptees' origins. As the incidents involve abuse of children, they easily spark media attention, and availability entrepreneurs (e.g. populist politicians) fan the flames of xenophobia, without making statistical comparisons of adoptee abuse in the source and target nations, or of the likelihood of abuse vs. other risks. Poisoned candy myths Poisoned candy myths are urban legends that malevolent individuals could hide poison or drugs, or sharp objects such as razor blades, needles, or broken glass in candy and distribute the candy in order to harm random children, especially during Halloween trick-or-treating. Several events fostered the candy tampering myth. 
The first took place in 1964, when an annoyed Long Island, New York housewife started giving out packages of inedible objects to children who she believed were too old to be trick-or-treating. The packages contained items such as steel wool, dog biscuits, and ant buttons (which were clearly labeled with the word "poison"). Although nobody was injured, she was prosecuted and pleaded guilty to endangering children. The same year saw reports of lye-filled bubble gum being handed out in Detroit and rat poison being given in Philadelphia.The second milestone in the spread of the candy-tampering myths was an article published in The New York Times in 1970. It claimed that "Those Halloween goodies that children collect this weekend on their rounds of ‘trick or treating’ may bring them more horror than happiness", and provided specific examples of potential tampering.In 2008, candy was found with metal shavings and metal blades embedded in it. The candy was Pokémon Valentine's Day lollipops purchased from a Dollar General store in Polk County, Florida. The candy was determined to have been manufactured in China and not tampered with within the United States. The lollipops were pulled from the shelves after a mother reported a blade in her child's lollipop and after several more lollipops with metal shavings in them were confiscated from a local elementary school. Also in 2008, some cold medicine was discovered in cases of Smarties that were handed out to children in Ontario.Over the years, various experts have tried to debunk the various candy tampering stories. Among this group is Joel Best, a University of Delaware sociologist who specializes in investigating candy tampering legends. In his studies, and the book Threatened Children: Rhetoric and Concern about Child-Victims, he researched newspapers from 1958 on in search of candy tampering. Of these stories, fewer than 90 instances might have qualified as actual candy tampering. Best has found five child deaths that were initially thought by local authorities to be caused by homicidal strangers, but none of those were sustained by investigation.Despite the falsity of these claims, the news media promoted the story continuously throughout the 1980s, with local news stations featuring frequent coverage. During this time, cases of poisoning were repeatedly reported based on unsubstantiated claims or before a full investigation could be completed and often never followed up on. This one-sided coverage contributed to the overall panic and caused rival media outlets to issue reports of candy tampering as well. By 1985, the media had driven the hysteria about candy poisonings to such a point that an ABC News/The Washington Post poll that found 60% of parents feared that their children would be injured or killed because of Halloween candy sabotage. Media feeding frenzy The phenomenon of media feeding frenzies is driven by a combination of the psychology described by the availability cascade model and the financial imperatives of media organizations to retain their funding. Policy implications Technocracy vs. democracy There are two schools of thought on how to cope with risks raised by availability cascades: technocratic and democratic. The technocratic approach, championed by Kuran and Sunstein, emphasizes assessing, prioritizing, and mitigating risks according to objective risk measures (e.g. expected costs, expected disability-adjusted life years (DALY)). 
The technocratic approach considers availability cascades to be phenomena of mass irrationality that can distort or hijack public policy, misallocating resources or imposing regulatory burdens whose costs exceed the expected costs of the risks they mitigate. The democratic approach, championed by Paul Slovic, respects risk preferences as revealed by the availability market. For example, though lightning strikes kill far more people each year than shark attacks, if people genuinely consider death by shark worse than death by lightning, a disproportionate share of resources should be devoted to averting shark attacks. Institutional safeguards Kuran and Sunstein recommend that availability cascades be recognized, and institutional safeguards be implemented in all branches of government. They recommend expanded product defamation laws, analogous to personal libel laws, to discourage availability entrepreneurs from knowingly spreading false and damaging reports about a product. They recommend that the legislative branch create a Risk Regulation Committee to assess risks in a broader context and perform cost-benefit analyses of risks and regulations, avoiding hasty responses pandering to public opinion. They recommend that the executive branch use peer review to open agency proposals to scrutiny by informed outsiders. They also recommend the creation of a Risk Information Center with a Risk Information Web Site to provide the public with objective risk measures. In the United States, the Centers for Disease Control and Prevention and the Federal Bureau of Investigation maintain web sites that provide objective statistics on the causes of death and violent crime. References See also Preference falsification Availability heuristic Knowledge falsification Information cascade Reputational cascade Moral panic Groupthink The Emperor's New Clothes Frequency illusion
r-410a
R-410A, sold under the trademarked names AZ-20, EcoFluor R410, Forane 410A, Genetron R410A, Puron, and Suva 410A, is a zeotropic but near-azeotropic mixture of difluoromethane (CH2F2, called R-32) and pentafluoroethane (CHF2CF3, called R-125) that is used as a refrigerant in air conditioning and heat pump applications. R-410A cylinders were colored rose but are no longer specially color-coded, now bearing a standard light gray color. On December 27, 2020, the United States Congress passed the American Innovation and Manufacturing (AIM) Act, which directs the US Environmental Protection Agency (EPA) to phase down production and consumption of hydrofluorocarbons (HFCs). HFCs have a high global warming potential and contribute to climate change. Rules developed under the AIM Act require HFC production and consumption to be reduced by 85% from 2022 to 2036. R-410A will be restricted by this Act because it contains the HFC R-125. Other refrigerants (like R-32 and R-454B) will replace R-410A in most applications, just as R-410A replaced the earlier refrigerant, R-22. History R-410A was invented and patented by Allied Signal (now Honeywell) in 1991. Other producers around the world have been licensed to manufacture and sell R-410A, but Honeywell continues to be the leader in capacity and sales. R-410A was successfully commercialized in the air conditioning segment by a combined effort of Carrier Corporation, Emerson Climate Technologies, Inc., Copeland Scroll Compressors (a division of Emerson Electric Company), and Allied Signal. Carrier Corporation was the first company to introduce an R-410A-based residential air conditioning unit into the market in 1996 and holds the trademark "Puron". Transition from R-22 to R-410A R-410A replaced R-22 as the preferred refrigerant for use in residential and commercial air conditioners in Japan, Europe, and the United States. Parts designed specifically for R-410A must be used, as R-410A operates at higher pressures than other refrigerants. R-410A systems thus require service personnel to use different tools, equipment, safety standards, and techniques. Equipment manufacturers are aware of these changes and require the certification of professionals installing R-410A systems. In addition, the AC&R Safety Coalition has been created to help educate professionals about R-410A systems. R-22 Phaseout In accordance with the terms and agreements reached in the Montreal Protocol (The Montreal Protocol on Substances That Deplete the Ozone Layer), the United States Environmental Protection Agency has mandated that production or import of R-22, along with other hydrochlorofluorocarbons (HCFCs), be phased out in the United States. In the E.U. and the U.S., R-22 cannot be used in the manufacture of new air conditioning or similar units from 1 January 2010. In other parts of the world, the phase-out date varies from country to country. All newly manufactured window air conditioners and mini split air conditioners in the United States come with R-410A. Since 1 January 2020, the production and importation of R-22 have been banned; the only available sources of R-22 include that which has been stockpiled or recovered from existing devices. R-410A use expanded globally and rapidly as it replaced R-22. Environmental effects Unlike alkyl halide refrigerants that contain bromine or chlorine, R-410A (which contains only fluorine) does not contribute to ozone depletion and is therefore becoming more widely used as ozone-depleting refrigerants like R-22 are phased out.
However, like methane, its global warming potential (GWP) is appreciably higher than that of CO2 for the time it persists. Because R-410A is a 50% combination of CH2F2 (HFC-32) and 50% CHF2CF3 (HFC-125), it is not easy to express their combined effects in a single global warming potential (GWP). However, HFC-32 has a 4.9-year lifetime and a 100-year GWP of 675, and HFC-125 has a 29-year lifetime and a 100-year GWP of 3500. The combination has a GWP of 2088 (approximately the mass-weighted average of the component GWPs; a worked sketch follows at the end of this article), higher than that of R-22 (100-year GWP=1810), and an atmospheric lifetime of nearly 30 years compared with the 12-year lifetime of R-22. Since R-410A allows for higher SEER ratings than an R-22 system by reducing power consumption, the overall impact on global warming of R-410A systems can, in some cases, be lower than that of R-22 systems due to reduced greenhouse gas emissions from power plants. This assumes that the atmospheric leakage will be sufficiently managed. Under the assumption that preventing ozone depletion is more important in the short term than GWP reduction, R-410A is preferable to R-22. R-410A Phaseout The phase-down mandated by the AIM Act will lead to R-410A's replacement by other refrigerants beginning in 2022. Alternative refrigerants are available, including hydrofluoroolefins, hydrocarbons (such as propane R-290 and isobutane R-600A), and even carbon dioxide (R-744, GWP = 1). The alternative refrigerants have much lower GWP than R-410A. Physical properties Thermophysical properties - Properties of refrigerant R410a Precaution R-410A cannot be used in R-22 service equipment because of higher operating pressures (approximately 40 to 70% higher). While R-410A has negligible fractionation potential, it cannot be ignored when charging. Trade names Suva 410A (DuPont) Puron (Carrier) Genetron AZ-20 (Honeywell) References
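As a rough cross-check of the GWP figures quoted above for R-410A (and earlier for R-454B), the nominal GWP of a blend is often approximated as the mass-weighted average of its components' 100-year GWPs. The sketch below, with a helper function invented purely for illustration, reproduces that arithmetic using the component values quoted in the text; R-1234yf's GWP is taken as roughly 1, as it is well below 1 in recent assessments.

```python
def blend_gwp(components):
    """Mass-weighted average GWP of a refrigerant blend.

    components: list of (mass_fraction, gwp_100yr) tuples; fractions sum to 1.
    """
    return sum(fraction * gwp for fraction, gwp in components)

# R-410A: 50% HFC-32 (GWP 675) + 50% HFC-125 (GWP 3500)
r410a = blend_gwp([(0.50, 675), (0.50, 3500)])   # 2087.5, close to the 2088 quoted above

# R-454B: 68.9% R-32 (GWP 675) + 31.1% R-1234yf (taken as ~1)
r454b = blend_gwp([(0.689, 675), (0.311, 1)])    # ~465, close to the 466 quoted earlier

print(round(r410a), round(r454b))
```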
refrigerant
A refrigerant is a working fluid used in the refrigeration cycle of air conditioning systems and heat pumps, where in most cases it undergoes a repeated phase transition from a liquid to a gas and back again. Refrigerants are heavily regulated due to their toxicity, flammability and the contribution of CFC and HCFC refrigerants to ozone depletion and that of HFC refrigerants to climate change. Refrigerants are used in a direct expansion (DX) system to transfer energy from one environment to another, typically from inside a building to outside (or vice versa), in what is commonly known as an "air conditioner" or "heat pump". Per kilogram, refrigerants can carry 10 times more energy than water and 50 times more than air. In some countries, refrigerants are controlled substances due to high pressures (700–1,000 kPa (100–150 psi)), extreme temperatures (−50 °C [−58 °F] to over 100 °C [212 °F]), flammability (A1 class non-flammable, A2/A2L class flammable and A3 class extremely flammable/explosive) and toxicity (B1-low, B2-medium & B3-high), as classified by ISO 817 & ASHRAE 34. Refrigerants must only be handled by engineers qualified/certified for the relevant classes of refrigerant (in the UK, C&G 2079 for A1-class, and C&G 6187-2 for A2/A2L & A3 class refrigerants). History The first air conditioners and refrigerators employed toxic or flammable gases, such as ammonia, sulfur dioxide, methyl chloride, or propane, that could result in fatal accidents when they leaked. In 1928 Thomas Midgley Jr. created the first non-flammable, non-toxic chlorofluorocarbon gas, Freon (R-12). The name is a trademark name owned by DuPont (now Chemours) for any chlorofluorocarbon (CFC), hydrochlorofluorocarbon (HCFC), or hydrofluorocarbon (HFC) refrigerant. Following the discovery of better synthesis methods, CFCs such as R-11, R-12, R-123 and R-502 dominated the market. Phasing out of CFCs In the early 1980s, scientists discovered that CFCs were causing major damage to the ozone layer that protects the earth from ultraviolet radiation, and to the ozone holes over polar regions. This led to the signing of the Montreal Protocol in 1987, which aimed to phase out CFCs and HCFCs but did not address the contributions that HFCs made to climate change. The adoption of HCFCs such as R-22 and R-123 was accelerated, and they were used in most U.S. homes in air conditioners and in chillers from the 1980s, as they have a dramatically lower Ozone Depletion Potential (ODP) than CFCs, but their ODP was still not zero, which led to their eventual phase-out. Hydrofluorocarbons (HFCs) such as R-134a, R-143a, R-407a, R-407c, R-404a and R-410a (a 50/50 blend of R-125/R-32) were promoted as replacements for CFCs and HCFCs in the 1990s and 2000s. HFCs were not ozone-depleting but did have global warming potentials (GWPs) thousands of times greater than CO2, with atmospheric lifetimes that can extend for decades. This in turn, starting from the 2010s, led to the adoption in new equipment of hydrocarbon and HFO (hydrofluoroolefin) refrigerants R-32, R-290, R-600a, R-454B, R-1234yf, R-514A, R-744 (CO2), R-1234ze and R-1233zd, which have both an ODP of zero and a lower GWP. Hydrocarbons and CO2 are sometimes called natural refrigerants because they can be found in nature. The environmental organization Greenpeace provided funding to a former East German refrigerator company to research alternative ozone- and climate-safe refrigerants in 1992.
The company developed hydrocarbon mixes such as isopentane and isobutane, propane and isobutane, or pure isobutane, called "Greenfreeze", but as a condition of the contract with Greenpeace could not patent the technology, which led to their widespread adoption by other firms. Corporate executives, however, resisted change through policy and political influence, citing the flammability and explosive properties of the refrigerants, and DuPont together with other companies blocked them in the U.S. with the U.S. EPA. Beginning on 14 November 1994, the U.S. Environmental Protection Agency restricted the sale, possession and use of refrigerants to only licensed technicians, per rules under sections 608 and 609 of the Clean Air Act. In 1995, Germany made CFC refrigerators illegal. In 1996 Eurammon, a European non-profit initiative for natural refrigerants, was established and comprises European companies, institutions, and industry experts. In 1997, FCs and HFCs were included in the Kyoto Protocol to the Framework Convention on Climate Change. In 2000 in the UK, the Ozone Regulations came into force, which banned the use of ozone-depleting HCFC refrigerants such as R22 in new systems. The Regulation banned the use of R22 as a "top-up" fluid for maintenance from 2010 for virgin fluid and from 2015 for recycled fluid. Addressing greenhouse gases With growing interest in natural refrigerants as alternatives to synthetic refrigerants such as CFCs, HCFCs and HFCs, in 2004, Greenpeace worked with multinational corporations like Coca-Cola and Unilever, and later Pepsico and others, to create a corporate coalition called Refrigerants Naturally!. Four years later, Ben & Jerry's of Unilever and General Electric began to take steps to support production and use in the U.S. It is estimated that almost 75 percent of the refrigeration and air conditioning sector has the potential to be converted to natural refrigerants. In 2006, the EU adopted a Regulation on fluorinated greenhouse gases (FCs and HFCs) to encourage the transition to natural refrigerants (such as hydrocarbons). It was reported in 2010 that some refrigerants are being used as recreational drugs, leading to an extremely dangerous phenomenon known as inhalant abuse. From 2011 the European Union started to phase out refrigerants with a global warming potential (GWP) of more than 150 in automotive air conditioning (GWP = 100-year warming potential of one kilogram of a gas relative to one kilogram of CO2), such as the refrigerant HFC-134a (known as R-134a in North America), which has a GWP of 1526. In the same year the EPA decided in favour of the ozone- and climate-safe refrigerant for U.S. manufacture. A 2018 study by the nonprofit organization "Drawdown" put proper refrigerant management and disposal at the very top of the list of climate impact solutions, with an impact equivalent to eliminating over 17 years of US carbon dioxide emissions. In 2019 it was estimated that CFCs, HCFCs, and HFCs were responsible for about 10% of direct radiative forcing from all long-lived anthropogenic greenhouse gases. In the same year the UNEP published new voluntary guidelines; however, many countries have not yet ratified the Kigali Amendment. From early 2020 HFCs (including R-404a, R-134a and R-410a) are being superseded: residential air-conditioning systems and heat pumps are increasingly using R-32. This still has a GWP of more than 600.
Progressive devices use refrigerants with almost no climate impact, namely R-290 (propane), R-600 (isobutane) or R-1234yf (less flammable, in cars). In commercial refrigeration, CO2 (R-744) can also be used. Requirements and desirable properties A refrigerant needs to have: a boiling point that is somewhat below the target temperature (although boiling point can be adjusted by adjusting the pressure appropriately), a high heat of vaporization, a moderate density in liquid form, a relatively high density in gaseous form (which can also be adjusted by setting pressure appropriately), and a high critical temperature. Extremely high pressures should be avoided. The ideal refrigerant would be: non-corrosive, non-toxic, non-flammable, and with no ozone depletion or global warming potential. It should preferably be natural, with well-studied and low environmental impact. Newer refrigerants address the issue of the damage that CFCs caused to the ozone layer and the contribution that HCFCs make to climate change, but some do raise issues relating to toxicity and/or flammability. Common refrigerants Refrigerants with very low climate impact With increasing regulations, refrigerants with a very low global warming potential are expected to play a dominant role in the 21st century, in particular R-290 and R-1234yf. Starting from almost no market share in 2018, low-GWP devices were gaining market share in 2022. Most used Banned / Phased out Other Refrigerant reclamation and disposal Coolants and refrigerants are found throughout the industrialized world, in homes, offices, and factories, in devices such as refrigerators, air conditioners, central air conditioning systems (HVAC), freezers, and dehumidifiers. When these units are serviced, there is a risk that refrigerant gas will be vented into the atmosphere either accidentally or intentionally, hence the creation of technician training and certification programs in order to ensure that the material is conserved and managed safely. Mistreatment of these gases has been shown to deplete the ozone layer and is suspected to contribute to global warming. With the exception of isobutane and propane (R600a, R441a and R290), ammonia and CO2, under Section 608 of the United States' Clean Air Act it is illegal to knowingly release any refrigerants into the atmosphere. Refrigerant reclamation is the act of processing refrigerant gas which has previously been used in some type of refrigeration loop such that it meets specifications for new refrigerant gas. In the United States, the Clean Air Act of 1990 requires that used refrigerant be processed by a certified reclaimer, which must be licensed by the United States Environmental Protection Agency (EPA), and the material must be recovered and delivered to the reclaimer by EPA-certified technicians. Classification of refrigerants Refrigerants may be divided into three classes according to their manner of absorption or extraction of heat from the substances to be refrigerated: Class 1: This class includes refrigerants that cool by phase change (typically boiling), using the refrigerant's latent heat. Class 2: These refrigerants cool by temperature change or 'sensible heat', the quantity of heat being the specific heat capacity × the temperature change. They are air, calcium chloride brine, sodium chloride brine, alcohol, and similar nonfreezing solutions. The purpose of Class 2 refrigerants is to receive a reduction of temperature from Class 1 refrigerants and convey this lower temperature to the area to be cooled.
Class 3: This group consists of solutions that contain absorbed vapors of liquefiable agents or refrigerating media. These solutions function by nature of their ability to carry liquefiable vapors, which produce a cooling effect by the absorption of their heat of solution. They can also be classified into many categories. R numbering system The R- numbering system was developed by DuPont (which owned the Freon trademark), and systematically identifies the molecular structure of refrigerants made with a single halogenated hydrocarbon. ASHRAE has since set guidelines for the numbering system as follows:R-X1X2X3X4 X1 = Number of unsaturated carbon-carbon bonds (omit if zero) X2 = Number of carbon atoms minus 1 (omit if zero) X3 = Number of hydrogen atoms plus 1 X4 = Number of fluorine atoms Series R-xx Methane Series R-1xx Ethane Series R-2xx Propane Series R-4xx Zeotropic blend R-5xx Azeotropic blend R-6xx Saturated hydrocarbons (except for propane which is R-290) R-7xx Inorganic Compounds with a molar mass < 100 R-7xxx Inorganic Compounds with a molar mass ≥ 100 Ethane Derived Chains Number Only Most symmetrical isomer Lower Case Suffix (a,b,c,etc.) indicates increasingly unsymmetrical isomers Propane Derived Chains Number Only Most symmetrical isomer 2nd Lower Case Suffix (a,b,c,etc.) Indicates increasingly unsymmetrical isomers a Suffix Cl2 central carbon substitution b Suffix Cl2F central carbon substitution c Suffix F2 central carbon substitution d Suffix Cl, H central carbon substitution e Suffix F, H central carbon substitution f Suffix H2 central carbon substitution HFOs x Suffix Cl substitution on central atom y Suffix F substitution on central atom z Suffix H substitution on central atom a Suffix =CCl2 methylene substitution b Suffix =CClF methylene substitution c Suffix =CF2 methylene substitution d Suffix =CHCl methylene substitution e Suffix =CHF methylene substitution f Suffix =CH2 methylene substitution Blends Upper Case Suffix (A,B,C,etc.) Same blend with different compositions of refrigerants Miscellaneous R-Cxxx Cyclic compound R-Exxx Ether group is present R-CExxx Cyclic compound with an ether group R-4xx/5xx + Upper Case Suffix (A,B,C,etc.) Same blend with different composition of refrigerants R-6xx + Lower Case Letter Indicates increasingly unsymmetrical isomers 7xx/7xxx + Upper Case Letter Same molar mass, different compound R-xxxxB# Bromine is present with the number after B indicating how many bromine atoms R-xxxxI# Iodine is present with the number after I indicating how many iodine atoms R-xxx(E) Trans Molecule R-xxx(Z) Cis MoleculeFor example, R-134a has 2 carbon atoms, 2 hydrogen atoms, and 4 fluorine atoms, an empirical formula of tetrafluoroethane. The "a" suffix indicates that the isomer is unbalanced by one atom, giving 1,1,1,2-Tetrafluoroethane. R-134 (without the "a" suffix) would have a molecular structure of 1,1,2,2-Tetrafluoroethane. The same numbers are used with an R- prefix for generic refrigerants, with a "Propellant" prefix (e.g., "Propellant 12") for the same chemical used as a propellant for an aerosol spray, and with trade names for the compounds, such as "Freon 12". Recently, a practice of using abbreviations HFC- for hydrofluorocarbons, CFC- for chlorofluorocarbons, and HCFC- for hydrochlorofluorocarbons has arisen, because of the regulatory differences among these groups. 
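As a concrete illustration of the base numbering rule described above, the following sketch derives the numeric part of an R-number from counts of atoms and unsaturated bonds. It handles only the plain methane-, ethane- and propane-series rule (not the isomer suffixes, blend series, or other prefixes); the function and the examples are illustrative assumptions rather than a complete implementation of the ASHRAE designation scheme.

```python
def r_number(carbons, hydrogens, fluorines, unsaturated_bonds=0):
    """Numeric part of an ASHRAE-style R-number for a single halocarbon.

    Digits are: unsaturated C=C bonds (omitted if zero), carbons minus 1
    (omitted if zero), hydrogens plus 1, and fluorines.
    """
    digits = [unsaturated_bonds, carbons - 1, hydrogens + 1, fluorines]
    # Leading zeros are dropped, so methane-series numbers keep only two digits.
    while len(digits) > 2 and digits[0] == 0:
        digits.pop(0)
    return "R-" + "".join(str(d) for d in digits)

print(r_number(carbons=2, hydrogens=2, fluorines=4))                       # R-134 (tetrafluoroethane; "a" suffix not handled)
print(r_number(carbons=1, hydrogens=0, fluorines=2))                       # R-12 (dichlorodifluoromethane)
print(r_number(carbons=3, hydrogens=2, fluorines=4, unsaturated_bonds=1))  # R-1234 (e.g. R-1234yf, suffix not handled)
```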
Refrigerant safety ASHRAE Standard 34, Designation and Safety Classification of Refrigerants, assigns safety classifications to refrigerants based upon toxicity and flammability. Using safety information provided by producers, ASHRAE assigns a capital letter to indicate toxicity and a number to indicate flammability. The letter "A" is the least toxic and the number 1 is the least flammable. See also Brine (Refrigerant) Section 608 List of Refrigerants References Sources IPCC reports IPCC (2013). Stocker, T. F.; Qin, D.; Plattner, G.-K.; Tignor, M.; et al. (eds.). Climate Change 2013: The Physical Science Basis (PDF). Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press. ISBN 978-1-107-05799-9. (pb: 978-1-107-66182-0). Fifth Assessment Report - Climate Change 2013 Myhre, G.; Shindell, D.; Bréon, F.-M.; Collins, W.; et al. (2013). "Chapter 8: Anthropogenic and Natural Radiative Forcing" (PDF). Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. pp. 659–740. IPCC (2021). Masson-Delmotte, V.; Zhai, P.; Pirani, A.; Connors, S. L.; et al. (eds.). Climate Change 2021: The Physical Science Basis (PDF). Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press (In Press). Forster, Piers; Storelvmo, Trude (2021). "Chapter 7: The Earth's Energy Budget, Climate Feedbacks, and Climate Sensitivity" (PDF). IPCC AR6 WG1 2021. Other "High GWP refrigerants". California Air Resources Board. Retrieved 13 February 2022. "BSRIA's view on refrigerant trends in AC and Heat Pump segments". 2020. Retrieved 2022-02-14. Yadav, Saurabh; Liu, Jie; Kim, Sung Chul (2022). "A comprehensive study on 21st-century refrigerants - R290 and R1234yf: A review". International Journal of Heat and Mass Transfer. 122: 121947. doi:10.1016/j.ijheatmasstransfer.2021.121947. S2CID 240534198. External links US Environmental Protection Agency page on the GWPs of various substances Green Cooling Initiative on alternative natural refrigerants cooling technologies International Institute of Refrigeration Archived 2018-09-25 at the Wayback Machine
ocean heat content
Ocean heat content (OHC) is the energy absorbed and stored by oceans. To calculate the ocean heat content, it is necessary to measure ocean temperature at many different locations and depths. Integrating the areal density of ocean heat over an ocean basin or entire ocean gives the total ocean heat content. Between 1971 and 2018, the rise in ocean heat content accounted for over 90% of Earth's excess thermal energy from global heating. The main driver of this increase was anthropogenic forcing via rising greenhouse gas emissions. By 2020, about one third of the added energy had propagated to depths below 700 meters. In 2022, the world's oceans were again the hottest in the historical record and exceeded the previous 2021 record maximum. The four highest ocean heat observations occurred in the period 2019–2022. The North Pacific, North Atlantic, the Mediterranean, and the Southern Ocean all recorded their highest heat observations for more than sixty years. Ocean heat content and sea level rise are important indicators of climate change. Ocean water absorbs solar energy efficiently. It has far greater heat capacity than atmospheric gases. As a result, the top few meters of the ocean contain more thermal energy than the entire Earth's atmosphere. Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Since 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in ocean heat content. Ocean heat content has been increasing at a steady or accelerating rate since at least 1990. The net rate of change in the upper 2000 meters from 2003 to 2018 was +0.58±0.08 W/m2 (or an annual mean energy gain of 9.3 zettajoules). It is challenging to measure temperatures over decades with sufficient accuracy and spatial coverage, which gives rise to the uncertainty in the figures. Changes in ocean heat content have far-reaching consequences for the planet's marine and terrestrial ecosystems, including multiple impacts to coastal ecosystems and communities. Direct effects include variations in sea level and sea ice, shifts in intensity of the water cycle, and the migration and extinction of marine life. Calculations Definition Ocean heat content is "the total amount of heat stored by the oceans". To calculate the ocean heat content, measurements of ocean temperature at many different locations and depths are required. Integrating the areal density of ocean heat over an ocean basin, or entire ocean, gives the total ocean heat content. Thus, total ocean heat content is a volume integral of the product of temperature, density, and heat capacity over the three-dimensional region of the ocean for which data is available. The bulk of measurements have been performed at depths shallower than about 2000 m (1.25 miles). The areal density of ocean heat content between two depths is defined as the definite integral H = c_p \int_{h_2}^{h_1} \rho(z) T(z) \, dz, where c_p is the specific heat capacity of sea water, h_2 is the lower depth, h_1 is the upper depth, \rho(z) is the seawater density profile, and T(z) is the temperature profile. In SI units, H has units of joules per square metre (J·m−2). In practice, the integral can be approximated by summation of a smooth and otherwise well-behaved sequence of temperature and density data.
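As a sketch of the summation just mentioned, the following approximates the areal density of ocean heat content from a discrete temperature and density profile using a simple layer-by-layer (trapezoidal) sum. The constant heat capacity, the uniform density, and the idealized temperature profile are illustrative assumptions, not observational data.

```python
import numpy as np

def ohc_areal_density(depths_m, temps_c, densities_kg_m3, cp=3990.0):
    """Approximate H = cp * integral of rho(z) * T(z) dz between two depths.

    depths_m, temps_c, densities_kg_m3 : 1D arrays sampled at the same depths
    cp : specific heat capacity of sea water in J/(kg K) (nominal assumed value)
    Returns the areal heat-content density in J/m^2, relative to 0 degrees C.
    """
    integrand = densities_kg_m3 * temps_c
    layer_thickness = np.diff(depths_m)
    layer_mean = 0.5 * (integrand[1:] + integrand[:-1])   # trapezoidal average per layer
    return cp * np.sum(layer_mean * layer_thickness)

# Illustrative profile: 0-700 m, temperature decreasing linearly from 18 C to 5 C.
depths = np.linspace(0.0, 700.0, 71)        # one sample every 10 m
temps = 18.0 - 13.0 * depths / 700.0        # idealized temperature profile, deg C
rho = np.full_like(depths, 1025.0)          # nominal seawater density, kg/m^3

h = ohc_areal_density(depths, temps, rho)   # ~3.3e10 J/m^2 for this made-up profile
```

In practice the quantity of interest is usually the change in heat content relative to a reference climatology (an anomaly), computed from observed profiles rather than an idealized one.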
Seawater density is a function of temperature, salinity, and pressure. Despite the cold and great pressure at ocean depth, water is nearly incompressible and favors the liquid state, for which its density is maximized. Measurements of temperature versus ocean depth generally show an upper mixed layer (0–200 m), a thermocline (200–1500 m), and a deep ocean layer (>1500 m). These boundary depths are only rough approximations. Sunlight penetrates to a maximum depth of about 200 m; the top 80 m of this zone is the habitable zone for photosynthetic marine life, in an ocean that covers over 70% of Earth's surface. Wave action and other surface turbulence help to equalize temperatures throughout the upper layer. Unlike surface temperatures, which decrease with latitude, deep-ocean temperatures are relatively cold and uniform in most regions of the world. About 50% of all ocean volume is at depths below 3000 m (1.85 miles), with the Pacific Ocean being the largest and deepest of the five oceanic divisions. The thermocline is the transition between upper and deep layers in terms of temperature, nutrient flows, abundance of life, and other properties. It is semi-permanent in the tropics, variable in temperate regions (often deepest during the summer), and shallow to nonexistent in polar regions. Measurements Ocean heat content measurements come with difficulties, especially before the deployment of the Argo profiling floats. Due to poor spatial coverage and poor quality of data, it has not always been easy to distinguish between long-term global warming trends and climate variability. Examples of these complicating factors are the variations caused by El Niño–Southern Oscillation or the changes in ocean heat content caused by major volcanic eruptions. Argo is an international program of robotic profiling floats deployed globally since the start of the 21st century. The program's initial 3000 units had expanded to nearly 4000 units by 2020. At the start of each 10-day measurement cycle, a float descends to a depth of 1000 meters and drifts with the current there for nine days. It then descends to 2000 meters and measures temperature, salinity (conductivity), and depth (pressure) over a final day of ascent to the surface. At the surface the float transmits the depth profile and horizontal position data through satellite relays before repeating the cycle. Starting in 1992, the TOPEX/Poseidon and subsequent Jason satellite series have observed vertically integrated OHC, which is a major component of sea level rise. The partnership between Argo and Jason measurements has yielded ongoing improvements to estimates of OHC and other global ocean properties. Causes for heat uptake Ocean heat uptake accounts for over 90% of total planetary heat uptake, mainly as a consequence of human-caused changes to the composition of Earth's atmosphere. This high percentage arises because waters at and below the ocean surface - especially the turbulent upper mixed layer - exhibit a thermal inertia much larger than that of the planet's exposed continental crust, ice-covered polar regions, or atmospheric components themselves. A body with large thermal inertia stores a large amount of energy because of its volumetric heat capacity, and effectively transmits energy according to its heat transfer coefficient. Most extra energy that enters the planet via the atmosphere is thereby taken up and retained by the ocean. Planetary heat uptake or heat content accounts for all of the energy added to or removed from the climate system.
It can be computed as an accumulation over time of the observed differences (or imbalances) between total incoming and outgoing radiation. Changes to the imbalance have been estimated from Earth orbit by CERES and other remote instruments, and compared against in-situ surveys of heat inventory changes in oceans, land, ice and the atmosphere. Achieving complete and accurate results from either accounting method is challenging, but in different ways that are viewed by researchers as being mostly independent of each other. Recent increases in planetary heat content obtained from the two methods are in overall agreement and are thought to exceed measurement uncertainties. From the ocean perspective, the more abundant equatorial solar irradiance is directly absorbed by Earth's tropical surface waters and drives the overall poleward propagation of heat. The surface also exchanges energy that has been absorbed by the lower troposphere through wind and wave action. Over time, a sustained imbalance in Earth's energy budget enables a net flow of heat either into or out of greater ocean depth via thermal conduction, downwelling, and upwelling. Releases of OHC to the atmosphere occur primarily via evaporation and enable the planetary water cycle. Concentrated releases in association with high sea surface temperatures help drive tropical cyclones, atmospheric rivers, atmospheric heat waves and other extreme weather events that can penetrate far inland. These processes make the ocean Earth's largest thermal reservoir, which functions to regulate the planet's climate, acting as both a sink and a source of energy. From the perspective of land and ice covered regions, their portion of heat uptake is reduced and delayed by the dominant thermal inertia of the ocean. Although the average rise in land surface temperature has exceeded that of the ocean surface due to the lower inertia (smaller heat-transfer coefficient) of solid land and ice, temperatures would rise more rapidly, and by a greater amount, if the ocean were not absorbing most of the added heat. Measurements of how rapidly the heat mixes into the deep ocean have also been underway to better close the ocean and planetary energy budgets. The ocean also functions as a sink and source of carbon, with a role comparable to that of land regions in Earth's carbon cycle. In accordance with the temperature dependence of Henry's law, warming surface waters are less able to absorb atmospheric gases, including oxygen and the growing emissions of carbon dioxide and other greenhouse gases from human activity. Recent observations and changes Numerous independent studies in recent years have found a multi-decadal rise in OHC of upper ocean regions that has begun to penetrate to deeper regions. The upper ocean (0–700 m) has warmed since 1971, while it is very likely that warming has occurred at intermediate depths (700–2000 m) and likely that deep ocean (below 2000 m) temperatures have increased. The heat uptake results from a persistent warming imbalance in Earth's energy budget that is most fundamentally caused by the anthropogenic increase in atmospheric greenhouse gases. The rate at which the ocean absorbs anthropogenic carbon dioxide has approximately tripled from the early 1960s to the late 2010s, a scaling proportional to the increase in atmospheric carbon dioxide.
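The persistent energy imbalance described above can be translated into an annual planetary heat gain by multiplying it by Earth's surface area and the number of seconds in a year. As a rough consistency check against the figures quoted earlier in this article, and using standard approximations for the surface area and year length (values introduced here for illustration, not taken from the article):

\[
\Delta Q \;\approx\; 0.58\ \mathrm{W\,m^{-2}} \;\times\; 5.1\times 10^{14}\ \mathrm{m^{2}} \;\times\; 3.15\times 10^{7}\ \mathrm{s} \;\approx\; 9.3\times 10^{21}\ \mathrm{J} \;=\; 9.3\ \mathrm{ZJ},
\]

which is consistent with the annual mean energy gain of about 9.3 zettajoules cited above for 2003–2018.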
There is very high confidence that increased ocean heat content in response to anthropogenic carbon dioxide emissions is essentially irreversible on human time scales. Studies based on Argo measurements indicate that ocean surface winds, especially the subtropical trade winds in the Pacific Ocean, change the vertical distribution of ocean heat. This results in changes in ocean currents and an increase of the subtropical overturning, which is also related to the El Niño and La Niña phenomena. Depending on stochastic natural variability fluctuations, during La Niña years around 30% more heat from the upper ocean layer is transported into the deeper ocean. Furthermore, studies have shown that approximately one-third of the observed warming in the ocean is taking place in the 700–2000 meter ocean layer. Model studies indicate that ocean currents transport more heat into deeper layers during La Niña years, following changes in wind circulation. Years with increased ocean heat uptake have been associated with negative phases of the interdecadal Pacific oscillation (IPO). This is of particular interest to climate scientists who use the data to estimate the ocean heat uptake. The upper ocean heat content in most North Atlantic regions is dominated by heat transport convergence (a location where ocean currents meet), without large changes to the temperature and salinity relation. Additionally, a study from 2022 on anthropogenic warming in the ocean indicates that 62% of the warming between 1850 and 2018 in the North Atlantic along 25°N is kept in the water below 700 m, where a major share of the ocean's surplus heat is stored. Although the upper 2000 m of the oceans have experienced warming on average since the 1970s, the rate of ocean warming varies regionally, with the subpolar North Atlantic warming more slowly and the Southern Ocean taking up a disproportionately large amount of heat due to anthropogenic greenhouse gas emissions. Deep-ocean warming below 2000 m has been largest in the Southern Ocean compared to other ocean basins. Impacts Warming oceans are one reason for coral bleaching and contribute to the migration of marine species. Marine heat waves are regions of life-threatening and persistently elevated water temperatures. Redistribution of the planet's internal energy by atmospheric circulation and ocean currents produces internal climate variability, often in the form of irregular oscillations, and helps to sustain the global thermohaline circulation. The increase in OHC accounts for 30–40% of global sea-level rise from 1900 to 2020 because of thermal expansion. It is also an accelerator of sea ice, iceberg, and tidewater glacier melting. The ice loss reduces polar albedo, amplifying both the regional and global energy imbalances. The resulting ice retreat has been rapid and widespread for Arctic sea ice, and within northern fjords such as those of Greenland and Canada. Impacts on Antarctic sea ice and the vast Antarctic ice shelves which terminate into the Southern Ocean have varied by region and are also increasing due to warming waters.
Breakup of the Thwaites Ice Shelf and its West Antarctica neighbors contributed about 10% of sea-level rise in 2020. A study in 2015 concluded that ocean heat content increases in the Pacific Ocean were compensated by an abrupt redistribution of OHC into the Indian Ocean. Warming of the deep ocean has the further potential to melt and release some of the vast store of frozen methane hydrate deposits that have naturally accumulated there.
See also
Ocean acidification
Ocean reanalysis
Ocean stratification
Special Report on the Ocean and Cryosphere in a Changing Climate
Tropical cyclones and climate change
References
External links
NOAA Global Ocean Heat and Salt Content
the real global warming disaster
The Real Global Warming Disaster (Is the Obsession with 'Climate Change' Turning Out to Be the Most Costly Scientific Blunder in History?) is a 2009 book by English journalist and author Christopher Booker in which he asserts that global warming cannot be attributed to humans, and challenges how the scientific opinion on climate change was formulated. From a standpoint of environmental scepticism, Booker seeks to combine an analysis of the science of global warming with the consequences of political decisions to reduce CO2 emissions, and claims that, as governments prepare to make radical changes in energy policies, the scientific evidence for global warming is becoming increasingly challenged. He asserts that global warming is not supported by a significant number of climate scientists, and criticises how the UN's Intergovernmental Panel on Climate Change (IPCC) presents evidence and data, in particular citing its reliance on potentially inaccurate global climate models to make temperature projections. Booker concludes, "it begins to look very possible that the nightmare vision of our planet being doomed" may be imaginary, and that, if so, "it will turn out to be one of the most expensive, destructive, and foolish mistakes the human race has ever made". The book's claims were strongly criticised by science writer Philip Ball, but the book was praised by several columnists. The book opens with an erroneous quotation, which Booker subsequently acknowledged and promised to correct in future editions. The book was Amazon UK's fourth bestselling environment book of the decade 2000–10. Synopsis The book consists of three parts and an epilogue. Booker sums up the book's contents in a long epilogue, which quotes Theseus in A Midsummer Night's Dream: In the night, imagining some fear, How easy is a bush supposed a bear Booker contends that in this quote Shakespeare is identifying that "when we are not presented with enough information for our minds to resolve something into certainty, they may be teased into exaggerating it into something quite different from what it really is". The first chapter of the book is the introduction, where Booker warns of the risk posed by 'those measures being proposed by the world's politicians in the hope that they can avert' climate change. It discusses the ineffectiveness of wind turbines, and how they produce the same amount of energy per year as one coal-fired power station (3.9 gigawatts). In the prologue, Booker claims that many people, including former U.S. President Barack Obama, were 'seriously misinformed' about the evidence surrounding global warming and the effects it might have on the world. Part I of the book tells how climate change has risen to 'the top of the World's political agenda' so quickly, and the methods he thinks the Intergovernmental Panel on Climate Change used to convince politicians that the issue was genuine, including James Hansen's famous hearing before the American Senate, where he allegedly turned the room temperature up in order to strengthen his point. Part II of the book is entitled 'Gore and the EU unite to Save the Planet'. It depicts how, panic-stricken, the world's politicians took action to encourage more renewable forms of energy and the closure of the world's non-renewable power stations. Finally, Part III describes how the global warming 'consensus begins (began) to crumble.'
It claims that the evidence behind Climate Change and its causes is coming under increased scrutiny, and that by no means is the science "settled". Reception The book received a positive reception from non-scientists in the media: In The Spectator, Rodney Leach said it was one of the best of its type, remarking that Booker "narrates this story with the journalist's pace and eye for telling detail and the historian's forensic thoroughness which have made him a formidable opponent of humbug". Columnist James Delingpole, himself the author of a denialist book, described the book as "another of those classics which any even vaguely intelligent person who wants to know what's really going on needs to read". Writing in The Herald, Brian Morton was largely sympathetic to the position taken by Booker in the book, attributing global warming to natural causes. A positive review by Henry Kelly in The Irish Times, referring to the book as "meticulously researched, provocative and challenging", was criticised by Irish environmental campaigner and climatechange.ie website founder John Gibbons, who said that the decision by The Irish Times to allow Kelly to review The Real Global Warming Disaster was part of a recent trend of "the media giving too much coverage to 'anti-science' climate change deniers and failing to convey the gravity of the threat, making readers and viewers apathetic". In The Scotsman, writer and environmentalist Sir John Lister-Kaye chose The Real Global Warming Disaster as one of his books of the year, writing that "though barely credible in places" this was an "important, brave book making and explaining many valid points".Scientist Philip Ball, on the other hand, wrote in his review in The Observer that the book was "the definitive climate sceptics’ manual" in that it makes an uncritical presentation of "just about every criticism ever made of the majority scientific view" on global warming. Though expressing "a queer kind of admiration for the skill and energy with which Booker has assembled his polemic", Ball called the claims made by the author "bunk". Ball also criticised Booker's tactic of introducing global warming deniers "with a little eulogy to their credentials, while their opponents receive only a perfunctory, if not disparaging, preamble". Houghton misquotation The book opens with an incorrect quotation which wrongly attributes to John T. Houghton the words "Unless we announce disasters, no one will listen". The publishers apologised for this misquotation, confirmed that it would not be repeated, and agreed to place a corrigendum in any further copies of the book. In an article which appeared in The Sunday Telegraph on 20 February 2010, Booker wrote "we shall all in due course take steps to correct the record, as I shall do in the next edition of my book". Houghton felt that Booker continued to misstate his position regarding the role of disasters in policy making, and he referred the matter to the Press Complaints Commission (PCC Reference 101959), following whose involvement The Sunday Telegraph published on 15 August a letter of correction by Houghton stating his actual position, that adverse events shock people and thereby bring about change. An article supportive of Houghton appeared in the New Scientist magazine. See also Bibliography Booker, Christopher (2009). The Real Global Warming Disaster. Continuum International Publishing Group Ltd. ISBN 978-1-4411-1052-7. Notes Further reading Booker, Christopher; North, Richard (2007). 
Scared To Death: From BSE To Global Warming, Why Scares Are Costing Us The Earth. Continuum International Publishing Group Ltd. ISBN 978-0-8264-8614-1. Carter, Robert; Spooner, John (2013). Taxing air: Facts and fallacies of climate change. Kelpie Press. ISBN 9781742983189. Montford, Andrew (2010). The Hockey Stick Illusion; Climategate and the Corruption of Science. Stacey International. p. 482. ISBN 978-1-906768-35-5.
greenwashing
Greenwashing (a compound word modeled on "whitewash"), also called "green sheen", is a form of advertising or marketing spin in which green PR and green marketing are deceptively used to persuade the public that an organization's products, aims and policies are environmentally friendly. Companies that intentionally take up greenwashing communication strategies often do so in order to distance themselves from their own environmental lapses or those of their suppliers. An example of greenwashing occurs when an organization spends significantly more resources on advertising being "green" than on environmentally sound practices. Greenwashing can range from changing the name or label of a product to evoke the natural environment (for example on a product containing harmful chemicals) to multimillion-dollar campaigns that portray highly-polluting energy companies as eco-friendly. Greenwashing covers up unsustainable corporate agendas and policies. Highly public accusations of greenwashing have contributed to the term's increasing use. Many corporations use greenwashing to improve public perception of their brands. Complex corporate structures can further obscure the big picture. Critics of the practice suggest the rise of greenwashing, paired with ineffective regulation, contributes to consumer skepticism of all green claims and diminishes the power of the consumer to drive companies toward greener manufacturing processes and business operations. Greenwashing has increased in recent years to meet consumer demand for environmentally-friendly goods and services. New regulations, laws, and guidelines by organizations such as the Committee of Advertising Practice are meant to discourage companies from using greenwashing to deceive consumers. Characteristics TerraChoice, an environmental consulting division of UL, described "seven sins of greenwashing" in 2007 to "help consumers identify products that made misleading environmental claims":
Hidden Trade-off: a claim that a product is "green" based on an unreasonably narrow set of attributes, without attention to other important environmental issues.
No Proof: a claim that cannot be substantiated by easily accessible information or by a reliable third-party certification.
Vagueness: a claim that is so poorly defined or broad that its real meaning is likely to be misunderstood by the consumer. "All-natural", for example, is not necessarily "green".
Worshiping False Labels: a claim that, through words or images, gives the impression of a third-party endorsement where none exists.
Irrelevance: a claim that may be truthful but which is unimportant or unhelpful to consumers seeking environmentally-preferable products.
Lesser of Two Evils: a claim that may be true within the product category, but that risks distracting consumers from the greater environmental impact of the category as a whole.
Fibbing: a claim that is simply false.
The organization noted that by 2010 approximately 95% of consumer products in the U.S. claiming to be green were discovered to commit at least one of these sins. History Keep America Beautiful was a campaign founded by beverage manufacturers and others in 1953. The campaign focused on recycling and littering, diverting attention away from corporate responsibility to protect the environment. The objective was to forestall the regulation of disposable containers such as the one established by Vermont. In the mid-1960s, the environmental movement gained momentum. This prompted many companies to create a new green image through advertising.
Jerry Mander, a former Madison Avenue advertising executive, called this new form of advertising "ecopornography". The first Earth Day was held on April 22, 1970. This encouraged many industries to advertise themselves as being friendly to the environment. Public utilities spent $300 million advertising themselves as clean green companies. This was eight times more than the money they spent on pollution reduction research.The term greenwashing was coined by New York environmentalist Jay Westerveld in a 1986 essay about the hotel industry's practice of placing notices in bedrooms promoting reuse of towels to "save the environment". He noted that often little or no effort toward reducing energy waste was made by these institutions, although towel reuse saved them laundry costs. He concluded that often the real objective was increased profit and labeled this and other profitable-but-ineffective "environmentally-conscientious" acts as greenwashing.In 1991, a study published in the Journal of Public Policy and Marketing (American Marketing Association) found that 58% of environmental ads had at least one deceptive claim. Another study found that 77% of people said the environmental reputation of a company affected whether they would buy its products. One-fourth of all household products marketed around Earth Day advertised themselves as being green and environmentally friendly. In 1998 the Federal Trade Commission created the "Green Guidelines", which defined terms used in environmental marketing. The following year the FTC found that the Nuclear Energy Institute's claims of being environmentally clean were not true. The FTC did nothing about the ads because they were out of the agency's jurisdiction. This caused the FTC to realize they needed new, clear, enforceable standards. In 1999 the word "greenwashing" was added to the Oxford English Dictionary.Days before the 1992 Earth Summit in Rio de Janeiro, Greenpeace released the Greenpeace Book on Greenwash, which described the corporate takeover of the UN conference and provided case studies of the contrast between corporate polluters and their rhetoric. An expanded version of that report was published by Third World Network as "Greenwash: The Reality Behind Corporate Environmentalism." In 2002, during the World Summit on Sustainable Development in Johannesburg, the Greenwashing Academy hosted the Greenwash Academy Awards. The ceremony awarded companies like BP, ExxonMobil, and even the U.S. Government for their elaborate greenwashing ads and support for greenwashing. Examples Fashion industry Kimberly Clark's claim of "Pure and Natural" diapers in green packaging. The product uses organic cotton on the outside but uses the same petrochemical gel inside as before. Pampers also claims that "Dry Max" diapers reduce landfill by reducing the amount of paper fluff in the diaper, which really is a way for Pampers to save money. In January 2020 the Fur Free Alliance noted that the "WelFur" label, which advocated for animal welfare on fur farms, is run by the fur industry itself and is aimed at European fur farms. Clothing company H&M came under fire for greenwashing their manufacturing practices as a result of a report published by Quartz News. Food industry In 2009, McDonald's changed the color of its European logos from yellow-and-red to yellow-and-green; a spokesman explained that the change was "to clarify [their] responsibility for the preservation of natural resources". 
In October 2021 McDonald's was accused of greenwashing after announcing its pledge to reach net-zero emissions by 2050. In 2018, in response to increased calls to ban plastic straws, Starbucks introduced a lid with a built-in drinking straw that actually contained more plastic by weight than the old straw and lid together (though it can be recycled, unlike its predecessor). Automobile industry The UK Advertising Standards Authority upheld complaints against major vehicle manufacturers including Suzuki, SEAT, Toyota, and Lexus who made false claims about their vehicles. Volkswagen fitted their cars with a "defeat device" which activated only when a car's emissions were being tested, to reduce polluting emissions. In normal use, by contrast, the cars were emitting 40 times the allowed rate of nitrogen oxide. Forbes estimates that this scandal cost Volkswagen US$35.4 billion. In November 2020, Aston Martin, Bosch, and other brands were discovered to have funded a report which downplayed electric vehicles' environmental benefits with misleading information about the CO2 emissions produced during the manufacture of electric vehicles, in response to the UK announcing that it would ban the sale of vehicles with internal combustion engines from 2030. The greenwashing scandal became known as Astongate given the relationship between the British automotive manufacturer and Clarendon Communications, a shell company posing as a public relations agency which was set up to promote the report, and which was registered to James Michael Stephens – the Director of Global Government & Corporate Affairs at Aston Martin Lagonda Ltd. Oil Industry A 2010 advertising campaign by Chevron was described by the Rainforest Action Network, Amazon Watch, and The Yes Men as greenwash. A spoof campaign was launched to pre-empt Chevron's greenwashing. In 1985, the Chevron Corporation launched one of the most famous greenwashing ad campaigns. Chevron's "People Do" advertisements were aimed at a "hostile audience" of "societally conscious" people. Two years after the launch of the campaign, surveys found people in California trusted Chevron more than other oil companies to protect the environment. In the late 1980s The American Chemistry Council started a program called Responsible Care, which shone a light on the environmental performances and precautions of the group's members. The loose guidelines of responsible care caused industries to adopt self-regulation over government regulation. Political campaigns In 2010, environmentalists stated the Bush Administration's "Clear Skies Initiative" actually weakened air pollution laws. Similar laws were issued under President Macron of France as "simplifying ecology rules" that were criticized on similar grounds while still being referred to by his government as "ecology laws". "Clean Coal", an initiative adopted by several platforms for the 2008 U.S. presidential election, cited carbon capture and storage as a means of reducing carbon emissions by capturing and injecting carbon dioxide produced by coal power plants into layers of porous rock below the ground. According to Fred Pearce's Greenwash column in The Guardian, clean coal is the "ultimate climate change oxymoron… pure and utter greenwash". In 2017, Australia's then Treasurer Scott Morrison used "Clean Coal" as the basis to suggest clean energy subsidies be used to build new coal power plants. 
The renaming of "Tar Sands" to "Oil Sands", (Alberta, Canada) in corporate and political language reflects an ongoing debate between the project's adherents and opponents. This semantic shift can be seen as a case of greenwashing in an attempt at countering growing public concern about the environmental and health impacts of the industry. While advocates claim that the shift is scientifically derived to better reflect the use of the sands as a precursor to oil, environmental groups claim that it is simply a means of cloaking the issue behind friendlier terminology. In 2021, Saudi Arabian Crown Prince Mohammed bin Salman announced a tree planting campaign in the desert as part of the plan to reach carbon neutrality by 2060. The plan was criticised as a greenwashing attempt by some climate scientists. Some environmental activists and critics condemned the 2021 United Nations Climate Change Conference (COP26) as greenwashing. In May 2023, a Wikipedia user who identified themselves as an employee of ADNOC was alleged to suggest edits to the Wikipedia article of Sultan Al Jaber, president of COP28, which presented Al Jaber as a supporter of the climate movement. In June 2023, Dr Marc Owen Jones of Hamad Bin Khalifa University noted that a large number of apparent fake Twitter profiles were used to defend Al Jaber's COP28 presidency. Business slogans "Clean Burning Natural Gas" — When compared to the dirtiest fossil fuel, coal, natural gas is only 50% as dirty. Producing natural gas through fracking and/or distribution by a pipeline may lead to methane emissions into the atmosphere. Methane, the main component of natural gas, is a powerful greenhouse agent. Despite this, natural gas is often presented as a cleaner fossil fuel in environmental discourse. It is in practice used to balance the intermittent nature of solar and wind energy. It can be considered a useful "transitional technology" towards hydrogen as hydrogen can already be blended in and eventually be used to replace it, inside gas networks initially conceived for natural gas-use. First-generation biofuels are said to be better for the environment than fossil fuels, but some (such as palm oil) contribute to deforestation (which contributes to global warming due to release of CO2). Higher-generation biofuels do not have these particular issues, but have contributed significantly to deforestation and habitat destruction in Canada due to rising corn prices which make it economically worthwhile to clear-cut existing forests in agricultural areas. An article in Wired magazine highlighted slogans that suggest environmentally-benign business activity: the Comcast Ecobill has the slogan "PaperLESSisMORE", but Comcast uses large amounts of paper for direct marketing. The Poland Spring (from the American city of Poland) ecoshape bottle is touted as "A little natural does a lot of good", although 80% of beverage containers go to landfills. The Airbus A380 airliner is described as "A better environment inside and out" even though air travel has a high environmental cost. The multinational oil company formerly known as British Petroleum launched a rebranding campaign in 2000 revising the company's acronym as "Beyond Petroleum". The campaign included a revised green logo, advertisements, a solar-paneled gas station in Los Angeles, and clean energy rhetoric across media to strategically position itself as the 'greenest' global oil company. 
The campaign became the center of public controversy due to the company's hypocrisy around lobbying efforts that sought permission to drill in protected areas, and its negligent operating practices that led to severe oil spills, most notably the Prudhoe Bay pipeline rupture in 2006 and the Gulf of Mexico rig explosion in 2010. Psychological effects Greenwashing is a relatively new area of research within psychology and there is little consensus among studies on how greenwashing affects consumers and stakeholders. Because of the variance in country and geography in recently published studies, the discrepancies in consumer behavior between studies could be attributed to cultural or geographic differences. Effect on consumer perception Researchers found that products that are truly environmentally friendly are significantly more favored by consumers than their greenwashed counterparts. A survey by Lending Tree found that 55% of Americans are willing to spend more money on products they perceive to be more sustainable and eco-friendly. Consumer perceptions of greenwashing are also found to be mediated by the level of greenwashing they are exposed to. Other research suggests that few consumers actually notice greenwashing, particularly when they perceive the company or brand as reputable. When consumers perceive green advertising as credible, they develop more positive attitudes towards the brand, even when the advertising is greenwashed. Other research suggests that consumers with more green concern are better able to tell the difference between honest green marketing and greenwashed advertising; the greater the green concern, the stronger the intention not to purchase from companies from which they perceive greenwashing advertising behavior. When consumers use word-of-mouth to communicate about a product, green concern strengthens the negative relationship between the consumer's intent to purchase and the perception of greenwashing. Research suggests that consumers distrust companies that greenwash because they view the act as deceptive. If consumers perceive that a company would realistically benefit from a green marketing claim being true, then it is more likely that the claim and the company will be seen as genuine. Consumers' willingness to purchase green decreases when they perceive that the green attributes compromise product quality, making greenwashing potentially risky, even when the consumer or stakeholder is not skeptical of the green messaging. Words and phrases often used in green messaging and greenwashing, such as "gentle", can lead consumers to believe the green product is less effective than a non-green option. Attributions of greenwashing Eco-labels can be given to a product both from an external organization and by the company itself, which has raised concerns because companies can label a product green or environmentally friendly by selectively disclosing positive attributes of the product while not disclosing environmental harms. Consumers expect to see eco-labels from both internal and external sources but perceive labels from external sources to be more trustworthy. Researchers from the University of Twente found that uncertified or greenwashed internal eco-labels may still contribute to consumer perceptions of a responsible company, with consumers attributing internal motivation to a company's internal eco-labeling.
Other research connecting attribution theory and greenwashing found that consumers often perceive green advertising as greenwashing when companies use green advertisements, attributing the green messaging to corporate self-interest. Green advertising can backfire, particularly when the advertised environmental claim does not match a company's actual environmental engagement. Implications for green business Researchers working with consumer perception, psychology, and greenwashing note that in order for companies to avoid the negative connotations and perceptions of greenwashing, companies should "walk the walk" when it comes to green advertising and green behavior. Green marketing, labeling, and advertising are found to be most effective when they match a company's actual environmental engagement. This is also mediated by the visibility of those environmental engagements, meaning that if consumers are unaware of a company's commitment to sustainability or environmentally-conscious ethos, they cannot factor greenness into their assessment of the company or product. Significant exposure to greenwashing can make a consumer indifferent to, or generate negative feelings toward, green marketing. Genuinely green businesses then have to work harder to differentiate themselves from those who use false claims. Consumers may also react negatively to true sustainability claims because of negative experiences with greenwashing. Deterrence Companies may pursue environmental certification to avoid greenwashing through independent verification of their green claims. For example, the Carbon Trust Standard launched in 2007 with the stated aim "to end 'greenwash' and highlight firms that are genuine about their commitment to the environment". There have been attempts to reduce the impact of greenwashing by exposing it to the public. The Greenwashing Index, created by the University of Oregon in partnership with EnviroMedia Social Marketing, allowed the public to upload and rate examples of greenwashing, but it was last updated in 2012. Research published in the Journal of Business Ethics in 2011 shows that sustainability ratings might deter greenwashing. The results concluded that higher sustainability ratings lead to significantly higher brand reputation compared to lower sustainability ratings. This same trend was found regardless of the company's level of corporate social responsibility (CSR) communications. This finding establishes that consumers pay more attention to sustainability ratings than to CSR communications or greenwashing claims. The World Federation of Advertisers released six new guidelines for advertisers in 2022 that aim to prevent greenwashing. These approaches encourage credible environmental claims and more sustainable outcomes. Regulation Worldwide regulations on misleading environmental claims vary from criminal liability to fines or voluntary guidelines. Australia The Australian Trade Practices Act punishes companies that provide misleading environmental claims. Any organization found guilty of this could face up to A$6 million in fines. In addition, the guilty party must pay for all expenses incurred while setting the record straight about their product's or company's actual environmental impact. Canada Canada's Competition Bureau, along with the Canadian Standards Association, discourages companies from making "vague claims" about their products' environmental impact. Any claims must be backed up by "readily available data".
European Union The European Anti-Fraud Office (OLAF) handles investigations that have an environmental or sustainability element, such as misspending of EU funds intended for green products and the counterfeiting and smuggling of products with the potential to harm the environment and health. It also handles illegal logging and smuggling of precious wood and timber into the EU (wood laundering).In January 2021, the European Commission, in cooperation with national consumer protection authorities, published a report on its annual survey of consumer websites investigated for violations of EU consumer protection law. The study examined green claims across a wide range of consumer products, concluding that for 42 percent of the websites examined, the claims were likely false and misleading and could well constitute actionable claims for unfair commercial practices.The European Union has also struck a provisional agreement to mandate new reporting rules for companies with over 250 staff and a turnover of 40 million euros. They will need to disclose environmental, social, and governance (ESG) information, which will help combat greenwashing. These requirements go into effect in 2024. Norway Norway's consumer ombudsman has targeted automakers who claim their cars are "green", "clean", or "environmentally friendly" with some of the world's strictest advertising guidelines. Consumer Ombudsman official Bente Øverli said: "Cars cannot do anything good for the environment except less damage than others." Manufacturers risk fines if they fail to drop misleading advertisements. Øverli said she did not know of other countries going so far in cracking down on cars and the environment. Thailand The Green Leaf Certification is an evaluation method created by the Association of Southeast Asian Nations (ASEAN) as a metric that rates the hotels' environmental efficiency of environmental protection. In Thailand, this certification is believed to help regulate greenwashing phenomena associated with green hotels. Eco hotel or "green hotel" are hotels that have adopted sustainable environmentally-friendly practices in hospitality business operations. Since the development of the tourism industry in the ASEAN, Thailand superseded its neighboring countries in inbound tourism, with 9 percent of Thailand's direct GDP contributions coming from the travel and tourism industry in 2015. Because of the growth and reliance on tourism as an economic pillar, Thailand developed "responsible tourism" in the 1990s to promote the well-being of local communities and the environment affected by the industry. However, studies show the green hotel companies' principles and environmental perceptions contradict the basis of corporate social responsibilities in responsible tourism. Against this context, the issuance of the Green Leaf Certification aims at keeping the hotel industry and supply chains accountable for corporate social responsibilities in regard to sustainability by having an independent international organization evaluate a hotel and rate it one through five leaves. United Kingdom The Competition and Markets Authority is the UK's primary competition and consumer authority. In September 2021, it published a Green Claims Code intended to protect consumers from misleading environmental claims and to protect businesses from unfair competition. United States The Federal Trade Commission (FTC) provides voluntary guidelines for environmental marketing claims. 
These guidelines give the FTC the right to prosecute false and misleading claims. These guidelines are not enforceable but instead were intended to be followed voluntarily: Qualifications and disclosures: The Commission traditionally has held that in order to be effective, any qualifications or disclosures such as those described in these guides should be sufficiently clear, prominent, and understandable to prevent deception. Clarity of language, relative type size and proximity to the claim being qualified, and an absence of contrary claims that could undercut effectiveness, will maximize the likelihood that the qualifications and disclosures are appropriately clear and prominent. Distinction between benefits of product, package, and service: An environmental marketing claim should be presented in a way that makes clear whether the environmental attribute or benefit being asserted refers to the product, the product's packaging, a service, or to a portion or component of the product, package or service. In general, if the environmental attribute or benefit applies to all but minor, incidental components of a product or package, the claim need not be qualified to identify that fact. There may be exceptions to this general principle. For example, if an unqualified "recyclable" claim is made and the presence of the incidental component significantly limits the ability to recycle the product, then the claim would be deceptive. Overstatement of environmental attribute: An environmental marketing claim should not be presented in a manner that overstates the environmental attribute or benefit, expressly or by implication. Marketers should avoid implications of significant environmental benefits if the benefit is in fact negligible. Comparative claims: Environmental marketing claims that include a comparative statement should be presented in a manner that makes the basis for the comparison sufficiently clear to avoid consumer deception. In addition, the advertiser should be able to substantiate the comparison. The FTC said in 2010 that it will update its guidelines for environmental marketing claims in an attempt to reduce greenwashing. The revision to the FTC's Green Guides covers a wide range of public input, including hundreds of consumer and industry comments on previously proposed revisions, offering clear guidance on what constitutes misleading information and demanding clear factual evidence.According to FTC Chairman Jon Leibowitz, "The introduction of environmentally-friendly products into the marketplace is a win for consumers who want to purchase greener products and producers who want to sell them." Leibowitz also says such a win-win can only operate if marketers' claims are straightforward and proven.In 2013 the FTC began enforcing these revisions. It cracked down on six different companies; five of the cases concerned false or misleading advertising surrounding the biodegradability of plastics. The FTC charged ECM Biofilms, American Plastic Manufacturing, CHAMP, Clear Choice Housewares, and Carnie Cap, for misrepresenting the biodegradability of their plastics treated with additives.The FTC charged a sixth company, AJM Packaging Corporation, with violating a commission consent order put in place that prohibits companies from using advertising claims based on the product or packaging being "degradable, biodegradable, or photodegradable" without reliable scientific information. 
The FTC now requires companies to disclose and provide the information that qualifies their environmental claims, to ensure transparency. China The issue of green marketing and consumerism in China has gained significant attention as the country faces environmental challenges. According to "Green Marketing and Consumerism in China: Analyzing the Literature" authored by Qingyun Zhu and Joseph Sarkis, China has been implementing environmental protection laws to regulate the business and commercial sector. Regulations such as the Environmental Protection Law and the Circular Economy Promotion Law which contains provisions that prohibit false advertising (known as greenwashing). The Chinese government has issued regulations and standards to regulate green advertising and labeling, including the Guidelines for Green Advertising Certification, the Guidelines for Environmental Labeling and Eco-Product Certification, and the Standards for Environmental Protection Product Declaration. These guidelines promote transparency in green marketing and prevent false or misleading claims. The Guidelines for Green Advertising Certification require that green advertising claims should be truthful, accurate, and verifiable. These guidelines and certifications require that eco-labels should be based on scientific and technical evidence, and should not contain false or misleading information. The standards also require that eco-labels should be easy to understand and should not confuse or deceive consumers. The regulations that are set in place for greenwashing, green advertising, and labeling in China are designed to protect consumers and prevent misleading claims. The issues of the climate crisis, sustainability, and greenwashing in China remains a critical issue and requires ongoing attention. The implementation of regulations and guidelines for green advertising and labeling in China aims to promote transparency and prevent false or misleading claims. In efforts to stop this practice, in November 2016 the General Office of the State Council introduced legislation to promote the development of green products, encourage companies to adopt sustainable practices, and mentioned the need for a unified standard for what was to be labeled green. This was a general plan or opinion on the matter, with no specifics on its implementation, however with similarly worded legislation and plans out at that time there was a push toward a unified green product standard. Until then green products had various standards and guidelines developed by different government agencies or industry associations, resulting in a lack of consistency and coherence. One example of guidelines set at the time was from the Ministry of Environmental Protection of China (now known as the Ministry of Ecology and Environment), they issued specifications in 2000, but these guidelines were limited and not widely recognized by industry or consumers. It wasn’t until 2017 with the launch of GB/T (a set of national standards and recommendations) that a widespread guideline was set for what would constitute green manufacturing and a green supply chain. Expanding on these guidelines in 2019 the State Administration for Market Regulation (SAMR) created regulations for Green Product Labels, which are symbols used on products to mark that they meet certain environmentally friendly criteria, and their manufacturing process has been verified by certification agencies. 
The standards and coverage for green products have been increasing as time goes, with changes and improvements to green product standardization still occurring in 2023.In China, the Greenpeace Campaign focuses on the pain point of air pollution. The campaign aims to address the severe air pollution problem prevalent in many Chinese communities. The campaign has been working to raise awareness about the health and environmental impacts of air pollution, advocate for stronger government policies and regulations to reduce emissions, and encourage a shift toward clean and renewable energy sources. "From 2011 to 2016, we linked global fast fashion brands to toxic chemical pollution in China through their manufacturers. Many multinational companies and local suppliers stopped using toxic and harmful chemicals. They included Adidas, Benetton, Burberry, Esprit, H&M, Puma, and Zara, among others." The Greenpeace Campaign in China has involved various activities, including scientific research, public education, and advocacy efforts. The campaign has organized public awareness events to engage both consumers and policymakers urging them to take action to improve air quality. "In recent years Chinese President Xi Jinping has committed to controlling the expansion of coal power plants. He has also pledged to stop building new coal power abroad". The campaign seeks to drive public and government interest towards more strict air pollution control measures, promoting more clean energy technology, and contributing the health, wellness, and sustainability in China. Particularly though, the health of Chinese citizens is at the front of this issue being that air pollution has been a critical issue in the nation. The article emphasizes that China has made it a priority to put people front and center on environmental issues. China’s Greenpeace campaigns, and other countries, are a part of their global efforts to address environmental challenges and promote sustainability. Related terms "Bluewashing" is a term that describes deceptive marketing that overstates a company's commitment to responsible social practices. It focuses mainly on economic and community factors. Carbon emission trading can be similar to greenwashing in that it gives an environmentally-friendly impression, but can be counterproductive if carbon is priced too low, or if large emitters are given "free credits". For example, Bank of America subsidiary MBNA offers "Eco-Logique" MasterCards that reward Canadian customers with carbon offsets when they use them. Customers may feel that they are nullifying their carbon footprint by purchasing goods with these, but only 0.5% of the purchase price goes to buy carbon offsets; the rest of the interchange fee still goes to the bank. Greenscamming Greenscamming describes an organization or product taking on a name that falsely implies environmental friendliness. It is related to both greenwashing and greenspeak. This is analogous to aggressive mimicry in biology.Greenscamming is used in particular by industrial companies and associations that deploy astroturfing organisations to try to dispute scientific findings that they consider threatening to their business model. One example is the denial of man-made global warming by companies in the fossil energy sector, also driven by specially-founded greenscamming organizations.One reason to establish greenscamming organizations is that it is difficult to openly communicate the benefits of activities that damage the environment. 
Sociologist Charles Harper stresses that it would be difficult to market a group called "Coalition to Trash the Environment for Profit". Anti-environment initiatives therefore must give their front organizations deliberately deceptive names if they want to be successful, as surveys show that environmental protection has a social consensus. However, there is a danger of being exposed as an anti-environmental initiative, which entails a considerable risk that the greenscamming activities backfire and are counterproductive for the initiators.Greenscamming organizations are active in organized climate denial. An important financier of greenscamming organizations was the oil company ExxonMobil, which financially supported more than 100 climate denial organizations and spent about 20 million U.S. dollars on greenscamming groups. James Lawrence Powell identified the "admirable" designations of many of these organizations as the most striking common feature, which for the most part sounded very rational. He quotes a list of climate denial organizations drawn up by the Union of Concerned Scientists, which includes 43 organizations funded by Exxon. None had a name that would lead one to infer that climate change denial was their raison d'être. The list is headed by Africa Fighting Malaria, whose website features articles and commentaries opposing ambitious climate mitigation concepts, even though the dangers of malaria could be exacerbated by global warming. Examples Examples of greenscamming organizations include the National Wetlands Coalition, Friends of Eagle Mountain, The Sahara Club, The Alliance for Environment and Resources, The Abundant Wildlife Society of North America, the Global Climate Coalition, the National Wilderness Institute, the Environmental Policy Alliance of the Center for Organizational Research and Education, and the American Council on Science and Health. Behind these ostensible environmental protection organizations lie the interests of business sectors. For example, the National Wetlands Coalition is backed by oil drilling companies and real estate developers, while the Friends of Eagle Mountain is backed by a mining company that wants to convert open-cast mines into landfills. The Global Climate Coalition was backed by commercial enterprises that fought against government-imposed climate protection measures. Other Greenscam organizations include the U.S. Council for Energy Awareness, backed by the nuclear industry; the Wilderness Impact Research Foundation, representing the interests of lumberjacks and ranchers; and the American Environmental Foundation, representing the interests of landowners.Another Greenscam organization is the Northwesterners for More Fish, which had a budget of $2.6 million in 1998. This group opposed conservation measures for endangered fish that restricted the interests of energy companies, aluminum companies, and the timber industry in the region, as well as tried to discredit environmentalists who promoted fish habitats. The Center for the Study of Carbon Dioxide and Global Change, the National Environmental Policy Institute, and the Information Council on the Environment funded by the coal industry are also greenscamming organizations.In Germany, this form of mimicry or deception is used by the "European Institute for Climate and Energy" (EIKE), which suggests by its name that it is an important scientific research institution. 
In fact, EIKE is not a scientific institution at all, but a lobby organization that neither has an office nor employs climate scientists, and instead disseminates fake news on climate issues on its website.
See also
References
Further reading
Catherine, P. (n.d.). Eco-friendly labelling? It's a lot of 'greenwash'. Toronto Star (Canada). Retrieved from Newspaper Source database.
Clegg, Brian (2009). Ecologic: the truth and lies of green economics. London: Eden Project. ISBN 978-1-905811-25-0.
Dobin, D. (2009). "Greenwashing harms entire movement". Lodging Hospitality. 65 (14): 42.
Greer, Jed; Bruno, Kenny (1996). Greenwash: the reality behind corporate environmentalism. Penang, Malaysia: Third World Network. ISBN 983-9747-16-9.
Jenny, D. (n.d.). New reports put an end to greenwashing. Daily Telegraph, The (Sydney). Retrieved from Newspaper Source database.
Jonathan, L. (n.d.). Why 'greenwash' won't wash with consumers. Sunday Times, The. Retrieved from Newspaper Source database.
Lubbers, Eveline (2002). Battling big business: countering greenwash, infiltration, and other forms of corporate bullying. Monroe, Me: Common Courage Press. ISBN 1-56751-224-0.
Nelson, Robert H. (10 March 2004). "Environmental Religion: A Theological Critique" (PDF). Case Western Reserve Law Review. Social Science Research Network. 55: 51. SSRN 2211873. Retrieved 11 March 2021.
Priesnitz, W. (2008). "Greenwash: When the green is just veneer". Natural Life (121): 14–16 – via GreenFILE database.
Seele, Peter; Gatti, Lucia (2017). "Greenwashing Revisited: In Search of a Typology and Accusation-Based Definition Incorporating Legitimacy Strategies". Business Strategy and the Environment. 26 (2): 239–252. doi:10.1002/bse.1912. ISSN 1099-0836.
Tokar, Brian (1997). Earth for Sale: Reclaiming Ecology in the Age of Corporate Greenwash. Boston, MA: South End Press. ISBN 0896085589.
"Getting beyond greenwashing through standardization". Standards Council of Canada. 2023.
"Greenwashing culprits to be foiled ahead of business summit". European Environment & Packaging Law Weekly (159): 28. 2009. ISSN 1750-0087 – via GreenFILE database.
Greenscamming. The Encyclopedia of World Problems and Human Potential.
New rules aim to clamp down on corporate greenwashing. Reuters. June 26, 2023.
External links
Roberts Environmental Center - ratings of corporate sustainability claims
How Greenwashing Works at HowStuffWorks
Greenwashing in Popular Culture and Art
"What is Greenwashing, and Why is it a Problem?" Streaming audio of a 2011 radio program on the subject of Green Marketing/Greenwashing, from CBC Radio.
Green claims, European Commission.
global warming: what you need to know
Global Warming: What You Need to Know is a 2006 documentary about global warming (climate change), directed by Nicolas Brown and starring Tom Brokaw, James Hansen, Michael Oppenheimer, and Mark Serreze. The film focuses on the impacts of climate change, with Brokaw interviewing scientists. The documentary premiered on the Discovery Channel on 16 July 2006.
an inconvenient truth
An Inconvenient Truth is a 2006 American documentary film directed by Davis Guggenheim about former United States Vice President Al Gore's campaign to educate people about global warming. The film features a slide show that, by Gore's own estimate, he has presented over 1,000 times to audiences worldwide. The idea to document Gore's efforts came from producer Laurie David, who saw his presentation at a town hall meeting on global warming, which coincided with the opening of The Day After Tomorrow. Laurie David was so inspired by his slide show that she, with producer Lawrence Bender, met with Guggenheim to adapt the presentation into a film. Premiering at the 2006 Sundance Film Festival and opening in New York City and Los Angeles on May 24, 2006, the film was a critical and commercial success, winning two Academy Awards for Best Documentary Feature and Best Original Song. The film grossed $24 million in the U.S. and $26 million at the international box office, becoming the 11th-highest-grossing documentary film to date in the United States. Since the film's release, An Inconvenient Truth has been credited with raising international public awareness of global warming and reenergizing the environmental movement. The documentary has also been included in science curricula in schools around the world, which has spurred some controversy. A sequel to the film, titled An Inconvenient Sequel: Truth to Power, was released on July 28, 2017. Synopsis An Inconvenient Truth presents in film form an illustrated talk on climate by Al Gore, aimed at alerting the public to an increasing "planetary emergency" due to global warming, and shows re-enacted incidents from his life story which influenced his concerns about environmental issues. He began making these presentations in 1989 with flip chart illustrations; the film version uses a Keynote presentation, which Gore refers to as "the slide show". The former vice president opens the film by greeting an audience with his well-known line about his campaign in 2000: "I am Al Gore. I used to be the next President of the United States." He is shown using his laptop to edit his presentation, and pondering the difficulty he has had in awakening public concern: "I've been trying to tell this story for a long time and I feel as if I've failed to get the message across." Gore then begins his slide show on global warming: a comprehensive presentation replete with detailed graphs, flow charts and stark visuals. Gore shows several photographs of the Earth taken from multiple space missions, such as Earthrise and The Blue Marble. Gore notes that these photos dramatically transformed the way we see the Earth, helping spark modern environmentalism. Following this, Gore shares anecdotes that inspired his interest in the issue, including his college education with early climate expert Roger Revelle at Harvard University, his sister's death from lung cancer and his young son's near-fatal car accident. Gore recalls a story from his grade-school years, in which a fellow student, noting that the coastlines of South America and Africa appear to fit together, asked the geography teacher about continental drift; in response, the teacher called the concept the "most ridiculous thing [he'd] ever heard." Gore ties this attitude to the assumption that "the Earth is so big, we can't possibly have any lasting, harmful impact on the Earth's environment." For comic effect, Gore uses a clip from the Futurama episode "Crimes of the Hot" to describe the greenhouse effect.
Gore refers to his loss to George W. Bush in the 2000 United States presidential election as a "hard blow" yet one which subsequently "brought into clear focus, the mission [he] had been pursuing for all these years." Throughout the movie, Gore discusses the scientific consensus on global warming, as well as the present and future effects of global warming, and stresses that global warming "is really not a political issue, so much as a moral one," describing the consequences he believes global warming will produce if the amount of human-generated greenhouse gases is not significantly reduced in the very near future. Gore also presents Antarctic ice coring data showing CO2 levels higher now than in the past 650,000 years. The film includes segments intended to refute critics who say that global warming is unproven or that warming will be insignificant. For example, Gore cites the retreat of nearly all glaciers caused by melting over recent decades, showing nine cases, such as the Grinnell and Boulder Glaciers and Patagonia. He discusses the possibility of the collapse and melting of a major ice sheet in Greenland or in West Antarctica, either of which could raise global sea levels by approximately 20 feet (6 m), flooding coastal areas and producing 100 million refugees. Meltwater from Greenland, because of its lower salinity, could then halt the currents that keep northern Europe warm and quickly trigger dramatic local cooling there. The film also contains various short animated projections of what could happen to different animals more vulnerable to global warming. Call to action The documentary ends with Gore arguing that if appropriate actions are taken soon, the effects of global warming can be successfully reversed by releasing less CO2 and planting more vegetation to consume existing CO2. Gore calls upon his viewers to learn how they can help him in these efforts. Gore closes the film by saying: Each one of us is a cause of global warming, but each one of us can make choices to change that with the things we buy, the electricity we use, the cars we drive; we can make choices to bring our individual carbon emissions to zero. The solutions are in our hands, we just have to have the determination to make it happen. We have everything that we need to reduce carbon emissions, everything but political will. But in America, the will to act is a renewable resource. During the film's end credits, several calls to action pop up on screen suggesting things viewers can do at home to combat global warming, including "recycle", "speak up in your community", "try to buy a hybrid vehicle" and "encourage everyone you know to watch this movie." Background Origins Gore became interested in global warming when he took a course at Harvard University with Professor Roger Revelle, one of the first scientists to measure carbon dioxide in the atmosphere. Later, when Gore was in Congress, he initiated the first congressional hearing on the subject in 1981. Gore's 1992 book, Earth in the Balance, dealing with a number of environmental topics, reached the New York Times bestseller list. As Vice President during the Clinton Administration, Gore pushed for the implementation of a carbon tax to encourage energy efficiency and to diversify fuel choices so as to better reflect the true environmental costs of energy use; it was partially implemented in 1993. He helped broker the 1997 Kyoto Protocol, an international treaty designed to curb greenhouse gas emissions.
The treaty was not ratified in the United States after a 95 to 0 vote in the Senate. The primary objections stemmed from the exemptions the treaty gave to China and India, whose industrial base and carbon footprint have grown rapidly, and fears that the exemptions would lead to further trade imbalances and offshoring arrangements with those countries. Gore also supported the funding of the controversial and much-delayed satellite Triana, which would have provided an image of the Earth 24 hours a day over the internet and would have acted as a barometer measuring the process of global warming. During his 2000 presidential campaign, Gore ran, in part, on a pledge to ratify the Kyoto Protocol. The slide show Following his defeat in the 2000 presidential election by George W. Bush, Gore returned his focus to the topic. He edited and adapted a slide show he had compiled years earlier, and began featuring the slide show in presentations on global warming across the U.S. and around the world. At the time of the film, Gore estimated he had shown the presentation more than one thousand times. Producer Laurie David saw Gore's slide show in New York City at a global warming town-hall meeting after the May 27, 2004 premiere of The Day After Tomorrow. Gore was one of several panelists and he showed a ten-minute version of his slide show. I had never seen it before, and I was floored. As soon as the evening's program concluded, I asked him to let me present his full briefing to leaders and friends in New York and Los Angeles. I would do all the organizing if he would commit to the dates. Gore's presentation was the most powerful and clear explanation of global warming I had ever seen. And it became my mission to get everyone I knew to see it too. Inspired, David assembled a team, including producer Lawrence Bender and former president of eBay Jeffrey Skoll, who met with Gore about the possibility of making the slide show into a movie. It took some convincing. The slide show, she says, "was his baby, and he felt proprietary about it and it was hard for him to let go." David said the box office returns were not important to her, and that what was at stake was the planet, saying "none of us are going to make a dime." David and Bender later met with director Davis Guggenheim to have him direct the film adaptation of the slide show. Guggenheim, who was skeptical at first, later saw the presentation for himself, stating that he was "blown away," and "left after an hour and a half thinking that global warming [was] the most important issue ... I had no idea how you'd make a film out of it, but I wanted to try," he said. In 2004, Gore enlisted Duarte Design to condense and update his material and add video and animation. Ted Boda described the tools that went into designing the project: "Gore's presentation was in fact using Apple's Keynote presentation software (the same software Steve Jobs presents from) and did so for a number of reasons. As a designer for the presentation, Keynote was the first choice to help create such an engaging presentation." Initially reluctant about the film adaptation, Gore said that once he and the crew were into the production of the movie, the director, Guggenheim, earned his trust. I had seen enough to gain a tremendous respect for his skill and sensitivity.
And he said that one of the huge differences between a live stage performance and a movie is that when you're in the same room with a live person who's on stage speaking — even if it's me — there's an element of dramatic tension and human connection that keeps your attention. And in a movie, that element is just not present. He explained to me that you have to create that element on screen, by supplying a narrative thread that allows the audience to make a connection with one or more characters. He said, "You've got to be that character." So we talked about it, and as I say, by then he had earned such a high level of trust from me that he convinced me. Production When Bender first saw Gore's visual presentation he had concerns about connection with viewers, citing a "need to find a personal way in." In the string of interviews with Gore that followed, Gore himself felt like they "were making Kill Al Vol. 3". Bender had other concerns, including a "grueling" time frame that required the film to be done in "a very short period of time" despite the many filming locations planned. These spanned the United States and also included China. "It was a lot of travel in a very short period of time. And they had to get this thing edited and cut starting in January, and ready to screen in May. That's like a seriously tight schedule. So the logistics of pulling it off with a low budget were really difficult, and if there's one person who gets credit, it's Leslie Chilcott, because she really pulled it together." "Most of my movies take a year and a half, if not two and a half," Guggenheim said. "We all felt like we were on a mission from God just to make it as fast as we could. We just felt like it was urgent. The clock was ticking, and people had to see it." Title The producers struggled to find an effective title for the film. "We had a lot of really bad titles," Guggenheim recalled. "One was The Rising. I remember Al talking about whether he should give Bruce Springsteen a call, because he had an album out called The Rising. It had a great triple-entendre, because it was like the sea-level rising and the idea of people rising. So we got excited about that for a while." "There were also some really bad ones like Too Hot to Handle," he added. "Maybe that's not right, but it was something with 'hot,' ya know? We had a lot of hot puns." Guggenheim said that he asked Gore why climate change was "so hard for people to grasp," to which Gore replied, "Because it's an inconvenient truth, ya know." "[...] In the back of my head, I go, that's the title of our movie," Guggenheim said. Initially, the title was not a popular choice. Gore recalled saying "Nah, I don't think so" but Guggenheim "defended it vigorously against other titles." "People thought it was hard to say, people thought it wasn't fun, it wasn't sexy," Guggenheim remembered. "Days before we went to Sundance and had to decide, there was a large group of people who did not like the title." Technical aspects The majority of the movie shows Gore delivering his lecture to an audience at a relatively small theater in Los Angeles. Gore's presentation was delivered on a 70-foot (21 m) digital screen that Bender commissioned specifically for the movie. While the bulk of the film was shot on 4:4:4 HDCAM, according to director Guggenheim, a vast array of different film formats was used: "There's 35mm and 16mm. A lot of the stuff on the farm I just shot myself on 8mm film. We used four Sony F950 HDCAMs for the presentation.
We shot three different kinds of prosumer HD, both 30 and 24. There's MiniDV, there's 3200 black-and-white stills, there's digital stills, some of them emailed on the day they were taken from as far off as Greenland. There was three or four different types of animation. One of the animators is from New Zealand and emailed me his work. There's JPEG stuff." Guggenheim said that while it would have been a lot easier to use one format, it would not have had the same impact. "Each format has its own feel and texture and touch. For the storytelling of what Gore's memory was like of growing up on the farm, some of this 8mm stuff that I shot is very impressionistic. And for some of his memories of his son's accident, these grainy black-and-white stills ... have a feel that contrasted very beautifully with the crisp hi-def HD that we shot. Every format was used to its best potential. Some of the footage of Katrina has this blown-out video, where the chroma is just blasted, and it looks real muddy, but that too has its own kind of powerful, impactful feeling." Scientific basis The film lays out the scientific consensus that global warming is real, potentially catastrophic, and human-caused. Gore presents specific data to support this, including: the Keeling Curve, which depicts the long-term increase in atmospheric CO2 concentration as measured from the Mauna Loa Observatory; the retreat of numerous glaciers, shown with before-and-after photographs; a study by researchers at the Physics Institute of the University of Bern and the European Project for Ice Coring in Antarctica (EPICA) presenting data from Antarctic ice cores showing carbon dioxide concentrations higher than at any time during the past 650,000 years; and data from the atmospheric instrumental temperature record showing that the ten hottest years ever measured had all occurred in the previous fourteen years. He also cites a 2004 survey by Naomi Oreskes of 928 peer-reviewed scientific articles on global climate change published between 1993 and 2003. The survey, published as an editorial in the journal Science, found that every article either supported the human-caused global warming consensus or did not comment on it. Gore also presents a 2004 study by Max and Jules Boykoff showing that 53% of articles that appeared in major US newspapers over a fourteen-year period gave roughly equal attention to scientists who expressed views that global warming was caused by humans as they did to climate change deniers (many of them funded by carbon-based industry interests), creating a false balance. The Associated Press contacted more than 100 climate researchers and questioned them about the film's veracity. All 19 climate scientists who had seen the movie or had read the homonymous book said that Gore accurately conveyed the science, with few errors. William H. Schlesinger, dean of the Nicholas School of Environment and Earth Sciences at Duke University, said "[Gore] got all the important material and got it right." Robert Corell, chairman of the Arctic Climate Impact Assessment, was also impressed. "I sat there and I'm amazed at how thorough and accurate. After the presentation I said, 'Al, I'm absolutely blown away. There's a lot of details you could get wrong.'...I could find no error." Michael Shermer, scientific author and founder of The Skeptics Society, wrote in Scientific American that Gore's slide show "shocked me out of my doubting stance."
Eric Steig, a climate scientist writing on RealClimate, lauded the film's science as "remarkably up to date, with reference to some of the very latest research." Ted Scambos, lead scientist from the National Snow and Ice Data Center, said the film "does an excellent job of outlining the science behind global warming and the challenges society faces in the coming century because of it." One concern among scientists was the film's treatment of the connection between hurricanes and global warming, which at the time was contentious in the scientific community. Gore cited five recent scientific studies to support his view. "I thought the use of imagery from Hurricane Katrina was inappropriate and unnecessary in this regard, as there are plenty of disturbing impacts associated with global warming for which there is much greater scientific consensus," said Brian Soden, professor of meteorology and oceanography at the University of Miami. Gavin Schmidt, climate modeler for NASA, thought Gore appropriately addressed the issue. "Gore talked about 2005 and 2004 being very strong seasons, and if you weren't paying attention, you could be left with the impression that there was a direct cause and effect, but he was very careful to not say there's a direct correlation," Schmidt said. "There is a difference between saying 'we are confident that they will increase' and 'we are confident that they have increased due to this effect,'" added Steig. "Never in the movie does he say: 'This particular event is caused by global warming.'" Gore's use of long records of CO2 and temperature (from oxygen isotope measurements) in Antarctic ice cores to illustrate the correlation between the two drew some scrutiny; Schmidt, Steig and Michael E. Mann backed up Gore's data. "Gore stated that the greenhouse gas levels and temperature changes over ice age signals had a complex relationship but that they 'fit'. Both of these statements are true," said Schmidt and Mann. "The complexity though is actually quite fascinating ... a full understanding of why CO2 changes in precisely the pattern that it does during ice ages is elusive, but among the most plausible explanations is that increased received solar radiation in the southern hemisphere due to changes in Earth's orbital geometry warms the southern ocean, releasing CO2 into the atmosphere, which then leads to further warming through an enhanced greenhouse effect. Gore's terse explanation does not delve into such complexities, but the crux of his point—that the observed long-term relationship between CO2 and temperature in Antarctica supports our understanding of the warming impact of increased CO2 concentrations—is correct. Moreover, our knowledge of why CO2 is changing now (fossil fuel burning) is solid. We also know that CO2 is a greenhouse gas, and that the carbon cycle feedback is positive (increasing temperatures lead to increasing CO2 and CH4), implying that future changes in CO2 will be larger than we might anticipate." "Gore is careful not to state what the temperature/CO2 scaling is," said Steig. "He is making a qualitative point, which is entirely accurate. The fact is that it would be difficult or impossible to explain past changes in temperature during the ice age cycles without CO2 changes. In that sense, the ice core CO2-temperature correlation remains an appropriate demonstration of the influence of CO2 on climate." Steig disputed Gore's statement that one can visibly see the effect that the United States Clean Air Act has had on ice cores in Antarctica.
"One can neither see, nor even detect using sensitive chemical methods any evidence in Antarctica of the Clean Air Act," he said, but did note that they are "clearly recorded in ice core records from Greenland." Despite these flaws, Steig said that the film got the fundamental science right and the minor factual errors did not undermine the main message of the film, adding "An Inconvenient Truth rests on a solid scientific foundation." Lonnie Thompson, Earth Science professor at Ohio State University, whose work on retreating glaciers was featured in the film, was pleased with how his research was presented. "It's so hard given the breadth of this topic to be factually correct, and make sure you don't lose your audience," Thompson said. "As scientists, we publish our papers in Science and Nature, but very few people read those. Here's another way to get this message out. To me, it's an excellent overview for an introductory class at a university. What are the issues and what are the possible consequences of not doing anything about those changes? To me, it has tremendous value. It will reach people that scientists will never reach." John Nielsen-Gammon from Texas A&M University said the "main scientific argument presented in the movie is for the most part consistent with the weight of scientific evidence, but with some of the main points needing updating, correction, or qualification." Nielsen-Gammon thought the film neglected information gained from computer models, and instead relied entirely on past and current observational evidence, "perhaps because such information would be difficult for a lay audience to grasp, believe, or connect with emotionally."Steven Quiring, a climatologist from Texas A&M University, added that "whether scientists like it or not, An Inconvenient Truth has had a much greater impact on public opinion and public awareness of global climate change than any scientific paper or report." Reception Box office The film opened in New York City and Los Angeles on May 24, 2006. On Memorial Day weekend, it grossed an average of $91,447 per theater, the highest of any movie that weekend and a record for a documentary, though it was only playing on four screens at the time.At the 2006 Sundance Film Festival, the movie received three standing ovations. It was also screened at the 2006 Cannes Film Festival and was the opening night film at the 27th Durban International Film Festival on June 14, 2006.An Inconvenient Truth was the most popular documentary at the 2006 Brisbane International Film Festival.The film has grossed over $24 million in the U.S., making it the eleventh-highest-grossing documentary in the U.S. (from 1982 to the present). It grossed nearly $26 million in foreign countries, the highest being France, where it grossed $5 million. According to Gore, "Tipper and I are devoting 100 percent of the profits from the book and the movie to a new bipartisan educational campaign to further spread the message about global warming." Paramount Classics committed 5% of their domestic theatrical gross from the film to form a new bipartisan climate action group, Alliance for Climate Protection, dedicated to awareness and grassroots organizing. Critical response The film received a positive reaction from film critics and audiences. It garnered a "certified fresh" 93% rating at Rotten Tomatoes, based on 166 reviews, and an average rating of 7.74/10. 
The website's critical consensus states, "This candid, powerful and informative documentary illuminates some of the myths surrounding its dual subjects: global warming and Al Gore". At Metacritic, which assigns a weighted average score out of 100 to reviews from mainstream critics, the film has received an average score of 75, based on 32 reviews, indicating "generally favorable reviews". Film critics Roger Ebert and Richard Roeper gave the film "two thumbs up". Ebert said, "In 39 years, I have never written these words in a movie review, but here they are: You owe it to yourself to see this film. If you do not, and you have grandchildren, you should explain to them why you decided not to," calling the film "horrifying, enthralling and [having] the potential, I believe, to actually change public policy and begin a process which could save the Earth." New York Magazine critic David Edelstein called the film "One of the most realistic documentaries I've ever seen—and, dry as it is, one of the most devastating in its implications." The New Yorker's David Remnick added that while it was "not the most entertaining film of the year ... it might be the most important" and a "brilliantly lucid, often riveting attempt to warn Americans off our hellbent path to global suicide." The New York Times reviewer A. O. Scott thought the film was "edited crisply enough to keep it from feeling like 90 minutes of C-SPAN and shaped to give Mr. Gore's argument a real sense of drama," and "as unsettling as it can be," Scott continued, "it is also intellectually exhilarating, and, like any good piece of pedagogy, whets the appetite for further study." Bright Lights Film Journal critic Jayson Harsin declared the film's aesthetic qualities groundbreaking, as a new genre of slideshow film. NASA climatologist James Hansen described the film as powerful, complemented by detail in the book. Hansen said that "Gore has put together a coherent account of a complex topic that Americans desperately need to understand. The story is scientifically accurate and yet should be understandable to the public, a public that is less and less drawn to science." He added that with An Inconvenient Truth, "Al Gore may have done for global warming what Silent Spring did for pesticides. He will be attacked, but the public will have the information needed to distinguish our long-term well-being from short-term special interests." In "extensive exit polling" of An Inconvenient Truth in "conservative suburban markets like Plano and Irvine (Orange County), as well as Dallas and Long Island", 92 percent rated "Truth" highly and 87 percent of the respondents said they'd recommend the film to a friend. University of Washington professor Michele Poff argued that Gore was successful in communicating to conservative-leaning audiences by framing the climate crisis as apolitical. "Gore's and the environment's identification with the Democratic Party posed a significant challenge to reaching Republicans and conservatives, as well as those disgruntled with politics in general," Poff wrote. "To appeal to such individuals, Gore framed the matter as distinctly apolitical – as an issue both outside politics and one that was crucial regardless of one's ideological leanings. These explicit attempts to frame the issue as apolitical take on further gravitas when we consider how Gore infused the film with reflections of conservative values.
Indeed, Gore reached deeply into the value structure of American conservatives to highlight ideals that suggested his cause was not liberal, but rather was beyond politics, beyond ideology." A small number of reviews criticized the film on scientific and political grounds. Journalist Ronald Bailey argued in the libertarian magazine Reason that although "Gore gets [the science] more right than wrong," he exaggerates the risks. MIT atmospheric physicist Richard S. Lindzen was vocally critical of the film, writing in a June 26, 2006 op-ed in The Wall Street Journal that Gore was using a biased presentation to exploit the fears of the public for his own political gain. A few other reviewers were also skeptical of Gore's intent, wondering whether he was setting himself up for another presidential run. The Boston Globe writer Peter Canellos criticized the "gauzy biographical material that seems to have been culled from old Gore campaign commercials." Phil Hall of Film Threat gave the film a negative review, saying "An Inconvenient Truth is something you rarely see in movies today: a blatant intellectual fraud." Others felt that Gore did not go far enough in depicting the threat Indigenous peoples faced with the dire consequences of climate change. "An Inconvenient Truth completely ignores the plight of Arctic indigenous peoples whose cultures and landscapes are facing profound changes produced by melting polar ice," argued environmental historian Finis Dunaway. Accolades An Inconvenient Truth has received many different awards worldwide. The film won two awards at the 79th Academy Awards: Best Documentary Feature and Best Original Song for Melissa Etheridge's "I Need to Wake Up". It was the first documentary to win two Oscars and the first to win an Academy Award for Best Original Song. The Academy Award for Documentary Feature was presented to director Guggenheim, who asked Gore to join him and other members of the crew on stage. Gore then gave a brief speech, saying: My fellow Americans, people all over the world, we need to solve the climate crisis. It's not a political issue; it's a moral issue. We have everything we need to get started, with the possible exception of the will to act. That's a renewable resource. Let's renew it. For his wide-reaching efforts to draw the world's attention to the dangers of global warming, which the film centerpieces, Gore shared the 2007 Nobel Peace Prize with the Intergovernmental Panel on Climate Change (IPCC). Gore also received the 2007 Prince of Asturias Award for International Cooperation. The related album, which featured the voices of Beau Bridges, Cynthia Nixon and Blair Underwood, also won Best Spoken Word Album at the 51st Grammy Awards. The film received numerous other accolades, including a special recognition from the Humanitas Prize (the first time the organization had handed out a Special Award in over 10 years), the 2007 Stanley Kramer Award from the Producers Guild of America, which recognizes "work that dramatically illustrates provocative social issues", and the President's Award 2007 from the Society for Technical Communication "for demonstrating that effective and understandable technical communication, when coupled with passion and vision, has the power to educate—and change—the world." The film also won many other awards for Best Documentary. Impact The documentary has been generally well received politically in many parts of the world and is credited with raising further awareness of global warming internationally.
The film inspired producer Kevin Wall to conceive the 2007 Live Earth festival and influenced Italian composer Giorgio Battistelli to write an operatic adaptation, entitled "CO2," which premiered at La Scala in Milan in 2015. According to the Encyclopædia Britannica, in response to the documentary, "Pro-industry conservative politicians and their supporters (many of whom saw global warming as a hoax designed to bilk taxpayers out of their money) lined up on one side, while scientists and more-liberal politicians (who pressed that global warming was among the most important issues humanity would face) teamed up on the other", adding that "Most remember the film as part motivational science lecture with slick graphics and part self-reflection." Activism Following the film, Gore founded The Climate Reality Project in 2006, which trained 1,000 activists to give Gore's presentation in their communities. Presently, the group has 3,500 presenters worldwide. An additional initiative, called "Inconvenient Youth", was launched in 2010. "'Inconvenient Youth' is built on the belief that teens can help lead efforts to solve the climate crisis," said Gore. The project was inspired by Mary Doerr, a 16-year-old who trained as a presenter for the organization. University of Scranton professor Jessica Nolan found in a 2010 study published in Environment and Behavior that people became more informed and concerned about climate change right after seeing the film, but that these concerns did not translate into changed behavior a month later. In contrast, in a 2011 paper published in the Journal of Environmental Economics and Management, University of Oregon professor Grant Jacobsen found that in the two months following the release of the film, zip codes within a 10-mile (16 km) radius of a zip code where the film was shown experienced a 50 percent relative increase in the purchase of voluntary carbon offsets. Public opinion In a July 2007 47-country Internet survey conducted by The Nielsen Company and Oxford University, 66% of those respondents who said they had seen An Inconvenient Truth stated that it had "changed their mind" about global warming and 89% said it had made them more aware of the problem. Three out of four (74%) said they had changed some of their habits because of seeing the film. Governmental reactions Then-President George W. Bush, when asked whether he would watch the film, responded: "Doubt it." "New technologies will change how we live and how we drive our cars, which all will have the beneficial effect of improving the environment," Bush said. "And in my judgment we need to set aside whether or not greenhouse gases have been caused by mankind or because of natural effects and focus on the technologies that will enable us to live better lives and at the same time protect the environment". Gore responded that "The entire global scientific community has a consensus on the question that human beings are responsible for global warming and he [Bush] has today again expressed personal doubt that that is true." White House deputy press secretary Dana Perino stated that "The president noted in 2001 the increase in temperatures over the past 100 years and that the increase in greenhouse gases was due to a certain extent to human activity". Several United States Senators screened the film. New Mexico Democratic Senator Jeff Bingaman and Nevada Democratic Senator Harry Reid saw the movie at its Washington premiere at the National Geographic Society.
New Mexico Democratic Senator Tom Udall planned to see the film, saying, "It's such a powerful statement because of the way the movie is put together, I tell everybody, Democrat or Republican, they've got to go see this movie." Former New Mexico Republican Senator Pete Domenici thought Gore's prominence on the global warming issue made it more difficult to get a consensus in Congress. Bingaman disputed this, saying, "It seems to me we were having great difficulty recruiting Republican members of Congress to support a bill before Al Gore came up with this movie." Oklahoma Republican Senator Jim Inhofe, then-chairman of the Senate Environment and Public Works Committee, said that he did not plan to see the film (which he appears in), and compared it to Adolf Hitler's book Mein Kampf. "If you say the same lie over and over again, and particularly if you have the media's support, people will believe it," Inhofe said, adding that he thought Gore was trying to use the issue to run for president again in 2008. In contrast to Inhofe, Arizona Republican Senator John McCain did not criticize Gore's efforts or the movie, which he planned to see. Tennessee Republican Senator Lamar Alexander said, "Because (Gore) was a former vice president and presidential nominee, he brings a lot of visibility to (the issue). On the other hand it may be seen as political by some, and they may be less eager to be a part of it." Alexander also criticized the omission of nuclear power in the film. "Maybe it needs a sequel: 'An Inconvenient Truth 2: Nuclear Power.'" In September 2006, Gore traveled to Sydney, Australia to promote the film. Then-Australian Prime Minister John Howard said he would not meet with Gore or agree to Kyoto because of the movie: "I don't take policy advice from films." Former Opposition Leader Kim Beazley joined Gore for a viewing and other MPs attended a special screening at Parliament House earlier in the week. After winning the general election a year later, Prime Minister Kevin Rudd ratified Kyoto in his first week of office, leaving the United States as the only industrialized nation in the world not to have ratified the treaty. In the United Kingdom, Conservative party leader and future Prime Minister David Cameron urged people to watch the film in order to understand climate change. In Belgium, activist Margaretha Guidone persuaded the entire Belgian government to see the film. Some 200 politicians and political staff accepted her invitation, among whom were Belgian prime minister Guy Verhofstadt and the Minister-President of Flanders, Yves Leterme. In Costa Rica, the film was screened by President Óscar Arias. Arias's subsequent championing of the climate change issue was greatly influenced by the film. Industry and business The Competitive Enterprise Institute released pro-carbon dioxide television ads in preparation for the film's release in May 2006. The ads featured a little girl blowing a dandelion with the tagline, "Carbon dioxide. They call it pollution. We call it life." In August 2006, The Wall Street Journal revealed that a YouTube video lampooning Gore and the movie, titled Al Gore's Penguin Army, appeared to be "astroturfing" by DCI Group, a Washington public relations firm. Use in education Several colleges and high schools have featured the film in science curricula. In Germany, Environment Minister Sigmar Gabriel bought 6,000 DVDs of An Inconvenient Truth to make it available to German schools.
Spanish Prime Minister José Luis Rodríguez Zapatero distributed 30,000 copies to Spanish schools in October 2007. In Burlington, Ontario, Canada, the Halton District School Board made An Inconvenient Truth available at schools and as an educational resource. In the United Kingdom As part of a nationwide "Sustainable Schools Year of Action" launched in late 2006, the UK Government, Welsh Assembly Government and Scottish Executive announced between January and March 2007 that copies of An Inconvenient Truth would be sent to all their secondary schools. The film was placed into the science curriculum for fourth and sixth-year students in Scotland as a joint initiative between Learning and Teaching Scotland and Scottish Power. Dimmock case In May 2007, Stewart Dimmock—a school governor from Kent, England and member of the right-wing New Party—challenged the UK Government's distribution of the film in a lawsuit, Dimmock v Secretary of State for Education and Skills, with help from political ally and New Party founder Viscount Monckton, who claimed the film contained "35 serious scientific errors". The plaintiffs sought an injunction preventing the screening of the film in English schools, arguing that by law schools are forbidden to promote partisan political views and, when dealing with political issues, are required to provide a balanced presentation of opposing views. On October 10, 2007, High Court Justice Michael Burton, after explaining that the requirement for a balanced presentation does not warrant that equal weight be given to views opposing the mainstream view, ruled that it was clear that the film was substantially founded upon scientific research and fact, albeit that the science had been used, in the hands of a "talented politician and communicator", to make a political statement and to support a political program. The judge ruled that An Inconvenient Truth contained nine scientific errors and thus must be accompanied by an explanation of those errors before being shown to schoolchildren. The judge said that showing the film without the explanations of error would be a violation of education laws. The judge concluded, "I have no doubt that Dr Stott, the Defendant's expert, is right when he says that: 'Al Gore's presentation of the causes and likely effects of climate change in the film was broadly accurate.'" On the basis of testimony from Robert M. Carter and the arguments put forth by the claimant's lawyers, the judge also pointed to nine "errors", i.e. statements the truth of which he did not rule on, but that he found to depart from the mainstream scientific positions on global warming. He also found that some of these departures from the mainstream arose in the context of alarmism and exaggeration in support of political theses. Since the government had already agreed to amend the guidance notes to address these along with other points in a fashion that the judge found satisfactory, no order was made on the application. The Minister for Children, Young People and Families, Kevin Brennan, stated: "We have updated the accompanying guidance, as requested by the judge to make it clearer for teachers as to the stated Intergovernmental Panel on Climate Change position on a number of scientific points raised in the film." Plaintiff Dimmock complained that "no amount of turgid guidance" could change his view that the film was unsuitable for the classroom.
In the United States In January 2007, the Federal Way (Washington State) School Board voted to require approval from the principal and the superintendent before teachers could show the film to students, and to require teachers to present an approved "opposing view". The moratorium was repealed at a meeting on January 23, after a predominantly negative community reaction. Shortly thereafter, the school board in Yakima, Washington, calling the film a "controversial issue", prevented the Environmental Club of Eisenhower High School from showing it, pending review by the school board, teachers, principal, and parents. It lifted the stay a month later, upon approval by a review panel. National Science Teachers Association In the United States, 50,000 free copies of An Inconvenient Truth were offered to the National Science Teachers Association (NSTA), which declined to take them. Producer David provided email correspondence from the NSTA detailing that their reasoning was that the DVDs would place "unnecessary risk upon the [NSTA] capital campaign, especially certain targeted supporters," and that they saw "little, if any, benefit to NSTA or its members" in accepting the free DVDs. In public, the NSTA argued that distributing this film to its members would have been contrary to a long-standing NSTA policy against distributing unsolicited materials to its members. The NSTA also said that they had offered several other options for distributing the film but ultimately "[it] appears that these alternative distribution mechanisms were unsatisfactory." David said that NSTA Executive Director Gerry Wheeler had promised in a telephone conversation to explore alternatives with NSTA's board for publicizing the film, but that she had not received an alternative offer by the time of NSTA's public claim. She also said that she rejected their subsequent offers because they were nothing more than offers to sell their "commercially available member mailing list" and advertising space in their magazine and newsletter, which are available to anyone. The American Association for the Advancement of Science publication ScienceNOW published an assessment discussing both sides of the NSTA decision, in which it was reported that "David says NSTA's imprimatur was essential and that buying a mailing list is a nonstarter. 'You don't want to send out a cold letter, and it costs a lot of money,' she says. 'There are a thousand reasons why that wouldn't work.'" A Washington Post editorial called the decision "Science a la Joe Camel", citing for example that the NSTA had received $6 million since 1996 from ExxonMobil, which had a representative on the organization's corporate board. David noted that in the past, NSTA had shipped out 20,000 copies of a 10-part video produced by Wheeler with funding provided by ConocoPhillips in 2003. NSTA indicated that they retained editorial control over the content, which David questioned based on the point of view portrayed in the global warming section of the video. In New Zealand Former ACT New Zealand Member of Parliament Muriel Newman filed a petition to protect New Zealand schoolchildren from political indoctrination by adding provisions to the Education Act resembling those in the UK. The petition was in response to concerned parents who talked with Newman after An Inconvenient Truth was shown in schools in 2007.
The parents were apparently worried that teachers were not pointing out supposed inaccuracies in the film and were not explaining differing viewpoints. Music An Inconvenient Truth was scored by Michael Brook, with an accompanying theme song played during the end credits by Melissa Etheridge. Brook explained that he wanted to bring out the emotion expressed in the film: "... in An Inconvenient Truth, there's a lot of information and it's kind of a lecture, in a way, and very well organized and very well presented, but it's a lot to absorb. And the director, Guggenheim, wanted to have – sort of give people a little break every once in a while and say, okay, you don't have to absorb this information, you can just sort of – and it was more the personal side of Al Gore's life or how it connected to the theme of the film. And that's when there's music." Etheridge agreed to write An Inconvenient Truth's theme song, "I Need to Wake Up", after viewing Gore's slide show. "I was so honored he would ask me to contribute to a project that is so powerful and so important, I felt such a huge responsibility," she said. "Then I went, 'What am I going to write? What am I going to say?' " Etheridge's former partner, Tammy Lynn Michaels, told her: "Write what you feel, because that's what people are going to feel." Of Etheridge's commitment to the project, Gore said, "Melissa is a rare soul who gives a lot of time and effort to causes in which she strongly believes." Etheridge received the Academy Award for Best Original Song for "I Need to Wake Up." Upon receiving the award, she noted in her acceptance speech: Mostly I have to thank Al Gore, for inspiring us, for inspiring me, showing that caring about the Earth is not Republican or Democrat; it's not red or blue, it's all green. Book and documentary Gore's book of the same title was published concurrently with the theatrical release of the documentary. The book contains additional information, scientific analysis, and Gore's commentary on the issues presented in the documentary. A 2007 documentary entitled An Update with Former Vice President Al Gore features Gore discussing additional information that came to light after the film was completed, such as Hurricane Katrina, coral reef depletion, glacial earthquake activity on the Greenland ice sheet, wildfires, and trapped methane gas release associated with permafrost melting. Sequel When asked during a Reddit "Ask Me Anything" in October 2013 whether there were plans for a follow-up film, Guggenheim said, "I think about it a lot – I think we need one right now." In 2014, The Hollywood Reporter reported that the producers of the film were in talks over a possible sequel. "We have had conversations," co-producer Bender said. "We've met; we've discussed. If we are going to make a movie, we want it to have an impact." Co-producer David also believed a sequel was needed. "God, do we need one," David said. "Everything in that movie has come to pass. At the time we did the movie, there was Hurricane Katrina; now we have extreme weather events every other week. The update has to be incredible and shocking." In December 2016, Al Gore announced that a follow-up to An Inconvenient Truth would open at the 2017 Sundance Film Festival. The film was screened in the festival's new Climate section, dedicated to films featuring themes of climate and the environment. It was released by Paramount on July 28, 2017.
energy in hawaii
Energy in the U.S. state of Hawaii is produced from a mixture of fossil fuel and renewable resources. Producing energy is complicated by the state's isolated location and lack of fossil fuel resources. The state relies heavily on imports of petroleum. Hawaii has the highest share of petroleum use in the United States, with about 62% of electricity coming from oil in 2017. As of 2021, renewable energy made up 34.5% of electricity generation. Hawaii has the highest electricity prices in the United States. As of 2016, the average cost of electricity was $0.24 per kilowatt-hour, followed by Alaska at $0.19; the U.S. average was $0.10. Consumption Hawaii's energy consumption is dominated by oil, which in 2016 provided 83% (down from 85.0% in 2008 and 99.7% in 1960). Other sources in 2016 included coal (5.6%) and renewable energy (11.2%). Renewable energy support Hawaii has committed to developing renewable energy to supply 70 percent or more of Hawaii's energy needs by 2030. Hawaii requires solar water heaters for new homes, except for those in areas with poor solar energy resources, homes using other renewable energy sources, and homes employing on-demand gas-fired water heaters. It offers a rebate of the lesser of 35% of the cost of photovoltaics or $5,000. History Hawaii began concrete support for renewable energy in the 21st century. Legislation HB 3179 made it easier for biofuel producers to lease state lands. SB 3190 and HB 2168 authorized special purpose revenue bonds to help finance a solar energy facility on Oahu and hydrogen generation and conversion facilities at the Natural Energy Laboratory of Hawaii Authority, located on Hawaii island. In 2010, SB 644 mandated solar water heaters for new construction, with some exceptions. The bill eliminated solar thermal energy tax credits for homes. SB 988 allowed the Hawaii Public Utility Commission to establish a rebate for photovoltaic systems, and HB 2550 encouraged net metering for residential and small commercial customers. In 2008, HB 2863 provided streamlined permitting for new renewable energy facilities of at least 200 megawatts capacity. HB 2505 created a full-time renewable energy facilitator to help the state expedite permits. HB 2261 provided loans of up to $1.5 million and up to 85% of the cost of renewable energy projects at farms and aquaculture facilities. HRS 235 established an income tax credit for photovoltaic systems of the lesser of 35% of the cost or $5,000. Hawaii Clean Energy Initiative On January 28, 2008, the State of Hawaii and the US Department of Energy announced the Hawaii Clean Energy Initiative, which established the commitment for renewable energy to supply 70 percent of Hawaii's energy needs by 2030. The Initiative intended to work with public and private partners on renewable energy projects, including designing cost-effective approaches for 100 percent use of renewable energy on smaller islands, improving grid stability while incorporating variable generating sources, and expanding Hawaii's capability to use locally grown crops for producing fuel and electricity. Partners include the United States Department of Energy's EERE, the state of Hawaii, Hawaiian Electric Company, and Phoenix Motorcars. Natural Energy Laboratory of Hawaii Authority The Natural Energy Laboratory of Hawaii Authority is a test site for experimental renewable energy.
It was originally built to test ocean thermal energy conversion (OTEC) and later evolved into a commercial industrial park (though one still requiring state subsidies and county agricultural-rate potable water), hosting desalination of drinking water for export, aquaculture, biofuel from algae, solar thermal energy, concentrating solar power, and wind power. Energy use by sector Transportation The electric Skyline rail network, built to carry commuters on Oahu, was planned to begin operation in late 2020; as of 2019, it was scheduled to open in 2025 at the earliest. Electricity Ninety-nine percent of the population in Hawaii (outside of Kauai) is supplied by Hawaiian Electric Industries (HECO). Kauai is supplied by the consumer-owned Kauai Island Utility Cooperative. As of 2018, the total dispatchable capacity was 1,727 MW, and the intermittent generation capacity was 588 MW. Each island generates its own power, unconnected to other islands. The islands have several grid batteries. Oil Oil is the largest electricity source. As of 2022, it produced ~2863 MWh, or 38% of the total. Solar Solar power in Hawaii grew quickly, putting the cost of household energy generation below the cost of purchased electricity. In 2017, solar power produced 38.4% of the state's renewable electricity. As of March 2020, 916 MW of solar generating capacity was installed in HECO areas. Wind power As of 2022, Hawaii wind farms were producing 701 MWh, or 22.1% of the state's electricity, generated by wind farms across the islands. Hawaii began research into wind power in the mid-1980s with a 340 kW turbine on Maui, the 2.3 MW Lalamilo Wells wind farm on Oahu and the 9 MW Kamaoa wind farm on Hawaii Island. The MOD-5B, a 3.2 MW wind turbine on Oahu, was the largest in the world in 1987. These early examples were all out of service by 2010. Biomass Hawaii has several biomass electric plants, including the 10 MW Honolulu International Airport Emergency Power Facility, the 6.7 MW Green Energy Agricultural Biomass-to-Energy Facility on Kauai, and the 6.6 MW waste-to-energy Honua Power Project on Hawaii Island. The 21.5 MW Hu Honua plant remains in litigation and is not online. Wärtsilä sold a plant to Hawaii Electric to be installed at Schofield Barracks Army Base on Oahu in 2017. The plant can run on solid or gas fuels including biomass. Pacific Biodiesel operates a biodiesel production facility on Hawaii Island. It provides fuel to Hawaiian Electric Industries, the City and County of Honolulu and marine company Extended Horizons. Coal Hawaii has banned new coal plants. Between 1992 and 2022, a single plant operated in the state, the AES Hawaii Power Plant, which generated 180 MWe. The plant closed in September 2022, accompanied by a 7% increase in electricity rates. Wave power The U.S. Navy and the University of Hawaii operate a Wave Energy Test Site in Kaneohe Bay. Geothermal The 38 MW Puna Geothermal Venture was constructed on Hawaii island between 1989 and 1993. It operated until May 2018, when it was shut down due to the 2018 lower Puna eruption, and resumed production at 25 MW in November 2020. Algae fuel Cellana produces oil from algae at a 2.5-hectare (6.2-acre) research site at Kailua-Kona on Hawaii island. Cellana (previously HR BioPetroleum) worked with Royal Dutch Shell on a pilot facility to grow algae on land leased from the Natural Energy Laboratory of Hawaii Authority, on the island's west shore.
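The Electricity section above mixes nameplate capacity figures in megawatts (MW) with generation figures in megawatt-hours (MWh). A minimal Python sketch, using capacity factors that are assumptions chosen only for illustration and not taken from this article, shows how the two kinds of figures relate:

HOURS_PER_YEAR = 8760  # hours in a non-leap year

def annual_energy_mwh(capacity_mw, capacity_factor):
    # Estimate annual generation (MWh) from nameplate capacity (MW)
    # and a capacity factor (the fraction of the year spent at full output).
    return capacity_mw * HOURS_PER_YEAR * capacity_factor

# Example: a 25 MW geothermal plant at an assumed 90% capacity factor
print(round(annual_energy_mwh(25, 0.90)))    # 197100 MWh per year

# Example: 588 MW of intermittent (solar and wind) capacity at an assumed 25% capacity factor
print(round(annual_energy_mwh(588, 0.25)))   # 1287720 MWh per year

The 90% and 25% capacity factors are illustrative assumptions; actual values vary by plant, technology, and year.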
american enterprise institute
The American Enterprise Institute for Public Policy Research, known simply as the American Enterprise Institute (AEI), is a center-right think tank based in Washington, D.C., that researches government, politics, economics, and social welfare. AEI is an independent nonprofit organization supported primarily by contributions from foundations, corporations, and individuals. Founded in 1938, the organization is aligned with conservatism and neoconservatism but does not support political candidates. AEI advocates in favor of private enterprise, limited government, and democratic capitalism. Some of its positions have attracted controversy, including its defense policy recommendations for the Iraq War, its analysis of the financial crisis of 2007–2008, and its energy and environmental policies based on its more than two-decade-long opposition to the prevailing scientific opinion on climate change. AEI is governed by a 28-member Board of Trustees. Approximately 185 authors are associated with AEI. Arthur C. Brooks served as president of AEI from January 2009 through July 1, 2019. He was succeeded by Robert Doar. History Beginnings (1938–1954) AEI grew out of the American Enterprise Association (AEA), which was founded in 1938 by a group of New York businessmen led by Lewis H. Brown. AEI's founders included executives from Bristol-Myers, Chemical Bank, Chrysler, Eli Lilly, General Mills, and Paine Webber. In 1943, AEA's main offices were moved from New York City to Washington, D.C., at a time when Congress's portfolio had vastly expanded during World War II. AEA opposed the New Deal and aimed to propound classical liberal arguments for limited government. In 1944, AEA convened an Economic Advisory Board to set a high standard for research; this eventually evolved into the Council of Academic Advisers, whose members over the decades included economists and social scientists such as Ronald Coase, Martin Feldstein, Milton Friedman, Roscoe Pound, and James Q. Wilson. AEA's early work in Washington, D.C. involved commissioning and distributing legislative analyses to Congress, which developed AEA's relationships with Melvin Laird and Gerald Ford. Brown eventually shifted AEA's focus to commissioning studies of government policies. These subjects ranged from fiscal and monetary policy to health care and energy policy, with authors such as Earl Butz, John Lintner, former New Dealer Raymond Moley, and Felix Morley. Brown died in 1951, and AEA languished as a result. In 1952, a group of young policymakers and public intellectuals including Laird, William J. Baroody Sr., Paul McCracken, and Murray Weidenbaum met to discuss resurrecting AEA. In 1954, Baroody became executive vice president of the association. William J. Baroody Sr. (1954–1980) Baroody was executive vice president from 1954 to 1962 and president from 1962 to 1978. Baroody raised money for AEA to expand its financial base beyond the business leaders on the board. During the 1950s and 1960s, AEA's work became more pointed and focused, including monographs by Edward Banfield, James M. Buchanan, P. T. Bauer, Alfred de Grazia, Rose Friedman, and Gottfried Haberler. In 1962, AEA changed its name to the American Enterprise Institute for Public Policy Research (AEI) to avoid any confusion with a trade association representing business interests attempting to influence politicians. In 1964, William J.
Baroody Sr., and several of his top staff at AEI, including Karl Hess, moonlighted as policy advisers and speechwriters for presidential nominee Barry Goldwater in the 1964 presidential election. "Even though Baroody and his staff sought to support Goldwater on their own time without using the institution's resources, AEI came under scrutiny of the IRS in the years following the campaign," author Andrew Rich wrote in 2004. Representative Wright Patman subpoenaed the institute's tax papers, and the IRS initiated a two-year investigation of AEI. After this, AEI's officers attempted to avoid the appearance of partisan political advocacy. Baroody recruited a resident research faculty; Harvard University economist Gottfried Haberler was the first to join in 1972. In 1977, former president Gerald Ford joined AEI as a "distinguished fellow." Ford brought several of his administration officials with him, including Robert Bork, Arthur Burns, David Gergen, James C. Miller III, Laurence Silberman, and Antonin Scalia. Ford also founded the AEI World Forum, which he hosted until 2005. Other staff hired during this time included Walter Berns and Herbert Stein. Baroody's son, William J. Baroody Jr., a Ford White House official, also joined AEI, and later became president of AEI, succeeding his father in that role in 1978. The elder Baroody made an effort to recruit neoconservatives who had supported the New Deal and Great Society but were disaffected by what they perceived as the failure of the welfare state. This also included Cold War hawks who rejected the peace agenda of 1972 Democratic presidential candidate George McGovern. Baroody brought Jeane Kirkpatrick, Irving Kristol, Michael Novak, and Ben Wattenberg to AEI. While at AEI, Kirkpatrick authored "Dictatorships and Double Standards", which brought her to the attention of Ronald Reagan, and Kirkpatrick was later named U.S. permanent representative to the United Nations. AEI also became a home for supply-side economists during the late 1970s and early 1980s. By 1980, AEI had grown from a budget of $1 million and a staff of ten to a budget of $8 million and a staff of 125. William J. Baroody Jr. (1980–1986) Baroody Sr. retired in 1978, and was replaced by his son, William J. Baroody Jr. Baroody Sr. died in 1980, shortly before Reagan took office as U.S. president in January 1981. During the Reagan administration, several AEI staff members were hired into government posts. But this, combined with prodigious growth, diffusion of research activities, and managerial problems, proved costly. Some foundations then supporting AEI perceived a drift toward the center politically. Centrists like Ford, Burns, and Stein clashed with rising movement conservatives. In 1986, the John M. Olin Foundation and the Smith Richardson Foundation withdrew funding for AEI, pushing it to the brink of bankruptcy. The board of trustees fired Baroody Jr., and Paul McCracken then served briefly as interim president. In December 1986, AEI hired Christopher DeMuth as its new president, and DeMuth served in the role for 22 years. Christopher DeMuth (1986–2008) In 1990, after the Manhattan Institute dropped Charles Murray, AEI hired him, along with his Bradley Foundation support for The Bell Curve. Others brought to AEI by DeMuth included John Bolton, Dinesh D'Souza, Richard Cheney, Lynne Cheney, Michael Barone, James K. Glassman, Newt Gingrich, John Lott, and Ayaan Hirsi Ali. During the George H. W.
Bush and Bill Clinton administrations, AEI's revenues grew from $10 million to $18.9 million. The institute's publications Public Opinion and The AEI Economist were merged into The American Enterprise, edited by Karlyn Bowman from 1990 to 1995 and by Karl Zinsmeister from 1995 to 2006, when Glassman created The American. AEI was closely tied to the George W. Bush administration. More than twenty AEI staff members served in the Bush administration, and Bush addressed the institute on three occasions. "I admire AEI a lot—I'm sure you know that", Bush said. "After all, I have been consistently borrowing some of your best people." Cabinet officials also frequented AEI. In 2002, Danielle Pletka joined AEI to promote the foreign policy department. AEI and several of its staff—including Michael Ledeen and Richard Perle—became associated with the start of the Iraq War. President George W. Bush used a February 2003 AEI dinner to advocate for a democratized Iraq, which was intended to inspire the remainder of the Mideast. In 2006–07, AEI staff, including Frederick W. Kagan, provided a strategic framework for the 2007 surge in Iraq. The Bush administration also drew on AEI work in other areas, such as Leon Kass's appointment as the first chairman of the President's Council on Bioethics and Norman J. Ornstein's work drafting the Bipartisan Campaign Reform Act that Bush signed in 2002. Arthur C. Brooks (2008–2019) When DeMuth retired as president at the end of 2008, AEI's staff numbered 185, with 70 scholars and several dozen adjuncts, and revenues of $31.3 million. Arthur C. Brooks succeeded him as president at the start of the late-2000s recession. In a 2009 op-ed in The Wall Street Journal, Brooks positioned AEI to be much more aggressive in responding to the policies of the Barack Obama administration. In 2018, Brooks announced that he would step down effective July 1, 2019. Termination of David Frum's residency On March 25, 2010, AEI resident fellow David Frum announced that his position at the organization had been "terminated." Following this announcement, media outlets speculated that Frum had been "forced out" for writing a post to his FrumForum blog called "Waterloo", in which he criticized the Republican Party's unwillingness to bargain with Democrats on the Patient Protection and Affordable Care Act. In the editorial, Frum claimed that his party's failure to reach a deal "led us to abject and irreversible defeat." After his termination, Frum clarified that his article had been "welcomed and celebrated" by AEI President Arthur Brooks, and that he had been asked to leave because "these are hard times." Brooks had offered Frum the opportunity to write for AEI on a nonsalaried basis, but Frum declined. The following day, journalist Mike Allen published a conversation with Frum, in which Frum expressed a belief that his termination was the result of pressure from donors. According to Frum, "AEI represents the best of the conservative world ... But the elite isn't leading anymore ... I think Arthur [Brooks] took no pleasure in this. I think he was embarrassed." Robert Doar (2019–present) In January 2019, Robert Doar was selected by AEI's board of trustees to be AEI's 12th president, succeeding Arthur Brooks on July 1, 2019. In October 2023, Doar led an AEI delegation (including Kori Schake, Dan Blumenthal, Zack Cooper, and Nicholas Eberstadt, among others) on a visit to Taiwan, where it met with President Tsai Ing-wen. Personnel AEI's officers include Robert Doar, Danielle Pletka, Yuval Levin, Michael R.
Strain, and Ryan Streeter. AEI has a Council of Academic Advisers, which includes Alan J. Auerbach, Eliot A. Cohen, Eugene Fama, Aaron Friedberg, Robert P. George, Eric A. Hanushek, Walter Russell Mead, Mark V. Pauly, R. Glenn Hubbard, Sam Peltzman, Harvey S. Rosen, Jeremy A. Rabkin, and Richard Zeckhauser. The Council of Academic Advisers selects the annual winner of the Irving Kristol Award. AEI was influential during the George W. Bush administration's public and foreign policy. More than 20 staff members served either in a Bush administration policy post or on one of the government's many panels and commissions, including Dick Cheney, John R. Bolton, Lynne Cheney, and Paul Wolfowitz. Board of directors AEI's board is chaired by Daniel A. D'Aniello. Current notable trustees include: Cliff Asness, hedge fund manager and co-founder of AQR Capital Management; Dick Cheney, former U.S. vice president; Pete Coors, vice chairman of the board of Molson Coors Brewing Company; Harlan Crow, chairman and CEO of Crow Holdings, the Trammell Crow family's investment company; Ravenel B. Curry III, president of Eagle Capital Management; Dick DeVos, president of Windquest Group; John V. Faraci, chairman and CEO of International Paper; Tully Friedman, chairman and CEO of Friedman Fleischer & Lowe; Christopher Galvin, former CEO and chairman of Motorola; Harvey Golub, retired chairman and CEO of American Express Company; Robert F. Greenhill, founder and chairman of Greenhill & Co.; Frank Hanna III, CEO of Hanna Capital; Bruce Kovner, chairman of Caxton Alternative Associates (former AEI chairman); John A. Luke Jr., chairman and CEO of MeadWestvaco; Kevin Rollins, former president and CEO of Dell; Matthew K. Rose, executive chairman of BNSF Railway; Edward B. Rust Jr., chairman and CEO of State Farm (former AEI chairman); and Mel Sembler, chairman emeritus of the Sembler Company. Political stance and impact The institute has been described as a right-leaning counterpart to the left-leaning Brookings Institution; however, the two entities have often collaborated. From 1998 to 2008, they co-sponsored the AEI-Brookings Joint Center for Regulatory Studies, and in 2006 they launched the AEI-Brookings Election Reform Project. In 2015, a working group consisting of members from both institutions coauthored a report entitled Opportunity, Responsibility, and Security: A Consensus Plan for Reducing Poverty and Restoring the American Dream. AEI is the most prominent think tank associated with American neoconservatism, in both the domestic and international policy arenas. Irving Kristol, widely considered to be one of the founding fathers of neoconservatism, was a senior fellow at AEI (arriving from the Congress for Cultural Freedom following the revelation of that group's CIA funding), and AEI issues an Irving Kristol Award in his honor. Many prominent neoconservatives—including Jeane Kirkpatrick, Ben Wattenberg, and Joshua Muravchik—spent the bulk of their careers at AEI. Paul Ryan has described the AEI as "one of the beachheads of the modern conservative movement". According to the 2011 Global Go To Think Tank Index Report (Think Tanks and Civil Societies Program, University of Pennsylvania), AEI is number 17 in the "Top Thirty Worldwide Think Tanks" and number 10 in the "Top Fifty United States Think Tanks". As of 2019, the American Enterprise Institute also leads in YouTube subscribers among free-market groups.
Research programs AEI's research is divided into seven broad categories: economic policy studies, foreign and defense policy studies, health care policy studies, political and public opinion studies, social and cultural studies, education, and poverty studies. Until 2008, AEI's work was divided into economics, foreign policy, and politics and social policy. AEI research is presented at conferences and meetings, in peer-reviewed journals and publications on the institute's website, and through testimony before and consultations with government panels. Economic policy studies Economic policy was the original focus of the American Enterprise Association, and "the Institute still keeps economic policy studies at its core". According to AEI's annual report, "The principal goal is to better understand free economies—how they function, how to capitalize on their strengths, how to keep private enterprise robust, and how to address problems when they arise". Michael R. Strain directs economic policy studies at AEI. Throughout the early 21st century, AEI staff have pushed for a more conservative approach to addressing recessions that includes major tax cuts. AEI supported President Bush's tax cuts in 2002 and claimed that the cuts "played a large role in helping to save the economy from a recession". AEI also suggested that further taxes were necessary in order to attain recovery of the economy. An AEI staff member said that the Democrats in Congress who opposed the Bush stimulus plan were foolish for doing so, as he saw the plan as a major success for the administration. Financial crisis of 2007–2008 As the financial crisis of 2007–2008 unfolded, The Wall Street Journal stated that predictions by AEI staff about the involvement of housing GSEs had come true. In the late 1990s, Fannie Mae eased credit requirements on the mortgages it purchased and exposed itself to more risk. Peter J. Wallison warned that Fannie Mae and Freddie Mac's public-private status put taxpayers on the line for increased risk. "Because of the agencies' dual public and private form, various efforts to force Fannie Mae and Freddie Mac to fulfill their public mission at the cost of their profitability have failed—and will likely continue to fail", he wrote in 2001. "The only viable solution would seem to be full privatization or the adoption of policies that would force the agencies to adopt this course themselves." Wallison ramped up his criticism of the GSEs throughout the 2000s. In 2006 and 2007, he moderated conferences featuring James B. Lockhart III, the chief regulator of Fannie and Freddie. In August 2008, after Fannie and Freddie had been backstopped by the US Treasury Department, Wallison outlined several ways of dealing with the GSEs, including "nationalization through a receivership," outright "privatization," and "privatization through a receivership." The following month, Lockhart and Treasury Secretary Henry Paulson took the former path by putting Fannie and Freddie into federal "conservatorship." As the housing crisis unfolded, AEI sponsored a series of conferences featuring commentators including Desmond Lachman, John Makin, and Nouriel Roubini. Makin had been warning about the effects of a housing downturn on the broader economy for months. Amid charges that many homebuyers did not understand their complex mortgages, Alex J. Pollock crafted a prototype of a one-page mortgage disclosure form. The claim that AEI predicted and warned about the financial crisis of 2007–2008 is heavily disputed.
In her book, Dark Money (2016), American investigative journalist Jane Mayer writes that contrary to their claims, AEI took the "lead role" in crafting a revisionist narrative about the financial crisis, promoting what equities analyst Barry Ritholtz called "Wall Street's 'big lie'". AEI's argument, "that government programs that helped low-income home buyers get mortgages caused the collapse", did not "withstand even casual scrutiny", according to Ritholtz. Multiple studies, including those from Harvard University's Joint Center for Housing Studies and the U.S. Government Accountability Office, did not support the conclusions about mortgages reached by AEI. Ritholtz argues that AEI intentionally shifted the blame from the financial sector (many of whose members, according to Mayer, worked at or were affiliated with AEI) to the government and the consumer, so as to continue promoting the questionable idea that the free market does not need regulation. Tax and fiscal policy Kevin Hassett and Alan D. Viard are AEI's principal tax policy experts, although Alex Brill, R. Glenn Hubbard, and Aparna Mathur also work on the subject. Specific subjects include "income distribution, transition costs, marginal tax rates, and international taxation of corporate income... the Pension Protection Act of 2006; dynamic scoring and the effects of taxation on investment, savings, and entrepreneurial activity; and options to fix the alternative minimum tax". Hassett has coedited several volumes on tax reform. Viard edited a book on tax policy lessons from the Bush administration. AEI's working paper series includes developing academic works on economic issues. One paper by Hassett and Mathur on the responsiveness of wages to corporate taxation was cited by The Economist; figures from another paper by Hassett and Brill on maximizing corporate income tax revenue were cited by The Wall Street Journal. Center for Regulatory and Market Studies From 1998 to 2008, the Reg-Markets Center was the AEI-Brookings Joint Center for Regulatory Studies, directed by Robert W. Hahn. The center, which no longer exists, sponsored conferences, papers, and books on regulatory decision-making and the impact of federal regulation on consumers, businesses, and governments. It covered a range of disciplines. It also sponsored an annual Distinguished Lecture series. Past lecturers in the series have included William Baumol, Supreme Court Justice Stephen Breyer, Alfred Kahn, Sam Peltzman, Richard Posner, and Cass Sunstein. Research in AEI's Financial Markets Program also includes banking, insurance and securities regulation, accounting reform, corporate governance, and consumer finance. Energy and environmental policy AEI's work on climate change has been subject to controversy. Some AEI staff and fellows have been critical of the Intergovernmental Panel on Climate Change (IPCC), the international scientific body tasked to evaluate the risk of climate change caused by human activity. According to AEI, it "emphasizes the need to design environmental policies that protect not only nature but also democratic institutions and human liberty". American historian of science Naomi Oreskes notes that this idea became prominent during the conservative turn towards anti-environmentalism in the 1980s. Corporations claimed to uphold a kind of laissez-faire capitalism that promoted individual rights by pushing for deregulation.
To do this successfully, companies would fund think tanks like AEI to cast doubt on science and spread disinformation by arguing that environmental dangers were unproven. When the Kyoto Protocol was approaching in 1997, AEI was hesitant to encourage the U.S. to join. In an essay from the AEI outlook series of 2007, the authors discuss the Kyoto Protocol and state that the United States "should be wary of joining an international emissions-trading regime". To back this statement, they point out that committing to the Kyoto emissions goal would be a significant and unrealistic obligation for the United States. In addition, they state that the Kyoto regulations would have an impact not only on governmental policies, but also the private sector through expanding government control over investment decisions. AEI staff said that "dilution of sovereignty" would be the result if the U.S. signed the treaty. In February 2007, a number of sources, including the British newspaper The Guardian, reported that the AEI had offered scientists $10,000 plus travel expenses and additional payments, asking them to dispute the IPCC Fourth Assessment Report. This offer was criticized as bribery. The letters alleged that the IPCC was "resistant to reasonable criticism and dissent, and prone to summary conclusions that are poorly supported by the analytical work" and asked for essays that "thoughtfully explore the limitations of climate model outputs". In 2016, The Guardian reported that the AEI received $1.6 million in funding from ExxonMobil, and further noted that former ExxonMobil CEO Lee R. Raymond is the vice-chairman of AEI's board of trustees. This story was repeated by Newsweek, which drew criticism from its contributing editor Robert J. Samuelson because "this accusation was long ago discredited, and Newsweek shouldn't have lent it respectability." The Guardian article was disputed in a Wall Street Journal editorial. The editorial stated: "AEI doesn't lobby, didn't offer money to scientists to question global warming, and the money it did pay for climate research didn't come from Exxon." AEI has promoted carbon taxation as an alternative to cap-and-trade regimes. "Most economists believe a carbon tax (a tax on the quantity of CO2 emitted when using energy) would be a superior policy alternative to an emissions-trading regime," wrote Kenneth P. Green, Kevin Hassett, and Steven F. Hayward. "In fact, the irony is that there is a broad consensus in favor of a carbon tax everywhere except on Capitol Hill, where the 'T word' is anathema." Other AEI staff have argued for similar policies. Thernstrom and Lane are codirecting a project on whether geoengineering would be a feasible way to "buy us time to make [the] transition [from fossil fuels] while protecting us from the worst potential effects of warming". Green, who departed AEI in 2013, expanded its work on energy policy. He has hosted conferences on nuclear power and ethanol. With Aparna Mathur, he evaluated Americans' indirect energy use to discover unexpected areas in which energy efficiencies can be achieved. In October 2007, resident scholar and executive director of the AEI-Brookings Joint Center for Regulatory Studies Robert W. Hahn commented: Fending off both sincere and sophistic opposition to cap-and-trade will no doubt require some uncomfortable compromises. Money will be wasted on unpromising R&D; grotesquely expensive renewable fuels may gain a permanent place at the subsidy trough. And, as noted above, there will always be a risk of cheating.
But the first priority should be to seize the day, putting a domestic emissions regulation system in place. Without America's political leadership and economic muscle behind it, an effective global climate stabilization strategy isn't possible. AEI visiting scholar N. Gregory Mankiw wrote in The New York Times in support of a carbon tax on September 16, 2007. He remarked that "there is a broad consensus. The scientists tell us that world temperatures are rising because humans are emitting carbon into the atmosphere. Basic economics tells us that when you tax something, you normally get less of it." After Energy Secretary Steven Chu recommended painting roofs and roads white in order to reflect sunlight back into space and therefore reduce global warming, AEI's magazine The American endorsed the idea. It also stated that "ultimately we need to look more broadly at creative ways of reducing the harmful effects of climate change in the long run." The American's editor-in-chief and fellow Nick Schulz endorsed a carbon tax over a cap and trade program in The Christian Science Monitor on February 13, 2009. He stated that it "would create a market price for carbon emissions and lead to emissions reductions or new technologies that cut greenhouse gases."Former scholar Steven Hayward has described efforts to reduce global warming as being "based on exaggerations and conjecture rather than science". He has stated that "even though the leading scientific journals are thoroughly imbued with environmental correctness and reject out of hand many articles that don't conform to the party line, a study that confounds the conventional wisdom is published almost every week". Likewise, former AEI scholar Kenneth Green has referred to efforts to reduce greenhouse gas emissions as "the positively silly idea of establishing global-weather control by actively managing the atmosphere's greenhouse-gas emissions", and endorsed Michael Crichton's novel State of Fear for having "educated millions of readers about climate science".Christopher DeMuth, former AEI president, accepted that the earth has warmed in recent decades, but he stated that "it's not clear why this happened" and charged as well that the IPCC "has tended to ignore many distinguished physicists and meteorologists whose work casts doubt on the influence of greenhouse gases on global temperature trends". Fellow James Glassman also disputes the prevailing scientific opinion on climate change, having written numerous articles criticizing the Kyoto accords and climate science more generally for Tech Central Station. He supported the views of U.S. Senator Jim Inhofe (R-OK), who claims that "global warming is 'the greatest hoax ever perpetrated on the American people,'" and, like Green, cites Crichton's novel State of Fear, which "casts serious doubt on global warming and extremists who espouse it". Joel Schwartz, an AEI visiting fellow, stated: "The Earth has indeed warmed during the last few decades and may warm further in the future. But the pattern of climate change is not consistent with the greenhouse effect being the main cause." Foreign and defense policy studies AEI's foreign and defense policy studies researchers focus on "how political and economic freedom—as well as American interests—are best promoted around the world". AEI staff have tended to be advocates of a hard U.S. 
line on threats or potential threats to the United States, including the Soviet Union during the Cold War, Saddam Hussein's Iraq, the People's Republic of China, North Korea, Iran, Syria, Venezuela, Russia, and terrorist or militant groups like al Qaeda and Hezbollah. Likewise, AEI staff have promoted closer U.S. ties with countries whose interests or values they view as aligned with America's, such as Israel, the Republic of China (Taiwan), India, Australia, Japan, Mexico, Colombia, the Philippines, the United Kingdom, and emerging post-Communist states such as Poland. AEI takes a pro-Israel stance. In 2015, it awarded Israeli Prime Minister Benjamin Netanyahu its 'Irving Kristol Award'. AEI's foreign and defense policy studies department, directed by Danielle Pletka, is the part of the institute most commonly associated with neoconservatism, especially by its critics. Prominent foreign-policy neoconservatives at AEI include Richard Perle, Gary Schmitt, and Paul Wolfowitz. John Bolton, often said to be a neoconservative, has said he is not one, as his primary focus is on American interests, not democracy promotion. Joshua Muravchik and Michael Ledeen spent many years at AEI, although they departed at around the same time as Reuel Marc Gerecht in 2008 in what was rumored to be a "purge" of neoconservatives at the institute, possibly "signal[ing] the end of [neoconservatism's] domination over the think tank over the past several decades", although Muravchik later said it was the result of personality and management conflicts. U.S. national security strategy, defense policy, and the "surge" In late 2006, the security situation in Iraq continued to deteriorate, and the Iraq Study Group proposed a phased withdrawal of U.S. troops and further engagement of Iraq's neighbors. Consulting with AEI's Iraq Planning Group, Frederick W. Kagan published an AEI report entitled Choosing Victory: A Plan for Success in Iraq calling for "phase one" of a change in strategy to focus on "clearing and holding" neighborhoods and securing the population; a troop escalation of seven Army brigades and Marine regiments; and a renewed emphasis on reconstruction, economic development, and jobs. While the report was being drafted, Kagan and retired General Jack Keane were briefing President Bush, Vice President Cheney, and other senior Bush administration officials behind the scenes. According to Bob Woodward, "[Peter J.] Schoomaker was outraged when he saw news coverage that retired Gen. Jack Keane, the former Army vice chief of staff, had briefed the president on December 11 about a new Iraq strategy being proposed by the American Enterprise Institute, the conservative think tank. 'When does AEI start trumping the Joint Chiefs of Staff on this stuff?' Schoomaker asked at the next chiefs' meeting." Kagan, Keane, and Senators John McCain and Joseph Lieberman presented the plan at a January 5, 2007, event at AEI. Bush announced the change of strategy on January 10. Kagan authored three subsequent reports monitoring the progress of the surge. AEI's defense policy researchers, who also include Schmitt and Thomas Donnelly, work on issues related to the U.S. military forces' size and structure and military partnerships with allies (both bilaterally and through institutions such as NATO). Schmitt directs AEI's Program on Advanced Strategic Studies, which "analyzes the long-term issues that will impact America's security and its ability to lead internationally".
Area studies Asian studies at AEI covers "the rise of China as an economic and political power; Taiwan's security and economic agenda; Japan's military transformation; the threat of a nuclear North Korea; and the impact of regional alliances and rivalries on U.S. military and economic relationships in Asia". AEI has published numerous reports on Asia. Papers in AEI's Tocqueville on China Project series "elicit the underlying civic culture of post-Mao China, enabling policymakers to better understand the internal forces and pressures that are shaping China's future". AEI's Europe program was previously housed under the auspices of the New Atlantic Initiative, which was directed by Radek Sikorski before his return to Polish politics in 2005. Leon Aron's work forms the core of the institute's program on Russia. AEI staff tend to view Russia as posing "strategic challenges for the West". Mark Falcoff, now retired, was previously AEI's resident Latin Americanist, focusing on the Southern Cone, Panama, and Cuba. He has warned that the road for Cuba after Fidel Castro's rule or the lifting of the U.S. trade embargo would be difficult for an island scarred by a half-century of poverty and civil turmoil. Roger Noriega's work at AEI focuses on Venezuela, Brazil, the Mérida Initiative with Mexico and Central America, and hemispheric relations. AEI has historically devoted significant attention to the Middle East, especially through the work of former resident scholars Ledeen and Muravchik. Pletka's research focus also includes the Middle East, and she coordinated a conference series on empowering democratic dissidents and advocates in the Arab world. In 2009, AEI launched the Critical Threats Project, led by Kagan, to "highlight the complexity of the global challenges the United States faces with a primary focus on Iran and al Qaeda's global influence". The project includes IranTracker.org, with contributions from Ali Alfoneh, Ahmad Majidyar and Michael Rubin, among others. International organizations and economic development For several years, AEI and the Federalist Society cosponsored NGOWatch, which was later subsumed into Global Governance Watch, "a web-based resource that addresses issues of transparency and accountability in the United Nations, NGOs, and related international organizations". NGOWatch returned as a subsite of Global Governance Watch, led by Jon Entine. AEI scholars focusing on international organizations include John Bolton, the former U.S. ambassador to the United Nations, and John Yoo, who researches international law and sovereignty. AEI's research on economic development dates back to the early days of the institute. P. T. Bauer authored a monograph on development in India in 1959, and Edward Banfield published a booklet on the theory behind foreign aid in 1970. Since 2001, AEI has sponsored the Henry Wendt Lecture in International Development, named for Henry Wendt, an AEI trustee emeritus and former CEO of SmithKline Beckman. Notable lecturers have included Angus Maddison and Deepak Lal. Nicholas Eberstadt holds the Henry Wendt Chair, focusing on demographics, population growth and human capital development; he served on the federal HELP Commission. Paul Wolfowitz, the former president of the World Bank, researches development policy in Africa. Roger Bate focuses his research on malaria, HIV/AIDS, counterfeit and substandard drugs, access to water, and other problems endemic in the developing world.
Health policy studies AEI scholars have engaged in health policy research since the institute's early days. A Center for Health Policy Research was established in 1974. For many years, Robert B. Helms led the health department. AEI's long-term focuses in health care have included national insurance, Medicare, Medicaid, pharmaceutical innovation, health care competition, and cost control. The center was replaced in the mid-1980s with the Health Policy Studies Program, which continues to this day. The AEI Press has published dozens of books on health policy since the 1970s. Since 2003, AEI has published the Health Policy Outlook series on new developments in U.S. and international health policy. AEI also published "A Better Prescription" to outline its preferred plan for healthcare reform. The report places great emphasis on putting money and control in the hands of consumers and on continuing the market-based system of healthcare. It also acknowledges that this form of healthcare "relies on financial incentives rather than central direction and control, and it recognizes that a one-size-fits-all approach will not work in a country as diverse as ours". In 2009, AEI researchers were active in assessing the Obama administration's health care proposals. Paul Ryan, then-minority point man for health care in the House of Representatives, delivered the keynote address at an AEI conference on five key elements of health reform: mandated universal coverage, insurance exchanges, the public plan option, medical practice and treatment, and revenue to cover federal health care costs. AEI scholars have long argued against the tax break for employer-sponsored health insurance, arguing that it distorts insurance markets and limits consumer choices. In the 2008 U.S. presidential election, John McCain advocated this plan while Barack Obama disparaged it; in 2009, however, members of the Obama administration indicated that lifting the exemption was "on the table." Scott Gottlieb, a medical doctor, has expressed concern about relatively unreliable comparative effectiveness research being used to restrict treatment options under a public plan. AEI publishes a series of monographs on Medicare reform, edited by Helms and Antos. Roger Bate's work includes international health policy, especially pharmaceutical quality, HIV/AIDS, malaria, and multilateral health organizations. In 2008, Dora Akunyili, then Nigeria's top drug safety official, spoke at an AEI event coinciding with the launch of Bate's book Making a Killing. After undergoing a kidney transplant in 2006, Sally Satel expanded her work from drug addiction treatment and mental health to include studies of compensation systems that she argues would increase the supply of organs for transplant. In addition to their work on pharmaceutical innovation and FDA regulation, Gottlieb and John E. Calfee have examined vaccine and antiviral drug supplies in the wake of the 2009 flu pandemic. Legal and constitutional studies The AEI Legal Center for the Public Interest, formed in 2007 when the National Legal Center for the Public Interest merged with AEI, houses all legal and constitutional research at AEI. Legal studies have a long pedigree at AEI; the institute was in the vanguard of the law and economics movement in the 1970s and 1980s with the publication of Regulation magazine and AEI Press books. Robert Bork published The Antitrust Paradox with AEI support.
Other jurists, legal scholars, and constitutional scholars who have conducted research at AEI include Walter Berns, Richard Epstein, Bruce Fein, Robert Goldwin, Antonin Scalia, and Laurence Silberman. The AEI Legal Center sponsors the annual Gauer Distinguished Lecture in Law and Public Policy. Past lecturers include Stephen Breyer, George H. W. Bush, Christopher Cox, Douglas Ginsburg, Anthony Kennedy, Sandra Day O'Connor, Colin Powell, Ronald Reagan, William Rehnquist, Condoleezza Rice, Margaret Thatcher, and William H. Webster. Ted Frank, the director of the AEI Legal Center, focuses on liability law and tort reform. Michael S. Greve focuses on constitutional law and federalism, including federal preemption. Greve is a fixture in the conservative legal movement. According to Jonathan Rauch, in 2005 "a handful of free-market activists and litigators met in a windowless 11th-floor conference room at the American Enterprise Institute in Washington", convened by Greve, in opposition to the legality of the Public Company Accounting Oversight Board. "By the time the meeting finished, the participants had decided to join forces and file suit... . No one paid much attention. But the yawning stopped on May 18, [2009,] when the Supreme Court announced it will hear the case." Political and public opinion studies AEI's "Political Corner" includes a range of political viewpoints, from the center-left Norman J. Ornstein to the conservative Michael Barone. The Political Corner sponsors the biannual Election Watch series, the "longest-running election program in Washington", featuring Barone, Ornstein, Karlyn Bowman, and—formerly—Ben Wattenberg and Bill Schneider, among others. Ornstein and Fortier (an expert on absentee and early voting) collaborate on a number of election- and governance-related projects, including the Election Reform Project. AEI and Brookings are sponsoring a project on election demographics called "The Future of Red, Blue, and Purple America", co-directed by Bowman and Ruy Teixeira. AEI's work on political processes and institutions has been a central part of the institute's research programs since the 1970s. The AEI Press published a series of several dozen volumes in the 1970s and 1980s called "At the Polls"; in each volume, scholars would assess a country's recent presidential or parliamentary election. AEI scholars have been called upon to observe and assess constitutional conventions and elections worldwide. In the early 1980s, AEI scholars were commissioned by the U.S. government to monitor plebiscites in Palau, the Federated States of Micronesia, and the Marshall Islands. Another landmark in AEI's political studies is After the People Vote. AEI's work on election reform continued into the 1990s and 2000s; Ornstein led a working group that drafted the Bipartisan Campaign Reform Act of 2002. AEI published Public Opinion magazine from 1978 to 1990 under the editorship of Seymour Martin Lipset and Ben Wattenberg, assisted by Karlyn Bowman. The institute's work on polling continues with public opinion features in The American Enterprise and The American, and in Bowman's AEI Studies in Public Opinion. Social and cultural studies AEI's social and cultural studies program dates to the 1970s, when William J. Baroody Sr., perceiving the importance of the philosophical and cultural underpinnings of modern economics and politics, invited social and religious thinkers like Irving Kristol and Michael Novak to take up residence at AEI.
Since then, AEI has sponsored research on a wide variety of issues, including education, religion, race and gender, and social welfare. Supported by the Bradley Foundation, AEI has hosted since 1989 the Bradley Lecture Series, "which aims to enrich debate in the Washington policy community through exploration of the philosophical and historical underpinnings of current controversies". Notable speakers in the series have included Kristol, Novak, Allan Bloom, Robert Bork, David Brooks, Lynne Cheney, Ron Chernow, Tyler Cowen, Niall Ferguson, Francis Fukuyama, Eugene Genovese, Robert P. George, Gertrude Himmelfarb, Samuel P. Huntington (giving the first public presentation of his "clash of civilizations" theory in 1992), Paul Johnson, Leon Kass, Charles Krauthammer, Bernard Lewis, Seymour Martin Lipset, Harvey C. Mansfield, Michael Medved, Allan H. Meltzer, Edmund Morris, Charles Murray, Steven Pinker, Norman Podhoretz, Richard Posner, Jonathan Rauch, Andrew Sullivan, Cass Sunstein, Sam Tanenhaus, James Q. Wilson, John Yoo, and Fareed Zakaria. Education Education policy studies at AEI are directed by Frederick M. Hess, who has authored, coauthored, or edited a number of volumes based on major conferences held at AEI on subjects like urban school reform, school choice, No Child Left Behind, teacher qualification, "educational entrepreneurship," student loans, and education research. Hess co-directs AEI's Future of American Education Project, whose working group includes Washington, D.C. schools chancellor Michelle Rhee and Michael Feinberg, the cofounder of KIPP. Hess works closely with Rhee: she has spoken at AEI on several occasions and appointed Hess to be one of two independent reform evaluators for the District of Columbia Public Schools. Hess coauthored Diplomas and Dropouts, a report on university graduation rates that was widely publicized in 2009. The report, along with other education-related projects, was supported by the Bill & Melinda Gates Foundation. AEI is often identified as a supporter of vouchers, but Hess has been critical of school vouchers: "[I]t is by now clear that aggressive reforms to bring market principles to American education have failed to live up to their billing. ... In the school choice debate, many reformers have gotten so invested in the language of 'choice' that they seem to forget choice is only half of the market equation. Markets are about both supply and demand—and, while 'choice' is concerned with emboldening consumer demand, the real action when it comes to prosperity, productivity, and progress is typically on the supply side." Funding In the 1980s, about 60% of AEI's funding came from organizations like Lilly Endowment, the Smith Richardson Foundation, the Rockefeller Brothers Trust and the Earhart Foundation. The remainder of its funding came from major corporations like Bethlehem Steel, Exxon, J.C. Penney and the Chase Manhattan Bank. AEI's revenues for the fiscal year ending June 2015 were $84,616,388 against expenses of $38,611,315. In 2014, the charity-evaluating service American Institute of Philanthropy gave AEI an "A−" grade in its CharityWatch "Top-Rated Charities" listing. As of 2005, AEI had received $960,000 from ExxonMobil. In 2010, AEI received a US$2.5 million grant from the Donors Capital Fund, a donor-advised fund. A 2013 study by Drexel University sociologist Robert J. Brulle noted that AEI received $86.7 million between 2003 and 2010.
See also List of American Enterprise Institute scholars and fellows Francis Boyer Award Irving Kristol Award References External links Official website Organizational Profile – National Center for Charitable Statistics (Urban Institute) American Enterprise Institute at Curlie EDIRC listing (provided by RePEc) "American Enterprise Institute Internal Revenue Service filings". ProPublica Nonprofit Explorer. American Enterprise Institute at Ballotpedia
north atlantic current
The North Atlantic Current (NAC), also known as North Atlantic Drift and North Atlantic Sea Movement, is a powerful warm western boundary current within the Atlantic Ocean that extends the Gulf Stream northeastward. The NAC originates from where the Gulf Stream turns north at the Southeast Newfoundland Rise, a submarine ridge that stretches southeast from the Grand Banks of Newfoundland. The NAC flows northward east of the Grand Banks, from 40°N to 51°N, before turning sharply east to cross the Atlantic. It transports more warm tropical water to northern latitudes than any other boundary current; more than 40 Sv (40 million m3/s; 1.4 billion cu ft/s) in the south and 20 Sv (20 million m3/s; 710 million cu ft/s) as it crosses the Mid-Atlantic Ridge. It reaches speeds of 2 knots (3.7 km/h; 2.3 mph; 1.0 m/s) near the North American coast. Directed by topography, the NAC meanders heavily, but in contrast to the meanders of the Gulf Stream, the NAC meanders remain stable without breaking off into eddies. The colder parts of the Gulf Stream turn northward near the "tail" of the Grand Banks at 50°W, where the Azores Current branches off to flow south of the Azores. From there the NAC flows northeastward, east of the Flemish Cap (47°N, 45°W). Approaching the Mid-Atlantic Ridge, it then turns eastward and becomes much broader and more diffuse. It then splits into a colder northeastern branch and a warmer eastern branch. As the warmer branch turns southward, most of the subtropical component of the Gulf Stream is diverted southward, and as a consequence, the North Atlantic is mostly supplied by subpolar waters, including a contribution from the Labrador Current recirculated into the NAC at 45°N. West of continental Europe, it splits into two major branches. One branch goes southeast, becoming the Canary Current as it passes northwest Africa and turns southwest. The other major branch continues north along the coast of northwestern Europe. Other branches include the Irminger Current and the Norwegian Current. Driven by the global thermohaline circulation, the North Atlantic Current is part of the wind-driven Gulf Stream, which goes further east and north from the North American coast across the Atlantic and into the Arctic Ocean. The North Atlantic Current, together with the Gulf Stream, has a long-lived reputation for having a considerable warming influence on European climate. However, the principal cause for differences in winter climate between North America and Europe seems to be winds rather than ocean currents (although the currents do exert influence at very high latitudes by preventing the formation of sea ice). Climate change Unlike the AMOC, observations of Labrador Sea outflow showed no negative trend from 1997 to 2009, and the Labrador Sea convection began to intensify in 2012, reaching a new high in 2016. As of 2022, the trend of strengthened Labrador Sea convection appears to hold, and is associated with observed increases in marine primary production. Yet, a 150-year dataset suggests that even this recently strengthened convection is anomalously weak compared to its baseline state. Some climate models indicate that the deep convection in the Labrador and Irminger Seas could collapse under certain global warming scenarios, which would then collapse the entire circulation of the North Atlantic subpolar gyre. It is considered unlikely to recover even if the temperature is returned to a lower level, making it an example of a climate tipping point.
This would result in rapid cooling, with implications for economic sectors including agriculture, water resources and energy management in Western Europe and the East Coast of the United States. Frajka-Williams et al. (2017) pointed out that recent cooling of the subpolar gyre, warm temperatures in the subtropics and cool anomalies over the tropics have increased the meridional gradient in sea surface temperatures, a pattern not captured by the AMO Index. A 2021 study found that this collapse occurs in only four CMIP6 models out of 35 analyzed. However, only 11 models out of 35 can simulate the North Atlantic Current with a high degree of accuracy, and this includes all four models which simulate collapse of the subpolar gyre. As a result, the study estimated the risk of an abrupt cooling event over Europe caused by the collapse of the current at 36.4%, which is lower than the 45.5% chance estimated by the previous generation of models. In 2022, a paper suggested that a previous disruption of the subpolar gyre was connected to the Little Ice Age. A 2022 review study on climate tipping points in Science noted that in the scenarios where this convection collapses, it is most likely to be triggered by 1.8 degrees of global warming. However, model differences mean that the required warming may be as low as 1.1 degrees or as high as 3.8 degrees. Once triggered, the collapse of the current would most likely take 10 years from start to end, with a range between 5 and 50 years. The loss of this convection is estimated to lower the global temperature by up to 0.5 degrees, while the average temperature in certain regions of the North Atlantic decreases by around 3 degrees. There are also substantial impacts on regional precipitation. See also North Atlantic Oscillation Ocean gyre Physical oceanography References Notes Sources External links "The North Atlantic Current". Elizabeth Rowe, Arthur J. Mariano, Edward H. Ryan, Cooperative Institute for Marine and Atmospheric Studies
eocene thermal maximum 2
Eocene Thermal Maximum 2 (ETM-2), also called H-1 or the Elmo (Eocene Layer of Mysterious Origin) event, was a transient period of global warming that occurred around either 54.09 Ma or 53.69 Ma. It appears to be the second major hyperthermal that punctuated the long-term warming trend from the Late Paleocene through the early Eocene (58 to 50 Ma).The hyperthermals were geologically brief time intervals (<200,000 years) of global warming and massive input of isotopically light carbon into the atmosphere. The most extreme and best-studied event, the Paleocene-Eocene Thermal Maximum (PETM or ETM-1), occurred about 1.8 million years before ETM-2, at approximately 55.5 Ma. Other hyperthermals likely followed ETM-2 at nominally 53.6 Ma (H-2), 53.3 (I-1), 53.2 (I-2) and 52.8 Ma (informally called K, X or ETM-3). The number, nomenclature, absolute ages and relative global impact of the Eocene hyperthermals are the source of much current research. In any case, the hyperthermals appear to have ushered in the Early Eocene Climatic Optimum, the warmest sustained interval of the Cenozoic Era. They also definitely precede the Azolla event at about 49 Ma. Timing ETM-2 is clearly recognized in sediment sequences by analyzing the stable carbon isotope composition of carbon-bearing material. The 13C/12C ratio of calcium carbonate or organic matter drops significantly across the event. This is similar to what happens when one examines sediment across the PETM, although the magnitude of the negative carbon isotope excursion is not as large. The timing of Earth system perturbations during ETM-2 and PETM also appear different. Specifically, the onset of ETM-2 may have been longer (perhaps 30,000 years) while the recovery seems to have been shorter (perhaps <50,000 years). (Note, however, that the timing of short-term carbon cycle perturbations during both events remains difficult to constrain.) A thin clay-rich horizon marks ETM-2 in marine sediment from widely separated locations. In sections recovered from the deep sea (for example those recovered by Ocean Drilling Program Leg 208 on Walvis Ridge), this layer is caused by dissolution of calcium carbonate. However, in sections deposited along continental margins (for example those now exposed along the Waiau Toa / Clarence River, New Zealand), the clay-rich horizon represents dilution by excess accumulation of terrestrial material entering the ocean. Similar changes in sediment accumulation are found across the PETM. In sediment from Lomonosov Ridge in the Arctic Ocean, intervals across both ETM-2 and PETM show signs of higher temperature, lower salinity and lower dissolved oxygen. Causes The PETM and ETM-2 are thought to have a similar generic origin, although this idea is at the edge of current research. During both events, a tremendous amount of 13C-depleted carbon rapidly entered the ocean and atmosphere. This decreased the 13C/12C ratio of carbon-bearing sedimentary components, and dissolved carbonate in the deep ocean. Somehow the carbon input was coupled to an increase in Earth surface temperature and a greater seasonality in precipitation, which explains the excess terrestrial sediment discharge along continental margins. Possible explanations for changes during ETM-2 are the same as those for the PETM, and are discussed in that article. The H-2 event appears to be a "minor" hyperthermal that follows ETM-2 (H-1) by about 100,000 years. 
This has led to speculation that the two events are somehow coupled and paced by changes in orbital eccentricity. Sea surface temperatures (SSTs) climbed by 2–4 °C and salinity increased by ~1–2 ppt in subtropical waters during ETM-2. Effects on life As in the case of the PETM, reversible dwarfing of mammals has been noted during the ETM-2. See also Paleocene–Eocene Thermal Maximum References External links Appy Sluijs. "Climate and carbon cycle dynamics during late Paleocene - early Eocene transient global warming events" (PDF). Archived from the original (PDF) on 30 May 2009. Lucy Stap; Appy Sluijs; Ellen Thomas; Lucas Lourens. "Patterns and magnitude of deep sea carbonate dissolution during Eocene Thermal Maximum 2 and H2, Walvis Ridge, southeastern Atlantic Ocean".
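As background to the carbon isotope measurements described in the article above, the size of such an excursion is conventionally reported in delta notation relative to the VPDB standard; this is the standard definition, not something specific to the studies cited here: δ13C (in per mil, ‰) = (Rsample/Rstandard − 1) × 1000, where R is the 13C/12C ratio and Rstandard for VPDB is approximately 0.0112372. A negative carbon isotope excursion, such as the one marking ETM-2, corresponds to a drop in the δ13C value measured in carbonate or organic matter.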
chlorodifluoromethane
Chlorodifluoromethane or difluoromonochloromethane is a hydrochlorofluorocarbon (HCFC). This colorless gas is better known as HCFC-22, or R-22, or CHClF2. It was commonly used as a propellant and refrigerant. These applications were phased out under the Montreal Protocol in developed countries in 2020 due to the compound's ozone depletion potential (ODP) and high global warming potential (GWP), and in developing countries this process will be completed by 2030. R-22 is a versatile intermediate in industrial organofluorine chemistry, e.g. as a precursor to tetrafluoroethylene. Production and current applications Worldwide production of R-22 in 2008 was about 800 Gg per year, up from about 450 Gg per year in 1998, with most production in developing countries. R-22 use is being phased out in developing countries, where it is largely used for air conditioning applications. Air conditioning sales are growing 20% annually in India and China. R-22 is prepared from chloroform: HCCl3 + 2 HF → HCF2Cl + 2 HCl. An important application of R-22 is as a precursor to tetrafluoroethylene. This conversion involves pyrolysis to give difluorocarbene, which dimerizes: 2 CHClF2 → C2F4 + 2 HCl. The compound also yields difluorocarbene upon treatment with strong base and is used in the laboratory as a source of this reactive intermediate. The pyrolysis of R-22 in the presence of chlorofluoromethane gives hexafluorobenzene. Environmental effects R-22 is often used as an alternative to the highly ozone-depleting CFC-11 and CFC-12, because of its relatively low ozone depletion potential of 0.055, among the lowest for chlorine-containing haloalkanes. However, even this lower ozone depletion potential is no longer considered acceptable. As an additional environmental concern, R-22 is a powerful greenhouse gas with a GWP of 1810, indicating that it is 1810 times as powerful a greenhouse gas as carbon dioxide. Hydrofluorocarbons (HFCs) are often substituted for R-22 because of their lower ozone depletion potential, but these refrigerants often have a higher GWP. R-410A, for example, is often substituted, but has a GWP of 2088. Another substitute is R-404A with a GWP of 3900. Other substitute refrigerants are available with low GWP. Ammonia (R-717), with a GWP of <1, remains a popular substitute on fishing vessels and in large industrial applications. Ammonia's toxicity in high concentrations limits its use in small-scale refrigeration applications. Propane (R-290) is another example, and has a GWP of 3. Propane was the de facto refrigerant in systems smaller than industrial scale before the introduction of CFCs. Propane refrigerators' reputation as a fire hazard kept delivered ice and the ice box the overwhelming consumer choice, despite their inconvenience and higher cost, until safe CFC systems overcame the negative perception of refrigerators. Illegal to use as a refrigerant in the US for decades, propane is now permitted in limited quantities suitable for small refrigerators. It is not lawful to use in air conditioners or larger refrigerators because of its flammability and potential for explosion. Phaseout in the European Union Since 1 January 2010, it has been illegal to use newly manufactured HCFCs to service refrigeration and air-conditioning equipment; only reclaimed and recycled HCFCs may be used. In practice this means that the gas has to be removed from the equipment before servicing and replaced afterwards, rather than refilling with new gas.
Since 1 January 2015, it has been illegal to use any HCFCs to service refrigeration and air-conditioning equipment; broken equipment that used HCFC refrigerants must be replaced with equipment that does not use them. Phaseout in the United States R-22 was mostly phased out in new equipment in the United States by regulatory action by the EPA under the Significant New Alternatives Program (SNAP), by rules 20 and 21 of the program, due to its high global warming potential. The EPA program was consistent with the Montreal Accords, but international agreements must be ratified by the US Senate to have legal effect. A 2017 decision of the US Court of Appeals for the District of Columbia Circuit held that the US EPA lacked authority to regulate the use of R-22 under SNAP. In essence, the court ruled the EPA's statutory authority was for ozone reduction, not global warming. The EPA subsequently issued guidance to the effect that the EPA would no longer regulate R-22. A 2018 ruling by the same court held that the EPA failed to conform with required procedure when it issued its guidance pursuant to the 2017 ruling, voiding the guidance, but not the prior ruling that required it. The refrigeration and air conditioning industry had already discontinued production of new R-22 equipment. The practical effect of these rulings is to reduce the cost of imported R-22 to maintain aging equipment, extending its service life, while preventing the use of R-22 in new equipment. R-22 retrofit using substitute refrigerants The energy efficiency and system capacity of systems designed for R-22 are slightly greater when using R-22 than when using the available substitutes. R-407A is for use in low- and medium-temp refrigeration. Uses a polyolester (POE) oil. R-407C is for use in air conditioning. Uses a minimum of 20 percent POE oil. R-407F and R-407H are for use in medium- and low-temperature refrigeration applications (supermarkets, cold storage, and process refrigeration); direct expansion system design only. They use a POE oil. R-421A is for use in "air conditioning split systems, heat pumps, supermarket pak systems, dairy chillers, reach-in storage, bakery applications, refrigerated transport, self-contained display cabinets, and walk-in coolers." Uses mineral oil (MO), alkylbenzene (AB), and POE. R-422B is for use in low-, medium-, and high-temperature applications. It is not recommended for use in flooded applications. R-422C is for use in medium- and low-temperature applications. The TXV power element will need to be changed to a 404A/507A element and critical seals (elastomers) may need to be replaced. R-422D is for use in low-temp applications, and is mineral oil compatible. R-424A is for use in air conditioning as well as medium-temp refrigeration, in temperature ranges of 20 to 50 °F. It works with MO, alkylbenzenes (AB), and POE oils. R-427A is for use in air conditioning and refrigeration applications. It does not require all the mineral oil to be removed. It works with MO, AB, and POE oils. R-434A is for use in water-cooled and process chillers for air conditioning and medium- and low-temperature applications. It works with MO, AB, and POE oils. R-438A (MO-99) is for use in low-, medium-, and high-temperature applications. It is compatible with all lubricants. R-458A is for use in air conditioning and refrigeration applications, without capacity or efficiency loss. Works with MO, AB, and POE oils. R-32 or HFC-32 (difluoromethane) is for use in air conditioning and refrigeration applications.
It has zero ozone depletion potential (ODP) and a global warming potential (GWP) 675 times that of carbon dioxide. Physical properties It has two crystalline forms: crystalline II below 59 K and crystalline I between 59 K and 115.73 K. Price history and availability The EPA's analysis indicated that the existing inventory was between 22,700 t and 45,400 t. In 2012 the EPA reduced the allowable amount of R-22 by 45%, causing the price to rise by more than 300%. For 2013, the EPA reduced the allowable amount by a further 29%.
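The practical weight of the GWP figures quoted in this article can be illustrated with a short calculation: multiplying a leaked refrigerant mass by its 100-year GWP gives the equivalent mass of carbon dioxide. The following minimal Python sketch uses the GWP values quoted above; the 2 kg charge is an assumed illustrative figure, not a value from the article.

```python
# Minimal sketch: CO2-equivalent of a refrigerant leak, using the
# 100-year GWP values quoted in this article (illustrative only).
GWP_100YR = {
    "R-22": 1810,
    "R-410A": 2088,
    "R-404A": 3900,
    "R-717 (ammonia)": 0,   # quoted as "less than 1"; treated as ~0 here
    "R-290 (propane)": 3,
}

def co2_equivalent_kg(refrigerant: str, leaked_kg: float) -> float:
    """Return the CO2-equivalent mass (kg) of `leaked_kg` of refrigerant."""
    return leaked_kg * GWP_100YR[refrigerant]

if __name__ == "__main__":
    # Hypothetical example: a 2 kg charge lost from a small split system.
    for name in GWP_100YR:
        print(f"{name:>18}: {co2_equivalent_kg(name, 2.0):8.0f} kg CO2e")
```

On these figures, losing a 2 kg charge of R-22 corresponds to roughly 3.6 tonnes of CO2, whereas the same loss of propane amounts to only about 6 kg.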
list of extinction events
This is a list of extinction events, both mass and minor.
fluorine
Fluorine is a chemical element; it has symbol F and atomic number 9. It is the lightest halogen and exists at standard conditions as a highly toxic, pale yellow diatomic gas. As the most electronegative reactive element, it is extremely reactive, reacting with all other elements except the light inert gases. Among the elements, fluorine ranks 24th in universal abundance and 13th in terrestrial abundance. Fluorite, the primary mineral source of fluorine which gave the element its name, was first described in 1529; as it was added to metal ores to lower their melting points for smelting, the Latin verb fluo meaning 'flow' gave the mineral its name. Proposed as an element in 1810, fluorine proved difficult and dangerous to separate from its compounds, and several early experimenters died or sustained injuries from their attempts. Only in 1886 did French chemist Henri Moissan isolate elemental fluorine using low-temperature electrolysis, a process still employed for modern production. Industrial production of fluorine gas for uranium enrichment, its largest application, began during the Manhattan Project in World War II. Owing to the expense of refining pure fluorine, most commercial applications use fluorine compounds, with about half of mined fluorite used in steelmaking. The rest of the fluorite is converted into corrosive hydrogen fluoride en route to various organic fluorides, or into cryolite, which plays a key role in aluminium refining. Molecules containing a carbon–fluorine bond often have very high chemical and thermal stability; their major uses are as refrigerants, in electrical insulation, and in cookware, the last as PTFE (Teflon). Pharmaceuticals such as atorvastatin and fluoxetine contain C−F bonds. The fluoride ion from dissolved fluoride salts inhibits dental cavities, and so finds use in toothpaste and water fluoridation. Global fluorochemical sales amount to more than US$69 billion a year. Fluorocarbon gases are generally greenhouse gases with global-warming potentials 100 to 23,500 times that of carbon dioxide, and SF6 has the highest global warming potential of any known substance. Organofluorine compounds often persist in the environment due to the strength of the carbon–fluorine bond. Fluorine has no known metabolic role in mammals; a few plants and sea sponges synthesize organofluorine poisons (most often monofluoroacetates) that help deter predation. Characteristics Electron configuration Fluorine atoms have nine electrons, one fewer than neon, and the electron configuration 1s22s22p5: two electrons in a filled inner shell and seven in an outer shell requiring one more to be filled. The outer electrons are ineffective at nuclear shielding and experience a high effective nuclear charge of 9 − 2 = 7; this affects the atom's physical properties. Fluorine's first ionization energy is third-highest among all elements, behind helium and neon, which complicates the removal of electrons from neutral fluorine atoms. It also has a high electron affinity, second only to chlorine, and tends to capture an electron to become isoelectronic with the noble gas neon; it has the highest electronegativity of any reactive element. Fluorine atoms have a small covalent radius of around 60 picometers, similar to those of its period neighbors oxygen and neon.
Reactivity The bond energy of difluorine is much lower than that of either Cl2 or Br2 and similar to the easily cleaved peroxide bond; this, along with high electronegativity, accounts for fluorine's easy dissociation, high reactivity, and strong bonds to non-fluorine atoms. Conversely, bonds to other atoms are very strong because of fluorine's high electronegativity. Unreactive substances like powdered steel, glass fragments, and asbestos fibers react quickly with cold fluorine gas; wood and water spontaneously combust under a fluorine jet.Reactions of elemental fluorine with metals require varying conditions. Alkali metals cause explosions and alkaline earth metals display vigorous activity in bulk; to prevent passivation from the formation of metal fluoride layers, most other metals such as aluminium and iron must be powdered, and noble metals require pure fluorine gas at 300–450 °C (575–850 °F). Some solid nonmetals (sulfur, phosphorus) react vigorously in liquid fluorine. Hydrogen sulfide and sulfur dioxide combine readily with fluorine, the latter sometimes explosively; sulfuric acid exhibits much less activity, requiring elevated temperatures.Hydrogen, like some of the alkali metals, reacts explosively with fluorine. Carbon, as lamp black, reacts at room temperature to yield tetrafluoromethane. Graphite combines with fluorine above 400 °C (750 °F) to produce non-stoichiometric carbon monofluoride; higher temperatures generate gaseous fluorocarbons, sometimes with explosions. Carbon dioxide and carbon monoxide react at or just above room temperature, whereas paraffins and other organic chemicals generate strong reactions: even completely substituted haloalkanes such as carbon tetrachloride, normally incombustible, may explode. Although nitrogen trifluoride is stable, nitrogen requires an electric discharge at elevated temperatures for reaction with fluorine to occur, due to the very strong triple bond in elemental nitrogen; ammonia may react explosively. Oxygen does not combine with fluorine under ambient conditions, but can be made to react using electric discharge at low temperatures and pressures; the products tend to disintegrate into their constituent elements when heated. Heavier halogens react readily with fluorine as does the noble gas radon; of the other noble gases, only xenon and krypton react, and only under special conditions. Argon does not react with fluorine gas; however, it does form a compound with fluorine, argon fluorohydride. Phases At room temperature, fluorine is a gas of diatomic molecules, pale yellow when pure (sometimes described as yellow-green). It has a characteristic halogen-like pungent and biting odor detectable at 20 ppb. Fluorine condenses into a bright yellow liquid at −188 °C (−306 °F), a transition temperature similar to those of oxygen and nitrogen.Fluorine has two solid forms, α- and β-fluorine. The latter crystallizes at −220 °C (−364 °F) and is transparent and soft, with the same disordered cubic structure of freshly crystallized solid oxygen, unlike the orthorhombic systems of other solid halogens. Further cooling to −228 °C (−378 °F) induces a phase transition into opaque and hard α-fluorine, which has a monoclinic structure with dense, angled layers of molecules. The transition from β- to α-fluorine is more exothermic than the condensation of fluorine, and can be violent. Isotopes Only one isotope of fluorine occurs naturally in abundance, the stable isotope 19F. 
It has a high magnetogyric ratio and exceptional sensitivity to magnetic fields; because it is also the only stable isotope, it is used in magnetic resonance imaging. Eighteen radioisotopes with mass numbers from 13 to 31 have been synthesized, of which 18F is the most stable with a half-life of 109.77 minutes. 18F is a natural trace radioisotope produced by cosmic ray spallation of atmospheric argon as well as by reaction of protons with natural oxygen: 18O + p → 18F + n. Other radioisotopes have half-lives less than 70 seconds; most decay in less than half a second. The isotopes 17F and 18F undergo β+ decay and electron capture, lighter isotopes decay by proton emission, and those heavier than 19F undergo β− decay (the heaviest ones with delayed neutron emission). Two metastable isomers of fluorine are known, 18mF, with a half-life of 162(7) nanoseconds, and 26mF, with a half-life of 2.2(1) milliseconds. Occurrence Universe Among the lighter elements, fluorine's abundance value of 400 ppb (parts per billion) – 24th among elements in the universe – is exceptionally low: other elements from carbon to magnesium are twenty or more times as common. This is because stellar nucleosynthesis processes bypass fluorine, and any fluorine atoms otherwise created have high nuclear cross sections, allowing collisions with hydrogen or helium to generate oxygen or neon respectively.Beyond this transient existence, three explanations have been proposed for the presence of fluorine: during type II supernovae, bombardment of neon atoms by neutrinos could transmute them to fluorine; the solar wind of Wolf–Rayet stars could blow fluorine away from any hydrogen or helium atoms; or fluorine is borne out on convection currents arising from fusion in asymptotic giant branch stars. Earth Fluorine is the thirteenth most common element in Earth's crust at 600–700 ppm (parts per million) by mass. Though believed not to occur naturally, elemental fluorine has been shown to be present as an occlusion in antozonite, a variant of fluorite. Most fluorine exists as fluoride-containing minerals. Fluorite, fluorapatite and cryolite are the most industrially significant. Fluorite (CaF2), also known as fluorspar, abundant worldwide, is the main source of fluoride, and hence fluorine. China and Mexico are the major suppliers. Fluorapatite (Ca5(PO4)3F), which contains most of the world's fluoride, is an inadvertent source of fluoride as a byproduct of fertilizer production. Cryolite (Na3AlF6), used in the production of aluminium, is the most fluorine-rich mineral. Economically viable natural sources of cryolite have been exhausted, and most is now synthesised commercially. Other minerals such as topaz contain fluorine. Fluorides, unlike other halides, are insoluble and do not occur in commercially favorable concentrations in saline waters. Trace quantities of organofluorines of uncertain origin have been detected in volcanic eruptions and geothermal springs. The existence of gaseous fluorine in crystals, suggested by the smell of crushed antozonite, is contentious; a 2012 study reported the presence of 0.04% F2 by weight in antozonite, attributing these inclusions to radiation from the presence of tiny amounts of uranium. History Early discoveries In 1529, Georgius Agricola described fluorite as an additive used to lower the melting point of metals during smelting. He penned the Latin word fluorēs (fluor, flow) for fluorite rocks. The name later evolved into fluorspar (still commonly used) and then fluorite. 
The composition of fluorite was later determined to be calcium difluoride.Hydrofluoric acid was used in glass etching from 1720 onward. Andreas Sigismund Marggraf first characterized it in 1764 when he heated fluorite with sulfuric acid, and the resulting solution corroded its glass container. Swedish chemist Carl Wilhelm Scheele repeated the experiment in 1771, and named the acidic product fluss-spats-syran (fluorspar acid). In 1810, the French physicist André-Marie Ampère suggested that hydrogen and an element analogous to chlorine constituted hydrofluoric acid. He also proposed in a letter to Sir Humphry Davy dated August 26, 1812 that this then-unknown substance may be named fluorine from fluoric acid and the -ine suffix of other halogens. This word, often with modifications, is used in most European languages; however, Greek, Russian, and some others, following Ampère's later suggestion, use the name ftor or derivatives, from the Greek φθόριος (phthorios, destructive). The New Latin name fluorum gave the element its current symbol F; Fl was used in early papers. Isolation Initial studies on fluorine were so dangerous that several 19th-century experimenters were deemed "fluorine martyrs" after misfortunes with hydrofluoric acid. Isolation of elemental fluorine was hindered by the extreme corrosiveness of both elemental fluorine itself and hydrogen fluoride, as well as the lack of a simple and suitable electrolyte. Edmond Frémy postulated that electrolysis of pure hydrogen fluoride to generate fluorine was feasible and devised a method to produce anhydrous samples from acidified potassium bifluoride; instead, he discovered that the resulting (dry) hydrogen fluoride did not conduct electricity. Frémy's former student Henri Moissan persevered, and after much trial and error found that a mixture of potassium bifluoride and dry hydrogen fluoride was a conductor, enabling electrolysis. To prevent rapid corrosion of the platinum in his electrochemical cells, he cooled the reaction to extremely low temperatures in a special bath and forged cells from a more resistant mixture of platinum and iridium, and used fluorite stoppers. In 1886, after 74 years of effort by many chemists, Moissan isolated elemental fluorine.In 1906, two months before his death, Moissan received the Nobel Prize in Chemistry, with the following citation: [I]n recognition of the great services rendered by him in his investigation and isolation of the element fluorine ... The whole world has admired the great experimental skill with which you have studied that savage beast among the elements. Later uses The Frigidaire division of General Motors (GM) experimented with chlorofluorocarbon refrigerants in the late 1920s, and Kinetic Chemicals was formed as a joint venture between GM and DuPont in 1930 hoping to market Freon-12 (CCl2F2) as one such refrigerant. It replaced earlier and more toxic compounds, increased demand for kitchen refrigerators, and became profitable; by 1949 DuPont had bought out Kinetic and marketed several other Freon compounds. Polytetrafluoroethylene (Teflon) was serendipitously discovered in 1938 by Roy J. Plunkett while working on refrigerants at Kinetic, and its superlative chemical and thermal resistance lent it to accelerated commercialization and mass production by 1941.Large-scale production of elemental fluorine began during World War II. 
Germany used high-temperature electrolysis to make tons of the planned incendiary chlorine trifluoride and the Manhattan Project used huge quantities to produce uranium hexafluoride for uranium enrichment. Since UF6 is as corrosive as fluorine, gaseous diffusion plants required special materials: nickel for membranes, fluoropolymers for seals, and liquid fluorocarbons as coolants and lubricants. This burgeoning nuclear industry later drove post-war fluorochemical development. Compounds Fluorine has a rich chemistry, encompassing organic and inorganic domains. It combines with metals, nonmetals, metalloids, and most noble gases, and almost exclusively assumes an oxidation state of −1. Fluorine's high electron affinity results in a preference for ionic bonding; when it forms covalent bonds, these are polar, and almost always single. Metals Alkali metals form ionic and highly soluble monofluorides; these have the cubic arrangement of sodium chloride and analogous chlorides. Alkaline earth difluorides possess strong ionic bonds but are insoluble in water, with the exception of beryllium difluoride, which also exhibits some covalent character and has a quartz-like structure. Rare earth elements and many other metals form mostly ionic trifluorides.Covalent bonding first comes to prominence in the tetrafluorides: those of zirconium, hafnium and several actinides are ionic with high melting points, while those of titanium, vanadium, and niobium are polymeric, melting or decomposing at no more than 350 °C (660 °F). Pentafluorides continue this trend with their linear polymers and oligomeric complexes. Thirteen metal hexafluorides are known, all octahedral, and are mostly volatile solids but for liquid MoF6 and ReF6, and gaseous WF6. Rhenium heptafluoride, the only characterized metal heptafluoride, is a low-melting molecular solid with pentagonal bipyramidal molecular geometry. Metal fluorides with more fluorine atoms are particularly reactive. Hydrogen Hydrogen and fluorine combine to yield hydrogen fluoride, in which discrete molecules form clusters by hydrogen bonding, resembling water more than hydrogen chloride. It boils at a much higher temperature than heavier hydrogen halides and unlike them is miscible with water. Hydrogen fluoride readily hydrates on contact with water to form aqueous hydrogen fluoride, also known as hydrofluoric acid. Unlike the other hydrohalic acids, which are strong, hydrofluoric acid is a weak acid at low concentrations. However, it can attack glass, something the other acids cannot do. Other reactive nonmetals Binary fluorides of metalloids and p-block nonmetals are generally covalent and volatile, with varying reactivities. Period 3 and heavier nonmetals can form hypervalent fluorides.Boron trifluoride is planar and possesses an incomplete octet. It functions as a Lewis acid and combines with Lewis bases like ammonia to form adducts. Carbon tetrafluoride is tetrahedral and inert; its group analogues, silicon and germanium tetrafluoride, are also tetrahedral but behave as Lewis acids. The pnictogens form trifluorides that increase in reactivity and basicity with higher molecular weight, although nitrogen trifluoride resists hydrolysis and is not basic. 
The pentafluorides of phosphorus, arsenic, and antimony are more reactive than their respective trifluorides, with antimony pentafluoride among the strongest neutral Lewis acids known, exceeded only by gold pentafluoride. Chalcogens have diverse fluorides: unstable difluorides have been reported for oxygen (the only known compound with oxygen in an oxidation state of +2), sulfur, and selenium; tetrafluorides and hexafluorides exist for sulfur, selenium, and tellurium. The latter are stabilized by more fluorine atoms and lighter central atoms, so sulfur hexafluoride is especially inert. Chlorine, bromine, and iodine can each form mono-, tri-, and pentafluorides, but only iodine heptafluoride has been characterized among possible interhalogen heptafluorides. Many of them are powerful sources of fluorine atoms, and industrial applications using chlorine trifluoride require precautions similar to those using fluorine. Noble gases Noble gases, having complete electron shells, defied reaction with other elements until 1962, when Neil Bartlett reported synthesis of xenon hexafluoroplatinate; xenon difluoride, tetrafluoride, hexafluoride, and multiple oxyfluorides have been isolated since then. Among other noble gases, krypton forms a difluoride, and radon and fluorine generate a solid suspected to be radon difluoride. Binary fluorides of lighter noble gases are exceptionally unstable: argon and hydrogen fluoride combine under extreme conditions to give argon fluorohydride. Helium has no long-lived fluorides, and no neon fluoride has ever been observed; helium fluorohydride has been detected for milliseconds at high pressures and low temperatures. Organic compounds The carbon–fluorine bond is organic chemistry's strongest, and gives stability to organofluorines. It is almost non-existent in nature, but is used in artificial compounds. Research in this area is usually driven by commercial applications; the compounds involved are diverse and reflect the complexity inherent in organic chemistry. Discrete molecules The substitution of hydrogen atoms in an alkane by progressively more fluorine atoms gradually alters several properties: melting and boiling points are lowered, density increases, solubility in hydrocarbons decreases, and overall stability increases. Perfluorocarbons, in which all hydrogen atoms are substituted, are insoluble in most organic solvents, reacting at ambient conditions only with sodium in liquid ammonia. The term perfluorinated compound is used for what would otherwise be a perfluorocarbon if not for the presence of a functional group, often a carboxylic acid. These compounds share many properties with perfluorocarbons, such as stability and hydrophobicity, while the functional group augments their reactivity, enabling them to adhere to surfaces or act as surfactants; fluorosurfactants, in particular, can lower the surface tension of water more than their hydrocarbon-based analogues. Fluorotelomers, which have some unfluorinated carbon atoms near the functional group, are also regarded as perfluorinated. Polymers Polymers exhibit the same stability increases afforded by fluorine substitution (for hydrogen) in discrete molecules; their melting points generally increase too. Polytetrafluoroethylene (PTFE), the simplest fluoropolymer and the perfluoro analogue of polyethylene with structural unit –CF2–, demonstrates this change as expected, but its very high melting point makes it difficult to mold.
Various PTFE derivatives are less temperature-tolerant but easier to mold: fluorinated ethylene propylene replaces some fluorine atoms with trifluoromethyl groups, perfluoroalkoxy alkanes do the same with trifluoromethoxy groups, and Nafion contains perfluoroether side chains capped with sulfonic acid groups. Other fluoropolymers retain some hydrogen atoms; polyvinylidene fluoride has half the fluorine atoms of PTFE and polyvinyl fluoride has a quarter, but both behave much like perfluorinated polymers. Production Elemental fluorine and virtually all fluorine compounds are produced from hydrogen fluoride or its aqueous solutions, hydrofluoric acid. Hydrogen fluoride is produced in kilns by the endothermic reaction of fluorite (CaF2) with sulfuric acid: CaF2 + H2SO4 → 2 HF(g) + CaSO4The gaseous HF can then be absorbed in water or liquefied.About 20% of manufactured HF is a byproduct of fertilizer production, which produces hexafluorosilicic acid (H2SiF6), which can be degraded to release HF thermally and by hydrolysis: H2SiF6 → 2 HF + SiF4 SiF4 + 2 H2O → 4 HF + SiO2 Industrial routes to F2 Moissan's method is used to produce industrial quantities of fluorine, via the electrolysis of a potassium fluoride/hydrogen fluoride mixture: hydrogen and fluoride ions are reduced and oxidized at a steel container cathode and a carbon block anode, under 8–12 volts, to generate hydrogen and fluorine gas respectively. Temperatures are elevated, KF•2HF melting at 70 °C (158 °F) and being electrolyzed at 70–130 °C (158–266 °F). KF, which acts to provide electrical conductivity, is essential since pure HF cannot be electrolyzed because it is virtually non-conductive. Fluorine can be stored in steel cylinders that have passivated interiors, at temperatures below 200 °C (392 °F); otherwise nickel can be used. Regulator valves and pipework are made of nickel, the latter possibly using Monel instead. Frequent passivation, along with the strict exclusion of water and greases, must be undertaken. In the laboratory, glassware may carry fluorine gas under low pressure and anhydrous conditions; some sources instead recommend nickel-Monel-PTFE systems. Laboratory routes While preparing for a 1986 conference to celebrate the centennial of Moissan's achievement, Karl O. Christe reasoned that chemical fluorine generation should be feasible since some metal fluoride anions have no stable neutral counterparts; their acidification potentially triggers oxidation instead. He devised a method which evolves fluorine at high yield and atmospheric pressure: 2 KMnO4 + 2 KF + 10 HF + 3 H2O2 → 2 K2MnF6 + 8 H2O + 3 O2↑ 2 K2MnF6 + 4 SbF5 → 4 KSbF6 + 2 MnF3 + F2↑Christe later commented that the reactants "had been known for more than 100 years and even Moissan could have come up with this scheme." As late as 2008, some references still asserted that fluorine was too reactive for any chemical isolation. Industrial applications Fluorite mining, which supplies most global fluorine, peaked in 1989 when 5.6 million metric tons of ore were extracted. Chlorofluorocarbon restrictions lowered this to 3.6 million tons in 1994; production has since been increasing. Around 4.5 million tons of ore and revenue of US$550 million were generated in 2003; later reports estimated 2011 global fluorochemical sales at $15 billion and predicted 2016–18 production figures of 3.5 to 5.9 million tons, and revenue of at least $20 billion. 
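The kiln reaction given above also fixes the raw-material demand per unit of hydrogen fluoride. The sketch below is a simplified stoichiometric estimate using standard molar masses; it assumes pure reagents and complete conversion, ignoring ore grade and process losses.

```python
# Stoichiometry of the kiln reaction quoted above: CaF2 + H2SO4 -> 2 HF + CaSO4.
# Simplified sketch: pure reagents, complete conversion, no losses.
MOLAR_MASS = {"CaF2": 78.07, "H2SO4": 98.08, "HF": 20.01}  # g/mol

def feed_per_kg_hf() -> dict:
    """Mass (kg) of fluorite and sulfuric acid consumed per kg of HF produced."""
    mol_hf = 1000.0 / MOLAR_MASS["HF"]   # moles of HF in 1 kg
    mol_reaction = mol_hf / 2.0          # each reaction event yields 2 HF
    return {
        "CaF2": mol_reaction * MOLAR_MASS["CaF2"] / 1000.0,
        "H2SO4": mol_reaction * MOLAR_MASS["H2SO4"] / 1000.0,
    }

if __name__ == "__main__":
    for reagent, kg in feed_per_kg_hf().items():
        print(f"{reagent}: {kg:.2f} kg per kg of HF")
```

This yields roughly 1.95 kg of fluorite and 2.45 kg of sulfuric acid per kilogram of HF, before accounting for real-world inefficiencies.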
Froth flotation separates mined fluorite into two main metallurgical grades of equal proportion: 60–85% pure metspar is almost all used in iron smelting whereas 97%+ pure acidspar is mainly converted to the key industrial intermediate hydrogen fluoride. At least 17,000 metric tons of fluorine are produced each year. It costs only $5–8 per kilogram as uranium or sulfur hexafluoride, but many times more as an element because of handling challenges. Most processes using free fluorine in large amounts employ in situ generation under vertical integration.The largest application of fluorine gas, consuming up to 7,000 metric tons annually, is in the preparation of UF6 for the nuclear fuel cycle. Fluorine is used to fluorinate uranium tetrafluoride, itself formed from uranium dioxide and hydrofluoric acid. Fluorine is monoisotopic, so any mass differences between UF6 molecules are due to the presence of 235U or 238U, enabling uranium enrichment via gaseous diffusion or gas centrifuge. About 6,000 metric tons per year go into producing the inert dielectric SF6 for high-voltage transformers and circuit breakers, eliminating the need for hazardous polychlorinated biphenyls associated with oil-filled devices. Several fluorine compounds are used in electronics: rhenium and tungsten hexafluoride in chemical vapor deposition, tetrafluoromethane in plasma etching and nitrogen trifluoride in cleaning equipment. Fluorine is also used in the synthesis of organic fluorides, but its reactivity often necessitates conversion first to the gentler ClF3, BrF3, or IF5, which together allow calibrated fluorination. Fluorinated pharmaceuticals use sulfur tetrafluoride instead. Inorganic fluorides As with other iron alloys, around 3 kg (6.5 lb) metspar is added to each metric ton of steel; the fluoride ions lower its melting point and viscosity. Alongside its role as an additive in materials like enamels and welding rod coats, most acidspar is reacted with sulfuric acid to form hydrofluoric acid, which is used in steel pickling, glass etching and alkane cracking. One-third of HF goes into synthesizing cryolite and aluminium trifluoride, both fluxes in the Hall–Héroult process for aluminium extraction; replenishment is necessitated by their occasional reactions with the smelting apparatus. Each metric ton of aluminium requires about 23 kg (51 lb) of flux. Fluorosilicates consume the second largest portion, with sodium fluorosilicate used in water fluoridation and laundry effluent treatment, and as an intermediate en route to cryolite and silicon tetrafluoride. Other important inorganic fluorides include those of cobalt, nickel, and ammonium. Organic fluorides Organofluorides consume over 20% of mined fluorite and over 40% of hydrofluoric acid, with refrigerant gases dominating and fluoropolymers increasing their market share. Surfactants are a minor application but generate over $1 billion in annual revenue. Due to the danger from direct hydrocarbon–fluorine reactions above −150 °C (−240 °F), industrial fluorocarbon production is indirect, mostly through halogen exchange reactions such as Swarts fluorination, in which chlorocarbon chlorines are substituted for fluorines by hydrogen fluoride under catalysts. Electrochemical fluorination subjects hydrocarbons to electrolysis in hydrogen fluoride, and the Fowler process treats them with solid fluorine carriers like cobalt trifluoride. 
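The enrichment paragraph above turns on the small mass difference between 235UF6 and 238UF6. For gaseous diffusion, the ideal single-stage separation factor follows from Graham's law as the square root of the molecular-mass ratio; the sketch below is a textbook estimate, not a description of any particular plant.

```python
import math

# Graham's law: ideal single-stage separation factor = sqrt(m_heavy / m_light).
M_F = 18.998                       # g/mol, fluorine (monoisotopic)
M_U235, M_U238 = 235.044, 238.051  # g/mol, uranium isotopes

m_light = M_U235 + 6 * M_F         # molecular mass of 235UF6
m_heavy = M_U238 + 6 * M_F         # molecular mass of 238UF6

alpha = math.sqrt(m_heavy / m_light)
print(f"235UF6 = {m_light:.2f} g/mol, 238UF6 = {m_heavy:.2f} g/mol")
print(f"ideal per-stage separation factor = {alpha:.5f}")
```

The factor of roughly 1.0043 per stage is why diffusion plants needed thousands of stages operated in cascade.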
Refrigerant gases Halogenated refrigerants, termed Freons in informal contexts, are identified by R-numbers that denote the amount of fluorine, chlorine, carbon, and hydrogen present. Chlorofluorocarbons (CFCs) like R-11, R-12, and R-114 once dominated organofluorines, peaking in production in the 1980s. Used for air conditioning systems, propellants and solvents, their production was below one-tenth of this peak by the early 2000s, after widespread international prohibition. Hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs) were designed as replacements; their synthesis consumes more than 90% of the fluorine in the organic industry. Important HCFCs include R-22, chlorodifluoromethane, and R-141b. The main HFC is R-134a, with a new type of molecule, HFO-1234yf, a hydrofluoroolefin (HFO), coming to prominence owing to its global warming potential of less than 1% that of HFC-134a. Polymers About 180,000 metric tons of fluoropolymers were produced in 2006 and 2007, generating over $3.5 billion revenue per year. The global market was estimated at just under $6 billion in 2011 and was predicted to grow by 6.5% per year up to 2016. Fluoropolymers can only be formed by polymerizing free radicals. Polytetrafluoroethylene (PTFE), sometimes called by its DuPont name Teflon, represents 60–80% by mass of the world's fluoropolymer production. The largest application is in electrical insulation since PTFE is an excellent dielectric. It is also used in the chemical industry where corrosion resistance is needed, in coating pipes, tubing, and gaskets. Another major use is in PTFE-coated fiberglass cloth for stadium roofs. The major consumer application is for non-stick cookware. PTFE film that is stretched rapidly (jerked) becomes expanded PTFE (ePTFE), a fine-pored membrane sometimes referred to by the brand name Gore-Tex and used for rainwear, protective apparel, and filters; ePTFE fibers may be made into seals and dust filters. Other fluoropolymers, including fluorinated ethylene propylene, mimic PTFE's properties and can substitute for it; they are more moldable, but also more costly and have lower thermal stability. Films from two different fluoropolymers replace glass in solar cells. The chemically resistant (but expensive) fluorinated ionomers are used as electrochemical cell membranes, of which the first and most prominent example is Nafion. Developed in the 1960s, it was initially deployed as fuel cell material in spacecraft and then replaced mercury-based chloralkali process cells. Recently, the fuel cell application has reemerged with efforts to install proton exchange membrane fuel cells into automobiles. Fluoroelastomers such as Viton are crosslinked fluoropolymer mixtures mainly used in O-rings; perfluorobutane (C4F10) is used as a fire-extinguishing agent. Surfactants Fluorosurfactants are small organofluorine molecules used for repelling water and stains. Although expensive (comparable to pharmaceuticals at $200–2000 per kilogram), they yielded over $1 billion in annual revenues by 2006; Scotchgard alone generated over $300 million in 2000. Fluorosurfactants are a minority in the overall surfactant market, most of which is taken up by much cheaper hydrocarbon-based products. Applications in paints are burdened by compounding costs; this use was valued at only $100 million in 2006. Agrichemicals About 30% of agrichemicals contain fluorine, most of them herbicides and fungicides with a few crop regulators.
Fluorine substitution, usually of a single atom or at most a trifluoromethyl group, is a robust modification with effects analogous to fluorinated pharmaceuticals: increased biological stay time, membrane crossing, and altering of molecular recognition. Trifluralin is a prominent example, with large-scale use in the U.S. as a weedkiller, but it is a suspected carcinogen and has been banned in many European countries. Sodium monofluoroacetate (1080) is a mammalian poison in which two acetic acid hydrogens are replaced with fluorine and sodium; it disrupts cell metabolism by replacing acetate in the citric acid cycle. First synthesized in the late 19th century, it was recognized as an insecticide in the early 20th, and was later deployed in its current use. New Zealand, the largest consumer of 1080, uses it to protect kiwis from the invasive Australian common brushtail possum. Europe and the U.S. have banned 1080. Medicinal applications Dental care Population studies from the mid-20th century onwards show topical fluoride reduces dental caries. This was first attributed to the conversion of tooth enamel hydroxyapatite into the more durable fluorapatite, but studies on pre-fluoridated teeth refuted this hypothesis, and current theories involve fluoride aiding enamel growth in small caries. After studies of children in areas where fluoride was naturally present in drinking water, controlled public water supply fluoridation to fight tooth decay began in the 1940s and is now applied to water supplying 6 percent of the global population, including two-thirds of Americans. Reviews of the scholarly literature in 2000 and 2007 associated water fluoridation with a significant reduction of tooth decay in children. Despite such endorsements and evidence of no adverse effects other than mostly benign dental fluorosis, opposition still exists on ethical and safety grounds. The benefits of fluoridation have lessened, possibly due to other fluoride sources, but are still measurable in low-income groups. Sodium monofluorophosphate and sometimes sodium or tin(II) fluoride are often found in fluoride toothpastes, first introduced in the U.S. in 1955 and now ubiquitous in developed countries, alongside fluoridated mouthwashes, gels, foams, and varnishes. Pharmaceuticals Twenty percent of modern pharmaceuticals contain fluorine. One of these, the cholesterol-reducer atorvastatin (Lipitor), made more revenue than any other drug until it became generic in 2011. The combination asthma prescription Seretide, a top-ten revenue drug in the mid-2000s, contains two active ingredients, one of which – fluticasone – is fluorinated. Many drugs are fluorinated to delay inactivation and lengthen dosage periods because the carbon–fluorine bond is very stable. Fluorination also increases lipophilicity because the bond is more hydrophobic than the carbon–hydrogen bond, and this often helps in cell membrane penetration and hence bioavailability.Tricyclics and other pre-1980s antidepressants had several side effects due to their non-selective interference with neurotransmitters other than the serotonin target; the fluorinated fluoxetine was selective and one of the first to avoid this problem. Many current antidepressants receive this same treatment, including the selective serotonin reuptake inhibitors: citalopram, its isomer escitalopram, and fluvoxamine and paroxetine. Quinolones are artificial broad-spectrum antibiotics that are often fluorinated to enhance their effects. These include ciprofloxacin and levofloxacin. 
Fluorine also finds use in steroids: fludrocortisone is a blood pressure-raising mineralocorticoid, and triamcinolone and dexamethasone are strong glucocorticoids. The majority of inhaled anesthetics are heavily fluorinated; the prototype halothane is much more inert and potent than its contemporaries. Later compounds such as the fluorinated ethers sevoflurane and desflurane are better than halothane and are almost insoluble in blood, allowing faster waking times. PET scanning Fluorine-18 is often found in radioactive tracers for positron emission tomography, as its half-life of almost two hours is long enough to allow for its transport from production facilities to imaging centers. The most common tracer is fluorodeoxyglucose which, after intravenous injection, is taken up by glucose-requiring tissues such as the brain and most malignant tumors; computer-assisted tomography can then be used for detailed imaging. Oxygen carriers Liquid fluorocarbons can hold large volumes of oxygen or carbon dioxide, more so than blood, and have attracted attention for their possible uses in artificial blood and in liquid breathing. Because fluorocarbons do not normally mix with water, they must be mixed into emulsions (small droplets of perfluorocarbon suspended in water) to be used as blood. One such product, Oxycyte, has been through initial clinical trials. These substances can aid endurance athletes and are banned from sports; one cyclist's near death in 1998 prompted an investigation into their abuse. Applications of pure perfluorocarbon liquid breathing (which uses pure perfluorocarbon liquid, not a water emulsion) include assisting burn victims and premature babies with deficient lungs. Partial and complete lung filling have been considered, though only the former has had any significant tests in humans. An Alliance Pharmaceuticals effort reached clinical trials but was abandoned because the results were not better than normal therapies. Biological role Fluorine is not essential for humans and other mammals, but small amounts are known to be beneficial for the strengthening of dental enamel (where the formation of fluorapatite makes the enamel more resistant to attack, from acids produced by bacterial fermentation of sugars). Small amounts of fluorine may be beneficial for bone strength, but the latter has not been definitively established. Both the WHO and the Institute of Medicine of the US National Academies publish recommended daily allowance (RDA) and upper tolerated intake of fluorine, which varies with age and gender.Natural organofluorines have been found in microorganisms and plants but not animals. The most common is fluoroacetate, which is used as a defense against herbivores by at least 40 plants in Africa, Australia and Brazil. Other examples include terminally fluorinated fatty acids, fluoroacetone, and 2-fluorocitrate. An enzyme that binds fluorine to carbon – adenosyl-fluoride synthase – was discovered in bacteria in 2002. Toxicity Elemental fluorine is highly toxic to living organisms. Its effects in humans start at concentrations lower than hydrogen cyanide's 50 ppm and are similar to those of chlorine: significant irritation of the eyes and respiratory system as well as liver and kidney damage occur above 25 ppm, which is the immediately dangerous to life and health value for fluorine. The eyes and nose are seriously damaged at 100 ppm, and inhalation of 1,000 ppm fluorine will cause death in minutes, compared to 270 ppm for hydrogen cyanide. 
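The PET scanning paragraph above notes that fluorine-18's half-life of roughly two hours is long enough for transport from production facilities to imaging centers. A minimal decay sketch makes this concrete, using the 109.77-minute half-life quoted in the Isotopes section; the transport times are assumed illustrative values.

```python
# Exponential decay: remaining fraction = 2 ** (-t / t_half)
T_HALF_F18_MIN = 109.77  # half-life of fluorine-18 in minutes, from the Isotopes section

def remaining_fraction(elapsed_min: float, t_half_min: float = T_HALF_F18_MIN) -> float:
    """Fraction of the original 18F activity left after elapsed_min minutes."""
    return 2.0 ** (-elapsed_min / t_half_min)

if __name__ == "__main__":
    # Illustrative transport/delay times; not values from the article.
    for minutes in (30, 60, 120, 240, 480):
        print(f"after {minutes:>3} min: {remaining_fraction(minutes):6.1%} of activity remains")
```

After a two-hour delivery a little under half of the original activity remains, which is workable; after eight hours only a few percent is left.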
Hydrofluoric acid Hydrofluoric acid is the weakest of the hydrohalic acids, having a pKa of 3.2 at 25 °C. It is a volatile liquid due to the presence of hydrogen bonding (while the other hydrohalic acids are gases). It is able to attack glass, concrete, metals, and organic matter.Hydrofluoric acid is a contact poison with greater hazards than many strong acids like sulfuric acid even though it is weak: it remains neutral in aqueous solution and thus penetrates tissue faster, whether through inhalation, ingestion or the skin, and at least nine U.S. workers died in such accidents from 1984 to 1994. It reacts with calcium and magnesium in the blood leading to hypocalcemia and possible death through cardiac arrhythmia. Insoluble calcium fluoride formation triggers strong pain and burns larger than 160 cm2 (25 in2) can cause serious systemic toxicity.Exposure may not be evident for eight hours for 50% HF, rising to 24 hours for lower concentrations, and a burn may initially be painless as hydrogen fluoride affects nerve function. If skin has been exposed to HF, damage can be reduced by rinsing it under a jet of water for 10–15 minutes and removing contaminated clothing. Calcium gluconate is often applied next, providing calcium ions to bind with fluoride; skin burns can be treated with 2.5% calcium gluconate gel or special rinsing solutions. Hydrofluoric acid absorption requires further medical treatment; calcium gluconate may be injected or administered intravenously. Using calcium chloride – a common laboratory reagent – in lieu of calcium gluconate is contraindicated, and may lead to severe complications. Excision or amputation of affected parts may be required. Fluoride ion Soluble fluorides are moderately toxic: 5–10 g sodium fluoride, or 32–64 mg fluoride ions per kilogram of body mass, represents a lethal dose for adults. One-fifth of the lethal dose can cause adverse health effects, and chronic excess consumption may lead to skeletal fluorosis, which affects millions in Asia and Africa. Ingested fluoride forms hydrofluoric acid in the stomach which is easily absorbed by the intestines, where it crosses cell membranes, binds with calcium and interferes with various enzymes, before urinary excretion. Exposure limits are determined by urine testing of the body's ability to clear fluoride ions.Historically, most cases of fluoride poisoning have been caused by accidental ingestion of insecticides containing inorganic fluorides. Most current calls to poison control centers for possible fluoride poisoning come from the ingestion of fluoride-containing toothpaste. Malfunctioning water fluoridation equipment is another cause: one incident in Alaska affected almost 300 people and killed one person. Dangers from toothpaste are aggravated for small children, and the Centers for Disease Control and Prevention recommends supervising children below six brushing their teeth so that they do not swallow toothpaste. One regional study examined a year of pre-teen fluoride poisoning reports totaling 87 cases, including one death from ingesting insecticide. Most had no symptoms, but about 30% had stomach pains. A larger study across the U.S. had similar findings: 80% of cases involved children under six, and there were few serious cases. Environmental concerns Atmosphere The Montreal Protocol, signed in 1987, set strict regulations on chlorofluorocarbons (CFCs) and bromofluorocarbons due to their ozone damaging potential (ODP). 
The high stability that suited them to their original applications also meant that they did not decompose until they reached higher altitudes, where liberated chlorine and bromine atoms attacked ozone molecules. Even with the ban, and early indications of its efficacy, predictions warned that several generations would pass before full recovery. With one-tenth the ODP of CFCs, hydrochlorofluorocarbons (HCFCs) are the current replacements, and are themselves scheduled for substitution by 2030–2040 by hydrofluorocarbons (HFCs) with no chlorine and zero ODP. In 2007 this date was brought forward to 2020 for developed countries; the Environmental Protection Agency had already prohibited one HCFC's production and capped those of two others in 2003. Fluorocarbon gases are generally greenhouse gases with global-warming potentials (GWPs) of about 100 to 10,000; sulfur hexafluoride has a value of around 20,000. An outlier is HFO-1234yf, a new type of refrigerant called a hydrofluoroolefin (HFO), which has attracted global demand owing to its GWP of less than 1, compared with 1,430 for the current refrigerant standard HFC-134a. Biopersistence Organofluorines exhibit biopersistence due to the strength of the carbon–fluorine bond. Perfluoroalkyl acids (PFAAs), which are sparingly water-soluble owing to their acidic functional groups, are noted persistent organic pollutants; perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) are most often researched. PFAAs have been found in trace quantities worldwide, from polar bears to humans, with PFOS and PFOA known to reside in breast milk and the blood of newborn babies. A 2013 review showed a slight correlation between groundwater and soil PFAA levels and human activity; there was no clear pattern of one chemical dominating, and higher amounts of PFOS were correlated to higher amounts of PFOA. In the body, PFAAs bind to proteins such as serum albumin; they tend to concentrate within humans in the liver and blood before excretion through the kidneys. Dwell time in the body varies greatly by species, with half-lives of days in rodents and years in humans. High doses of PFOS and PFOA cause cancer and death in newborn rodents, but human studies have not established an effect at current exposure levels.
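The two lethal-dose figures quoted in the Fluoride ion subsection above (5–10 g of sodium fluoride, or 32–64 mg of fluoride ion per kilogram of body mass) are mutually consistent, which can be checked from the fluoride mass fraction of NaF. The sketch below assumes a 70 kg adult as an illustrative body mass.

```python
# Consistency check of the two lethal-dose figures quoted above:
# 5-10 g of NaF for an adult versus 32-64 mg of fluoride ion per kg of body mass.
M_NA, M_F = 22.99, 19.00                 # g/mol
FLUORIDE_FRACTION = M_F / (M_NA + M_F)   # mass fraction of fluoride in NaF (~0.45)

BODY_MASS_KG = 70.0                      # assumed illustrative adult body mass
for naf_grams in (5.0, 10.0):
    fluoride_g = naf_grams * FLUORIDE_FRACTION
    per_kg_mg = fluoride_g * 1000.0 / BODY_MASS_KG
    print(f"{naf_grams:4.0f} g NaF ≈ {fluoride_g:.2f} g fluoride ≈ {per_kg_mg:.0f} mg/kg for a 70 kg adult")
```

About 45% of the mass of NaF is fluoride, so 5–10 g of the salt corresponds to roughly 32–65 mg of fluoride per kilogram for a 70 kg adult, matching the per-kilogram range quoted.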
united states house select committee on energy independence and global warming
The House Select Committee on Energy Independence and Global Warming was a select committee of the United States House of Representatives. It was established March 8, 2007 through adoption of a resolution by a 269–150 vote of the full House.The committee existed from 2007 to 2011, and was not renewed when the Republicans gained control of the House for the 112th Congress.In 2019, the new Democratic majority established a successor committee, the United States House Select Committee on the Climate Crisis. History Democratic Speaker Nancy Pelosi announced plans to create the select committee on January 18, 2007, soon after Democrats took control of the House following the 2006 elections. The creation of the committee was criticized by House Republicans, who argued "that the committee was unnecessary or that its budget could be used better by the ethics committee." The proposal to create the committee also encountered some skepticism from House Democrats, particularly Chairman John Dingell of the powerful Energy and Commerce Committee (which has primary jurisdiction over environmental and climate change issues) and Chairman Charles Rangel of the Ways and Means Committee (which has jurisdiction on any tax legislation aimed at affecting industry behavior). Ultimately, Pelosi was able to reach a compromise with Dingell, wherein the committee was to be advisory in nature, without the legislative authority granted to standing committees. Joe Barton, the ranking Republican member of the House Energy and Commerce Committee, continued to object to the committee's existence, calling it a "platform for some members to grandstand."The committee held 80 hearings and briefings, on topics ranging from climate science to the Deepwater Horizon explosion and subsequent oil spill.The committee played a role in the creation of the 2007 energy act and the 2009 stimulus act (which included US$90 billion in spending on green energy and energy efficiency). Most prominently, the committee played a major role in shaping the 2009 climate bill—the American Clean Energy and Security Act or "Waxman-Markey"—which was passed by the House but never became law due to the Senate's refusal to take up the bill.After Republicans won control of the House in the 2010 election, the new Republican majority in the House (led by the new speaker, John Boehner of Ohio) decided to kill the committee, resulting in criticism from environmentalists and climate researchers. Jurisdiction The Select Committee on Energy Independence and Global Warming conducted hearings on energy independence and climate change issues. The committee lacked the authority to draft legislation, but worked with the House standing committees with jurisdiction over climate change issues and developed recommendations on legislative proposals. Speaker Pelosi indicated she would have liked committees with jurisdiction over energy, environment and technology policy to report legislation on these issues to the full House by July 4, 2007. Members, 111th Congress The select committee was reestablished for the 111th Congress pursuant to H.Res. 5. On January 14, 2009, Speaker Nancy Pelosi reappointed Ed Markey of Massachusetts and James Sensenbrenner of Wisconsin as Chairman and Ranking Member, respectively, of the committee. See also Climate change mitigation Climate change policy of the United States Effects of global warming Efficient energy use Global warming Energy resilience U.S. Climate Change Science Program References External links Official website
hydrofluoroolefin
Hydrofluoroolefins (HFOs) are unsaturated organic compounds composed of hydrogen, fluorine and carbon. These organofluorine compounds are of interest as refrigerants. Unlike traditional hydrofluorocarbons (HFCs) and chlorofluorocarbons (CFCs), which are saturated, HFOs are olefins, otherwise known as alkenes. HFO refrigerants are categorized as having zero ozone depletion potential (ODP) and low global warming potential (GWP) and so offer a more environmentally friendly alternative to CFC, HCFC, and HFC refrigerants. Compared to HCFCs and HFCs, HFOs have shorter tropospheric lifetimes due to the reactivity of the C=C bond with hydroxyl radicals and chlorine radicals. This quick reactivity prevents them from reaching the stratosphere and participating in the depletion of stratospheric ozone, leading to strong interest in the development and characterization of new HFO blends for use as refrigerants. Many refrigerants in the HFO class are chemically stable, inert, non-toxic, and non-flammable or only mildly flammable. Many HFOs have freezing and boiling points suitable for refrigeration at common temperatures. They have also been adopted as blowing agents, e.g. in the production of insulation foams, in the food industry, and in construction materials. HFOs are being developed as "fourth generation" refrigerants with 0.1% of the GWP of HFCs. Examples HFOs in use include 2,3,3,3-tetrafluoropropene (HFO-1234yf, trademarked as Opteon YF) and 1,3,3,3-tetrafluoropropene (HFO-1234ze). cis-1,1,1,4,4,4-hexafluoro-2-butene (HFO-1336mzz-Z; DR-2) also shows promise in high-temperature applications such as cogeneration, heat recovery, and medium-temperature heat pumps. trans-1,1,1,4,4,4-hexafluoro-2-butene (HFO-1336mzz-E) also has good properties and is being investigated (development led by Dr Kostas Kontomaris of DuPont Fluorochemicals, with multiple publications in 2012–2019 and dozens of related patents). The largest brand of HFOs is Opteon, produced by Chemours (a DuPont spin-off).
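Refrigerant designations such as R-22, R-134a, and HFO-1234yf encode composition, as noted in the fluorine article's refrigerant-gases section. The sketch below decodes simple one- to three-carbon refrigerant numbers under the standard ASHRAE digit rules, which are an assumption not spelled out in these articles: the number is left-padded to four digits read as double bonds, carbons minus one, hydrogens plus one, and fluorines, with the remaining substituent positions filled by chlorine. Blends such as R-410A and isomer suffixes such as "yf" are out of scope.

```python
def decode_r_number(digits: str) -> dict:
    """Decode a simple ASHRAE-style refrigerant number (2-4 digits, no suffix).

    Assumed rules: left-pad to four digits read as
    [C=C double bonds, carbons - 1, hydrogens + 1, fluorines];
    remaining substituent positions are filled by chlorine.
    Blends (e.g. R-410A) and isomer suffixes are not handled.
    """
    padded = digits.zfill(4)
    double_bonds = int(padded[0])
    carbon = int(padded[1]) + 1
    hydrogen = int(padded[2]) - 1
    fluorine = int(padded[3])
    positions = 2 * carbon + 2 - 2 * double_bonds  # substituent slots on the carbon chain
    chlorine = positions - hydrogen - fluorine
    return {"C": carbon, "H": hydrogen, "F": fluorine, "Cl": chlorine, "C=C": double_bonds}

if __name__ == "__main__":
    for number in ("12", "22", "134", "1234"):
        print(f"R-{number}: {decode_r_number(number)}")
```

Running it gives CCl2F2 for R-12, CHClF2 for R-22, C2H2F4 for R-134, and C3H2F4 with one double bond for R-1234, matching the compounds named in these articles.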
global catastrophic risk
A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk."Over the last two decades, a number of academic and non-profit organizations have been established to research global catastrophic and existential risks, formulate potential mitigation measures and either advocate for or implement these measures. Definition and classification Defining global catastrophic risks The term global catastrophic risk "lacks a sharp definition", and generally refers (loosely) to a risk that could inflict "serious damage to human well-being on a global scale".Humanity has suffered large catastrophes before. Some of these have caused serious damage but were only local in scope—e.g. the Black Death may have resulted in the deaths of a third of Europe's population, 10% of the global population at the time. Some were global, but were not as severe—e.g. the 1918 influenza pandemic killed an estimated 3–6% of the world's population. Most global catastrophic risks would not be so intense as to kill the majority of life on earth, but even if one did, the ecosystem and humanity would eventually recover (in contrast to existential risks). Similarly, in Catastrophe: Risk and Response, Richard Posner singles out and groups together events that bring about "utter overthrow or ruin" on a global, rather than a "local or regional" scale. Posner highlights such events as worthy of special attention on cost–benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole. Defining existential risks Existential risks are defined as "risks that threaten the destruction of humanity's long-term potential." The instantiation of an existential risk (an existential catastrophe) would either cause outright human extinction or irreversibly lock in a drastically inferior state of affairs. Existential risks are a sub-class of global catastrophic risks, where the damage is not only global but also terminal and permanent, preventing recovery and thereby affecting both current and all future generations. Non-extinction risks While extinction is the most obvious way in which humanity's long-term potential could be destroyed, there are others, including unrecoverable collapse and unrecoverable dystopia. A disaster severe enough to cause the permanent, irreversible collapse of human civilisation would constitute an existential catastrophe, even if it fell short of extinction. Similarly, if humanity fell under a totalitarian regime, and there were no chance of recovery then such a dystopia would also be an existential catastrophe. Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction". (George Orwell's novel Nineteen Eighty-Four suggests an example.) A dystopian scenario shares the key features of extinction and unrecoverable collapse of civilization: before the catastrophe humanity faced a vast range of bright futures to choose from; after the catastrophe, humanity is locked forever in a terrible state. Potential sources of risk Potential global catastrophic risks are conventionally classified as anthropogenic or non-anthropogenic hazards. 
Examples of non-anthropogenic risks are an asteroid or comet impact event, a supervolcanic eruption, a natural pandemic, a lethal gamma-ray burst, a geomagnetic storm from a coronal mass ejection destroying electronic equipment, natural long-term climate change, hostile extraterrestrial life, or the Sun transforming into a red giant star and engulfing the Earth billions of years in the future. Anthropogenic risks are those caused by humans and include those related to technology, governance, and climate change. Technological risks include the creation of artificial intelligence misaligned with human goals, biotechnology, and nanotechnology. Insufficient or malign global governance creates risks in the social and political domain, such as global war and nuclear holocaust, biological warfare and bioterrorism using genetically modified organisms, cyberwarfare and cyberterrorism destroying critical infrastructure like the electrical grid, or radiological warfare using weapons such as large cobalt bombs. Global catastrophic risks in the domain of earth system governance include global warming, environmental degradation, extinction of species, famine as a result of non-equitable resource distribution, human overpopulation, crop failures, and non-sustainable agriculture. Methodological challenges Research into the nature and mitigation of global catastrophic risks and existential risks is subject to a unique set of challenges and, as a result, is not easily subjected to the usual standards of scientific rigour. For instance, it is neither feasible nor ethical to study these risks experimentally. Carl Sagan expressed this with regards to nuclear war: "Understanding the long-term consequences of nuclear war is not a problem amenable to experimental verification". Moreover, many catastrophic risks change rapidly as technology advances and background conditions, such as geopolitical conditions, change. Another challenge is the general difficulty of accurately predicting the future over long timescales, especially for anthropogenic risks which depend on complex human political, economic and social systems. In addition to known and tangible risks, unforeseeable black swan extinction events may occur, presenting an additional methodological problem. Lack of historical precedent Humanity has never suffered an existential catastrophe and if one were to occur, it would necessarily be unprecedented. Therefore, existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against their likelihood in the future, because every world that has experienced such an extinction event has no observers, so regardless of their frequency, no civilization observes existential risks in its history. These anthropic issues may partly be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or directly evaluating the likely impact of new technology.To understand the dynamics of an unprecedented, unrecoverable global civilizational collapse (a type of existential risk), it may be instructive to study the various local civilizational collapses that have occurred throughout human history. For instance, civilizations such as the Roman Empire have ended in a loss of centralized governance and a major civilization-wide loss of infrastructure and advanced technology. 
However, these examples demonstrate that societies appear to be fairly resilient to catastrophe; for example, Medieval Europe survived the Black Death without suffering anything resembling a civilization collapse despite losing 25 to 50 percent of its population. Incentives and coordination There are economic reasons that can explain why so little effort is going into existential risk reduction. It is a global public good, so we should expect it to be undersupplied by markets. Even if a large nation invests in risk mitigation measures, that nation will enjoy only a small fraction of the benefit of doing so. Furthermore, existential risk reduction is an intergenerational global public good, since most of the benefits of existential risk reduction would be enjoyed by future generations, and though these future people would in theory perhaps be willing to pay substantial sums for existential risk reduction, no mechanism for such a transaction exists. Cognitive biases Numerous cognitive biases can influence people's judgment of the importance of existential risks, including scope insensitivity, hyperbolic discounting, availability heuristic, the conjunction fallacy, the affect heuristic, and the overconfidence effect.Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they are willing to give does not increase linearly with the magnitude of the issue: people are roughly as willing to prevent the deaths of 200,000 or 2,000 birds. Similarly, people are often more concerned about threats to individuals than to larger groups.Eliezer Yudkowsky theorizes that scope neglect plays a role in public perception of existential risks: Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking... People who would never dream of hurting a child hear of existential risk, and say, "Well, maybe the human species doesn't really deserve to survive". All past predictions of human extinction have proven to be false. To some, this makes future warnings seem less credible. Nick Bostrom argues that the absence of human extinction in the past is weak evidence that there will be no human extinction in the future, due to survivor bias and other anthropic effects.Sociobiologist E. O. Wilson argued that: "The reason for this myopic fog, evolutionary biologists contend, is that it was actually advantageous during all but the last few millennia of the two million years of existence of the genus Homo... A premium was placed on close attention to the near future and early reproduction, and little else. Disasters of a magnitude that occur only once every few centuries were forgotten or transmuted into myth." Proposed mitigation Multi-layer defense Defense in depth is a useful framework for categorizing risk mitigation measures into three layers of defense: Prevention: Reducing the probability of a catastrophe occurring in the first place. Example: Measures to prevent outbreaks of new highly infectious diseases. Response: Preventing the scaling of a catastrophe to the global level. Example: Measures to prevent escalation of a small-scale nuclear exchange into an all-out nuclear war. Resilience: Increasing humanity's resilience (against extinction) when faced with global catastrophes. 
Example: Measures to increase food security during a nuclear winter.Human extinction is most likely when all three defenses are weak, that is, "by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against".The unprecedented nature of existential risks poses a special challenge in designing risk mitigation measures since humanity will not be able to learn from a track record of previous events. Funding Some researchers argue that both research and other initiatives relating to existential risk are underfunded. Nick Bostrom states that more research has been done on Star Trek, snowboarding, or dung beetles than on existential risks. Bostrom's comparisons have been criticized as "high-handed". As of 2020, the Biological Weapons Convention organization had an annual budget of US$1.4 million. Survival planning Some scholars propose the establishment on Earth of one or more self-sufficient, remote, permanently occupied settlements specifically created for the purpose of surviving a global disaster. Economist Robin Hanson argues that a refuge permanently housing as few as 100 people would significantly improve the chances of human survival during a range of global catastrophes.Food storage has been proposed globally, but the monetary cost would be high. Furthermore, it would likely contribute to the current millions of deaths per year due to malnutrition. In 2022, a team led by David Denkenberger modeled the cost-effectiveness of resilient foods to artificial general intelligence (AGI) safety and found "~98-99% confidence" for a higher marginal impact of work on resilient foods. Some survivalists stock survival retreats with multiple-year food supplies. The Svalbard Global Seed Vault is buried 400 feet (120 m) inside a mountain on an island in the Arctic. It is designed to hold 2.5 billion seeds from more than 100 countries as a precaution to preserve the world's crops. The surrounding rock is −6 °C (21 °F) (as of 2015) but the vault is kept at −18 °C (0 °F) by refrigerators powered by locally sourced coal.More speculatively, if society continues to function and if the biosphere remains habitable, calorie needs for the present human population might in theory be met during an extended absence of sunlight, given sufficient advance planning. Conjectured solutions include growing mushrooms on the dead plant biomass left in the wake of the catastrophe, converting cellulose to sugar, or feeding natural gas to methane-digesting bacteria. Global catastrophic risks and global governance Insufficient global governance creates risks in the social and political domain, but the governance mechanisms develop more slowly than technological and social change. There are concerns from governments, the private sector, as well as the general public about the lack of governance mechanisms to efficiently deal with risks, negotiate and adjudicate between diverse and conflicting interests. This is further underlined by an understanding of the interconnectedness of global systemic risks. In absence or anticipation of global governance, national governments can act individually to better understand, mitigate and prepare for global catastrophes. Climate emergency plans In 2018, the Club of Rome called for greater climate change action and published its Climate Emergency Plan, which proposes ten action points to limit global average temperature increase to 1.5 degrees Celsius. 
Further, in 2019, the Club published the more comprehensive Planetary Emergency Plan.There is evidence to suggest that collectively engaging with the emotional experiences that emerge during contemplating the vulnerability of the human species within the context of climate change allows for these experiences to be adaptive. When collective engaging with and processing emotional experiences is supportive, this can lead to growth in resilience, psychological flexibility, tolerance of emotional experiences, and community engagement. Space colonization Space colonization is a proposed alternative to improve the odds of surviving an extinction scenario. Solutions of this scope may require megascale engineering. Astrophysicist Stephen Hawking advocated colonizing other planets within the Solar System once technology progresses sufficiently, in order to improve the chance of human survival from planet-wide events such as global thermonuclear war.Billionaire Elon Musk writes that humanity must become a multiplanetary species in order to avoid extinction. Musk is using his company SpaceX to develop technology he hopes will be used in the colonization of Mars. Moving the Earth In a few billion years, the Sun will expand into a red giant, swallowing the Earth. This can be avoided by moving the Earth farther out from the Sun, keeping the temperature roughly constant. That can be accomplished by tweaking the orbits of comets and asteroids so they pass close to the Earth in such a way that they add energy to the Earth's orbit. Since the Sun's expansion is slow, roughly one such encounter every 6,000 years would suffice. Skeptics and opponents Psychologist Steven Pinker has called existential risk a "useless category" that can distract from real threats such as climate change and nuclear war. Organizations The Bulletin of the Atomic Scientists (est. 1945) is one of the oldest global risk organizations, founded after the public became alarmed by the potential of atomic warfare in the aftermath of WWII. It studies risks associated with nuclear war and energy and famously maintains the Doomsday Clock established in 1947. The Foresight Institute (est. 1986) examines the risks of nanotechnology and its benefits. It was one of the earliest organizations to study the unintended consequences of otherwise harmless technology gone haywire at a global scale. It was founded by K. Eric Drexler who postulated "grey goo".Beginning after 2000, a growing number of scientists, philosophers and tech billionaires created organizations devoted to studying global risks both inside and outside of academia.Independent non-governmental organizations (NGOs) include the Machine Intelligence Research Institute (est. 2000), which aims to reduce the risk of a catastrophe caused by artificial intelligence, with donors including Peter Thiel and Jed McCaleb. The Nuclear Threat Initiative (est. 2001) seeks to reduce global threats from nuclear, biological and chemical threats, and containment of damage after an event. It maintains a nuclear material security index. The Lifeboat Foundation (est. 2009) funds research into preventing a technological catastrophe. Most of the research money funds projects at universities. The Global Catastrophic Risk Institute (est. 2011) is a US-based non-profit, non-partisan think tank founded by Seth Baum and Tony Barrett. GCRI does research and policy work across various risks, including artificial intelligence, nuclear war, climate change, and asteroid impacts. The Global Challenges Foundation (est. 
2012), based in Stockholm and founded by Laszlo Szombatfalvy, releases a yearly report on the state of global risks. The Future of Life Institute (est. 2014) works to reduce extreme, large-scale risks from transformative technologies, as well as steer the development and use of these technologies to benefit all life, through grantmaking, policy advocacy in the United States, European Union and United Nations, and educational outreach. Elon Musk, Vitalik Buterin and Jaan Tallinn are some of its biggest donors. The Center on Long-Term Risk (est. 2016), formerly known as the Foundational Research Institute, is a British organization focused on reducing risks of astronomical suffering (s-risks) from emerging technologies.University-based organizations include the Future of Humanity Institute (est. 2005) which researches the questions of humanity's long-term future, particularly existential risk. It was founded by Nick Bostrom and is based at Oxford University. The Centre for the Study of Existential Risk (est. 2012) is a Cambridge University-based organization which studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare. All are man-made risks, as Huw Price explained to the AFP news agency, "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology". He added that when this happens "we're no longer the smartest things around," and will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us." Stephen Hawking was an acting adviser. The Millennium Alliance for Humanity and the Biosphere is a Stanford University-based organization focusing on many issues related to global catastrophe by bringing together members of academia in the humanities. It was founded by Paul Ehrlich, among others. Stanford University also has the Center for International Security and Cooperation focusing on political cooperation to reduce global catastrophic risk. The Center for Security and Emerging Technology was established in January 2019 at Georgetown's Walsh School of Foreign Service and will focus on policy research of emerging technologies with an initial emphasis on artificial intelligence. They received a grant of 55M USD from Good Ventures as suggested by Open Philanthropy.Other risk assessment groups are based in or are part of governmental organizations. The World Health Organization (WHO) includes a division called the Global Alert and Response (GAR) which monitors and responds to global epidemic crisis. GAR helps member states with training and coordination of response to epidemics. The United States Agency for International Development (USAID) has its Emerging Pandemic Threats Program which aims to prevent and contain naturally generated pandemics at their source. The Lawrence Livermore National Laboratory has a division called the Global Security Principal Directorate which researches on behalf of the government issues such as bio-security and counter-terrorism. See also References Further reading Avin, Shahar; Wintle, Bonnie C.; Weitzdörfer, Julius; ó Héigeartaigh, Seán S.; Sutherland, William J.; Rees, Martin J. (2018). "Classifying global catastrophic risks". Futures. 102: 20–26. doi:10.1016/j.futures.2018.02.001. Corey S. Powell (2000). "Twenty ways the world could end suddenly", Discover Magazine Derrick Jensen (2006) Endgame (ISBN 1-58322-730-X). Donella Meadows (1972). The Limits to Growth (ISBN 0-87663-165-0). Edward O. 
Wilson (2003). The Future of Life. ISBN 0-679-76811-4 Holt, Jim (February 25, 2021). "The Power of Catastrophic Thinking". The New York Review of Books. Vol. LXVIII, no. 3. pp. 26–29. p. 28: Whether you are searching for a cure for cancer, or pursuing a scholarly or artistic career, or engaged in establishing more just institutions, a threat to the future of humanity is also a threat to the significance of what you do. Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 6, "Sustainability or Collapse", New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pages (ISBN 0865717044). Jared Diamond, Collapse: How Societies Choose to Fail or Succeed, Penguin Books, 2005 and 2011 (ISBN 9780241958681). Jean-Francois Rischard (2003). High Noon 20 Global Problems, 20 Years to Solve Them. ISBN 0-465-07010-8 Joel Garreau, Radical Evolution, 2005 (ISBN 978-0385509657). John A. Leslie (1996). The End of the World (ISBN 0-415-14043-9). Joseph Tainter, (1990). The Collapse of Complex Societies, Cambridge University Press, Cambridge, UK (ISBN 9780521386739). Martin Rees (2004). Our Final Hour: A Scientist's warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century—On Earth and Beyond. ISBN 0-465-06863-4 Roger-Maurice Bonnet and Lodewijk Woltjer, Surviving 1,000 Centuries Can We Do It? (2008), Springer-Praxis Books. Toby Ord (2020). The Precipice - Existential Risk and the Future of Humanity. Bloomsbury Publishing. ISBN 9781526600219 External links "Are we on the road to civilisation collapse?". BBC. February 19, 2019. MacAskill, William (August 5, 2022). "The Case for Longtermism". The New York Times. "What a way to go" from The Guardian. Ten scientists name the biggest dangers to Earth and assess the chances they will happen. April 14, 2005. Humanity under threat from perfect storm of crises – study. The Guardian. February 6, 2020. Annual Reports on Global Risk by the Global Challenges Foundation Center on Long-Term Risk Global Catastrophic Risk Policy Stephen Petranek: 10 ways the world could end, a TED talk
supraglacial lake
A supraglacial lake is any pond of liquid water on the top of a glacier. Although these pools are ephemeral, they may reach kilometers in diameter and be several meters deep. They may last for months or even decades at a time, but can empty in the course of hours. Lifetime Lakes may be created by surface melting during summer months, or over the period of years by rainfall, such as monsoons. They may dissipate by overflowing their banks, or creating a moulin. Effects on ice masses Lakes of a diameter greater than ~300 m are capable of driving a fluid-filled crevasse to the glacier/bed interface, through the process of hydrofracture. A surface-to-bed connection made in this way is referred to as a moulin. When these crevasses form, it can take a mere 2–18 hours to empty a lake, supplying warm water to the base of the glacier - lubricating the bed and causing the glacier to surge. The rate of emptying such a lake is equivalent to the rate of flow of the Niagara Falls. Such crevasses, when forming on ice shelves, may penetrate to the underlying ocean and contribute to the breakup of the ice shelf.Supraglacial lakes also have a warming effect on the glaciers; having a lower albedo than ice, the water absorbs more of the sun's energy, causing warming and (potentially) further melting. Context Supraglacial lakes can occur in all glaciated areas. The retreating glaciers of the Himalaya produce vast and long lived lakes, many kilometres in diameter and scores of metres deep. These may be bounded by moraines; some are deep enough to be density stratified. Most have been growing since the 1950s; the glaciers have been retreating constantly since then.A proliferation of supraglacial lakes preceded the collapse of the Antarctic Larsen B ice shelf in 2001, and may have been connected.Such lakes are also prominent in Greenland, where they have recently been understood to contribute somewhat to ice movement. Sediments Sedimentary particles often accumulate in supraglacial lakes; they are washed in by the meltwater or rainwater that supplies the lakes. The character of the sediment depends upon this water source, as well as the proximity of a sampled area to both the edge of the glacier and the edge of the lake. The amount of debris atop the glacier also has a large effect. Naturally, long lived lakes have a different sedimentary record to shorter lived pools.Sediments are dominated by coarser (coarse sand/gravel) fragments, and the accumulation rate can be immense: up to 1 metre per year near the shores of larger lakes.Upon melting of the glacier, deposits may be preserved as superglacial till (alias supraglacial moraine). Effect of global warming Greenland Ice Sheet It was once unclear whether global warming is increasing the abundance of supraglacial lakes on the Greenland Ice Sheet. However, recent research has shown that supraglacial lakes have been forming in new areas. In fact, satellite photos show that since the 1970s, when satellite measurements began, supraglacial lakes have been forming at steadily higher elevations on the ice sheet as warmer air temperatures have caused melting to occur at steadily higher elevations. However, satellite imagery and remote sensing data also reveal that high-elevation lakes rarely form new moulins there. Thus, the role of supraglacial lakes in the basal hydrology of the ice sheet is unlikely to change in the near future: they will continue to bring water to the bed by forming moulins within a few tens of kilometers of the coast. 
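To put the drainage rates described under Effects on ice masses above into perspective, the following rough calculation estimates the mean discharge for a few hypothetical lake geometries and drainage times and compares them with an approximate long-term mean flow for Niagara Falls. All of the numerical values are illustrative assumptions for this sketch, not measurements reported in this article.

```python
# Rough, illustrative estimate of supraglacial lake drainage rates.
# Every number below is an assumed value for illustration; none are
# measurements cited in this article.
import math

def drainage_rate(diameter_m: float, depth_m: float, hours_to_empty: float) -> float:
    """Mean discharge (m^3/s) of an idealised cylindrical lake emptying in the given time."""
    volume = math.pi * (diameter_m / 2.0) ** 2 * depth_m
    return volume / (hours_to_empty * 3600.0)

NIAGARA_MEAN_FLOW = 2400.0  # m^3/s, approximate long-term mean (assumed round figure)

for diameter, depth, hours in [(300, 3, 18), (1000, 5, 6), (2000, 10, 2)]:
    q = drainage_rate(diameter, depth, hours)
    print(f"{diameter} m lake, {depth} m deep, emptied in {hours} h: "
          f"{q:,.0f} m^3/s (~{q / NIAGARA_MEAN_FLOW:.2f}x Niagara)")
```

Under these assumptions, only the larger, rapidly draining lake approaches the Niagara-scale discharge mentioned above; smaller ponds drain at far lower rates.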
Himalaya Climate change is having a more severe effect on supraglacial lakes on mountain glaciers. In the Himalaya, many glaciers are covered by a thick layer of rocks, dirt, and other debris; this debris layer insulates the ice from the warmth of the sun, allowing more ice to stay solid when air temperatures rise above the melting point. Water collecting on the ice surface has the opposite effect, due to its low albedo as described in a previous section. Thus, more supraglacial lakes lead to a vicious cycle of more melting and more supraglacial lakes. A good example is the Ngozumpa glacier, the longest glacier in the Himalayas, which hosts numerous supraglacial lakes. The drainage of supraglacial lakes on mountain glaciers can disrupt the internal plumbing structure of the glacier. Natural events such as landslides or the slow melting of a frozen moraine can incite drainage of a supraglacial lake, creating a glacial lake outburst flood. In such a flood, the released lake water rushes down a valley. These events are sudden and catastrophic and thus provide little warning to people who live downstream, in the path of the water. In Himalayan regions, villages cluster around water sources, such as proglacial streams; these streams are the same pathways the glacial lake outburst floods travel down. == References ==
black-legged kittiwake
The black-legged kittiwake (Rissa tridactyla) is a seabird species in the gull family Laridae. This species was first described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae as Larus tridactylus. The English name is derived from its call, a shrill 'kittee-wa-aaake, kitte-wa-aaake'. The genus name Rissa is from the Icelandic name rita for this bird, and the specific tridactyla is from Ancient Greek tridaktulos, "three-toed", from tri-, "three-" and daktulos, "toe". In North America, this species is known as the black-legged kittiwake to differentiate it from the red-legged kittiwake, but in Europe, where it is the only member of the genus, it is often known just as the kittiwake. Range and distribution The black-legged kittiwake is a coastal bird of the arctic to subarctic regions of the world. It can be found all across the northern coasts of the Atlantic, from Canada to Greenland, as well as on the Pacific side from Alaska to the coast of Siberia. The black-legged kittiwake's wintering range extends further south, from the St. Lawrence to the southern coast of New Jersey, as well as to China, the Sargasso Sea and off the coast of West Africa. There are two subspecies of black-legged kittiwake. Rissa tridactyla tridactyla can be found on the Atlantic coast, whereas Rissa tridactyla pollicaris is found on the Pacific coast. Of all the Laridae, the kittiwakes are the most pelagic. Kittiwakes are almost exclusively found at sea, with the exception of the breeding period from May to September, when they can be found nesting on the sheerest sea cliffs. They are rarely found inland, though they have been reported on a few occasions as far as 20 km inland. For the rest of the year, kittiwakes spend most of their time on the wing out of sight of the coast. Description Adult plumage The adult is 37–41 cm (15–16 in) in length with a wingspan of 91–105 cm (36–41 in) and a body mass of 305–525 g (10.8–18.5 oz). It has a white head and body, grey back, grey wings tipped solid black, black legs and a yellow bill. Occasional individuals have pinky-grey to reddish legs, inviting confusion with the red-legged kittiwake. The inside of their mouth is also a characteristic feature of the species due to its rich red colour. This red pigmentation is due to carotenoid pigments and vitamin A, which have to be acquired through their diet. Studies show that integument coloration is associated with males' reproductive success, which would explain the behavior of couples greeting each other by opening their mouths and flashing the bright interior to their partner while vocalizing. As their Latin name suggests, they possess only three toes, since their hind toe is either extremely reduced or completely absent. The two subspecies are almost identical, though R. tridactyla pollicaris is in general slightly larger than its counterpart R. tridactyla tridactyla. In winter, this species acquires a dark grey smudge behind the eye and a grey hind-neck collar. The bill also turns a dusky-olive color. Since kittiwakes winter at sea and rarely touch ground during this period, very little is known about their exact molting pattern. Juvenile plumage At fledging, the juveniles differ from the adults in having a black 'W' band across the length of the wings and whiter secondary and primary feathers behind the black 'W', a black hind-neck collar and a black terminal band on the tail. They can also be identified by their solid black bill.
This hatch-year plumage is retained only for the first year. Kittiwakes obtain their mature plumage at 4 years old, gradually changing their juvenile plumage over time until maturity is reached. A second-year juvenile resembles a hatch-year bird in plumage, though the bill is no longer solid black but instead has a greenish color. The black marking on the coverts and the tail is still visible. The black markings are only molted during the third year, when the black is no longer present on the coverts, but the grey smudge on the head remains. A third-year bird will also exhibit a small zone of bright yellow/orange at the base of its mostly greenish bill. It is only at four years old that the bill reaches an overall colour of bright yellow and the mature plumage is complete. The old fisherman's name of "tarrock" for juvenile kittiwakes is still occasionally used. Similar species The red-legged kittiwake is the only other species in the genus Rissa and can be differentiated from its counterpart by its red legs, as the name suggests. The head of the red-legged kittiwake is slightly smaller and bears a shorter bill. The chicks of the Pacific black-legged kittiwake and red-legged kittiwake cannot be distinguished during their youngest downy phase. The juvenile black-legged kittiwake can be confused with juvenile Bonaparte's gulls, although the kittiwake's plumage has more black on the primaries and a different pattern going across the coverts. Breeding It is a coastal breeding bird around the north Pacific and north Atlantic oceans, found most commonly in North America and Europe. Kittiwakes are colonial nesters that form monogamous pairs and exhibit biparental care, meaning that both take part in nest building, incubation and chick rearing. They breed in large, very noisy colonies on sea cliffs. Cliff nesting among gulls occurs only in the genus Rissa, and the kittiwake is capable of utilizing the very sheerest of vertical cliffs, as is evident in their nesting sites on Staple Island in the outer Farne Islands. Black-legged kittiwakes traditionally prefer nesting on natural cliffs and ledges, and there are historically few instances of kittiwakes nesting on man-made structures. In recent years, a shift in nesting behavior has been noted, particularly in the coastal areas of Northern Norway. Nest formation Both members of a kittiwake pair participate in building the nest in which the female will lay her eggs. The breeding season begins in mid-June and usually ends in August. Building a nest to receive the fragile eggs is a laborious task that requires time and energy. The parents begin with a layer of mud and grass to form a platform that will cushion the eggs and insulate them from the cold ground. A cup is then built around the platform to keep the eggs from rolling out of the nest. Finally, the nest is lined with soft, dry material such as moss, grass or seaweed. The nest is solidified by continuous trampling of the materials by the pair. Throughout this period, the male performs courtship feeding, bringing food to the female at the nest site. The reasons for this behavior are not fully understood, but several hypotheses have been proposed to explain the phenomenon. Hypotheses such as the "nutrition hypothesis" and the "copulation enhancement hypothesis" are supported by evidence that this behavior evolved through either natural or sexual selection.
Egg formation and incubation Kittiwakes are single-brooded, meaning that the pair will only reproduce once per year. Egg formation within the female usually takes around 15 days, and normal clutch size ranges from one to two sub-elliptical eggs, though three-egg clutches are not impossible. The female lays eggs on alternate days. The eggs' color varies considerably, ranging from white and brownish to turquoise with dark brown speckles. Once the eggs are laid, the parents take turns incubating the clutch for an average period of 27 days. In case of egg loss, the female may lay a replacement egg within 15 days of the loss. Chick-rearing Chicks usually hatch through the larger end of the egg using their egg tooth. The egg tooth usually disappears about seven days after hatching. The alpha and beta chicks tend to hatch 1.3 days apart. Kittiwake chicks are semi-precocial at hatching. The downy young of kittiwakes are white, since they have no need of camouflage from predators, and do not wander from the nest like Larus gulls, for obvious safety reasons. Regardless of predation, the chicks are most vulnerable within their first week due to their inability to thermoregulate properly during that period. Kittiwake chicks also exhibit siblicide, meaning that the first-born chick may kill its sibling in order to avoid competition for the food provided by the parents. If siblicide occurs, it will most likely be within the first 10 days of life of the smaller chick, in most cases the last born. The downy plumage of chicks starts to be replaced by the juvenile plumage only five days after hatching and continues to be replaced for about 30 days, until the juvenile plumage is complete. Not long after the completion of their juvenile plumage, the chicks make their first flight at 34–58 days old. Chicks return to the nest for several weeks after fledging and eventually follow the adults out to sea, where they spend the winter. Kittiwakes reach sexual maturity at around 4–5 years old. Behavior Feeding and diet Kittiwakes are primarily pelagic piscivorous birds. Their main food source is fish, though it is not unusual to find invertebrates such as copepods, polychaetes and squid in their diet, especially when fish is harder to find. Due to their wide range, the kittiwake diet is quite variable. In the Gulf of Alaska, their diet is usually composed of Pacific capelin, Pacific herring, Pacific sand lance and other species. Kittiwakes off the coast of the United Kingdom, in Europe, rely mostly on sandeels. In 2004, the kittiwake population in the Shetland Islands, along with the murre (guillemot) and tern populations, completely failed to reproduce successfully due to a collapse in sandeel stocks. Like most gulls, kittiwakes forage at the surface of the water, where they tend to catch their prey while in flight or while sitting on the water. Throughout winter, kittiwakes spend all of their time at sea, where they forage. Unlike some gull species, they do not scavenge at landfills. The foraging style of the kittiwakes is often compared to that of terns, due to their frequent hovering and quick dips of the head at the surface of the water. Instances of kittiwakes following whales are also common, since they benefit from the fish fragments expelled by these huge marine mammals.
Fishers and commercial fishing boats also frequently see large groups of kittiwakes, often mixed with other gull species and terns, hovering around their ships to take advantage of scraps discharged in wastewater or thrown overboard. There are few studies focusing on their water needs, though they seem to prefer salt water to fresh water. Captive kittiwakes are known to refuse fresh water but will willingly drink salt water. Vocalization The kittiwake is named after its call, which resembles a long "kit-ti-wake". Apart from their typical call, kittiwakes have a wide array of vocalizations. Their greeting call is used by the two members of a pair when meeting at the nest after an absence of one or both members. Before and during copulation, the female will often vocalize with a series of short, high-pitched "squeaks". This call is also used by the female to beg for food from the male (courtship feeding). When predators are around, the kittiwake alarm call, an "oh oh oh oh", will be heard all across the colony. Kittiwakes vocalize all day for various reasons and only stop when the sun is down. Flight Kittiwakes are known for their graceful flight. Unlike larger gulls, their flight is light, with the wings beating in fast strokes. When flying around the colony, kittiwakes often look as if they are playing in the wind, with agile flips and loops. Kittiwakes are highly gregarious and are therefore rarely seen flying alone far away from the colony. Relationship with humans Kittiwakes are frequently encountered by fisheries in northern regions. Because their diet consists almost exclusively of fish, fishermen tend to seek out large aggregations of seabirds, since they are often a sign of fish abundance. In turn, kittiwakes and other seabirds hang around fishing boats or platforms to collect scraps or any fish that might have been left out. Due to their pelagic lifestyle, kittiwakes rarely interact with humans on land, apart from occasional sightings near the coast. In New England, the black-legged kittiwake is often called the "winter gull", since its arrival often signals to people that winter is coming. The city of Tromsø, along with other cities in the far north of Norway, has experienced a remarkable increase in the number of kittiwakes choosing to use city structures as nesting sites. This rise in urban nesting has seen the number of pairs climb from a mere 13 to over 380 between 2017 and 2022. Researchers attribute this rise to climate change-related breeding failures, along with the absence of natural predators in the city, which provides a safer environment for the gulls to breed and raise their young. The increasing nesting population has created challenges, as the gulls produce vast amounts of ammonia-smelling feces that discolor buildings and streets, and generate noise pollution with their constant vocalizations. To address these challenges, innovative measures have been implemented in Tromsø. One such initiative involves the establishment of "kittiwake hotels", artificial bird-cliff structures built to encourage nesting away from urban facades. The hotels, along with mitigation measures preventing nesting on the city's structures, have had success in attracting kittiwakes without having a negative impact on breeding.
Conservation Population trends It is believed that since the 1970s the global population of the black-legged kittiwake has declined by about 40% in only three generations (one generation is on average 12.9 years), putting the species in a precarious position for the future. The global population is estimated at 14,600,000–15,700,000 individuals and is in continuing decline. The distribution of kittiwakes across the world varies considerably, with Europe holding more than 50% of the world's kittiwakes and North America only 20%. In its recent species assessment, the IUCN Red List noted that all populations of kittiwakes were in decline, with the exception of the small Canadian Arctic population, which appears to be increasing at a rate of 1% per year. In the last IUCN Red List assessment, in 2017, the species was moved from "least concern" to "vulnerable" status on a global scale. Threats Fisheries Since kittiwakes are fish specialists that rely on a narrow range of prey species, their reproductive success depends heavily on fish availability. Commercial fisheries are known to have many direct and indirect impacts on their surrounding ecosystem. Direct impacts on the fish species themselves are well known, but the presence of fisheries also has an array of impacts on marine predators that rely not only on the species harvested but also on the "bycatch" species. Fisheries harvesting species such as the sandeel, one of the main food sources for kittiwakes in Europe, are known to have a large impact on the reproductive success of local populations of kittiwakes and other seabirds. Long-term research on the effect of food availability on kittiwakes in the Gulf of Alaska showed a direct correlation between food availability and reproductive success, using a supplemental-feeding experiment. Seabirds can also be direct victims of fisheries: their tendency to linger around fishing vessels in the hope of a good meal can lead to entanglement in fishing gear, often resulting in death by drowning. Global warming With global warming, rising ocean temperatures are becoming a serious concern, affecting not only marine flora and fauna but also the species that exploit the marine environment. Kittiwakes are extremely sensitive to variations in food stocks. Such variations can be due to overexploitation, as mentioned above, but also to changes in sea surface temperature. Studies show that sandeel and many copepod populations are being negatively affected by increasing sea surface temperatures. Such effects on marine species can have a severe impact on breeding kittiwakes, which rely almost exclusively on pelagic fish, making food scarcer at a time of high energetic needs. Conservation plan There is still no global conservation plan for the black-legged kittiwake, though the species is closely monitored for shifts in population trends. There is currently no international legislation specific to this species. However, the black-legged kittiwake is protected under the Migratory Bird Treaty Act of 1918, which has been ratified by the US, Canada, Mexico, Russia and Japan. As with many gull species, the kittiwake is not of special interest to the public, and therefore no education plans have been put in place to inform people about this species. Subspecies There are two subspecies of black-legged kittiwake: R. t.
tridactyla (Linnaeus, 1758): nominate, found in the North Atlantic Ocean, is unique among the Laridae in having only a very small or even no hind toe. R. t. pollicaris (Ridgway, 1884): found in the north Pacific Ocean, has a normally developed hind toe (as the name pollex, meaning thumb, suggests). Gallery References External links BirdLife species factsheet for Rissa tridactyla "Rissa tridactyla". Avibase. "Black-legged kittiwake media". Internet Bird Collection. Black-legged kittiwake photo gallery at VIREO (Drexel University) Interactive range map of Rissa tridactyla at IUCN Red List maps Black-legged kittiwake - Rissa tridactyla - USGS Patuxent Bird Identification InfoCenter Audio recordings of Black-legged kittiwake on Xeno-canto. Rissa tridactyla in Field Guide: Birds of the World on Flickr Black-legged kittiwake media from ARKive
dice model
The Dynamic Integrated Climate-Economy model, referred to as the DICE model or Dice model, is a neoclassical integrated assessment model developed by 2018 Nobel Laureate William Nordhaus that integrates neoclassical economics, the carbon cycle, climate science, and estimated impacts, allowing the weighing of subjectively guessed costs and benefits of taking steps to slow climate change. Nordhaus also developed the RICE model (Regional Integrated Climate-Economy model), a variant of the DICE model that was updated and developed alongside the DICE model. Researchers who collaborated with Nordhaus to develop the model include David Popp, Zili Yang, and Joseph Boyer. The DICE model is one of the three main integrated assessment models used by the United States Environmental Protection Agency, and it provides estimates intermediate between the other two models. History Precursors According to a summary of the DICE and RICE models prepared by Stephen Newbold, the earliest precursor to DICE was a linear programming model of energy supply and demand in two 1977 papers by William Nordhaus. Although dynamic (in that it considered the changing levels of fuel supply based on supply and demand and the consequent impact on carbon dioxide emissions), the model did not attempt to measure the economic impact of climate change. A 1991 paper by Nordhaus developed a steady-state model of both the economy and climate, coming quite close to the DICE model. The model The model appears to have first been proposed by economist William Nordhaus in a discussion paper for the Cowles Foundation in February 1992. He also wrote a brief note outlining the main ideas in an article for Science in November 1992. A subsequent revised model was published in Resource and Energy Economics in 1993. Nordhaus published an improved version of the model in the October 1994 book Managing the Global Commons: The Economics of Climate Change, with both the first chapter and an appendix containing a computer program freely available online.
Marian Radetzki reviewed the book for The Energy Journal. In 1996, Nordhaus and Zili Yang published an article titled A regional dynamic general-equilibrium model of alternative climate-change strategies in The American Economic Review, establishing the RICE (Regional Integrated model of Climate and the Economy) model. In 1998, Nordhaus published a revised version of the DICE model in multiple papers, one of which was coauthored with Joseph Boyer in order to understand the effects of the proposed Kyoto Protocol. In 1999, Nordhaus published computer programs and spreadsheets implementing a revised version of the DICE model as well as a variant called the RICE model (RICE stands for Regional Integrated Climate-Economics, signifying that the modeling of economics and climate is done only for a particular region rather than the whole world). In 2000, Nordhaus and Boyer co-authored a book published by MIT Press titled Warming the World: Economic Models of Global Warming with a detailed description of the updated DICE and RICE models. In 2001, Nordhaus published revised spreadsheets for the RICE model. In November 2006, Nordhaus published a new version of the DICE model with updated data, and used it to review the Stern Review. In 2010, updated RICE and DICE models were published, and the new RICE model was explained by Nordhaus in an article for the Proceedings of the National Academy of Sciences (US). In 2013, the book The Climate Casino by Nordhaus, with updated discussion of the DICE and RICE models and the broader policy implications, was published by Yale University Press. A background on the latest version of the models as used in the book was published on Nordhaus' website. 2020 rework In 2020, modelers from the Potsdam Institute for Climate Impact Research (PIK) reported a rerun of the DICE model using updated climate and economic information and found that the economically optimal climate goal was now less than 2.0 °C of global warming, rather than the 3.5 °C that Nordhaus had originally calculated. The PIK team employed current understandings of the climate system and more modern social discount rates. This new result therefore broadly supports the Paris Agreement goal of holding global warming to "well below 2.0 °C". Their revised AMPL code and data are available under open licenses. Assumptions and outcomes According to the original formulation of DICE, staying below the 2 °C limit agreed in the Paris Agreement would cost more in mitigation investments than would be saved in damage from climate change. A 2020 paper by Glanemann, Willner and Levermann, which used an updated damage function, revised this conclusion, showing that a warming of around 2 °C would be "optimal", depending on the climate sensitivity to greenhouse gases. The DICE model is an example of a neoclassical energy-economy-environment model. The central assumption of this type of model is that market externalities create costs not captured in the price system and that government must intervene to ensure that these costs are included in the supply price of the good creating the externality. Innovation is assumed to be exogenous; as such, the model is a pre-ITC model (it does not yet include Induced Technological Change). An extension of the model (DICE-PACE) that does include induced technological change has strongly different outcomes: the optimal path would be to invest strongly early on in mitigation technology.
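To make the structure described above concrete, the sketch below implements a deliberately simplified DICE-style cost-benefit calculation: output grows exogenously, cumulative emissions translate into warming, a quadratic damage function converts warming into a GDP loss, a convex abatement cost is paid to cut emissions, and future consumption is discounted. Every functional form and parameter value is an illustrative placeholder rather than one of Nordhaus's calibrated values, and the sketch maximises discounted consumption instead of the utility function used in DICE proper.

```python
# Toy cost-benefit climate model in the spirit of DICE.  All parameter values
# and functional forms are simplified placeholders for illustration; they are
# NOT Nordhaus's calibrated values, and this is not the actual DICE code
# (which is distributed as GAMS and spreadsheet programs).

DT = 10          # time step in years
HORIZON = 200    # years simulated
RHO = 0.015      # pure rate of time preference per year (assumed)
GROWTH = 0.02    # growth rate of gross world product per year (assumed)
GDP0 = 100.0     # initial gross world product, trillion $ (assumed)
E0 = 40.0        # baseline emissions, GtCO2 per year (assumed constant)
TCRE = 0.0005    # degC of warming per cumulative GtCO2 emitted (assumed)
DAMAGE_COEF = 0.005    # damages = coef * T^2, as a fraction of GDP (assumed)
ABATE_COEF = 0.03      # cost of 100% abatement as a fraction of GDP (assumed)
ABATE_EXP = 2.6        # convexity of the abatement cost curve (assumed)

def discounted_consumption(mu: float) -> float:
    """Net present value of consumption for a constant abatement fraction mu."""
    cumulative_emissions = 0.0
    npv = 0.0
    for step in range(HORIZON // DT):
        t = step * DT
        gdp = GDP0 * (1 + GROWTH) ** t
        emissions = E0 * (1 - mu)                      # abatement cuts emissions
        cumulative_emissions += emissions * DT
        temperature = TCRE * cumulative_emissions      # warming above the initial state
        damages = DAMAGE_COEF * temperature ** 2       # quadratic damage function
        abatement_cost = ABATE_COEF * mu ** ABATE_EXP  # convex abatement cost
        consumption = gdp * (1 - damages - abatement_cost)
        npv += consumption * DT / (1 + RHO) ** t
    return npv

# Grid search over constant abatement fractions 0.0, 0.1, ..., 1.0.
best = max((m / 10 for m in range(11)), key=discounted_consumption)
print(f"'Optimal' constant abatement fraction in this toy model: {best:.1f}")
```

Even in this toy setting, the result is driven largely by the discount rate and the damage coefficient, which is the same sensitivity to initial assumptions that underlies the criticisms discussed under Reception below.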
In contrast to non-equilibrium models, investment in low carbon technology is assumed to crowd-out investments in other parts of the economy, leading to a loss of GDP. Reception Academic reception A number of variants of the DICE model have been published by researchers working separately from Nordhaus. The model has been criticised by Steve Keen for a priori assuming that 87% of the economy will be unaffected by climate change, misrepresenting contributions from natural scientists on tipping points, and selecting a high discount rate. Reception in the public policy world The DICE and RICE models have received considerable attention from others studying the economic impact of climate change. It is one of the models used by the Environmental Protection Agency for estimating the social cost of carbon. Stephen Newbold of the Environmental Protection Agency in the United States reviewed the models in 2010.The Basque Centre for Climate Change, in an October 2009 review of integrated assessment models for climate change, discussed the DICE model in detail.A report from The Heritage Foundation, a conservative think tank in the United States, called the DICE model "flawed beyond use for policymaking" on account of its extreme sensitivity to initial assumptions. Similar criticisms, including criticisms of the specific choice of discount rates chosen in the model, have been made by others. Many of these criticisms were addressed in the 2020 rework listed above. See also Integrated Global System Model Pigou Club References External links MATLAB/Octave implementation of DICE model
toarcian oceanic anoxic event
The Toarcian extinction event, also called the Pliensbachian-Toarcian extinction event, the Early Toarcian mass extinction, the Early Toarcian palaeoenvironmental crisis, or the Jenkyns Event, was an extinction event that occurred during the early part of the Toarcian age, approximately 183 million years ago, during the Early Jurassic. The extinction event had two main pulses, the first being the Pliensbachian-Toarcian boundary event (PTo-E). The second, larger pulse, the Toarcian Oceanic Anoxic Event (TOAE), was a global oceanic anoxic event, representing possibly the most extreme case of widespread ocean deoxygenation in the entire Phanerozoic eon. In addition to the PTo-E and TOAE, there were multiple other, smaller extinction pulses within this span of time. Occurring during the supergreenhouse climate of the Early Toarcian Thermal Maximum (ETTM), the Early Toarcian extinction was associated with large igneous province volcanism, which elevated global temperatures, acidified the oceans, and prompted the development of anoxia, leading to severe biodiversity loss. The biogeochemical crisis is documented by high-amplitude negative carbon isotope excursions, as well as black shale deposition. Timing The Early Toarcian extinction event occurred in two distinct pulses, with the first pulse classified by some authors as a separate event unrelated to the more extreme second pulse. The first, more recently identified pulse occurred during the mirabile subzone of the tenuicostatum ammonite zone, coinciding with a slight drop in oxygen concentrations and the beginning of warming following a late Pliensbachian cool period. This first pulse, occurring near the Pliensbachian-Toarcian boundary, is referred to as the PTo-E. The TOAE itself occurred near the tenuicostatum–serpentinum ammonite biozonal boundary, specifically in the elegantulum subzone of the serpentinum ammonite zone, during a pronounced warming interval. The TOAE lasted for approximately 500,000 years, though estimates ranging from 200,000 to 1,000,000 years have also been given. The PTo-E primarily affected shallow water biota, while the TOAE was the more severe event for organisms living in deep water. Causes Geological, isotopic, and palaeobotanical evidence suggests the late Pliensbachian was an icehouse period during which ice sheets were present. These ice sheets are believed to have been thin and to have stretched into lower latitudes, making them extremely sensitive to temperature changes. A warming trend lasting from the latest Pliensbachian to the earliest Toarcian was interrupted by a "cold snap" in the middle polymorphum zone, equivalent to the tenuicostatum ammonite zone, which was then followed by the abrupt warming interval associated with the TOAE. This global warming, driven by rising atmospheric carbon dioxide, was the mainspring of the early Toarcian environmental crisis. Carbon dioxide levels rose from about 500 ppm to about 1,000 ppm. Seawater warmed by anywhere between 3 °C and 7 °C, depending on latitude. At the height of this supergreenhouse interval, global sea surface temperatures (SSTs) averaged about 21 °C. The eruption of the Karoo-Ferrar Large Igneous Province is generally credited with causing the surge in atmospheric carbon dioxide levels. Argon-argon dating of Karoo-Ferrar rhyolites points to a link between Karoo-Ferrar volcanism and the extinction event, a conclusion reinforced by uranium-lead dating.
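The carbon dioxide and warming figures quoted above can be loosely cross-checked with the standard simplified expression for carbon dioxide radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m². The climate sensitivity parameters used below are assumed round values chosen for illustration; they are not derived from Toarcian proxy data, and the sensitivity of the Early Jurassic climate system may well have differed.

```python
# Back-of-the-envelope check of the warming implied by the CO2 rise quoted above.
# The forcing expression is the standard simplified formula (Myhre et al., 1998);
# the sensitivity parameters are assumed values, not Toarcian estimates.
import math

C0, C1 = 500.0, 1000.0                 # ppm CO2 before and during the event (from the text)
delta_F = 5.35 * math.log(C1 / C0)     # radiative forcing in W/m^2 (~3.7 for a doubling)

for sensitivity in (0.8, 1.2):         # assumed K per (W/m^2)
    print(f"lambda = {sensitivity} K/(W/m^2): ~{sensitivity * delta_F:.1f} degC of warming")
```

With these assumed sensitivities, a doubling from 500 to 1,000 ppm yields roughly 3 to 4.5 °C of warming, overlapping the lower part of the 3–7 °C range of seawater warming cited above.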
The ratio of osmium-187 to osmium-188 rose from ~0.40 to ~0.53 during the PTo-E and from ~0.42 to ~0.68 during the TOAE, and many scholars conclude this change in osmium isotope ratios evidences the responsibility of this large igneous province for the biotic crises. Mercury anomalies from the approximate time intervals corresponding to the PTo-E and TOAE have likewise been invoked as tell-tale evidence of the ecological calamity's cause being a large igneous province, although some researchers attribute these elevated mercury levels to increased terrigenous flux. There is evidence that the motion of the African Plate suddenly changed in velocity, shifting from mostly northward movement to southward movement. Such shifts in plate motion are associated with similar large igneous provinces emplaced in other time intervals. A 2019 geochronological study found that the emplacement of the Karoo-Ferrar large igneous province and the TOAE were not causally linked, and simply happened to occur rather close in time, contradicting mainstream interpretations of the TOAE. The authors of the study conclude that the timeline of the TOAE does not match up with the course of activity of the Karoo-Ferrar magmatic event.The large igneous province also intruded into coal seams, releasing even more carbon dioxide and methane than it otherwise would have. Magmatic sills are also known to have intruded into shales rich in organic carbon, causing additional venting of carbon dioxide into the atmosphere.In addition, possible associated release of deep sea methane clathrates has been potentially implicated as yet another cause of global warming. Episodic melting of methane clathrates dictated by Milankovitch cycles has been put forward as an explanation fitting the observed shifts in the carbon isotope record. Other studies contradict and reject the methane hydrate hypothesis, however, concluding that the isotopic record is too incomplete to conclusively attribute the isotopic excursion to methane hydrate dissociation, that carbon isotope ratios in belemnites and bulk carbonates are incongruent with the isotopic signature expected from a massive release of methane clathrates, that much of the methane released from ocean sediments was rapidly sequestered, buffering its ability to act as a major positive feedback, and that methane clathrate dissociation occurred too late to have had an appreciable causal impact on the extinction event. Hypothetical release of methane clathrates extremely depleted in heavy carbon isotopes has furthermore been considered unnecessary as an explanation for the carbon cycle disruption.It has also been hypothesised that the release of cryospheric methane trapped in permafrost amplified the warming and its detrimental effects on marine life. Obliquity-paced carbon isotope excursions have been interpreted as some researchers as reflective of permafrost decline and consequent greenhouse gas release.Geochemical evidence from what was then the northwestern European epicontinental sea suggests that a shift from cooler, more saline water conditions to warmer, fresher conditions prompted the development of significant density stratification of the water column and induced anoxia. Rising seawater temperatures amidst a transition from icehouse to greenhouse conditions retarded ocean circulation, aiding the establishment of anoxic conditions. 
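The debate over methane clathrates summarised above is, at its core, an isotope mass-balance argument: the more depleted in 13C the carbon source is, the less of it is needed to produce a given negative δ13C excursion. The sketch below works through that two-reservoir mixing calculation with assumed, textbook-style values for the exchangeable carbon reservoir and for the source end-members; none of these figures are taken from this article.

```python
# Two-reservoir carbon isotope mass balance: how much isotopically light carbon
# must be added to shift the ocean-atmosphere delta13C by a given amount?
# Reservoir size and end-member compositions are assumed, illustrative values.

M0 = 38_000.0        # exchangeable ocean-atmosphere carbon, Gt C (assumed)
D0 = 0.0             # initial delta13C of that reservoir, permil (assumed)
EXCURSION = -3.0     # size of the negative excursion to explain, permil

def carbon_required(d_source: float) -> float:
    """Mass of added carbon (Gt C) that shifts the reservoir by EXCURSION permil.

    Solves the mixing equation (M0*D0 + M*d_source) / (M0 + M) = D0 + EXCURSION for M.
    """
    d_final = D0 + EXCURSION
    return M0 * (D0 - d_final) / (d_final - d_source)

for label, d_source in [("biogenic methane (~-60 permil)", -60.0),
                        ("thermogenic or volcanic carbon (~-25 permil)", -25.0)]:
    print(f"{label}: ~{carbon_required(d_source):,.0f} Gt C needed")
```

With these assumptions, a −3‰ excursion requires on the order of 2,000 Gt C if the source is strongly depleted biogenic methane, but substantially more (roughly 5,000 Gt C) if the carbon comes from less depleted thermogenic or volcanic sources, which is why the isotopic signature of the excursion bears on the competing hypotheses discussed above.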
Further consequences resulting from large igneous province activity included increased silicate weathering and an acceleration of the hydrological cycle, as shown by a increased amount of terrestrially derived organic matter found in sedimentary rocks of marine origin during the TOAE. The enhanced continental weathering in turn led to increased eutrophication that helped drive the anoxic event in the oceans. Continual transport of continentally weathered nutrients into the ocean enabled high levels of primary productivity to be maintained over the course of the TOAE. Organic plant matter also entered the marine realm as rising sea levels inundated low-lying lands and transported vegetation outwards into the ocean. An alternate model for the development of anoxia is that epicontinental seaways became salinity stratified with strong haloclines, chemoclines, and thermoclines. This caused mineralised carbon on the seafloor to be recycled back into the photic zone, driving widespread primary productivity and in turn anoxia. The freshening of the Arctic Ocean by way of melting of Northern Hemisphere ice caps was a likely trigger of such stratification and a slowdown of global thermohaline circulation. Extensive organic carbon burial induced by anoxia was a negative feedback loop retarding the otherwise pronounced warming and may have caused global cooling in the aftermath of the TOAE.Euxinia occurred in the northwestern Tethys Ocean during the TOAE, evidenced by enhanced pyrite burial in Zázrivá, Slovakia, enhanced molybdenum burial totalling about 41 Gt of molybdenum, and δ98/95Mo excursions observed in sites in the Cleveland, West Netherlands, and South German Basins. Valdorbia, a site in the Umbria-Marche Apennines, also exhibited euxinia during the anoxic event. There is less evidence of euxinia outside the northwestern Tethys, and it likely only occurred transiently in basins in Panthalassa and the southwestern Tethys. Due to the clockwise circulation of the oceanic gyre in the western Tethys and the rough, uneven bathymetry in the northward limb of this gyre, oxic bottom waters had relatively few impediments to diffuse into the southwestern Tethys, which spared it from the far greater prevalence of anoxia and euxinia that characterised the northern Tethys. The Panthalassan deep water site of Sakahogi was mainly anoxic-ferruginous across the interval spanning the late Pliensbachian to the TOAE, but transient sulphidic conditions did occur during the PTo-E and TOAE. In northeastern Panthalassa, in what is now British Columbia, euxinia dominated anoxic bottom waters.The early stages of the TOAE were accompanied by a decrease in the acidity of seawater following a substantial decrease prior to the TOAE. Seawater pH then dropped close to the middle of the event, strongly acidifying the oceans. The sudden decline of carbonate production during the TOAE is widely believed to be the result of this abrupt episode of ocean acidification. Additionally, the enhanced recycling of phosphorus back into seawater as a result of high temperatures and low seawater pH inhibited its mineralisation into apatite, helping contribute to oceanic anoxia. The abundance of phosphorus in marine environments created a positive feedback loop whose consequence was the further exacerbation of eutrophication and anoxia. 
Effects on biogeochemical cycles Carbon cycle Occurring during a broader, gradual positive carbon isotope excursion as measured by δ13C values, the TOAE is associated with a global negative δ13C excursion recognised in fossil wood, organic carbon, and carbonate carbon in the tenuicostatum ammonite zone of northwestern Europe. The global ubiquity of this negative δ13C excursion has been called into question, however, due to its absence in certain deposits from the time, such as the Bächental bituminous marls, though its occurrence in areas like Greece has been cited as evidence of its global nature. The negative δ13C shift is also known from the Arabian Peninsula, the Ordos Basin, and the Neuquén Basin. The negative δ13C excursion has been found to be up to −8‰ in bulk organic and carbonate carbon, although analysis of compound-specific biomarkers suggests a global value of around −3‰ to −4‰. In addition, numerous smaller-scale carbon isotope excursions are globally recorded on the falling limb of the larger negative δ13C excursion. A positive δ13C excursion, likely resulting from the mass burial of organic carbon during the anoxic event, is known from the subsequent falciferum ammonite zone. Sulphur cycle A positive sulphur isotope excursion in carbonate-associated sulphate occurs synchronously with the positive carbon isotope excursion in carbonate carbon during the falciferum ammonite zone. This positive sulphur isotope excursion has been attributed to the depletion of isotopically light sulphur in the marine sulphate reservoir that resulted from microbial sulphate reduction in anoxic waters. Similar positive sulphur isotope excursions corresponding to the onset of the TOAE are known from pyrites in the Sakahogi and Sakuraguchi-dani localities in Japan, with the Sakahogi site displaying a less extreme but still significant pyritic positive sulphur isotope excursion during the PTo-E. Effects on life Marine invertebrates The extinction event associated with the TOAE primarily affected marine life as a result of the collapse of the carbonate factory. Brachiopods were particularly severely hit, with the TOAE representing one of the most dire crises in their evolutionary history. Brachiopod taxa of large size declined significantly in abundance. Uniquely, the brachiopod genus Soaresirhynchia thrived during the later stages of the TOAE due to its low metabolic rate and slow rate of growth, making it a disaster taxon. The species S. bouchardi is known to have been a pioneer species that colonised areas denuded of brachiopods in the northwestern Tethyan region. Ostracods also suffered a major diversity loss, with almost all ostracod clades' distributions during the time interval corresponding to the serpentinum zone shifting towards higher latitudes to escape intolerably hot conditions near the Equator. Bivalves likewise experienced a significant turnover. The decline of bivalves exhibiting high endemism with narrow geographic ranges was particularly severe. At Ya Ha Tinda, a replacement of the pre-TOAE bivalve assemblage by a smaller, post-TOAE assemblage occurred, while in the Cleveland Basin, the inoceramid Pseudomytiloides dubius experienced the Lilliput effect. Ammonoids, having already experienced a major morphological bottleneck during the Gibbosus Event, about a million years before the Toarcian extinction, suffered further losses in the Early Toarcian diversity collapse. Belemnite richness in the northwestern Tethys dropped during the PTo-E but slightly increased across the TOAE.
Belemnites underwent a major change in habitat preference from cold, deep waters to warm, shallow waters. Their average rostrum size also increased, though this trend varied considerably between belemnite lineages. The Toarcian extinction was catastrophic for corals; 90.9% of all Tethyan coral species and 49% of all genera were wiped out. Other affected invertebrate groups included echinoderms, radiolarians, dinoflagellates, and foraminifera. Trace fossils, an indicator of bioturbation and ecological diversity, became markedly less diverse following the TOAE.

Carbonate platforms collapsed during both the PTo-E and the TOAE. Enhanced continental weathering and nutrient runoff were the dominant drivers of carbonate platform decline during the PTo-E, while the main drivers during the TOAE were heightened storm activity and a decrease in the pH of seawater.

The recovery from the mass extinction among benthos commenced with the recolonisation of barren locales by opportunistic pioneer taxa. Benthic recovery was slow, being regularly set back by recurrent episodes of oxygen depletion that continued for hundreds of thousands of years after the main extinction interval. Many marine invertebrate taxa found in South America migrated through the Hispanic Corridor into European seas after the extinction event, aided in their dispersal by higher sea levels.

Marine vertebrates
The TOAE had minor effects on marine reptiles, in stark contrast to the major impact it had on many clades of marine invertebrates. In fact, in the Southwest German Basin, ichthyosaur diversity was higher after the extinction interval, although this may be in part a sampling artefact resulting from a sparse Pliensbachian marine vertebrate fossil record.

Terrestrial vertebrates
The TOAE is suggested to have caused the extinction of various clades of dinosaurs, including coelophysids, dilophosaurids, and many basal sauropodomorph clades, as a consequence of the remodelling of terrestrial ecosystems caused by global climate change. Some heterodontosaurids and thyreophorans also perished in the extinction event. In the wake of the extinction event, many derived clades of ornithischians, sauropods, and theropods emerged, with most of these post-extinction clades greatly increasing in size relative to dinosaurs before the TOAE.

Terrestrial plants
The volcanogenic extinction event initially impacted terrestrial ecosystems more severely than marine ones. A shift from a higher-diversity assemblage of lycophytes, conifers, seed ferns, and wet-adapted ferns towards a low-diversity assemblage of cheirolepid conifers, cycads, and Cerebropollenites-producers adapted to high aridity is observed in the palaeobotanical and palynological record over the course of the TOAE. The coincidence of the zenith of Classopolis and the decline of seed ferns and spore-producing plants with increased mercury loading implicates heavy metal poisoning as a key contributor to the floristic crisis during the Toarcian mass extinction. Poisoning by mercury, along with chromium, copper, cadmium, arsenic, and lead, is speculated to be responsible for heightened rates of spore malformation and dwarfism concomitant with enrichments in all these toxic metals.

Geologic effects
The TOAE was associated with widespread phosphatisation of marine fossils, believed to result from the warming-induced increase in weathering that increased phosphate flux into the ocean.
This produced exquisitely preserved lagerstätten across the world, such as Ya Ha Tinda, Strawberry Bank, and the Posidonia Shale.

As is common during anoxic events, black shale deposition was widespread during the deoxygenation events of the Toarcian. Toarcian anoxia was responsible for the deposition of commercially extracted oil shales, particularly in China.

Additionally, the Toarcian was punctuated by intervals of extensive kaolinite enrichment. These kaolinites correspond to negative oxygen isotope excursions and high Mg/Ca ratios and are thus reflective of the climatic warming events that characterised much of the Toarcian.

Palaeogeographic changes
The intertropical convergence zone migrated southwards across southern Gondwana, turning much of the region more arid. This aridification was interrupted, however, in the spinatus ammonite biozone and across the Pliensbachian-Toarcian boundary itself.

The large rise in sea levels resulting from the intense global warming led to the formation of the Laurasian Seaway, which enabled cool, low-salinity water to flow from the Arctic Ocean into the Tethys Ocean. The opening of this seaway may have partially ameliorated the oppressively anoxic conditions that were widespread across much of the Tethys.

The enhanced hydrological cycle during early Toarcian warming caused lakes to grow in size. During the anoxic event, the Sichuan Basin was transformed into a giant lake believed to have been roughly three times the size of modern-day Lake Superior. Lacustrine sediments deposited by this lake are represented by the Da’anzhai Member of the Ziliujing Formation. Roughly 460 gigatons (Gt) of organic carbon and 1,200 Gt of inorganic carbon were likely sequestered by this lake over the course of the TOAE.

Comparison with present global warming
The TOAE and the Palaeocene-Eocene Thermal Maximum have been proposed as analogues to modern anthropogenic global warming based on the comparable quantity of greenhouse gases released into the atmosphere in all three events. Some researchers argue that evidence for a major increase in Tethyan tropical cyclone intensity during the TOAE suggests that a similar increase in the magnitude of tropical storms is likely to occur as a consequence of present climate change.

See also
Bonarelli Event
Selli Event