emissions trading scheme in south korea
South Korea’s Emissions Trading Scheme (KETS) is the second largest in scale after the European Union Emissions Trading Scheme and was launched on January 1, 2015. South Korea is the second country in Asia to initiate a nationwide carbon market, after Kazakhstan. In keeping with the country’s pledge made at the Copenhagen Accord of 2009, the South Korean government aims to reduce its greenhouse gas (GHG) emissions by 30% below its business-as-usual scenario by 2020. The government has officially adopted a cap-and-trade system, and the scheme applies to over 525 companies, which account for approximately 68% of the nation’s GHG output. The operation is divided into three periods. The first and second phases consist of three years each, 2015 to 2017 and 2018 to 2020. The final phase spans the next five years, from 2021 to 2025. The cap-and-trade system is a carbon pricing tool that has been adopted by several countries to mitigate greenhouse gas emissions through a market mechanism. It entails a market for trading emission permits, which allow participating businesses or countries to emit a given amount of greenhouse gases. A cap set by the government defines the maximum level of total emissions permitted during a certain time period. The South Korean government set the emissions cap for the first year of implementation (2015) at 573 MtCO2e. The major objectives of the KETS are to place South Korea at the forefront of the global effort to reduce GHG emissions and to develop its market competitiveness in the clean energy sector. As one of the top 10 largest contributors to global greenhouse gas emissions and a nation with the highest growth rate in GHG emissions, South Korea has become increasingly aware of its carbon footprint over the years. The country grows more vulnerable to climate change as its average temperature has risen by 1.5 degrees Celsius, causing more frequent natural disasters. Furthermore, the South Korean government aims to cut back its reliance on imported fossil fuel energy, which accounts for roughly 97% of its primary energy consumption. Lastly, by implementing the emissions trading scheme, the government hopes to develop its green industries and increase its global share of the clean energy market. Background Government efforts to adopt environmentally friendly energy systems began in the 1990s. The aftermath of the global recession in 2008 increased awareness of energy self-sufficiency in the country, which led to the Green Growth Agenda campaign under former President Lee Myung-bak’s administration. This agenda focused on three pillars: energy sufficiency, economic growth and environmental protection. In 2009, UN member states convened at the United Nations Framework Convention on Climate Change conference in Copenhagen and signed the Copenhagen Accord, at which South Korea pledged to reduce its emissions by 30% below its projected business-as-usual scenario. Copenhagen Accord In 2009, member states convened at the United Nations Framework Convention on Climate Change (UNFCCC) conference and signed the Copenhagen Accord. South Korea pledged to reduce its emissions by 30% below its projected business-as-usual scenario by the year 2020. There were two main motives behind this voluntary commitment. At the domestic level, former President Lee Myung-bak had pledged a 7% annual growth rate and a green economic agenda as part of his election campaign.
The Lee administration considered the ETS a viable mechanism for green growth and for increasing Korea’s competitiveness in the global green market. At the international level, Korea was under pressure from the international community to strengthen its efforts against climate change. Since South Korea was not listed as an Annex I country in the Kyoto Protocol, it had been exempt from emission reduction obligations. However, considering the size of its economy and its high position in emission rankings, the country could not avoid international pressure for a stronger commitment. The voluntary pledge at Copenhagen received critical views at home. The feasibility of establishing a legally binding system of emissions reduction and the potential for cultivating the green sector as an economic growth engine were put into question. The Framework Act on Low Carbon, Green Growth Also referred to as the Framework Act, this act has provided the legal groundwork for the emissions trading scheme since 2010. Article 46, clause 1 states: "The Government may operate a system for trading emissions of greenhouse gases by utilizing market functions in order to accomplish its target of reduction of greenhouse gases." Target Management System Under the provisions of Article 42, the Framework Act introduced the Target Management System (TMS) in 2012, which served as a transitional step before implementing the emissions trading scheme. Business entities subject to the TMS were obligated to reduce both direct and indirect GHG emissions under the supervision of the Ministry of Environment. The key difference between the TMS and the ETS was that the TMS lacked a market function and offered no incentives for compliance. It nevertheless served a vital role in collecting emissions data and strengthening the foundations of the ETS. National Emissions Permit Allocation Plan The Ministry of Environment was appointed to devise the allocation plan specifying the following elements: the total amount of emission permits to be issued for each phase; the standard for distributing allowances in each sector; the types of business entities involved; the criteria and incentives for early emission reduction among participants; and the banking and borrowing of permits and offsets. The Allocation Plan focused on the outline and execution of the ETS, while a separate Master Plan was devised to integrate the ETS with other abatement policies. Timeline of the Workings and Implementation of KETS 2010: At the Climate Conference in Copenhagen, South Korea pledges to reduce its GHG emissions 30% below its BAU level by 2020. 2012: The Act on the Allocation and Trading of Greenhouse Gas Emission Permits (ETS Act) and its Enforcement Decree are legislated. 2014: January - The South Korean government appoints the Korea Exchange (KEX) to oversee the exchange of emission permits. The Ministry of Strategy and Finance (MOSF) releases the first Master Plan, outlining the legal execution of the Allocation Plan during the first commitment phase of 2015-2017. June - The Ministry of Environment (MOE) releases the National Emissions Permit Allocation Plan, including details of operation during Phase 1 and the total number of allowances to be circulated along with their allocation method. September - The government holds the first meeting of the Emission Permits Allocation Committee (EPAC), which revises and approves the Allocation Plan. The plan is finalized and ready for execution. 2015: The Emissions Trading Scheme is officially launched in January.
Mechanism To accomplish its Copenhagen pledge, South Korea must limit its emissions to no more than 543 million tCO2e by 2020. Depending on the method used to calculate the business-as-usual scenario, this corresponds to a required reduction of 233.1 million tCO2e or more. Six greenhouse gases are covered by the scheme: carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride. More than 525 business entities are subject to the ETS, with specific emission targets set for each sector. Like the European Union's scheme, South Korea's emissions trading scheme is divided into three phases, with differing time spans and targets in each phase. The country is currently in its first phase, which started in 2015 and lasts until 2017. The second phase occupies a three-year span from 2018 to 2020, and the third and final phase runs from 2021 to 2025. Allocation of carbon allowances During Phase 1, all permits are allocated for free, without an auction system. Three sectors, namely grey clinker, oil refining and aviation, receive free allowances based on analysis of their previous activity. A reserve holds approximately 5% of the total allowances for market stabilization and for new entrants. Any surplus of allowances not allocated to specific entities is also kept in the reserve. In the second phase, from 2018 to 2020, 97% of the permits will be freely allocated and 3% auctioned. In the third and final five-year period, between 2021 and 2025, less than 90% will be freely allocated and the rest will be auctioned. To lessen the burden on particular businesses, those in energy-intensive and trade-exposed (EITE) sectors will be granted 100% of their allowances for free in all three phases. Compliance measures The Certification Committee of the Ministry of Environment holds supervisory authority to review the annual emissions reports, and the emissions are verified by a third party. The penalty for exceeding the cap will be no more than three times the average market price of allowances in the compliance year. Participating facilities are required to prepare a yearly emissions inventory, verified by a third party, which is reported to the government. These reports are recorded in the Emission Trading Registry System (ETRS). Banking and Borrowing Offsets and Credits Implementation of Phase 1 Immediate Results Among the 525+ business entities that were subject to the newly introduced emissions trading scheme in South Korea, more than 500 were able to achieve their targets by the deadline of June 30, 2015. Opposition The scheme has faced significant opposition from the start. The Federation of Korean Industries and the Korea Chamber of Commerce filed a report demanding the postponement of the implementation year from 2013 to 2015. They expressed concern about losing global competitiveness due to increased production costs. Furthermore, more than 40 lawsuits were filed against the government regarding permit allocation. Sub-industry groups such as petrochemical and non-ferrous producers complained that the government did not allocate sufficient permits and did not properly explain the unequal distribution. References "Target Management Scheme." Greenhouse Gas Inventory and Research Center of Korea. Greenhouse Gas Inventory and Research Center of Korea, n.d. Web. 9 Oct. 2016.
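The compliance rule described above, under which a participant must cover its verified emissions with allowances and faces a penalty of at most three times the average market price for any shortfall, can be illustrated with a short sketch in Python. The firm's figures and the allowance price used here are hypothetical, not actual KETS data.

# Illustrative sketch of KETS-style compliance accounting. All numbers are
# hypothetical; the penalty rule (at most three times the average market price
# per missing allowance) follows the description above.

def settle_compliance(verified_emissions_t, allowances_held_t, avg_market_price_krw):
    """Return the allowance shortfall (tCO2e) and the maximum penalty exposure (KRW)."""
    shortfall = max(0.0, verified_emissions_t - allowances_held_t)
    max_penalty = shortfall * 3 * avg_market_price_krw  # upper bound described above
    return shortfall, max_penalty

# Hypothetical firm: 1.05 MtCO2e verified, 1.00 Mt of allowances held,
# average allowance price of 10,000 KRW per tCO2e in the compliance year.
shortfall, max_penalty = settle_compliance(1_050_000, 1_000_000, 10_000)
print(shortfall, max_penalty)  # 50,000 tCO2e short, up to 1,500,000,000 KRW

A firm that banks surplus allowances or buys offsets before the surrender deadline would bring the shortfall to zero and owe no penalty, which is the incentive the market mechanism is intended to create.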
thailand greenhouse gas management organization
The Thailand Greenhouse Gas Management Organisation (TGO) (Thai: องค์การบริหารจัดการก๊าซเรือนกระจก (องค์การมหาชน), RTGS: Ongkan Borihan Chatkan Kat Ruean Krachok (Ongkan Maha Chon)); or อบก., RTGS: oboko) is an autonomous governmental organization under the Ministry of Natural Resources and Environment (MNRE) established by the royal decree, Establishment of Thailand Greenhouse Gas Management Organisation (Public Organisation) BE 2550 (2007). It is responsible for reducing greenhouse gas (GHG) emissions in Thailand. The royal decree was effective as of 7 July 2007. Management and budget As of 2020 the executive director of the Thailand Greenhouse Gas Management Organization is Mr.Kiatchai Maitriwong. TGO's budget for FY2019 is 158.1 million baht. National goal Thailand signed the Paris Agreement on climate change on 22 April 2016. It submitted an "Intended Nationally Determined Contribution" (INDC) target for greenhouse gas (GHG) reductions between 7–20% of the "business as usual" (BAU) scenario by 2020. By 2030 Thailand has pledged to reduce GHG emissions by 20–25% from the BAU baseline.: 35 Objectives Section 7 of the decree establishing TGO prescribed TGO's objectives: to analyse, scrutinise, and collect views and opinions in relation to the approval of projects, as well as to pursue and appraise authorised projects to promote the market for greenhouse gas emissions trading to be an information centre for greenhouse gas operations to create a database of authorised projects and the approved trading of greenhouse gas emissions, in accordance with policy determined by the national board to enhance the efficiency as well as instruct public agencies and private bodies in greenhouse gas operations to manage greenhouse gas management public relations to support climate change operations Progress According to the Bangkok Post, in 2006, the year before TGO was established, Thailand emitted 232 million tonnes (Mt) of carbon dioxide (CO2), 44 million tonnes of that number from burning coal. By 2016, Thailand's CO2 emissions had risen to 342 million tonnes, 65 million tonnes of it from coal burning. The International Energy Agency's (IEA) numbers differ: it reports total emissions figures of 202 Mt in 2006 and 245 Mt in 2016. As of 2018, Thailand's greenhouse gas emissions continue to increase. A slight reduction in the annual GHG growth rate of 3.3% in 2014 is attributed to greenhouse gas reduction measures and sequestration by forests.: 15 References External links Works related to 2007 Thailand Greenhouse Gas Management Organisation Establishment Decree at Wikisource
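Taken at face value, the two sets of emissions figures quoted above imply different average annual growth rates over the 2006-2016 period; the short sketch below computes both. This is simple arithmetic on the numbers as quoted, not additional data.

# Implied compound annual growth rate (CAGR) of Thailand's CO2 emissions from
# the two data sets quoted above (2006 versus 2016, a ten-year span).

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

for source, (mt_2006, mt_2016) in {"Bangkok Post": (232, 342), "IEA": (202, 245)}.items():
    print(f"{source}: {cagr(mt_2006, mt_2016, 10):.1%} per year")
# The Bangkok Post figures imply roughly 4.0% per year; the IEA figures roughly 1.9% per year.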
criticism of suvs
Sport utility vehicles (SUVs) have been criticized for a variety of environmental and safety-related reasons. They generally have poorer fuel efficiency and require more resources to manufacture than smaller vehicles, thus contributing more to climate change and environmental degradation. Their higher center of gravity significantly increases their risk of rollovers. Their larger mass increases their momentum, which results in a larger braking distance and more damage to other road users in collisions. Their higher front-end profile reduces visibility and makes them at least twice as likely to kill pedestrians they hit. Additionally, the psychological sense of security they provide influences drivers to drive less cautiously or rely on their car for their perceived safety, rather than their own driving. Safety SUVs are generally safer to their occupants and more dangerous to other road users than mid-size cars. A 2021 study by the University of Illinois Springfield showed, for example, that SUVs are 8-times more likely to kill children in an accident than passenger cars, and multiple times more lethal to adult pedestrians and cyclists. When it comes to mortality for vehicle occupants, four-door minicars have a death rate (per 100,000 registration years rather than mileage) of 82, compared with 46 for very large four-doors. This survey reflects the effects of both vehicle design and driving behaviour. Drivers of SUVs, minivans, and large cars may drive differently from the drivers of small or mid-size cars, and this may affect the survey result. Rollover A high center of gravity makes a vehicle more prone to rollover accidents than lower vehicles, especially if the vehicle leaves the road, or if the driver makes a sharp turn during an emergency maneuver. Figures from the US National Highway Traffic Safety Administration show that most passenger cars have about a 10% chance of rollover if involved in a single-vehicle crash, while SUVs have between 14% and 23% (varying from a low of 14% for the all-wheel-drive (AWD) Ford Edge to a high of 23% for the front-wheel-drive (FWD) Ford Escape). Many modern SUVs are equipped with electronic stability control (ESC) to prevent rollovers on flat surfaces, but 95% of rollovers are "tripped", meaning that the vehicle strikes something low, such as a curb or shallow ditch, causing it to tip over.According to NHTSA data, early SUVs were at a disadvantage in single-vehicle accidents (such as when the driver falls asleep or loses control swerving around a deer), which involve 43% of fatal accidents, with more than double the chance of rolling over. This risk related closely to overall US motor vehicle fatality data, showing that SUVs and pickups generally had a higher fatality rate than cars of the same manufacturer.According to Consumer Reports, as of 2009, SUV rollover safety had improved to the extent that on average there were slightly fewer driver fatalities per million vehicles, due to rollovers, in SUVs as opposed to cars. By 2011 the IIHS reported that "drivers of today's SUVs are among the least likely to die in a crash". Poor Handling Vehicles that are larger and heavier in size like SUVs require large amounts of braking power and more powerful steering assists to aid in turning the wheels more quickly. Because of this, the reaction of an SUV to sudden braking and steering maneuvers will be very different to drivers who are more accustomed to lighter vehicles. 
This is due to the combination of a vastly higher center of gravity and excessive weight severely affecting the cornering ability of SUVs with rollovers much more likely than cars or minivans, even at low speeds. Construction Heavier-duty SUVs are typically designed with a truck-style chassis with separate body, while lighter-duty (including cross-over) models are more similar to cars, which are typically built with a unitary construction (in which the body actually forms the structure). Originally designed and built to be work vehicles using a truck chassis, SUVs were not comprehensively redesigned to be safely used as passenger vehicles. The British television programme Fifth Gear staged a 40 mph (64 km/h) crash between a first generation (1989–98) Land Rover Discovery with a separate chassis and body, and a modern Renault Espace IV with monocoque (unit) design. The older SUV offered less protection for occupants than the modern multi-purpose vehicle with unitary construction. In some SUV fatalities involving truck-based construction, lawsuits against the automakers "were settled quietly and confidentially, without any public scrutiny of the results—or the underlying problems with SUV design", thus hiding the danger of vehicles such as the Ford Bronco and Explorer compared to regular passenger cars. Risk to other road users Because of greater height and weight and rigid frames, it is contended by Malcolm Gladwell, writing in The New Yorker magazine, that SUVs can affect traffic safety. This height and weight, while potentially giving an advantage to occupants of the vehicle, may pose a risk to drivers of smaller vehicles in multi-vehicle accidents, particularly side impacts.The initial tests of the Ford Excursion were "horrifying" for its ability to vault over the hood of a Ford Taurus. The big SUV was modified to include a type of blocker bar suggested by the French transportation ministry in 1971, a kind of under-vehicle roll bar designed to keep the large Ford Excursion from rolling over cars that were hit by it. The problem is "impact incompatibility", where the "hard points" of the end of chassis rails of SUVs are higher than the "hard points" of cars, causing the SUV to override the engine compartment and crumple zone of the car. There have been few regulations covering designs of SUVs to address the safety issue. The heavy weight is a risk factor with very large passenger cars, not only with SUVs. The typically higher SUV bumper heights and those built using stiff truck-based frames, also increases risks in crashes with passenger cars. The Mercedes ML320 was designed with bumpers at the same height as required for passenger cars.In parts of Europe, effective 2006, the fitting of metal bullbars, also known as grille guards, brush guards, and push bars, to vehicles such as 4x4s and SUVs are only legal if pedestrian-safe plastic bars and grilles are used. Bullbars are often used in Australia, South Africa, and parts of the United States to protect the vehicle from being disabled should it collide with wildlife. Safety improvements during the 2010s to the present led automobile manufacturers to make design changes to align the energy-absorbing structures of SUVs with those of cars. As a result, car occupants were only 28 percent more likely to die in collisions with SUVs than with cars between 2013 and 2016, compared with 59 percent between 2009 and 2012, according to the IIHS. 
Visibility and backover deaths Larger vehicles can create visibility problems for other road users by obscuring their view of traffic lights, signs, and other vehicles on the road, plus the road itself. Depending on the design, drivers of some larger vehicles may themselves suffer from poor visibility to the side and the rear. Poor rearward vision has led to many "backover deaths" where vehicles have run over small children when backing out of driveways. The problem of backover deaths has become so widespread that reversing cameras are being installed on some vehicles to improve rearward vision.While SUVs are often perceived as having inferior rearward vision compared with regular passenger cars, this is not supported by controlled testing which found poor rearward visibility was not limited to any single vehicle class. Australia's NRMA motoring organisation found that regular passenger cars commonly provided inferior rearward vision compared to SUVs, both because of the prevalence of reversing cameras on modern SUVs and the shape of many popular passenger cars, with their high rear window lines and boots (trunks) obstructing rearward vision. In NRMA testing, two out of 42 SUVs (5%) and 29 out of 163 (18%) regular cars had the worst rating (>15-metre blind spot). Of the vehicles that received a perfect 0-metre blind spot rating, 11 out of 42 (26%) were SUVs and eight out of 163 (5%) were regular passenger cars. All of the "perfect score" vehicles had OEM reversing cameras. Wide bodies in narrow lanes The wider bodies of larger vehicles mean they occupy a greater percentage of road lanes. This is particularly noticeable on the narrow roads sometimes found in dense urban areas or rural areas in Europe. Wider vehicles may also have difficulty fitting in some parking spaces and encroach further into traffic lanes when parked alongside the road. Psychology SUV safety concerns are affected by a perception among some consumers that SUVs are safer for their drivers than standard cars, and that they need not take basic precautions as if they were inside a "defensive capsule". According to G. C. Rapaille, a psychological consultant to automakers, many consumers feel safer in SUVs simply because their ride height makes "[their passengers] higher and dominate and look down [sic]. That you can look down [on other people] is psychologically a very powerful notion." This and the height and weight of SUVs may lead to consumers' perception of safety.Gladwell also noted that SUV popularity is also a sign that people began to shift automobile safety focus from active to passive, to the point that in the US potential SUV buyers will give up an extra 30 ft (9.1 m) of braking distance because they believe they are helpless to avoid a tractor-trailer hit on any vehicle. The four-wheel drive option available to SUVs reinforced the passive safety notion. To support Gladwell's argument, he mentioned that automotive engineer David Champion noted that in his previous driving experience with Range Rover, his vehicle slid across a four-lane road because he did not perceive the slipping that others had experienced. Gladwell concluded that when a driver feels unsafe when driving a vehicle, it makes the vehicle safer. When a driver feels safe when driving, the vehicle becomes less safe.Stephen Popiel, a vice president of Millward Brown Goldfarb automotive market-research company, noted that for most automotive consumers, safety has to do with the notion that they are not in complete control. 
Gladwell argued that many "accidents" are not outside the driver's control, being influenced by factors such as drunk driving, seat belt use, and the driver's age and experience. Sense of security Studies into the safety of SUVs have reached mixed conclusions. In 2004, the National Highway Traffic Safety Administration released results of a study indicating that drivers of SUVs were 11% more likely to die in an accident than people in cars. These figures were not driven by inherent vehicle safety alone but also reflected a perceived increase in security on the part of drivers. For example, US SUV drivers were found to be less likely to wear their seatbelts and showed a tendency to drive more recklessly (most sensationally perhaps, in a 1996 finding that SUV drivers were more likely to drive drunk). Actual driver death rates are monitored by the IIHS and vary between models. These statistics do show that average driver death rates in the US were lower in larger vehicles from 2002 to 2005, and that there was significant overlap between vehicle categories. The IIHS report states, "Pound for pound across vehicle types, cars almost always have lower death rates than pickups or SUVs." The NHTSA recorded occupant (driver or passenger) fatalities per 100 million vehicle miles traveled at 1.16 in 2004 and 1.20 in 2003 for light trucks (SUVs, pick-ups and minivans), compared to 1.18 in 2004 and 1.21 in 2003 for passenger cars (all other vehicles). Marketing practices The marketing techniques used to sell SUVs have come under criticism. Advertisers and manufacturers alike have been assailed for greenwashing. Critics have cited SUV commercials that show the product being driven through a wilderness area, even though relatively few SUVs are ever driven off-road. Fuel economy The recent growth of SUVs is sometimes given as one reason why the population has begun to consume more gasoline than in previous years. SUVs generally use more fuel than passenger vehicles or minivans with the same number of seats. Additionally, SUVs up to 8,500 pounds GVWR are classified by the US government as light trucks, and thus are subject to the less strict light truck standard under the Corporate Average Fuel Economy (CAFE) regulations, while SUVs which exceed 8,500 pounds GVWR have been entirely exempt from CAFE standards. This provides less incentive for US manufacturers to produce more fuel-efficient models. As a result of their off-road design, SUVs may have fuel-inefficient features. A high profile increases wind resistance, and greater mass requires heavier suspensions and larger drivetrains, both of which contribute to increased vehicle weight. Some SUVs come with tires designed for off-road traction rather than low rolling resistance. Fuel economy factors include the following (a numerical illustration of these power terms appears at the end of this article):
High mass (compared to the average load), causing high energy demand in transient operation (in cities): P_accel = m_vehicle · a · v, where P_accel stands for power, m_vehicle for the vehicle mass, a for acceleration and v for the vehicle velocity.
High cross-sectional area, causing very high drag losses, especially when driven at high speed: P_drag = A_cross · cw_vehicle · (ρ_air · v_air³) / 2, where P_drag stands for the power, A_cross for the cross-sectional area of the vehicle, cw_vehicle for the vehicle's drag coefficient, ρ_air for the density of the air and v_air for the relative velocity of the air (including wind).
High rolling resistance, due to all-terrain tires (worse still if low tire pressure is needed off-road) and the high vehicle mass that drives the rolling resistance: P_roll = μ_roll · m_vehicle · g · v, where μ_roll stands for the rolling resistance factor, m_vehicle for the vehicle mass and g for the gravitational acceleration.
Based on average data for vehicle types sold in the US, drag resistance for SUVs may be 30% higher than for family sedans (assuming the same drag coefficient, which is not a safe assumption), and the acceleration force has to be 35% larger for the same acceleration (which again is not a safe assumption). Pollution Because SUVs tend to use more fuel (mile for mile) than cars with the same engine type, they release higher volumes of pollutants (particularly carbon dioxide) into the atmosphere. This has been confirmed by LCA (life cycle assessment) studies, which quantify the environmental impacts of products such as cars, often from the time they are produced until they are recycled. One LCA study, which took into account the production of greenhouse gases, carcinogens, and waste, found that exclusive cars, sports cars and SUVs were "characterized by a poor environmental performance." Another study found that family-size internal combustion vehicles still produced fewer emissions than a hybrid SUV. Various eco-activist groups, such as the Earth Liberation Front or Les Dégonflés, have targeted SUV dealerships and privately owned SUVs due to concern over increased fuel usage. In the US, light trucks and SUVs are held to a less strict pollution control standard than passenger cars. In response to the perception that a growing share of fuel consumption and emissions is attributable to these vehicles, the Environmental Protection Agency ruled that by the model year 2009, emissions from all light trucks and passenger cars would be regulated equally. The British national newspaper The Independent reported on a study carried out by CNW Marketing Research which suggested that CO2 emissions alone do not reflect the true environmental costs of a car. The newspaper reported: "CNW moves beyond the usual CO2 emissions figures and uses a 'dust-to-dust' calculation of a car's environmental impact, from its creation to its ultimate destruction." The newspaper also reported that the CNW research put the Jeep Wrangler above the Toyota Prius and other hybrid cars as the greenest car that could be bought in the US. However, it was noted that Toyota disputed the proportion of energy used to make a car compared with how much the vehicle uses during its life; CNW said 80% of the energy a car uses is accounted for by manufacture and 20% in use. Toyota claimed the reverse. The report has raised controversy.
When Oregon radio station KATU asked for comment on the CNW report, Professor John Heywood (with the Massachusetts Institute of Technology (MIT)) saw merit in the study saying, "It raises...some good questions" but "I can only guess at how they did the detailed arithmetic.... The danger is a report like this will discourage the kind of thinking we want consumers to do – should I invest in this new technology, should I help this new technology?" The Rocky Mountain Institute alleged that even after making assumptions that would lower the environmental impact of the Hummer H3 relative to the Prius, "the Prius still has a lower impact on the environment. This indicates that the unpublished assumptions and inputs used by CNW must continue the trend of favoring the Hummer or disfavoring the Prius. Since the researchers at Argonne Labs performed a careful survey of all recent life cycle analysis of cars, especially hybrids, our research underlines the deep divide between CNW's study and all scientifically reviewed and accepted work on the same topic."A report done by the Pacific Institute alleges "serious biases and flaws" in the study published by CNW, claiming that "the report's conclusions rely on faulty methods of analysis, untenable assumptions, selective use and presentation of data, and a complete lack of peer review."For his part, CNW's Art Spinella says environmental campaigners may be right about SUVs, but hybrids are an expensive part of the automotive picture. The vehicle at the top of his environmentally-friendly list is the Scion xB because it is easy to build, cheap to run and recycle, and carries a cost of 49 cents a mile over its lifetime. "I don't like the Hummer people using that as an example to justify the fact that they bought a Hummer," he said. "Just as it's not for Prius owners to necessarily believe that they're saving the entire globe, the environment for the entire world, that's not true either."In the June 2008 "From Dust to Dust" study, the Prius cost per lifetime-mile fell 23.5%, to $2.19 per lifetime mile, while the H3 cost rose 12.5%, to $2.33 per lifetime-mile. Actual results depend upon the distance driven during the vehicle's life. Greenhouse gas emissions Unmodified, SUVs emit 700 megatonnes of carbon dioxide per year, which causes global warming. Whereas SUVs can be electrified, their (manufacturing) emissions will always be larger than smaller electric cars. They can also be converted to run on a variety of alternative fuels, including hydrogen. That said, the vast majority of these vehicles are not converted to use alternative fuels. Weight and size The weight of a passenger vehicle has a direct statistical contribution to its driver fatality rate according to Informed for LIFE, more weight being beneficial (to the occupant).The length and especially width of large SUVs is controversial in urban areas. In areas with limited parking spaces, large SUV drivers have been criticized for parking in stalls marked for compact cars or that are too narrow for the width of larger vehicles. Critics have stated that this causes problems such as the loss of use of the adjacent space, reduced accessibility into the entry of an adjacent vehicle, blockage of driveway space, and damage inflicted, by the door, to adjacent vehicles. 
As a backlash against the alleged space consumption of SUVs, the city of Florence, has restricted access of SUVs to the center, and Paris and Vienna have debated banning them altogether.Despite common perceptions, SUVs often have equivalent or less interior storage space than wagons. While handling worse and burning more fuel due to high centre of gravity and weight respectively. Activism Siân Berry was a founder of the Alliance against Urban 4×4s, which began in Camden in 2003 and became a national campaign demanding measures to stop 4×4s (or sport utility vehicles) "taking over our cities". The campaign was known for its "theatrical demonstrations" and mock parking tickets, credited to Berry (although now adapted by numerous local groups).In Sweden, a group which called themselves "Asfaltsdjungelns indianer" (en: The Indians of the asphalt jungle), carried out actions in Stockholm, Gothenburg, Malmö and a number of smaller cities. The group, created in 2007, released the air from the tires on an estimated 300 SUVs during their first year. Their mission was to highlight the high fuel consumption of SUVs, as they thought that SUV owners did not have the right to drive such big vehicles at the expense of others. The group received some attention in media, and declared a truce in December 2007.Similar activist groups, most likely inspired by the Swedish group, have carried out actions in Denmark, Scotland, and Finland. See also Automobile safety Environmental impact of transport Air pollution Environmental aspects of the electric car Gas-guzzler Mobile source air pollution Exhaust gas – Gases emitted as a result of fuel reactions in combustion engines == References ==
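As a concrete illustration of the three power terms given in the Fuel economy section above, the sketch below evaluates P_accel, P_drag and P_roll for an assumed mid-size sedan and an assumed large SUV. The vehicle parameters are illustrative assumptions, not measured figures for any particular model.

# Sketch of the three power demands from the Fuel economy section: acceleration,
# aerodynamic drag and rolling resistance. Vehicle parameters are assumed values.

RHO_AIR = 1.2   # air density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def power_demand(mass_kg, frontal_area_m2, cd, mu_roll, speed_ms, accel_ms2):
    p_accel = mass_kg * accel_ms2 * speed_ms                      # P_accel = m * a * v
    p_drag = 0.5 * RHO_AIR * cd * frontal_area_m2 * speed_ms**3   # P_drag = A * cw * rho * v^3 / 2
    p_roll = mu_roll * mass_kg * G * speed_ms                     # P_roll = mu * m * g * v
    return p_accel, p_drag, p_roll

# 100 km/h (27.8 m/s) with a mild 0.5 m/s^2 acceleration; all parameters assumed.
vehicles = {
    "sedan (assumed)": dict(mass_kg=1500, frontal_area_m2=2.2, cd=0.30, mu_roll=0.010),
    "SUV (assumed)": dict(mass_kg=2200, frontal_area_m2=2.9, cd=0.38, mu_roll=0.013),
}
for name, params in vehicles.items():
    p_a, p_d, p_r = power_demand(speed_ms=27.8, accel_ms2=0.5, **params)
    print(f"{name}: accel {p_a/1e3:.1f} kW, drag {p_d/1e3:.1f} kW, rolling {p_r/1e3:.1f} kW")

With these assumed parameters, the SUV's steady-speed demand (drag plus rolling resistance) comes out roughly three quarters higher than the sedan's, consistent with the qualitative point made in that section.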
runaway greenhouse effect
A runaway greenhouse effect occurs when a planet's atmosphere contains greenhouse gas in an amount sufficient to block thermal radiation from leaving the planet, preventing the planet from cooling and from having liquid water on its surface. A runaway version of the greenhouse effect can be defined by a limit on a planet's outgoing longwave radiation which is asymptotically reached due to higher surface temperatures evaporating water into the atmosphere, increasing its optical depth. This positive feedback means the planet cannot cool down through longwave radiation (via the Stefan–Boltzmann law) and continues to heat up until it can radiate outside of the absorption bands of the water vapour. The runaway greenhouse effect is often formulated with water vapour as the condensable species. The water vapour reaches the stratosphere and escapes into space via hydrodynamic escape, resulting in a desiccated planet. This likely happened in the early history of Venus. A runaway greenhouse effect would have virtually no chance of being caused by people. Venus-like conditions on Earth require a large long-term forcing that is unlikely to occur until the sun brightens by some tens of percents, which will take a few billion years. History While the term was coined by Caltech scientist Andrew Ingersoll in a paper that described a model of the atmosphere of Venus, the initial idea of a limit on terrestrial outgoing infrared radiation was published by George Simpson in 1927. The physics relevant to the, later-termed, runaway greenhouse effect was explored by Makoto Komabayashi at Nagoya university. Assuming a water vapor-saturated stratosphere, Komabayashi and Ingersoll independently calculated the limit on outgoing infrared radiation that defines the runaway greenhouse state. The limit is now known as the Komabayashi–Ingersoll limit to recognize their contributions. Physics of the runaway greenhouse The runaway greenhouse effect is often formulated in terms of how the surface temperature of a planet changes with differing amounts of received starlight. If the planet is assumed to be in radiative equilibrium, then the runaway greenhouse state is calculated as the equilibrium state at which water cannot exist in liquid form. The water vapor is then lost to space through hydrodynamic escape. In radiative equilibrium, a planet's outgoing longwave radiation (OLR) must balance the incoming stellar flux. The Stefan–Boltzmann law is an example of a negative feedback that stabilizes a planet's climate system. If the Earth received more sunlight it would result in a temporary disequilibrium (more energy in than out) and result in warming. However, because the Stefan–Boltzmann response mandates that this hotter planet emits more energy, eventually a new radiation balance can be reached and the temperature will be maintained at its new, higher value. Positive climate change feedbacks amplify changes in the climate system, and can lead to destabilizing effects for the climate. An increase in temperature from greenhouse gases leading to increased water vapor (which is itself a greenhouse gas) causing further warming is a positive feedback, but not a runaway effect, on Earth. Positive feedback effects are common (e.g. ice–albedo feedback) but runaway effects do not necessarily emerge from their presence. 
Though water plays a major role in the process, the runaway greenhouse effect is not a result of water vapor feedback. The runaway greenhouse effect can be seen as a limit on a planet's outgoing longwave radiation that, when surpassed, results in a state where water cannot exist in its liquid form (hence, the oceans have all "boiled away"). A planet's outgoing longwave radiation is limited by this evaporated water, which is an effective greenhouse gas and blocks additional infrared radiation as it accumulates in the atmosphere. Assuming radiative equilibrium, runaway greenhouse limits on outgoing longwave radiation correspond to limits on the increase in stellar flux a planet can receive before the runaway greenhouse effect is triggered. Two limits on a planet's outgoing longwave radiation have been calculated that correspond with the onset of the runaway greenhouse effect: the Komabayashi–Ingersoll limit and the Simpson–Nakajima limit. At these values the runaway greenhouse effect overcomes the Stefan–Boltzmann feedback, so an increase in a planet's surface temperature will not increase the outgoing longwave radiation. The Komabayashi–Ingersoll limit was the first to be analytically derived and considers only a grey stratosphere in radiative equilibrium. A grey stratosphere (or atmosphere) is an approach to modeling radiative transfer that does not take into account the frequency dependence of absorption by a gas. In the case of a grey stratosphere or atmosphere, the Eddington approximation can be used to calculate radiative fluxes. This approach focuses on the balance between the outgoing longwave radiation at the tropopause, F_IRtop↑, and the optical depth of water vapor at the tropopause, τ_tp, which is determined by the temperature and pressure at the tropopause according to the saturation vapor pressure. This balance is represented by a pair of equations, where the first represents the requirement for radiative equilibrium at the tropopause and the second represents how much water vapor is present at the tropopause. Taking the outgoing longwave radiation as a free parameter, these equations intersect only once, for a single value of the outgoing longwave radiation; this value is taken as the Komabayashi–Ingersoll limit. At that value the Stefan–Boltzmann feedback breaks down, because the tropospheric temperature required to maintain the Komabayashi–Ingersoll OLR value results in a water vapor optical depth that blocks the OLR needed to cool the tropopause. The Simpson–Nakajima limit is lower than the Komabayashi–Ingersoll limit, and is thus typically a more realistic value at which a planet enters a runaway greenhouse state. For example, given the parameters used to determine a Komabayashi–Ingersoll limit of 385 W/m2, the corresponding Simpson–Nakajima limit is only about 293 W/m2. The Simpson–Nakajima limit builds on the derivation of the Komabayashi–Ingersoll limit by assuming a convective troposphere with a surface temperature and surface pressure that determine the optical depth and outgoing longwave radiation at the tropopause.
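The stabilizing Stefan–Boltzmann response discussed in this section can be made concrete with a toy radiative-equilibrium calculation: for a planet with no greenhouse atmosphere and a fixed albedo (both simplifying assumptions), the equilibrium temperature settles where outgoing longwave radiation equals absorbed stellar flux, so a modest increase in flux produces only a modestly higher equilibrium rather than unbounded warming.

# Toy illustration of the Stefan-Boltzmann negative feedback: a planet in
# radiative equilibrium re-balances at a slightly higher temperature when the
# absorbed stellar flux increases. Simplifications (assumed): no greenhouse
# atmosphere, constant albedo of 0.3.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(stellar_flux_wm2, albedo=0.3):
    """Temperature at which emitted flux sigma*T^4 equals absorbed flux, sphere-averaged."""
    absorbed = stellar_flux_wm2 * (1 - albedo) / 4
    return (absorbed / SIGMA) ** 0.25

t0 = equilibrium_temperature(1361)          # roughly the present-day solar constant
t1 = equilibrium_temperature(1361 * 1.02)   # a 2% brighter star
print(f"{t0:.1f} K -> {t1:.1f} K")          # about 254.6 K -> 255.8 K

A 2% increase in flux raises this idealized equilibrium temperature by only about 0.5%; the runaway regime described above is precisely the case in which such re-balancing is blocked, because outgoing longwave radiation can no longer rise with surface temperature once the water-vapor limit is reached.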
The moist greenhouse limit Because the model used to derive the Simpson–Nakajima limit (a grey stratosphere in radiative equilibrium and a convecting troposphere) can determine the water concentration as a function of altitude, the model can also be used to determine the surface temperature (or conversely, amount of stellar flux) that results in a high water mixing ratio in the stratosphere. While this critical value of outgoing longwave radiation is less than the Simpson–Nakajima limit, it still has dramatic effects on a planet's climate. A high water mixing ratio in the stratosphere would overcome the effects of a cold trap and result in a "moist" stratosphere, which would result in the photolysis of water in the stratosphere that in turn would destroy the ozone layer and eventually lead to a dramatic loss of water through hydrodynamic escape. This climate state has been dubbed the moist greenhouse effect, as the end-state is a planet without water, though liquid water may exist on the planet's surface during this process. Connection to habitability The concept of a habitable zone has been used by planetary scientists and astrobiologists to define an orbital region around a star in which a planet (or moon) can sustain liquid water. Under this definition, the inner edge of the habitable zone (i.e., the closest point to a star that a planet can be until it can no longer sustain liquid water) is determined by the outgoing longwave radiation limit beyond which the runaway greenhouse process occurs (e.g., the Simpson–Nakajima limit). This is because a planet's distance from its host star determines the amount of stellar flux the planet receives, which in turn determines the amount of outgoing longwave radiation the planet radiates back to space. While the inner habitable zone is typically determined by using the Simpson–Nakajima limit, it can also be determined with respect to the moist greenhouse limit, though the difference between the two is often small.Calculating the inner edge of the habitable zone is strongly dependent on the model used to calculate the Simpson–Nakajima or moist greenhouse limit. The climate models used to calculate these limits have evolved over time, with some models assuming a simple one-dimensional, grey atmosphere, and others using a full radiative transfer solution to model the absorption bands of water and carbon dioxide. These earlier models that used radiative transfer derived the absorption coefficients for water from the HITRAN database, while newer models use the more current and accurate HITEMP database, which has led to different calculated values of thermal radiation limits. More accurate calculations have been done using three-dimensional climate models that take into account effects such as planetary rotation and local water mixing ratios as well as cloud feedbacks. The effect of clouds on calculating thermal radiation limits is still in debate (specifically, whether or not water clouds present a positive or negative feedback effect). Runaway greenhouse effect in the Solar System Venus A runaway greenhouse effect involving carbon dioxide and water vapor likely occurred on Venus. In this scenario, early Venus may have had a global ocean if the outgoing thermal radiation was below the Simpson–Nakajima limit but above the moist greenhouse limit. 
As the brightness of the early Sun increased, the amount of water vapor in the atmosphere increased, increasing the temperature and consequently increasing the evaporation of the ocean, leading eventually to the situation in which the oceans evaporated. This scenario helps to explain why there is little water vapor in the atmosphere of Venus today. If Venus initially formed with water, the runaway greenhouse effect would have hydrated Venus' stratosphere, and the water would have escaped to space. Some evidence for this scenario comes from the extremely high deuterium to hydrogen ratio in Venus' atmosphere, roughly 150 times that of Earth, since light hydrogen would escape from the atmosphere more readily than its heavier isotope, deuterium.Venus is sufficiently strongly heated by the Sun that water vapor can rise much higher in the atmosphere and be split into hydrogen and oxygen by ultraviolet light. The hydrogen can then escape from the atmosphere while the oxygen recombines or bonds to iron on the planet's surface. The deficit of water on Venus due to the runaway greenhouse effect is thought to explain why Venus does not exhibit surface features consistent with plate tectonics, meaning it would be a stagnant lid planet.Carbon dioxide, the dominant greenhouse gas in the current Venusian atmosphere, owes its larger concentration to the weakness of carbon recycling as compared to Earth, where the carbon dioxide emitted from volcanoes is efficiently subducted into the Earth by plate tectonics on geologic time scales through the carbonate–silicate cycle, which requires precipitation to function. Earth Early investigations on the effect of atmospheric carbon dioxide levels on the runaway greenhouse limit found that it would take orders of magnitude higher amounts of carbon dioxide to take the Earth to a runaway greenhouse state. This is because carbon dioxide is not anywhere near as effective at blocking outgoing longwave radiation as water is. Within current models of the runaway greenhouse effect, carbon dioxide (especially anthropogenic carbon dioxide) does not seem capable of providing the necessary insulation for Earth to reach the Simpson–Nakajima limit.Debate remains, however, on whether carbon dioxide can push surface temperatures towards the moist greenhouse limit. Climate scientist John Houghton wrote in 2005 that "[there] is no possibility of [Venus's] runaway greenhouse conditions occurring on the Earth". However, climatologist James Hansen stated in Storms of My Grandchildren (2009) that burning coal and mining oil sands will result in runaway greenhouse on Earth. A re-evaluation in 2013 of the effect of water vapor in the climate models showed that James Hansen's outcome would require ten times the amount of CO2 we could release from burning all the oil, coal, and natural gas in Earth's crust.As with the uncertainties in calculating the inner edge of the habitable zone, the uncertainty in whether CO2 can drive a moist greenhouse effect is due to differences in modeling choices and the uncertainties therein. The switch from using HITRAN to the more current HITEMP absorption line lists in radiative transfer calculations has shown that previous runaway greenhouse limits were too high, but the necessary amount of carbon dioxide would make an anthropogenic moist greenhouse state unlikely. 
Full three-dimensional models have shown that the moist greenhouse limit on surface temperature is higher than that found in one-dimensional models, and thus that a higher amount of carbon dioxide would be required to initiate a moist greenhouse than in one-dimensional models. Other complications include whether the atmosphere is saturated or sub-saturated at some humidity, the fact that higher CO2 levels in the atmosphere result in a less hot Earth than expected because of Rayleigh scattering, and whether cloud feedbacks stabilize or destabilize the climate system. Complicating the matter, research on Earth's climate history has often used the term "runaway greenhouse effect" to describe large-scale climate changes even though it is not an appropriate description, since such changes do not depend on Earth's outgoing longwave radiation. Though the Earth has experienced a diversity of climate extremes, these are not end-states of climate evolution and have instead represented climate equilibria different from those seen on Earth today. For example, it has been hypothesized that large releases of greenhouse gases may have occurred concurrently with the Permian–Triassic extinction event or the Paleocene–Eocene Thermal Maximum. Additionally, during 80% of the last 500 million years, the Earth is believed to have been in a greenhouse state due to the greenhouse effect, when there were no continental glaciers on the planet, the levels of carbon dioxide and other greenhouse gases (such as water vapor and methane) were high, and sea surface temperatures (SSTs) ranged from 40 °C (104 °F) in the tropics to 16 °C (65 °F) in the polar regions. Distant future Most scientists believe that a runaway greenhouse effect is inevitable in the long term, as the Sun gradually becomes more luminous as it ages, and that this will spell the end of all life on Earth. As the Sun becomes 10% brighter about one billion years from now, the surface temperature of Earth will reach 47 °C (117 °F) (unless the albedo is increased sufficiently), causing the temperature of Earth to rise rapidly and its oceans to boil away until it becomes a greenhouse planet, similar to Venus today. According to the astrobiologists Peter Ward and Donald Brownlee in their book The Life and Death of Planet Earth, the current rate of water loss is approximately one millimeter of ocean per million years. The loss is slow because the colder upper layer of the troposphere acts as a cold trap that currently prevents Earth from permanently losing its water to space, even with man-made global warming; this is also why man-made climate change in the near future will worsen extreme weather in the short term, since a warmer atmosphere can hold more moisture while still being too cold to allow water vapor to escape to space. The loss is also overshadowed by shorter-term changes in sea level, such as the currently rising sea level due to the melting of glaciers and polar ice. As the Sun grows warmer, however, the rate will gradually accelerate, perhaps to as fast as one millimeter every 1,000 years, as the atmosphere ultimately becomes so hot that the cold trap is pushed ever higher until it eventually fails to prevent water from being lost to space.
Ward and Brownlee predict that there will be two variations of the future warming feedback: the "moist greenhouse" in which water vapor dominates the troposphere and starts to accumulate in the stratosphere and the "runaway greenhouse" in which water vapor becomes a dominant component of the atmosphere such that the Earth starts to undergo rapid warming, which could send its surface temperature to over 900 °C (1,650 °F), causing its entire surface to melt and killing all life, perhaps about three billion years from now. In both cases, the moist and runaway greenhouse states the loss of oceans will turn the Earth into a primarily-desert world. The only water left on the planet would be in a few evaporating ponds scattered near the poles as well as huge salt flats around what was once the ocean floor, much like the Atacama Desert in Chile or Badwater Basin in Death Valley. The small reservoirs of water may allow life to remain for a few billion more years. As the Sun brightens, CO2 levels should decrease due to an increase of activity in the carbon-silicate cycle corresponding to the increase of temperature. That would mitigate some of the heating Earth would experience because of the Sun's increase in brightness. Eventually, however, as the water escapes, the carbon cycle will cease as plate tectonics come to a halt because of the need for water as a lubricant for tectonic activity. Runaway refrigerator effect Mars may have experienced the opposite of a runaway greenhouse effect: a runaway refrigerator effect. Through this effect, a runaway feedback process may have removed much of the carbon dioxide and water vapor from the atmosphere and cooled the planet. Water condensed on the surface, which led to carbon dioxide dissolving in the water and chemically binding to minerals. This reduced the greenhouse effect, lowering the temperature, causing more water to condense. The end result was lower temperatures, with water being frozen as subsurface permafrost, leaving only a thin atmosphere. See also Atmosphere of Venus, an example of a runaway greenhouse effect Greenhouse and icehouse Earth TRAPPIST-1b References Further reading Steffen, Will; Rockström, Johan; Richardson, Katherine; Lenton, Timothy M.; Folke, Carl; Liverman, Diana; Summerhayes, Colin P.; Barnosky, Anthony D.; Cornell, Sarah E.; Crucifix, Michel; Donges, Jonathan F.; Fetzer, Ingo; Lade, Steven J.; Scheffer, Marten; Winkelmann, Ricarda; Schellnhuber, Hans Joachim (6 August 2018). "Trajectories of the Earth System in the Anthropocene". Proceedings of the National Academy of Sciences. 115 (33): 8252–8259. Bibcode:2018PNAS..115.8252S. doi:10.1073/pnas.1810141115. ISSN 0027-8424. PMC 6099852. PMID 30082409. We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a "Hothouse Earth" pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene.
carbon neutrality in the united states
Carbon neutrality in the United States refers to reducing U.S. greenhouse gas emissions to the point where carbon emissions are neutral compared to the absorption of carbon dioxide, and often called "net zero". Like the European Union, and countries worldwide, the United States has implemented carbon neutrality measures and law reform at both federal and state levels: the Presidency has set a goal of reducing carbon emissions by 50% to 52% compared to 2005 levels by 2030, a carbon free power sector by 2035, and for the entire economy to be net zero by 2050. by April 2023, 22 states, plus Washington DC and Puerto Rico had set legislative or executive targets for clean power production. all cars or light vehicles will have zero emissions (i.e. no internal combustion engine with gas or diesel) by 2035 in light duty vehicles, and no longer be bought by federal government by 2027. the California Air Resources Board voted in 2022 to draft new rules banning gas furnaces and water heaters, and requiring zero emission appliances in 2030. By 2022, four states have gas bans in new buildings. List of state clean electricity laws The following is a list of measures to move to clean electricity in 22 states, plus Washington DC and Puerto Rico, dates, and the details of their laws. Phase out of fossil fuel transport California in 2020 set a 2035 target for all passenger vehicles and light-duty trucks to cease emissions. Connecticut, Massachusetts, New Jersey, New York State, North Carolina, Rhode Island, Vermont also have laws for 2035. Maine, Oregon, Washington have laws for 2030. Phase out of gas boiler California has proposed a ban on gas furnaces and heating or water systems and appliances. See also Fossil fuel phase-out Phase-out of fossil fuel vehicles Phase-out of gas boilers Plastic bans Montreal Protocol Notes External links Clean Energy States Alliance, Table of clean electricity goals
sewer gas
Sewer gas is a complex, generally obnoxious smelling mixture of toxic and nontoxic gases produced and collected in sewage systems by the decomposition of organic household or industrial wastes, typical components of sewage.Sewer gases may include hydrogen sulfide, ammonia, methane, esters, carbon monoxide, sulfur dioxide and nitrogen oxides. Improper disposal of petroleum products such as gasoline and mineral spirits contribute to sewer gas hazards. Sewer gases are of concern due to their odor, health effects, and potential for creating fire or explosions. In homes Sewer gas is typically restricted from entering buildings through plumbing traps that create a water seal at potential points of entry. In addition, plumbing vents allow sewer gases to be exhausted outdoors. Infrequently used plumbing fixtures may allow sewer gas to enter a home due to evaporation of water in the trap, especially in warm weather. The result is the most common means of sewer gas entering buildings and can be solved easily by using the fixtures regularly or adding water to their drains. One of the most common traps to dry out are floor drains such as those typically placed near home furnaces, water heaters and rooms with underfloor heating. Infrequently used utility sinks, tubs, showers, and restrooms also are common culprits. Trap primers are available that automatically add water to remote or little used traps such as these. Blocked plumbing vents, typically at the roof, also can cause water seals to fail via siphoning of the water. Exposure to sewer gas also can happen if the gas seeps in via a leaking plumbing drain or vent pipe, or even through cracks in a building’s foundation. Sewer gas is typically denser than atmospheric gases and may accumulate in basements, but may eventually mix with surrounding air. Individuals who work in sanitation industries or on farms might be exposed on the job if they clean or maintain municipal sewers, manure storage tanks, or septic tanks. In buildings with HVAC air handlers that admit outside air for ventilation, plumbing vents placed too closely to air intakes or windows can be a source of sewer gas odors. In some cases airflow around buildings and wind effects may contribute to sewer gas odor problems even with appropriately separated vents and air intakes. Increasing vent heights, adding vent pipe filters, or providing powered dilution and exhaust can help reduce occurrences. History During the mid-nineteenth century, when indoor plumbing was being developed, it was a common belief that disease was caused largely by miasmas, or literally "polluted air." (Malaria, a disease spread by mosquitoes that breed in marshy areas, got its name from the Italian words for "bad air" because people initially blamed it on marsh gas.) Originally, traps in drain pipes were designed to help keep this bad air from passing back into living spaces within buildings. However, during the Broad Street cholera outbreak in London, in the summer of 1854, physician John Snow, among others, worked to prove that polluted water was the culprit, not the foul smells from sewage pipes or other sources. Subsequently, even as the germ theory of disease developed, society was slow to accept the idea that odors from sewers were relatively harmless when it came to the spread of disease. Health effects In most homes, sewer gas may have an unpleasant odor, but does not often pose a significant health hazard. Residential sewer pipes primarily contain the gases found in air (nitrogen, oxygen, carbon dioxide, etc.). 
Often, methane is the gas of next highest concentration, but it typically remains at nontoxic levels, especially in properly vented systems. However, if sewer gas has a distinct "rotten egg" smell, especially in sewage mains, septic tanks, or other sewage treatment facilities, it may be due to hydrogen sulfide content, which can be detected by human olfactory senses at concentrations as low as parts per billion. Exposure to low levels of this chemical can irritate the eyes and cause a cough or sore throat, shortness of breath, and fluid accumulation in the lungs. Prolonged low-level exposure may cause fatigue, pneumonia, loss of appetite, headaches, irritability, poor memory, and dizziness. High concentrations of hydrogen sulfide (>150 ppm) can produce olfactory fatigue, whereby the scent becomes undetectable. At higher concentrations (>300 ppm), hydrogen sulfide can cause loss of consciousness and death. Very high concentrations (>1000 ppm) can result in immediate collapse, occurring after a single breath.
Explosion risk
Sewer gas can contain methane and hydrogen sulfide, both highly flammable and potentially explosive substances. As such, ignition of the gas is possible with flame or sparks. The methane concentration in open sewers is lower (7 to 15 ppmv) than in closed drains (up to 300 ppmv) in samples collected 2 cm (0.8 in) above the level of sewage.
Greenhouse gas contribution
Fully vented sewer gases contribute to greenhouse gas emissions. Septic vent pipes can be fitted with filters that remove some odors. Sewer gas can also be used as a power source, thus reducing the consumption of fossil fuels. The gas is piped into a cleaning system and then used as a fuel to power a generator or combined heat and power (CHP) plant.
Impact on sewerage
Gases present in sewerage can strongly impact material durability due to the action of microorganisms. The most deleterious is hydrogen sulfide, which can result in biogenic sulfide corrosion or microbial corrosion. In the worst cases, it may lead to the collapse of the structure, with significant cost for its rehabilitation.
See also
Fire protection
Indoor air quality
Louisville sewer explosions
Plumbing
Potable cold and hot water supply
Rainwater, surface, and subsurface water drainage
Septic systems
Sewer gas destructor lamp
Marsh gas
== References ==
landfill gas emission reduction in brazil
Brazil has established a strong public policy using Clean Development Mechanism projects to reduce methane emissions from landfills. An important component of these projects is the sale of avoided emissions by the private market to generate revenue.
Introduction
Faced with serious pollution challenges, Brazil established public policy that would create incentives for the foreign and national private market to invest financial, technological, and human resources in the country. The premise is that experienced companies would bring their technology to Brazil in an effort to reduce methane gas emissions. The specific technology and projects discussed in this article refer to landfill gas projects. Although this technology was new to Brazil in the early 2000s, when companies first began implementing it, these methods were not new to Europe or North America. Additionally, Brazil is just one of many countries participating in similar projects around the world.
Background
Brazil signed the Kyoto Protocol on April 29, 1998, and ratified it on August 23, 2002. To date, Brazil has 347 clean development mechanism (CDM) projects, which account for 7.3% of the total projects worldwide. Estimated projections by the United Nations Environment Programme (UNEP) show that by 2012, Brazil will have 102 million certified emission reductions (CER), a $1,225 million value. Unlike its fellow BRIC countries, in Brazil the largest component of potential CER projects is landfill gas projects, with a 31.3% share. According to the national survey on basic sanitation (PNSB) conducted in 2008, all of the 5,564 municipalities have access to basic sanitation. According to the Environmental Sanitation Technology Company (CETESB) study, the 6,000 waste sites in Brazil receive 60,000 tonnes of waste per day. Seventy-six percent (76%) of this waste goes to dumps with no management, gas collection, or water treatment. This same study showed that 83.18% of Brazil's methane gas emissions come from uncontrolled waste sites.
Landfill gas projects
Private companies have submitted CDM projects to the United Nations Framework Convention on Climate Change (UNFCCC) to use landfill gas (LFG) discharges from waste management sites to earn carbon credits or CER. There are over 100 LFG CDM projects in Brazil. The diagram below illustrates the process. First, once the waste management company has developed the landfill with the new technology, (1a) it calculates how much methane (CH4) would have been emitted into the air without its intervention. (1b) Then it converts the CH4 into carbon equivalents (CO2e). (2a) Next, the company projects how much methane it expects to emit into the air with the new technology. Again, (2b) it converts the CH4 into CO2e. (3) Next, the company determines the avoided emissions, or CER, by subtracting the emission projections with the technology from the baseline emissions without the technology. (4) Once credited, the company sells the CER through a broker to companies that will produce emissions greater than their allotted capacity.
SASA
The SASA landfill is located in Tremembé in São Paulo State of Brazil. Onyx SASA is a subsidiary of Veolia Environnement and is an officially registered project with UNFCCC, as of November 24, 2005. SenterNovem, an agency of the Dutch Ministry of Economic Affairs in the Netherlands, is a partner in the project. The following flow chart depicts the process used by the landfill: (1) Methane (CH4) or carbon equivalents (CO2e) are captured by the vertical wells.
(2) Next, a horizontal drain that is connected to the vertical wells extracts the CO2e. (3) Then, a high density collection pipe captures the CO2e and transfers it to the evaporator. (4) Any CO2e that did not evaporate, is transferred to an enclosed flare. (5) The remaining emissions are then vented into the air. At the filing of the report, Onyx SASA anticipated the landfill would accrue 700,625 tons CO2e from 2003 through 2012 in CER. As of 2011, Onyx SASA has filed monitoring reports for the periods of 2003 through 2007. The following chart outlines the actual CER realized to date: Additionally, the project design report states Onyx SASA expects to revegetate and reforest the land; upon fulfillment, 150,000 trees will be planted around the landfill. Paulínia Empresa de Saneamento e Tratamento de Resíduos (ESTRE) is a private waste management Brazilian-based company, founded in 1999. ESTRE operates seven sites in Brazil, Uruguay, and Argentina. It offers waste management services, including recycling and landfills, to private companies and the government. The Paulínia Landfill Gas Project (EPLGP) is located in Campinas in São Paulo State of Brazil. The project was registered on March 3, 2006, with UNFCCC. The goal of the EPLGP is to reduce greenhouse emissions. The following schematic illustrates the process of capturing and recycling the gas emissions: As illustrated above, (1) wells installed in the landfill collect the methane (CH4). (2) Next, high density pipes connected to the wells transfer the CH4 to the blower. (3) Any remaining CH4 is then sent to the flare. (4) Last, the CH4 is flared into the air. The following table outlines the forecasted and actual yearly outputs of CER according to the monitoring reports filed with UNFCCC: ^The notable increase in actual CER versus the projected CER is due to the increase in waste received by the landfill, from 2.5 tons per day as reported in the CDM application to 5 tons per day. Legislation: National Policy on Climate Change After Brazil's Congress passed the climate change legislation, on December 29, 2009, President Luiz Inácio Lula da Silva signed the National Policy on Climate Change (PNMC). The law requires Brazil to reduce greenhouse gas emissions by 38.9% by 2020. On December 9, 2010, President Lula signed a decree which details the provisions of PNMC. At its foundation, PNMC focuses on prevention, citizen participation, and sustainable development. 
Law N° 12.187 of 2009
There are 13 articles in the legislation:
Article 1 establishes the laws and principles governing PNMC.
Article 2 establishes definitions and key terms related to climate change, including adverse effects of climate change, emissions, greenhouse gases, and mitigation.
Article 3 states:
Everyone has a duty to reduce the human impact on climate change
Steps will be taken to anticipate, prevent, and minimize the causes of climate change as determined by the scientific community
Measures will be taken to consider and distribute the burden among various socio-economic populations and communities
Sustainable development is a prerequisite for mitigating climate change, and the needs of each population and territory should be duly considered
Actions taken at the national level to mitigate climate change must consider actions taken at the municipal, state, and private-sector levels
Vetoed
Article 4 outlines the goals:
Reconcile socio-economic development and climate protection
Reduce greenhouse gas emissions
Vetoed
Strengthen methods for removal of anthropogenic greenhouse gas emissions
Use measures to promote adaptation by all three spheres of the Brazilian Federation, with participation and collaboration from economic and social sectors, in particular those most affected by climate change
Preserve, conserve, and restore environmental resources, in particular natural biomes considered a National Heritage
Consolidate and expand legally protected areas and encourage reforestation and revegetation of degraded areas
Stimulate development of the Brazilian Market for Emissions Reduction (MBRE)
Article 5 establishes the guidelines of PNMC:
Follow through on commitments made through the Kyoto Protocol and other climate change measures
Assess measurable benefits, quantifiable and verifiable, of mitigation actions
Adaptation measures to reduce adverse effects
Integrate strategies at the local, regional, and national levels
Encourage participation of the federal, state, county, and municipal levels, as well as the productive sector, academia and civil society organizations, and implement policies, plans, programs, and actions
Promote and develop scientific research
Utilize financial and economic instruments to promote mitigation actions
Identify and articulate the instruments used for climate protection
Promote activities that effectively reduce gas emissions
Promote international cooperation
Improve systematic observation
Promote dissemination of information, education, and training
Stimulate and support practices and activities with low emissions and sustainable consumption
Article 6 outlines the instruments, committees, plans, funding, policy, research, monitoring, indicators, and assessments PNMC will utilize toward climate change.
Article 7 outlines institutional instruments PNMC will utilize:
Interministerial Committee on Climate Change
Interministerial Commission on Global Climate Change
Brazilian Forum on Climate Change
The Brazilian Network for Research on Global Climate Change - Climate Network
Commission for Coordination of Activities of Meteorology, Climatology and Hydrology
Article 8 addresses official financial institutions' lines of credit to support climate change efforts.
Article 9 notes that MBRE will be monitored by the Securities Commission.
Article 10 was vetoed.
Article 11 states that public policy and government programs should be compatible with PNMC.
Article 12 states the country's adoption of greenhouse gas emission reduction targets of 36.1% to 38.9%.
Article 13 states that the law will become
official on its publication date of December 31, 2009. Presidential decree N° 7.390 of 2010 The decree specifies how Brazil quantifies greenhouse emissions, how it will achieve the reduction, and a legal requirement for estimating annual emissions. The policy will use Brazil's 2005 emission rate as the business as usual base line for comparison of future emissions. The Policy: Provides authority for adopting mitigation actions to achieve the reduction goal Requires reduction efforts to be compatible with sustainable development and economic and social interests Designates instruments for implementation, include the National Climate Change Plan and the National Climate Change Fund Establishes sectoral plans for mitigation and adaptation in forests, agriculture, energy, and transportation Creates the Brazilian Market for Emissions Reductions (MBRE) for trading in avoided emissions certificatesSpecifically, the decree lists the following Action Plans: Prevention and Control of Deforestation in the Amazon Prevention and Control of Deforestation and Forest Fires in the Cerrado Ten Year Plan for Expansion of Energy Consolidation of an Economy of Low-Carbon in Agriculture Reducing Emissions from SteelPer the decree, the Sectoral Plans will include: Emission reduction target in 2020 and incremental goals with a maximum interval of three years Actions to be implemented Definition of indicators for monitoring and evaluating effectiveness Proposed regulatory instruments and incentives for implementation Competitive alignment with industry studies' estimated costs and impactsPer the decree, the following sectors are included in the estimations: Change of Land Use: 1,404 million tons of CO2e (e=equivalent) Energy: 868 million tons of CO2e Agriculture: 730 million tons of CO2e Industrial Processes and Waste: 234 million tons of CO2eThe National Climate Change Fund "supports mitigation and adaptation projects and will rely principally on a to-be-determined portion of future oil and gas revenues." See also Environment of Brazil == References ==
secunda ctl
Secunda CTL is a synthetic fuel plant owned by Sasol at Secunda, Mpumalanga in South Africa. It uses coal liquefaction to produce petroleum-like synthetic crude oil from coal. The process used by Sasol is based on the Fischer–Tropsch process. It is the largest coal liquefaction plant and the largest single emitter of greenhouse gas in the world. Secunda CTL consists of two production units. The Sasol II unit was constructed in 1980 and the Sasol III unit in 1984. It has total production capacity of 160,000 barrels per day (25,000 m3/d). Greenhouse gas emissions As of 2020 it is the world's largest single emitter of greenhouse gas, at 56.5 million tonnes CO2 a year. However, if Afşin-Elbistan C power station in Turkey is built and operated at planned capacity it would emit over 60 million tonnes a year, though this project was stopped on the grounds of possible soil and air pollution. Air Liquide acquired the 42,000 tons/day oxygen production in 2020, with plans for 900 MW power plants to reduce CO2 emissions. Unique plant infrastructure The Sasol III Steam Plant has a 301 m (988 ft) tall chimney built by Concor, which consists of a 292 m (958 ft) high windshield and four 300 m (980 ft) reinforced concrete flues which together with a 1 m (3.3 ft) high temporary roof on the 4th flue make it one the tallest structures in Africa. In Media As a major component of South Africa's economy, Secunda was in turn a major target of the African National Congress during the apartheid era. Two ANC attacks (and their aftermath) were dramatized in the 2006 film Catch a Fire. See also Sasol Coal gasification References External links Commercial Use of FT Synthesis Operating Facilities Sasol - NETL official website
special report on global warming of 1.5 °c
The Special Report on Global Warming of 1.5 °C (SR15) was published by the Intergovernmental Panel on Climate Change (IPCC) on 8 October 2018. The report, approved in Incheon, South Korea, includes over 6,000 scientific references, and was prepared by 91 authors from 40 countries. In December 2015, the 2015 United Nations Climate Change Conference called for the report. The report was delivered at the United Nations' 48th session of the IPCC to "deliver the authoritative, scientific guide for governments" to deal with climate change. Its key finding is that meeting a 1.5 °C (2.7 °F) target is possible but would require "deep emissions reductions" and "rapid, far-reaching and unprecedented changes in all aspects of society". Furthermore, the report finds that "limiting global warming to 1.5 °C compared with 2 °C would reduce challenging impacts on ecosystems, human health and well-being" and that a 2 °C temperature increase would exacerbate extreme weather, rising sea levels and diminishing Arctic sea ice, coral bleaching, and loss of ecosystems, among other impacts.SR15 also has modelling that shows that, for global warming to be limited to 1.5 °C, "Global net human-caused emissions of carbon dioxide (CO2) would need to fall by about 45 percent from 2010 levels by 2030, reaching 'net zero' around 2050." The reduction of emissions by 2030 and its associated changes and challenges, including rapid decarbonisation, was a key focus on much of the reporting which was repeated through the world.When the Paris Agreement was adopted, the UNFCCC invited the Intergovernmental Panel on Climate Change to write a special report on "How can humanity prevent the global temperature rise more than 1.5 degrees above pre-industrial level". Its full title is "Global Warming of 1.5 °C, an IPCC special report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty".The finished report summarizes the findings of scientists, showing that maintaining a temperature rise to below 1.5 °C remains possible, but only through "rapid and far-reaching transitions in energy, land, urban and infrastructure..., and industrial systems". Meeting the Paris target of 1.5 °C (2.7 °F) is possible but would require "deep emissions reductions", "rapid", "far-reaching and unprecedented changes in all aspects of society". In order to achieve the 1.5 °C target, CO2 emissions must decline by 45% (relative to 2010 levels) by 2030, reaching net zero by around 2050. Deep reductions in non-CO2 emissions (such as nitrous oxide and methane) will also be required to limit warming to 1.5 °C. Under the pledges of the countries entering the Paris Accord, a sharp rise of 3.1 to 3.7 °C is still expected to occur by 2100. Holding this rise to 1.5 °C avoids the worst effects of a rise by even 2 °C. However, a warming of even 1.5 degrees will still result in large-scale drought, famine, heat stress, species die-off, loss of entire ecosystems, and loss of habitable land, throwing more than 100 million into poverty. Effects will be most drastic in arid regions including the Middle East and the Sahel in Africa, where fresh water will remain in some areas following a 1.5 °C rise in temperatures but are expected to dry up completely if the rise reaches 2 °C. 
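The 45% cut from 2010 levels by 2030 and net zero around 2050 mentioned above amount to a schedule that can be pictured as a straight line. The short sketch below is only a toy illustration of that headline arithmetic, not the report's pathway modelling; the 2010 baseline of 38 GtCO2 per year is an assumed round figure used purely for demonstration.

def emissions_target(year, base_year=2010, base_level=38.0, net_zero_year=2050):
    # Toy straight-line CO2 trajectory matching the SR15 headline numbers:
    # roughly 45% below the 2010 level by 2030, and net zero around 2050.
    # base_level (GtCO2 per year) is an illustrative assumption, not a report figure.
    level_2030 = base_level * (1.0 - 0.45)
    if year <= 2030:
        frac = (year - base_year) / (2030 - base_year)
        return base_level + frac * (level_2030 - base_level)
    frac = (year - 2030) / (net_zero_year - 2030)
    return max(0.0, level_2030 * (1.0 - frac))

for y in (2010, 2020, 2030, 2040, 2050):
    print(y, round(emissions_target(y), 1), "GtCO2/yr")

Any real pathway bends rather than falls in straight lines, but the sketch makes clear how steep the implied decline is once a 2010 baseline is fixed.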
Main statements
Global warming is likely to reach 1.5 °C above pre-industrial levels between 2030 and 2052 if it continues to increase at the current rate. SR15 provides a summary of, on one hand, existing research on the impact that a warming of 1.5 °C (equivalent to 2.7 °F) would have on the planet, and on the other hand, the necessary steps to limit global warming. Even assuming full implementation of conditional and unconditional Nationally Determined Contributions submitted by nations in the Paris Agreement, net emissions would increase compared to 2010, leading to a warming of about 3 °C by 2100, and more afterwards. In contrast, limiting warming below or close to 1.5 °C would require decreasing net emissions by around 45% by 2030 and reaching net zero by 2050 (i.e. keeping total cumulative emissions within a carbon budget). Even just to limit global warming to below 2 °C, CO2 emissions should decline by 25% by 2030 and by 100% by 2075. Pathways (i.e. scenarios and portfolios of mitigation options) that would allow such reductions by 2050 describe a rapid transition towards producing electricity through lower-emission methods, and increasing use of electricity instead of other fuels in sectors such as transportation. On average, the pathways describe the proportion of primary energy produced by renewables increasing to 60%, while the proportion produced by coal drops to 5% and oil to 13%. Most pathways describe a larger role for nuclear energy and carbon capture and storage, and less usage of natural gas. They also assume that other measures are simultaneously undertaken: e.g. non-CO2 emissions (such as methane, black carbon, nitrous oxide) are to be similarly reduced; energy demand is unchanged, reduced by up to 30%, or offset by an unprecedented scale of carbon dioxide removal methods yet to be developed; and new policies and research allow improved efficiency in agriculture and industry.
Impact of 1.5 °C or 2 °C warming
According to the report, with global warming of 1.5 °C there would be increased risks to "health, livelihoods, food security, water supply, human security, and economic growth". Impact vectors include reductions in crop yields and nutritional quality. Livestock are also affected by rising temperatures through "changes in feed quality, spread of diseases, and water resource availability". "Risks from some vector-borne diseases, such as malaria and dengue fever, are projected to increase." "Limiting global warming to 1.5°C, compared with 2°C, could reduce the number of people both exposed to climate-related risks and susceptible to poverty by up to several hundred million by 2050." Climate-related risks associated with increasing global warming depend on geographic location, "levels of development and vulnerability", and the speed and reach of climate mitigation and climate adaptation practices. For example, "urban heat islands amplify the impacts of heatwaves in cities." In general, "countries in the tropics and Southern Hemisphere subtropics are projected to experience the largest impacts on economic growth."
Weather, sea level and ice
Many regions and seasons experience warming greater than the global annual average, e.g. "2–3 times higher in the Arctic. Warming is generally higher over land than over the ocean," and it correlates with temperature extremes (which are projected to warm up to twice as much on land as the global mean surface temperature) as well as precipitation extremes (both heavy rain and droughts).
The assessed levels of risk generally increased compared to the previous IPCC report. The "global mean sea level is projected to rise (relative to 1986–2005) by 0.26 to 0.77 m by 2100 for 1.5 °C global warming" and about 0.1 m more for 2 °C. A difference of 0.1 m may correspond to 10 million more or fewer people exposed to related risks. "Sea level rise will continue beyond 2100 even if global warming is limited to 1.5 °C. Around 1.5 °C to 2 °C of global warming," irreversible instabilities could be triggered in Antarctica and the "Greenland ice sheet, resulting in multi-metre rise in sea level." An ice-free Arctic summer is projected once per century at 1.5 °C of warming, and once per decade at 2 °C. "Limiting global warming to 1.5 °C rather than 2 °C is projected to prevent the thawing over centuries of a permafrost area in the range of 1.5 to 2.5 million km2."
Ecosystems
"A decrease in global annual catch for marine fisheries of about 1.5 or 3 million tonnes for 1.5 °C or 2 °C of global warming" is projected by one global fishery model cited in the report. Coral reefs are projected to decline by a further 70–90% at 1.5 °C, and by more than 99% at 2 °C. "Of 105,000 species studied, 18% of insects, 16% of plants and 8% of vertebrates are projected to lose over half of their climatically determined geographic range for global warming of 2 °C." Approximately "4% or 13% of the global terrestrial land area is projected to undergo a transformation of ecosystems from one type to another" at 1 °C or 2 °C, respectively. "High-latitude tundra and boreal forests are particularly at risk of climate change-induced degradation and loss, with woody shrubs already encroaching into the tundra, and this will proceed with further warming."
Limiting the temperature increase
Human activities (anthropogenic greenhouse gas emissions) have already contributed 0.8–1.2 °C (1.4–2.2 °F) of warming. Nevertheless, the gases which have been emitted so far are unlikely to cause global temperature to rise to 1.5 °C alone, meaning a global temperature rise to 1.5 °C above pre-industrial levels is avoidable, assuming net zero emissions are reached soon.
Carbon budget
Limiting global warming to 1.5 °C requires staying within a total carbon budget, i.e. limiting total cumulative emissions of CO2. In other words, if net anthropogenic CO2 emissions are kept above zero, a global warming of 1.5 °C or more will eventually be reached. The value of the total net anthropogenic CO2 budget since the pre-industrial era is not assessed in the report. Estimates of 400–800 GtCO2 (gigatonnes of CO2) for the remaining budget are given (580 GtCO2 and 420 GtCO2 for a 50% and 66% probability of limiting warming to 1.5 °C, using global mean surface air temperature (GSAT); or 770 and 570 GtCO2, for 50% and 66% probabilities, using global mean surface temperature (GMST)). This is about 300 GtCO2 more compared to a previous IPCC report, due to updated understanding and further advances in methods. Emissions around the time of the report were depleting this budget at 42±3 GtCO2 per year.
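As a rough worked example (not a calculation made in the report itself), dividing the remaining budget by the annual emission rate gives the approximate number of years of emissions, at the then-current rate, before the 1.5 °C budget would be exhausted. The sketch below simply reuses the 420 and 580 GtCO2 estimates and the 42 GtCO2 per year depletion rate quoted above.

annual_emissions = 42.0  # GtCO2 per year around the time of the report

# Remaining 1.5 degree C budgets (GtCO2) quoted above, GSAT definition:
# 580 GtCO2 for a 50% probability, 420 GtCO2 for a 66% probability.
budgets = {"50% probability": 580.0, "66% probability": 420.0}

for label, budget in budgets.items():
    years_left = budget / annual_emissions
    print(f"{label}: about {years_left:.0f} years of emissions at the current rate")
# Roughly 14 years for the 50% budget and 10 years for the 66% budget.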
Anthropogenic emissions from the pre-industrial period to the end of 2017 are estimated to have reduced the budget for 1.5 °C by approximately 2200±320 GtCO2. The estimates for the budget come with significant uncertainties, associated with: climate response to CO2 and non-CO2 emissions (these contribute about ±400 GtCO2 in uncertainty), the level of historic warming (±250 GtCO2), potential additional carbon release from future permafrost thawing and methane release from wetlands (reducing the budget by up to 100 GtCO2 over the century), and the level of future non-CO2 mitigation (±400 GtCO2).
Necessary emission reductions
Current nationally stated mitigation ambitions, as submitted under the Paris Agreement, would lead to global greenhouse gas emissions of 52–58 GtCO2eq per year by 2030. "Pathways reflecting these ambitions would not limit global warming to 1.5 °C, even if supplemented by very challenging increases in the scale and ambition of emissions reductions after 2030." Instead, they are "broadly consistent" with a warming of about 3 °C by 2100, and more afterwards. Limiting global warming to 1.5 °C with no or limited overshoot would require reducing emissions to below 35 GtCO2eq per year in 2030, regardless of the modelling pathway chosen. Most pathways fall within 25–30 GtCO2eq per year, a 40–50% reduction from 2010 levels. The report says that for limiting warming to below 1.5 °C, "global net human-caused emissions of CO2 would need to fall by about 45% from 2010 levels by 2030, reaching net zero around 2050." Even just for limiting global warming to below 2 °C, CO2 emissions should decline by 25% by 2030 and by 100% by 2070. Non-CO2 emissions should decline in more or less similar ways. This involves deep reductions in emissions of methane and black carbon: at least 35% in both by 2050, relative to 2010, to limit warming near 1.5 °C. Such measures could be undertaken in the energy sector and by reducing nitrous oxide and methane from agriculture, methane from the waste sector, and some other sources of black carbon and hydrofluorocarbons. On timescales longer than tens of years, it may still be necessary to sustain net negative CO2 emissions and/or further reduce non-CO2 radiative forcing (*), in order to prevent further warming (due to Earth system feedbacks), reverse ocean acidification, and minimise sea level rise.
(*) Non-CO2 emissions included in this Report are all anthropogenic emissions other than CO2 that result in radiative forcing. These include short-lived climate forcers, such as methane, some fluorinated gases, ozone precursors, aerosols or aerosol precursors, such as black carbon and sulphur dioxide, respectively, as well as long-lived greenhouse gases, such as nitrous oxide or some fluorinated gases. The radiative forcing associated with non-CO2 emissions and changes in surface albedo is referred to as non-CO2 radiative forcing.
Pathways to 1.5 °C
Various pathways are considered, describing scenarios for mitigation of global warming, including portfolios for energy supply and negative emission technologies (like afforestation or carbon dioxide removal). Examples of actions consistent with the 1.5 °C pathway include "shifting to low- or zero-emission power generation, such as renewables; changing food systems, such as diet changes away from land-intensive animal products; electrifying transport and developing 'green infrastructure', such as building green roofs, or improving energy efficiency by smart urban planning, which will change the layout of many cities."
As another example, an increase of forestation by 10,000,000 square kilometres (3,900,000 sq mi) by 2050 relative to 2010 would be required.The pathways also assume an increase in annual investments in low-carbon energy technologies and energy efficiency by roughly a factor of four to ten by 2050 compared to 2015. Carbon dioxide removal The emission pathways that reach 1.5 °C contained in the report assume the use of carbon dioxide removal (CDR) to offset for remaining emissions. Pathways that overshoot the goal rely on CDR to remove carbon dioxide at a rate that exceeds remaining emissions in order to return to 1.5 °C. However, understanding is still limited about the effectiveness of net negative emissions to reduce temperatures after an overshoot. Reversing an overshoot of 0.2 °C might not be achievable given considerable implementation challenges. The report highlights a CDR technology called bioenergy with carbon capture and storage (BECCS). The report notes that apart from afforestation/reforestation and ecosystem restoration, "the feasibility of massive-scale deployment of many CDR technologies remains an open question", with areas of uncertainty regarding technology upscaling, governance, ethical issues, policy and carbon cycle. The report notes that CDR technology is in its infancy and the feasibility is an open question. Estimates from recent literature are cited, giving a potential of up to 5 GtCO2 per year for BECCS and up to 3.6 GtCO2 per year for afforestation. Solar radiation management The report describes several proposals for solar radiation management (SRM). It concludes that SRMs have potential to limit warming, but "face large uncertainties and knowledge gaps as well as substantial risks, [...] and constraints"; "the impacts of SRM (both biophysical and societal), costs, technical feasibility, governance and ethical issues associated need to be carefully considered." An analysis of the geoengineering proposals published in Nature Communication confirmed findings of the SR15, stating that "all are in early stages of development, involve substantial uncertainties and risks, and raise ethical and governance dilemmas. Based on present knowledge, climate geoengineering techniques cannot be relied on to significantly contribute to meeting the Paris Agreement temperature goals". Process There are three IPCC working groups: Working Group I (WG I), co-chaired by Valerie Masson-Delmotte and Panmao Zhai, covers the physical science of climate change. Working Group II (WG II), co-chaired by Hans-Otto Pörtner and Debra Roberts, examines "impacts, adaptation and vulnerability". The "mitigation of climate change" is dealt with by Working Group III (WG III), co-chaired by Priyardarshi Shukla and Jim Skea. The "Task Force on National Greenhouse Gas Inventories" "develops methodologies for measuring emissions and removals". There are also Technical Support Units that guide "the production of IPCC assessment reports and other products". Contributors Researchers from 40 countries, representing 91 authors and editors contributed to the report, which includes over 6,000 scientific references. Reactions Researchers In his 1 October 2018 opening statement at the 48th Session held in Incheon, Korea, Hoesung Lee, who has been Chair of the IPCC since 6 October 2015, described this IPCC meeting as "one of the most important" in its history. Debra Roberts, IPCC contributor called it the "largest clarion bell from the science community". 
Roberts hopes "it mobilises people and dents the mood of complacency".In a CBC interview, Paul Romer was asked if the Nobel Prize in economic sciences that he and William Nordhaus received shortly before the SR15 was released, was timed as a message. Romer said that he was optimistic that measures will be taken in time to avert climate catastrophe. Romer compared the angst and lack of political will in imposing a carbon tax to the initial angst surrounding the chlorofluorocarbon (CFC) ban and the positive impact it had on restoring the depleted ozone layer. The 1987 Montreal Protocol banned Chlorofluorocarbon (CFO) and the ozone layer recovered by 2000. In giving the Nobel to Nordhaus and Romer, the Royal Swedish Academy of Sciences cited Nordhaus as saying "the most efficient remedy for problems caused by greenhouse gases is a global scheme of universally imposed carbon taxes".Howard J. Herzog, a senior research engineer at the Massachusetts Institute of Technology, said that carbon capture and storage technologies, except reforestation, are problematic because of their impact on the environment, health and high cost. In the article there is a link to another article that refers to a study published in the scientific journal "Nature Energy". The study says that we can limit warming to 1.5 degrees without carbon capture and storage, by technological innovation and changing lifestyle.A 2021 study found that degrowth scenarios, where economic output either "declines" or declines in terms of contemporary economic metrics such as current GDP, have been neglected in considerations of 1.5 °C scenarios in the report, finding that investigated degrowth scenarios "minimize many key risks for feasibility and sustainability compared to technology-driven pathways" with a core problem of such being feasibility in the context of contemporary decision-making of politics and globalized rebound- and relocation-effects. Politics Australia Prime Minister Scott Morrison emphasised that the report was not specifically for Australia but for the whole world. Energy Minister Angus Taylor said the Government would "not be distracted" by the IPCC report saying "A debate about climate change and generation technologies in 2050 won't bring down current power prices for Australian households and small businesses." Environment Minister Melissa Price said that scientists are "drawing a very long bow" to say coal should be phased out by 2050 and supported new coal-fired power stations pledging not to legislate the Paris targets. Australia is not on track to meet the commitments under Paris agreement according to modelling conducted by ClimateWorks Australia. Canada Canadian Environment Minister Catherine McKenna acknowledged that the SR15 report would say Canada is not "on track" for 1.5 °C. Canada will not be implementing new plans but it will continue to move forward on a "national price on carbon, eliminating coal-fired power plants, making homes and businesses more energy-efficient, and investing in clean technologies and renewable energy". In response to a question on the sense of urgency of the SR15 report during a 9 October interview on CBC News's Power and Politics Andrew Scheer, the Leader of the Opposition, promised that they are putting forward a "comprehensive plan to reduce CO2 without imposing a carbon tax" which Scheer said "raised costs without actually reducing emissions". 
European Union According to The New York Times, the European Union indicated it might add more ambitious reform goals centered around reducing emissions. On 9 October, the Council of the European Union presented their response to SR15 and their position for the Katowice Climate Change Conference of the Parties (COP 24) held in Poland in December 2018. Their environment ministers noted recent progress in legislation to reduce greenhouse gas emissions. The Council's 9 October pointed to climate change legislation such as, the "new EU 2030 renewable energy target of 32%, the new energy efficiency target of 32.5%, the reform of the EU emission trading system, the emission reduction targets in sectors falling outside the scope of ETS and the integration of land use, land use change and forestry (LULUCF) in the EU's climate and energy framework. Low-emissions and climate resilient growth is possible: The EU is continuing successfully to decouple economic growth from emissions. Between 1990 and 2016, the EU's GDP grew by 53% while total emissions fell by 22.4%. The EU's share of global greenhouse gas emissions fell from an estimated 17.3% in 1990 to 9.9% in 2012. India The Centre for Science and Environment said the repercussions for developing countries such as India, would be "catastrophic" at 2 °C warming and that the impact even at 1.5 °C described in SR15 is much greater than anticipated. Crop yields would decline and poverty would increase. New Zealand The Minister for Climate Change James Shaw said that the Report "has laid out a strong case for countries to make every effort to limit temperature rise to 1.5°C above pre-industrial levels. ... The good news is that the IPCC's report is broadly in line with this Government's direction on climate change and it's highly relevant to the work we are doing with the Zero Carbon Bill." United States President Donald Trump said that he had received the report, but wanted to learn more about those who "drew it" before offering conclusions. In an interview with ABC's "This Week" the director of the National Economic Council, Larry Kudlow, stated, "personally, I think the UN study is way too difficult," and that the authors "overestimate" the likelihood for environmental disasters. Since the publication Trump stated in an interview on 60 Minutes that he didn't know that climate change is manmade and that "it'll change back again", the scientists who say it's worse than ever have "a very big political agenda" and that "we have scientists that disagree with [manmade climate change]." COP24 The governments of four countries (the gas/oil-producers USA, Russia, Saudi Arabia and Kuwait) blocked a proposal to welcome the Intergovernmental Panel on Climate Change's (IPCC) Special Report on Global Warming of 1.5 °C at the 2018 United Nations Climate Change Conference (COP24). Other The "Special Report on Global Warming of 1.5 °C" (SR15) is cited by Greta Thunberg in her speeches "Wherever I Go I Seem to Be Surrounded by Fairy Tales" (United States Congress, Washington DC, 18 September 2019) and "We Are the Change and Change Is Coming" (Week For Future, Climate Strike, Montreal, 27 September 2019), both published in the second edition of No One Is Too Small to Make a Difference. At the 2019 World Economic Forum, the head of the International Monetary Fund, Kristalina Georgieva, said that: "The big eye opener [into climate change and its effects] was when last year I read [the SR15] IPCC report. I tell you, I could not sleep that night. [...] What have we done?". 
See also Representative Concentration Pathway (RCP) References Further reading IPCC, 2018: Global Warming of 1.5 °C. An IPCC Special Report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [V. Masson-Delmotte, P. Zhai, H. O. Pörtner, D. Roberts, J. Skea, P. R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J. B. R. Matthews, Y. Chen, X. Zhou, M. I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, T. Waterfield (eds.)]. Report website, chapters I–V External links ClimateClock: time left to reaching the 1.5 °C threshold
climate change in grenada
Climate change in Grenada has received significant public and political attention in Grenada. As of 2013, the mitigation of its effects has been high on the agenda of the Government of Grenada, which seeks to set an example through innovation and green technology. Greenhouse gas emissions Given its small size, Grenada is not a major contributor to greenhouse gas emissions, but does use fossil fuel to generate most of its electricity. The Government of Grenada has set a goal of generating 50% of its energy from solar and wind power by 2030, and is taking steps to abolish Grenlec, the state-run electric utility. Because tourism is a mainstay of the economy, there is also interest in exploring the use of seawater for air-conditioning. Mitigation and adaptation Adaptation As of 2013, Grenada had a US$6.9 million pilot project to adapt its irrigation system to climate change and conduct local and regional water planning, funded by the German International Climate Initiative (IKI). Groundwater depletion, lower water tables, disruption of water supply by hurricanes (such as Hurricane Ivan), saltwater intrusion, and rising sea levels pose challenges for providing a consistent water supply for agriculture and tourism. Society and culture Activism In 2013, the newspaper The Washington Diplomat profiled Grenada's ambassador to the United States, Angus Friday, who has served as a "senior climate policy specialist at the World Bank." In his earlier posting as Grenadian Ambassador to the United Nations, "he frequently advocated for small Caribbean and Pacific island nations threatened by rising ocean levels." See also Climate change in the Caribbean == References ==
energy in belgium
Energy in Belgium describes energy and electricity production, consumption and import in Belgium. It is governed by the energy policy of Belgium, which is divided between several levels of government. For example, regional governments are responsible for awarding green certificates (except for offshore wind parks) while the national government is responsible for all nuclear power. As a member country of the European Union, Belgium also complies with its energy policy. Belgium is heavily reliant on ageing nuclear reactors and gas-powered generators, although renewables (especially wind power) are generating an increasing percentage of the electricity consumed. The energy plan for Brussels is for it to be carbon neutral by 2050, with emissions down by 40% in 2030, 67% in 2040 and 90% in 2050 compared to 2005. Belgium as a whole has a target of a 55% reduction in emissions by 2030.
Energy statistics
Primary energy consumption
Primary energy is the amount of extractable energy present in fuels as they are found in nature. It is often expressed in tonnes of oil equivalent (toe) or watt-hours (Wh). Unless stated otherwise, the lower heating value is used in the remainder of this text. A portion of primary energy is converted into other forms before it is used, depending on the energy conversion efficiency of the installation and method employed. This number differs significantly from the final energy as consumed by end users.
Import
In 2021, crude oil was imported mainly from the Netherlands. Natural gas net imports were mainly from the Netherlands and Norway in 2021.
Electricity
Electrabel is the main producer of electricity, followed by EDF Luminus. Short-term trading is done via the Belpex energy exchange, which is now part of APX-ENDEX. The Belgian transmission grid, operated by Elia System Operator, has a central position in the Synchronous grid of Continental Europe. This allows Belgium to trade electricity with its neighbours. Although currently there are only physical connections with the Netherlands and France, links with Germany (Alegro) and the United Kingdom (Nemo) are planned. Currently a maximum of 3500 MW can be imported. In comparison, the net installed generation capacity in Belgium is estimated to be 19,627 MW. According to the GEMIX report, the potential of renewable energy sources is 17 TWh per year.
Energy types
Nuclear power
Nuclear power typically contributes between 50% and 60% of the electricity produced domestically (50.4% in 2010). Belgium has two nuclear power plants:
Nuclear Plant Doel with four reactors of (1) 392, (2) 433, (3) 1006 and (4) 1008 MWe (1975)
Nuclear Plant Tihange with three reactors of (1) 962, (2) 1008 and (3) 1015 MWe (1975)
By law the nuclear power plants are to be phased out. Two reactors (Doel 3 and Tihange 2) were closed in 2012; however, the government has extended the life of the remaining five. The lifetime of one old reactor was extended to 2025, and in 2023, because of the Russian invasion of Ukraine, it was agreed to extend the life of the Doel 4 and Tihange 3 reactors to 2035.
Fossil fuels
Coal power
The use of coal in thermal power plants has been decreasing. In 2000 coal was still used to produce 14.25% of electricity; by 2006 this had dropped to about 10%, and in 2010 it was down to 6.3%. The last conventional coal units of the thermal power plants in Mol and Kallo were closed in March 2016.
Natural gas
In 2022 gas accounted for 24.4% of gross electricity generated, with coal at 0.04%. Fluxys is the main operator in natural gas transmission.
Several power stations use a combined cycle, including Drogenbos, Amercoeur and Tessenderlo. Building permits are being processed for plants in Seneffe and Visé.
Oil refining
At the end of 2011 Belgium had a distillation capacity of 41 Mt. That year 72% of the capacity was used.
Renewables
Renewable energy includes wind, solar, biomass and geothermal energy sources. In 2000, renewable energy (including biomass) was used for producing 0.95% of the 78.85 TWh of electricity produced domestically. This had risen to 13.01% in 2021. On 11 May 2022 combined wind and solar output reached 7,112 MW.
Wind power
At the start of 2012, there were 498 operational wind turbines in Belgium, with a capacity of 1080 MW. The amount of electricity generated from wind energy has surpassed 2 TWh per year. By 2021 wind power accounted for 19% of Belgium's installed power generation capacity and 11% of total power generation. There are seven large-scale offshore wind farm projects. Northwind (216 MW), Thorntonbank Wind Farm (325 MW) and Belwind Wind Farm (330 MW) are operational. The others are in various stages of planning.
Solar power
The exploitation of solar power is on the rise in Belgium. In 2021 solar accounted for 27% of Belgium's power generation capacity and 6% of total power generation.
Biomass and waste
In 2009, biomass and biogas were used to generate 3.5 TWh, or 3.8% of gross domestic electricity production. In 2010, 5.07 million tonnes of waste was produced in Belgium, of which 1.75 Mt was incinerated. Nearly always (99.8% of the time) energy was recovered during incineration. Non-renewable waste was used for producing 1.4% of the gross domestic electricity production; ten years earlier this share was only 0.71%. 1.9 Mt was recycled and 1 Mt was composted or fermented; only 0.062 Mt was dumped.
Hydroelectric power
Belgium has two pumped-storage hydroelectric power stations: Coo-Trois-Ponts (1164 MW) and Plate-Taille (143 MW). Pumped-storage stations are a net consumer of electricity, but they contributed 1.4% to the gross electricity production in 2010. Despite the limited potential, there are also a number of stations generating hydroelectric power, with a combined capacity of about 100 MW, contributing 0.3% of gross domestic production in 2010. Almost all of this capacity is realised in the Walloon Region. Even though hydroelectric power was used extensively in Flanders prior to the industrial revolution, there are no rivers where it can be generated on a large scale. The region's 15 installations have a combined capacity just shy of 1 MW (994 kW).
Final energy consumption
In 2010 the largest share (34%) of final energy was for domestic use (this includes households, the service sector, commerce, and agriculture). The transport and industrial sectors each consumed about a quarter. Fossil fuels are also used as raw material in several manufacturing processes; this non-energetic use accounts for the remainder of the final energy. A more detailed picture of the energy and type of fuel used by various activities is given in the table below.
Brussels-Capital Region
In the Brussels-Capital Region, the electricity and natural gas networks are operated by Sibelga. In 2011, the natural gas consumption was 10,480 GWh and the electricity consumption was 5,087 GWh. Sibelga invests in combined heat and power (CHP) installations for which it receives green certificates.
In 2011 its eleven installations had a combined capacity of 17.8 MWe and 19.7 MWth and generated 50.5 GWh of electricity. The Region of Brussels-Capital also encourages micro-CHP and implemented European directive 2002/91/CE on the Energy Performance of Buildings.
Corporations
The companies Umicore, BASF, Solvay, Duferco, Tessenderlo Chemie, ArcelorMittal, and Air Liquide together accounted for about 15% of the total electricity consumption of Belgium in 2006.
Greenhouse gas emissions
In 1990, greenhouse gas (GHG) emissions were 146.9 million tons of CO2 equivalent (Mt CO2 eq), of which 88 million tons came from the Flemish Region, 54.8 Mt from the Walloon Region and 4 Mt from the Brussels-Capital Region. As a member of the European Union, Belgium applied the European Union Emission Trading Scheme set up by Directive 2003/87/EC. The Kyoto Protocol set Belgium a target of reducing greenhouse gas emissions by 7.5% compared to 1990. Belgium set up a National Allocation Plan at the federal level with targets for each of the three regions. Belgium takes part in the United Nations Framework Convention on Climate Change and has ratified the Kyoto Protocol. On 14 November 2002, Belgium signed the Cooperation Agreement for the implementation of a National Climate Plan and reporting in the context of the UNFCCC and the Kyoto Protocol. The first National Allocation Plan was for the period from 2005 to 2007. The European Commission approved it on 20 October 2004. The second allocation plan was for the period 2008–2012 and aimed at a 7.5% reduction of greenhouse gas emissions compared to 1990. By 2019, the Walloon Region had cut its CO2 emissions by 34%, while Flanders had only cut its emissions by 8%.
Business
According to the Forbes list of billionaires (2011), the Belgian billionaire Wang Xingchun ($1 B 2011) made his wealth in the coal business. Wang is a resident of Singapore who holds Belgian citizenship. Wang is the chairman of the minerals concern Winsway Coking Coal, a company that imports coal from Mongolia to China and went public in Hong Kong in 2010.
See also
Electricity sector in Belgium
Energy policy of Belgium
== References ==
energy planning
Energy planning has a number of different meanings, but the most common meaning of the term is the process of developing long-range policies to help guide the future of a local, national, regional or even the global energy system. Energy planning is often conducted within governmental organizations, but may also be carried out by large energy companies such as electric utilities or oil and gas producers, which themselves release greenhouse gas emissions. Energy planning may be carried out with input from different stakeholders drawn from government agencies, local utilities, academia and other interest groups. Since 1973, energy modeling, on which energy planning is based, has developed significantly. Energy models can be classified into three groups: descriptive, normative, and futuristic forecasting. Energy planning is often conducted using integrated approaches that consider both the provision of energy supplies and the role of energy efficiency in reducing demands (Integrated Resource Planning). Energy planning should always reflect the outcomes of population growth and economic development. There are also several alternative energy solutions which avoid the release of greenhouse gases, such as electrifying current machines and using nuclear energy. A new energy plan for cities is created as a result of a careful investigation of the planning process, which integrates city planning and energy planning and provides energy solutions for high-level cities and industrial parks.
Planning and market concepts
Energy planning has traditionally played a strong role in setting the framework for regulations in the energy sector (for example, influencing what type of power plants might be built or what prices were charged for fuels). But in the past two decades many countries have deregulated their energy systems so that the role of energy planning has been reduced, and decisions have increasingly been left to the market. This has arguably led to increased competition in the energy sector, although there is little evidence that this has translated into lower energy prices for consumers. Indeed, in some cases, deregulation has led to significant concentrations of "market power", with large, very profitable companies having a strong influence as price setters.
Integrated resource planning
Approaches to energy planning depend on the planning agent and the scope of the exercise. Several catch-phrases are associated with energy planning. Basic to all is resource planning, i.e. a view of the possible sources of energy in the future. A fork in methods is whether the planner considers the possibility of influencing the consumption of (demand for) energy. The 1970s energy crisis ended a period of relatively stable energy prices and a stable supply-demand relationship. Concepts of demand-side management, least-cost planning and integrated resource planning (IRP) emerged with new emphasis on the need to reduce energy demand through new technologies or simple energy saving.
Sustainable energy planning
Further global integration of energy supply systems and local and global environmental limits amplify the scope of planning both in subject matter and in time perspective. Sustainable energy planning should consider the environmental impacts of energy consumption and production, particularly in light of the threat of global climate change, which is caused largely by emissions of greenhouse gases from the world's energy systems, and which is a long-term process.
The 2022 renewable energy industry outlook shows that, in 2022's political landscape, supportive policies from an administration focused on combating climate change are expected to aid the growth of the renewable energy industry. Biden has argued in favor of developing the clean energy industry in the US and in the world to vigorously address climate change. President Biden expressed his intention to move away from the oil industry. The administration's "Plan for Climate Change and Environmental Justice" aims to reach 100% carbon-free power generation by 2035 and net-zero emissions by 2050 in the USA. Many OECD countries and some U.S. states are now moving to more closely regulate their energy systems. For example, many countries and states have been adopting targets for emissions of CO2 and other greenhouse gases. In light of these developments, broad-scope integrated energy planning could become increasingly important.
Sustainable Energy Planning takes a more holistic approach to the problem of planning for future energy needs. It is based on a structured decision-making process with six key steps, namely: Exploration of the context of the current and future situation. Formulation of particular problems and opportunities which need to be addressed as part of the Sustainable Energy Planning process; this could include such issues as "peak oil" or "economic recession/depression", as well as the development of energy demand technologies. Creation of a range of models to predict the likely impact of different scenarios; this traditionally would consist of mathematical modelling but is evolving to include "Soft System Methodologies" such as focus groups, peer ethnographic research, "what if" logical scenarios, etc. Based on the output from a wide range of modelling exercises, literature reviews, open forum discussions, etc., the results are analysed and structured in an easily interpreted format. The results are then interpreted to determine the scope, scale and likely implementation methodologies which would be required to ensure successful implementation. This stage is a quality assurance process which actively interrogates each stage of the Sustainable Energy Planning process and checks that it has been carried out rigorously, without any bias, and that it furthers the aims of sustainable development and does not act against them. The last stage of the process is to take action. This may consist of the development, publication and implementation of a range of policies, regulations, procedures or tasks which together will help to achieve the goals of the Sustainable Energy Plan. Designing for implementation is often carried out using "Logical Framework Analysis", which interrogates a proposed project and checks that it is completely logical, that it has no fatal errors and that appropriate contingency arrangements have been put in place to ensure that the complete project will not fail if a particular strand of the project fails. Sustainable energy planning is particularly appropriate for communities that want to develop their own energy security, while employing best available practice in their planning processes.
Energy planning tools (software)
Energy planning can be conducted on different software platforms and over various timespans and with different qualities of resolution (i.e. very short divisions of time/space or very large divisions).
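Such tools are far more detailed than anything that can be reproduced here, but the basic scenario comparison they automate can be sketched in a few lines. The toy model below is purely illustrative: the demand figure, generation mixes and emission factors are assumptions chosen for demonstration and are not taken from any particular tool, country or study.

# Toy energy-planning scenario comparison: annual electricity demand is met by a
# mix of sources, and each scenario's CO2 emissions are estimated from assumed
# emission factors (tonnes of CO2 per MWh). All numbers are illustrative only.

EMISSION_FACTORS = {"coal": 0.95, "gas": 0.40, "nuclear": 0.0, "renewables": 0.0}

def scenario_emissions_mt(demand_twh, mix):
    # mix maps each source to its share of generation; shares must sum to 1
    assert abs(sum(mix.values()) - 1.0) < 1e-6, "generation shares must sum to 1"
    mwh = demand_twh * 1_000_000  # TWh -> MWh
    tonnes = sum(mwh * share * EMISSION_FACTORS[source] for source, share in mix.items())
    return tonnes / 1_000_000     # tonnes -> Mt

business_as_usual = {"coal": 0.30, "gas": 0.40, "nuclear": 0.10, "renewables": 0.20}
high_renewables = {"coal": 0.05, "gas": 0.25, "nuclear": 0.10, "renewables": 0.60}

for name, mix in (("business as usual", business_as_usual),
                  ("high renewables", high_renewables)):
    print(name, round(scenario_emissions_mt(90.0, mix), 1), "Mt CO2 per year")

Real planning tools layer hourly dispatch, costs, transmission constraints and demand growth on top of this kind of accounting, but the comparison of scenarios against a baseline is the common core.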
There are multiple platforms available for all sorts of energy planning analysis, with focuses on different areas, and there has been significant growth in the modeling software and platforms available in recent years. Energy planning tools can be identified as commercial, open source, educational, free, and as used by governments (often custom tools). Potential energy solutions Electrification One potential energy option is the move to electrify all machines that currently use fossil fuels as their energy source. There are already electric alternatives available, such as electric cars, electric cooktops, and electric heat pumps; these products now need to be widely adopted to electrify and decarbonize our energy use. Reducing dependence on fossil fuels by switching to electric machines requires that the electricity itself be generated from renewable sources. As of 2020, 60.3% of electricity generated in the United States came from fossil fuels, 19.7% came from nuclear energy, and 19.8% came from renewables. The United States still relies heavily on fossil fuels as a source of energy. For the electrification of machines to help the effort to decarbonize, more renewable energy sources, such as wind and solar, would have to be built. Another potential problem that comes with the use of renewable energy is energy transmission. A study conducted by Princeton University found that the locations with the highest renewable potential are in the Midwest; however, the places with the highest energy demand are coastal cities. To make effective use of the electricity coming from these renewable sources, the U.S. electric grid would have to be interconnected on a national scale, and more high-voltage transmission lines would have to be built. The total amount of electricity that the grid is able to accommodate would also have to increase. If more electric cars were driven, there would be a decline in gasoline demand and an increased demand for electricity; this increased demand would require electric grids to transport more energy at any given moment than is currently viable. Nuclear Energy Nuclear energy is sometimes considered to be a clean energy source. Nuclear energy's main associated carbon emissions take place during the process of mining for uranium, while the process of obtaining energy from uranium does not emit any carbon. A primary concern in using nuclear energy stems from the issue of what to do with radioactive waste. The highest-level radioactive waste is spent reactor fuel, whose radioactivity decreases over time through radioactive decay. The time it takes for the radioactive waste to decay depends on the length of the substance's half-life. Currently, the United States does not have a permanent disposal facility for high-level nuclear waste. Public support for increasing nuclear energy production is an important consideration when planning for sustainable energy. Nuclear energy production has a complicated past. Accidents and meltdowns at multiple nuclear power plants have tainted the reputation of nuclear energy for many. A considerable section of the public is concerned about the health and environmental impacts of a nuclear power plant melting down, believing that the risk is not worth the reward.
However, a portion of the population believes that expanding nuclear energy is necessary and that the threats of climate change far outweigh the possibility of a meltdown, especially considering the advancements in technology that have been made within recent decades. Global greenhouse gas emissions and energy production The majority of global man-made greenhouse gas emissions comes from the energy sector, which accounts for 72.0% of global emissions. Within the overall total, producing electricity and heat is the largest contributor (31.0%), followed by transportation (15%), manufacturing (12%), agriculture (11%) and forestry (6%). Multiple molecular compounds fall under the classification of greenhouse gases, including carbon dioxide, methane, and nitrous oxide. Carbon dioxide is the largest emitted greenhouse gas, making up 76% of global emissions. Methane is the second largest at 16% and is primarily emitted by the agriculture industry. Lastly, nitrous oxide makes up 6% of global greenhouse gas emissions, with agriculture and industry being the largest emitters. The challenges in the energy sector include the reliance on coal. Coal production remains key to the energy mix, and importing countries rely on coal to meet growing energy demand. Energy planning evaluates the current energy situation and estimates future changes based on industrialization patterns and resource availability. Many of the future changes and solutions depend on the global effort to move away from coal, to develop energy-efficient technology and to continue to electrify the world. See also Capacity factor – Electrical production measure Wind power forecasting – Estimate of the expected production of one or more wind turbines Wind energy software – Type of specialized software Variable renewable energy#Intermittent energy source – Class of renewable energy sources Wind resource assessment – Process by which wind power developers estimate the future energy production of a wind farm Virtual power plant – Cloud-based distributed power plant Electricity#Generation and transmission – Phenomena related to electric charge Transmission system operator – Energy transporter Base load – Minimum level of demand on an electrical grid over a span of time Merit order – Ranking of available sources of energy Load factor (electrical) – The average power divided by the peak power over a period of time Load following power plant – Power plant that adjusts output based on demand Peak demand – Highest power demand on a grid in a specified period References External links An online community for energy planners working on energy for sustainable development. A master's education in energy planning at Aalborg University in Denmark.
transport in the united kingdom
Transport in the United Kingdom is facilitated by extensive road, rail, air and water networks. Transport is a devolved matter, with each of the countries of the United Kingdom having separate systems under separate governments. For details of transport in each country, see: Transport in England Transport in Wales Transport in Scotland Transport in Northern Ireland A radial road network totals 29,145 miles (46,904 km) of main roads, 2,173 miles (3,497 km) of motorways and 213,750 miles (344,000 km) of paved roads. The National Rail network of 10,072 route miles (16,116 km) in Great Britain and 189 route miles (303 route km) in Northern Ireland carries over 18,000 passenger and 1,000 freight trains daily. Urban rail networks exist in all cities and towns alongside dense bus and light rail networks. There are many regional and international airports, with Heathrow Airport in London being the second busiest in the world and the busiest in Europe. The UK also has a network of ports which received over 486 million tons of goods in 2019. Transport trends Since 1952 (the earliest date for which comparable figures are available), the United Kingdom has seen a growth in car use, which increased its modal share, while the modal shares of buses and railways declined. However, since the 1990s, rail has started increasing its modal share at the expense of cars, increasing from 5% to 10% of passenger-kilometres travelled. This coincided with the privatisation of British Rail. In 1952, 27% of distance travelled was by car or taxi, with 42% being by bus or coach and 18% by rail. A further 11% was by bicycle and 3% by motorcycle. The distance travelled by air was negligible. Passenger transport continues to grow strongly. Figures from the Department for Transport show that in 2018 people made 4.8 billion local bus passenger journeys, 58% of all public transport journeys. There were 1.8 billion rail passenger journeys in the United Kingdom. Light rail and tram travel also continued to grow, to the highest level (0.3 billion journeys) since comparable records began in 1983. In 2018/19, there was £18.1bn of public expenditure on railways, an increase of 12% (£1.9bn). The average amount of time people wait at a stop or station for public transport in London and Manchester is 10 minutes. Freight transport has undergone similar changes, increasing in volume and shifting from railways onto the road. In 1953, 89 billion tonne kilometres of goods were moved, with rail accounting for 42%, road 36% and water 22%. By 2010 the volume of freight moved had more than doubled to 222 billion tonne kilometres, of which 9% was moved by rail, 19% by water, 5% by pipeline and 68% by road. Despite the growth in tonne kilometres, the environmental external costs of trucks and lorries in the UK have reportedly decreased. Between 1990 and 2000, there was a move to heavier goods vehicles due to major changes in the haulage industry, including a shift in sales to larger articulated vehicles. A larger than average fleet turnover has ensured a swift introduction of new and cleaner vehicles in the UK. The adoption of plug-in electric vehicles is widely supported by the British government through the plug-in car and van grant schemes and other incentives. About 745,000 light-duty plug-in electric vehicles had been registered in the UK up to December 2021, consisting of 395,000 all-electric vehicles and 350,000 plug-in hybrids. In 2019, the UK had the second largest European stock of light-duty plug-in vehicles in use after Norway.
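The freight figures quoted above can be made more concrete by converting the percentage shares into absolute tonne-kilometres per mode. The short sketch below is purely illustrative arithmetic based on the totals and shares stated in this section; it is not an official Department for Transport calculation.

```python
# Convert the quoted freight modal shares into absolute volumes
# (billion tonne-kilometres) for 1953 and 2010.

freight = {
    1953: {"total": 89,  "shares": {"rail": 0.42, "road": 0.36, "water": 0.22}},
    2010: {"total": 222, "shares": {"rail": 0.09, "road": 0.68, "water": 0.19,
                                    "pipeline": 0.05}},
}

for year, data in freight.items():
    print(f"{year}: total {data['total']} billion tonne-km")
    for mode, share in data["shares"].items():
        print(f"  {mode:<8} {data['total'] * share:6.1f} billion tonne-km")

# Although the total more than doubled, rail freight fell from roughly
# 37 to 20 billion tonne-km, while road freight rose from about 32 to 151.
```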
Greenhouse gas emissions A critical issue for the transport sector is its contribution to climate change emissions. Transport became the largest-emitting sector of greenhouse gases in 2016. Since 1990, carbon dioxide emissions from transport in the UK have fallen by just 4%, compared with an economy-wide reduction of 43%. Emissions from surface transport accounted for 22% of carbon dioxide emissions in the UK in 2019, with cars being responsible for over half of that. The Climate Change Committee has suggested that transport will need to cut its emissions to zero through a mix of demand reduction, the adoption of more efficient combustion engine vehicles, a shift to non-car modes and electrification of the fleet. Air transport There are 471 airports and airfields in the UK, as well as 11 heliports. Heathrow Airport, owned by Heathrow Airport Holdings, is the largest airport in the country by traffic volume and one of the world's 10 busiest airports by passenger numbers. Gatwick Airport, owned by Global Infrastructure Partners, is the second largest, and Manchester Airport, run by Manchester Airport Group (which also owns various other airports), is the third largest. Other major airports include Stansted Airport in Essex and Luton Airport in Bedfordshire, both about 30 miles (48 km) north of London, Birmingham Airport in Solihull, Newcastle Airport, Liverpool Airport, and Bristol Airport. Outside England, Cardiff, Edinburgh and Belfast are the busiest airports serving Wales, Scotland and Northern Ireland respectively. The largest airline in the United Kingdom by passenger traffic is easyJet, whereas British Airways is the largest by fleet size and international destinations. Others include Jet2, TUI Airways and Virgin Atlantic. Rail transport The rail network in the United Kingdom consists of two independent parts, that of Northern Ireland and that of Great Britain. Since 1994, the latter has been connected to mainland Europe via the Channel Tunnel. The network of Northern Ireland is connected to that of the Republic of Ireland. The National Rail network of 10,072 miles (16,209 km) in Great Britain and 189 route miles (303 route km) in Northern Ireland carries 1.7 billion passengers and 110 million tonnes of freight annually. Urban rail networks are also well developed in London and several other cities. There were once over 30,000 miles (48,000 km) of rail network in the UK. The UK was ranked eighth among national European rail systems in the 2017 European Railway Performance Index, which assesses intensity of use, quality of service and safety. Great Britain The rail network in Great Britain is the oldest such network in the world. The system consists of five high-speed main lines (the West Coast, East Coast, Midland, Great Western and Great Eastern), which radiate from London to the rest of the country, augmented by regional rail lines and dense commuter networks within cities and other high-speed lines. High Speed 1 is operationally separate from the rest of the network, and is built to the same standard as the TGV system in France. The world's first passenger railway running on steam was the Stockton and Darlington Railway, opened on 27 September 1825. Just under five years later, the world's first intercity railway, the Liverpool and Manchester Railway, designed by George Stephenson, was opened by the Prime Minister, the Duke of Wellington, on 15 September 1830.
The network grew rapidly as a patchwork of literally hundreds of separate companies during the Victorian era, which eventually was consolidated into just four by 1923, as the boom in railways ended and they began to lose money. Eventually, the entire system came under state control in 1948, under the British Transport Commission's Railway Executive. After 1962 it came under the control of the British Railways Board, then British Railways (later British Rail), and the network was reduced to less than half of its original size by the infamous Beeching cuts of the 1960s, when many unprofitable branch lines were closed. Several stations have since been reopened in Wales and in England. In 1994 and 1995, British Rail was split into infrastructure, maintenance, rolling stock, passenger and freight companies, which were privatised from 1996 to 1997. The privatisation has delivered very mixed results, with healthy passenger growth, mass refurbishment of infrastructure, investment in new rolling stock, and safety improvements being offset by concerns over network capacity and the overall cost to the taxpayer, which has increased due to growth in passenger numbers. While the price of anytime and off-peak tickets has increased, the price of Advance tickets has dramatically decreased in real terms: the average Advance ticket in 1995 cost £9.14 (in 2014 prices) compared to £5.17 in 2014. In Britain, the infrastructure (chiefly track, stations, depots and signalling) is owned and maintained by Network Rail, a not-for-profit company. Passenger services are operated by public and private train-operating companies (TOCs), which are franchises awarded by the Department for Transport (in England), Transport Scotland, and Transport for Wales. Examples include Avanti West Coast and East Midlands Railway. Freight trains are operated by freight operating companies, such as DB Cargo UK, which are commercial operations unsupported by the government. Most train operating companies do not own the locomotives and coaches that they use to operate passenger services. Instead, they are required to lease these from the three rolling stock companies (ROSCOs), with train maintenance carried out by companies such as Bombardier and Alstom. Rail passenger revenue in 2018/19 increased in real terms year-on-year. In 2018/19, there was £18.1bn of public expenditure on railways, an increase of 12%. There were 1.8 billion rail passenger journeys in England. Light rail and tram travel also continued to grow, to the highest level (0.3 billion journeys) since comparable records began in 1983. In Great Britain there are 10,274 miles (16,534 km) of 1,435 mm (4 ft 8+1⁄2 in) gauge track, reduced from a historic peak of over 30,000 miles (48,000 km). Of this, 3,062 miles (4,928 km) is electrified and 7,823 miles (12,590 km) is double or multiple track. The maximum scheduled speed on the regular network has historically been around 125 miles per hour (201 km/h) on the InterCity lines. On High Speed 1, trains are now able to reach the speeds of French TGVs. High Speed 2 (HS2) is intended to be a key part of the UK's low-carbon transport future. It is a new high-speed railway planned to link London, the Midlands, the North and Scotland, serving over 25 stations, including eight of Britain's 10 largest cities, and connecting around 30 million people. The Network North programme consists of hundreds of transport projects, mostly in Northern England and the Midlands, including new high-speed lines linking up major cities and railway improvements.
To cope with increasing passenger numbers, there is a large ongoing programme of upgrades to the network, including Thameslink, Crossrail, electrification of lines, in-cab signalling, new inter-city trains and high-speed lines. Great British Railways is a planned state-owned public body that will oversee rail transport in Great Britain. The Office of Rail and Road is responsible for the economic and safety regulation of the UK's railways. Northern Ireland In Northern Ireland, Northern Ireland Railways (NIR) both owns the infrastructure and operates passenger rail services. The Northern Ireland rail network is one of the few networks in Europe that carry no freight. It is publicly owned. NIR was united in 1996 with Northern Ireland's two publicly owned bus operators – Ulsterbus and Metro (formerly Citybus) – under the brand Translink. In Northern Ireland there are 212 miles (341 km) of track at 1,600 mm (5 ft 3 in) gauge, of which 118 miles (190 km) is double track. International rail services Eurostar operates trains via the Channel Tunnel to France, Belgium and the Netherlands, while the joint Northern Ireland Railways/Iarnród Éireann Enterprise trains link Northern Ireland and the Republic of Ireland; there is also one Iarnród Éireann train each weekday morning from Dublin to Newry. Rapid transit Three cities in the United Kingdom have rapid transit systems. The best known is the London Underground (commonly known as the Tube), the oldest rapid transit system in the world, which opened in 1863. Another system in London is the separate Docklands Light Railway, although this is more of an elevated light metro system due to its lower passenger capacity; it is nevertheless integrated with the Underground in many ways. Outside London, there is the Glasgow Subway, which is the third oldest rapid transit system in the world. One other system, the Tyne & Wear Metro (opened 1980), serves Newcastle, Gateshead, Sunderland, North Tyneside and South Tyneside, and has many similarities to a rapid transit system, including underground stations, but is sometimes considered to be light rail. Urban rail Urban commuter rail networks are focused on many of the country's major cities: Belfast – Belfast Suburban Rail Birmingham – West Midlands Railway Bristol – Great Western Railway Cardiff – Valley Lines Edinburgh – ScotRail Exeter – Great Western Railway Glasgow – ScotRail Leeds – MetroTrain Liverpool – Merseyrail London – London Underground, London Overground, and Elizabeth line Manchester – Northern and TransPennine Express Newcastle – Tyne & Wear Metro They consist of several railway lines connecting city centre stations of major cities to suburbs and surrounding towns. Train services and ticketing are fully integrated with the national rail network and are not considered separate. In London, a route for Crossrail 2 has been safeguarded. Trams and light rail Tram systems were popular in the United Kingdom in the late 19th and early 20th century. However, with the rise of the car they began to be widely dismantled in the 1950s. By 1962 only the Blackpool tramway and the Glasgow Corporation Tramways remained; the final Glasgow service was withdrawn on 4 September 1962. Recent years have seen a revival in the United Kingdom, as in other countries, of trams together with light rail systems.
Since the 1990s, a second generation of tram networks has been built, starting operation in Manchester in 1992, Sheffield in 1994, the West Midlands in 1999, South London in 2000, Nottingham in 2004 and Edinburgh in 2014, whilst the original trams in Blackpool were upgraded to second generation vehicles in 2012. Coventry Very Light Rail is a planned system around the city of Coventry. Four light rapid transit lines are opening in the Welsh capital of Cardiff in 2023 as part of Phase 1 of the current South Wales Metro plan; the network will reach as far out of the capital as Hirwaun, a town 31 miles (50 km) from Cardiff Bay, with three further lines planned to open by 2026. Road transport The road network in Great Britain, in 2006, consisted of 7,596 miles (12,225 km) of trunk roads (including 2,176 miles (3,502 km) of motorway), 23,658 miles (38,074 km) of principal roads (including 34 miles (55 km) of motorway), 71,244 miles (114,656 km) of "B" and "C" roads, and 145,017 miles (233,382 km) of unclassified roads (mainly local streets and access roads) – totalling 247,523 miles (398,350 km). Road is the most popular method of transport in the United Kingdom, carrying over 90% of motorised passenger travel and 65% of domestic freight. The major motorways and trunk roads, many of which are dual carriageway, form the trunk network which links all cities and major towns. These carry about one third of the nation's traffic, and occupy about 0.16% of its land area. The motorway system was constructed from the 1950s onwards. National Highways (a UK government-owned company) is responsible for maintaining motorways and trunk roads in England. Other English roads are maintained by local authorities. In Scotland and Wales, roads are the responsibility of Transport Scotland, an executive agency of the Scottish Government, and of the North and Mid Wales Trunk Road Agent and South Wales Trunk Road Agent on behalf of the Welsh Government, respectively. Northern Ireland's roads are overseen by the Department for Infrastructure Roads (DfI Roads). In London, Transport for London is responsible for all trunk roads and other major roads, which are part of the Transport for London Road Network. Toll roads are rare in the United Kingdom, though there are a number of toll bridges. Road traffic congestion has been identified as a key concern for the future prosperity of the United Kingdom, and policies and measures are being investigated and developed by the government to reduce congestion. In 2003, the United Kingdom's first toll motorway, the M6 Toll, opened in the West Midlands area to relieve the congested M6 motorway. Rod Eddington, in his 2006 report Transport's role in sustaining the UK's productivity and competitiveness, recommended that the congestion problem should be tackled with a "sophisticated policy mix" of congestion-targeted road pricing and improving the capacity and performance of the transport network through infrastructure investment and better use of the existing network. Congestion charging systems operate in the cities of London and Durham and on the Dartford Crossing. Driving is on the left. The usual maximum speed limit is 70 miles per hour (112 km/h) on motorways and dual carriageways. On 29 April 2015, the UK Supreme Court ruled that the government must take immediate action to cut air pollution, following a case brought by environmental lawyers at ClientEarth.
Cycle infrastructure The National Cycle Network, created by the charity Sustrans, is the UK's major network of signed routes for cycling. It uses dedicated bike paths as well as roads with minimal traffic, and covers 14,000 miles, passing within a mile of half of all homes. Other cycling routes such as The National Byway, the Sea to Sea Cycle Route and local cycleways can be found across the country. Segregated cycle paths are being installed in some UK cities, such as London, Glasgow, Manchester, Bristol and Cardiff. In London, Transport for London has installed Cycleways. Road passenger transport Buses Local bus services cover the whole country. Since deregulation, the majority (80% by the late 1990s) of these local bus companies have been taken over by one of the "Big Five" private transport companies: Arriva, FirstGroup, Go-Ahead Group, Mobico Group and Stagecoach Group. In Northern Ireland, coach, bus (and rail) services remain state-owned and are provided by Translink. Cities and regions such as Manchester and Nottingham have publicly owned bus networks and other transport. Coaches Coaches provide long-distance links throughout the UK: in England and Wales the majority of coach services are provided by National Express. Flixbus and Megabus run no-frills coach services in competition with National Express; the latter's services in Scotland are operated in co-operation with Scottish Citylink. BlaBlaBus also operates to France and the Low Countries from London. Road freight transport In 2014, there were around 285,000 HGV drivers in the UK, and in 2013 the trucking industry moved 1.6 billion tonnes of goods, generating £22.9 billion in revenue. Water transport Due to the United Kingdom's island location, before the Channel Tunnel the only way to enter or leave the country (apart from air travel) was on water, except at the border between Northern Ireland and the Republic of Ireland. Ports and harbours About 95% of freight enters the United Kingdom by sea (75% by value). Four major ports handle the most freight traffic: Grimsby & Immingham on the east coast. Port of London, on the River Thames. Milford Haven, in south-west Wales. Port of Southampton, on the Solent. There are many other ports and harbours around the United Kingdom, including the following: Aberdeen, Avonmouth, Barrow, Barry, Belfast, Boston, Bristol, Cairnryan, Cardiff, Dover, Edinburgh/Leith, Falmouth, Felixstowe, Fishguard, Glasgow, Gloucester, Grangemouth, Grimsby, Harwich, Heysham, Holyhead, Hull, Kirkwall, Larne, Liverpool, Londonderry, Manchester, Oban, Pembroke Dock, Peterhead, Plymouth, Poole, Port Talbot, Portishead, Portsmouth, Scapa Flow, Stornoway, Stranraer, Sullom Voe, Swansea, Tees (Middlesbrough), Tyne (Newcastle). Merchant marine For long periods of recent history, Britain had the largest registered merchant fleet in the world, but it has slipped down the rankings partly due to the use of flags of convenience. There are 429 ships of 1,000 gross tonnage (GT) or over, making a total of 9,181,284 GT (9,566,275 tonnes deadweight (DWT)). These are split into the following types: bulk carrier 18, cargo ship 55, chemical tanker 48, container ship 134, liquefied gas 11, passenger ship 12, passenger/cargo ship 64, petroleum tanker 40, refrigerated cargo ship 19, roll-on/roll-off 25, vehicle carrier 3. Ferries Ferries, both passenger-only and passenger-and-vehicle, operate within the United Kingdom across rivers and stretches of water.
In east London the Woolwich Ferry links the North and South Circular Roads. Gosport and Portsmouth are linked by the Gosport Ferry; Southampton and the Isle of Wight are linked by ferry and fast catamaran services; North Shields and South Shields on Tyneside are linked by the Shields Ferry; and the Mersey has the Mersey Ferry. In Scotland, Caledonian MacBrayne provides passenger and RO-RO ferry services in the Firth of Clyde and to various islands of the Inner and Outer Hebrides from Oban and other ports. Orkney Ferries provides services within the Orkney Isles, and NorthLink Ferries provides services from the Scottish mainland to Orkney and Shetland, mainly from Aberdeen although other ports are also used. Ferries operate to Northern Ireland from Stranraer and Cairnryan to Larne and Belfast. Holyhead, Pembroke Dock and Fishguard are the principal ports for ferries between Wales and Ireland. Heysham and Liverpool/Birkenhead have ferry services to the Isle of Man. Passenger ferries operate internationally to nearby countries such as France, the Republic of Ireland, Belgium, the Netherlands, and Spain. Ferries usually originate from one of the following: Dover, with services to Calais operated by P&O Ferries, My Ferry Link and DFDS Seaways. Portsmouth International Port is the main hub for longer services on the Western Channel to Ouistreham, Le Havre, Cherbourg and Saint Malo in France, and Santander and Bilbao in Spain. Most services are operated by the French company Brittany Ferries, with some to Cherbourg and Le Havre by Condor Ferries and DFDS Seaways. Services from Plymouth operate to Santander (Spain) and Roscoff (France) with Brittany Ferries. Services from Weymouth and Poole operate to the Channel Islands with Condor Ferries and to Cherbourg (from Poole) with Brittany Ferries. Holyhead is the principal ferry port for services to Ireland, operated by Irish Ferries and Stena Line, with some services originating from Pembroke. Hull and Harwich are the main hubs for services to the Netherlands. More services from Ramsgate, Newhaven, Southampton, and Lymington operate to France, Belgium and the Isle of Wight. Waterbuses operate on rivers in some of the country's largest cities, such as London (London River Services and Thames Clippers), Cardiff (Cardiff Waterbus) and Bristol (Bristol Ferry Boat). Other shipping Cruise ships depart from the United Kingdom for destinations worldwide, many heading for ports around the Mediterranean and Caribbean. The Cunard Line still offers a scheduled transatlantic service between Southampton and New York City with RMS Queen Mary 2. The Solent is a world centre for yachting and home to the largest number of private yachts in the world. Inland waterways Major canal building began in the United Kingdom after the onset of the Industrial Revolution in the 18th century. A large canal network was built and became the primary method of transporting goods throughout the country; however, by the 1830s, with the development of the railways, the canal network began to go into decline. There are currently 1,988 miles (3,199 km) of waterways in the United Kingdom and the primary use is recreational; 385 miles (620 km) are used for commerce and leisure. Education and professional development The United Kingdom also has a well-developed network of organisations offering education and professional development in the transport and logistics sectors.
A number of universities offer degree programmes in transport, usually covering transport planning, engineering of transport infrastructure, and management of transport and logistics services. The Institute for Transport Studies at the University of Leeds is one such organisation. Pupils in England and Wales can study transport and logistics through apprenticeships at further education and sixth form colleges. Professional development for those working in the transport and logistics sectors is provided by a number of professional institutes representing specific sectors. These include: Chartered Institute of Logistics and Transport (CILT(UK)) Chartered Institution of Highways and Transportation (CIHT) Chartered Institution of Railway Operators Transport Planning Society (TPS) Through these professional bodies, transport planners and engineers can train for a number of professional qualifications, including: Chartered engineer Incorporated engineer Transport planning professional See also Rail transport in Great Britain Royal Train Transport in England Transport in London British Transport Police References This article incorporates public domain material from The World Factbook. CIA.
home composting
Home composting is the process of using household waste to make compost at home. Composting is the biological decomposition of organic waste by recycling food and other organic materials into compost. Home composting can be practiced within households for various environmental advantages, such as increasing soil fertility, reducing landfill and methane contributions, and limiting food waste. History While composting was cultivated during the Neolithic Age in Scotland, home composting experienced a much later start. Indoor composting, also known as home composting, was pioneered in 1905 by Albert Howard, who went on to develop the practice over the next 30 years. J.I. Rodale, considered the pioneer of the organic method in America, continued Howard's work and further developed indoor composting from 1942 on. Since then, various methods of composting have been adapted. Indoor composting aided in organic gardening and farming and the development of modern composting. It originally entailed a layering method, where materials are stacked in alternating layers and the stack is turned at least twice. Fundamentals Aerobic vs. Anaerobic Two ways to compost at home are the aerobic and anaerobic methods. Aerobic composting involves the decomposition of organic materials using oxygen and is the recommended method for home composting. There are several benefits of aerobic (with oxygen) composting over anaerobic (without oxygen) composting, such as fewer harmful byproducts. While aerobic composting does produce some carbon dioxide, anaerobic composting releases methane, which is a greenhouse gas significantly more harmful than carbon dioxide. Aerobic composting is a faster process because the availability of oxygen allows composting microorganisms to grow. Aerobic composting calls for larger bins, oxygen, moisture, and turning (only if composting without worms). Organic Waste There are various types of organic waste that can be used to compost at home. Composting requires two types of organic materials: "green" waste and "brown" waste. This is because organic waste requires four elements to decompose: nitrogen, carbon, oxygen, and water. A proper carbon-to-nitrogen ratio must be maintained along with proper oxygen and water levels in order to create compost. An effective ratio is 25-30 parts carbon to 1 part nitrogen. All compostable materials contain carbon but have different levels of nitrogen. Greens have a lower carbon-to-nitrogen ratio. Greens refer to leafy or fresh organic ingredients and are generally wet. Browns are richer in carbon and are generally dry ingredients. Too much carbon will result in a drier compost pile that takes more time to decompose, while too much nitrogen will result in a more moist, slimy, and pungent pile. To obtain an effective ratio for decomposition, include two to four parts brown compost to one part green compost in the pile. Implementation Step 1: Set Up Bin The first step of composting at home is to secure a composting bin and location. Bin Type - Composting indoors usually calls for a closed bin method, while composting outside in the garden or yard allows for the open bin method without a cover. Compost bins can be purchased online, but alternatives for closed compost bins include old wooden dressers, garbage cans, and wine crates, while open compost bins can be made using wooden posts, metal stakes, and wire mesh. Bin Size - Bin size can range from 5 gallon bins for a small household to 18 gallons for a large household. A 3 x 3 x 3 foot container will also suffice.
Drainage - Bins need ample drainage and may require holes to be drilled at the bottom. Location - Whether indoor or outdoor, locating the bin in a dry and shady spot is suggested. Securing an additional smaller compost bin to collect food scraps is recommended if the primary bin is far from the main area where compost materials are frequently produced. This will avoid the inconvenience of constantly moving to the location of the main compost bin. Step 2: Gather Materials The next step to home composting is to gather materials for the compost layers. Materials commonly available in a household include various food scraps, coffee grounds, tea bags, shredded paper, and more. To maintain a proper carbon-to-nitrogen ratio, collect approximately two to four parts of brown compost matter to one part green compost matter. Breaking down ingredients before adding them to the compost pile will allow them to decompose more easily and quickly. Step 3: Add to Bin There are various methods of composting, but the suggested method at home involves aerobic composting either with worms (vermicomposting) or without worms. Layering Home composting can be completed through a layering process. Start with a layer of coarse ingredients to allow for airflow, then alternate with layers of nitrogen-rich (greens) and carbon-rich materials (browns), and mix together. Bury food scraps in the center of the pile and add soil on top every few layers. Vermicomposting To vermicompost, approximately one pound of worms can be added to the top of the soil layer, but they will need ample bedding (newspaper, shredded paper, etc.). Red wiggler worms (Eisenia fetida) are suggested as they are able to eat half their body weight in one day. Vermicomposting can take place indoors or outdoors. However, it is recommended to keep the worm bin indoors since worms can die from extreme temperatures. Vermicomposting is faster (2–3 months) than no-worm composting (3–9 months), involves minimal maintenance, limits odor, and provides multiple nutrients to the soil. Step 4: Aftercare Maintenance Regardless of the method used, a proportionally small amount of water may need to be added to the pile when dry to ensure proper moisture content. Composting without worms will require turning the pile every few weeks to guarantee proper aeration. The more often it is turned, the faster the compost will decompose. Vermicomposting does not require turning. Usage Compost is finished if the material is dark, crumbly, smells earthy, and contains no recognizable scraps. Finished compost can be used in a multitude of ways, such as for mulch, amending soil, fertilizer, and compost tea. Environmental Benefits Increase Soil Health Home composting promotes soil health biologically, chemically, and structurally. Compost contains three major nutrients (nitrogen, phosphorus, and potassium) as well as other elements like calcium, iron, magnesium, and zinc that assist in soil and plant health. It works as a natural and organic fertilizer, as opposed to synthetic fertilizers with harmful chemicals. Home compost is also able to improve soil water retention, capacity, and productivity. It provides beneficial microbes that increase nutrients and humus formation in the soil. Humus acts like a glue agent and binds soil together, which helps prevent soil erosion. Reduce Greenhouse Gas Emissions One benefit of aerobic home composting is the reduction in methane emissions, one of the most threatening greenhouse gases to the environment.
Food waste and packaging are responsible for 70% of household waste that resides in landfills. Over 95% of food waste ends up at landfills, where it produces methane, carbon dioxide, and other greenhouse gases through anaerobic digestion. These greenhouse gases trap heat within the atmosphere and further contribute to climate change. It is predicted that by 2050, global greenhouse gas emissions will increase by 80% from food production alone. Home composting can limit landfill waste and therefore methane emissions as well. When food waste is thrown out and ends up in waterways, it can contribute to algae blooms. Algae blooms can produce toxins that have harmful health effects on mammals and other organisms, including humans. Eutrophication, or extreme nutrient levels, leads to dense algae bloom formation, which can damage drinking water and create "dead zones" that harm marine life. Algae blooms also heavily contribute to global methane emissions. Greenhouse gases are emitted in the manufacturing of synthetic fertilizers, so using organic compost material to fertilize home gardens instead reduces these emissions. By limiting the amount of food waste that ends up in landfills and using homemade fertilizer through home composting, households will reduce their carbon footprint. Reduce Waste Food waste contributes to the hunger crisis, in which 690 million people in the world are undernourished, and households are responsible for a significant fraction of food waste. A food chain waste study of Melbourne demonstrated that 40% of waste occurs post-consumer. This adds to the wasted energy, emissions, and cost of production and supply. Approximately 40% of the food that is produced is ultimately disposed of. The U.S. Department of Agriculture estimates that approximately 133 billion pounds and $161 billion worth of food were wasted in 2010 alone. Home composting can limit the amount of waste contributed by households since food waste will not be disposed of but instead used productively. References
global climate coalition
The Global Climate Coalition (GCC) (1989–2001) was an international lobbying group of businesses that opposed action to reduce greenhouse gas emissions and engaged in climate change denial, publicly challenging the science behind global warming. The GCC was the largest industry group active in climate policy and the most prominent industry advocate in international climate negotiations. The GCC was involved in opposition to the Kyoto Protocol, and played a role in blocking ratification by the United States. The coalition knew it could not deny the scientific consensus on climate change, but sought to sow doubt over it and create manufactured controversy. The GCC dissolved in 2001 after membership declined in the face of improved understanding of the role of greenhouse gases in climate change and of public criticism. The GCC declared that its primary objective had been achieved: U.S. President George W. Bush withdrew the U.S., which alone accounted for nearly a quarter of the world's greenhouse gas emissions, from the Kyoto Protocol process, and thus mandatory global reductions were rendered unreachable. Founding The Global Climate Coalition (GCC) was formed in 1989 as a project under the auspices of the National Association of Manufacturers. The GCC was formed to represent the interests of the major producers and users of fossil fuels, to oppose regulation to mitigate global warming, and to challenge the science behind global warming. Context for the founding of the GCC from 1988 included the establishment of the Intergovernmental Panel on Climate Change (IPCC) and NASA climatologist James Hansen's congressional testimony that climate change was occurring. The government affairs offices of five or six corporations recognized that they had been inadequately organized for the Montreal Protocol, the international treaty that phased out ozone depleting chlorofluorocarbons, and for the Clean Air Act in the United States, and that fossil fuels would be targeted for regulation. According to GCC's mission statement on the home page of its website, GCC was established "to coordinate business participation in the international policy debate on the issue of global climate change and global warming," and GCC's executive director in a 1993 press release said GCC was organized "as the leading voice for industry on the global climate change issue." GCC reorganized independently in 1992, with the first chairman of the board of directors being the director of government relations for the Phillips Petroleum Company. Exxon, later ExxonMobil, was a founding member, and a founding member of the GCC's board of directors; the energy giant also had a leadership role in the coalition. The American Petroleum Institute (API) was a leading member of the coalition. API's executive vice president was a chairman of the coalition's board of directors. Other GCC founding members included the National Coal Association, United States Chamber of Commerce, American Forest & Paper Association, and Edison Electric Institute. GCC's executive director John Shlaes was previously the director of government relations at the Edison Electric Institute. GCC was run by Ruder Finn, a public relations firm. GCC's comprehensive PR campaign was designed by E. Bruce Harrison, who had been creating campaigns for US industry against environmental legislation since the 1970s. GCC was the largest industry group active in climate policy. About 40 companies and industry associations were GCC members.
Considering member corporations, member trade associations, and businesses represented by member trade associations, GCC represented over 230,000 businesses. Industry sectors represented included: aluminium, paper, transportation, power generation, petroleum, chemical, and small businesses. All major oil companies were members until 1996, when British Petroleum resigned (Shell left in 1998). GCC members were from industries that would have been adversely affected by limitations on fossil fuel consumption. GCC was funded by membership dues. Advocacy activities GCC was one of the most powerful lobbying groups against action to mitigate global warming. It was the most prominent industry advocate in international climate negotiations, and led a campaign opposed to policies to reduce greenhouse gas emissions. The GCC was one of the most powerful non-governmental organizations representing business interests in climate policy, according to Kal Raustiala, professor at the UCLA School of Law. GCC's advocacy activities included lobbying government officials, grassroots lobbying through press releases and advertising, participation in international climate conferences, criticism of the processes of international climate organizations, critiques of climate models, and personal attacks on scientists and environmentalists. Policy positions advocated by the coalition included denial of anthropogenic climate change, emphasizing the uncertainty in climatology, advocating for additional research, highlighting the benefits and downplaying the risks of climate change, stressing the priority of economic development, defending national sovereignty, and opposing the regulation of greenhouse gas emissions. GCC sent delegations to all of the major international climate conventions. Only nations and non-profits may send official delegates to the United Nations Climate Change conferences. GCC registered with the United Nations Framework Convention on Climate Change (UNFCCC) as a non-governmental organization, and executives from GCC members attended official UN conferences as GCC delegates. In 1990, after US president George H. W. Bush addressed the Intergovernmental Panel on Climate Change (IPCC), urging caution in responding to global warming and offering no new proposals, GCC said Bush's speech was "very strong" and concurred with the priorities of economic development and additional research. GCC sent 30 attendees to the 1992 Earth Summit in Rio de Janeiro, where it lobbied to keep targets and timetables out of the Framework Convention on Climate Change. In December 1992, GCC's executive director wrote in a letter to The New York Times: "...there is considerable debate on whether or not man-made greenhouse gases (produced primarily by burning fossil fuels) are triggering a dangerous 'global warming' trend." In 1992 GCC distributed a half-hour video entitled The Greening of Planet Earth to hundreds of journalists, the White House, and several Middle Eastern oil-producing countries, which suggested that increasing atmospheric carbon dioxide could boost crop yields and solve world hunger. In 1993, after then-US president Bill Clinton pledged "to reducing our emissions of greenhouse gases to their 1990 levels by the year 2000," GCC's executive director said it "could jeopardize the economic health of the nation." GCC's lobbying was key to the defeat in the United States Senate of Clinton's 1993 BTU tax proposal. In 1994, after United States Secretary of Energy Hazel R.
O'Leary said the 1992 UNFCCC needed to be strengthened, and that voluntary carbon dioxide reductions might not be enough, GCC said it was "disturbed by the implication that the President's voluntary climate action plan, which is just getting under way, will be inadequate and that more stringent measures may be needed domestically." GCC did not fund original scientific research, and its climate claims relied largely on the World Climate Review and its successor, the World Climate Report, edited by Patrick Michaels and funded by the Western Fuels Association. GCC promoted the views of climate deniers such as Michaels, Fred Singer, and Richard Lindzen. In 1996, GCC published a report entitled Global warming and extreme weather: fact vs. fiction, written by Robert E. Davis. GCC members questioned the efficacy of climate change denial and shifted their message to highlighting the economic costs of proposed greenhouse gas emission regulations and the limited effectiveness of proposals exempting developing nations. In 1995, after the United Nations Climate Change conference in Berlin agreed to negotiate greenhouse gas emission limits, GCC's executive director said the agreement gave "developing countries like China, India and Mexico a free ride" and would "change the relations between sovereign countries and the United Nations. This could have very significant implications. It could be a way of capping our economy." At a Washington, D.C. press conference on the eve of the second United Nations Climate Change conference in Geneva, GCC's executive director said, "The time for decision is not yet now." At the conference in Geneva, GCC issued a statement that said it was too early to determine the causes of global warming. GCC representatives lobbied scientists at the September 1996 IPCC conference in Mexico City. After actor Leonardo DiCaprio, chairman of Earth Day 2000, interviewed Clinton for ABC News, GCC sent out an e-mail that said that DiCaprio's first car was a Jeep Grand Cherokee and that his current car was a Chevrolet Tahoe. Predicting Future Climate Change: A Primer In 1995, GCC assembled an advisory committee of scientific and technical experts to compile an internal-only, 17-page report on climate science entitled Predicting Future Climate Change: A Primer, which said: "The scientific basis for the Greenhouse Effect and the potential impact of human emissions of greenhouse gases such as CO2 on climate is well established and cannot be denied." In early 1996, GCC's operating committee asked the advisory committee to redact the sections that rebutted contrarian arguments; it then accepted the redacted report and distributed it to members. The draft document was disclosed in a 2007 lawsuit filed by the auto industry against California's efforts to regulate automotive greenhouse gas emissions. According to The New York Times, the primer demonstrated that "even as the coalition worked to sway opinion, its own scientific and technical experts were advising that the science backing the role of greenhouse gases in global warming could not be refuted." According to the Union of Concerned Scientists in 2015, the primer was "remarkable for indisputably showing that, while some fossil fuel companies' deception about climate science has continued to the present day, at least two decades ago the companies' own scientific experts were internally alerting them about the realities and implications of climate change."
IPCC Second Assessment Report GCC was an industry participant in the review process of the IPCC Second Assessment Report. In 1996, prior to the publication of the Second Assessment Report, GCC distributed a report entitled The IPCC: Institutionalized Scientific Cleansing to reporters, US Congressmen, and scientists. The coalition report said that Benjamin D. Santer, the lead author of Chapter 8 in the assessment, entitled "Detection of Climate Change and Attribution of Causes," had altered the text, after acceptance by the Working Group and without approval of the authors, to strike content characterizing the uncertainty of the science. Frederick Seitz repeated GCC's charges in a letter to The Wall Street Journal published June 12, 1996. The coalition ran newspaper advertisements that said: "unless the management of the IPCC promptly undertakes to republish the printed versions ... the IPCC's credibility will have been lost." Santer and his co-authors said the edits were integrations of comments from peer review as per agreed IPCC processes. Opposition to Kyoto Protocol GCC was the main industry group in the United States opposed to the Kyoto Protocol, which committed signatories to reduce greenhouse gas emissions. The coalition "was the leading industry group working in opposition to the Kyoto Protocol," according to Greenpeace, and led opposition to the Kyoto Protocol, according to the Los Angeles Times. Prior to 1997, GCC spent about $1 million annually lobbying against limits on CO2 emissions; before Kyoto, GCC annual revenue peaked around $1.5 million; GCC spent $13 million on advertising in opposition to the Kyoto treaty. The coalition funded the Global Climate Information Project and hired the advertising firm that produced the 1993–1994 Harry and Louise advertising campaign, which opposed Clinton's health care initiative. The advertisements said, "the UN Climate Treaty isn't Global...and it won't work" and "Americans will pay the price...50 cents more for every gallon of gasoline." GCC opposed the signing of the Kyoto Protocol by Clinton. GCC was influential in the withdrawal from the Kyoto Protocol by the administration of President George W. Bush. According to briefing notes prepared by the United States Department of State for the under-secretary of state, Bush's rejection of the Kyoto Protocol was "in part based on input from" GCC. GCC lobbying was key to the unanimous passage in the United States Senate in July 1997 of the Byrd–Hagel Resolution, which reflected the coalition's position that restrictions on greenhouse gas emissions must include developing countries. GCC's chairman told a US congressional committee that mandatory greenhouse gas emissions limits were "an unjustified rush to judgement." The coalition sent 50 delegates to the third Conference of the Parties to the United Nations Climate Change Conference in Kyoto. On December 11, 1997, the day the Kyoto delegates reached agreement on legally binding limits on greenhouse gas emissions, GCC's chairman said the agreement would be defeated by the US Senate. In 2001, GCC's executive director compared the Kyoto Protocol to the RMS Titanic. Membership decline GCC's challenge to science prompted a backlash from environmental groups. Environmentalists described GCC as a "club for polluters" and called for members to withdraw their support.
"Abandonment of the Global Climate Coalition by leading companies is partly in response to the mounting evidence that the world is indeed getting warmer," according to environmentalist Lester R. Brown. In 1998, Green Party delegates to the European Parliament introduced an unsuccessful proposal that the World Meteorological Organization name hurricanes after GCC members. Defections weakened the coalition. In 1996, British Petroleum resigned and later announced support for the Kyoto Protocol and commitment to greenhouse gas emission reductions. In 1997, Royal Dutch Shell withdrew after criticism from European environmental groups. In 1999, Ford Motor Company was the first US company to withdraw; the New York Times described the departure as "the latest sign of divisions within heavy industry over how to respond to global warming." DuPont left the coalition in 1997 and Shell USA (then known as Shell Oil Company) left in 1998. In 2000, GCC corporate members were the targets of a national student-run university divestiture campaign. Between December, 1999 and early March, 2000, Texaco, the Southern Company, General Motors and Daimler-Chrysler withdrew. Some former coalition members joined the Business Environmental Leadership Council within the Pew Center on Global Climate Change which represented diverse stakeholders, including business interests, with a commitment to peer-reviewed scientific research and accepted the need for emissions restrictions to address climate change.In 2000, GCC restructured as an association of trade associations; membership was limited to trade associations, and individual corporations were represented through their trade association. Brown called the restructuring "a thinly veiled effort to conceal the real issue – the loss of so many key corporate members." Dissolution After US President George W. Bush withdrew the US from the Kyoto process in 2001, GCC disbanded. Absent the participation of the US, the effectiveness of the Kyoto process was limited. GCC said on its website that its mission had been successfully achieved, writing "At this point, both Congress and the Administration agree that the U.S. should not accept the mandatory cuts in emissions required by the protocol." Networks of well-funded industry lobbyists and other climate change denial groups continue its work. Reception In 2015, the Union of Concerned Scientists compared GCC's role in the public policy debate on climate change to the roles in the public policy debate on tobacco safety of the Tobacco Institute, the tobacco industry's lobbyist group, and the Council for Tobacco Research, which promoted misleading science. Environmentalist Bill McKibben said that, by promoting doubt about the science, "throughout the 1990s, even as other nations took action, the fossil fuel industry's Global Climate Coalition managed to make American journalists treat the accelerating warming as a he-said-she-said story." According to the Los Angeles Times, GCC members integrated projections from climate models into their operational planning while publicly criticising the models.Former Vice President Al Gore described the oil companies' blocking campaign as "the most serious crime of the post-World War Two era". Members Membership notes References Bibliography Banerjee, Neela; Song, Lisa; Hasemyer, David (September 16, 2015). "Exxon's Own Research Confirmed Fossil Fuels' Role in Global Warming Decades Ago". InsideClimate News. Retrieved October 14, 2015. Brill, Ken (June 20, 2001). 
"Your meeting with members of the Global Climate Coalition" (PDF). United States Department of State. Retrieved February 15, 2016. Brown, Lester R. (July 25, 2000). "The Rise and Fall of the Global Climate Coalition". In Brown, Lester R.; Larsen, Janet; Fischlowitz-Roberts, Bernie (eds.). The Earth Policy Reader: Today's Decisions, Tomorrow's World. Routledge. ISBN 9781134208340. Retrieved February 6, 2016. Farley, Maggie (December 7, 1997). "Showdown at Global Warming Summit". Los Angeles Times. Retrieved February 6, 2016. Franz, Wendy E. (1998). "Science, skeptics, and non-state actors in the greenhouse" (PDF). Belfer Center for Science and International Affairs. Retrieved February 12, 2016. Hammond, Keith (December 4, 1997). "Astroturf Troopers, How the polluters' lobby uses phony front groups to attack the Kyoto treaty". Mother Jones. Retrieved February 6, 2016. Helvarg, David (December 16, 1996). "The greenhouse spin". The Nation. Vol. 263, no. 20. pp. 21–24. Retrieved February 10, 2016. Jones, Charles A.; Levy, David L. (2007). "North American business strategies towards climate change". European Management Journal. 25 (6): 428–440. doi:10.1016/j.emj.2007.07.001. Lee, Jennifer 8. (May 28, 2003). "Exxon backs groups that question global warming". The New York Times. Retrieved February 7, 2016. Levy, David (November 1997). "Not to worry, say business lobbyists". Dollars & Sense. Levy, David L. (January 2001). "Business and climate change: Privatizing environmental regulation?". Dollars & Sense. Levy, David; Rothenberg, Dandra (October 1999). "Corporate strategy and climate change: heterogeneity and change in the global automobile industry". Belfer Center for Science and International Affairs. CiteSeerX 10.1.1.25.6082. {{cite journal}}: Cite journal requires |journal= (help) Lieberman, Amy; Rust, Susanne (December 31, 2015). "Big Oil braced for global warming while it fought regulations". Los Angeles Times. Retrieved January 24, 2016. Lorenzetti, Laura (September 16, 2015). "Exxon has known about climate change since the 1970s". Fortune. Retrieved October 14, 2015. May, Bob (January 27, 2005). "Under-informed, over here". The Guardian. Retrieved February 8, 2016. McGregor, Ian (2008). Organising to Influence the Global Politics of Climate Change (PDF). Australian and New Zealand Academy of Management Conference. Retrieved February 16, 2016. Mitchell, Alison (December 13, 1997). "G.O.P. Hopes Climate Fight Echoes Health Care Outcome". The New York Times. Retrieved February 8, 2016. Mooney, Chris (May 2005). "Some Like It Hot". Mother Jones. Retrieved February 24, 2016. Mulvey, Kathy; Shulman, Seth (July 2015). "The Climate Deception Dossiers" (PDF). Union of Concerned Scientists. Retrieved February 11, 2016. Rahm, Dianne (2009). Climate Change Policy in the United States: The Science, the Politics and the Prospects for Change. McFarland & Company. ISBN 9780786458011. Revkin, Andrew C. (April 23, 2009). "Industry Ignored Its Scientists on Climate". The New York Times. Retrieved February 6, 2016. Van den Hove, Sybille; Le Menestrel, Marc; De Bettignies, Henri-Claude (2002). "The oil industry and climate change: strategies and ethical dilemmas". Climate Policy. 2 (1): 3–18. doi:10.3763/cpol.2002.0202. S2CID 219594585. Retrieved February 23, 2016. Vidal, John (June 8, 2005). "Revealed: how oil giant influenced Bush". The Guardian. Retrieved February 6, 2016. Whitman, Elizabeth (October 10, 2015). 
"Exxon Arctic Drilling Benefitting From Global Warming: Oil Company Denied Climate Change Science While Factoring It Into Arctic Operations, Report Shows". International Business Times. Retrieved October 21, 2015. External links GCC homepage - No longer active as of March 2006; internet archive version
embedded emissions
One way of attributing greenhouse gas (GHG) emissions is to measure the embedded emissions of goods that are being consumed (also referred to as "embodied emissions", "embodied carbon emissions", or "embodied carbon"). This is different from the question of the extent to which one country's emission reduction policies affect emissions in other countries (the "spillover effect" and "carbon leakage" of an emissions reduction policy). The UNFCCC measures emissions according to production, rather than consumption. Consequently, embedded emissions on imported goods are attributed to the exporting, rather than the importing, country. The question of whether to measure emissions on production instead of consumption is partly an issue of equity, i.e., who is responsible for emissions. The 37 Parties listed in Annex B to the Kyoto Protocol have agreed to legally binding emission reduction commitments. Under the UNFCCC accounting of emissions, their emission reduction commitments do not include emissions attributable to their imports. In a briefing note, Wang and Watson (2007) asked the question, "who owns China's carbon emissions?". In their study, they suggested that nearly a quarter of China's CO2 emissions might be a result of its production of goods for export, primarily to the US but also to Europe. Based on this, they suggested that international negotiations based on within-country emissions (i.e., emissions measured by production) may be "[missing] the point". Recent research confirms that, in 2004, 23% of global emissions were embedded in goods traded internationally, mostly flowing from China and other developing countries, such as Russia and South Africa, to the U.S., Europe and Japan. These states, along with the Middle East, are included in a group of ten regions that make up 71% of the total difference in regional emissions. In Western Europe the difference between imported and exported emissions is particularly pronounced, with imported emissions making up 20-50% of consumed emissions. The majority of the emissions transferred between these states is contained in the trade of machinery, electronics, chemicals, rubber and plastics. Research by the Carbon Trust in 2011 revealed that approximately 25% of all CO2 emissions from human activities 'flow' (i.e. are imported or exported) from one country to another. The flow of carbon was found to be roughly 50% emissions associated with trade in commodities such as steel, cement, and chemicals, and 50% in semi-finished/finished products such as motor vehicles, clothing or industrial machinery and equipment. Embodied carbon in construction The embodied carbon of buildings is estimated to account for 11% of global carbon emissions and 75% of a building's emissions over its entire lifecycle. The World Green Building Council has set a target for all new buildings to have at least 40% less embodied carbon. A life-cycle assessment for embodied carbon calculates the carbon used throughout each stage of a building's life: construction, use and maintenance, and demolition or disassembly. Re-use is a key consideration when addressing embodied carbon in construction. The architect Carl Elefante is known for coining the phrase, "The greenest building is the building that is already built."
The reason that existing buildings are usually more sustainable than new buildings is that the quantity of carbon emissions which occurs during construction of a new building is large in comparison to the annual operating emissions of the building, especially as operations become more energy efficient and energy supplies transition to renewable generation.Beyond re-use, there are two principal areas of focus in the reduction of embodied carbon in construction. The first is to reduce the quantity of construction material ('construction mass') while the second is the substitution of lower carbon alternative materials. Typically—where reduction of embodied carbon is a goal—both of these are addressed. Often, the most significant scope for reduction of construction mass is found in structural design, where measures such as reduced beam or slab span (and an associated increase in column density) can yield large carbon savings.To assist material substitution (with low carbon alternatives), manufacturers of materials such as steel re-bar, glulam, and precast concrete typically provide Environmental Product Declarations (EPD) which certify the carbon impact as well as general environmental impacts of their products. Minimizing the use of carbon-intensive materials may mean selecting lower carbon versions of glass and steel products, and products manufactured using low-emissions energy sources. Embodied carbon may be reduced in concrete construction through the use of Portland cement alternatives such as Ground granulated blast-furnace slag, recycled aggregates and industry by-products. Carbon-neutral, carbon positive, and carbon-storing materials include bio-based materials such as timber, bamboo, hemp fibre and hempcrete, wool, dense-pack cellulose insulation, and cork.A 2021 study focused on "carbon-intensive hotspot materials (e.g., concrete foundations and slab floors, insulated roof and wall panels, and structural framing) in light industrial buildings" estimated that a "sizable reduction (~60%) in embodied carbon is possible in two to three years by bringing readily-available low-carbon materials into wider use". Embodied carbon policy and legislation A variety of policies, regulations, and standards exist worldwide with respect to embodied carbon, according to the American Institute of Architects.Eight states introduced procurement policies related to embodied carbon in 2021: Washington, Oregon, California, Colorado, Minnesota, Connecticut, New York, and New Jersey.In Colorado, HB21-1303: Global Warming Potential for Public Project Materials (better known as "Buy Clean Colorado") was signed into law July 6, 2021. The law uses environmental product declarations (EPDs) to help drive the use of low-embodied-carbon materials."In Europe, embodied carbon emissions have been limited in the Netherlands since 2018, and this is scheduled to happen in Denmark, Sweden, France and Finland between 2023 and 2027.""On May 10, 2023, Toronto [began to] to require lower-carbon construction materials, limiting embodied carbon from new [city-owned] municipal building construction. New City-owned buildings must now limit upfront embodied emission intensity — emissions associated with manufacturing, transporting, and constructing major structural and envelope systems — to below 350 kg CO2e/m2." 
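A per-square-metre limit of this kind can be checked with a simple bill-of-materials calculation. The sketch below is illustrative only: the material quantities and cradle-to-gate emission factors are hypothetical placeholders rather than values from any actual Environmental Product Declaration, and the 350 kg CO2e/m2 threshold is simply the Toronto figure quoted above.

```python
# Minimal sketch: upfront embodied carbon of a building from a bill of materials.
# Quantities and emission factors are hypothetical placeholders; real projects
# would take factors from product-specific EPDs.

MATERIALS = {
    # material: (quantity in kg, cradle-to-gate factor in kg CO2e per kg)
    "ready-mix concrete": (500_000, 0.13),
    "steel rebar":        (25_000,  1.99),
    "glulam timber":      (15_000,  0.51),
    "glazing":            (8_000,   1.44),
}

FLOOR_AREA_M2 = 1_200      # gross floor area of the hypothetical building
UPFRONT_LIMIT = 350.0      # kg CO2e/m2, the Toronto limit cited in the text

def embodied_carbon_kg(materials: dict) -> float:
    """Sum quantity x emission factor over all materials (manufacturing stages only)."""
    return sum(qty * factor for qty, factor in materials.values())

total = embodied_carbon_kg(MATERIALS)
intensity = total / FLOOR_AREA_M2

print(f"Total upfront embodied carbon: {total / 1000:.1f} t CO2e")
print(f"Intensity: {intensity:.0f} kg CO2e/m2 "
      f"({'within' if intensity <= UPFRONT_LIMIT else 'exceeds'} the 350 kg CO2e/m2 limit)")
```

In practice the same bill-of-materials approach would be extended across the life-cycle stages listed above (construction, use and maintenance, and demolition or disassembly), with low-carbon material substitutions reflected in the emission factors.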
See also Carbon footprint Emissions embodied in international trade Embodied energy References External links Resources: Embodied Carbon in Buildings, Boston Society for Architecture Ten Steps Towards Reducing Embodied Carbon, The American Institute of Architects Carbon Positive Reset! 1.5 °C 2020 Teach-In Carbon Smart Materials Palette
iso 14064
The ISO 14064 standard (initially published in 2006 and updated in 2018) is the core part of the ISO 14060 family of standards, which belongs to the ISO 14000 series of international standards for environmental management by the International Organization for Standardization (ISO). The ISO 14064 standards provide governments, businesses, regions and other organisations with a complementary set of tools for programs to quantify, monitor, report and verify greenhouse gas emissions. The ISO 14064 standards support organisations in participating in both regulated and voluntary programs, such as emissions trading schemes and public reporting, using a globally recognised standard. Structure of Standard The Standard has three parts: ISO 14064-1:2018 specifies principles and requirements at the organization level for quantification and reporting of greenhouse gas (GHG) emissions and removals. It includes requirements for the design, development, management, reporting and verification of an organization's GHG inventory. ISO 14064-2:2019 specifies principles and requirements and provides guidance at the project level for quantification, monitoring and reporting of GHG emission reductions or removal enhancements. ISO 14064-3:2019 specifies principles and requirements and provides guidance for those conducting or managing the validation and/or verification of greenhouse gas (GHG) assertions. It can be applied to organizational or GHG project quantification, including GHG quantification, monitoring and reporting carried out in accordance with ISO 14064-1 or ISO 14064-2. Uses ISO 14064-3 specifies requirements for selecting GHG validators/verifiers, establishing the level of assurance, objectives, criteria and scope, determining the validation/verification approach, assessing GHG data, information, information systems and controls, evaluating GHG assertions and preparing validation/verification statements. The ISO 14064-3 verification standard is one of the standards accepted by the Carbon Disclosure Project, the widely used climate impact disclosure system, as a valid framework for measuring and reporting GHG emissions. The principles behind ISO 14064 have been used in national calculation methodologies such as the UK's Carbon Trust Standard. History The standards were updated in 2018 and 2019 from their initial versions published in 2006. They incorporate work that was carried out by BSI through its Publicly Available Specifications BSI PAS 2060 Carbon Neutrality and BSI PAS 2050 Product Carbon Footprints. References External links ISO website detailing standard
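As a rough illustration of the kind of organisation-level quantification that ISO 14064-1 covers, the sketch below multiplies activity data by emission factors and groups the results into direct emissions and indirect emissions from purchased energy. The activity data and emission factors are hypothetical placeholders; the standard itself specifies requirements for designing and reporting the inventory rather than prescribing particular factors.

```python
# Minimal sketch of an organisation-level GHG inventory in the spirit of
# ISO 14064-1: activity data x emission factor, grouped into direct emissions
# and indirect emissions from purchased energy. All numbers are hypothetical.

ACTIVITIES = [
    # (description, category, activity amount, unit, emission factor kg CO2e per unit)
    ("natural gas boilers",    "direct",   120_000, "kWh", 0.18),
    ("company vehicle diesel", "direct",    15_000, "L",   2.68),
    ("purchased electricity",  "indirect", 400_000, "kWh", 0.23),
]

def inventory_by_category(activities):
    """Return total kg CO2e per emission category."""
    totals = {}
    for _, category, amount, _, factor in activities:
        totals[category] = totals.get(category, 0.0) + amount * factor
    return totals

totals = inventory_by_category(ACTIVITIES)
for category, kg in totals.items():
    print(f"{category:8s}: {kg / 1000:.1f} t CO2e")
print(f"total   : {sum(totals.values()) / 1000:.1f} t CO2e")
```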
second-generation biofuels
Second-generation biofuels, also known as advanced biofuels, are fuels that can be manufactured from various types of non-food biomass. Biomass in this context means plant materials and animal waste used especially as a source of fuel. First-generation biofuels are made from sugar-starch feedstocks (e.g., sugarcane and corn) and edible oil feedstocks (e.g., rapeseed and soybean oil), which are generally converted into bioethanol and biodiesel, respectively. Second-generation biofuels are made from different feedstocks and therefore may require different technology to extract useful energy from them. Second-generation feedstocks include lignocellulosic biomass or woody crops, agricultural residues or waste, as well as dedicated non-food energy crops grown on marginal land unsuitable for food production. The term second-generation biofuels is used loosely to describe both the 'advanced' technology used to process feedstocks into biofuel and the use of non-food crops, biomass and wastes as feedstocks in 'standard' biofuels processing technologies where suitable. This causes considerable confusion. Therefore it is important to distinguish between second-generation feedstocks and second-generation biofuel processing technologies. The development of second-generation biofuels has been stimulated by the food vs. fuel dilemma, i.e. the risk that diverting farmland or crops to biofuel production will be to the detriment of the food supply. The biofuel and food price debate involves wide-ranging views, and is a long-standing, controversial one in the literature. Introduction Second-generation biofuel technologies have been developed to enable the use of non-food biofuel feedstocks because of concerns about food security raised by the use of food crops for the production of first-generation biofuels. The diversion of edible biomass to the production of biofuels could theoretically result in competition with food production and with land use for food crops. First-generation bioethanol is produced by fermenting plant-derived sugars to ethanol, using a similar process to that used in beer and wine-making (see Ethanol fermentation). This requires the use of food and fodder crops, such as sugar cane, corn, wheat, and sugar beet. The concern is that if these food crops are used for biofuel production, food prices could rise and shortages might be experienced in some countries. Corn, wheat, and sugar beet can also require high agricultural inputs in the form of fertilizers, which limit the greenhouse gas reductions that can be achieved. Biodiesel produced by transesterification from rapeseed oil, palm oil, or other plant oils is also considered a first-generation biofuel. The goal of second-generation biofuel processes is to extend the amount of biofuel that can be produced sustainably by using biomass consisting of the residual non-food parts of current crops, such as stems, leaves and husks that are left behind once the food crop has been extracted, as well as other crops that are not used for food purposes (non-food crops), such as switchgrass, grass, jatropha, whole crop maize, miscanthus and cereals that bear little grain, and also industry waste such as woodchips, skins and pulp from fruit pressing, etc. The problem that second-generation biofuel processes address is how to extract useful feedstocks from this woody or fibrous biomass, which is predominantly composed of plant cell walls.
In all vascular plants the useful sugars of the cell wall are bound within the complex carbohydrates (polymers of sugar molecules) hemicellulose and cellulose, but made inaccessible for direct use by the phenolic polymer lignin. Lignocellulosic ethanol is made by extracting sugar molecules from the carbohydrates using enzymes, steam heating, or other pre-treatments. These sugars can then be fermented to produce ethanol in the same way as first-generation bioethanol production. The by-product of this process is lignin. Lignin can be burned as a carbon neutral fuel to produce heat and power for the processing plant and possibly for surrounding homes and businesses. Thermochemical processes (liquefaction) in hydrothermal media can produce liquid oily products from a wide range of feedstock that has a potential to replace or augment fuels. However, these liquid products fall short of diesel or biodiesel standards. Upgrading liquefaction products through one or many physical or chemical processes may improve properties for use as fuel. Second-generation technology The following subsections describe the main second-generation routes currently under development. Thermochemical routes Carbon-based materials can be heated at high temperatures in the absence (pyrolysis) or presence of oxygen, air and/or steam (gasification). These thermochemical processes yield a mixture of gases including hydrogen, carbon monoxide, carbon dioxide, methane and other hydrocarbons, and water. Pyrolysis also produces a solid char. The gas can be fermented or chemically synthesised into a range of fuels, including ethanol, synthetic diesel, synthetic gasoline or jet fuel.There are also lower temperature processes in the region of 150–374 °C, that produce sugars by decomposing the biomass in water with or without additives. Gasification Gasification technologies are well established for conventional feedstocks such as coal and crude oil. Second-generation gasification technologies include gasification of forest and agricultural residues, waste wood, energy crops and black liquor. Output is normally syngas for further synthesis to e.g. Fischer–Tropsch products including diesel fuel, biomethanol, BioDME (dimethyl ether), gasoline via catalytic conversion of dimethyl ether, or biomethane (synthetic natural gas). Syngas can also be used in heat production and for generation of mechanical and electrical power via gas motors or gas turbines. Pyrolysis Pyrolysis is a well established technique for decomposition of organic material at elevated temperatures in the absence of oxygen. In second-generation biofuels applications forest and agricultural residues, wood waste and energy crops can be used as feedstock to produce e.g. bio-oil for fuel oil applications. Bio-oil typically requires significant additional treatment to render it suitable as a refinery feedstock to replace crude oil. Torrefaction Torrefaction is a form of pyrolysis at temperatures typically ranging between 200–320 °C. Feedstocks and output are the same as for pyrolysis. Hydrothermal liquefaction Hydrothermal liquefaction is a process similar to pyrolysis that can process wet materials. The process is typically at moderate temperatures up to 400 °C and higher than atmospheric pressures. The capability to handle a wide range of materials make hydrothermal liquefaction viable for producing fuel and chemical production feedstock. 
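The fermentation step described above converts six-carbon sugars released from the cell-wall carbohydrates into ethanol. A minimal sketch of the stoichiometric limit of that step (glucose → 2 ethanol + 2 CO2) follows; the sugar recovery fraction and fermentation efficiency used are hypothetical placeholders, since real yields depend on the feedstock, pretreatment and organism used.

```python
# Minimal sketch: theoretical ethanol yield from fermentable glucose,
# based on the stoichiometry C6H12O6 -> 2 C2H5OH + 2 CO2.
# Sugar recovery and fermentation efficiency below are hypothetical.

M_GLUCOSE = 180.16   # g/mol
M_ETHANOL = 46.07    # g/mol
STOICH_YIELD = 2 * M_ETHANOL / M_GLUCOSE   # ~0.511 g ethanol per g glucose

biomass_kg = 1000.0             # dry lignocellulosic feedstock
sugar_fraction = 0.40           # hypothetical fraction recovered as fermentable glucose
fermentation_efficiency = 0.90  # hypothetical fraction of the stoichiometric limit

glucose_kg = biomass_kg * sugar_fraction
ethanol_kg = glucose_kg * STOICH_YIELD * fermentation_efficiency
ethanol_litres = ethanol_kg / 0.789        # density of ethanol, kg per litre

print(f"Stoichiometric limit: {STOICH_YIELD:.3f} kg ethanol per kg glucose")
print(f"Estimated output: {ethanol_kg:.0f} kg (~{ethanol_litres:.0f} L) ethanol "
      f"per tonne of dry feedstock")
```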
Biochemical routes Chemical and biological processes that are currently used in other applications are being adapted for second-generation biofuels. Biochemical processes typically employ pre-treatment to accelerate the hydrolysis process, which separates out the lignin, hemicellulose and cellulose. Once these ingredients are separated, the cellulose fractions can be fermented into alcohols.Feedstocks are energy crops, agricultural and forest residues, food industry and municipal biowaste and other biomass containing sugars. Products include alcohols (such as ethanol and butanol) and other hydrocarbons for transportation use. Types of biofuel The following second-generation biofuels are under development, although most or all of these biofuels are synthesized from intermediary products such as syngas using methods that are identical in processes involving conventional feedstocks, first-generation and second-generation biofuels. The distinguishing feature is the technology involved in producing the intermediary product, rather than the ultimate off-take. A process producing liquid fuels from gas (normally syngas) is called a gas-to-liquid (GtL) process. When biomass is the source of the gas production the process is also referred to as biomass-to-liquids (BTL). From syngas using catalysis Biomethanol can be used in methanol motors or blended with petrol up to 10–20% without any infrastructure changes. BioDME can be produced from Biomethanol using catalytic dehydration or it can be produced directly from syngas using direct DME synthesis. DME can be used in the compression ignition engine. Bio-derived gasoline can be produced from DME via high-pressure catalytic condensation reaction. Bio-derived gasoline is chemically indistinguishable from petroleum-derived gasoline and thus can be blended into the gasoline pool. Biohydrogen can be used in fuel cells to produce electricity. Mixed Alcohols (i.e., mixture of mostly ethanol, propanol, and butanol, with some pentanol, hexanol, heptanol, and octanol). Mixed alcohols are produced from syngas with several classes of catalysts. Some have employed catalysts similar to those used for methanol. Molybdenum sulfide catalysts were discovered at Dow Chemical and have received considerable attention. Addition of cobalt sulfide to the catalyst formulation was shown to enhance performance. Molybdenum sulfide catalysts have been well studied but have yet to find widespread use. These catalysts have been a focus of efforts at the U.S. Department of Energy's Biomass Program in the Thermochemical Platform. Noble metal catalysts have also been shown to produce mixed alcohols. Most R&D in this area is concentrated in producing mostly ethanol. However, some fuels are marketed as mixed alcohols (see Ecalene and E4 Envirolene) Mixed alcohols are superior to pure methanol or ethanol, in that the higher alcohols have higher energy content. Also, when blending, the higher alcohols increase compatibility of gasoline and ethanol, which increases water tolerance and decreases evaporative emissions. In addition, higher alcohols have also lower heat of vaporization than ethanol, which is important for cold starts. (For another method for producing mixed alcohols from biomass see bioconversion of biomass to mixed alcohol fuels) Biomethane (or Bio-SNG) via the Sabatier reaction From syngas using Fischer–Tropsch The Fischer–Tropsch (FT) process is a gas-to-liquid (GtL) process. 
When biomass is the source of the gas production the process is also referred to as biomass-to-liquids (BTL). A disadvantage of this process is the high energy investment for the FT synthesis and consequently, the process is not yet economic. FT diesel can be mixed with fossil diesel at any percentage without need for infrastructure change and moreover, synthetic kerosene can be produced Biocatalysis Biohydrogen might be accomplished with some organisms that produce hydrogen directly under certain conditions. Biohydrogen can be used in fuel cells to produce electricity. Butanol and Isobutanol via recombinant pathways expressed in hosts such as E. coli and yeast, butanol and isobutanol may be significant products of fermentation using glucose as a carbon and energy source. DMF (2,5-Dimethylfuran). Recent advances in producing DMF from fructose and glucose using catalytic biomass-to-liquid process have increased its attractiveness. Other processes HTU (Hydro Thermal Upgrading) diesel is produced from wet biomass. It can be mixed with fossil diesel in any percentage without need for infrastructure. Wood diesel. A new biofuel was developed by the University of Georgia from woodchips. The oil is extracted and then added to unmodified diesel engines. Either new plants are used or planted to replace the old plants. The charcoal byproduct is put back into the soil as a fertilizer. According to the director Tom Adams since carbon is put back into the soil, this biofuel can actually be carbon negative not just carbon neutral. Carbon negative decreases carbon dioxide in the air reversing the greenhouse effect not just reducing it. Second Generation Feedstocks To qualify as a second generation feedstock, a source must not be suitable for human consumption. Second-generation biofuel feedstocks include specifically grown inedible energy crops, cultivated inedible oils, agricultural and municipal wastes, waste oils, and algae. Nevertheless, cereal and sugar crops are also used as feedstocks to second-generation processing technologies. Land use, existing biomass industries and relevant conversion technologies must be considered when evaluating suitability of developing biomass as feedstock for energy. Energy crops Plants are made from lignin, hemicellulose and cellulose; second-generation technology uses one, two or all of these components. Common lignocellulosic energy crops include wheat straw, Arundo donax, Miscanthus spp., short rotation coppice poplar and willow. However, each offers different opportunities and no one crop can be considered 'best' or 'worst'. Municipal solid waste Municipal Solid Waste comprises a very large range of materials, and total waste arisings are increasing. In the UK, recycling initiatives decrease the proportion of waste going straight for disposal, and the level of recycling is increasing each year. However, there remains significant opportunities to convert this waste to fuel via gasification or pyrolysis. Green waste Green waste such as forest residues or garden or park waste may be used to produce biofuel via different routes. Examples include Biogas captured from biodegradable green waste, and gasification or hydrolysis to syngas for further processing to biofuels via catalytic processes. Black liquor Black liquor, the spent cooking liquor from the kraft process that contains concentrated lignin and hemicellulose, may be gasified with very high conversion efficiency and greenhouse gas reduction potential to produce syngas for further synthesis to e.g. 
biomethanol or BioDME. The yield of crude tall oil from process is in the range of 30 – 50 kg / ton pulp. Greenhouse gas emissions Lignocellulosic biofuels reduces greenhouse gas emissions by 60–90% when compared with fossil petroleum (Börjesson.P. et al. 2013. Dagens och framtidens hållbara biodrivmedel), which is on par with the better of current biofuels of the first-generation, where typical best values currently is 60–80%. In 2010, average savings of biofuels used within EU was 60% (Hamelinck.C. et al. 2013 Renewable energy progress and biofuels sustainability, Report for the European Commission). In 2013, 70% of the biofuels used in Sweden reduced emissions with 66% or higher. (Energimyndigheten 2014. Hållbara biodrivmedel och flytande biobränslen 2013). Commercial development An operating lignocellulosic ethanol production plant is located in Canada, run by Iogen Corporation. The demonstration-scale plant produces around 700,000 litres of bioethanol each year. A commercial plant is under construction. Many further lignocellulosic ethanol plants have been proposed in North America and around the world. The Swedish specialty cellulose mill Domsjö Fabriker in Örnsköldsvik, Sweden develops a biorefinery using Chemrec's black liquor gasification technology. When commissioned in 2015 the biorefinery will produce 140,000 tons of biomethanol or 100,000 tons of BioDME per year, replacing 2% of Sweden's imports of diesel fuel for transportation purposes. In May 2012 it was revealed that Domsjö pulled out of the project, effectively killing the effort. In the UK, companies like INEOS Bio and British Airways are developing advanced biofuel refineries, which are due to be built by 2013 and 2014 respectively. Under favourable economic conditions and strong improvements in policy support, NNFCC projections suggest advanced biofuels could meet up to 4.3 per cent of the UK's transport fuel by 2020 and save 3.2 million tonnes of CO2 each year, equivalent to taking nearly a million cars off the road.Helsinki, Finland, 1 February 2012 – UPM is to invest in a biorefinery producing biofuels from crude tall oil in Lappeenranta, Finland. The industrial scale investment is the first of its kind globally. The biorefinery will produce annually approximately 100,000 tonnes of advanced second-generation biodiesel for transport. Construction of the biorefinery will begin in the summer of 2012 at UPM’s Kaukas mill site and be completed in 2014. UPM's total investment will amount to approximately EUR 150 million.Calgary, Alberta, 30 April 2012 – Iogen Energy Corporation has agreed to a new plan with its joint owners Royal Dutch Shell and Iogen Corporation to refocus its strategy and activities. Shell continues to explore multiple pathways to find a commercial solution for the production of advanced biofuels on an industrial scale, but the company will NOT pursue the project it has had under development to build a larger scale cellulosic ethanol facility in southern Manitoba.In India, Indian Oil Companies have agreed to build seven second generation refineries across the country. The companies who will be participating in building of 2G biofuel plants are Indian Oil Corporation (IOCL), HPCL and BPCL. In May 2018, the Government of India unveiled a biofuel policy wherein a sum of INR 5,000 crores was allocated to set-up 2G biorefineries. Indian oil marketing companies were in a process of constructing 12 refineries with a capex of INR 10,000 crores. 
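The percentage savings quoted in the Greenhouse gas emissions section above are expressed relative to a fossil fuel comparator, i.e. saving = (fossil intensity − biofuel intensity) / fossil intensity. A minimal sketch of that comparison follows; the lifecycle intensities used (in g CO2e per MJ of fuel) are hypothetical placeholders, not measured values for any particular pathway.

```python
# Minimal sketch: greenhouse gas saving of a biofuel relative to a fossil
# comparator, computed as (E_fossil - E_biofuel) / E_fossil.
# The lifecycle intensities below are hypothetical placeholders.

FOSSIL_COMPARATOR = 94.0   # g CO2e/MJ, placeholder value for fossil petroleum fuel

PATHWAYS = {
    "first-generation ethanol (example)":        45.0,
    "lignocellulosic ethanol (example)":         20.0,
    "FT diesel from forest residues (example)":   8.0,
}

def ghg_saving(biofuel_intensity: float, fossil_intensity: float) -> float:
    """Fractional GHG saving relative to the fossil comparator."""
    return (fossil_intensity - biofuel_intensity) / fossil_intensity

for name, intensity in PATHWAYS.items():
    print(f"{name}: {ghg_saving(intensity, FOSSIL_COMPARATOR):.0%} saving")
```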
See also Algae fuel Cellulosic ethanol commercialization Food vs fuel IEA Bioenergy Jatropha Renewable Fuel Standard References External links The National Non-Food Crops Centre
hybrid electric bus
A hybrid electric bus is a bus that combines a conventional internal combustion engine propulsion system with an electric propulsion system. These type of buses normally use a Diesel-electric powertrain and are also known as hybrid Diesel-electric buses. The introduction of hybrid electric vehicles and other green vehicles for purposes of public transport forms a part of sustainable transport schemes. Powertrain Types of hybrid vehicle drivetrain A hybrid electric bus may have either a parallel powertrain (e.g., Volvo B5LH) or a series powertrain (e.g., some versions of the Alexander Dennis Enviro400 MMC). Plug-in hybrid A plug-in hybrid school bus effort began in 2003 in Raleigh, NC, when Advanced Energy began working between districts across the country and manufacturers to understand the needs of both. The effort demonstrated both a technical and business feasibility and as a result was able to secure funding in 2005 from NASEO to purchase up to 20 buses. The resulting RFP from Advanced Energy was won by IC Bus using a product jointly produced with Enova for a 22-mile plug-in hybrid product with a $140k premium over existing buses. The buses performed well in testing with 70% reductions in fuel usage although only in specific conditions. The United States Department of Energy (USDOE) announced the selection of Navistar Corporation for a cost-shared award of up to $10 million to develop, test, and deploy plug-in hybrid electric (PHEV) school buses. The project aims to deploy 60 vehicles for a three-year period in school bus fleets across the nation. The vehicles will be capable of running in either electric-only or hybrid modes and will be recharged from a standard electrical outlet. Because electricity will be their primary fuel, they will consume less petroleum than standard vehicles. To develop the PHEV school bus, Navistar will examine a range of hybrid architectures and evaluate advanced energy storage devices, with the goal of developing a vehicle with a 40-mile (64 km) electric range. Travel beyond the 40-mile (64 km) range will be facilitated by a clean Diesel engine capable of running on renewable fuels. The DOE funding will cover up to half of the project's cost and will be provided over three years, subject to annual appropriations. Tribrid Bus Tribrid buses have been developed by the University of Glamorgan in Wales. They are powered by hydrogen fuel or solar cells, batteries and ultracapacitors. Air pollution and greenhouse gas emissions A report prepared by Purdue University suggests introducing more hybrid Diesel-electric buses and a fuel containing 20% biodiesel (BD20) would further reduce greenhouse emissions and petroleum consumption. Manufacturers Current manufacturers of Diesel-electric hybrid buses include Alexander Dennis, Azure Dynamics Corporation, Ebus, Eletra (Brazil), New Flyer Industries, Tata (India), Gillig, Motor Coach Industries, Orion Bus Industries, North American Bus Industries, Daimler AG's Mitsubishi Fuso, MAN, Designline, BAE Systems, Volvo Buses, VDL Bus & Coach, Wrightbus, Castrosua, Tata Hispano and many more. Toyota claims to have started with the Coaster Hybrid Bus in 1997 on the Japanese market. Since 1999, Hybrid electric buses with gas turbine generators have been developed by several manufacturers in the US and New Zealand, with the most successful design being the buses made by Designline of New Zealand. 
The first model went into commercial service in Christchurch since 1999, and later models were sold for daily service in Auckland, Hong Kong, Newcastle upon Tyne, and Tokyo. The Whispering Wheel bus is another HEV using in-wheel motors. It was tested in winter 2003–04 in Apeldoorn in the Netherlands.In Japan, Mitsubishi Fuso have developed a diesel engine hybrid bus using lithium batteries in 2002, and this model has since seen limited service in several Japanese cities. The Blue Ribbon City Hybrid bus was presented by Hino, a Toyota affiliate, in January 2005. For the North American transit bus market, New Flyer Industries, Gillig, North American Bus Industries, and Nova Bus produce hybrid electric buses using components from either BAE Systems (series hybrid, initially branded HybriDrive and now branded Series-E), or Allison Transmission (parallel/series hybrid, branded Hybrid EP or H 40/50 EP). In May 2003 General Motors started to tour with hybrid electric buses developed together with Allison. General Electric introduced its hybrid electric gear shifters on the market in 2005. Several hundreds of those buses have entered into daily operation in the U.S. In 2006, Nova Bus, which had previously marketed the RTS hybrid before that model was discontinued, added a Diesel-electric hybrid option for its LFS series. In the United Kingdom, Wrightbus has introduced a development of the London "Double-Decker", a new interpretation of the traditional red buses that are a feature of the extreme traffic density in London. The Wright Pulsar Gemini HEV bus uses a small Diesel engine with electric storage through a lithium ion battery pack. The use of a 1.9-litre Diesel instead of the typical 7.0-litre engine in a traditional bus demonstrates the possible advantages of serial hybrids in extremely traffic-dense environments. Based on a London test cycle, a reduction in CO2 emissions of 31% and fuel savings in the range of 40% have been demonstrated, compared with a "Euro-4" compliant bus. Former hybrid bus manufacturers ISE Corporation ThunderVolt (filed for bankruptcy in 2010) Azure Dynamics (filed for bankruptcy in 2012) Conversions Hybrid Electric Vehicle Technologies (HEVT) makes conversions of new and used vehicles (aftermarket and retrofit conversions), from combustion buses and conventional hybrid electric buses into plug-in buses. List of transit authorities using hybrid electric buses Transit authorities that use hybrid electric buses: North America United States Federal funding generally comes from the federal Diesel Emissions Reduction Act. 
ABQ RIDE (Albuquerque, New Mexico) Ann Arbor Area Transportation Authority (AAATA) (Ann Arbor, Michigan) Autoridad Metropolitana de Autobuses (San Juan, Puerto Rico) Baltimore, Maryland Bee-Line Bus System (Westchester County, New York) Berks Area Reading Transportation Authority (Berks County, Pennsylvania) Bloomington Transit (Bloomington, Indiana) Broome County Transit (Broome County, New York) Broward County Transit (Broward County, Florida) Capital Area Transportation Authority (Lansing, Michigan) Capital District Transportation Authority (Albany, New York) Central New York Regional Transportation Authority (Syracuse, New York) Charlotte Area Transit System (Charlotte, North Carolina) Chatham Area Transit (Savannah, Georgia) Chicago Transit Authority Citibus (Lubbock, Texas) Central Ohio Transit Authority (Columbus, Ohio) Clarksville Transit System (CTS) (Clarksville, Tennessee) Community Transit (Snohomish County, Washington) C-Tran (Vancouver, Washington) Citilink (Fort Wayne, Indiana) Cache Valley Transit District (Logan, Utah) DART First State (Delaware) Durham Area Transit Authority (Durham, North Carolina) Eureka Transit Service (Eureka, California) GoRaleigh (formerly Capital Area Transit) (Raleigh, North Carolina) Greater Lafayette Public Transportation Corporation (Lafayette, IN and West Lafayette, IN) Greater Lynchburg Transit Company (Lynchburg, VA) Greenville Area Transit (Greenville, North Carolina) Hillsborough Area Regional Transit (Hillsborough County, Florida) Howard Transit, (Howard County, Maryland) IndyGo (Indianapolis, Indiana) Jacksonville Transportation Authority Kanawha Valley Regional Transportation Authority Kansas City Area Transportation Authority King County Metro Transit Authority (Seattle, Washington) Lane Transit District (Lane County, Oregon) Long Beach Transit (Long Beach, California) LACMTA (Los Angeles, California) LANta (Lehigh Valley, Pennsylvania) Madison Metro Transit (Wisconsin) Manatee County Area Transit (Manatee County, Florida) Massachusetts Bay Transportation Authority (Boston, MA) MATA (Memphis, Tennessee) MATBUS – Metro Area Transit (Fargo, ND – Moorhead, MN) Metropolitan Transit Authority of Harris County, Texas (Houston, Texas) Minneapolis-Saint Paul Metro Transit MTA Maryland (Baltimore, Maryland) Nashville Metropolitan Transit Authority New York City Transit Authority Niagara Frontier Transportation Authority (Buffalo, New York) North County Transit District (North San Diego County, California) Orange County Transportation Authority (Orange County, California) Pioneer Valley Transit Authority (Springfield, Massachusetts) Port Authority of Allegheny County (Pittsburgh, Pennsylvania) Regional Transportation Commission of Southern Nevada/Citizens Area Transit (Las Vegas, Nevada) Rhode Island Public Transit Authority (Providence, Rhode Island). 1 gas and 1 diesel for testing use only; diesel was converted gas was hybrid from factory. 
Roaring Fork Transportation Authority (Aspen, Colorado) San Diego Metropolitan Transit System/San Diego Transit (San Diego, California) San Francisco MUNI (San Francisco, California) San Joaquin Regional Transit District (Stockton, California) Santa Clara Valley Transportation Authority - VTA (Santa Clara County, California) Santa Rosa CityBus (Santa Rosa, California) Sarasota County Area Transit (Sarasota County, Florida) Sound Transit (Puget Sound region, Washington) Southeastern Pennsylvania Transportation Authority (Philadelphia, Pennsylvania) Southwest Ohio Regional Transit Authority (Cincinnati, Ohio) Spokane Transit Authority (Spokane, Washington) TCAT (Ithaca, NY) TheBus (Honolulu, Hawaii) The Rapid The Interurban Transit Partnership Grand Rapids, Michigan *Has 5 vehicles used in fixed route service. TriMet (Portland, Oregon): two vehicles University of Michigan parking and transportation services (Ann Arbor, Michigan) Utah Transit Authority (Salt Lake City, Utah) Washington Metropolitan Area Transit Authority Canada Transit Windsor (Windsor, Ontario) Edmonton Transit System (Edmonton, Alberta) Hamilton Street Railway (Hamilton, Ontario) OC Transpo (Ottawa, Ontario) RTC (Quebec City, Quebec) RTL (Longueuil, Quebec) Saskatoon Transit, Saskatchewan STL (Laval, Quebec) STL (Lévis, Quebec) STM (Montreal, Quebec) STO (Gatineau, Quebec) STS (Sherbrooke, Quebec) St. Catharines Transit Commission (St. Catharines, Ontario) Toronto Transit Commission buses [673 out of 2137 regular buses are hybrid as of 2020] Coast Mountain Bus Company (Vancouver, British Columbia) BC Transit (Kelowna and Victoria). GRT (Waterloo, Ontario) [currently 6 out of 218 buses in service are hybrid] London Transit Commission (London, Ontario) Strathcona County Transit (Strathcona County, Alberta) [as of 2014, 10 Nova Bus LFS HEV diesel-electric hybrid buses remain in service] Halifax Transit, Halifax, Nova Scotia. Currently owns two hybrid-electric buses. Lethbridge Transit, Lethbridge, Alberta. 11 out of 42 buses are hybrid. Asia China Beijing Public Transport Kunming Bus Shenzhen Bus Group Shenzhen Eastern Bus Shenzhen Western Bus Jinan Bus Zhengzhou Bus Group Hong Kong Citybus New World First Bus Kowloon Motor Bus India Delhi Multi-Module Transit Mumbai BEST CNG-Hybrid Iran Vehicle, Fuel and Environment Research Institute (VFERI) Pakistan TransPeshawar Greenline, Karachi Japan Marunouchi Shuttle etc. Philippines Green Frog Hybrid Buses Singapore SBS Transit SMRT Buses Tower Transit Singapore Thailand BMTA Europe Belarus Minsk Slutsk Germany Dresden Hagen Lübeck Munich Nuremberg Hungary Budapest – The fleet consists of 28 Volvo 7900A Hybrid (articulated). Kecskemét – The fleet consists of 20 Mercedes-Benz Citaro G BlueTec®-Hybrid (articulated). Norway Nettbuss, Hamar Ruter, Oslo Nettbuss, Trondheim Nettbuss, Arendal Nobina, Tromsø Vestviken Kollektivtrafikk, Vestfold. Scania Citywide. Romania STB, Bucharest – The fleet consists of 130 Mercedes-Benz Citaro Hybrid. UK The Green Bus Fund is a fund which is supporting bus companies and local authorities in the UK to help them buy new electric buses. London Buses, London. This is the largest fleet in the UK, with around 2,300 vehicles in use.
National Express West Midlands, Birmingham – 18 currently, 21 more planned Stagecoach, Manchester, Oxford, Sheffield, Newcastle Oxford Bus Company, Oxfordshire – 52 currently FirstGroup, Bath, Somerset, Bristol, Manchester Metroshuttle, Leeds, Essex Reading Buses Lothian Buses Cumfybus, Merseyside Brighton & Hove Stagecoach East Scotland, Aberdeen Arriva Yorkshire, from April 2013 Spain Barcelona (MAN Lion's City Hybrid) Empresa Municipal de Transportes, Madrid Figueres, within the electric bus Project, IDAE Sweden Jönköpings Länstrafik, Jönköping. MAN Lion's City Hybrid. Göteborgs Spårvägar, Gothenburg. Volvo 7700 Hybrid. Storstockholms Lokaltrafik, Stockholm. MAN Lion's City Hybrid. Other European countries Ljubljanski potniški promet (5 Kutsenits Hydra City II/III Hybrid's), Ljubljana, Slovenia Paris: RATP is using a hybrid electric bus outfitted with ultracapacitors; the model used is the MAN Lion's City Hybrid. Milan, Italy Team Trafikk, Trondheim, Norway, with 10 Volvo B5L Vienna, Austria PostAuto, Switzerland: one vehicle is being tested since April 2010; the test will continue for three years. Warsaw, Poland, 4 Solaris hybrid (combustion-electric) buses Luxemburg (Sales-Lentz, Emile Weber and AVL) Belgium / Flanders (De Lijn) Belgium / Wallonia (TEC): 90 Volvo 7900H (plug-in hybrid) + 208 solaris (combustion-electric) ordered in 2016Q4 Other countries Egypt IMUT: http://www.i-mut.net/en/about-us. Buenos Aires, Argentina Christchurch, New Zealand Curitiba, Brazil Mexico City, Mexico (Metrobús Line 4) Bogotá, Colombia See also References External links The Plug-in Hybrid Electric School Bus Project Media related to Hybrid-powered buses at Wikimedia Commons
byrd–hagel resolution
The Byrd–Hagel Resolution was a United States Senate Resolution passed unanimously with a vote of 95–0 on 25 July 1997, sponsored by Senators Chuck Hagel and Robert Byrd. The resolution stated that the US should not sign a climate treaty that would 'mandate new commitments to limit or reduce greenhouse gas emissions for the Annex I Parties, unless ...[it]... also mandates new specific scheduled commitments to limit or reduce greenhouse gas emissions for Developing Country Parties within the same compliance period', or would result in serious harm to the economy of the United States. This effectively prohibited the US from ratifying the Kyoto Protocol. Impact Despite the unanimous passage of the Byrd–Hagel resolution, U.N. Ambassador Peter Burleigh signed the Kyoto Protocol on behalf of the Clinton Administration on November 12 1998. However the Clinton Administration ultimately withheld the treaty from acquiring Senate approval due to potential political backlash and disapproval.At the outset of the Bush administration, Senators Chuck Hagel, Jesse Helms, Larry Craig, and Pat Roberts wrote a letter to President George W. Bush seeking to identify his stance on the Kyoto Protocol and climate change policy. In a letter dated March 13, 2001, President Bush responded "I oppose the Kyoto Protocol because it exempts 80 percent of the world, including major population centers such as China and India, from compliance, and would cause serious harm to the U.S. economy. The Senate's vote, 95-0, shows that there is a clear consensus that the Kyoto Protocol is an unfair and ineffective means of addressing global climate change concerns."During the Obama administration, the U.S. pursued climate policies such as the Copenhagen Accord and Paris Agreement in advocating for more comprehensive environmental reform and to allow nations to self determine their own emission based commitments. In a 2015 Time article, former U.S. Secretary of Defense Chuck Hagel stated that "Congress should play an active role in the negotiations—not by blocking the deal, but by sending a new Global Climate Change Observer Group to report on the proceedings in Paris and closely evaluate other countries’ climate plans". Related legislation H.Res.211 was a U.S. House of Representatives Resolution that was introduced with the support of 102 cosponsors on July 31, 1997. H.Res.211 maintained similar language to the Byrd–Hagel Resolution incurring that the Kyoto Protocol should "(1) mandate new commitments to limit or reduce greenhouse gas emissions for the Annex 1 Parties, unless the protocol or other agreement also mandates new specific scheduled commitments to limit or reduce greenhouse gas emissions for Developing Country Parties within the same compliance period; or (2) result in serious harm to the U.S. economy." Despite the initial popularity of the resolution it ultimately was referred to the Subcommittee on International Economic Policy and Trade where no subsequent action took place. == Notes ==
efficient energy use
Efficient energy use, sometimes simply called energy efficiency, is the process of reducing the amount of energy required to provide products and services. For example, insulating a building allows it to use less heating and cooling energy to achieve and maintain a thermal comfort. Installing light-emitting diode bulbs, fluorescent lighting, or natural skylight windows reduces the amount of energy required to attain the same level of illumination compared to using traditional incandescent light bulbs. Improvements in energy efficiency are generally achieved by adopting a more efficient technology or production process or by application of commonly accepted methods to reduce energy losses. There are many motivations to improve energy efficiency. Decreasing energy use reduces energy costs and may result in a financial cost saving to consumers if the energy savings offset any additional costs of implementing an energy-efficient technology. Reducing energy use is also seen as a solution to the problem of minimizing greenhouse gas emissions. Improved energy efficiency in buildings, industrial processes and transportation could reduce the world's energy needs in 2050 by one third, and help reduce global emissions of greenhouse gases. Another important solution is to remove government-led energy subsidies that promote high energy consumption and inefficient energy use in more than half of the countries in the world.Energy efficiency and renewable energy are said to be the twin pillars of sustainable energy policy and are high priorities in the sustainable energy hierarchy. In many countries energy efficiency is also seen to have a national security benefit because it can be used to reduce the level of energy imports from foreign countries and may slow down the rate at which domestic energy resources are depleted. Aims Energy productivity, which measures the output and quality of goods and services per unit of energy input, can come from either reducing the amount of energy required to produce something, or from increasing the quantity or quality of goods and services from the same amount of energy. From the point of view of an energy consumer, the main motivation of energy efficiency is often simply saving money by lowering the cost of purchasing energy. Additionally, from an energy policy point of view, there has been a long trend in a wider recognition of energy efficiency as the "first fuel", meaning the ability to replace or avoid the consumption of actual fuels. In fact, International Energy Agency has calculated that the application of energy efficiency measures in the years 1974-2010 has succeeded in avoiding more energy consumption in its member states than is the consumption of any particular fuel, including fossil fuels (i.e. oil, coal and natural gas).Moreover, it has long been recognized that energy efficiency brings other benefits additional to the reduction of energy consumption. Some estimates of the value of these other benefits, often called multiple benefits, co-benefits, ancillary benefits or non-energy benefits, have put their summed value even higher than that of the direct energy benefits.These multiple benefits of energy efficiency include things such as reduced greenhouse gas emissions, reduced air pollution and improved health, and improved energy security. Methods for calculating the monetary value of these multiple benefits have been developed, including e.g. 
the choice experiment method for improvements that have a subjective component (such as aesthetics or comfort) and Tuominen-Seppänen method for price risk reduction. When included in the analysis, the economic benefit of energy efficiency investments can be shown to be significantly higher than simply the value of the saved energy.Energy efficiency has proved to be a cost-effective strategy for building economies without necessarily increasing energy consumption. For example, the state of California began implementing energy-efficiency measures in the mid-1970s, including building code and appliance standards with strict efficiency requirements. During the following years, California's energy consumption has remained approximately flat on a per capita basis while national US consumption doubled. As part of its strategy, California implemented a "loading order" for new energy resources that puts energy efficiency first, renewable electricity supplies second, and new fossil-fired power plants last. States such as Connecticut and New York have created quasi-public Green Banks to help residential and commercial building-owners finance energy efficiency upgrades that reduce emissions and cut consumers' energy costs. Related concepts Energy conservation Energy conservation is broader than energy efficiency in including active efforts to decrease energy consumption, for example through behaviour change, in addition to using energy more efficiently. Examples of conservation without efficiency improvements are heating a room less in winter, using the car less, air-drying your clothes instead of using the dryer, or enabling energy saving modes on a computer. As with other definitions, the boundary between efficient energy use and energy conservation can be fuzzy, but both are important in environmental and economic terms. Sustainable energy Energy efficiency—using less energy to deliver the same goods or services, or delivering comparable services with less goods—is a cornerstone of many sustainable energy strategies. The International Energy Agency (IEA) has estimated that increasing energy efficiency could achieve 40% of greenhouse gas emission reductions needed to fulfil the Paris Agreement's goals. Energy can be conserved by increasing the technical efficiency of appliances, vehicles, industrial processes, and buildings. Unintended consequences If the demand for energy services remains constant, improving energy efficiency will reduce energy consumption and carbon emissions. However, many efficiency improvements do not reduce energy consumption by the amount predicted by simple engineering models. This is because they make energy services cheaper, and so consumption of those services increases. For example, since fuel efficient vehicles make travel cheaper, consumers may choose to drive farther, thereby offsetting some of the potential energy savings. Similarly, an extensive historical analysis of technological efficiency improvements has conclusively shown that energy efficiency improvements were almost always outpaced by economic growth, resulting in a net increase in resource use and associated pollution. These are examples of the direct rebound effect.Estimates of the size of the rebound effect range from roughly 5% to 40%. The rebound effect is likely to be less than 30% at the household level and may be closer to 10% for transport. 
A rebound effect of 30% implies that improvements in energy efficiency should achieve 70% of the reduction in energy consumption projected using engineering models. Options Appliances Modern appliances, such as freezers, ovens, stoves, dishwashers, and clothes washers and dryers, use significantly less energy than older appliances. Installing a clothesline significantly reduces a household's energy consumption because the dryer is used less. Current energy-efficient refrigerators, for example, use 40 percent less energy than conventional models did in 2001. Following this, if all households in Europe changed their more than ten-year-old appliances into new ones, 20 billion kWh of electricity would be saved annually, hence reducing CO2 emissions by almost 18 billion kg. In the US, the corresponding figures would be 17 billion kWh of electricity and 27,000,000,000 lb (1.2×10^10 kg) of CO2. According to a 2009 study from McKinsey & Company, the replacement of old appliances is one of the most efficient global measures to reduce emissions of greenhouse gases. Modern power management systems also reduce energy usage by idle appliances by turning them off or putting them into a low-energy mode after a certain time. Many countries identify energy-efficient appliances using energy input labeling. The impact of energy efficiency on peak demand depends on when the appliance is used. For example, an air conditioner uses more energy during the afternoon when it is hot. Therefore, an energy-efficient air conditioner will have a larger impact on peak demand than off-peak demand. An energy-efficient dishwasher, on the other hand, uses more energy during the late evening when people do their dishes. This appliance may have little to no impact on peak demand. Over the period 2001-2021, technology companies replaced traditional silicon switches in electric circuits with faster gallium nitride transistors to make new devices as energy efficient as feasible. Although gallium nitride transistors are more costly, this is a significant change for lowering the carbon footprint. Building design Buildings are an important field for energy efficiency improvements around the world because of their role as a major energy consumer. However, the question of energy use in buildings is not straightforward, as the indoor conditions that can be achieved with energy use vary considerably. The measures that keep buildings comfortable (lighting, heating, cooling and ventilation) all consume energy. Typically the level of energy efficiency in a building is measured by dividing the energy consumed by the floor area of the building, which is referred to as specific energy consumption or energy use intensity: Energy consumed / Built area. However, the issue is more complex, as building materials have embodied energy in them. On the other hand, energy can be recovered from the materials when the building is dismantled, by reusing materials or burning them for energy. Moreover, when the building is used, the indoor conditions can vary, resulting in higher- or lower-quality indoor environments. Finally, overall efficiency is affected by the use of the building: is the building occupied most of the time and are spaces efficiently used — or is the building largely empty?
It has even been suggested that for a more complete accounting of energy efficiency, specific energy consumption should be amended to include these factors: (embodied energy + energy consumed − energy recovered) / (built area × utilization rate × quality factor). Thus a balanced approach to energy efficiency in buildings should be more comprehensive than simply trying to minimize energy consumed. Issues such as quality of indoor environment and efficiency of space use should be factored in. Accordingly, the measures used to improve energy efficiency can take many different forms. Often they include passive measures that inherently reduce the need to use energy, such as better insulation. Many serve various functions, improving the indoor conditions as well as reducing energy use, such as increased use of natural light. A building's location and surroundings play a key role in regulating its temperature and illumination. For example, trees, landscaping, and hills can provide shade and block wind. In cooler climates, designing northern hemisphere buildings with south-facing windows and southern hemisphere buildings with north-facing windows increases the amount of sun (ultimately heat energy) entering the building, minimizing energy use by maximizing passive solar heating. Tight building design, including energy-efficient windows, well-sealed doors, and additional thermal insulation of walls, basement slabs, and foundations, can reduce heat loss by 25 to 50 percent. Dark roofs may become up to 39 °C (70 °F) hotter than the most reflective white surfaces. They transmit some of this additional heat inside the building. US studies have shown that buildings with lightly colored roofs use 40 percent less energy for cooling than buildings with darker roofs. White roof systems save more energy in sunnier climates. Advanced electronic heating and cooling systems can moderate energy consumption and improve the comfort of people in the building. Proper placement of windows and skylights as well as the use of architectural features that reflect light into a building can reduce the need for artificial lighting. Increased use of natural and task lighting has been shown by one study to increase productivity in schools and offices. Compact fluorescent lamps use two-thirds less energy and may last 6 to 10 times longer than incandescent light bulbs. Newer fluorescent lights produce a natural light, and in most applications they are cost effective, despite their higher initial cost, with payback periods as low as a few months. LED lamps use only about 10% of the energy an incandescent lamp requires. Effective energy-efficient building design can include the use of low-cost passive infrared sensors to switch off lighting when areas are unoccupied, such as toilets, corridors or even office areas out of hours. In addition, lux levels can be monitored using daylight sensors linked to the building's lighting scheme to switch on/off or dim the lighting to pre-defined levels to take into account the natural light and thus reduce consumption.
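A minimal sketch of the kind of daylight-linked dimming logic described above; the lux target and the proportional dimming rule are hypothetical choices for illustration, not taken from any particular product or standard.

```python
# Daylight-linked dimming: artificial lighting only supplies the shortfall
# between measured daylight and a pre-defined target illuminance.
# Threshold and dimming curve are hypothetical.

def dimming_level(measured_lux: float, target_lux: float = 500.0) -> float:
    """Return artificial lighting output as a fraction (0.0 = off, 1.0 = full)."""
    if measured_lux >= target_lux:
        return 0.0                               # enough daylight: lights off
    shortfall = target_lux - measured_lux
    return min(1.0, shortfall / target_lux)      # dim in proportion to the shortfall

for lux in (50, 250, 450, 600):
    print(f"{lux} lux daylight -> {dimming_level(lux):.0%} artificial light")
```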
Building management systems link all of this together in one centralised computer to control the whole building's lighting and power requirements.In an analysis that integrates a residential bottom-up simulation with an economic multi-sector model, it has been shown that variable heat gains caused by insulation and air-conditioning efficiency can have load-shifting effects that are not uniform on the electricity load. The study also highlighted the impact of higher household efficiency on the power generation capacity choices that are made by the power sector.The choice of which space heating or cooling technology to use in buildings can have a significant impact on energy use and efficiency. For example, replacing an older 50% efficient natural gas furnace with a new 95% efficient one will dramatically reduce energy use, carbon emissions, and winter natural gas bills. Ground source heat pumps can be even more energy-efficient and cost-effective. These systems use pumps and compressors to move refrigerant fluid around a thermodynamic cycle in order to "pump" heat against its natural flow from hot to cold, for the purpose of transferring heat into a building from the large thermal reservoir contained within the nearby ground. The end result is that heat pumps typically use four times less electrical energy to deliver an equivalent amount of heat than a direct electrical heater does. Another advantage of a ground source heat pump is that it can be reversed in summertime and operate to cool the air by transferring heat from the building to the ground. The disadvantage of ground source heat pumps is their high initial capital cost, but this is typically recouped within five to ten years as a result of lower energy use. Smart meters are slowly being adopted by the commercial sector to highlight to staff and for internal monitoring purposes the building's energy usage in a dynamic presentable format. The use of power quality analysers can be introduced into an existing building to assess usage, harmonic distortion, peaks, swells and interruptions amongst others to ultimately make the building more energy-efficient. Often such meters communicate by using wireless sensor networks. Green Building XML is an emerging scheme, a subset of the Building Information Modeling efforts, focused on green building design and operation. It is used as input in several energy simulation engines. But with the development of modern computer technology, a large number of building performance simulation tools are available on the market. When choosing which simulation tool to use in a project, the user must consider the tool's accuracy and reliability, considering the building information they have at hand, which will serve as input for the tool. Yezioro, Dong and Leite developed an artificial intelligence approach towards assessing building performance simulation results and found that more detailed simulation tools have the best simulation performance in terms of heating and cooling electricity consumption within 3% of mean absolute error. Leadership in Energy and Environmental Design (LEED) is a rating system organized by the US Green Building Council (USGBC) to promote environmental responsibility in building design. 
They currently offer four levels of certification for existing buildings (LEED-EBOM) and new construction (LEED-NC) based on a building's compliance with the following criteria: sustainable sites, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, and innovation in design. In 2013, USGBC developed the LEED Dynamic Plaque, a tool to track building performance against LEED metrics and a potential path to recertification. The following year, the council collaborated with Honeywell to pull data on energy and water use, as well as indoor air quality, from a building automation system (BAS) to automatically update the plaque, providing a near-real-time view of performance. The USGBC office in Washington, D.C. is one of the first buildings to feature the live-updating LEED Dynamic Plaque. A deep energy retrofit is a whole-building analysis and construction process that aims to achieve much larger energy savings than conventional energy retrofits. Deep energy retrofits can be applied to both residential and non-residential ("commercial") buildings. A deep energy retrofit typically results in energy savings of 30 percent or more, perhaps spread over several years, and may significantly improve the building value. The Empire State Building has undergone a deep energy retrofit process that was completed in 2013. The project team, consisting of representatives from Johnson Controls, Rocky Mountain Institute, Clinton Climate Initiative, and Jones Lang LaSalle, achieved an annual energy use reduction of 38%, worth $4.4 million per year. For example, the 6,500 windows were remanufactured onsite into superwindows which block heat but pass light. Air conditioning operating costs on hot days were reduced and this saved $17 million of the project's capital cost immediately, partly funding other retrofitting. Receiving a gold Leadership in Energy and Environmental Design (LEED) rating in September 2011, the Empire State Building is the tallest LEED certified building in the United States. The Indianapolis City-County Building recently underwent a deep energy retrofit process, which has achieved an annual energy reduction of 46% and an annual energy saving of $750,000. Energy retrofits, including deep energy retrofits and other types undertaken in residential, commercial or industrial locations, are generally supported through various forms of financing or incentives. Incentives include pre-packaged rebates where the buyer/user may not even be aware that the item being used has been rebated or "bought down". "Upstream" or "midstream" buy-downs are common for efficient lighting products. Other rebates are more explicit and transparent to the end user through the use of formal applications. In addition to rebates, which may be offered through government or utility programs, governments sometimes offer tax incentives for energy efficiency projects. Some entities offer rebate and payment guidance and facilitation services that enable energy end-use customers to tap into rebate and incentive programs. To evaluate the economic soundness of energy efficiency investments in buildings, cost-effectiveness analysis or CEA can be used. A CEA calculation will produce the value of energy saved, sometimes called negawatts, in $/kWh. The energy in such a calculation is virtual in the sense that it was never consumed but rather saved due to some energy efficiency investment being made. Thus CEA allows comparing the price of negawatts with the price of energy such as electricity from the grid or the cheapest renewable alternative.
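One common way to express the price of such negawatts is the cost of conserved energy: the annualized investment cost divided by the annual energy saved. The sketch below uses a standard capital recovery factor for the annualization; the investment cost, lifetime, discount rate and savings are purely hypothetical, and the source does not prescribe this particular formulation.

```python
# Cost of conserved energy ("negawatt" price) in $/kWh.
# An upfront efficiency investment is annualized with a capital recovery
# factor and divided by the energy it saves each year. Figures are assumed.

def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize an upfront cost over its lifetime at a given discount rate."""
    return rate / (1 - (1 + rate) ** -years)

investment_usd = 12_000        # upfront cost of the efficiency measure (assumed)
lifetime_years = 15            # expected life of the measure (assumed)
discount_rate = 0.05           # real discount rate (assumed)
annual_savings_kwh = 20_000    # energy saved per year (assumed)

annualized_cost = investment_usd * capital_recovery_factor(discount_rate, lifetime_years)
cost_of_conserved_energy = annualized_cost / annual_savings_kwh
print(f"Cost of conserved energy: {cost_of_conserved_energy:.3f} $/kWh")
# If grid electricity costs more than this per kWh, the negawatts are the cheaper option.
```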
The benefit of the CEA approach in energy systems is that it avoids the need to guess future energy prices for the purposes of the calculation, thus removing the major source of uncertainty in the appraisal of energy efficiency investments. Industry Industries use a large amount of energy to power a diverse range of manufacturing and resource extraction processes. Many industrial processes require large amounts of heat and mechanical power, most of which is delivered as natural gas, petroleum fuels, and electricity. In addition, some industries generate fuel from waste products that can be used to provide additional energy. Because industrial processes are so diverse, it is impossible to describe the multitude of possible opportunities for energy efficiency in industry. Many depend on the specific technologies and processes in use at each industrial facility. There are, however, a number of processes and energy services that are widely used in many industries. Various industries generate steam and electricity for subsequent use within their facilities. When electricity is generated, the heat that is produced as a by-product can be captured and used for process steam, heating or other industrial purposes. Conventional electricity generation is about 30% efficient, whereas combined heat and power (also called co-generation) converts up to 90 percent of the fuel into usable energy. Advanced boilers and furnaces can operate at higher temperatures while burning less fuel. These technologies are more efficient and produce fewer pollutants. Over 45 percent of the fuel used by US manufacturers is burnt to make steam. The typical industrial facility can reduce this energy usage by 20 percent (according to the US Department of Energy) by insulating steam and condensate return lines, stopping steam leakage, and maintaining steam traps. Electric motors usually run at a constant speed, but a variable speed drive allows the motor's energy output to match the required load. This achieves energy savings ranging from 3 to 60 percent, depending on how the motor is used. Motor coils made of superconducting materials can also reduce energy losses. Motors may also benefit from voltage optimisation. Industry uses a large number of pumps and compressors of all shapes and sizes and in a wide variety of applications. The efficiency of pumps and compressors depends on many factors but often improvements can be made by implementing better process control and better maintenance practices. Compressors are commonly used to provide compressed air which is used for sand blasting, painting, and other power tools. According to the US Department of Energy, optimizing compressed air systems by installing variable speed drives, along with preventive maintenance to detect and fix air leaks, can improve energy efficiency by 20 to 50 percent. Transportation Automobiles The estimated energy efficiency for an automobile is 280 passenger-miles per 10^6 Btu. There are several ways to enhance a vehicle's energy efficiency. Using improved aerodynamics to minimize drag can increase vehicle fuel efficiency. Reducing vehicle weight can also improve fuel economy, which is why composite materials are widely used in car bodies. More advanced tires, with decreased tire-to-road friction and rolling resistance, can save gasoline. Fuel economy can be improved by up to 3.3% by keeping tires inflated to the correct pressure. Replacing a clogged air filter can improve a car's fuel consumption by as much as 10 percent on older vehicles.
On newer vehicles (1980s and up) with fuel-injected, computer-controlled engines, a clogged air filter has no effect on mpg but replacing it may improve acceleration by 6-11 percent. Aerodynamics also affect a vehicle's efficiency: the design of a car determines how much energy is needed to move it through the surrounding air. Turbochargers can increase fuel efficiency by allowing a smaller displacement engine. The 'Engine of the year 2011' is the Fiat TwinAir engine equipped with an MHI turbocharger. "Compared with a 1.2-liter 8v engine, the new 85 HP turbo has 23% more power and a 30% better performance index. The performance of the two-cylinder is not only equivalent to a 1.4-liter 16v engine, but fuel consumption is 30% lower." Energy-efficient vehicles may reach twice the fuel efficiency of the average automobile. Cutting-edge designs, such as the diesel Mercedes-Benz Bionic concept vehicle, have achieved a fuel efficiency as high as 84 miles per US gallon (2.8 L/100 km; 101 mpg‑imp), four times the current conventional automotive average. The mainstream trend in automotive efficiency is the rise of electric vehicles (all-electric or hybrid electric). Electric motors have more than double the efficiency of internal combustion engines. Hybrids, like the Toyota Prius, use regenerative braking to recapture energy that would dissipate in normal cars; the effect is especially pronounced in city driving. Plug-in hybrids also have increased battery capacity, which makes it possible to drive for limited distances without burning any gasoline; in this case, energy efficiency is dictated by whatever process (such as coal burning, hydroelectric, or renewable sources) created the power. Plug-ins can typically drive for around 40 miles (64 km) purely on electricity without recharging; if the battery runs low, a gas engine kicks in, allowing for extended range. Finally, all-electric cars are also growing in popularity; the Tesla Model S sedan is the only high-performance all-electric car currently on the market. Street lighting Cities around the globe light up millions of streets with 300 million lights. Some cities are seeking to reduce street light power consumption by dimming lights during off-peak hours or switching to LED lamps. LED lamps are known to reduce energy consumption by 50% to 80%. Aircraft There are several ways to improve aviation's use of energy through modifications to aircraft and air traffic management. Aircraft efficiency improves with better aerodynamics, engines and lower weight. Seat density and cargo load factors also contribute to efficiency. Air traffic management systems can allow automation of takeoff, landing, and collision avoidance; within airports, automation ranges from simple things like HVAC and lighting to more complex tasks such as security and scanning. International standards International standards ISO 17743 and ISO 17742 provide a documented methodology for calculating and reporting on energy savings and energy efficiency for countries and cities. Examples by country or region Europe The first EU-wide energy efficiency target was set in 1998. Member states agreed to improve energy efficiency by 1 percent a year over twelve years. In addition, legislation about products, industry, transport and buildings has contributed to a general energy efficiency framework.
More effort is needed to address heating and cooling: there is more heat wasted during electricity production in Europe than is required to heat all buildings in the continent. All in all, EU energy efficiency legislation is estimated to deliver savings worth the equivalent of up to 326 million tons of oil per year by 2020. The EU set itself a 20% energy savings target by 2020 compared to 1990 levels, but member states decide individually how energy savings will be achieved. At an EU summit in October 2014, EU countries agreed on a new energy efficiency target of 27% or greater by 2030. One mechanism used to achieve the target of 27% is 'Suppliers Obligations and White Certificates'. The ongoing debate around the 2016 Clean Energy Package also puts an emphasis on energy efficiency, but the goal will probably remain around 30% greater efficiency compared to 1990 levels. Some have argued that this will not be enough for the EU to meet its Paris Agreement goals of reducing greenhouse gas emissions by 40% compared to 1990 levels. In the European Union, 78% of enterprises proposed energy-saving methods in 2023, 67% listed energy contract renegotiation as a strategy, and 62% planned to pass on costs to consumers as a way of dealing with energy market trends. Germany Energy efficiency is central to energy policy in Germany. As of late 2015, national policy included a set of efficiency and consumption targets (with actual values reported for 2014). Recent progress toward improved efficiency has been steady aside from the financial crisis of 2007–08. Some, however, believe energy efficiency is still under-recognised in terms of its contribution to Germany's energy transformation (or Energiewende). Efforts to reduce final energy consumption in the transport sector have not been successful, with growth of 1.7% between 2005 and 2014. This growth is due to both road passenger and road freight transport. Both sectors increased their overall distance travelled to record the highest figures ever for Germany. Rebound effects played a significant role, both between improved vehicle efficiency and the distance travelled, and between improved vehicle efficiency and an increase in vehicle weights and engine power. In 2014, the German federal government released its National Action Plan on Energy Efficiency (NAPE). The areas covered are the energy efficiency of buildings, energy conservation for companies, consumer energy efficiency, and transport energy efficiency. The central short-term measures of NAPE include the introduction of competitive tendering for energy efficiency, the raising of funding for building renovation, the introduction of tax incentives for efficiency measures in the building sector, and the setting up of energy efficiency networks together with business and industry. In 2016, the German government released a green paper on energy efficiency for public consultation (in German). It outlines the potential challenges and actions needed to reduce energy consumption in Germany over the coming decades. At the document's launch, economics and energy minister Sigmar Gabriel said "we do not need to produce, store, transmit and pay for the energy that we save". The green paper prioritizes the efficient use of energy as the "first" response and also outlines opportunities for sector coupling, including using renewable power for heating and transport. Other proposals include a flexible energy tax which rises as petrol prices fall, thereby incentivizing fuel conservation despite low oil prices.
Spain In Spain, four out of every five buildings use more energy than they should. They are either inadequately insulated or consume energy inefficiently.The Unión de Créditos Immobiliarios (UCI), which has operations in Spain and Portugal, is increasing loans to homeowners and building management groups for energy-efficiency initiatives. Their Residential Energy Rehabilitation initiative aims to remodel and encourage the use of renewable energy in at least 3720 homes in Madrid, Barcelona, Valencia, and Seville. The works are expected to mobilize around €46.5 million in energy efficiency upgrades by 2025 and save approximately 8.1 GWh of energy. It has the ability to reduce carbon emissions by 7,545 tonnes per year. Poland In May 2016 Poland adopted a new Act on Energy Efficiency, to enter into force on 1 October 2016. Australia In July 2009, the Council of Australian Governments, which represents the individual states and territories of Australia, agreed to a National Strategy on Energy Efficiency (NSEE). This is a ten-year plan accelerating the implementation of a nationwide adoption of energy-efficient practices and a preparation for the country's transformation into a low carbon future. The overriding agreement that governs this strategy is the National Partnership Agreement on Energy Efficiency. Canada In August 2017, the Government of Canada released Build Smart - Canada's Buildings Strategy, as a key driver of the Pan-Canadian Framework on Clean Growth and Climate Change, Canada's national climate strategy. United States A 2011 Energy Modeling Forum study covering the United States examined how energy efficiency opportunities will shape future fuel and electricity demand over the next several decades. The US economy is already set to lower its energy and carbon intensity, but explicit policies will be necessary to meet climate goals. These policies include: a carbon tax, mandated standards for more efficient appliances, buildings and vehicles, and subsidies or reductions in the upfront costs of new more energy-efficient equipment.Programs and organisations: Alliance to Save Energy American Council for an Energy-Efficient Economy Building Codes Assistance Project Building Energy Codes Program Consortium for Energy Efficiency Energy Star, from United States Environmental Protection Agency See also Carbon footprint Energy audit Energy conservation measures Energy efficiency implementation Energy recovery Energy resilience List of least carbon efficient power stations The Green Deal Energy Reduction AssetsInternational programs: 80 Plus 2000-watt society IEA Solar Heating & Cooling Implementing Agreement Task 13 International Energy Agency International Electrotechnical Commission International Partnership for Energy Efficiency Cooperation World Sustainable Energy Days == References ==
emission inventory
An emission inventory (or emissions inventory) is an accounting of the amount of pollutants discharged into the atmosphere. An emission inventory usually contains the total emissions for one or more specific greenhouse gases or air pollutants, originating from all source categories in a certain geographical area and within a specified time span, usually a specific year. An emission inventory is generally characterized by the following aspects: Why: the types of activities that cause emissions; What: the chemical or physical identity of the pollutants included, and the quantity thereof; Where: the geographic area covered; When: the time period over which emissions are estimated; How: the methodology used. Emission inventories are compiled for both scientific applications and for use in policy processes. Use Emissions and releases to the environment are the starting point of every environmental pollution problem. Information on emissions is therefore an absolute requirement in understanding environmental problems and in monitoring progress towards solving these. Emission inventories provide this type of information. Emission inventories are developed for a variety of purposes. Policy use: policy makers use them to track progress towards emission reduction targets and to develop strategies and policies. Scientific use: inventories of natural and anthropogenic emissions are used by scientists as inputs to air quality models. Policy use Two more or less independent types of emission reporting schemes have been developed: annual reporting of national total emissions of greenhouse gases and air pollutants in response to obligations under international conventions and protocols; this type of emissions reporting aims at monitoring the progress towards agreed national emission reduction targets; and regular emission reporting by individual industrial facilities in response to legal obligations; this type of emission reporting is developed to support public participation in decision-making. Examples of the first are the annual emission inventories as reported to the United Nations Framework Convention on Climate Change (UNFCCC) for greenhouse gases and to the UNECE Convention on Long-Range Transboundary Air Pollution (LRTAP) for air pollutants. In the United States, a national emissions inventory is published annually by the United States Environmental Protection Agency. This inventory is called the "National Emissions Inventory". Examples of the second are the so-called Pollutant Release and Transfer Registers. Policy users typically are interested in annual total emissions only. Scientific use Air quality models need input to describe all air pollution sources in the study area. Air emission inventories provide this type of information. Depending on the spatial and temporal resolution of the models, the spatial and temporal resolution of the inventories frequently has to be increased beyond what is available from national emission inventories as reported to the international conventions and protocols. Compilation For each of the pollutants in the inventory, emissions are typically estimated by multiplying the intensity of each relevant activity ('activity rate') in the geographical area and time span by a pollutant-dependent proportionality constant ('emission factor'). Why: the source categories To compile an emission inventory, all sources of the pollutants must be identified and quantified.
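A minimal sketch of the activity-rate times emission-factor estimate described in the Compilation paragraph above, summed over source categories; the activity data and emission factors below are illustrative assumptions, whereas real inventories take them from national statistics and guidance such as the IPCC and EMEP guidebooks.

```python
# Emission estimate: activity rate multiplied by an emission factor,
# summed over source categories. All numbers below are hypothetical.

activity_rates = {                 # activity data for one year (assumed, in TJ of fuel burned)
    "road transport (diesel)": 50_000,
    "residential heating (natural gas)": 30_000,
}
emission_factors_t_co2_per_tj = {  # assumed pollutant-dependent emission factors
    "road transport (diesel)": 74.1,
    "residential heating (natural gas)": 56.1,
}

total_t_co2 = sum(
    activity_rates[source] * emission_factors_t_co2_per_tj[source]
    for source in activity_rates
)
print(f"Estimated CO2 emissions: {total_t_co2:,.0f} tonnes")
```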
Frequently used source categorisations are those defined by the Intergovernmental Panel on Climate Change (IPCC) in the Revised 1996 IPCC Guidelines for National Greenhouse Gas Inventories, IPCC Good practice guidance and uncertainty management in national greenhouse gas inventories, IPCC Good practice guidance for land use, land use change and forestry and more recently the 2006 IPCC Guidelines for National Greenhouse Gas Inventories those defined in the UNECE Convention on Long-Range Transboundary Air Pollution (LRTAP); recently the LRTAP Convention adopted a source categorisation that is largely consistent with those of IPCC, to replace the more technology oriented Standardized Nomenclature for Air Pollutants (SNAP) used until 2005.Both source categorisations make a clear distinction between sources related to the combustion of (fossil) fuels and those that are not caused by combustion. In most cases the specific fuel combusted in the former is added to the source definition. Source categories include: Energy Fuel combustion Stationary combustion Industrial combustion Residential heating Mobile combustion (transport) Fugitive emissions from (fossil) fuel use Industrial Processes Solvent and other product use Agriculture LULUCF (Land Use, Land Use Change and Forestry) WasteMany researchers and research projects use their own source classifications, sometimes based on either the IPCC or the SNAP source categories, but in most cases the source categories listed above will be included. What: the pollutants Emission inventories have been developed and still are being developed for two major groups of pollutants: Greenhouse gases: Carbon dioxide (CO2), Methane (CH4), Nitrous oxide (N2O) and A number of fluorinated gaseous compounds (HFCs, PFCs, SF6) Other greenhouse gases, not included in the United Nations Framework Convention on Climate Change (UNFCCC) Air pollutants: Acidifying pollutants: sulphur dioxide (SO2), nitrogen oxides (NOx, a combination of nitrogen monoxide, NO and nitrogen dioxide, NO2) and ammonia (NH3), Photochemical smog precursors: again nitrogen oxides and non-methane volatile organic compounds (NMVOCs) Particulates and particulate precursors Toxic pollutants like heavy metals and persistent organic pollutants Carbon monoxide (CO) Where: geographical resolution Typically national inventories provide data summed at the national territory only. In some cases additional information on major industrial stacks ('point sources') is available. Stacks are also called release points, because not all emissions come from stacks. Other industrial sources include fugitive emissions, which cannot be attributed to any single release point. Some inventories are compiled from sub-national entities such as states and counties (in the U.S.), which can provide additional spatial resolution. In scientific applications, where higher resolutions are needed, geographical information such as population densities, land use or other data can provide tools to disaggregate the national level emissions to the required resolution, matching the geographical resolution of the model. When: temporal resolution Similarly, national emission inventories provide total emissions in a specific year, based on national statistics. In some model applications higher temporal resolutions are needed, for instance when modelling air quality problems related to road transport. In such cases data on time dependent traffic intensities (rush hours, weekends and working days, summer and winter driving patterns, etc.) 
can be used to establish the required higher temporal resolution. Inventories compiled from Continuous Emissions Monitors (CEMs) can provide hourly emissions data. How: methodology to compile an emission inventory In 2007 the European Environment Agency updated the third edition of the inventory guidebook. The guidebook is prepared by the UNECE/EMEP Task Force on Emission Inventories and Projections and provides a detailed guide to the atmospheric emissions inventory methodology. Specifically for road transport, the European Environment Agency finances COPERT 4, a software program to calculate emissions for inclusion in official annual national inventories. Quality The quality of an emission inventory depends on its use. In policy applications, the inventory should comply with what has been decided under the relevant convention. Both the UNFCCC and LRTAP conventions require an inventory to follow agreed quality criteria. A well-constructed inventory should include enough documentation and other data to allow readers and users to understand the underlying assumptions and to assess its usability in an intended application. See also Emission factor Emissions & Generation Resource Integrated Database Greenhouse gas inventory Notes External links National inventories of GHG emitted in 2019 (received by the UNFCCC in 2021) Sources and further reading United Nations Framework Convention on Climate Change Intergovernmental Panel on Climate Change U.S. Environmental Protection Agency: Clearinghouse for Inventories and Emissions Factors U.S. Environmental Protection Agency: National Greenhouse Gas Emissions Data U.S. Environmental Protection Agency: Toxics Release Inventory European Environment Agency EMEP/CORINAIR Emission Inventory Guidebook 2009 U.S. Toxic Air Emissions Map COPERT 4 - Computer Programme to Calculate Emissions from Road Transport Methodology for the calculation of exhaust emissions - Road Transport
california public utilities commission
The California Public Utilities Commission (CPUC or PUC) is a regulatory agency that regulates privately owned public utilities in the state of California, including electric power, telecommunications, natural gas and water companies. In addition, the CPUC regulates common carriers, including household goods movers, limousines, rideshare services (§ Transportation network companies), self-driving cars, and rail crossing safety. The CPUC has headquarters in the Civic Center district of San Francisco, and field offices in Los Angeles and Sacramento. History On April 1, 1878, the California Office of the Commissioner of Transportation was created. During the 19th century, public concerns over the unbridled power of the Southern Pacific Railroad grew to the point that a three-member Railroad Commission was established, primarily to approve transportation prices. However, the Southern Pacific quickly dominated this commission to its advantage, and public outrage re-ignited. As experience with public regulation grew, other common utilities were brought under the oversight of the Railroad Commission.On March 3, 1879, the California Constitution was adopted by constitutional convention and was ratified by the electorate on May 7, 1879, and included provisions relating to Railroad Commissioners in article XII. On April 15, 1880, the Board of Railroad Commissioners was created. On March 20, 1909, the Railroad Commission of the State of California replaced these other entities. On February 9, 1911, the California Legislature passed the Railroad Commission Act reorganizing the Railroad Commission.On March 24, 1911, the California Legislature proposed a constitutional amendment giving it constitutional status, which was ratified by the electorate on October 10, 1911. On June 16, 1945, a constitutional amendment was proposed by the legislature to rename the Railroad Commission as the California Public Utilities Commission, which was ratified by the electorate on November 5, 1946. As a result of the amendment, the Constitution of California declares that the Public Utilities Code is the highest law in the state, that the legislature has unlimited authority to regulate public utilities under the Public Utilities Code, and that its provisions override any conflicting provision of the State Constitution which deals with the subject of regulation of public utilities. In 2013 and 2014, the Office of State Audits and Evaluations (OSAE) audited the CPUC budgeting practices and found significant weaknesses in the CPUC's budget operations. As a result, the CPUC could not assess whether transportation and utility companies are over or under charging ratepayers for user fees. Additionally, the CPUC did not know if user fees were spent on the programs they were collected for. The CPUC did not regularly audit public utilities to ensure the fees they collect from ratepayers were accurately paid to CPUC. Additionally, the CPUC did not consistently pursue delinquent user fees it was aware of, that utility companies collected. In October 2014, Commission President Michael Peevey decided to step down at the upcoming end of his second six-year term in December. The agency had an apparent cozy relationship with Pacific Gas & Electric, a utility whose gas line exploded in San Bruno killing eight people in 2010. 
His home in the Los Angeles suburb of La Cañada Flintridge was searched by criminal investigators in January 2015. In 2020, external auditors from Sjoberg Evanshenk Consulting delivered a series of reports commissioned by the CPUC for roughly $250,000. These reports reaffirmed continued weak budgeting practices and further found that approximately $200 million was due from utility companies, including $50 million past due since 2017, with portions dating back as far as the 1990s. In February 2021, OSAE reaffirmed these findings in response to a whistleblower complaint by former Executive Director Alice Stebbins. In December 2020, Alice Stebbins was dismissed from the position of executive director after allegedly "violating state personnel rules" and misleading "the public by asserting that as much as $200 million was missing from accounts intended to fund programs for the state's blind". However, Bay City News Foundation and ProPublica found that Stebbins was right about the missing money. Structure Five commissioners each serve staggered six-year terms as the governing body of the agency. Commissioners are appointed by the governor and must be confirmed by the California State Senate. The CPUC meets publicly to carry out the business of the agency, which may include the adoption of utility rate changes, rules on safety and service standards, implementation of conservation programs, investigation into unlawful or anticompetitive practices by regulated utilities and intervention into federal proceedings which affect California ratepayers. As of December 2022, the commissioners are: President Alice Busching Reynolds (appointed November 22, 2021 by Gov. Gavin Newsom, effective December 31, 2021, and confirmed August 17, 2022; term expires January 1, 2027) Genevieve Shiroma (appointed January 22, 2019 by Gov. Gavin Newsom; term expires in 2024) Darcie L. Houck (appointed February 9, 2021 by Gov. Gavin Newsom; term expires in 2027) John Reynolds (appointed December 23, 2021 and reappointed December 22, 2022 by Gov. Gavin Newsom; term expires in 2027) Karen Douglas (appointed December 22, 2022 by Gov. Gavin Newsom; term expires in 2027). Some regulatory requirements are established by the California State Legislature through the passage of laws. These laws often reside in the California Public Utilities Code. The CPUC headquarters are in San Francisco, with offices in Los Angeles and Sacramento, and the CPUC employs 1,000 people, including judges, engineers, analysts, lawyers, auditors, and support staff. Exclusions The CPUC does not regulate the rates of utilities and common carriers operated by government agencies. Thus, such organizations as the Los Angeles Department of Water and Power, San Francisco's Bay Area Rapid Transit, and other municipally operated utilities or common carriers are not subject to rate regulation or tariff schedule filing with the CPUC. However, all municipal utilities and carriers in California must follow Public Utilities provisions on holding hearings and obtaining public input before raising rates or changing terms of service, and municipal utility customers have means of appeal of potential disconnections. Additionally, the CPUC has jurisdiction over components of the safety operations of government-run utilities and common carriers. Energy and climate change The CPUC regulates investor-owned electric and gas utilities within the state of California, including Pacific Gas & Electric, Southern California Edison, Southern California Gas and San Diego Gas & Electric.
Among its stated goals for energy regulation are to establish service standards and safety rules, authorize utility rate changes, oversee markets to inhibit anti-competitive activity, prosecute unlawful utility marketing and billing activities, govern business relationships between utilities and their affiliates, resolve complaints by customers against utilities, implement energy efficiency and conservation programs and programs for the low-income and disabled, oversee the merger and restructure of utility corporations, and enforce the California Environmental Quality Act for utility construction. Leuwam Tesfai has served as Director of Energy Division and Deputy Executive Director of Energy and Climate Policy since 2022. California Solar Initiative The California Solar Initiative (CSI) is overseen by the California Public Utilities Commission (CPUC) and provides incentives for solar system installations to customers of the state's three investor-owned utilities (IOUs): Pacific Gas and Electric Company (PG&E), Southern California Edison (SCE) and San Diego Gas and Electric (SDG&E). The CSI program provides upfront incentives for solar systems installed on existing residential homes, as well as existing and new commercial, industrial, government, non-profit, and agricultural properties within the service territories of the IOUs. The CSI program has a goal to install 1,800 MW of new solar (excluding solar water heating) by the end of 2016. As of January 13, 2015, the CSI program has achieved a total of 1,743 MW of installed capacity, 96.8% of the program's goal, since its inception.On January 12, 2006, the CPUC issued an Interim Order that set initial policy and funding for the program. The CPUC was nearing an August 24, 2006 Commission vote on proposed incentive level design, administrative structure, and planning schedule, when SB 1 was signed into law on August 21, 2006, by Governor Arnold Schwarzenegger. While SB 1 codified the state's commitment to the creation of a self-sustaining solar market, it also introduced several unanticipated requirements for the program. In order to conform to state law, the CPUC then worked with parties to issue a proposed decision on SB 1's impacts to the California Solar Initiative program for public comment; this decision was approved by Commissioners on December 14, 2006. The program launched on January 1, 2007.The CSI Program was designed to be responsive to economies of scale in the California solar market – as the solar market grows, it was expected solar system costs would drop and incentives offered through the program decline. The CPUC divided the overall megawatt goal for the incentive program into ten programmatic incentive level steps, and assigned a target amount of capacity in each step to receive an incentive based on dollars per-watt or cents per-kilowatt-hour. Greenhouse gas emissions standards In January 2007, the CPUC adopted a greenhouse gas emissions standard that required new long-term commitments for baseload generation to serve California consumers with power plants that have emissions no greater than a combined cycle gas turbine plant. The CPUC said the emissions standard is a vital step in addressing global warming. Cap and trade On February 8, 2008, CPUC President Michael Peevey issued a proposed decision on the implementation of California's greenhouse gas emissions legislation, AB 32. 
The decision recommends a cap and trade program for the electricity sector in California that would impose regulations on owners and operators of generation in California and out-of-state generators delivering electricity to the California electrical grid. Zero Net Energy In 2007 the CPUC adopted goals to have all California residential construction use zero net energy by 2020, and all new commercial construction use zero net energy by 2030. Zero Net Energy buildings each contribute an amount of renewable energy to a utility that will balance out any amount of non-renewable energy they extract from the utility. For residential buildings, the CPUC participates in California's Zero Net Energy program that helps builders and homeowners select effective home energy upgrades. Telecommunications Communications The CPUC regulates intrastate telecommunications service and also the terms and conditions of service of wireless phone providers (but not entry or rates, which are the responsibility of the Federal Communications Commission.) The CPUC has developed a consumer-oriented communications website. The CPUC also reviews third-party-verification recordings to monitor for telephone slamming. The Digital Infrastructure and Video Competition Act of 2006 (DIVCA) made the CPUC responsible for video (what were formerly known as cable TV) franchises. The DIVCA granted the CPUC limited authority to regulate video service providers via a statewide franchise scheme. The CPUC is responsible for licensing video service providers, and enforcing certain anti discrimination and build out requirements imposed by the Act. Local franchise authorities will continue to regulate rights of way used by video providers, handle consumer complaints, and requirements as to public, educational, and government access (PEG) channels. Previously, licensing of franchises was handled by local authorities such as the Sacramento Metropolitan Cable Television Commission. The CPUC also played a key role in the Governor's Broadband Task Force formed in 2006. The task force produced two reports making recommendations to the Governor on what could be done to enhance broadband in California, engaging in a broadband mapping project for California, and producing a broadband speed report. In response to the Task Force mapping project and report, the CPUC launched an innovative California Advanced Services Fund (CASF), which is an infrastructure grant program for deploying broadband in unserved areas of California. The program is funded by a telephone surcharge of 0.56% for the period of March 1, 2018 to December 31, 2022. As of AB 1665, passed in October, 2017, broadband providers may apply for up to 100% funding of capital costs to deploy last-mile broadband in unserved areas. Unserved areas are those where no facility-based broadband provider offers service at speeds of at least 6 megabits per second downstream and 1 megabit per second upstream. Call recording The concept behind General Order 107-B is that telephone calls cannot be recorded in California unless all parties to the call know it is being recorded. The order gives specific requirements for lawfully recording telephone calls. Based on the 1983 version, one way to meet the requirements may be to give a verbal warning. This often occurs by the playing of a recording in an automatic call distribution queue: "Your call may be recorded or monitored for quality assurance purposes." 
Another permitted method of warning all callers that a call is being recorded is a recorder warning tone: a 1,440 Hz tone repeating every fifteen seconds. In the 1960s, radio stations with call-in programs used to employ a recorder warning tone. The law now exempts lines used for call-in to broadcasts or cablecasts, since it is presumed a caller to a telecast is aware their call is subject to being transmitted or recorded and intends for this to happen. The order requires that telephone utilities disconnect telephone service for violations of the order. Transportation Transportation network companies A transportation network company (TNC) is a company that uses an online-enabled platform to connect passengers with drivers using their personal, non-commercial vehicles. Examples include Lyft, Uber, Wingz, Haxi and Summon. The definition of a TNC was created by the CPUC in 2013, as a result of a rulemaking process around new and previously unregulated forms of transportation. Prior to the definition, the CPUC had attempted to group TNC services in the same category as limousines. Taxi industry groups opposed the creation of the new category, arguing that TNCs are taking away their business as illegal taxicab operations. The CPUC established regulations for TNC services at the same time as the definition. These included driver background checks, driver training, drug and alcohol policies, minimum insurance coverage of $1 million, and company licensing through the CPUC. Intrastate airlines Prior to the federal Airline Deregulation Act in 1978, the CPUC regulated intrastate airlines operating in California, including jet air carriers such as Pacific Southwest Airlines (PSA) and Air California, both of which no longer exist. References External links Official website California Emerging Technology Fund
loy yang power station
The Loy Yang Power Station is a brown coal-fired thermal power station located on the outskirts of the city of Traralgon, in south-eastern Victoria, Australia. It consists of two sections, known as Loy Yang A (4 units) and Loy Yang B (2 units). Both Loy Yang A and B are supplied by the Loy Yang brown coal mine. The Loy Yang power stations are located in the brown coal rich Latrobe Valley, along with the Yallourn Power Station. If Loy Yang A and Loy Yang B are counted together, they form the largest power station in Australia, generating 3,280 MW of power (if counted separately, the 2,880 MW Eraring Power Station is the largest). Loy Yang A and B are base load power stations and together produce 50% of Victoria's electricity requirements. Loy Yang also serves as the mainland connection point for the Basslink electricity interconnector cable which runs under Bass Strait, connecting it to the George Town sub-station in northern Tasmania. Technical Features All six Loy Yang boilers are of the forced circulation tower type and are made by International Combustion Australia. Steam is supplied at a pressure of 16 MPa and a temperature of 540 °C. Loy Yang A Loy Yang A has four generating units with a combined capacity of 2,210 MW (2,960,000 hp), which were completed between 1984 and 1988. Loy Yang A consists of three units with Kraftwerk Union alternators (units 1, 3 and 4) and one unit by Brown Boveri & Cie (unit 2) that was supposed to be the second unit at the gas-fired Newport Power Station. Later, during the 2000s, the turbine/generator couplings on the three Kraftwerk Union units were upgraded to allow an increase in maximum continuous rating (MCR) to 560 MW. Loy Yang A is the mainland connection point for the Basslink electricity interconnector cable. Loy Yang B Loy Yang B has two units with a capacity of 1,070 MW (1,430,000 hp) which entered service in 1993 and 1996. The two units have Hitachi turbo generators. Loy Yang B employs up to 152 full-time staff and another 40 contractors. It is Victoria's newest and most efficient brown coal-fired power station and can generate approximately 17% of Victoria's power needs. Fuel supply Four giant bucket-wheel excavators, called dredgers, operate 24 hours a day in the Loy Yang open cut mine, mostly feeding coal directly to the boilers via conveyor belt; 18 hours of reserve supply is held in a 70,000-tonne (69,000 long tons) coal bunker. Each year approximately 30 million tonnes (30×10^6 long tons) of coal are extracted from the open pit. The open cut coal mine pit is about 200 metres (660 ft) deep and, at its widest, about 3 kilometres (1.9 mi) by 2 kilometres (1.2 mi) across. The current mining licence has been extended by the Victorian Government up to the year 2065. History The squatter James Rintoul established a stock run at the place where Sheepwash Creek meets the Latrobe River, which he named "Loy Yang". The Loy Yang facility was originally constructed through the 1980s by International Combustion Australia, which was contracted by the government-owned State Electricity Commission of Victoria (SECV). It consists of two separate units, Loy Yang A and Loy Yang B. Constructed in stages, the Loy Yang complex was originally planned to consist of eight generating units of 525 megawatts (704,000 hp) each upon completion. The privatization of the SECV resulted in only six generating units being completed, four in Loy Yang A and two in Loy Yang B.
The chimneys were constructed by Thiess Contractors. The Loy Yang complex was privatised in 1995, as were most of the assets of the SECV. Prior to the Government of Victoria's privatisations from the mid-1990s, a 51% stake in Loy Yang B was sold to Mission Energy. Edison Mission later bought the complete plant and subsequently sold it to the joint venture International Power Mitsui. In 1995, Loy Yang B was the world's first coal-fired power station to gain quality accreditation to ISO 9001 and the first Australian power station to gain environmental accreditation to ISO 14001. In March 2010 it was announced that the operators of Loy Yang A (Loy Yang Power) signed a contract with Alcoa World Alumina & Chemicals Australia for the supply of electricity to power aluminium smelters at Portland and Point Henry until 2036. The Point Henry Smelter ceased operation in 2014 and is now closed. In June 2012 AGL Energy acquired Loy Yang A and the Loy Yang coal mine. In 2020, AGL announced plans to build a 200 MW / 800 MWh (4 hours) battery storage power station at Loy Yang A to increase flexibility. Until November 2017, Loy Yang B was jointly owned by Engie (formerly GDF Suez Australia), which held a 70% stake, and Mitsui & Co with 30%. In November 2017, Engie sold Loy Yang B to Chow Tai Fook Enterprises for a reported AU$1.2 billion, although some reports stated that it was acquired by Chow Tai Fook Enterprises' subsidiary Alinta Energy instead. Greenhouse gas emissions Carbon Monitoring for Action estimates this power station emits 14.4 million tonnes (14.2×10^6 long tons) of greenhouse gases each year as a result of burning coal. On 3 September 2007 the Loy Yang complex was the target of climate change activists. The activists locked themselves to conveyor belts and reduced power production for several hours before being cut free. Four people were arrested. On 23 September 2021 Environment Victoria took legal action against AGL Energy, which owns Loy Yang A (and also against EnergyAustralia, owner of Yallourn power station, and Alinta at Loy Yang B), for failure to manage climate pollution, in a case that was the first to test the Victorian government's climate change laws. Engineering heritage award The power station received an Engineering Heritage Marker from Engineers Australia as part of its Engineering Heritage Recognition Program. Future In September 2022, AGL announced that Loy Yang A would close in 2035. Alinta Energy, the owner of Loy Yang B, has announced an intention to operate it until 2047. However, the state government has announced a target that 95% of Victoria's electricity will be generated by renewable energy by 2035. It would be impossible to meet this target if Loy Yang B continued to operate in its present form. The Premier of Victoria, Daniel Andrews, has stated that it is unlikely to operate to the previously announced closure date. References External links Loy Yang Power International Power Australian Energy Market Operator Participant Registration List
heating film
Heating films are a method of electric resistance heating, providing relatively low temperatures (compared to many conventional heating systems) over large areas. Heating films can be directly installed to provide underfloor heating, wall radiant heating and ceiling radiant heating. The films can also be used in heating panels to produce wall or ceiling panel heaters. Although heating films do not usually run at very high temperatures (typically 30 °C (86 °F) on floors and up to 40 °C (104 °F) on walls), due to the large surface area they cover, they can provide significant energy output. Also, due to the low temperature, undesirable heat losses can be lower than in higher-temperature wet heating systems, which lose heat from long pipe runs from the central heating source. Description Electric resistance heating films consist of a substrate film, most commonly made from plastic, printed or coated with a resistance material and electric contact busbars. This is laminated or coated with another layer of insulating film, usually similar to the substrate. Electrical contacts can be made using various types of crimp contacts, which make contact through the covering films to the internal busbars. If the operating voltage is high enough, the contacts must be additionally insulated and additional safety insulation or safety earth covering must be provided over the film to protect users from shock in case of damage to the installed film. Heating films can be used as a replacement for conventional heating systems, as a primary heat source, or to augment existing systems. Due to the simplicity of installation and ease of hiding the installation, they can be popular with DIY installers. The films can be placed or removed as needed when installed under carpets or other easily removable floor coverings. Heating controllers can allow the residents to select which surface will be heated. Films are available with different power outputs per m². The required power output will depend on the area covered, building insulation, required energy input to the room and maximum desired surface temperature. Many countries have regulations for the maximum floor or wall temperature. Thermostats with surface temperature sensors may be required in these jurisdictions. This heating system is mainly used in Asian countries like Korea, where it is considered a modern version of ondol heating. Some heating films are semi-permanent, meaning that they need to be changed after a certain period of use, and are not widely available everywhere. However, many modern heating films can be considered permanent, lasting at least 20 years with no degradation in performance. Heating films are becoming increasingly popular globally. Ondol radiant floor heating An ondol, also called gudeul (Korean: 구들), in Korean traditional architecture, is underfloor heating which uses direct heat transfer from wood smoke to the underside of a thick masonry floor. Many modern Korean houses use a radiant underfloor heating system that is called ondol, referring to the traditional ondol heating system. In fact, Korean people refer to any domestic heating system as ondol. Technically, the modern-day ondol is based on the same concept as the underfloor radiant heating invented by the American architect Frank Lloyd Wright, who was interested in Japanese architecture. Wright came up with the idea while visiting a Japanese nobleman's house, where he found a tea room that was different from the rest of the house, heated by a Korean ondol.
He then took the same idea to create a heating system suited to American houses. Wright invented modern radiant floor heating, using hot water running through pipes instead of hot air through flues. A heating film is itself a variation of the modern ondol, but it does not require hot water and pipes, as it is fully electric. There are also many variations of this underfloor heating system, as many underfloor heaters are based on PTC heating elements, far-infrared emitters, carbon film, or heating cables.

Operating voltage
Heating films for domestic heating are available to be powered directly from the mains electricity supply in different countries, e.g. 120 volts AC or 220 volts AC, or from low voltage, e.g. 24 volts.

Mains voltage
In general, due to safety issues, heating films supplied directly from the mains are only installed underfloor or in ceilings. Due to the potentially hazardous voltages of mains electricity, protection from electric shock is usually required, whether against damage (e.g. nails) or water ingress. Most commonly, additional insulation layers are added over the heating film and sealed with waterproof tapes, although this does not provide protection from shock due to damage such as nails. Metalised bags or sheets connected to earth can also be used, relying on earth leakage (RCD) protection devices on the mains supply to provide protection from shock due to damage. Due to the additional protection layers, the heating efficiency of mains films can be reduced; however, the lack of a separate power supply makes these types of heating film attractive.

Low voltage
Low voltage heating films are usually designed to function at voltages within the SELV range and therefore present no danger of shock. Operation at such voltages means that the films have far fewer limitations on where and how they can be installed; however, higher currents are required for operation, requiring thicker wires and voltage-converting isolation transformers or power supplies to run from the local mains. Alternatively, low voltage heating films may be suitable for off-grid installations where battery, solar, wind or other power is available at the required voltage and current. Low voltage heating films can be particularly suitable for use in wet areas, where water ingress is a possibility. Low voltage heating films are also suitable for installation on walls to provide radiant heating. These films can include surface finishes designed to take plaster coatings directly onto the installed film. Due to the low voltage, they are safe from electric shock when damaged by nails, screws or other fittings put directly through the film. Breaks in the resistive film will affect the power output, and metal bridges could cause excessive localised heating due to increased current flow, so they should not be made across breaks in the film. Breaks in the busbars would also cause functional issues. It is also possible to install low voltage heating films connected directly to the mains, by connecting a sufficient number of sheets in series. However, in this case, insulation and shock protection similar to that for mains film installations would be required. Heating films installed in walls and ceilings provide heating predominantly by infrared or far infrared radiation, directly to the occupants of a room. Underfloor heating can also provide radiant heat, but floor coverings must be selected to provide infrared transmission and heat conduction.
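As a rough illustration of why low voltage films need heavier feed wiring and local power supplies, the short Python sketch below compares the supply current drawn by the same film power at mains and SELV voltages. The 600 W circuit power and the specific voltages are assumed example values, not figures from the text.

```python
# Illustrative sketch (assumed figures): supply current for the same resistive
# film power at mains voltage and at a SELV low voltage. Higher current at low
# voltage is why thicker conductors and isolation transformers are needed.

def supply_current(power_w: float, voltage_v: float) -> float:
    """Current in amperes for a resistive load: I = P / V."""
    return power_w / voltage_v

panel_power = 600.0  # W, assumed total film power on one circuit

for volts in (230.0, 120.0, 24.0):   # two common mains voltages and a SELV example
    amps = supply_current(panel_power, volts)
    print(f"{volts:>5.0f} V supply -> {amps:5.1f} A")

# At 24 V the same 600 W load draws 25 A, roughly ten times the mains current,
# so conductor cross-section and transformer rating dominate the design.
```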
Greenhouse gas emissions of heating film
The greenhouse gas emissions of heating films are split among manufacture, shipping, installation and operation.

Manufacture
Due to variations between the construction of heating films from different manufacturers, calculations for the greenhouse gas emissions to manufacture the products should be made for each manufacturer's film. Most heating films use plastic films, typically PET. These films can be produced from various sources, including recycling, which should be taken into account. Even where the plastic is recycled or derived from renewable sources, energy will be required to produce it and this must be accounted for. The resistance layer can be made in many different ways. Typically it is carbon based, with various other chemicals to make it adhere to the substrate sheet. All of these components must be accounted for when considering manufacturing emissions. The copper busbars, although small, will be a source of emissions. The copper may be printed as particles, which will require suitable chemicals to provide adhesion, or stuck on as a ribbon, which will also require suitable adhesive chemicals. Silver particles may also be used to provide additional conductivity between the copper and the resistance layer.

Shipping
Heating films are lightweight, typically 200 to 300 grams per square metre (0.04 to 0.06 lb/sq ft). Greenhouse gas emissions due to shipping will depend on the distance travelled and the transport method. Assuming the heating film travels a similar distance to other heating systems from manufacturer to installation, its comparatively low weight means that emissions to ship heating film will be comparatively lower.

Installation
Installation of heating films will consume almost no energy when installed underfloor. Where adhesives are used for installation, e.g. on walls and ceilings, the emissions involved in manufacturing and shipping the adhesive must also be included. Some adhesives can produce environmentally damaging emissions as they cure. Low-emission adhesives should be used, or these emissions must be taken into account when considering installation emissions. Wiring could also add emissions, where cable runs through walls and floors need to be provided.

Operating emissions
Since heating films are powered by electricity, no direct emissions will occur at the installation. The operating emissions will depend entirely on the electricity source, and emissions will occur wherever the electricity is generated. For example, where the heating film is supplied directly by nuclear, wind or solar energy, the operating greenhouse gas emissions will be effectively zero. Where the electricity is supplied by mains electricity in a region where the electricity is produced from coal, the overall operating greenhouse gas emissions will be high. However, most countries are aiming to reduce greenhouse gas emissions from electricity generation. In countries where these policies are in place, heating film supplied from the mains can be considered to produce lower greenhouse gas emissions than coal, oil or gas heating systems, and this will tend to improve over time.

Transparent heating film
Transparent heating films are a variant of the basic heating films but are built according to a different architecture. Each consists of a thin and flexible polymer film with a conductive optical coating.
These heating films can be used in museums as a replacement for heating tables.
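The operating-emissions argument in the section above reduces to multiplying electricity use by the carbon intensity of the supply. The sketch below is only an illustration: the annual consumption and the grid intensity figures are assumptions, not data from the source.

```python
# Hedged example: annual operating CO2 for film heating depends only on the
# electricity consumed and the grid's carbon intensity. All figures are assumed.

annual_heating_kwh = 2500.0            # kWh/year drawn by the heating film (assumed)

grid_intensity_kg_per_kwh = {          # assumed grid carbon intensities
    "coal-heavy grid": 0.90,
    "mixed grid": 0.40,
    "mostly renewable or nuclear": 0.03,
}

for grid, intensity in grid_intensity_kg_per_kwh.items():
    tonnes = annual_heating_kwh * intensity / 1000.0
    print(f"{grid:<28s}: {tonnes:.2f} t CO2 per year")
```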
cement kiln
Cement kilns are used for the pyroprocessing stage of manufacture of Portland and other types of hydraulic cement, in which calcium carbonate reacts with silica-bearing minerals to form a mixture of calcium silicates. Over a billion tonnes of cement are made per year, and cement kilns are the heart of this production process: their capacity usually defines the capacity of the cement plant. As the main energy-consuming and greenhouse-gas–emitting stage of cement manufacture, improvement of kiln efficiency has been the central concern of cement manufacturing technology. Emissions from cement kilns are a major source of greenhouse gas emissions, accounting for around 2.5% of anthropogenic carbon emissions worldwide.

The manufacture of cement clinker
A typical process of manufacture consists of three stages:

grinding a mixture of limestone and clay or shale to make a fine "rawmix" (see Rawmill);
heating the rawmix to sintering temperature (up to 1450 °C) in a cement kiln;
grinding the resulting clinker to make cement (see Cement mill).

In the second stage, the rawmix is fed into the kiln and gradually heated by contact with the hot gases from combustion of the kiln fuel. Successive chemical reactions take place as the temperature of the rawmix rises:

70 to 110 °C – free water is evaporated.
400 to 600 °C – clay-like minerals are decomposed into their constituent oxides, principally SiO2 and Al2O3; dolomite (CaMg(CO3)2) decomposes to calcium carbonate (CaCO3), MgO and CO2.
650 to 900 °C – calcium carbonate reacts with SiO2 to form belite (Ca2SiO4) (also known as C2S in the cement industry).
900 to 1050 °C – the remaining calcium carbonate decomposes to calcium oxide (CaO) and CO2.
1300 to 1450 °C – partial (20–30%) melting takes place, and belite reacts with calcium oxide to form alite (Ca3SiO5) (also known as C3S in the cement industry).

Alite is the characteristic constituent of Portland cement. Typically, a peak temperature of 1400–1450 °C is required to complete the reaction. The partial melting causes the material to aggregate into lumps or nodules, typically of diameter 1–10 mm. This is called clinker. The hot clinker next falls into a cooler, which recovers most of its heat and cools the clinker to around 100 °C, at which temperature it can be conveniently conveyed to storage. The cement kiln system is designed to accomplish these processes.

Early history
Portland cement clinker was first made (in 1825) in a modified form of the traditional static lime kiln. The basic, egg-cup shaped lime kiln was provided with a conical or beehive shaped extension to increase draught and thus obtain the higher temperature needed to make cement clinker. For nearly half a century, this design, and minor modifications, remained the only method of manufacture. The kiln was restricted in size by the strength of the chunks of rawmix: if the charge in the kiln collapsed under its own weight, the kiln would be extinguished. For this reason, beehive kilns never made more than 30 tonnes of clinker per batch. A batch took one week to turn around: a day to fill the kiln, three days to burn off, two days to cool, and a day to unload. Thus, a kiln would produce about 1500 tonnes per year.

Around 1885, experiments began on the design of continuous kilns. One design was the shaft kiln, similar in design to a blast furnace. Rawmix in the form of lumps and fuel were continuously added at the top, and clinker was continually withdrawn at the bottom. Air was blown through under pressure from the base to combust the fuel.
The shaft kiln had a brief period of use before it was eclipsed by the rotary kiln, but it had a limited renaissance from 1970 onward in China and elsewhere, when it was used for small-scale, low-tech plants in rural areas away from transport routes. Several thousand such kilns were constructed in China. A typical shaft kiln produces 100-200 tonnes per day. From 1885, trials began on the development of the rotary kiln, which today accounts for more than 95% of world production. The rotary kiln The rotary kiln consists of a tube made from steel plate, and lined with firebrick. The tube slopes slightly (1–4°) and slowly rotates on its axis at between 30 and 250 revolutions per hour. Rawmix is fed in at the upper end, and the rotation of the kiln causes it gradually to move downhill to the other end of the kiln. At the other end fuel, in the form of gas, oil, or pulverized solid fuel, is blown in through the "burner pipe", producing a large concentric flame in the lower part of the kiln tube. As material moves under the flame, it reaches its peak temperature, before dropping out of the kiln tube into the cooler. Air is drawn first through the cooler and then through the kiln for combustion of the fuel. In the cooler the air is heated by the cooling clinker, so that it may be 400 to 800 °C before it enters the kiln, thus causing intense and rapid combustion of the fuel. The earliest successful rotary kilns were developed in Pennsylvania around 1890, based on a design by Frederick Ransome, and were about 1.5 m in diameter and 15 m in length. Such a kiln made about 20 tonnes of clinker per day. The fuel, initially, was oil, which was readily available in Pennsylvania at the time. It was particularly easy to get a good flame with this fuel. Within the next 10 years, the technique of firing by blowing in pulverized coal was developed, allowing the use of the cheapest available fuel. By 1905, the largest kilns were 2.7 x 60 m in size, and made 190 tonnes per day. At that date, after only 15 years of development, rotary kilns accounted for half of world production. Since then, the capacity of kilns has increased steadily, and the largest kilns today produce around 10,000 tonnes per day. In contrast to static kilns, the material passes through quickly: it takes from 3 hours (in some old wet process kilns) to as little as 10 minutes (in short precalciner kilns). Rotary kilns run 24 hours a day, and are typically stopped only for a few days once or twice a year for essential maintenance. One of the main maintenance works on rotary kilns is tyre and roller surface machining and grinding works which can be done while the kiln works in full operation at speeds up to 3.5 rpm. This is an important discipline, because heating up and cooling down are long, wasteful, and damaging processes. Uninterrupted runs as long as 18 months have been achieved. The wet process and the dry process From the earliest times, two different methods of rawmix preparation were used: the mineral components were either dry-ground to form a flour-like powder, or were wet-ground with added water to produce a fine slurry with the consistency of paint, and with a typical water content of 40–45%.The wet process suffered the obvious disadvantage that, when the slurry was introduced into the kiln, a large amount of extra fuel was used in evaporating the water. Furthermore, a larger kiln was needed for a given clinker output, because much of the kiln's length was committed to the drying process. 
On the other hand, the wet process had a number of advantages. Wet grinding of hard minerals is usually much more efficient than dry grinding. When slurry is dried in the kiln, it forms a granular crumble that is ideal for subsequent heating in the kiln. In the dry process, it is very difficult to keep the fine powder rawmix in the kiln, because the fast-flowing combustion gases tend to blow it back out again. It became a practice to spray water into dry kilns in order to "damp down" the dry mix, and thus, for many years there was little difference in efficiency between the two processes, and the overwhelming majority of kilns used the wet process. By 1950, a typical large, wet process kiln, fitted with drying-zone heat exchangers, was 3.3 x 120 m in size, made 680 tonnes per day, and used about 0.25–0.30 tonnes of coal fuel for every tonne of clinker produced. Before the energy crisis of the 1970s put an end to new wet-process installations, kilns as large as 5.8 x 225 m in size were making 3000 tonnes per day.

An interesting footnote on the wet process history is that some manufacturers have in fact made very old wet process facilities profitable through the use of waste fuels. Plants that burn waste fuels enjoy a negative fuel cost (they are paid by industries needing to dispose of materials that have energy content and can be safely disposed of in the cement kiln thanks to its high temperatures and longer retention times). As a result, the inefficiency of the wet process is, for the manufacturer, an advantage. By locating waste burning operations at older wet process locations, higher fuel consumption actually equates to higher profits for the manufacturer, although it produces correspondingly greater emission of CO2. Manufacturers who think such emissions should be reduced are abandoning the use of the wet process.

Preheaters
In the 1930s, most significantly in Germany, the first attempts were made to redesign the kiln system to minimize waste of fuel. This led to two significant developments:

the grate preheater
the gas-suspension preheater.

Grate preheaters
The grate preheater consists of a chamber containing a chain-like high-temperature steel moving grate, attached to the cold end of the rotary kiln. A dry-powder rawmix is turned into hard pellets of 10–20 mm diameter in a nodulizing pan, with the addition of 10–15% water. The pellets are loaded onto the moving grate, and the hot combustion gases from the rear of the kiln are passed through the bed of pellets from beneath. This dries and partially calcines the rawmix very efficiently. The pellets then drop into the kiln. Very little powdery material is blown out of the kiln. Because the rawmix is damped in order to make pellets, this is referred to as a "semi-dry" process. The grate preheater is also applicable to the "semi-wet" process, in which the rawmix is made as a slurry, which is first de-watered with a high-pressure filter, and the resulting "filter-cake" is extruded into pellets, which are fed to the grate. In this case, the water content of the pellets is 17–20%. Grate preheaters were most popular in the 1950s and 60s, when a typical system would have a grate 28 m long and 4 m wide, and a rotary kiln of 3.9 x 60 m, making 1050 tonnes per day, using about 0.11–0.13 tonnes of coal fuel for every tonne of clinker produced. Systems up to 3000 tonnes per day were installed.

Gas-suspension preheaters
The key component of the gas-suspension preheater is the cyclone.
A cyclone is a conical vessel into which a dust-bearing gas-stream is passed tangentially. This produces a vortex within the vessel. The gas leaves the vessel through a co-axial "vortex-finder". The solids are thrown to the outside edge of the vessel by centrifugal action, and leave through a valve in the vertex of the cone. Cyclones were originally used to clean up the dust-laden gases leaving simple dry process kilns. If, instead, the entire feed of rawmix is forced to pass through the cyclone, a very efficient heat exchange takes place: the gas is efficiently cooled, hence producing less waste of heat to the atmosphere, and the raw mix is efficiently heated. The heat transfer efficiency is further increased if a number of cyclones are connected in series.

The number of cyclone stages used in practice varies from 1 to 6. Energy, in the form of fan-power, is required to draw the gases through the string of cyclones, and at a string of 6 cyclones, the cost of the added fan-power needed for an extra cyclone exceeds the efficiency advantage gained. It is normal to use the warm exhaust gas to dry the raw materials in the rawmill, and if the raw materials are wet, hot gas from a less efficient preheater is desirable. For this reason, the most commonly encountered suspension preheaters have 4 cyclones. The hot feed that leaves the base of the preheater string is typically 20% calcined, so the kiln has less subsequent processing to do, and can therefore achieve a higher specific output. Typical large systems installed in the early 1970s had cyclones 6 m in diameter, a rotary kiln of 5 x 75 m, making 2500 tonnes per day, using about 0.11–0.12 tonnes of coal fuel for every tonne of clinker produced.

A penalty paid for the efficiency of suspension preheaters is their tendency to block up. Salts, such as the sulfate and chloride of sodium and potassium, tend to evaporate in the burning zone of the kiln. They are carried back in vapor form, and re-condense when a sufficiently low temperature is encountered. Because these salts re-circulate back into the rawmix and re-enter the burning zone, a recirculation cycle establishes itself. A kiln with 0.1% chloride in the rawmix and clinker may have 5% chloride in the mid-kiln material. Condensation usually occurs in the preheater, and a sticky deposit of liquid salts glues dusty rawmix into a hard deposit, typically on surfaces against which the gas-flow is impacting. This can choke the preheater to the point that air-flow can no longer be maintained in the kiln. It then becomes necessary to manually break the build-up away. Modern installations often have automatic devices installed at vulnerable points to knock out build-up regularly.

An alternative approach is to "bleed off" some of the kiln exhaust at the kiln inlet where the salts are still in the vapor phase, and remove and discard the solids in it. This is usually termed an "alkali bleed" and it breaks the recirculation cycle. It can also be of advantage for cement quality reasons, since it reduces the alkali content of the clinker. The alkali content is a critical property of cement: cement with too high an alkali content can cause a harmful alkali–silica reaction (ASR) in concrete made with aggregates containing reactive amorphous silica. Hygroscopic, swelling sodium silica gel is formed inside the reactive aggregates, which develop characteristic internal fissures.
This expansive chemical reaction occurring in the concrete matrix generates high tensile stresses in the concrete and creates cracks that can ruin a concrete structure. However, in an alkali bleed the hot gas is run to waste, so the process is inefficient and increases kiln fuel consumption.

Precalciners
In the 1970s the precalciner was pioneered in Japan, and has subsequently become the equipment of choice for new large installations worldwide. The precalciner is a development of the suspension preheater. The philosophy is this: the amount of fuel that can be burned in the kiln is directly related to the size of the kiln. If part of the fuel necessary to burn the rawmix is burned outside the kiln, the output of the system can be increased for a given kiln size. Users of suspension preheaters found that output could be increased by injecting extra fuel into the base of the preheater. The logical development was to install a specially designed combustion chamber at the base of the preheater, into which pulverized coal is injected. This is referred to as an "air-through" precalciner, because the combustion air for both the kiln fuel and the calciner fuel all passes through the kiln. This kind of precalciner can burn up to 30% (typically 20%) of its fuel in the calciner. If more fuel were injected in the calciner, the extra amount of air drawn through the kiln would cool the kiln flame excessively. The feed is 40–60% calcined before it enters the rotary kiln.

The ultimate development is the "air-separate" precalciner, in which the hot combustion air for the calciner arrives in a duct directly from the cooler, bypassing the kiln. Typically, 60–75% of the fuel is burned in the precalciner. In these systems, the feed entering the rotary kiln is 100% calcined. The kiln has only to raise the feed to sintering temperature. In theory the maximum efficiency would be achieved if all the fuel were burned in the preheater, but the sintering operation involves partial melting and nodulization to make clinker, and the rolling action of the rotary kiln remains the most efficient way of doing this. Large modern installations typically have two parallel strings of 4 or 5 cyclones, with one attached to the kiln and the other attached to the precalciner chamber. A rotary kiln of 6 x 100 m makes 8,000–10,000 tonnes per day, using about 0.10–0.11 tonnes of coal fuel for every tonne of clinker produced. The kiln is dwarfed by the massive preheater tower and cooler in these installations. Such a kiln produces 3 million tonnes of clinker per year, and consumes 300,000 tonnes of coal. A diameter of 6 m appears to be the limit of size of rotary kilns, because the flexibility of the steel shell becomes unmanageable at or above this size, and the firebrick lining tends to fail when the kiln flexes.

A particular advantage of the air-separate precalciner is that a large proportion, or even 100%, of the alkali-laden kiln exhaust gas can be taken off as alkali bleed (see above). Because this accounts for only 40% of the system heat input, it can be done with lower heat wastage than in a simple suspension preheater bleed. Because of this, air-separate precalciners are now always prescribed when only high-alkali raw materials are available at a cement plant. The accompanying figures show the movement towards the use of the more efficient processes in North America (for which data is readily available). But the average output per kiln in, for example, Thailand is twice that in North America.
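The specific coal consumptions quoted in the preceding sections (wet kilns around 0.25–0.30, grate preheaters 0.11–0.13, suspension preheaters 0.11–0.12 and precalciner kilns 0.10–0.11 tonnes of coal per tonne of clinker) can be turned into rough fuel-related CO2 figures. The sketch below uses the midpoints of those ranges; the 2.4 t CO2 per tonne of coal emission factor is an assumed typical value for bituminous coal, not a figure from the article.

```python
# Illustrative conversion of the quoted coal consumptions into fuel-related CO2
# per tonne of clinker. The emission factor is an assumption, not source data.

CO2_PER_TONNE_COAL = 2.4     # t CO2 / t coal, assumed typical value

coal_use = {                 # t coal / t clinker, midpoints of the quoted ranges
    "wet process kiln (c. 1950)":      0.275,
    "grate (semi-dry) preheater kiln": 0.120,
    "suspension preheater kiln":       0.115,
    "precalciner kiln":                0.105,
}

for process, coal in coal_use.items():
    print(f"{process:<32s}: ~{coal * CO2_PER_TONNE_COAL:.2f} t fuel CO2 / t clinker")
```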
Ancillary equipment
Essential equipment in addition to the kiln tube and the preheater includes:

Cooler
Fuel mills
Fans
Exhaust gas cleaning equipment.

Coolers
Early systems used rotary coolers, which were rotating cylinders similar to the kiln, into which the hot clinker dropped. The combustion air was drawn up through the cooler as the clinker moved down, cascading through the air stream. In the 1920s, satellite coolers became common and remained in use until recently. These consist of a set (typically 7–9) of tubes attached to the kiln tube. They have the advantage that they are sealed to the kiln, and require no separate drive. From about 1930, the grate cooler was developed. This consists of a perforated grate through which cold air is blown, enclosed in a rectangular chamber. A bed of clinker up to 0.5 m deep moves along the grate. These coolers have two main advantages. First, they cool the clinker rapidly, which is desirable from a clinker quality point of view: rapid cooling prevents alite (C3S), which is thermodynamically unstable below 1250 °C, from reverting to belite (C2S) and free CaO (C) on slow cooling, C3S → C2S + C + heat (an exothermic reaction favored by the heat release); as alite is responsible for the early strength development in cement setting and hardening, the highest possible alite content of the clinker is desirable. Second, because they do not rotate, hot air can be ducted out of them for use in fuel drying, or for use as precalciner combustion air. The latter advantage means that they have become the only type used in modern systems.

Fuel mills
Fuel systems are divided into two categories:

Direct firing
Indirect firing

In direct firing, the fuel is fed at a controlled rate to the fuel mill, and the fine product is immediately blown into the kiln. The advantage of this system is that it is not necessary to store the hazardous ground fuel: it is used as soon as it is made. For this reason it was the system of choice for older kilns. A disadvantage is that the fuel mill has to run all the time: if it breaks down, the kiln has to stop if no backup system is available. In indirect firing, the fuel is ground by an intermittently run mill, and the fine product is stored in a silo of sufficient size to supply the kiln through fuel mill stoppage periods. The fine fuel is metered out of the silo at a controlled rate and blown into the kiln. This method is now favoured for precalciner systems, because both the kiln and the precalciner can be fed with fuel from the same system. Special techniques are required to store the fine fuel safely, and coals with high volatiles are normally milled in an inert atmosphere (e.g. CO2).

Fans
A large volume of gases has to be moved through the kiln system. Particularly in suspension preheater systems, a high degree of suction has to be developed at the exit of the system to drive this. Fans are also used to force air through the cooler bed, and to propel the fuel into the kiln. Fans account for most of the electric power consumed in the system, typically amounting to 10–15 kW·h per tonne of clinker.

Gas cleaning
The exhaust gases from a modern kiln typically amount to 2 tonnes (or 1500 cubic metres at STP) per tonne of clinker made. The gases carry a large amount of dust, typically 30 grams per cubic metre. Environmental regulations specific to different countries require that this be reduced to (typically) 0.1 gram per cubic metre, so dust capture needs to be at least 99.7% efficient. Methods of capture include electrostatic precipitators and bag-filters.
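The gas-cleaning requirement just described can be checked with a line of arithmetic: with an inlet dust load of 30 g/m3 and a limit of 0.1 g/m3, the needed collection efficiency follows directly. Expressing the captured dust per tonne of clinker (using the 1500 m3 exhaust volume quoted above) is an added illustration, not a figure from the text.

```python
# Worked example using the figures quoted above for kiln exhaust gas cleaning.

inlet_dust = 30.0        # g dust per m3 of exhaust gas (typical figure from the text)
limit = 0.1              # g per m3, typical regulatory limit quoted in the text
gas_per_tonne = 1500.0   # m3 of exhaust gas (at STP) per tonne of clinker

efficiency = 1.0 - limit / inlet_dust
dust_captured_kg = (inlet_dust - limit) * gas_per_tonne / 1000.0

print(f"Required capture efficiency: {efficiency:.1%}")            # about 99.7%
print(f"Dust captured per tonne of clinker: ~{dust_captured_kg:.0f} kg")
```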
See also the section on cement kiln emissions below.

Kiln fuels
Fuels that have been used for primary firing include coal, petroleum coke, heavy fuel oil, natural gas, landfill off-gas and oil refinery flare gas. Because the clinker is brought to its peak temperature mainly by radiant heat transfer, and a bright (i.e. high emissivity) and hot flame is essential for this, high-carbon fuels such as coal, which produce a luminous flame, are often preferred for kiln firing. Where it is cheap and readily available, natural gas is also sometimes used. However, because it produces a much less luminous flame, it tends to result in lower kiln output.

Alternative fuels
In addition to these primary fuels, various combustible waste materials have been fed to kilns. These alternative fuels (AF) include:

Used motor-vehicle tires
Sewage sludge
Agricultural waste
Landfill gas
Refuse-derived fuel (RDF)
Chemical and other hazardous waste

Cement kilns are an attractive way of disposing of hazardous materials, because of:

the temperatures in the kiln, which are much higher than in other combustion systems (e.g. incinerators),
the alkaline conditions in the kiln, afforded by the high-calcium rawmix, which can absorb acidic combustion products,
the ability of the clinker to absorb heavy metals into its structure.

A notable example is the use of scrapped motor-vehicle tires, which are very difficult to dispose of by other means. Whole tires are commonly introduced in the kiln by rolling them into the upper end of a preheater kiln, or by dropping them through a slot midway along a long wet kiln. In either case, the high gas temperatures (1000–1200 °C) cause almost instantaneous, complete and smokeless combustion of the tire. Alternatively, tires are chopped into 5–10 mm chips, in which form they can be injected into a precalciner combustion chamber. The steel and zinc in the tires become chemically incorporated into the clinker, partially replacing iron that must otherwise be fed as raw material. A high level of monitoring of both the fuel and its combustion products is necessary to maintain safe operation.

For maximum kiln efficiency, high quality conventional fuels are the best choice. However, burning any fuels, especially hazardous waste materials, can result in toxic emissions. Thus, it is necessary for operators of cement kilns to closely monitor many process variables to ensure emissions are continuously minimized. In the U.S., cement kilns are regulated as a major source of air pollution by the EPA and must meet stringent air pollution control requirements.

Kiln control
The objective of kiln operation is to make clinker with the required chemical and physical properties, at the maximum rate that the size of the kiln will allow, while meeting environmental standards, at the lowest possible operating cost. The kiln is very sensitive to control strategies, and a poorly run kiln can easily double cement plant operating costs. Formation of the desired clinker minerals involves heating the rawmix through the temperature stages mentioned above.
The finishing transformation that takes place in the hottest part of the kiln, under the flame, is the reaction of belite (C2S = 2CaO·SiO2, or Ca2SiO4) with calcium oxide to form alite (C3S = 3CaO·SiO2, or Ca3SiO5):

Ca2SiO4 + CaO → Ca3SiO5

Also abbreviated in the cement chemist notation (CCN) as:

C2S + C → C3S (endothermic reaction favored by a higher temperature)

Tricalcium silicate (C3S, alite, Ca3SiO5) is thermodynamically unstable below 1250 °C, but can be preserved in a metastable state at room temperature by fast cooling (quenching): on slow cooling it tends to revert to belite (Ca2SiO4) and CaO. If the reaction is incomplete, excessive amounts of free calcium oxide remain in the clinker. Regular measurement of the free CaO content is used as a means of tracking the clinker quality. As a parameter in kiln control, free CaO data is somewhat ineffective because, even with fast automated sampling and analysis, the data, when it arrives, may be 10 minutes "out of date", and more immediate data must be used for minute-to-minute control.

Conversion of belite to alite requires partial melting, the resulting liquid being the solvent in which the reaction takes place. The amount of liquid, and hence the speed of the finishing reaction, is related to temperature. To meet the clinker quality objective, the most obvious control is that the clinker should reach a peak temperature such that the finishing reaction takes place to the required degree. A further reason to maintain constant liquid formation in the hot end of the kiln is that the sintering material forms a dam that prevents the cooler upstream feed from flooding out of the kiln. The feed in the calcining zone, because it is a powder evolving carbon dioxide, is extremely fluid. Cooling of the burning zone, and loss of unburned material into the cooler, is called "flushing", and in addition to causing lost production can cause massive damage.

However, for efficient operation, steady conditions need to be maintained throughout the whole kiln system. The feed at each stage must be at a temperature such that it is "ready" for processing in the next stage. To ensure this, the temperature of both feed and gas must be optimized and maintained at every point. The external controls available to achieve this are few:

Feed rate: this defines the kiln output
Rotary kiln speed: this controls the rate at which the feed moves through the kiln tube
Fuel injection rate: this controls the rate at which the "hot end" of the system is heated
Exhaust fan speed or power: this controls gas flow, and the rate at which heat is drawn from the "hot end" of the system to the "cold end"

In the case of precalciner kilns, further controls are available:

Independent control of fuel to kiln and calciner
Independent fan controls where there are multiple preheater strings.

The independent use of fan speed and fuel rate is constrained by the fact that there must always be sufficient oxygen available to burn the fuel, and in particular, to burn carbon to carbon dioxide. If carbon monoxide is formed, this represents a waste of fuel, and also indicates reducing conditions within the kiln which must be avoided at all costs since it causes destruction of the clinker mineral structure. For this reason, the exhaust gas is continually analyzed for O2, CO, NO and SO2. The assessment of the clinker peak temperature has always been problematic.
Contact temperature measurement is impossible because of the chemically aggressive and abrasive nature of the hot clinker, and optical methods such as infrared pyrometry are difficult because of the dust and fume-laden atmosphere in the burning zone. The traditional method of assessment was to view the bed of clinker and deduce the amount of liquid formation by experience. As more liquid forms, the clinker becomes stickier, and the bed of material climbs higher up the rising side of the kiln. It is usually also possible to assess the length of the zone of liquid formation, beyond which powdery "fresh" feed can be seen. Cameras, with or without infrared measurement capability, are mounted on the kiln hood to facilitate this. On many kilns, the same information can be inferred from the kiln motor power drawn, since sticky feed riding high on the kiln wall increases the eccentric turning load of the kiln. Further information can be obtained from the exhaust gas analyzers. The formation of NO from nitrogen and oxygen takes place only at high temperatures, and so the NO level gives an indication of the combined feed and flame temperature. SO2 is formed by thermal decomposition of calcium sulfate in the clinker, and so also gives an indication of clinker temperature. Modern computer control systems usually make a "calculated" temperature, using contributions from all these information sources, and then set about controlling it. As an exercise in process control, kiln control is extremely challenging, because of multiple inter-related variables, non-linear responses, and variable process lags. Computer control systems were first tried in the early 1960s, initially with poor results due mainly to poor process measurements. Since 1990, complex high-level supervisory control systems have been standard on new installations. These operate using expert system strategies, that maintain a "just sufficient" burning zone temperature, below which the kiln's operating condition will deteriorate catastrophically, thus requiring rapid-response, "knife-edge" control. Cement kiln emissions Emissions from cement works are determined both by continuous and discontinuous measuring methods, which are described in corresponding national guidelines and standards. Continuous measurement is primarily used for dust (particulates), NOx (nitrogen oxides) and SO2 (sulfur dioxide), while the remaining parameters relevant pursuant to ambient pollution legislation are usually determined discontinuously by individual measurements. The following descriptions of emissions refer to modern kiln plants based on dry process technology. Carbon dioxide During the clinker burning process CO2 is emitted. CO2 accounts for the main share of these gases. CO2 emissions are both raw material-related and energy-related. Raw material-related emissions are produced during limestone decarbonation (CaCO3 → CaO + CO2) and account for about half of total CO2 emissions. Use of fuels with higher hydrogen content than coal and use of alternative fuels can reduce net greenhouse gas emissions. Dust To manufacture 1 t of Portland cement, about 1.5 to 1.7 t raw materials, 0.1 t coal and 1 t clinker (besides other cement constituents and sulfate agents) must be ground to dust fineness during production. In this process, the steps of raw material processing, fuel preparation, clinker burning and cement grinding constitute major emission sources for particulate components. 
While particulate emissions of up to 3,000 mg/m3 were measured leaving the stack of cement rotary kiln plants as recently as in the 1960s, legal limits are typically 30 mg/m3 today, and much lower levels are achievable. Nitrogen oxides (NOx) The clinker burning process is a high-temperature process resulting in the formation of nitrogen oxides (NOx). The amount formed is directly related to the main flame temperature (typically 1850–2000 °C). Nitrogen monoxide (NO) accounts for about 95%, and nitrogen dioxide (NO2) for about 5% of this compound present in the exhaust gas of rotary kiln plants. As most of the NO is converted to NO2 in the atmosphere, emissions are given as NO2 per cubic metre exhaust gas. Without reduction measures, process-related NOx contents in the exhaust gas of rotary kiln plants would in most cases considerably exceed the specifications of e.g. European legislation for waste burning plants (0.50 g/m3 for new plants and 0.80 g/m3 for existing plants). Reduction measures are aimed at smoothing and optimising plant operation. Technically, staged combustion and Selective Non-Catalytic NO Reduction (SNCR) are applied to cope with the emission limit values. High process temperatures are required to convert the raw material mix to Portland cement clinker. Kiln charge temperatures in the sintering zone of rotary kilns range at around 1450 °C. To reach these, flame temperatures of about 2000 °C are necessary. For reasons of clinker quality the burning process takes place under oxidising conditions, under which the partial oxidation of the molecular nitrogen in the combustion air resulting in the formation of nitrogen monoxide (NO) dominates. This reaction is also called thermal NO formation. At the lower temperatures prevailing in a precalciner, however, thermal NO formation is negligible: here, the nitrogen bound in the fuel can result in the formation of what is known as fuel-related NO. Staged combustion is used to reduce NO: calciner fuel is added with insufficient combustion air. This causes CO to form. The CO then reduces the NO into molecular nitrogen: 2 CO + 2 NO → 2 CO2 + N2.Hot tertiary air is then added to oxidize the remaining CO. Sulfur dioxide (SO2) Sulfur is input into the clinker burning process via raw materials and fuels. Depending on their origin, the raw materials may contain sulfur bound as sulfide or sulfate. Higher SO2 emissions by rotary kiln systems in the cement industry are often attributable to the sulfides contained in the raw material, which become oxidised to form SO2 at the temperatures between 370 °C and 420 °C prevailing in the kiln preheater. Most of the sulfides are pyrite or marcasite contained in the raw materials. Given the sulfide concentrations found e.g. in German raw material deposits, SO2 emission concentrations can total up to 1.2 g/m3 depending on the site location. In some cases, injected calcium hydroxide is used to lower SO2 emissions. The sulfur input with the fuels is completely converted to SO2 during combustion in the rotary kiln. In the preheater and the kiln, this SO2 reacts to form alkali sulfates, which are bound in the clinker, provided that oxidizing conditions are maintained in the kiln. Carbon monoxide (CO) and total carbon The exhaust gas concentrations of CO and organically bound carbon are a yardstick for the burn-out rate of the fuels utilised in energy conversion plants, such as power stations. 
By contrast, the clinker burning process is a material conversion process that must always be operated with excess air for reasons of clinker quality. In concert with long residence times in the high-temperature range, this leads to complete fuel burn-up. The emissions of CO and organically bound carbon during the clinker burning process are caused by the small quantities of organic constituents input via the natural raw materials (remnants of organisms and plants incorporated in the rock in the course of geological history). These are converted during kiln feed preheating and become oxidized to form CO and CO2. In this process, small portions of organic trace gases (total organic carbon) are formed as well. In the case of the clinker burning process, the content of CO and organic trace gases in the clean gas therefore may not be directly related to combustion conditions. The amount of released CO2 is about half a ton per ton of clinker.

Dioxins and furans (PCDD/F)
Rotary kilns of the cement industry and classic incineration plants mainly differ in terms of the combustion conditions prevailing during clinker burning. Kiln feed and rotary kiln exhaust gases are conveyed in counter-flow and mixed thoroughly. Thus, temperature distribution and residence time in rotary kilns afford particularly favourable conditions for organic compounds, introduced either via fuels or derived from them, to be completely destroyed. For that reason, only very low concentrations of polychlorinated dibenzo-p-dioxins and dibenzofurans (colloquially "dioxins and furans") can be found in the exhaust gas from cement rotary kilns.

Polychlorinated biphenyls (PCB)
The emission behaviour of PCB is comparable to that of dioxins and furans. PCB may be introduced into the process via alternative raw materials and fuels. The rotary kiln systems of the cement industry destroy these trace components virtually completely.

Polycyclic aromatic hydrocarbons (PAH)
PAHs (according to EPA 610) in the exhaust gas of rotary kilns usually appear at a distribution dominated by naphthalene, which accounts for a share of more than 90% by mass. The rotary kiln systems of the cement industry destroy the PAHs input via fuels virtually completely. Emissions are generated from organic constituents in the raw material.

Benzene, toluene, ethylbenzene, xylene (BTEX)
As a rule, benzene, toluene, ethylbenzene and xylene are present in the exhaust gas of rotary kilns in a characteristic ratio. BTEX is formed during the thermal decomposition of organic raw material constituents in the preheater.

Gaseous inorganic chlorine compounds (HCl)
Chlorides are minor additional constituents contained in the raw materials and fuels of the clinker burning process. They are released when the fuels are burnt or the kiln feed is heated, and primarily react with the alkalis from the kiln feed to form alkali chlorides. These compounds, which are initially vaporous, condense on the kiln feed or the kiln dust at temperatures between 700 °C and 900 °C, subsequently re-enter the rotary kiln system and evaporate again. This cycle in the area between the rotary kiln and the preheater can result in coating formation. A bypass at the kiln inlet allows effective reduction of alkali chloride cycles and diminishes coating build-up problems. During the clinker burning process, gaseous inorganic chlorine compounds are either not emitted at all or in very small quantities only.
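The "about half a ton per ton of clinker" figure quoted above for raw material-related CO2 follows from the stoichiometry of the decarbonation reaction CaCO3 → CaO + CO2. The sketch below assumes a typical clinker CaO content of about 65%, which is an assumption for illustration rather than a figure from this article.

```python
# Back-of-envelope stoichiometry for calcination: CaCO3 -> CaO + CO2.

M_CAO, M_CO2 = 56.08, 44.01             # molar masses, g/mol

co2_per_kg_cao = M_CO2 / M_CAO          # kg CO2 released per kg CaO formed (~0.79)
cao_fraction_in_clinker = 0.65          # assumed mass fraction of CaO in clinker

co2_per_tonne_clinker = co2_per_kg_cao * cao_fraction_in_clinker
print(f"Decarbonation CO2: ~{co2_per_tonne_clinker:.2f} t per tonne of clinker")
# ~0.51 t/t, consistent with the 'about half a ton per ton of clinker' figure.
```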
Gaseous inorganic fluorine compounds (HF)
Of the fluorine present in rotary kilns, 90 to 95% is bound in the clinker, and the remainder is bound with dust in the form of calcium fluoride, which is stable under the conditions of the burning process. Ultra-fine dust fractions that pass through the measuring gas filter may give the impression of low contents of gaseous fluorine compounds in rotary kiln systems of the cement industry.

Trace elements and heavy metals
The emission behaviour of the individual elements in the clinker burning process is determined by the input scenario, the behaviour in the plant and the precipitation efficiency of the dust collection device. The trace elements (e.g., heavy metals) introduced into the burning process via the raw materials and fuels may evaporate completely or partially in the hot zones of the preheater and/or rotary kiln depending on their volatility, react with the constituents present in the gas phase, and condense on the kiln feed in the cooler sections of the kiln system. Depending on the volatility and the operating conditions, this may result in the formation of cycles that are either restricted to the kiln and the preheater or include the combined drying and grinding plant as well. Trace elements from the fuels initially enter the combustion gases, but are emitted to an extremely small extent only, owing to the retention capacity of the kiln and the preheater.

Under the conditions prevailing in the clinker burning process, non-volatile elements (e.g. arsenic, vanadium, nickel) are completely bound in the clinker. Elements such as lead and cadmium preferentially react with the excess chlorides and sulfates in the section between the rotary kiln and the preheater, forming volatile compounds. Owing to the large surface area available, these compounds condense on the kiln feed particles at temperatures between 700 °C and 900 °C. In this way, the volatile elements accumulated in the kiln-preheater system are precipitated again in the cyclone preheater, remaining almost completely in the clinker. Thallium (as the chloride) condenses in the upper zone of the cyclone preheater at temperatures between 450 °C and 500 °C. As a consequence, a cycle can be formed between preheater, raw material drying and exhaust gas purification. Mercury and its compounds are not precipitated in the kiln and the preheater. They condense along the exhaust gas route due to the cooling of the gas and are partially adsorbed by the raw material particles. This portion is precipitated in the kiln exhaust gas filter. Owing to trace element behaviour during the clinker burning process and the high precipitation efficiency of the dust collection devices, trace element emission concentrations are on a low overall level.
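Combining the two CO2 sources discussed in this section gives a rough total per tonne of clinker and shows how the raw-material share ("about half" in the carbon dioxide section above) shifts with kiln efficiency. The decarbonation figure is the one derived above; the coal consumptions and the 2.4 t CO2 per tonne of coal emission factor are assumptions.

```python
# Rough combined estimate; emission factor and coal consumptions are assumptions.

RAW_MATERIAL_CO2 = 0.51        # t CO2 / t clinker from decarbonation (derived above)
CO2_PER_TONNE_COAL = 2.4       # t CO2 / t coal, assumed

for label, coal_use in [("older wet-process kiln", 0.275),
                        ("modern precalciner kiln", 0.105)]:
    fuel_co2 = coal_use * CO2_PER_TONNE_COAL
    total = RAW_MATERIAL_CO2 + fuel_co2
    print(f"{label:<24s}: total ~{total:.2f} t CO2/t clinker, "
          f"raw-material share ~{RAW_MATERIAL_CO2 / total:.0%}")
```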
alternative fuel vehicle
An alternative fuel vehicle is a motor vehicle that runs on alternative fuel rather than traditional petroleum fuels (petrol or petrodiesel). The term also refers to any technology (e.g. electric cars, hybrid electric vehicles, solar-powered vehicles) powering an engine that does not solely involve petroleum. Because of a combination of factors, such as environmental and health concerns including climate change and air pollution, high oil prices and the potential for peak oil, the development of cleaner alternative fuels and advanced power systems for vehicles has become a high priority for many governments and vehicle manufacturers around the world.

Vehicle engines powered by gasoline/petrol first emerged in the 1860s and 1870s; they took until the 1930s to completely dominate the original "alternative" engines driven by steam (18th century), by gases (early 19th century), or by electricity (c. 1830s). Hybrid electric vehicles such as the Toyota Prius are not actually alternative fuel vehicles, but through advanced technologies in the electric battery and motor/generator, they make more efficient use of petroleum fuel. Other research-and-development efforts in alternative forms of power focus on developing all-electric and fuel cell vehicles, and even on the stored energy of compressed air.

An environmental analysis of the impacts of various vehicle fuels extends beyond just operating efficiency and emissions, especially if a technology comes into wide use. A life-cycle assessment of a vehicle involves production and post-use considerations. In general, the lifecycle greenhouse gas emissions of battery-electric vehicles are lower than emissions from hydrogen, PHEV, hybrid, compressed natural gas, gasoline, and diesel vehicles.

Current deployments
As of 2019, there were more than 1.49 billion motor vehicles on the world's roads, compared with approximately 159 million alternative fuel and advanced technology vehicles that had been sold or converted worldwide by the end of 2022, consisting of:

Over 65 million flex-fuel automobiles, motorcycles and light duty trucks by the end of 2021, led by Brazil with 38.3 million and the United States with 27 million.
Over 26 million plug-in electric vehicles, 70% of which were battery electric vehicles (BEVs) and 30% of which were plug-in hybrids (PHEVs). China had 13.8 million units, Europe 7.8 million, and the United States 3 million. In 2022, annual sales exceeded 10 million vehicles, up 55% relative to 2021.
24.9 million LPG-powered vehicles by December 2013, led by Turkey with 3.93 million, Poland (2.75 million), and South Korea (2.4 million).
24.5 million natural gas vehicles by the end of 2017, led by China (5.35 million), followed by Iran (4.0 million), India (3.05 million), Pakistan (3 million), Argentina (2.3 million), and Brazil (1.78 million). In 2015, 2.4 million units were sold.
Over 13 million hybrid electric vehicles as of 2019.
5.7 million neat-ethanol-only light vehicles built in Brazil since 1979, with 2.4 to 3.0 million vehicles still in use by 2003, and 1.22 million units as of December 2011.
70,200 fuel cell electric vehicles (FCEVs) powered with hydrogen by the end of 2022. South Korea had 29,500 units, the United States 15,000, China 11,200, and Japan 7,700. In 2022, annual sales amounted to 15,391 vehicles. Hydrogen FCEV sales as a percentage of market share among electric vehicles (BEVs, PHEVs and FCEVs) declined for the 6th consecutive year.
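A couple of the headline figures above can be combined into quick indicative shares. The only inputs are numbers already quoted in the list; note that the fleet total is for 2019 while the alternative-fuel count is for the end of 2022, so the share is only approximate.

```python
# Quick arithmetic on the deployment figures quoted above.

total_vehicles = 1.49e9          # motor vehicles on the road, 2019 figure
alt_fuel_vehicles = 159e6        # alternative fuel / advanced technology vehicles, end of 2022
plug_in = 26e6                   # plug-in electric vehicles
bev_share, phev_share = 0.70, 0.30

print(f"Alternative-fuel share of the global fleet: ~{alt_fuel_vehicles / total_vehicles:.1%}")
print(f"Battery-electric vehicles: ~{plug_in * bev_share / 1e6:.1f} million")
print(f"Plug-in hybrids:           ~{plug_in * phev_share / 1e6:.1f} million")
```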
Mainstream commercial technologies

Flexible fuel
A flexible-fuel vehicle (FFV) or dual-fuel vehicle (DFV) is an alternative fuel automobile or light duty truck with a multifuel engine that can use more than one fuel, usually mixed in the same tank, with the blend burned in the combustion chamber together. These vehicles are colloquially called flex-fuel, or flexifuel in Europe, or just flex in Brazil. FFVs are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks. The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with the major markets concentrated in the United States, Brazil, Sweden, and some other European countries. Ethanol flexible-fuel vehicles have standard gasoline engines that are capable of running on ethanol and gasoline mixed in the same tank. These mixtures have "E" numbers which describe the percentage of ethanol in the mixture; for example, E85 is 85% ethanol and 15% gasoline. (See common ethanol fuel mixtures for more information.) Though technology exists to allow ethanol FFVs to run on any mixture up to E100, in the U.S. and Europe flex-fuel vehicles are optimized to run on E85. This limit is set to avoid cold starting problems during very cold weather.

Over 65 million flex-fuel automobiles, motorcycles and light duty trucks had been sold worldwide by the end of 2021, led by Brazil with 38.3 million and the United States with 27 million. Other markets were Canada (1.6 million by 2014) and Sweden (243,100 through December 2014). The Brazilian flex-fuel fleet includes over 4 million flexible-fuel motorcycles produced from 2009 through March 2015. In Brazil, 65% of flex-fuel car owners were using ethanol fuel regularly in 2009, while the actual number of American FFVs being run on E85 is much lower; surveys conducted in the U.S. have found that 68% of American flex-fuel car owners were not aware they owned an E85 flex-fuel vehicle. There have been claims that American automakers are motivated to produce flex-fuel vehicles due to a loophole in the Corporate Average Fuel Economy (CAFE) requirements, which gives the automaker a "fuel economy credit" for every flex-fuel vehicle sold, whether or not the vehicle is actually fueled with E85 in regular use. This loophole allegedly allows the U.S. auto industry to meet CAFE fuel economy targets not by developing more fuel-efficient models, but by spending between US$100 and US$200 extra per vehicle to produce a certain number of flex-fuel models, enabling them to continue selling less fuel-efficient vehicles such as SUVs, which netted higher profit margins than smaller, more fuel-efficient cars.

Plug-in electric

Battery-electric
Battery electric vehicles (BEVs), also known as all-electric vehicles (AEVs), are electric vehicles whose main energy storage is in the chemical energy of batteries. BEVs are the most common form of what is defined by the California Air Resources Board (CARB) as a zero emission vehicle (ZEV), because they produce no tailpipe emissions at the point of operation. The electrical energy carried on board a BEV to power the motors is obtained from a variety of battery chemistries arranged into battery packs. For additional range, genset trailers or pusher trailers are sometimes used, forming a type of hybrid vehicle. Batteries used in electric vehicles include "flooded" lead-acid, absorbed glass mat, NiCd, nickel metal hydride, Li-ion, Li-poly and zinc-air batteries.
Attempts at building viable, modern battery-powered electric vehicles began in the 1950s with the introduction of the first modern (transistor-controlled) electric car, the Henney Kilowatt, even though the concept had been on the market since about 1890. Despite the poor sales of the early battery-powered vehicles, development of various battery-powered vehicles continued through the mid-1990s, with such models as the General Motors EV1 and the Toyota RAV4 EV. Battery powered cars had primarily used lead-acid batteries and NiMH batteries. Lead-acid batteries' recharge capacity is considerably reduced if they are discharged beyond 75% on a regular basis, making them a less-than-ideal solution. NiMH batteries are a better choice, but are considerably more expensive than lead-acid. Lithium-ion battery powered vehicles such as the Venturi Fetish and the Tesla Roadster have demonstrated excellent performance and range, and lithium-ion batteries are used in most mass-production models launched since December 2010.

Expanding on the traditional lithium-ion batteries predominantly used in today's battery electric vehicles is an emerging line of research that is paving the way to using a carbon fiber structure (a vehicle body or chassis in this case) as a structural battery. Experiments being conducted at the Chalmers University of Technology in Sweden are showing that, when coupled with lithium-ion insertion mechanisms, an enhanced carbon fiber structure can have electromechanical properties. This means that the carbon fiber structure itself can act as its own battery/power source for propulsion. This would negate the need for traditional heavy battery banks, reducing weight and therefore increasing fuel efficiency.

As of December 2015, several neighborhood electric vehicles, city electric cars and series production highway-capable electric cars and utility vans had been made available for retail sales, including the Tesla Roadster, GEM cars, Buddy, Mitsubishi i MiEV and its rebadged versions Peugeot iOn and Citroën C-Zero, Chery QQ3 EV, JAC J3 EV, Nissan Leaf, Smart ED, Mia electric, BYD e6, Renault Kangoo Z.E., Bolloré Bluecar, Renault Fluence Z.E., Ford Focus Electric, BMW ActiveE, Renault Twizy, Tesla Model S, Honda Fit EV, RAV4 EV second generation, Renault Zoe, Mitsubishi Minicab MiEV, Roewe E50, Chevrolet Spark EV, Fiat 500e, BMW i3, Volkswagen e-Up!, Nissan e-NV200, Volkswagen e-Golf, Mercedes-Benz B-Class Electric Drive, Kia Soul EV, BYD e5, and Tesla Model X. The world's all-time top selling highway legal electric car is the Nissan Leaf, released in December 2010, with global sales of more than 250,000 units through December 2016. The Tesla Model S, released in June 2012, ranks second with global sales of over 158,000 cars delivered as of December 2016. The Renault Kangoo Z.E. utility van is the leader of the light-duty all-electric segment with global sales of 25,205 units through December 2016.

Plug-in hybrid
Plug-in hybrid electric vehicles (PHEVs) use batteries to power an electric motor, as well as another fuel, such as gasoline or diesel, to power an internal combustion engine or other propulsion source. PHEVs can charge their batteries through charging equipment and regenerative braking. Using electricity from the grid to run the vehicle some or all of the time reduces operating costs and fuel use, relative to conventional vehicles. Until 2010, most plug-in hybrids on the road in the U.S.
were conversions of conventional hybrid electric vehicles; the most prominent PHEVs were conversions of 2004 or later Toyota Prius models, which had plug-in charging and additional batteries added and their electric-only range extended. Chinese battery manufacturer and automaker BYD Auto released the F3DM to the Chinese fleet market in December 2008 and began sales to the general public in Shenzhen in March 2010. General Motors began deliveries of the Chevrolet Volt in the U.S. in December 2010. Deliveries to retail customers of the Fisker Karma began in the U.S. in November 2011. During 2012, the Toyota Prius Plug-in Hybrid, Ford C-Max Energi, and Volvo V60 Plug-in Hybrid were released. The following models were launched between 2013 and 2015: Honda Accord Plug-in Hybrid, Mitsubishi Outlander P-HEV, Ford Fusion Energi, McLaren P1 (limited edition), Porsche Panamera S E-Hybrid, BYD Qin, Cadillac ELR, BMW i3 REx, BMW i8, Porsche 918 Spyder (limited production), Volkswagen XL1 (limited production), Audi A3 Sportback e-tron, Volkswagen Golf GTE, Mercedes-Benz S 500 e, Porsche Cayenne S E-Hybrid, Mercedes-Benz C 350 e, BYD Tang, Volkswagen Passat GTE, Volvo XC90 T8, BMW X5 xDrive40e, Hyundai Sonata PHEV, and Volvo S60L PHEV. As of December 2015, about 500,000 highway-capable plug-in hybrid electric cars had been sold worldwide since December 2008, out of total cumulative global sales of 1.2 million light-duty plug-in electric vehicles. As of December 2016, the Volt/Ampera family of plug-in hybrids, with combined sales of about 134,500 units, was the top selling plug-in hybrid in the world. Ranking next were the Mitsubishi Outlander P-HEV with about 119,500 and the Toyota Prius Plug-in Hybrid with almost 78,000.
Biofuels
Bioalcohol and ethanol
The first commercial vehicle that used ethanol as a fuel was the Ford Model T, produced from 1908 through 1927. It was fitted with a carburetor with adjustable jetting, allowing use of gasoline or ethanol, or a combination of both. Other car manufacturers also provided engines for ethanol fuel use. In the United States, alcohol fuel was produced in corn-alcohol stills until Prohibition criminalized the production of alcohol in 1919. The use of alcohol as a fuel for internal combustion engines, either alone or in combination with other fuels, lapsed until the oil price shocks of the 1970s, when it also gained attention for its possible environmental and long-term economic advantages over fossil fuel. Both ethanol and methanol have been used as automotive fuel. While both can be obtained from petroleum or natural gas, ethanol has attracted more attention because it is considered a renewable resource, easily obtained from sugar or starch in crops and other agricultural produce such as grain, sugarcane, sugar beets or even lactose. Since ethanol occurs in nature whenever yeast happens to find a sugar solution such as overripe fruit, most organisms have evolved some tolerance to ethanol, whereas methanol is toxic. Other experiments involve butanol, which can also be produced by fermentation of plants. Support for ethanol comes from the fact that it is a biomass fuel, which addresses climate change and greenhouse gas emissions, though these benefits are now highly debated, including in the heated 2008 food vs. fuel debate. Most modern cars designed to run on gasoline are capable of running on blends of up to 10% to 15% ethanol mixed into gasoline (E10–E15).
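The "E" number maps directly to a blend's volumetric energy content, which in turn sets the break-even fuel price discussed in the following passage. The sketch below uses the article's figure that ethanol carries roughly 34% less energy per volume than gasoline; everything else is an illustrative assumption, and real-world results also depend on octane and engine tuning.

```python
# Illustrative sketch: energy content of ethanol-gasoline blends relative to gasoline,
# and the implied break-even price ratio (cost per unit of energy held equal).
# Assumes ethanol has ~34% less energy per volume than gasoline, per the text;
# engine-specific effects (octane, tuning) are ignored.

ETHANOL_RELATIVE_ENERGY = 1.0 - 0.34  # ethanol ~66% of gasoline's energy per litre

def blend_relative_energy(ethanol_fraction: float) -> float:
    """Volumetric energy of an E-blend as a fraction of pure gasoline's."""
    return ethanol_fraction * ETHANOL_RELATIVE_ENERGY + (1.0 - ethanol_fraction)

for label, share in [("E10", 0.10), ("E85", 0.85), ("E100", 1.00)]:
    rel = blend_relative_energy(share)
    # The blend only pays off at or below this fraction of the gasoline price,
    # which for E85 works out near the ~25-30% discount rule of thumb cited below.
    print(f"{label}: {rel:.2f} of gasoline's energy; break-even at {rel:.0%} of gasoline price")
```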
With a small amount of redesign, gasoline-powered vehicles can run on ethanol concentrations as high as 85% (E85), the maximum set in the United States and Europe due to cold weather during the winter, or up to 100% (E100) in Brazil, with a warmer climate. Ethanol has close to 34% less energy per volume than gasoline, consequently fuel economy ratings with ethanol blends are significantly lower than with pure gasoline, but this lower energy content does not translate directly into a 34% reduction in mileage, because there are many other variables that affect the performance of a particular fuel in a particular engine, and also because ethanol has a higher octane rating which is beneficial to high compression ratio engines. For this reason, for pure or high ethanol blends to be attractive for users, its price must be lower than gasoline to offset the lower fuel economy. As a rule of thumb, Brazilian consumers are frequently advised by the local media to use more alcohol than gasoline in their mix only when ethanol prices are 30% lower or more than gasoline, as ethanol price fluctuates heavily depending on the results and seasonal harvests of sugar cane and by region. In the US, and based on EPA tests for all 2006 E85 models, the average fuel economy for E85 vehicles was found 25.56% lower than unleaded gasoline. The EPA-rated mileage of current American flex-fuel vehicles could be considered when making price comparisons, though E85 has octane rating of about 104 and could be used as a substitute for premium gasoline. Regional retail E85 prices vary widely across the US, with more favorable prices in the Midwest region, where most corn is grown and ethanol produced. In August 2008 the US average spread between the price of E85 and gasoline was 16.9%, while in Indiana was 35%, 30% in Minnesota and Wisconsin, 19% in Maryland, 12 to 15% in California, and just 3% in Utah. Depending on the vehicle capabilities, the break even price of E85 usually has to be between 25 and 30% lower than gasoline. Reacting to the high price of oil and its growing dependence on imports, in 1975 Brazil launched the Pro-alcool program, a huge government-subsidized effort to manufacture ethanol fuel (from its sugar cane crop) and ethanol-powered automobiles. These ethanol-only vehicles were very popular in the 1980s, but became economically impractical when oil prices fell – and sugar prices rose – late in that decade. In May 2003 Volkswagen built for the first time a commercial ethanol flexible fuel car, the Gol 1.6 Total Flex. These vehicles were a commercial success and by early 2009 other nine Brazilian manufacturers are producing flexible fuel vehicles: Chevrolet, Fiat, Ford, Peugeot, Renault, Honda, Mitsubishi, Toyota, Citroën, and Nissan. The adoption of the flex technology was so rapid, that flexible fuel cars reached 87.6% of new car sales in July 2008. As of August 2008, the fleet of "flex" automobiles and light commercial vehicles had reached 6 million new vehicles sold, representing almost 19% of all registered light vehicles. The rapid success of "flex" vehicles, as they are popularly known, was made possible by the existence of 33,000 filling stations with at least one ethanol pump available by 2006, a heritage of the Pro-alcool program.In the United States, initial support to develop alternative fuels by the government was also a response to the 1973 oil crisis, and later on, as a goal to improve air quality. 
Also, liquid fuels were preferred over gaseous fuels not only because they have a better volumetric energy density but also because they were the most compatible fuels with existing distribution systems and engines, thus avoiding a big departure from the existing technologies and taking advantage of the vehicle and the refueling infrastructure. California led the search of sustainable alternatives with interest in methanol. In 1996, a new FFV Ford Taurus was developed, with models fully capable of running either methanol or ethanol blended with gasoline. This ethanol version of the Taurus was the first commercial production of an E85 FFV. The momentum of the FFV production programs at the American car companies continued, although by the end of the 1990s, the emphasis was on the FFV E85 version, as it is today. Ethanol was preferred over methanol because there is a large support in the farming community and thanks to government's incentive programs and corn-based ethanol subsidies. Sweden also tested both the M85 and the E85 flexifuel vehicles, but due to agriculture policy, in the end emphasis was given to the ethanol flexifuel vehicles. Biodiesel The main benefit of Diesel combustion engines is that they have a 44% fuel burn efficiency; compared with just 25–30% in the best gasoline engines. In addition diesel fuel has slightly higher energy density by volume than gasoline. This makes Diesel engines capable of achieving much better fuel economy than gasoline vehicles. Biodiesel (fatty acid methyl ester), is commercially available in most oilseed-producing states in the United States. As of 2005, it is somewhat more expensive than fossil diesel, though it is still commonly produced in relatively small quantities (in comparison to petroleum products and ethanol). Many farmers who raise oilseeds use a biodiesel blend in tractors and equipment as a matter of policy, to foster production of biodiesel and raise public awareness. It is sometimes easier to find biodiesel in rural areas than in cities. Biodiesel has lower energy density than fossil diesel fuel, so biodiesel vehicles are not quite able to keep up with the fuel economy of a fossil fuelled diesel vehicle, if the diesel injection system is not reset for the new fuel. If the injection timing is changed to take account of the higher cetane value of biodiesel, the difference in economy is negligible. Because biodiesel contains more oxygen than diesel or vegetable oil fuel, it produces the lowest emissions from diesel engines, and is lower in most emissions than gasoline engines. Biodiesel has a higher lubricity than mineral diesel and is an additive in European pump diesel for lubricity and emissions reduction. Some Diesel-powered cars can run with minor modifications on 100% pure vegetable oils. Vegetable oils tend to thicken (or solidify if it is waste cooking oil), in cold weather conditions so vehicle modifications (a two tank system with diesel start/stop tank), are essential in order to heat the fuel prior to use under most circumstances. Heating to the temperature of engine coolant reduces fuel viscosity, to the range cited by injection system manufacturers, for systems prior to 'common rail' or 'unit injection ( VW PD)' systems. Waste vegetable oil, especially if it has been used for a long time, may become hydrogenated and have increased acidity. This can cause the thickening of fuel, gumming in the engine and acid damage of the fuel system. 
Biodiesel does not have this problem, because it is chemically processed to be pH neutral and of lower viscosity. Modern low-emission diesels (most often Euro 3 and Euro 4 compliant), typical of current European production, operate at higher injection pressures and are designed for thinner (heated) mineral diesel to achieve atomisation; they would therefore require extensive modification of the injector system, pumps and seals if they were to use pure vegetable oil as fuel. Vegetable oil fuel is not suitable for these vehicles as they are currently produced, which reduces its market as increasing numbers of new vehicles are not able to use it. However, the German Elsbett company has successfully produced single-tank vegetable oil fuel systems for several decades, and has worked with Volkswagen on their TDI engines. This shows that it is technologically possible to use vegetable oil as a fuel in high-efficiency, low-emission diesel engines. Greasestock is an event held yearly in Yorktown Heights, New York, and is one of the largest showcases of vehicles using waste oil as a biofuel in the United States.
Biogas
Compressed biogas may be used for internal combustion engines after purification of the raw gas. The removal of H2O, H2S and particles is standard practice, producing a gas of the same quality as compressed natural gas.
Compressed natural gas
High-pressure compressed natural gas (CNG), mainly composed of methane, is used to fuel normal combustion engines instead of gasoline. Combustion of methane produces the least CO2 of all fossil fuels. Gasoline cars can be retrofitted to CNG and become bi-fuel natural gas vehicles (NGVs), as the gasoline tank is kept and the driver can switch between CNG and gasoline during operation. NGVs are popular in regions or countries where natural gas is abundant. Widespread use began in the Po River Valley of Italy, and CNG later became very popular in New Zealand in the 1980s, though its use has since declined. As of 2017, there were 24.5 million natural gas vehicles worldwide, led by China (5.35 million), followed by Iran (4.0 million), India (3.05 million), Pakistan (3 million), Argentina (2.3 million), and Brazil (1.78 million). As of 2010, the Asia-Pacific region led the global market with a share of 54%. In Europe NGVs are popular in Italy (730,000), Ukraine (200,000), Armenia (101,352), Russia (100,000) and Germany (91,500), and they are becoming more so as various manufacturers produce factory-made cars, buses, vans and heavy vehicles. In the United States, CNG-powered buses are the favorite choice of several public transit agencies, with an estimated CNG bus fleet of some 130,000. Other countries where CNG-powered buses are popular include India, Australia, Argentina, and Germany. CNG vehicles are also common in South America, where they are mainly used as taxicabs in the main cities of Argentina and Brazil. Normally, standard gasoline vehicles are retrofitted in specialized shops, which involves installing the gas cylinder in the trunk and the CNG injection system and electronics. The Brazilian GNV fleet is concentrated in the cities of Rio de Janeiro and São Paulo. Pike Research reports that almost 90% of NGVs in Latin America have bi-fuel engines, allowing these vehicles to run on either gasoline or CNG.
Dual fuel
A dual-fuel vehicle uses two types of fuel at the same time (gas + liquid, gas + gas, or liquid + liquid), stored in separate fuel tanks.
Diesel–CNG dual fuel is a system that uses two fuels, diesel and compressed natural gas (CNG), at the same time, because CNG needs a source of ignition for combustion in a diesel engine.
Hybrid electric
A hybrid vehicle uses multiple propulsion systems to provide motive power. The most common type is the gasoline-electric hybrid vehicle, which uses gasoline (petrol) and electric batteries for the energy used to power internal-combustion engines (ICEs) and electric motors. These motors are usually relatively small and would be considered "underpowered" by themselves, but they can provide a normal driving experience when used in combination during acceleration and other maneuvers that require greater power. The Toyota Prius first went on sale in Japan in 1997 and has been sold worldwide since 2000. As of January 2017, there were over 50 models of hybrid electric cars available in several world markets, with more than 12 million hybrid electric vehicles sold worldwide since their inception in 1997.
Hydrogen
A hydrogen car is an automobile which uses hydrogen as its primary source of power for locomotion. These cars generally use the hydrogen in one of two ways: combustion or fuel-cell conversion. In combustion, the hydrogen is "burned" in engines in fundamentally the same way as in traditional gasoline cars. The common internal combustion engine, usually fueled with gasoline (petrol) or diesel liquids, can be converted to run on gaseous hydrogen. This emits water at the point of use, though NOx can be produced during combustion with air. However, the most efficient use of hydrogen involves fuel cells and electric motors instead of a traditional engine. Hydrogen reacts with oxygen inside the fuel cells, which produces electricity to power the motors, with the only byproduct from the spent hydrogen being water. A small number of commercially available hydrogen fuel cell cars currently exist: the Hyundai NEXO, Toyota Mirai, and previously the Honda FCX Clarity. One primary area of research is hydrogen storage, to try to increase the range of hydrogen vehicles while reducing the weight, energy consumption, and complexity of the storage systems. Two primary methods of storage are metal hydrides and compression. Some believe that hydrogen cars will never be economically viable and that the emphasis on this technology is a diversion from the development and popularization of more efficient battery electric vehicles. In the light road vehicle segment, by the end of 2022, 70,200 hydrogen fuel cell electric vehicles had been sold worldwide, compared with 26 million plug-in electric vehicles. With the rapid rise of electric vehicles and associated battery technology and infrastructure, the global scope for hydrogen's role in cars is shrinking relative to earlier expectations.
Electric, fed by external source
Electric power fed to the vehicle from an external source is standard in railway electrification. In such systems the tracks usually form one pole, while the other is usually a single overhead wire or a rail insulated against ground. On roads this system does not work as described, because normal road surfaces are very poor electric conductors, so electric vehicles fed with external power on roads require at least two overhead wires. The most common road vehicles fed with electricity from an external source are trolleybuses, but there are also some trucks powered with this technology.
The advantage is that the vehicle can be operated without breaks for refueling or charging. Disadvantages include: a large infrastructure of electric wires; difficulty in driving, as one has to prevent dewirement of the vehicle; vehicles cannot overtake each other; a danger of electrocution; and an aesthetic problem. Wireless transmission (see wireless power transfer) is possible in principle, but the infrastructure (especially wiring) necessary for inductive or capacitive coupling would be extensive and expensive. In principle it is also possible to transmit energy to the vehicle by microwaves or by lasers, but this may be inefficient and dangerous at the power levels required. Besides this, in the case of lasers a guidance system is required to track the vehicle to be powered, as laser beams have a small diameter.
Comparative assessment of fossil and alternative fuels
Comparative assessments of conventional fossil and alternative fuel vehicles usually encompass more than in-use environmental impacts and running costs. They factor in issues like resource extraction impacts (e.g. for battery manufacture or fossil fuel extraction), 'well-to-wheel' efficiency, and the carbon intensity of electricity in different geographies. In general, the lifecycle greenhouse gas emissions of battery-electric vehicles are lower than emissions from hydrogen, PHEV, hybrid, compressed natural gas, gasoline, and diesel vehicles. BEVs have lower emissions than internal combustion engine vehicles even in places where electricity generation is relatively carbon-intensive, for example China, where electricity is predominantly generated from coal.
Other technologies
Engine air compressor
The air engine is an emission-free piston engine that uses compressed air as a source of energy. The first compressed air car was invented by a French engineer named Guy Nègre. The expansion of compressed air may be used to drive the pistons in a modified piston engine. Efficiency of operation is gained through the use of environmental heat at normal temperature to warm the otherwise cold expanded air from the storage tank. This non-adiabatic expansion has the potential to greatly increase the efficiency of the machine. The only exhaust is cold air (−15 °C), which could also be used to air condition the car. The source for air is a pressurized carbon-fiber tank. Air is delivered to the engine via a rather conventional injection system. A unique crank design within the engine increases the time during which the air charge is warmed from ambient sources, and a two-stage process allows improved heat transfer rates.
Electric, stored in other ways
Electricity can also be stored in supercapacitors and superconductors. Superconducting storage is unsuitable for vehicle propulsion, however, as it requires extremely low temperatures and produces strong magnetic fields. Supercapacitors can be used in vehicles and are used in some trams on sections without overhead wire. They can be charged during regular stops, while passengers enter and leave the tram, and can travel a few kilometres on the stored energy, which is usually enough to reach the next stop.
Solar
A solar car is an electric vehicle powered by solar energy obtained from solar panels on the car. Solar panels cannot currently supply a car directly with a suitable amount of power, but they can be used to extend the range of electric vehicles.
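As a rough illustration of how much range on-board panels might add, the sketch below multiplies panel area, cell efficiency and daily insolation and divides by the car's consumption; all of the numbers are assumptions for illustration rather than figures from this article.

```python
# Illustrative sketch: daily range gained from a roof-mounted solar panel on an EV.
# Panel area, efficiency, insolation and consumption are assumed example values.

def daily_solar_range_km(panel_area_m2: float, panel_efficiency: float,
                         insolation_kwh_per_m2_day: float,
                         consumption_kwh_per_100km: float) -> float:
    harvested_kwh = panel_area_m2 * panel_efficiency * insolation_kwh_per_m2_day
    return harvested_kwh / (consumption_kwh_per_100km / 100.0)

# Example: ~2 m^2 of 20%-efficient cells, 5 kWh/m^2/day of sun, a 15 kWh/100 km car
# -> roughly 13 km of added range per sunny day.
print(round(daily_solar_range_km(2.0, 0.20, 5.0, 15.0), 1))
```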
As of 2022, a handful of solar electric cars with varying performance are becoming commercially available, from Fisker and Lightyear, among others. Solar cars are raced in competitions such as the World Solar Challenge and the North American Solar Challenge. These events are often sponsored by government agencies, such as the United States Department of Energy, keen to promote the development of alternative energy technology such as solar cells and electric vehicles. Such challenges are often entered by universities, to develop their students' engineering and technological skills, as well as by motor vehicle manufacturers such as GM and Honda.
Dimethyl ether fuel
Dimethyl ether (DME) is a promising fuel in diesel engines, petrol engines (30% DME / 70% LPG), and gas turbines owing to its high cetane number, which is 55, compared to diesel's, which is 40–53. Only moderate modifications are needed to convert a diesel engine to burn DME. The simplicity of this short carbon-chain compound leads during combustion to very low emissions of particulate matter, NOx and CO. For these reasons, as well as being sulfur-free, DME meets even the most stringent emission regulations in Europe (EURO5), the U.S. (U.S. 2010), and Japan (2009 Japan). Mobil uses DME in its methanol-to-gasoline process. DME is being developed as a synthetic second-generation biofuel (BioDME), which can be manufactured from lignocellulosic biomass. Currently the EU is considering BioDME in its potential biofuel mix in 2030; the Volvo Group is the coordinator for the European Community Seventh Framework Programme project BioDME, where Chemrec's BioDME pilot plant based on black liquor gasification is nearing completion in Piteå, Sweden.
Ammonia fuelled vehicles
Ammonia is produced by combining gaseous hydrogen with nitrogen from the air. Large-scale ammonia production uses natural gas as the source of hydrogen. Ammonia was used during World War II to power buses in Belgium, and in engine and solar energy applications prior to 1900. Liquid ammonia also fuelled the Reaction Motors XLR99 rocket engine, which powered the X-15 hypersonic research aircraft. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches the density of the oxidizer, liquid oxygen, which simplified the aircraft's design. Ammonia has been proposed as a practical alternative to fossil fuel for internal combustion engines. The calorific value of ammonia is 22.5 MJ/kg (9,690 BTU/lb), which is about half that of diesel. In a normal engine, in which the water vapour is not condensed, the effective calorific value of ammonia will be about 21% less than this figure. It can be used in existing engines with only minor modifications to carburettors/injectors. When ammonia is produced using coal, the CO2 emitted has the potential to be sequestered (the combustion products are nitrogen and water). Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that of a fireless locomotive, but with ammonia as the working fluid instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and in streetcars in New Orleans.
In 1981, a Canadian company converted a 1981 Chevrolet Impala to operate using ammonia as fuel. Ammonia (including under the GreenNH3 name) is being used with success by developers in Canada, since it can run in spark-ignited or diesel engines with minor modifications; it has also been promoted as the only green fuel able to power jet engines, and despite its toxicity it is reckoned to be no more dangerous than petrol or LPG. It can be made from renewable electricity and, having about half the energy density of petrol or diesel, can be readily carried in sufficient quantities in vehicles. On complete combustion it has no emissions other than nitrogen and water vapour. The combustion formula is 4 NH3 + 3 O2 → 2 N2 + 6 H2O, so 75% of the combustion product (by moles) is water.
Charcoal
In the 1930s, Tang Zhongming made an invention using abundant charcoal resources for the Chinese auto market. The charcoal-fuelled car was later used intensively in China, serving the army and civilian transport after the outbreak of World War II.
Liquefied natural gas
Liquefied natural gas (LNG) is natural gas that has been cooled to the point at which it becomes a cryogenic liquid. In this liquid state, natural gas is more than twice as dense as highly compressed CNG. LNG fuel systems function on any vehicle capable of burning natural gas. Unlike CNG, which is stored at high pressure (typically 3,000 or 3,600 psi) and then regulated to a lower pressure that the engine can accept, LNG is stored at low pressure (50 to 150 psi) and simply vaporized by a heat exchanger before entering the fuel metering devices of the engine. Because of its high energy density compared to CNG, it is well suited to those interested in long ranges while running on natural gas. In the United States, the LNG supply chain is the main factor that has held back rapid growth of this fuel source. The LNG supply chain is very analogous to that of diesel or gasoline: first, pipeline natural gas is liquefied in large quantities, which is analogous to refining gasoline or diesel; then the LNG is transported via semi-trailer to fuel stations, where it is stored in bulk tanks until it is dispensed into a vehicle. CNG, on the other hand, requires expensive compression at each station to fill the high-pressure cylinder cascades.
Autogas
Autogas, or liquefied petroleum gas (LPG), is a low-pressure liquefied gas mixture composed mainly of propane and butane which burns in conventional gasoline combustion engines with less CO2 than gasoline. Gasoline cars can be retrofitted to LPG and become bi-fuel vehicles, as the gasoline tank is not removed, allowing drivers to switch between LPG and gasoline during operation. There were 24.9 million LPG-powered vehicles worldwide as of December 2013, led by Turkey with 3.93 million, South Korea (2.4 million), and Poland (2.75 million). In the U.S., 190,000 on-road vehicles use propane, and 450,000 forklifts use it for power. However, LPG as an automotive fuel was banned in Pakistan (as of December 2013), as it is considered a risk to public safety by OGRA.
Formic acid
Formic acid is used by converting it first to hydrogen, which is then used in a hydrogen fuel cell. It can also be used directly in formic acid fuel cells. Formic acid is much easier to store than hydrogen.
Liquid nitrogen car
Liquid nitrogen (LN2) is a method of storing energy. Energy is used to liquefy air, from which LN2 is produced and distributed. In the car, LN2 is exposed to ambient heat and the resulting nitrogen gas can be used to power a piston or turbine engine.
The maximum amount of energy that can be extracted from LN2 is 213 watt-hours per kg (W·h/kg) or 173 W·h per liter, of which a maximum of 70 W·h/kg can be utilized with an isothermal expansion process. Such a vehicle with a 350-liter (93-gallon) tank can achieve ranges similar to a gasoline-powered vehicle with a 50-liter (13-gallon) tank. Theoretical future engines, using cascading topping cycles, could improve this to around 110 W·h/kg with a quasi-isothermal expansion process. The advantages are zero harmful emissions and superior energy densities compared to a compressed-air vehicle, as well as being able to refill the tank in a matter of minutes.
Nuclear power
In principle, it is possible to build a vehicle powered by nuclear fission or nuclear decay. However, there are two major problems. First, the energy, which comes as heat and radiation, has to be transformed into energy usable for propulsion. One possibility would be to use a steam turbine as in a nuclear power plant, but such a device would take up too much space; a more suitable way would be direct conversion into electricity, for example with thermoelements or thermionic devices. The second problem is that nuclear fission produces high levels of neutron and gamma radiation, which require so much shielding that the vehicle would be too large for use on public roads. Studies along these lines were nevertheless made, such as the Ford Nucleon concept. A better approach for a nuclear-powered vehicle would be to use the power of radioactive decay in radioisotope thermoelectric generators, which are also very safe and reliable. The required shielding of these devices depends on the radionuclide used; plutonium-238, as a nearly pure alpha emitter, does not require much shielding. As prices for suitable radionuclides are high and the energy density is low (generating 1 watt with plutonium-238 requires about half a gram of it), this form of propulsion is too expensive for wide use. Also, because of their large content of highly radioactive material, radioisotope thermoelectric generators pose an extreme danger in case of misuse, for example by terrorists. The only vehicle in use that is driven by radioisotope thermoelectric generators is the Mars rover Curiosity. Other forms of nuclear power, such as fusion and annihilation, are at present not available for vehicle propulsion, as no working fusion reactor exists and it is questionable whether one could ever be built at a size suitable for a road vehicle. Annihilation may perhaps work in some ways (see antimatter drive), but no technology exists to produce and store enough antimatter.
Pedal-assisted electric hybrid vehicle
In very small vehicles the power demand decreases, so human power can be employed to make a significant improvement in battery life. Three such commercially made vehicles are the Sinclair C5, ELF and TWIKE.
Flywheels
Flywheels can also be used to store propulsion energy and were used in the 1950s to propel buses in Switzerland, the so-called gyrobuses. The flywheel of the bus was spun up by electric power at the terminals of the line, allowing it to travel up to 8 kilometres on the stored energy alone. Flywheel-powered vehicles are quieter than vehicles with combustion engines, require no overhead wire and generate no exhaust, but the flywheel device has a great weight (1.5 tons for 5 kWh) and requires special safety measures due to its high rotational speed.
Silanes
Silanes higher than heptasilane can be stored like gasoline and may also work as fuel.
They have the advantage that they can also burn with the nitrogen of the air, but their major disadvantages are their high price and the fact that their combustion products are solid, which causes trouble in combustion engines.
Spring
The power of wound-up springs or twisted rubber cords can be used for the propulsion of small vehicles. However, this form of energy storage can hold only small amounts of energy, not enough to propel vehicles that carry people. Examples of spring-powered vehicles are wind-up toys and mousetrap cars.
Steam
A steam car is a car that has a steam engine. Wood, coal, ethanol, or other fuels can be used. The fuel is burned in a boiler and the heat converts water into steam. When the water turns to steam, it expands, and the expansion creates pressure. The pressure pushes the pistons back and forth, which turns the driveshaft and spins the wheels, moving the car forward. It works like a coal-fueled steam train or steam boat. The steam car was the next logical step in independent transport. Steam cars take a long time to start, but some can eventually reach speeds over 100 mph (161 km/h). The late-model Doble steam cars could be brought to operational condition in less than 30 seconds, and had high top speeds and fast acceleration, but were expensive to buy. A steam engine uses external combustion, as opposed to internal combustion. Gasoline-powered cars are more efficient, at about 25–28% efficiency. In theory, a combined-cycle steam engine in which the burning material is first used to drive a gas turbine can reach 50% to 60% efficiency; however, practical examples of steam-engined cars work at only around 5–8% efficiency. The best known and best selling steam-powered car was the Stanley Steamer. It used a compact fire-tube boiler under the hood to power a simple two-piston engine which was connected directly to the rear axle. Before Henry Ford introduced monthly payment financing with great success, cars were typically purchased outright; this is why the Stanley was kept simple, to keep the purchase price affordable. Steam produced in refrigeration can also be used by a turbine in other vehicle types to produce electricity, which can be employed in electric motors or stored in a battery. Steam power can be combined with a standard oil-based engine to create a hybrid: water is injected into the cylinder after the fuel is burned, when the piston is still superheated, often at temperatures of 1,500 degrees or more, and is instantly vaporized into steam, taking advantage of heat that would otherwise be wasted.
Wind
Wind-powered vehicles have been known for a long time. They can be realized with sails similar to those used on ships, with an onboard wind turbine which drives the wheels directly or generates electricity for an electric motor, or by being pulled by a kite. Wind-powered land vehicles need enormous clearance in height, especially when sails or kites are used, and are unsuitable in urban areas. They may also be difficult to steer. Wind-powered vehicles are only used for recreational activities on beaches or other open areas.
Wood gas
Wood gas can be used to power cars with ordinary internal combustion engines if a wood gasifier is attached. This was quite popular during World War II in several European and Asian countries because the war prevented easy and cost-effective access to oil. Herb Hartman of Woodward, Iowa currently drives a wood-powered Cadillac.
He claims to have attached the gasifier to the Cadillac for just $700. Hartman claims, "A full hopper will go about fifty miles depending on how you drive it," and he added that splitting the wood was "labor-intensive. That's the big drawback."
Chevron Corporation
Chevron Corporation is an American multinational energy corporation predominantly specializing in oil and gas. The second-largest direct descendant of Standard Oil, and originally known as the Standard Oil Company of California (shortened to Socal or CalSo), it is headquartered in San Ramon, California, and active in more than 180 countries. Within oil and gas, Chevron is vertically integrated and is involved in hydrocarbon exploration, production, refining, marketing and transport, chemicals manufacturing and sales, and power generation. Chevron traces its history back to the 1870s, to small California-based oil companies which were acquired by Standard and merged into Standard Oil of California. The company grew quickly on its own after the breakup of Standard Oil by continuing to acquire companies and partnering with others both inside and outside of California, eventually becoming one of the Seven Sisters that dominated the global petroleum industry from the mid-1940s to the 1970s. In 1985, Socal merged with the Pittsburgh-based Gulf Oil and rebranded as Chevron; the newly merged company later merged with Texaco in 2001. Today, Chevron manufactures and sells fuels, lubricants, additives, and petrochemicals, primarily in Western North America, the U.S. Gulf Coast, Southeast Asia, South Korea and Australia. In 2018, the company produced an average of 791,000 barrels of net oil-equivalent per day in the United States. Chevron is one of the largest companies in the world and the second-largest oil company based in the United States by revenue, behind only fellow Standard Oil descendant ExxonMobil. Chevron ranked 10th on the Fortune 500 in 2023. The company is also the last remaining oil and gas component of the Dow Jones Industrial Average since ExxonMobil's exit from the index in 2020. Chevron has been subject to numerous controversies arising out of its activities, the most notable of which relate to its activities and inherited liabilities from its acquisition of Texaco in the Lago Agrio oil field, including allegations that Chevron and Texaco collectively dumped 18 billion gallons of toxic waste and spilled 17 million gallons of petroleum. Chevron and Texaco's activities were the subject of a lawsuit Chevron lost to Ecuadorian residents represented in Ecuadorian court by Steven Donziger. Due to accusations that Donziger bribed the Ecuadorian court and the subsequent disbarment and criminal contempt charges against Donziger, Chevron was accused by environmentalists and human rights groups of jailing Donziger and compelling the US government to deny Donziger due process of law.
History
Predecessors
Star Oil and Pacific Coast Oil Company
One of Chevron's early predecessors, "Star Oil", discovered oil at the Pico Canyon Oilfield in the Santa Susana Mountains north of Los Angeles in 1876. The 25-barrel-per-day well marked the discovery of the Newhall Field, and is considered by geophysicist Marius Vassiliou as the beginning of the modern oil industry in California. Energy analyst Antonia Juhasz has said that while Star Oil's founders were influential in establishing an oil industry in California, the Union Mattole Company discovered oil in the state eleven years prior. In September 1879, Charles N. Felton, Lloyd Tevis, George Loomis and others created the "Pacific Coast Oil Company", which acquired the assets of Star Oil with $1 million in funding. Pacific Coast Oil eventually became the largest oil interest in California, and in 1900, John D.
Rockefeller's Standard Oil acquired Pacific Coast Oil for $761,000. In 1906, the Pacific Coast acquired the business operations and assets of the Standard Oil Company (Iowa). At this time, Pacific renamed itself the Standard Oil Company (California). Texaco Since the acquisition of the Pacific Coast Oil Company by Standard Oil, the Standard descendant had traditionally worked closely with Texaco for 100 years, before acquiring Texaco outright in 2001. Originally known as the Texas Fuel Company (later the Texas Company), Texaco was founded in Beaumont, Texas as an oil equipment vendor by "Buckskin Joe". The founder's nickname came from being harsh and aggressive. Texas Fuel worked closely with Chevron. In 1936, it formed a joint venture with California Standard named Caltex, to drill and produce oil in Saudi Arabia. According to energy analyst and activist shareholder Antonia Juhasz, the Texas Fuel Company and California Standard were often referred to as the "terrible twins" for their cutthroat business practices. Formation of the Chevron name In 1911, the federal government broke Standard Oil into several pieces under the Sherman Antitrust Act. One of those pieces, Standard Oil Co. (California), went on to become Chevron. It became part of the "Seven Sisters", which dominated the world oil industry in the early 20th century. In 1926, the company changed its name to Standard Oil Co. of California (SOCAL). By the terms of the breakup of Standard Oil, at first Standard of California could use the Standard name only within its original geographic area of the Pacific coast states, plus Nevada and Arizona; outside that area, it had to use another name. Today, Chevron is the owner of the Standard Oil trademark in 16 states in the western and southeastern United States. Since American trademark law operates under a use-it-or-lose-it rule, the company owns and operates one Standard-branded Chevron station in each state of the area. However, though Chevron (as CalSo) acquired Kyso in the 1960s, its status in Kentucky is unclear after Chevron withdrew its brand from retail sales from Kentucky in July 2010.The 'Chevron' name came into use for some of its retail products in the 1930s. The name "Calso" was also used from 1946 to 1955, in states outside its native West Coast territory.Standard Oil Company of California ranked 75th among United States corporations in the value of World War II military production contracts.In 1933, Saudi Arabia granted California Standard a concession to find oil, which led to the discovery of oil in 1938. In 1948, California Standard discovered the world's largest oil field in Saudi Arabia, Ghawar Field. California Standard's subsidiary, California-Arabian Standard Oil Company, grew over the years and became the Arabian American Oil Company (ARAMCO) in 1944. In 1973, the Saudi government began buying into ARAMCO. By 1980, the company was entirely owned by the Saudis, and in 1988, its name was changed to Saudi Arabian Oil Company—Saudi Aramco.Standard Oil of California and Gulf Oil merged in 1984, which was the largest merger in history at that time. To comply with U.S. antitrust law, California Standard divested many of Gulf's operating subsidiaries, and sold some Gulf stations in the eastern United States and a Philadelphia refinery which has since closed. 
Among the assets sold off were Gulf's retail outlets in Gulf's home market of Pittsburgh, where Chevron lacks a retail presence but does retain a regional headquarters there as of 2013, partially for Marcellus Shale-related drilling. The same year, Standard Oil of California also took the opportunity to change its legal name to 'Chevron Corporation', since it had already been using the well-known "Chevron" retail brand name for decades. Chevron would sell the Gulf Oil trademarks for the entire U.S. to Cumberland Farms, the parent company of Gulf Oil LP, in 2010 after Cumberland Farms had a license to the Gulf trademark in the Northeastern United States since 1986.In 1996, Chevron transferred its natural gas gathering, operating and marketing operation to NGC Corporation (later Dynegy) in exchange for a roughly 25% equity stake in NGC. In a merger completed February 1, 2000, Illinova Corp. became a wholly owned subsidiary of Dynegy Inc. and Chevron's stake increased up to 28%. However, in May 2007, Chevron sold its stake in the company for approximately $985 million, resulting in a gain of $680 million. Acquisitions and diversification The early 2000s saw Chevron engage in many mergers, acquisitions, and sales. The first largest of which was the $45 billion acquisition of Texaco, announced on October 15, 2000. The acquisition created second-largest oil company in the United States and the world's fourth-largest publicly traded oil company with a combined market value of approximately $95 billion. Completed on October 9, 2001, Chevron temporarily renamed itself to ChevronTexaco between 2001 and 2005; after the company reverted its name to Chevron, Texaco became used as a brand by the company for some of its fueling stations.2005 also saw Chevron purchase Unocal Corporation for $18.4 billion, increasing the company's petroleum and natural gas reserves by about 15%. Because of Unocal's large South East Asian geothermal operations, Chevron became a large producer of geothermal energy. The deal did not include Unocal's former retail operations including the Union 76 trademark, as it had sold that off to Tosco Corporation in 1997. The 76 brand is currently owned by Phillips 66, unaffiliated with Chevron. Chevron and the Los Alamos National Laboratory started a cooperation in 2006, to improve the recovery of hydrocarbons from oil shale by developing a shale oil extraction process named Chevron CRUSH. In 2006, the United States Department of the Interior issued a research, development and demonstration lease for Chevron's demonstration oil shale project on public lands in Colorado's Piceance Basin. In February 2012, Chevron notified the Bureau of Land Management and the Department of Reclamation, Mining and Safety that it intends to divest this lease.Starting in 2010, Chevron began to reduce its retail footprint and expand in domestic natural gas. In July 2010, Chevron ended retail operations in the Mid-Atlantic United States by removing the Chevron and Texaco names from 1,100 stations. In 2011, Chevron acquired Pennsylvania based Atlas Energy Inc. for $3.2 billion in cash and an additional $1.1 billion in existing debt owed by Atlas. Three months later, Chevron acquired drilling and development rights for another 228,000 acres in the Marcellus Shale from Chief Oil & Gas LLC and Tug Hill, Inc. In September 2013, Total S.A. and its joint venture partner agreed to buy Chevron's retail distribution business in Pakistan for an undisclosed amount. 
In October 2014, Chevron announced that it would sell a 30 percent holding in its Canadian oil shale holdings to Kuwait's state-owned oil company, Kuwait Oil Company, for a fee of $1.5 billion. Despite these sales, Chevron continued to explore acquisitions, a trend which reinvigorated in 2019 and extended throughout the COVID-19 pandemic. In April 2019, Chevron announced its intention to acquire Anadarko Petroleum in a deal valued at $33 billion, but decided to focus on other acquisitions shortly afterwards when a deal could not be reached. Despite the failed acquisition of Anadarko, Chevron did acquire Noble Energy for $5 billion in July 2020. Chevron was not spared from the pandemic, however, announcing reductions of 10–15% of its workforce due to both the pandemic and the 2020 oil price war between Russia and Saudi Arabia. Chevron also considered a merger with rival ExxonMobil in 2020, during the early stages of the COVID-19 pandemic that drove oil demand sharply down. It would have been one of the biggest corporate mergers in history, and a combined Chevron and ExxonMobil (dubbed "Chexxon" by Reuters) would have been the second biggest oil company in the world, trailing only Saudi Aramco. Later in the pandemic, Chevron began requiring some employees, namely expatriate employees, those working overseas, and workers on U.S.-flagged ships, to receive COVID-19 vaccinations, starting with some key operations such as the offshore platforms in the Gulf of Mexico and the Permian Basin. The requirement took effect for workers in the Gulf of Mexico on the first of November. In the 2020s, Chevron's primary focus was on alternative energy solutions, gradual pullouts from Africa and Southeast Asia, and an increased focus on the Americas, with a lessened albeit still present interest in natural gas. In February 2020, Chevron joined Marubeni Corporation and WAVE Equity Partners in investing in Carbon Clean Solutions, a company that provides portable carbon capture technology for the oil field and other industrial facilities. Two years later, Chevron announced that it would acquire Renewable Energy Group, a biodiesel production company based in Ames, Iowa; the acquisition was completed just under four months later, on June 13. In the Americas, Chevron acquired natural gas company Beyond6, LLC (B6) and its network of 55 compressed natural gas stations across the United States from Mercuria in November 2022. However, Chevron's largest American moves in the 2020s were in Venezuela, as the Biden administration relaxed restrictions that had barred Chevron from pumping oil in the South American nation, originally imposed due to corruption scandals and human rights violations under Venezuelan president Nicolás Maduro. The relaxed restrictions, however, came with severe limitations, including provisions which prohibited Chevron from selling to Russian or Iranian-affiliated agencies and from allowing any direct profits to go to Venezuelan oil company PDVSA. On November 29, 2022, Venezuelan Petroleum Minister Tarek El Aissami met in Caracas with the president of Chevron's Venezuelan operations, Javier La Rosa. The Venezuelan ruling party says it is committed to "the development of oil production" after the easing of sanctions. The most important joint ventures in which Chevron is involved in Venezuela are Petroboscán, in the west of the nation, and Petropiar, in the eastern Orinoco Belt, with a production capacity of close to 180,000 barrels per day between the two projects.
In the case of Petroboscán, current production is nil and, in Petropiar, current records indicate close to 50,000 barrels per day. On March 20, 2023, Tareck El Aissami resigned from his government post amid serious corruption allegations. Moreover, El Aissami, a longtime Maduro ally, has a $10mn US government reward on his head for allegedly facilitating drug trafficking from Venezuela. He played a key role in helping Nicolas Maduro's government dodge US economic sanctions, using his Syrian and Lebanese parentage to open up new business channels to Iran and Turkey.On January 5, 2022, Chevron temporarily decreased production in Kazakhstan's Tengiz Field due to the 2022 Kazakh protests, which were motivated by heavy oil price increases. Later that month, Chevron also announced it would end all operations in Myanmar, citing rampant human rights abuses and deteriorating rule of law since the 2021 Myanmar coup d'état. A statement released by the company on its website stated while Chevron was committed to an orderly exit which ensures it can still provide energy to Southeast Asia, Chevron remains firmly opposed to the human rights violations committed by the current military rule in Myanmar. Also in 2022, Chevron was reported to explore the sale of stakes in three fields located in Equatorial Guinea. It was suggested by Reuters that the sales are intended to attract smaller oil companies.Chevron, however, did not do business in the 2020s without controversy and regulatory obstacles. Chevron Phillips Chemical, a company jointly owned by Chevron and Phillips 66, agreed to pay $118 million in March 2022 as a result of violating the Clean Air Act at three of its chemical production plants in Texas. According to the United States' Department of Justice and Environmental Protection Agency, Chevron and Phillips failed to properly flare at the plants, causing excess air pollution. The companies agreed to add pollution control systems to the plants as well.Despite the major oil and gas companies, including Chevron, reporting sharp rises in interim revenues and profits due to Russia's 2022 invasion of Ukraine, the world's largest oil companies received immense backlash for such profits. In total, Chevron made US$246.3 billion in revenue and $36.5 billion in profit within 2022, both of which are records for the company. In addition, days before the company reported its full year earnings, Chevron increased its dividend and announced a $75 billion stock buyback program, a move which attracted a heated response from the Biden administration as well as from news commentators within the United States.The 2020s also saw efforts by Chevron to expand into the clean energy industry. Across the 2020s, Chevron invested stakes into fusion power companies, the two largest of them being Zap Energy and TAE Technologies. September 2023 saw Chevron acquire a majority stake in a Utah hydrogen storage facility, which is poised to be the world's largest storage facility for hydrogen in renewable energy.In October 2023, Chevron Corporation acquired Hess Corporation in an all-stock deal for $53 billion. Corporate image Logo evolution The first logo featured the legend "Pacific Coast Oil Co.", the name adopted by the company when it was established in 1879. Successive versions showed the word 'Standard' (for "The Standard Oil of California"). In 1968, the company introduced the word 'Chevron' (which was introduced as a brand in the 1930s) for the first time in its logo. 
In July 2014, the Chevron Corporation logo design was officially changed, although it has been used since 2000. By 2015, the logo had been changed multiple times, with three different color schemes applied in the logo. The logo was gray, then blue, and then turned red before returning to the silver gray it is today. "Human Energy" Chevron today is well known for its slogan "the human energy company", a campaign first launched in September 2007. In a corporate blog, Chevron states "human energy" was chosen as their campaign's slogan and focus because "human energy captures our positive spirit in delivering energy to a rapidly changing world". The slogan remains prominent in Chevron advertising, and Chevron has derived from this slogan to use phrases in marketing such as "it's only human". Operations As of December 31, 2018, Chevron had approximately 48,600 employees (including about 3,600 service station employees). Approximately 24,800 employees (including about 3,300 service station employees), or 51 percent, were employed in U.S. operations.Chevron's dominant regions of production are North America, which produces 1.2 billion barrels of oil equivalent (BBOE), and Eurasia, which produces 1.4 BBOE. Chevron's Eurasian-Pacific operations are concentrated in the United Kingdom, Southeast Asia, Kazakhstan, Australia, Bangladesh, and greater China. Chevron additionally operates in South America, the west coast of sub-Saharan Africa (mainly Nigeria and Angola), Egypt, and Iraq; these four regions collectively produce 0.4 BBOE. Chevron's largest revenue products are shale and tight, though produces considerable revenue from heavy oil, deepwater offshore drilling, conventional oil, and liquefied natural gas.In October 2015, Chevron announced that it is cutting up to 7,000 jobs, or 11 percent of its workforce. Because of the COVID-19 pandemic and 2020 Russia–Saudi Arabia oil price war, Chevron announced reductions of 10–15% of its workforce. Upstream Chevron's oil and gas exploration and production operations, which in the oil and gas industry are considered as "upstream" operations, are primarily in the US, Australia, Nigeria, Angola, Kazakhstan, and the Gulf of Mexico. As of December 31, 2018, the company's upstream business reported worldwide net production of 2.930 million oil-equivalent barrels per day.In the United States, the company operates approximately 11,000 oil and natural gas wells in hundreds of fields occupying 4,000,000 acres (16,000 km2) across the Permian Basin, located in West Texas and southeastern New Mexico. In 2010, Chevron was the fourth-largest producer in the region. In February 2011, Chevron celebrated the production of its 5 billionth barrel of Permian Basin oil. The Gulf of Mexico is where the company's deepest offshore drilling takes place at Tahiti and Blind Faith. The company also explored and drilled in the Marcellus Shale formation under several northeastern US states; these operations were sold to the Pittsburgh-based natural gas firm EQT in 2020.Chevron's largest single resource project is the $43 billion Gorgon Gas Project in Australia. It also produces natural gas from Western Australia. The $43 billion project was started in 2010, and was expected to be brought online in 2014. The project includes construction of a 15 million tonne per annum liquefied natural gas plant on Barrow Island, and a domestic gas plant with the capacity to provide 300 terajoules per day to supply gas to Western Australia. 
It is also developing the Wheatstone liquefied natural gas development in Western Australia. The foundation phase of the project is estimated to cost $29 billion; it will consist of two LNG processing trains with a combined capacity of 8.9 million tons per annum, a domestic gas plant and associated offshore infrastructure. In August 2014, a significant gas-condensate discovery was announced at the Lasseter-1 exploration well in the WA-274-P permit in Western Australia, in which Chevron has a 50% interest. The company also has an interest in the North West Shelf Venture, equally shared with five other investors including BP, BHP Petroleum, Shell, Mitsubishi/Mitsui and Woodside. Presently, Chevron is looking to convert its Gorgon operations on Barrow Island from upstream production to carbon capture and storage. In the onshore and near-offshore regions of the Niger Delta, Chevron operates under a joint venture with the Nigerian National Petroleum Corporation, operating and holding a 40% interest in 13 concessions in the region. In addition, Chevron operates the Escravos Gas Plant and the Escravos gas-to-liquids plant. Chevron has interests in four concessions in Angola, including two offshore concessions in Cabinda province, the Tombua–Landana development and the Mafumeira Norte project, operated by the company. It is also a leading partner in the Angola LNG plant. In Kazakhstan, Chevron participates in the Tengiz and Karachaganak projects. In 2010, Chevron became the largest private shareholder in the Caspian Pipeline Consortium pipeline, which transports oil from the Caspian Sea to the Black Sea. As of 2013, the Rosebank oil and gas field west of Shetland was being evaluated by Chevron and its partners. Chevron drilled its discovery well there in 2004. Production was expected in 2015 if a decision was made to produce from the field; the geology and weather conditions are challenging.
Midstream
As of 2019, outside of maritime shipping, Chevron did not own significant midstream assets; that year it attempted to purchase Anadarko Petroleum, which owned pipelines, but was outbid by Occidental Petroleum. In 2021, Chevron completed its purchase of Noble Midstream Partners LP, which has crude oil, produced water and gas gathering assets in the Permian Basin in West Texas and the DJ Basin in Colorado. Noble Midstream also has two crude oil terminals in the DJ Basin as well as freshwater delivery systems.
Transport
Chevron Shipping Company, a wholly owned subsidiary, provides maritime transport operations, marine consulting services and marine risk management services for Chevron Corporation. Chevron ships historically had names beginning with "Chevron", such as Chevron Washington and Chevron South America, or were named after former or serving directors of the company. Samuel Ginn, William E. Crain, Kenneth Derr, Richard Matzke and, most notably, Condoleezza Rice were among those honored, but the ship named after Rice was subsequently renamed Altair Voyager.
Downstream
Refining
Chevron's downstream operations manufacture and sell products such as fuels, lubricants, additives and petrochemicals. The company's most significant areas of operations are the west coast of North America, the U.S. Gulf Coast, Southeast Asia, South Korea, Australia and South Africa. In 2010, Chevron sold an average of 3.1 million barrels per day (approximately 490,000 m³ per day) of refined products such as gasoline, diesel and jet fuel. The company operates approximately 19,550 retail sites in 84 countries.
Chevron's Asia downstream headquarters is in Singapore, and the company operates gas stations (under the Caltex brand) within the city state, in addition to some gas stations in Western Canada. Chevron owns the trademark rights to Texaco and Caltex fuel and lubricant products. Chevron, with equal partner Singapore Petroleum Company, also owns half of the 285,000 barrels per day (45,300 m3/d) Singapore Refining Company (SRC) plant, a complex refinery capable of cracking crude oil. The investment was first made in 1979, when Caltex was a one-third partner. In 2010, Chevron processed 1.9 million barrels per day (300×10^3 m3/d) of crude oil. It owns and operates five active refineries in the United States (Richmond, CA; El Segundo, CA; Salt Lake City, UT; Pascagoula, MS; and Pasadena, TX). Chevron is the non-operating partner in seven joint venture refineries, located in Australia, Pakistan, Singapore, Thailand, South Korea, and New Zealand. Chevron's United States refineries are located in Gulf and Western states. Chevron also owns an asphalt refinery in Perth Amboy, New Jersey; however, since early 2008 that refinery has primarily operated as a terminal.
Chemicals
Chevron's primary chemical business is a 50/50 joint venture with Phillips 66, organized as the Chevron Phillips Chemical Company. Chevron also operates the Chevron Oronite Company, which develops, manufactures, and sells fuel and lubricant additives.
Retail
In the United States, the Chevron brand is the most widely used, at 6,880 locations as of September 2022 spread across 21 states. The highest concentrations of Chevron-branded stations are in California (mostly in the San Francisco Bay Area, Central Valley, and Greater Los Angeles), Las Vegas, Anchorage, the Pacific Northwest (especially Seattle), Phoenix, Atlanta, the Texas Triangle, and South Florida. Chevron also utilizes the Texaco brand within the United States, though its locations are much more sparsely spread than those of Chevron. Texaco is used at 1,346 locations across 17 states, mostly in Washington, Texas, Louisiana, Alabama, Mississippi, Georgia, and Hawaii. Additionally, Texaco licenses its brand to Valero Energy for use in the United Kingdom, and over 730 Texaco stations exist in Britain. Chevron primarily uses the Caltex brand outside of the United States, primarily in Southeast Asia, Hong Kong, Pakistan, New Zealand, and South Africa. In 2015, Chevron sold its 50% stake in Caltex Australia, while allowing the company to continue using the Caltex brand. In 2019, Chevron announced it would re-enter the Australian market by purchasing Puma Energy's operations in the country; the acquisition was completed in July 2020. Chevron relaunched the Caltex brand in Australia in 2022, after the expiration of Caltex Australia's license to use the brand.
Alternative energy
Chevron's alternative energy operations include geothermal, solar, wind, biofuel, fuel cells, and hydrogen. In 2021, it significantly increased its use of biofuels such as biomethane from dairy farms. Chevron has claimed to be the world's largest producer of geothermal energy. The company's primary geothermal operations were located in Southeast Asia, but these assets were sold in 2017. Previously, Chevron operated geothermal wells in Indonesia providing power to Jakarta and the surrounding area.
In the Philippines, Chevron also operated geothermal wells at the Tiwi field in Albay province and the Makiling-Banahaw field in Laguna and Quezon provinces. In 2007, Chevron and the United States Department of Energy's National Renewable Energy Laboratory (NREL) began collaborating to develop and produce algae fuel, which could be converted into transportation fuels such as jet fuel. In 2008, Chevron and Weyerhaeuser created Catchlight Energy LLC, which researches the conversion of cellulose-based biomass into biofuels. In 2013, the Catchlight plan was downsized due to competition with fossil fuel projects for funds. Between 2006 and 2011, Chevron contributed up to $12 million to a strategic research alliance with the Georgia Institute of Technology to develop cellulosic biofuels and to create a process to convert biomass like wood or switchgrass into fuels. Additionally, Chevron holds a 22% stake in Galveston Bay Biodiesel LP, which produces up to 110 million US gallons (420,000 m3) of renewable biodiesel fuel a year. In 2010, Chevron announced a 740 kW photovoltaic demonstration project in Bakersfield, California, called Project Brightfield, to explore the possibility of using solar power to run Chevron's facilities. It consists of technologies from seven companies, which Chevron is evaluating for large-scale use. In Fellows, California, Chevron has invested in the 500 kW Solarmine photovoltaic solar project, which supplies daytime power to the Midway-Sunset Oil Field. In Questa, Chevron has built a 1 MW concentrated photovoltaic plant that comprises 173 solar arrays using Fresnel lenses. In October 2011, Chevron launched a 29 MW thermal solar-to-steam facility in the Coalinga Field to produce steam for enhanced oil recovery. As of 2012, the project is the largest of its kind in the world. In 2014, Chevron began reducing its investment in renewable energy technologies, reducing headcount and selling alternative energy-related assets. In 2015, the Shell Canada Quest Energy project, in which Chevron Canada Limited holds a 20% share, was launched. The project is based within the Athabasca Oil Sands Project near Fort McMurray, Alberta, and is the world's first CCS project on a commercial scale.
Corporate affairs
Finances
For the fiscal year 2011, Chevron reported earnings of US$26.9 billion, with an annual revenue of US$257.3 billion, an increase of 23.3% over the previous fiscal cycle. Chevron's shares traded at over $105 per share, and its market capitalization was valued at over US$240 billion. As of 2018, Chevron is ranked No. 13 on the Fortune 500 rankings of the largest United States corporations by total revenue.
Headquarters and Offices
California
Chevron's corporate headquarters are located on a 92-acre campus at 6001 Bollinger Canyon Road in San Ramon, California. The company moved there in 2002 from its earlier headquarters at 555 Market Street in San Francisco, California, the city where it had been located since its inception in 1879. Chevron sold its San Ramon headquarters to the local Sunset Development Co. in September 2022, from which it had originally bought the land on which the Bollinger Canyon Road headquarters stands, and is planning to lease space in San Ramon's Bishop Ranch, also owned by Sunset, as its new headquarters.
Texas
Chevron operates from office towers in Houston, Texas, where it purchased 1500 Louisiana Street and 1400 Smith Street, the former headquarters of failed Texas energy giant Enron.
Chevron also planned a new office tower in downtown Houston next to its existing properties at 1600 Louisiana Street. The building was planned to stand 50 stories and 832 feet tall; upon completion, it would have been the fourth-tallest building in Houston and the first 50-story building constructed there in nearly 30 years. However, Chevron's contract with the state government of Texas did not require it to follow through with the plans and build the office tower, and Chevron announced in 2016 that it had no plans to build it; the site remains undeveloped as of July 2022. When Chevron announced in 2022 that it was selling its San Ramon headquarters and offered to cover moving costs for employees who wished to relocate to Texas, interest revived in the possibility that Chevron might follow through with building a tower at 1600 Louisiana.
Political contributions
Since January 2011, Chevron has spent almost $15 million on Washington lobbying. On October 7, 2012, Chevron donated $2.5 million to the Republican Congressional Leadership Fund, a super PAC closely tied to former House Speaker John Boehner. According to watchdog group Documented, in 2020 Chevron contributed $50,000 to the Rule of Law Defense Fund, a fund-raising arm of the Republican Attorneys General Association.
Leadership
Chevron's current chairman and CEO is Mike Wirth.
Current Board of directors
Wanda Austin
John B. Frank
Alice P. Gast
Enrique Hernandez Jr.
Marillyn Hewson
Jon M. Huntsman Jr.
Charles Moorman
Dambisa Moyo
Debra Reed-Klages
Ronald Sugar (Lead independent director)
Inge Thulin
Jim Umpleby
Mike Wirth (Chairman & CEO)
Controversies
Chevron has been widely criticized and attacked for scandals, accidents, and activities mostly related to climate change. Chevron has been fined by the government of Angola, for oil spills within its waters, and by the United States through the EPA for violations of the US Clean Air Act and pollutive activities arising out of its Richmond Refinery in California. In terms of total pollution released by the company, a 2019 report put Chevron's cumulative carbon dioxide emissions at over 43 billion tons. On multiple occasions, authorities in oil-producing countries have fired on protesters demonstrating against Chevron.
Lago Agrio and Steven Donziger
Chevron's most widely known scandal involves Texaco's activities in the Lago Agrio oil field in Ecuador, for which Chevron is deemed responsible due to its acquisition of Texaco in 2001. Chevron has been most widely criticized for its handling of litigation filed against it by residents of the Lago Agrio region, which included what activists see as the "jailing" of Lago Agrio lawyer Steven Donziger. Protesters also hold an annual Anti-Chevron Day, usually within a week of Chevron's annual meeting of shareholders.
See also
Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc.
Climate appraisal
Climate risk management
Global warming
Jack 2
Patent encumbrance of large automotive NiMH batteries
Texaco
Trans-Caribbean pipeline
Notes
References
External links
Official website
Business data for Chevron Corporation
bhadla solar park
The Bhadla Solar Park is a solar power plant located in the Thar Desert of Rajasthan, India. It covers an area of 56 square kilometers and has a total installed capacity of 2,245 megawatts (MW), making it the largest solar park in the world as of 2023. The park was developed in four phases since 2015, with $775 million in funding from the Climate Investment Fund and $1.4 billion in funding from other sources. The park contributes to India's renewable energy goals and helps reduce greenhouse gas emissions by an estimated 4 million tons per year. Development The Bhadla Solar Park was initiated by the Rajasthan Renewable Energy Corporation Limited (RRECL), a joint venture between the Government of Rajasthan and the Ministry of New and Renewable Energy (MNRE). The RRECL identified Bhadla, a remote area in the Phalodi tehsil of Jodhpur district, as a suitable site for solar power generation due to its high solar irradiance, low population density, and availability of government-owned land. In Phase I in 2017, NTPC Limited auctioned 420 MW to several developers including Fortum of Finland. In Phase II, Solar Energy Corporation of India (SECI) auctioned 250 MW which includes the under construction AMP Energy Bhadla Solar Power Plant and NTPC Bhadla Solar Power Plant. In Phase III on May 11, 2017, ACME Power won 200 MW and Softbank Group (SBG) won 300 MW. In Phase IV on May 9, 2017, Phelan Energy Group won 50 MW, Avaada Energy won 100 MW and SBG Cleantech consortium won 100 MW. SECI tendered bids for the remaining 750 MW in June 2017. After its completion in December 2018, the solar park achieved a total installed capacity of 2,055 MW, making it the largest solar park in the world as of 2023. Impact The Bhadla Solar Park is one of the projects of India's National Solar Mission, which aims to install 100 gigawatts (GW) of solar power by 2022. The park also helps India meet its commitments under the Paris Agreement to reduce its carbon intensity by 33-35% by 2030.According to a study by the World Bank, the park has multiple benefits for the local economy and environment. The park has created about 10,000 direct and indirect jobs during construction and operation. The park has also improved the quality and reliability of electricity supply in the region. The park has also reduced the dependence on fossil fuels and avoided about 4 million tons of carbon dioxide emissions per year. Challenges The Bhadla Solar Park has faced some challenges due to its location and scale. One of the main challenges has been dust accumulation on the solar panels, which reduces their efficiency and output. The park is also located in an arid region that experiences frequent dust storms and sandstorms. See also Ultra Mega Solar Power Projects Solar power in India References External links SOLAR PARK, BHADLA, PHASE-II World's largest solar power plant
cma cgm benjamin franklin
CMA CGM Benjamin Franklin is an Explorer class containership built for CMA CGM. Delivered in November 2015, she is named after Benjamin Franklin, one of the Founding Fathers of the United States. She is one of the largest container cargo vessels, capable of carrying 18,000 TEU. Routes The December 26, 2015 arrival at the Port of Los Angeles marked the first time a ship of this size had been used in North America - previous routes for a ship this large were only from Asia to Northern Europe. Benjamin Franklin is about a third larger than the biggest container ships that typically visit the deep water ports of southern California. After departing Long Beach, she traveled north along the West Coast of the United States, arriving at the Port of Oakland near San Francisco on December 31, and at the Port of Seattle on February 29, 2016. The U.S. tour was short-lived, however, as CMA CGM postponed deployment of megaships to the West Coast in May 2016, citing a lack of demand.In regard to U.S. port infrastructure, Marc Bourdon, president of CMA CGM American operations, states that while some ports are ready, most ports would have difficulty with such a large ship, and that ports would need to be modified before permanent large-scale implementation could commence. One such necessary modification is for the terminal cranes to be able to reach containers stacked to the full capacity of 10 high. Engine The Benjamin Franklin is propelled by a MAN B&W 11S90ME-C9.2, a low speed, two-stroke diesel engine. The MAN B&W 11S90ME-C9.2 was designed in house by MAN B&W with the goals of the lowest possible operational costs and fuel consumption at any load and any prevailing condition. MAN B&W ME engines have design and performance characteristics in order to comply with International Maritime Organization Tier II emission regulations. Concept of engine control system This engine has electronic hydraulic activation in order to increase flexibility with regard to fuel injection and exhaust valve activation. These two factors are adjusted in order to match operating conditions that a ship may be under. This electronic control of the start to finish of the combustion process is known as the Engine Control System. Fuel injection controls the speed in which the engine will be operating at, while the exhaust valves control the exit of combustion gases from the cylinder. The actuators for fuel injection and the exhaust valves are electronically controlled by the Engine Control System. For fuel injection, a plunger powered by a hydraulic piston is started with oil pressure. Oil pressure is regulated by a valve that is electronically controlled through the Engine Control System. Exhaust valves are opened by a two-stage exhaust valve actuator which is activated by oil delivered from an electronically controlled valve. By controlling these valves electronically, the combustion process is fully controlled by the Engine Control System. Intake and exhaust air Cylinder The frame of the cylinder is cast and fitted with access covers for the routine cleaning of the scavenge ports on the cylinder liner, a unique characteristic of two-stroke engines. The cylinder liner is produced with alloyed cast iron and contains scavenge ports as well as drilled holes for lubrication. Scavenge air system Air intake takes directly from the engine room to the turbocharger fitted to the engines. From the turbocharger, air is sent to be cooled through a sea water cooler then stored in a scavenge air receiver. 
This air is then sent to the scavenge ports on the cylinder liners.
Exhaust gas system
Exhaust from the exhaust valves is sent to a receiver to equalize the different pressures produced by each cylinder. The exhaust gases are then sent through turbochargers.
Environmental initiatives
The CMA CGM Benjamin Franklin was designed to meet CMA CGM's sustainability goals of reducing greenhouse gas emissions and protecting biodiversity. CMA CGM has reduced its carbon dioxide emissions by 50% since 2005.
Limited greenhouse gas emissions
Through the electronically controlled fuel injection of MAN B&W's Engine Control System, fuel consumption has been reduced by 3% and oil consumption by 25% on average. At low loads, the main engine utilizes an exhaust by-pass system that improves energy efficiency through slow-steaming; when used at slow speeds, fuel consumption is reduced by an average of 1.5%. Benjamin Franklin's hydrodynamics, or water flow, is improved by the addition of a bulbous shape on the rudder. This variation of the rudder, known as a twisting leading edge rudder, improves energy performance by 4%.
Biodiversity preservation
Ballast water onboard the CMA CGM Benjamin Franklin is treated directly when pumped on board and again during ballast operations. The ballast water is filtered and passed under UV lamps in order to detect any living organisms. These living organisms are then killed in order to prevent the species from invading natural habitats. No chemical products are ejected back into the sea by the Benjamin Franklin's ballast system. For additional treatment of bilge, grey water, and engine water, the Benjamin Franklin is fitted with an additional decanting tank. This decanting tank is able to separate water from undesirable liquids by letting all of the liquids settle; the waste products in the liquids are then removed and the remaining liquid is sent for further filtering. The Benjamin Franklin is also fitted with CMA CGM's Fast Oil Recovery System, which is used to manage pollution on board the vessel.
Issues of vessel size
Vessel size transformation
The initial deployment of large container ships was seen in the Asia-Europe trade. In the second half of 2010, the vessels in service on Maersk Line's Asia-Europe trade were 8,500 TEU. By 2013, the vessels in service on Maersk Line's Asia-Europe trade had increased to 18,000 TEU. Until the CMA CGM Benjamin Franklin berthed in the Port of Los Angeles in late 2015, nearly all of the world's largest container ships (18,000+ TEU) were deployed in the Asia-Europe trade. However, 10,000+ TEU vessels represent only 10% of the total global fleet. In comparison, Maersk's Asia-Pacific trade vessels increased from 8,600 TEU to 9,400 TEU in August 2013.
U.S. ports
The average vessel size for U.S. port calls as of 2015 was less than 6,000 TEU. More recently, in 2016, container ships ranging from 12,000 to 14,000 TEU have been calling at U.S. ports in California. Notably, the CMA CGM Benjamin Franklin is the largest vessel ever to call at a U.S. port. The Federal Maritime Commission has recognized the trend toward increasing vessel size, and U.S. ports have begun preparations in anticipation of the increased arrival of larger vessels.
Major ports in the United States have begun dredging for 50 ft berth/channel depths and heightening their crane fleets in order to accommodate the arrival of larger vessels.
References
External links
Benjamin Franklin is one massive container ship that will start carrying cargo from May 2016
emissions & generation resource integrated database
The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. eGRID is issued by the U.S. Environmental Protection Agency (EPA). As of January 2023, the available editions of eGRID contain data for years 2021, 2020, 2019, 2018, 2016, 2014, 2012, 2010, 2009, 2007, 2005, 2004, and 1996 through 2000. eGRID is unique in that it links air emissions data with electric generation data for United States power plants. History eGRID2021 was released by EPA on January 30, 2023. It contains year 2021 data. eGRID2020 was released by EPA on January 27, 2022. It contains year 2020 data. eGRID2019 was released by EPA on February 23, 2021. It contains year 2019 data. eGRID2018 was released by EPA on January 28, 2020 and eGRID2018v2 was released on March 9, 2020. It contains year 2018 data. eGRID2016 was released by EPA on February 15, 2018. It contains year 2016 data. eGRID2014 was released by EPA on January 13, 2017. It contains year 2014 data. eGRID2012 was released by EPA on October 8, 2015. It is the 10th edition and contains year 2012 data. eGRID2010 Version 1.0 with year 2010 data was released on February 24, 2014. eGRID2009 Version 1.0, with year 2009 data was released on May 10, 2012. eGRID2007 Version 1.0 was released on February 23, 2011 and Version 1.1 was released May 20, 2011. eGRID2005 Version 1.0 was released in October 2008 and Version 1.1 was released in January 2009. eGRID2004 Version 1.0 was released in December 2006; Version 2.0 was released in early April 2007; and Version 2.1, was released in late April 2007 and updated for typos in May 2007. eGRID2000 Version 1.0 was released in December 2002; Version 2.0 was released in April 2003; and Version 2.01 was released in May 2003. (eGRID2000 replaced eGRID versions 1996 through 1998). eGRID1998 was released in March and September 2001. eGRID1997 was released in December 1999. eGRID1996 was first released in December 1998. Data summary eGRID data include emissions, different types of emission rates, electricity generation, resource mix, and heat input. eGRID data also include plant identification, location, and structural information. The emissions information in eGRID include carbon dioxide (CO2), nitrogen oxides (NOx), sulfur dioxide (SO2), mercury (Hg), methane (CH4), nitrous oxide (N2O),and carbon dioxide equivalent (CO2e). CO2, CH4, and N2O are greenhouse gases (GHG) that contribute to global warming or climate change. NOx and SO2 contribute to unhealthy air quality and acid rain in many parts of the country. eGRID's resource mix information includes the following fossil fuel resources: coal, oil, gas, other fossil; nuclear resources; and the following renewable resources: hydroelectric (water), biomass (including biogas, landfill gas and digester gas), wind, solar, and geothermal. eGRID data is presented as an Excel workbook with data worksheets and a table of contents. The eGRID workbook contains data at the unit, generator, and plant levels and aggregated data by state, power control area, eGRID subregion, NERC region, and U.S. The workbook also includes a worksheet that displays the grid gross loss (%). Additional documentation is also provided with each eGRID release such as, a Technical Guide (PDF), Summary Tables, eGRID subregion map (JPG), NERC region Map (JPG), and release notes (TXT). These files are available as separate downloadable files or all of them are contained in a ZIP file. 
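For readers who work with the workbook programmatically, a plant-level worksheet can be loaded with a spreadsheet library such as pandas, as in the sketch below. The file name, sheet name, header offset, and column names are illustrative assumptions only; the actual worksheet and field names vary by release and are documented in the Technical Guide.

```python
# Minimal sketch: load a plant-level eGRID worksheet and aggregate CO2 by state.
# The file name, sheet name, header offset, and column names are assumptions
# for illustration only; consult the Technical Guide of the specific eGRID
# release for the actual worksheet and field names.
import pandas as pd

# Hypothetical file and worksheet names for a 2021-style release.
plants = pd.read_excel("eGRID2021_data.xlsx", sheet_name="PLNT21", skiprows=1)

# Hypothetical field names: "PSTATABB" = plant state, "PLCO2AN" = annual CO2 (tons).
co2_by_state = (
    plants.groupby("PSTATABB")["PLCO2AN"]
    .sum()
    .sort_values(ascending=False)
)
print(co2_by_state.head(10))
```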
Similar files can be downloaded for a given year's eGRID release from EPA's eGRID website. The primary data sources used for eGRID include data reported by electric generators to EPA's Clean Air Markets Division (pursuant to 40 CFR Part 75) and to the U.S. Energy Information Administration (EIA).
Data use
eGRID data are used for carbon footprinting; for emission reduction calculations; and for calculating indirect greenhouse gas emissions for The Climate Registry, the California Climate Action Registry, California's mandatory GHG emissions reporting program (Global Warming Solutions Act of 2006, AB 32), and other GHG protocols. eGRID data were also used as the starting point for the international carbon emissions database CARMA. EPA tools and programs such as Power Profiler, Portfolio Manager, the WasteWise Office Carbon Footprint Tool, the Green Power Equivalency Calculator, the Personal Greenhouse Gas Emissions Calculator, and the Greenhouse Gas Equivalencies Calculator use eGRID. Other tools such as labeling/environmental disclosure, Renewable Portfolio Standards (RPS), and Renewable Energy Credit (REC) attributes are supported by eGRID data. States also rely on eGRID data for electricity labeling (environmental disclosure programs), emissions inventories, and policy decisions such as output-based standards. eGRID is additionally used for tools and analysis by nongovernmental organizations including the International Council for Local Environmental Initiatives (ICLEI), the Northeast States for Coordinated Air Use Management (NESCAUM), the Rocky Mountain Institute, the Natural Resources Defense Council (NRDC), the Ozone Transport Commission (OTC), Powerscorecard.org, and the Greenhouse Gas Protocol Initiative. Executive Order 13514, issued in 2009, required Federal agencies to "measure, report, and reduce their greenhouse gas emissions from direct and indirect activities." The Federal GHG Accounting and Reporting Guidance accompanied this order and recommended using eGRID non-baseload emission rates to estimate the Scope 2 (indirect) emission reductions from renewable energy.
See also
Air pollution
Combined Heat and Power (CHP)
Combined cycle
Electric power
Electric utility
Electrical power industry
Electricity generation
External combustion engine
Gas turbine
Power station
Renewable energy
Steam turbine
References
External links
EIA's Electricity Database Files
EPA's Clean Air Markets - Data and Maps
EPA's Clean Energy Homepage
EPA's Climate Change Homepage
EPA's eGRID paper "How to use eGRID for Carbon Footprinting Electricity Purchases in Greenhouse Gas Emission Inventories"
EPA's eGRID website
EPA's Power Profiler
EPA's Energy Star Portfolio Manager
EPA's Acid Rain Program
EPA's Combined Heat and Power Partnership Homepage
Executive Order 13514
Federal GHG Accounting and Reporting Guidance
Greenhouse Gas Equivalencies Calculator
Northeast States for Coordinated Air Use Management (NESCAUM)
Ozone Transport Commission (OTC)
Personal Greenhouse Gas Emissions Calculator
Powerscorecard.org
World Business Council for Sustainable Development
World Resources Institute Homepage
thin-film solar cell
Thin-film solar cells are made by depositing one or more thin layers (thin films or TFs) of photovoltaic material onto a substrate, such as glass, plastic or metal. Thin-film solar cells are typically a few nanometers (nm) to a few microns (µm) thick–much thinner than the wafers used in conventional crystalline silicon (c-Si) based solar cells, which can be up to 200 µm thick. Thin-film solar cells are commercially used in several technologies, including cadmium telluride (CdTe), copper indium gallium diselenide (CIGS), and amorphous thin-film silicon (a-Si, TF-Si). Solar cells are often classified into so-called generations based on the active (sunlight-absorbing) layers used to produce them, with the most well-established or first-generation solar cells being made of single- or multi-crystalline silicon. This is the dominant technology currently used in most solar PV systems. Most thin-film solar cells are classified as second generation, made using thin layers of well-studied materials like amorphous silicon (a-Si), cadmium telluride (CdTe), copper indium gallium selenide (CIGS), or gallium arsenide (GaAs). Solar cells made with newer, less established materials are classified as third-generation or emerging solar cells. This includes some innovative thin-film technologies, such as perovskite, dye-sensitized, quantum dot, organic, and CZTS thin-film solar cells. Thin-film cells have several advantages over first-generation silicon solar cells, including being lighter and more flexible due to their thin construction. This makes them suitable for use in building-integrated photovoltaics and as semi-transparent, photovoltaic glazing material that can be laminated onto windows. Other commercial applications use rigid thin film solar panels (interleaved between two panes of glass) in some of the world's largest photovoltaic power stations. Additionally, the materials used in thin-film solar cells are typically produced using simple and scalable methods more cost-effective than first-generation cells, leading to lower environmental impacts like greenhouse gas (GHG) emissions in many cases. Thin-film cells also typically outperform renewable and non-renewable sources for electricity generation in terms of human toxicity and heavy-metal emissions. Despite initial challenges with efficient light conversion, especially among third-generation PV materials, as of 2023 some thin-film solar cells have reached efficiencies of up to 29.1% for single-junction thin-film GaAs cells, exceeding the maximum of 26.1% efficiency for standard single-junction first-generation solar cells. Multi-junction concentrator cells incorporating thin-film technologies have reached efficiencies of up to 47.6% as of 2023. Still, many thin-film technologies have been found to have shorter operational lifetimes and larger degradation rates than first-generation cells in accelerated life testing, which has contributed to their somewhat limited deployment. Globally, the PV marketshare of thin-film technologies remains around 5% as of 2023. However, thin-film technology has become considerably more popular in the United States, where CdTe cells alone accounted for nearly 30% of new utility-scale deployment in 2022. History Early research into thin-film solar cells began in the 1970s. In 1970, Zhores Alferov's team at Ioffe Institute created the first gallium arsenide (GaAs) solar cells, later winning the 2000 Nobel prize in Physics for this and other work. Two years later in 1972, Prof. 
Karl Böer founded the Institute of Energy Conversion (IEC) at the University of Delaware to further thin-film solar research. The institute first focused on copper sulfide/cadmium sulfide (Cu2S/CdS) cells and later expanded to zinc phosphide (Zn3P2) and amorphous silicon (a-Si) thin-films as well in 1975. In 1973, the IEC debuted a solar-powered house, Solar One, in the first example of residential building-integrated photovoltaics. In the next decade, interest in thin-film technology for commercial use and aerospace applications increased significantly, with several companies beginning development of amorphous silicon thin-film solar devices. Thin-film solar efficiencies rose to 10% for Cu2S/CdS in 1980, and in 1986 ARCO Solar launched the first commercially-available thin-film solar cell, the G-4000, made from amorphous silicon.In the 1990s and 2000s, thin-film solar cells saw significant increases in maximum efficiencies and expansion of existing thin-film technologies into new sectors. In 1992, a thin-film solar cell with greater than 15% efficiency was developed at University of South Florida. Only seven years later in 1999, the U.S. National Renewable Energy Laboratory (NREL) and Spectrolab collaborated on a three-junction gallium arsenide solar cell that reached 32% efficiency. That same year, Kiss + Cathcart designed transparent thin-film solar cells for some of the windows in 4 Times Square, generating enough electricity to power 5-7 houses. In 2000, BP Solar introduced two new commercial solar cells based on thin-film technology. In 2001, the first organic thin-film solar cells were developed at the Johannes Kepler University of Linz. In 2005, GaAs solar cells got even thinner with the first free-standing (no substrate) cells introduced by researchers at Radboud University.This was also a time of significant advances in the exploration of new third-generation solar materials–materials with the potential to overcome theoretical efficiency limits for traditional solid-state materials. In 1991, the first high-efficiency dye-sensitized solar cell was developed, replacing the ordinary solid semiconducting (active) layer of the cell with a liquid electrolyte mixture containing light-absorbing dye. In the early 2000s, development of quantum dot solar cells began, technology later certified by NREL in 2011. In 2009, researchers at the University of Tokyo reported a new type solar cell using perovskites as the active layer and achieving over 3% efficiency, building on Murase Chikao's 1999 work which created a perovskite layer capable of absorbing light.In the 2010s and early 2020s, innovation in thin-film solar technology has included efforts to expand third-generation solar technology to new applications and to decrease production costs, as well as significant efficiency improvements for both second and third generation materials. In 2015, Kyung-In Synthetic released the first inkjet solar cells, flexible solar cells made with industrial printers. In 2016, Vladimir Bulović's Organic and Nanostructured Electronics (ONE) Lab at the Massachusetts Institute of Technology (MIT) created thin-film cells light enough to sit on top of soap bubbles. In 2022, the same group introduced flexible organic thin-film solar cells integrated into fabric.Thin-film solar technology captured a peak global market share of 32% of the new photovoltaic deployment in 1988 before declining for several decades and reaching another, smaller peak of 17% again in 2009. 
Market share then steadily declined to 5% globally in 2021; however, thin-film technology captured approximately 19% of the total U.S. market share in the same year, including 30% of utility-scale production.
Theory of operation
In a typical solar cell, the photovoltaic effect is used to generate electricity from sunlight. The light-absorbing or "active layer" of the solar cell is typically a semiconducting material, meaning that there is a gap in its energy spectrum between the valence band of localized electrons around host ions and the conduction band of higher-energy electrons which are free to move throughout the material. For most semiconducting materials at room temperature, electrons which have not gained extra energy from another source will exist largely in the valence band, with few or no electrons in the conduction band. When a solar photon reaches the semiconducting active layer in a solar cell, electrons in the valence band can absorb the energy of the photon and be excited into the conduction band, allowing them to move freely throughout the material. When this happens, an empty electron state (or hole) is left behind in the valence band. Together, the conduction band electron and the valence band hole are called an electron-hole pair. Both the electron and the hole in the electron-hole pair can move freely throughout the material as electricity. However, if the electron-hole pair is not separated, the electron and hole can recombine into the lower-energy original state, releasing a photon of the corresponding energy. In thermodynamic equilibrium, the forward process (absorbing a photon to excite an electron-hole pair) and reverse process (emitting a photon to destroy an electron-hole pair) must occur at the same rate by the principle of detailed balance. Therefore, to construct a solar cell from a semiconducting material and extract current during the excitation process, the electron and hole of the electron-hole pair must be separated. This can be achieved in a variety of different ways, but the most common is with a p-n junction, where a positively doped (p-type) semiconducting layer and a negatively doped (n-type) semiconducting layer meet, creating a chemical potential difference which draws electrons in one direction and holes in the other, separating the electron-hole pair. This may instead be achieved using metal contacts with different work functions, as in a Schottky-junction cell.
In a thin-film solar cell, the process is largely the same, but the active semiconducting layer is made much thinner. This may be made possible by some intrinsic property of the semiconducting material used that allows it to convert a particularly large number of photons per thickness. For example, some thin-film materials have a direct bandgap, meaning the conduction and valence band electron states are at the same momentum instead of at different momenta, as in the case of an indirect bandgap semiconductor like silicon. Having a direct bandgap eliminates the need for a source or sink of momentum (typically a lattice vibration, or phonon), simplifying the two-step process of absorbing a photon into a single-step process. Other thin-film materials may be able to absorb more photons per thickness simply due to having an energy bandgap that is well matched to the peak energy of the solar spectrum, meaning there are many solar photons of the correct energy available to excite electron-hole pairs.
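As a rough numerical illustration of the bandgap-matching argument above, the energy of a photon can be compared directly with an absorber's bandgap using E ≈ 1240 eV·nm / λ. The Python sketch below uses approximate textbook bandgap values purely for illustration.

```python
# Sketch: compare photon energies with approximate absorber bandgaps.
# E_photon (eV) ~ 1240 / wavelength (nm); bandgap values are rough
# textbook figures used only for illustration.
bandgaps_ev = {"a-Si": 1.7, "CdTe": 1.5, "c-Si": 1.1}

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron-volts for a wavelength in nanometres."""
    return 1240.0 / wavelength_nm

for wavelength in (500, 700, 900, 1100):  # nm, visible to near-infrared
    energy = photon_energy_ev(wavelength)
    absorbers = [name for name, gap in bandgaps_ev.items() if energy >= gap]
    print(f"{wavelength} nm photon ({energy:.2f} eV) can excite: {absorbers}")
```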
In other thin-film solar cells, the semiconducting layer may be replaced entirely with another light-absorbing material, for example an electrolyte solution and photo-active dye molecules in a dye-sensitized solar cell or by quantum dots in a quantum dot solar cell. Materials Thin-film technologies reduce the amount of active material in a cell. The active layer may be placed on a rigid substrate made from glass, plastic, or metal or the cell may be made with a flexible substrate like cloth. Thin-film solar cells tend to be cheaper than crystalline silicon cells and have a smaller ecological impact (determined from life cycle analysis). Their thin and flexible nature also makes them ideal for applications like building-integrated photovoltaics. The majority of film panels have 2-3 percentage points lower conversion efficiencies than crystalline silicon, though some thin-film materials outperform crystalline silicon panels in terms of efficiency. Cadmium telluride (CdTe), copper indium gallium selenide (CIGS) and amorphous silicon (a-Si) are three of the most prominent thin-film technologies. Second-generation thin-film materials Cadmium telluride Cadmium telluride (CdTe) is a chalcogenide material that is the predominant thin film technology. With about 5 percent of worldwide PV production, it accounts for more than half of the thin film market. The cell's lab efficiency has also increased significantly in recent years and is on a par with CIGS thin film and close to the efficiency of multi-crystalline silicon as of 2013.: 24–25  Also, CdTe has the lowest energy payback time of all mass-produced PV technologies, and can be as short as eight months in favorable locations.: 31  CdTe also performs better than most other thin-film PV materials across many important environmental impact factors like global warming potential and heavy metal emissions. A prominent manufacturer is the US-company First Solar based in Tempe, Arizona, that produces CdTe-panels with an efficiency of about 18 percent. Although the toxicity of cadmium may not be that much of an issue and environmental concerns completely resolved with the recycling of CdTe modules at the end of their life time, there are still uncertainties and the public opinion is skeptical towards this technology. The usage of rare materials may also become a limiting factor to the industrial scalability of CdTe thin film technology. The rarity of tellurium—of which telluride is the anionic form—is comparable to that of platinum in the earth's crust and contributes significantly to the module's cost. Copper indium gallium selenide A copper indium gallium selenide solar cell or CIGS cell uses an absorber made of copper, indium, gallium, selenide (CIGS), while gallium-free variants of the semiconductor material are abbreviated CIS. Like CdTe, CIGS/CIS is a chalcogenide material. It is one of three mainstream thin-film technologies, the other two being cadmium telluride and amorphous silicon, with a lab-efficiency above 20 percent and a share of 0.8 percent in the overall PV market in 2021. A prominent manufacturer of cylindrical CIGS-panels was the now-bankrupt company Solyndra in Fremont, California. Traditional methods of fabrication involve vacuum processes including co-evaporation and sputtering. In 2008, IBM and Tokyo Ohka Kogyo Co., Ltd. 
(TOK) announced they had developed a new, non-vacuum, solution-based manufacturing process for CIGS cells and were aiming for efficiencies of 15% and beyond. Hyperspectral imaging has been used to characterize these cells. Researchers from IRDEP (Institute of Research and Development in Photovoltaic Energy), in collaboration with Photon etc., were able to determine the splitting of the quasi-Fermi level with photoluminescence mapping, while the electroluminescence data were used to derive the external quantum efficiency (EQE). Also, through a light beam induced current (LBIC) cartography experiment, the EQE of a microcrystalline CIGS solar cell could be determined at any point in the field of view. As of April 2019, the conversion efficiency record for a laboratory CIGS cell stands at 22.9%.
Silicon
There are three prominent silicon thin-film architectures:
Amorphous silicon cells
Amorphous / microcrystalline tandem cells (micromorph)
Thin-film polycrystalline silicon on glass.
Amorphous silicon
Amorphous silicon (a-Si) is a non-crystalline, allotropic form of silicon and the most well-developed thin-film technology to date. Thin-film silicon is an alternative to conventional wafer (or bulk) crystalline silicon. While chalcogenide-based CdTe and CIS thin-film cells have been developed in the lab with great success, there is still industry interest in silicon-based thin-film cells. Silicon-based devices exhibit fewer problems than their CdTe and CIS counterparts, such as the toxicity and humidity issues of CdTe cells and the low manufacturing yields of CIS due to material complexity. Additionally, due to political resistance to the use of non-"green" materials in solar energy production, there is no stigma in the use of standard silicon.
This type of thin-film cell is mostly fabricated by a technique called plasma-enhanced chemical vapor deposition. It uses a gaseous mixture of silane (SiH4) and hydrogen to deposit a very thin layer of only 1 micrometre (µm) of silicon on a substrate, such as glass, plastic or metal, that has already been coated with a layer of transparent conducting oxide. Other methods used to deposit amorphous silicon on a substrate include sputtering and hot wire chemical vapor deposition techniques.
a-Si is attractive as a solar cell material because it is an abundant, non-toxic material. It requires a low processing temperature and enables scalable production on flexible, low-cost substrates with little silicon material required. Due to its bandgap of 1.7 eV, amorphous silicon also absorbs a very broad range of the light spectrum, including infrared and even some ultraviolet, and it performs very well in weak light. This allows the cell to generate power in the early morning or late afternoon and on cloudy and rainy days, unlike crystalline silicon cells, which are significantly less efficient when exposed to diffuse and indirect daylight. However, the efficiency of an a-Si cell suffers a significant drop of about 10 to 30 percent during the first six months of operation. This is called the Staebler-Wronski effect (SWE), a typical loss in electrical output due to changes in photoconductivity and dark conductivity caused by prolonged exposure to sunlight. Although this degradation is perfectly reversible upon annealing at or above 150 °C, conventional c-Si solar cells do not exhibit this effect in the first place.
Its basic electronic structure is the p-i-n junction.
The amorphous structure of a-Si implies high inherent disorder and dangling bonds, making it a bad conductor for charge carriers. These dangling bonds act as recombination centers that severely reduce carrier lifetime. A p-i-n structure is usually used, as opposed to an n-i-p structure. This is because the mobility of electrons in a-Si:H is roughly 1 or 2 orders of magnitude larger than that of holes, and thus the collection rate of electrons moving from the n- to p-type contact is better than holes moving from p- to n-type contact. Therefore, the p-type layer should be placed at the top where the light intensity is stronger, so that the majority of the charge carriers crossing the junction are electrons. Tandem-cell using a-Si/μc-Si A layer of amorphous silicon can be combined with layers of other allotropic forms of silicon to produce a multi-junction solar cell. When only two layers (two p-n junctions) are combined, it is called a tandem-cell. By stacking these layers on top of one other, a broader range of the light spectra is absorbed, improving the cell's overall efficiency. In micromorphous silicon, a layer of microcrystalline silicon (μc-Si) is combined with amorphous silicon, creating a tandem cell. The top a-Si layer absorbs the visible light, leaving the infrared part to the bottom μc-Si layer. The micromorph stacked-cell concept was pioneered and patented at the Institute of Microtechnology (IMT) of the Neuchâtel University in Switzerland, and was licensed to TEL Solar. A new world record PV module based on the micromorph concept with 12.24% module efficiency was independently certified in July 2014. Because all layers are made of silicon, they can be manufactured using PECVD. The band gap of a-Si is 1.7 eV and that of c-Si is 1.1 eV. The c-Si layer can absorb red and infrared light. The best efficiency can be achieved at transition between a-Si and c-Si. As nanocrystalline silicon (nc-Si) has about the same bandgap as c-Si, nc-Si can replace c-Si. Tandem-cell using a-Si/pc-Si Amorphous silicon can also be combined with protocrystalline silicon (pc-Si) into a tandem-cell. Protocrystalline silicon with a low volume fraction of nanocrystalline silicon is optimal for high open-circuit voltage. These types of silicon present dangling and twisted bonds, which results in deep defects (energy levels in the bandgap) as well as deformation of the valence and conduction bands (band tails). Polycrystalline silicon on glass A new attempt to fuse the advantages of bulk silicon with those of thin-film devices is thin film polycrystalline silicon on glass. These modules are produced by depositing an antireflection coating and doped silicon onto textured glass substrates using plasma-enhanced chemical vapor deposition (PECVD). The texture in the glass enhances the efficiency of the cell by approximately 3% by reducing the amount of incident light reflecting from the solar cell and trapping light inside the solar cell. The silicon film is crystallized by an annealing step, temperatures of 400–600 Celsius, resulting in polycrystalline silicon. These new devices show energy conversion efficiencies of 8% and high manufacturing yields of >90%. Crystalline silicon on glass (CSG), where the polycrystalline silicon is 1–2 micrometres, is noted for its stability and durability; the use of thin film techniques also contributes to a cost savings over bulk photovoltaics. These modules do not require the presence of a transparent conducting oxide layer. 
This simplifies the production process twofold; not only can this step be skipped, but the absence of this layer makes the process of constructing a contact scheme much simpler. Both of these simplifications further reduce the cost of production. Despite the numerous advantages over alternative design, production cost estimations on a per unit area basis show that these devices are comparable in cost to single-junction amorphous thin film cells. Gallium arsenide Gallium arsenide (GaAs) is a III-V direct bandgap semiconductor and is a very common material used for single-crystalline thin-film solar cells. GaAs solar cells have continued to be one of the highest performing thin-film solar cells due to their exceptional heat resistant properties and high efficiencies. As of 2019, single-crystalline GaAs cells have shown the highest solar cell efficiency of any single-junction solar cell with an efficiency of 29.1%. This record-holding cell achieved this high efficiency by implementing a back mirror on the rear surface to increase photon absorption which allowed the cell to attain an impressive short-circuit current density and an open-circuit voltage value near the Shockley–Queisser limit. As a result, GaAs solar cells have nearly reached their maximum efficiency although improvements can still be made by employing light trapping strategies.GaAs thin-films are most commonly fabricated using epitaxial growth of the semiconductor on a substrate material. The epitaxial lift-off (ELO) technique, first demonstrated in 1978, has proven to be the most promising and effective. In this method, the thin film layer is peeled off of the substrate by selectively etching a sacrificial layer that was placed between the epitaxial film and substrate. The GaAs film and the substrate remain minimally damaged through the separation process, allowing for the reuse of the host substrate. With reuse of the substrate the fabrication costs can be reduced, but not completely forgone, since the substrate can only be reused a limited number of times. This process is still relatively costly and research is still being done to find more cost-effective ways of growing the epitaxial film layer onto a substrate. Despite the high performance of GaAs thin-film cells, the expensive material costs hinder their ability for wide-scale adoption in the solar cell industry. GaAs is more commonly used in multi-junction solar cells for solar panels on spacecraft, as the larger power to weight ratio lowers the launch costs in space-based solar power (InGaP/(In)GaAs/Ge cells). They are also used in concentrator photovoltaics, an emerging technology best suited for locations that receive much sunlight, using lenses to focus sunlight on a much smaller, thus less expensive GaAs concentrator solar cell. Third-generation (emerging) thin-film materials The National Renewable Energy Laboratory classifies a number of thin-film technologies as emerging photovoltaics—most of them have not yet been commercially applied and are still in the research or development phase. Many use organic materials, often organometallic compounds as well as inorganic substances. Though many of these technologies have struggled with instability and low efficiencies in their early stages, some emerging materials like perovskites have been able to attain efficiencies comparable to mono crystalline silicon cells. Many of these technologies have the potential to beat the Shockley–Queisser limit for efficiency of a single-junction solid-state cell. 
Significant research has been invested in these technologies, as they promise to achieve the goal of producing low-cost, high-efficiency solar cells with smaller environmental impacts.
Copper zinc tin sulfide (CZTS)
Copper zinc tin sulfide, or Cu2ZnSn(S,Se)4, commonly abbreviated CZTS, and its derivatives CZTSe and CZTSSe belong to a group of chalcogenides (like CdTe and CIGS/CIS) sometimes called kesterites. Unlike CdTe and CIGS, CZTS is made from abundant and non-toxic raw materials. Additionally, the bandgap of CZTS can be tuned by changing the S/Se ratio, which is a desirable property for engineering optimal solar cells. CZTS also has a high light absorption coefficient. Other emerging chalcogenide PV materials include antimony-based compounds like Sb2(S,Se)3. Like CZTS, they have tunable bandgaps and good light absorption. Antimony-based compounds also have a quasi-1D structure which may be useful for device engineering. All of these emerging chalcogenide materials have the advantage of being part of one of the most mature and efficient families of thin-film technology. As of 2022, CZTS cells have achieved a maximum efficiency of around 12.6%, while antimony-based cells have reached 9.9%.
Dye-sensitized (DSPV)
Dye-sensitized cells, also known as Grätzel cells or DSPV, are innovative cells that perform a kind of artificial photosynthesis, removing the need for a bulk solid-state semiconductor or a p-n junction. Instead, they are constructed using a layer of photoactive dye mixed with semiconductor transition metal oxide nanoparticles on top of a liquid electrolyte solution, surrounded by electrical contacts made of platinum or sometimes graphene and encapsulated in glass. When photons enter the cell, they can be absorbed by the dye molecules, putting them into their sensitized state. In this state, the dye molecules can inject electrons into the semiconductor conduction band. The dye electrons are then replenished by the electrode, preventing recombination of the electron-hole pair. The electron in the semiconductor flows out as current through the electrical contacts.
Dye-sensitized solar cells are attractive because they allow for cheap, roll-based manufacturing. In practice, however, the inclusion of expensive materials like platinum and ruthenium keeps these low costs from being achieved. Dye-sensitized cells also have issues with stability and degradation, particularly because of the liquid electrolyte: in high-temperature environments, the electrolyte may leak from the cell, while in low-temperature environments it may freeze. Some of these issues can be overcome using a quasi-solid-state electrolyte. As of 2023, the maximum realized efficiency of a dye-sensitized solar cell is around 13%.
Organic photovoltaics (OPV)
Organic solar cells use organic semiconducting polymers as the photoactive material. These organic polymers are cost-effective to produce and are tunable with high absorption coefficients. Organic solar cell manufacturing is also cost-effective and can make use of efficient roll-to-roll production techniques. They also have some of the lowest environmental impact scores of all PV technologies across a wide range of impact factors, including energy payback time and global warming potential. Organic cells are also naturally flexible, lending themselves well to many applications.
Scientists at the Massachusetts Institute of Technology (MIT)'s Organic and Nanostructured Electronics Lab (ONE Lab) have integrated organic PV onto flexible fabric substrates that can be unrolled over 500 times without degradation. However, organic solar cells are generally not very stable and tend to have low operational lifetimes. They also tend to be less efficient than other thin-film cells due to some intrinsic limits of the material, like a large binding energy for electron-hole pairs. As of 2023, the maximum achieved efficiency for organic solar cells is 18.2%.
Perovskite solar cells
Perovskites are a group of materials with a shared crystal structure, named after the Russian mineralogist Lev Perovski. The perovskites most often used for PV applications are organic-inorganic hybrid methylammonium lead halides, which offer a number of advantageous properties, including widely tunable bandgaps, high absorption coefficients, and good electronic transport properties for both electrons and holes. As of 2023, single-junction perovskite solar cells have achieved a maximum efficiency of 25.7%, rivaling that of monocrystalline silicon. Perovskites are also commonly used in tandem and multi-junction cells with crystalline silicon, CIGS, and other PV technologies to achieve even higher efficiencies. They also offer a wide spectrum of low-cost applications. However, perovskite cells tend to have short lifetimes, with 5 years being a typical lifetime as of 2016. This is mostly due to their chemical instability when exposed to light, moisture, UV radiation, and high temperatures, which may even cause them to undergo a structural transition that impacts the operation of the device. Therefore, proper encapsulation is very important.
Quantum dot photovoltaics (QDPV)
Quantum dot photovoltaics (QDPV) replace the usual solid-state semiconducting active layer with semiconductor quantum dots. The bandgap of the photo-active layer can be tuned by changing the size of the quantum dots. QDPV has the potential to generate more than one electron-hole pair per photon in a process called multiple exciton generation (MEG), which could allow for a theoretical maximum conversion efficiency of 87%, though as of 2023 the maximum achieved efficiency of a QDPV cell is around 18.1%. QDPV cells also tend to use much less of the active-layer material than other solar cell types, leading to a low-cost manufacturing process. However, QDPV cells tend to have high environmental impacts compared to other thin-film PV materials, especially in terms of human toxicity and heavy-metal emissions.
Applications
Transparent solar cells
In 2022, semitransparent solar cells as large as windows were reported, after team members of the study achieved record efficiency with high transparency in 2020. Also in 2022, other researchers reported the fabrication of solar cells with a record average visible transparency of 79%, making them nearly invisible.
Building-integrated photovoltaics
Thin-film PV materials tend to be lightweight and flexible in nature, which lends them naturally to building-integrated photovoltaics (BIPV). Common examples include the integration of semi-transparent modules into window designs and the use of rigid thin-film panels to replace roofing material. BIPV can greatly reduce the lifetime environmental impacts (such as greenhouse gas (GHG) emissions) of solar modules because of the emissions avoided by not using conventional building materials.
Efficiencies Despite initially lower efficiencies at the time of their introduction, many thin-film technologies now have efficiencies comparable to conventional single-junction non-concentrator crystalline silicon solar cells, which have a 26.1% maximum efficiency as of 2023. In fact, both GaAs thin-film and GaAs single-crystal cells have larger maximum efficiencies of 29.1% and 27.4%, respectively. The maximum efficiencies for single-junction non-concentrator thin-film cells of various prominent thin-film materials are shown in the chart. Commercial module efficiencies The maximum efficiencies achieved in a laboratory setting are generally higher than the efficiencies of manufactured cells, which often have efficiencies 20-50% lower. As of 2021, the maximum efficiency of manufactured solar cells was 24.4% for mono crystalline silicon, 20.4% for poly crystalline silicon, 12.3% for amorphous silicon, 19.2% for CIGS, and 19% for CdTe modules. The thin film cell prototype with the best efficiency yields 20.4% (First Solar), comparable to the best conventional solar cell prototype efficiency of 25.6% from Panasonic. A previous record for thin film solar cell efficiency of 22.3% was achieved by Solar Frontier, the world's largest CIS (copper indium selenium) solar energy provider. In joint research with the New Energy and Industrial Technology Development Organization (NEDO) of Japan, Solar Frontier achieved 22.3% conversion efficiency on a 0.5 cm2 cell using its CIS technology. This was an increase of 0.6 percentage points over the industry's previous thin-film record of 21.7%. Calculation of efficiency The efficiency of a solar cell quantifies the percentage of the light energy incident on the solar cell that is converted into usable electricity. Many factors affect the efficiency of a solar cell, so the efficiency may be further parametrized by additional numerical quantities including the short-circuit current, open-circuit voltage, maximum power point, fill factor, and quantum efficiency. The short-circuit current is the maximum current the cell can deliver when no voltage is applied across it. Similarly, the open-circuit voltage is the voltage across the device with no current or, alternatively, the voltage required for no current to flow. On a current vs. voltage (IV) curve, the open-circuit voltage is the horizontal intercept of the curve with the voltage axis and the short-circuit current is the vertical intercept of the curve with the current axis. The maximum power point is the point along the curve where the maximum power output of the solar cell is achieved. The fill factor is the ratio of this maximum power to the product of the open-circuit voltage and the short-circuit current; geometrically, it compares the area of the rectangle defined by the maximum power point with the area of the rectangle defined by the open-circuit voltage and short-circuit current, and it measures how much of that ideal power the cell actually delivers. Intuitively, IV curves with a more square shape and a flatter top and side will have a larger fill factor and therefore a higher efficiency. Whereas these parameters characterize the efficiency of the solar cell based mostly on its macroscopic electrical properties, the quantum efficiency measures either the ratio of the number of charge carriers extracted to the number of photons incident on the cell (external quantum efficiency) or the ratio of the number of charge carriers extracted to the number of photons absorbed by the cell (internal quantum efficiency). Either way, the quantum efficiency is a more direct probe of the microscopic structure of the solar cell.
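The quantities defined above can be read directly off a measured IV curve. The sketch below extracts them from a synthetic curve generated with an ideal single-diode model; the diode parameters, cell area, and the one-sun irradiance of 1000 W/m2 are illustrative assumptions, not data for any particular device.

import numpy as np

# Synthetic IV curve from a simple ideal-diode model (illustrative parameters only).
I_SC = 0.035          # short-circuit current in amperes (assumed)
I_0 = 1e-9            # diode saturation current in amperes (assumed)
V_T = 0.026           # thermal voltage at room temperature in volts
AREA_M2 = 0.0001      # cell area: 1 cm^2 (assumed)
IRRADIANCE = 1000.0   # one-sun irradiance in W/m^2

voltage = np.linspace(0, 0.7, 2000)
current = I_SC - I_0 * (np.exp(voltage / V_T) - 1)   # ideal single-diode equation

# Keep only the power-producing part of the curve (current >= 0).
mask = current >= 0
v, i = voltage[mask], current[mask]

v_oc = v[-1]                      # open-circuit voltage: last voltage before the current reaches zero
i_sc = i[0]                       # short-circuit current: current at zero voltage
power = v * i
p_max = power.max()               # maximum power point

fill_factor = p_max / (v_oc * i_sc)            # ratio of max power to Voc * Isc
efficiency = p_max / (IRRADIANCE * AREA_M2)    # fraction of incident light power converted

print(f"Voc = {v_oc:.3f} V, Isc = {i_sc * 1000:.1f} mA")
print(f"Fill factor = {fill_factor:.2f}, efficiency = {efficiency:.1%}")

Real measurements would replace the synthetic curve with sampled voltage-current pairs; the extraction logic stays the same.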
Increasing efficiency Some third-generation solar cells boost efficiency through the integration of concentrator and/or multi-junction device geometry. This can lead to efficiencies larger than the Shockley–Queisser limit of roughly 33% for a single-junction semiconductor solar cell under one-sun illumination. A multi-junction cell is one that incorporates multiple semiconducting active layers with different bandgaps. In a typical solar cell, a single absorber with a bandgap near the peak of the solar spectrum is used, and any photons with energy greater than or equal to the bandgap can excite valence-band electrons into the conduction band to create electron-hole pairs. However, any excess photon energy above the bandgap is quickly dissipated through thermalization, leading to voltage losses from the inability to efficiently extract the energy of high-energy photons. Multi-junction cells are able to recoup some of this energy lost to thermalization by stacking multiple absorber layers on top of each other, with the top layer absorbing the highest-energy photons and letting the lower-energy photons pass through to the lower layers with smaller bandgaps, and so on. This not only allows the cells to capture energy from photons in a larger range of energies, but also extracts more energy per photon from the higher-energy photons. Concentrator photovoltaics use an optical system of lenses that sit on top of the cell to focus light from a larger area onto the device, similar to a funnel for sunlight. In addition to creating more electron-hole pairs simply by increasing the number of photons available for absorption, having a higher concentration of charge carriers can increase the efficiency of the solar cell by increasing the conductivity. The addition of a concentrator to a solar cell can not only increase efficiency, but can also reduce the space, materials, and cost needed to produce the cell. Both of these techniques are employed in the highest-efficiency solar cell as of 2023, which is a four-junction concentrator cell with 47.6% efficiency. Increasing absorption Multiple techniques have been employed to increase the amount of light that enters the cell and reduce the amount that escapes without absorption. The most obvious technique is to minimize the top contact coverage of the cell surface, reducing the area that blocks light from reaching the cell. Weakly absorbed long-wavelength light can be obliquely coupled into silicon and made to traverse the film several times to enhance absorption. Multiple methods have been developed to increase absorption by reducing the number of incident photons being reflected away from the cell surface. An additional anti-reflective coating can cause destructive interference within the cell by modulating the refractive index of the surface coating. Destructive interference eliminates the reflected wave, causing all incident light to enter the cell. Surface texturing is another option for increasing absorption, but increases costs. By applying a texture to the active material's surface, the reflected light can be refracted into striking the surface again, thus reducing reflectance. For example, black silicon texturing by reactive ion etching (RIE) is an effective and economic approach to increase the absorption of thin-film silicon solar cells. A textured back reflector can prevent light from escaping through the rear of the cell.
Instead of applying the texturing on the active materials, photonic micro-structured coatings applied on the cells' front contact can be an interesting alternative for light-trapping, as they allow both geometric anti-reflection and light scattering while avoiding the roughening of the photovoltaic layers (thereby preventing increase of recombination).Besides surface texturing, the plasmonic light-trapping scheme attracted a lot of attention to aid photocurrent enhancement in thin film solar cells. This method makes use of collective oscillation of excited free electrons in noble metal nanoparticles, which are influenced by particle shape, size and dielectric properties of the surrounding medium. Applying noble-metal nanoparticles at the back of thin-film solar cells leads to the formation of plasmonic back reflectors, which allow broadband photocurrent enhancement. This is a result of both light scattering of the weakly-absorbed photons from the rear-located nanoparticles, plus improved light incoupling (geometric anti-reflection) caused by the hemispherical corrugations at the cells’ front surface formed from the conformal deposition of the cell materials over the particles.In addition to minimizing reflective loss, the solar cell material itself can be optimized to have higher chance of absorbing a photon that reaches it. Thermal processing techniques can significantly enhance the crystal quality of silicon cells and thereby increase efficiency. Layering thin-film cells to create a multi-junction solar cell can also be done. Each layer's band gap can be designed to best absorb a different range of wavelengths, such that together they can absorb a greater spectrum of light.Further advancement into geometric considerations can exploit nanomaterial dimensionality. Large, parallel nanowire arrays enable long absorption lengths along the length of the wire while maintaining short minority carrier diffusion lengths along the radial direction. Adding nanoparticles between the nanowires allows conduction. The natural geometry of these arrays forms a textured surface that traps more light. Production, cost and market With the advances in conventional crystalline silicon (c-Si) technology in recent years, and the falling cost of the polysilicon feedstock, that followed after a period of severe global shortage, pressure increased on manufacturers of commercial thin-film technologies, including amorphous thin-film silicon (a-Si), cadmium telluride (CdTe), and copper indium gallium diselenide (CIGS), leading to the bankruptcy of several companies. As of 2013, thin-film manufacturers continue to face price competition from Chinese refiners of silicon and manufacturers of conventional c-Si solar panels. Some companies together with their patents were sold to Chinese firms below cost. Market-share In 2013 thin-film technologies accounted for about 9 percent of worldwide deployment, while 91 percent was held by crystalline silicon (mono-Si and multi-Si). With 5 percent of the overall market, CdTe holds more than half of the thin-film market, leaving 2 percent to each CIGS and amorphous silicon.: 18–19 CIGS technology Several prominent manufacturers couldn't stand the pressure caused by advances in conventional c-Si technology of recent years. The company Solyndra ceased all business activity and filed for Chapter 11 bankruptcy in 2011, and Nanosolar, also a CIGS manufacturer, closed its doors in 2013. 
Although both companies produced CIGS solar cells, it has been pointed out that the failures were not due to the technology but rather to the companies themselves, which used flawed architectures, such as Solyndra's cylindrical substrates. In 2014, Korean LG Electronics terminated research on CIGS while restructuring its solar business, and Samsung SDI decided to cease CIGS production, while Chinese PV manufacturer Hanergy is expected to ramp up production capacity of its 15.5% efficient, 650 mm×1650 mm CIGS modules. One of the largest producers of CI(G)S photovoltaics is the Japanese company Solar Frontier, with a manufacturing capacity in the gigawatt scale. (Also see List of CIGS companies). CdTe technology The company First Solar, a leading manufacturer of CdTe, has been building several of the world's largest solar power stations, such as the Desert Sunlight Solar Farm and Topaz Solar Farm, both in the Californian desert with a 550 MW capacity each, as well as the 102-megawatt Nyngan Solar Plant in Australia, the largest PV power station in the Southern Hemisphere, commissioned in 2015. In 2011, GE announced plans to spend $600 million on a new CdTe solar cell plant and enter this market, and in 2013, First Solar bought GE's CdTe thin-film intellectual property portfolio and formed a business partnership. In 2012 Abound Solar, a manufacturer of cadmium telluride modules, went bankrupt. a-Si technology In 2012, ECD Solar, once one of the world's leading manufacturers of amorphous silicon (a-Si) technology, filed for bankruptcy in Michigan, United States. Swiss OC Oerlikon divested its solar division that produced a-Si/μc-Si tandem cells to Tokyo Electron Limited. Other companies that left the amorphous silicon thin-film market include DuPont, BP, Flexcell, Inventux, Pramac, Schuco, Sencera, EPV Solar, NovaSolar (formerly OptiSolar) and Suntech Power, which stopped manufacturing a-Si modules in 2010 to focus on conventional silicon solar panels. In 2013, Suntech filed for bankruptcy in China. In August 2013, the spot market price of thin-film a-Si and a-Si/µ-Si dropped to €0.36 and €0.46, respectively (about $0.50 and $0.60) per watt. Thin film solar on metal roofs With the increasing efficiencies of thin film solar, installing it on standing seam metal roofs has become cost-competitive with traditional monocrystalline and polycrystalline solar cells. The thin film panels are flexible, run down the standing seam metal roofs, and stick to the roof with adhesive, so no holes are needed for installation. The connection wires run under the ridge cap at the top of the roof. Efficiencies range from 10-18%, at a cost of about $2.00-$3.00 per watt of installed capacity, compared to monocrystalline panels, which are 17-22% efficient and cost $3.00-$3.50 per watt of installed capacity. Thin film solar is lightweight at 7-10 ounces per square foot. Thin film solar panels last 10–20 years but have a quicker return on investment than traditional solar panels; the metal roofs themselves last 40–70 years before replacement, compared to 12–20 years for an asphalt shingle roof. Cost In 1998, scientists at the National Renewable Energy Laboratory (NREL) predicted that production of thin-film PV systems at a cost of $50 per m2 could someday be possible, which would make them extremely economically viable.
At this price, thin-film PV systems would yield a return on investment of 30% or greater. To help achieve this goal, in 2022 NREL began administering the Cadmium Telluride Accelerator Consortium (CTAC) with the objective of enabling thin-film efficiencies above 24% with a cost below 20 cents per watt by 2025, followed by efficiencies above 26% and cost below 15 cents per watt by 2030. Durability and lifetime One of the significant drawbacks of thin-film solar cells as compared to mono crystalline modules is their shorter lifetime, though the extent to which this is an issue varies by material, with the more established thin-film materials generally having longer lifetimes. The standard lifetime of mono crystalline silicon panels is typically taken to be 30 years, with performance degradation rates of around 0.5% per year. Amorphous silicon thin-films tend to have comparable cell lifetimes but slightly higher performance degradation rates of around 1% per year. Chalcogenide technologies like CIGS and CIS tend to have similar lifetimes of 20–30 years and performance degradation rates just over 1% per year. Emerging technologies tend to have lower lifetimes. Organic photovoltaics had a maximum reported lifetime of 7 years and an average of 5 years in 2016, but typical lifetimes have increased to the range of 15–20 years as of 2020. Similarly, dye-sensitized cells had a maximum reported lifetime of 10 years in 2007, but typical lifetimes have increased to 15–30 years as of 2020. Perovskite cells tend to have short lifetimes, with 5 years being a typical lifetime as of 2016. The lifetime of quantum dot solar cells is unclear due to their developing nature, with some predicting lifetimes to reach 25 years and others setting a realistic lifetime as somewhere between 1 and 10 years. Some thin-film modules also have issues with degradation under various conditions. Nearly all solar cells experience performance decreases with increasing temperature over a reasonable range of operating temperatures. Established thin-film materials may experience smaller temperature-dependent performance decreases, with amorphous silicon being slightly more resistant than mono crystalline silicon, CIGS more resistant than amorphous silicon, and CdTe displaying the best resistance to performance degradation with temperature. Dye-sensitized solar cells are particularly sensitive to operating temperature, as high temperatures may cause the electrolyte solution to leak and low temperatures may cause it to freeze, leaving the cell inoperable. Perovskite cells also tend to be unstable at high temperatures and may even undergo structural changes that impact the operation of the devices. Beyond temperature-induced degradation, amorphous silicon panels additionally experience light-induced degradation, as do organic photovoltaic cells to an even larger extent. Quantum dot cells degrade when exposed to moisture or UV radiation. Similarly, perovskite cells are chemically unstable and degrade when exposed to high temperatures, light, moisture, or UV radiation. Organic cells are also generally considered somewhat unstable, though improvements have been made in the durability of organic cells, and as of 2022 flexible organic cells have been developed that can be unrolled 500 times without significant performance losses. Unlike other thin-film materials, CdTe tends to be fairly resilient to environmental conditions like temperature and moisture, but flexible CdTe panels may experience performance degradation under applied stresses or strains.
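A simple way to compare the degradation rates quoted above is to compound them over a module's life. The sketch below applies the approximate per-year rates mentioned in this section; real degradation is rarely perfectly exponential, so this should be read only as an order-of-magnitude comparison.

# Compound annual performance degradation for several module types.
# Rates follow the approximate figures quoted in this section; treat them as rough estimates.
degradation_rates = {
    "monocrystalline Si": 0.005,   # ~0.5% per year
    "amorphous Si": 0.01,          # ~1% per year
    "CIGS / CIS": 0.011,           # just over 1% per year
}

def remaining_output(rate_per_year: float, years: int) -> float:
    """Fraction of initial output left after compounding the annual loss."""
    return (1 - rate_per_year) ** years

for name, rate in degradation_rates.items():
    print(f"{name}: {remaining_output(rate, 25):.0%} of initial output after 25 years")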
Environmental and health impact In order to meet international renewable energy goals, the worldwide solar capacity must increase significantly. For example, to keep up with the International Energy Agency's goal of 4674 GW of solar capacity installed globally by 2050, significant expansion is required from the 1185 GW installed globally as of 2022. As thin-film solar cells have become more efficient and commercially viable, it has become clear that they will play an important role in meeting these goals. As such, it has become increasingly important to understand their cumulative environmental impact, both to compare between existing technologies and to identify key areas for improvement in developing technologies: for instance, to evaluate the effect of relatively shorter device lifetimes as compared to established solar modules, and to see whether increasing efficiencies or increasing device lifetimes has a larger influence on the total environmental impact of the technologies. Beyond key factors like greenhouse gas (GHG) emissions, questions have been raised about the environmental and health impacts of potentially toxic materials like cadmium that are used in many solar cell technologies. Many scientists and environmentalists have used life cycle analysis as a way to address these questions. Life cycle analysis Life cycle analysis (LCA) is a family of approaches that attempt to assess the total environmental impact of a product or process in an objective way, from the gathering of raw materials and the manufacturing process all the way to the disposal of the product and any waste products. Though LCA approaches aim to be unbiased, the outcome of LCA studies can be sensitive to the particular approach and data used. It is therefore important that LCA findings clearly state the assumptions made and which processes are included and excluded. This will often be specified using the system boundary and life cycle inventory framework. Due to the emerging nature of new photovoltaic technologies, the disposal process may sometimes be left out of a life cycle analysis due to the high uncertainty. In this case, the assessment is referred to as "cradle-to-gate" rather than "cradle-to-grave," because the calculated impact does not cover the full life cycle of the product. However, such studies may miss important environmental impacts from the disposal process, both negative (as in the case of incineration of end-of-life cells and waste products) and positive (as in the case of recycling). It is also important to include the effect of balance of system (BOS) steps, which include transportation, installation, and maintenance, as they may also be costly in terms of materials and electricity. LCA studies can be used to quantify many potential environmental impacts, from land use to transportation-related emissions. Categories of environmental impacts may be grouped into so-called impact factors for standardized quantitative comparison. For solar cells, perhaps the most important impact factor is the total lifetime greenhouse gas (GHG) emission. This is often reported in terms of the global warming potential (GWP), which gives a more direct indication of the environmental impact. Another important measure of environmental impact is the primary energy demand (PED), which measures the energy (usually electricity) required to produce a particular solar cell.
A more useful measure may be the cumulative energy demand (CED), which quantifies the total amount of energy required to produce, use, and dispose of a particular product over its entire lifetime. Relatedly, the energy payback time (EPBT) measures the operational time needed for a solar cell to produce enough energy to account for its cumulative energy demand. Similarly, the carbon payback time (CPBT) measures the operational time needed for a solar cell to produce enough electricity that the avoided carbon emissions from generating the same amount of electricity with the usual energy mix equal the amount of carbon emissions the cell will generate over its lifetime. In other words, CPBT measures the time a solar cell needs to run in order to mitigate its own carbon emissions. These quantities depend on many factors, including where the solar cell is manufactured and deployed, as the typical energy mix varies from place to place. Therefore, the electricity-related emissions from the production process as well as the avoided electricity-related emissions from the solar-generated electricity during operation of the cell can vary depending on the particular module and application. The emissions from a cell may also depend on how the module is deployed, not just because of the raw materials and energy costs associated with the production of mounting hardware, but also from any avoided emissions from replaced building materials, as in the case of building-integrated photovoltaics where solar panels may replace building materials like roof tiles. Though energy-usage and emissions-related impacts are vital for evaluation of and comparison between technologies, they are not the only important quantities for evaluating the environmental impact of solar cells. Other important impact factors include toxic heavy metal emissions, metal depletion, human toxicity, various eco-toxicities (marine, freshwater, terrestrial), and acidification potential, which measures the emission of sulfur and nitrogen oxides. Including a wide range of environmental impacts in a life cycle analysis is necessary to minimize the chance of shifting environmental impact from a prominent impact factor like greenhouse gas emission to a less prominent but still relevant impact factor like human toxicity. Greenhouse gas emissions Using established first-generation mono crystalline silicon solar cells as a benchmark, some thin-film solar cells tend to have lower environmental impacts across most impact factors; however, low efficiencies and short lifetimes can increase the environmental impacts of emerging technologies above those of first-generation cells. A standardized measure of greenhouse gas emissions is displayed in the chart, in units of grams of CO2-equivalent emissions per kilowatt-hour of electricity production, for a variety of thin-film materials. Crystalline silicon is also included for comparison. In terms of greenhouse gas emissions only, the two most ubiquitous thin-film technologies, amorphous silicon and CdTe, both have significantly lower global warming potential (GWP) than mono crystalline silicon solar cells, with amorphous silicon panels having GWP around 1/3 lower and CdTe nearly 1/2 lower. Organic photovoltaics have the smallest GWP of all thin-film PV technologies, with over 60% lower GWP than mono crystalline silicon. However, this is not the case for all thin-film materials. For many emerging technologies, low efficiencies and short device lifetimes may cause significant increases in environmental impact.
Both emerging chalcogenide technologies and established chalcogenide technologies like CIS and CIGS have higher global warming potential than mono crystalline silicon, as do dye-sensitized and quantum dot solar cells. For antimony-based chalcogenide cells, favorable for their use of less-toxic materials in the manufacturing process, low efficiencies and therefore larger area requirements for solar cells are the driving factor in the increased environmental impact, and cells with modestly improved efficiencies have the potential to outperform mono crystalline silicon in all relevant environmental impact factors. Improving efficiencies for these and other emerging chalcogenide cells is therefore a priority. Low realized efficiencies are also the driving factor behind the relatively large GWP of quantum dot solar cells, despite the potential for these materials to exhibit multiple exciton generation (MEG) from a single photon. Higher efficiencies would also allow for the use of a thinner active layer, reducing both materials costs for the quantum dots themselves and saving on materials and emissions related to encapsulation material. Realizing this potential and thereby increasing efficiency is also a priority for reducing the environmental impact of these cells. For organic photovoltaics, short lifetimes are instead the driving factor behind GWP. Despite the overall impressive performance of OPV relative to other solar technologies, when considering cradle-to-gate rather than cradle-to-grave GWP (i.e., looking only at the material extraction and production processes, discounting the useful lifetime of the solar cells), OPV constitutes a 97% reduction in GHG emissions compared to mono crystalline silicon and a 92% reduction relative to amorphous silicon thin-films. This is significantly better than the 60% reduction compared to mono crystalline silicon currently realized, and therefore improving OPV cell lifetimes is a priority for decreasing overall environmental impact. For perovskite solar cells, with short lifetimes of only around 5 years, this effect may be even more significant. Perovskite solar cells (not included in the chart) typically have significantly larger global warming potential than other thin-film materials in cradle-to-grave LCA, around 5-8x worse than mono crystalline silicon at 150 g CO2-eq/kWh. However, in cradle-to-gate LCA, perovskite cells have a GWP 10-30% lower than that of mono crystalline silicon, highlighting the increased environmental impact associated with the need to produce and dispose of multiple perovskite panels to generate the same amount of electricity as a single mono crystalline silicon panel due to this short lifetime. Increasing the lifetime of perovskite solar modules is therefore a top priority for decreasing their environmental impact. Other low-carbon energy sources like wind, nuclear, and hydropower may achieve smaller GWP than some PV technologies. Although emerging thin-film materials do not outperform mono crystalline silicon cells in terms of global warming potential, they still constitute far lower carbon emissions than non-renewable energy sources, which have global warming potentials ranging from comparatively clean natural gas at 517 g CO2-eq/kWh to the worst polluter, lignite, at over 1100 g CO2-eq/kWh. Thin-film cells also significantly outperform the typical energy mix, which is often in the range of 400-800 g CO2-eq/kWh.
The largest contributor to most impact factors, including the global warming potential, is nearly always energy use during the manufacturing process, greatly outweighing other potential sources of environmental impact such as transportation cost and material sourcing. For CIGS cells, for example, this accounts for 98% of the global warming potential, most of which is due to the manufacturing of the absorber layer specifically. In general, for processes that include metal deposition, this is often a particularly significant environmental impact hotspot. For quantum dot photovoltaics, hazardous waste disposal for the solvents used during the manufacturing process also contributes significantly. The level of global warming potential associated with electricity use can vary significantly depending on the location where manufacturing takes place, in particular the proportion of renewable to non-renewable energy sources used in the local energy mix. Energy payback time In general, thin-film panels take less energy to produce than mono crystalline silicon panels, especially as some emerging thin-film technologies have the potential for efficient and cheap roll-to-roll processing. As a result, thin-film technologies tend to fare better than mono crystalline silicon in terms of energy payback time, though amorphous silicon panels are an exception. Thin-film cells typically have lower efficiencies than mono crystalline solar cells, so this effect is largely due to the comparatively lower primary energy demand (PED) associated with producing the cells. The application in which the modules are used and the recycling process (if any) for the materials can also play a large role in the overall energy efficiency and greenhouse gas emissions over the lifetime of the cell. Integrating the modules into building design may lead to a large reduction in the environmental impact of the cells due to the avoided emissions related to producing the usual building materials, for example the avoided emissions from roof tile production for a building-integrated solar roof. This effect is especially important for thin-film solar cells, whose lightweight and flexible nature lends itself naturally to building-integrated photovoltaics. The effect holds for some other applications as well; for example, organic photovoltaics have 55% lower emissions than crystalline silicon in solar panel applications but up to nearly 70-90% lower emissions in portable charging applications. Similarly, avoided emissions from recycling solar cell components rather than gathering and processing new materials can lead to significantly lower cumulative energy consumption and greenhouse gas emissions. Recycling processes are available for several components of mono crystalline solar cells as well as the glass substrate, CdTe, and CdS in CdTe solar cells. For panels without recycling processes, and particularly for panels with short lifetimes like organic photovoltaics, the disposal of panels may contribute significantly to the environmental impact, and there may be little difference in environmental impact factors whether the panel is incinerated or sent to landfill.
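The payback-time definitions given earlier in this section reduce to simple ratios, as the sketch below shows. The cumulative energy demand, annual yield, module footprint, and grid intensity are illustrative placeholders rather than values for any specific product.

# Energy payback time (EPBT) and carbon payback time (CPBT) as defined in this section.
# All input figures are illustrative assumptions.

CED_KWH = 800.0                 # assumed cumulative energy demand to make one module (kWh)
ANNUAL_YIELD_KWH = 400.0        # assumed electricity generated per year (kWh)
MODULE_FOOTPRINT_G = 40.0       # assumed lifetime emissions per kWh generated (g CO2-eq/kWh)
GRID_INTENSITY_G = 600.0        # assumed carbon intensity of the displaced energy mix (g CO2-eq/kWh)
LIFETIME_YEARS = 25             # assumed module lifetime

# EPBT: years of operation needed to generate the energy used to make the module.
epbt_years = CED_KWH / ANNUAL_YIELD_KWH

# CPBT: years of operation needed for the avoided grid emissions to equal lifetime emissions.
lifetime_emissions_g = MODULE_FOOTPRINT_G * ANNUAL_YIELD_KWH * LIFETIME_YEARS
avoided_per_year_g = GRID_INTENSITY_G * ANNUAL_YIELD_KWH
cpbt_years = lifetime_emissions_g / avoided_per_year_g

print(f"Energy payback time: {epbt_years:.1f} years")
print(f"Carbon payback time: {cpbt_years:.1f} years")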
Heavy-metal emission and human toxicity Though material selection and extraction do not play a large role in global warming potential, where electricity usage in the manufacturing process is nearly universally the largest contributor, they often have a significant impact on other important environmental impact factors, including human toxicity, heavy-metal emissions, acidification potential, and metal and ozone depletion. Human toxicity and heavy-metal emissions are particularly important impact factors for thin-film solar cell production, as the potential environmental and health effects of cadmium use have been a particular concern since the introduction of CdTe cells to the commercial market in the 1990s, when the hazards of cadmium-containing compounds were not well understood. Public concern over CdTe solar cells has continued as they have become more common. Cadmium is a highly hazardous material that causes kidney, bone, and lung damage and is thought to increase the risk of developing cancer. Initially, all cadmium-containing compounds were classified as hazardous, although it is now known that, despite both Cd and Te being hazardous separately, the compound CdTe is very chemically stable, with a low solubility, and presents minimal risk to human health. Feedstock Cd presents a larger risk, as do precursor materials like CdS and cadmium acetate, which are frequently used in other photovoltaic cells as well and often contribute significantly to environmental impact factors such as human toxicity and heavy metal emission. These effects may be more pronounced for nanofabrication processes that produce Cd ions in solution, like the manufacture of quantum dots for QDPV. As a result, CdTe solar cell production is actually seen to have lower heavy-metal emissions than other thin-film solar manufacturing. In fact, CdTe production has lower cadmium emission than ribbon silicon, multi-crystalline silicon, mono-crystalline silicon, or quantum dot PV manufacturing, as well as lower emission of nickel, mercury, arsenic, chromium, and lead. In terms of total heavy metal emissions, quantum dot PV has the highest emissions of the PV materials, with approximately 0.01 mg/kWh, but still has lower total heavy metal emission than any other renewable or non-renewable electricity source, as shown in the chart. The desire to alleviate safety concerns around cadmium and CdTe solar cells specifically has sparked the development of other chalcogenide PV materials that are non-toxic or less toxic, particularly antimony-based chalcogenides. In these emerging chalcogenide cells, the use of CdS is the largest contributor to impact factors like human toxicity and metal depletion, though stainless steel also contributes significantly to the impact of these and other PV materials. In CIGS cells, for example, stainless steel accounts for 80% of the total toxicity associated with cell production and also contributes significantly to ozone depletion. Another potential impact factor of interest for PV manufacturing is the acidification potential, which quantifies the emission of sulfur and nitrogen oxides, which contribute to the acidification of soil, freshwater, and the ocean and their negative environmental effects. In this respect, QDPV has the lowest emissions, with CdTe being a close second. See also List of photovoltaics companies Plasmonic solar cell Net metering Timeline of solar cells Photovoltaic system performance
global carbon project
The Global Carbon Project (GCP) is an organisation that seeks to quantify global greenhouse gas emissions and their causes. Established in 2001, its projects include global budgets for three dominant greenhouse gases—carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O)—and complementary efforts in urban, regional, cumulative, and negative emissions. The main objective of the group has been to fully understand the carbon cycle. The project has brought together emissions experts, earth scientists, and economists to tackle the problem of rising concentrations of greenhouse gases. In 2020, the project released its newest Global Methane Budget and first Global Nitrous Oxide Budget, covering the two anthropogenic trace gases most dominant for warming after carbon dioxide. The Global Carbon Project collaborates with many groups to gather, analyze, and publish data on greenhouse gas emissions in an open and transparent fashion, making datasets available on its website and through its publications. It was founded as a partnership among the International Geosphere-Biosphere Programme, the World Climate Research Programme, the International Human Dimensions Programme and Diversitas, under the umbrella of the Earth System Science Partnership. Many core projects in this partnership subsequently became part of Future Earth in 2014. The current chairman of the Global Carbon Project is Rob Jackson of Stanford University. Previous co-chairs include Naki Nakicenovic of the International Institute for Applied Systems Analysis (IIASA), Corinne Le Quéré of the University of East Anglia, and Philippe Ciais of the Institut Pierre Simon Laplace (LSCE). Its executive director is Josep Canadell of Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO). The GCP has additional international offices in Tsukuba, Japan, and Seoul, Korea, and an international scientific steering committee consisting of a dozen scientists from five continents. For the most recent Global Carbon Budget, released in December 2018, the GCP projects fossil CO2 emissions in 2018 to rise by 2.7% (range 1.8% to 3.7%) to a record 37.1 billion tonnes (Gt) of CO2, as policy and market forces are currently insufficient to overcome growth in fossil energy use. Atmospheric CO2 concentration is set to increase by 2.3 ppm [range 2.0 to 2.6 ppm] to reach 407 ppm on average in 2018, 45% above pre-industrial levels. Increases in global use of natural gas and oil are the primary causes of rising atmospheric CO2 concentrations today. Global coal use will likely increase in 2018 but still remain below its historical peak in 2013. Over the past decade, coal has been displaced by natural-gas-fired, wind, and solar power in some countries. As an example of earlier communications from the GCP, in late 2006 researchers from the project determined that carbon dioxide emissions had increased dramatically, growing at a rate of 3.2% annually since 2000. At the time, the chair of the group, Dr. Mike Raupach, stated that "This is a very worrying sign. It indicates that recent efforts to reduce emissions have had virtually no impact on emissions growth and that effective caps are urgently needed". A 2010 study conducted by the project and published in Nature Geoscience reported that the world's oceans absorb 2.3 billion metric tonnes of carbon dioxide. On 5 December 2011, analysis released by the project reported that carbon dioxide emissions from fossil-fuel burning had jumped by the largest amount on record in 2010, growing 5.9 percent, compared with growth rates closer to 1 percent annually in the 1990s.
The combustion of coal represented more than half of the growth in emissions, the report found. They predict that greenhouse gas emissions will follow the IPCC's worst-case scenario, with the CO2 concentration in the atmosphere reaching 500 ppm in the 21st century. Global Carbon Budget Established by the GCP in 2005, the Global Carbon Budget is an annual publication of carbon cycle sources and sinks on a global level. In 2013, the annual publication of the Global Carbon Budget became a living data publication in the journal Earth System Science Data. Each year, the data are revised and updated along with any changes in analysis and results, reflecting the most up-to-date interpretation of the behaviour of the global carbon cycle. The original measurements and data used to complete the global carbon budget are generated by multiple organizations and research groups around the world. The effort presented by the GCP is mainly one of synthesis, where results from individual groups are collated, analysed and evaluated for consistency. The GCP facilitates access to the original data with the understanding that the primary datasets will be referenced in future work. In-depth descriptions of each component are provided by the original publications associated with those datasets. The 2021 Global Carbon Budget report presents a method to estimate the difference in emissions from land-use change relative to national greenhouse gas inventories, supporting an assessment of collective national climate progress. Global Carbon Atlas Established by the GCP in 2013, the Global Carbon Atlas is a tool for visualizing data related to the global carbon cycle. The Global Carbon Atlas is a platform to explore and visualize the most up-to-date data on carbon fluxes resulting from human activities and natural processes. Human impacts on the carbon cycle are the most important cause of climate change. This web-based application allows the dissemination of the most up-to-date information on the global carbon cycle to a wider audience, from schoolchildren and lay people to policy makers and scientists. It consists of three components: 1) Outreach, 2) Emissions and 3) Research. The outreach component is aimed at the general public and those working in education. The emissions component is a visualisation tool for the parts of the global carbon cycle that are related to emissions and is aimed primarily at policy makers. The research component is aimed primarily at researchers and acts as a data repository and visualisation tool for scientific data used to investigate the global carbon budget. All components of the Global Carbon Atlas are updated on an annual basis, most recently in December 2018, based on the data published in the Global Carbon Budget. See also AIMES (Analysis, Integration and Modelling of the Earth System) List of countries by carbon dioxide emissions
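The budget publication described above is, at its core, a bookkeeping exercise: fossil and land-use change emissions must be matched by the growth of CO2 in the atmosphere plus the ocean and land sinks, with any mismatch reported as the budget imbalance. The sketch below shows that arithmetic; the flux values are illustrative placeholders rather than figures from any specific GCP release, and the conversion of roughly 2.12 GtC per ppm is an approximate standard factor.

# Bookkeeping sketch of the global carbon budget identity used in GCP releases:
#   E_fossil + E_land_use = G_atmosphere + S_ocean + S_land + B_imbalance
# All flux values below are illustrative placeholders in GtC per year.

E_FOSSIL = 10.0      # fossil fuel and industry emissions (assumed)
E_LAND_USE = 1.5     # land-use change emissions (assumed)
G_ATMOSPHERE = 5.0   # growth of CO2 in the atmosphere (assumed)
S_OCEAN = 2.5        # ocean sink (assumed)
S_LAND = 3.5         # land sink (assumed)

imbalance = (E_FOSSIL + E_LAND_USE) - (G_ATMOSPHERE + S_OCEAN + S_LAND)
print(f"Budget imbalance: {imbalance:+.1f} GtC/yr")

# Converting atmospheric growth into a concentration change,
# using the approximate factor of ~2.12 GtC per ppm of atmospheric CO2.
GTC_PER_PPM = 2.12
print(f"Implied concentration rise: {G_ATMOSPHERE / GTC_PER_PPM:.1f} ppm/yr")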
energy conservation
Energy conservation is the effort to reduce wasteful energy consumption by using fewer energy services. This can be done by using energy more effectively (using less energy for continuous service) or changing one's behavior to use less service (for example, by driving less). Energy conservation can be achieved through efficient energy use, which has some advantages, including a reduction in greenhouse gas emissions and a smaller carbon footprint, as well as cost, water, and energy savings. Green engineering practices improve the life cycle of the components of machines which convert energy from one form into another. Energy can be conserved by reducing waste and losses, improving efficiency through technological upgrades, improving operations and maintenance, changing users' behaviors through user profiling or user activities, monitoring appliances, shifting load to off-peak hours, and providing energy-saving recommendations. Observing appliance usage, establishing an energy usage profile, and revealing energy consumption patterns in circumstances where energy is used poorly, can pinpoint user habits and behaviors in energy consumption. Appliance energy profiling helps identify inefficient appliances with high energy consumption and energy load. Seasonal variations also greatly influence energy load, as more air-conditioning is used in warmer seasons and heating in colder seasons. Achieving a balance between energy load and user comfort is complex yet essential for energy preservation. On a large scale, a few factors affect energy consumption trends, including political issues, technological developments, economic growth, and environmental concerns. User Oriented Energy Conservation User behavior has a significant effect on energy conservation. It involves user activity detection, profiling, and appliance interaction behaviors. User profiling consists of the identification of energy usage patterns of the user and replacing required system settings with automated settings that can be initiated on request. Within user profiling, personal characteristics are instrumental in affecting energy conservation behavior. These characteristics include household income, education, gender, age, and social norms.User behavior also relies on the impact of personality traits, social norms, and attitudes on energy conservation behavior. Beliefs and attitudes toward a convenient lifestyle, environmentally friendly transport, energy security, and residential location choices affect energy conservation behavior. As a result, energy conservation can be made possible by adopting pro-environmental behavior and energy-efficient systems. Education on approaches to energy conservation can result in wise energy use. The choices made by the users yield energy usage patterns. Rigorous analysis of these usage patterns identifies waste energy patterns, and improving those patterns may reduce significant energy load. Therefore, human behavior is critical to determining the implications of energy conservation measures and solving environmental problems. Substantial energy conservation may be achieved if users' habit loops are modified. User Habits User habits significantly impact energy demand; thus, providing recommendations for improving user habits contributes to energy conservation. Micro-moments are essential in realizing energy consumption patterns and are identified utilizing a variety of sensing units positioned in prominent areas across the home. 
The micro-moment is an event that changes the state of the appliance from inactive to active and helps in building users' energy consumption profiles according to their activities. Energy conservation can be achieved through user habits by following energy-saving recommendations at micro-moments. Unnecessary energy usage can be decreased by selecting a suitable schedule for appliance operation. Creating an effective scheduling system requires an understanding of user habits regarding appliances. Off-peak scheduling Many techniques for energy conservation comprise off-peak scheduling, which means operating an appliance in a low-price energy hour. This schedule can be achieved after user habits regarding appliance use are understood. Most energy providers divide the energy tariff into high and low-price hours; therefore, scheduling an appliance to work an off-peak hour will significantly reduce electricity bills. User Activity Detection User activity detection leads to the precise detection of appliances required for an activity. If an appliance is active but not required for a user's current activity, it wastes energy and can be turned off to conserve energy. The precise identification of user activities is necessary to achieve this method of energy conservation. Energy conservation opportunities by sector Buildings Existing buildings Energy conservation measures have primarily focused on technological innovations to improve efficiencies and financial incentives with theoretical explanations obtained from the mentioned analytical traditions. Existing buildings can improve energy efficiency by changing structural maintenance materials, adjusting the composition of air conditioning systems, selecting energy-saving equipment, and formulating subsidy policies. These measures can improve users' thermal comfort and reduce buildings' environmental impact. The selection of combinatorial optimization schemes that contain measures to guide and restrict users' behavior in addition to carrying out demand-side management can dynamically adjust energy consumption. At the same time, economic means should enable users to change their behavior and achieve a low-carbon life. Combination optimization and pricing incentives reduce building energy consumption and carbon emissions and reduce users' costs. Energy monitoring through energy audits can achieve energy efficiency in existing buildings. An energy audit is an inspection and analysis of energy use and flows for energy conservation in a structure, process, or system intending to reduce energy input without negatively affecting output. Energy audits can determine specific opportunities for energy conservation and efficiency measures as well as determine cost-effective strategies. Training professionals typically accomplish this and can be part of some national programs discussed above. The recent development of smartphone apps enables homeowners to complete relatively sophisticated energy audits themselves. For instance, smart thermostats can connect to standard HVAC systems to maintain energy-efficient indoor temperatures. In addition, data loggers can also be installed to monitor the interior temperature and humidity levels to provide a more precise understanding of the conditions. If the data gathered is compared with the users' perceptions of comfort, more fine-tuning of the interiors can be implemented (e.g., increasing the temperature where A.C. is used to prevent over-cooling). 
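The off-peak scheduling idea described earlier in this section reduces to a small search: given a tariff that labels each hour as high- or low-price and an appliance run that can be deferred, choose the start hour that minimises cost. The two-level tariff, prices, and dishwasher figures below are hypothetical, not a real provider's rates.

# Minimal sketch of off-peak appliance scheduling.
# The two-level tariff below is hypothetical, not a real provider's rates.
HIGH, LOW = 0.30, 0.12                              # price per kWh (assumed)
TARIFF = [HIGH if 8 <= h < 22 else LOW for h in range(24)]

def cheapest_start(duration_h: int, load_kw: float) -> tuple[int, float]:
    """Pick the start hour (0-23) that minimises the cost of one appliance run."""
    costs = {
        start: sum(TARIFF[(start + k) % 24] * load_kw for k in range(duration_h))
        for start in range(24)
    }
    best = min(costs, key=costs.get)
    return best, costs[best]

# Example: a 2-hour, 1.5 kW dishwasher cycle.
start, cost = cheapest_start(duration_h=2, load_kw=1.5)
print(f"Cheapest start hour: {start:02d}:00, estimated cost {cost:.2f}")

A real scheduler would also respect user-imposed windows (for example, finishing before a given hour) and comfort or noise constraints.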
Building technologies and smart meters can allow commercial and residential energy users to visualize the impact their energy use can have in their workplaces or homes. Advanced real-time energy metering can help people save energy through their actions. Another approach towards energy conservation is the implementation of E.C.M.s in commercial buildings, which often employ Energy Service Companies (ESCOs) experienced in energy performance contracting. This industry has been around since the 1970s and is more prevalent than ever today. The US-based organization E.V.O. (Efficiency Valuation Organization) has created a set of guidelines for ESCOs to adhere to in evaluating the savings achieved by E.C.M.s. These guidelines are called the International Performance Measurement and Verification Protocol (IPMVP). Energy efficiency can also be achieved by upgrading certain aspects of existing buildings. First, thermal improvements such as adding insulation to crawl spaces and sealing leaks create an efficient building envelope, reducing the need for mechanical systems to heat and cool the space. High-performance insulation is also supported by adding double- or triple-glazed windows to minimize thermal heat transmission. Minor upgrades in existing buildings include changing mixers to low-flow models, which greatly aids water conservation; changing light bulbs to LED lights, which consume 70-90% less energy than a standard incandescent or C.F.L. bulb; replacing inefficient appliances with Energy Star-rated appliances, which consume less energy; and adding vegetation to the landscape surrounding the building to function as a shading element. A window windcatcher can reduce the total energy use of a building by 23.3%. Energy conservation through users' behaviors requires understanding household occupants' lifestyle, social, and behavioral factors in analyzing energy consumption. This involves one-time investments in energy efficiency, such as purchasing new energy-efficient appliances or upgrading the building insulation, which do not curtail economic utility or the level of energy services, as well as energy curtailment behaviors, which are theorized to be driven more by social-psychological factors and environmental concerns than energy efficiency behaviors are. Replacing existing appliances with newer and more efficient ones leads to energy efficiency as less energy is wasted overall. Overall, energy efficiency behaviors are identified more with one-time, cost-incurring investments in efficient appliances and retrofits, while energy curtailment behaviors include repetitive, low-cost energy-saving efforts. To identify and optimize residential energy use, conventional and behavioral economics, technology adoption theory and attitude-based decision-making, social and environmental psychology, and sociology must be analyzed. The techno-economic and psychological literature focuses on individual attitudes, behavior, and choice, context, and external conditions. In contrast, the sociological literature relies more on the energy consumption practices shaped by social, cultural, and economic factors in a dynamic setting. New buildings Many steps can be taken toward energy conservation and efficiency when designing new buildings.
Firstly, the building can be designed to optimize building performance by having an efficient building envelope with high-performing insulation and window glazing systems, window facades strategically oriented to optimize daylighting, shading elements to mitigate unwanted glare, and passive energy systems for appliances. In passive solar building designs, windows, walls, and floors are made to collect, store, and distribute solar energy in the form of heat in the winter and reject solar heat in the summer. The key to designing a passive solar building is to best take advantage of the local climate. Elements to be considered include window placement and glazing type, thermal insulation, thermal mass, and shading. Optimizing daylighting can decrease energy waste from incandescent bulbs; windows and balconies allow natural ventilation and reduce the need for heating and cooling; low-flow mixers aid in water conservation; and Energy Star-rated appliances consume less energy. Designing a building according to LEED guidelines while incorporating smart home technology can save substantial energy and money in the long run. Passive solar design techniques can be applied most easily to new buildings, but existing buildings can be retrofitted. Energy conservation is mainly achieved by modifying user habits or by providing energy-saving recommendations, such as curtailing an appliance or scheduling it to run during low-price tariff hours. Besides changing user habits and controlling appliances, identifying appliances that are irrelevant to current user activities in smart homes also saves energy. Smart home technology can advise users on energy-saving strategies according to their behavior, encouraging behavioral change that leads to energy conservation. This guidance includes reminders to turn off lights, leakage sensors to prevent plumbing issues, running appliances during off-peak hours, and smart sensors that save energy. Such technology learns user-appliance activity patterns, gives a complete overview of various energy-consuming appliances, and can provide guidance to improve these patterns to contribute to energy conservation. As a result, such systems can strategically schedule appliances by monitoring their energy consumption profiles, switch devices to energy-efficient modes, or plan operation during off-peak hours. Appliance-oriented approaches emphasize appliance profiling, curtailing, and scheduling to off-peak hours, as supervision of appliances is key to energy preservation. This usually leads to appliance curtailment, in which an appliance is either scheduled to run at another time or turned off. Appliance curtailment involves appliance recognition, an activity-appliance model, unattended appliance detection, and an energy conservation service. The appliance recognition module detects active appliances to identify the activities of smart home users. After identifying users' activities, the association between the functional appliances and user activities is established. The unattended appliance detection module looks for appliances that are active but unrelated to the current user activity. These appliances waste energy and can be turned off by providing recommendations to the user. Based on the smart home recommendations, users can give weight to certain appliances that increase comfort and satisfaction while conserving energy. Models of the energy consumption of appliances and the level of comfort they create can balance priorities between smart home comfort levels and energy consumption.
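The unattended-appliance detection step described above can be sketched as a set difference: appliances that are drawing power but are not expected for the detected activity are flagged for a switch-off recommendation. The activity-to-appliance mapping and the readings below are hypothetical examples, not an actual smart-home dataset.

# Sketch of unattended-appliance detection (hypothetical activity model and readings).

# Which appliances are expected to be active for each detected user activity (assumed mapping).
ACTIVITY_APPLIANCES = {
    "cooking": {"oven", "range hood", "kitchen lights"},
    "watching_tv": {"television", "living room lights"},
    "sleeping": {"bedroom fan"},
}

def unattended(active_appliances: set[str], detected_activity: str) -> set[str]:
    """Return active appliances that the current activity does not account for."""
    expected = ACTIVITY_APPLIANCES.get(detected_activity, set())
    return active_appliances - expected

# Example: the user is watching TV, but the oven and kitchen lights are still on.
currently_active = {"television", "living room lights", "oven", "kitchen lights"}
for appliance in sorted(unattended(currently_active, "watching_tv")):
    print(f"Recommendation: turn off the {appliance}")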
According to Kashimoto, Ogura, Yamamoto, Yasumoto, and Ito, the energy supply is reduced based on the historical state of the appliance and increased according to the user's comfort requirements, leading to a targeted energy-saving ratio. Scenario-based energy consumption can be employed as a strategy for energy conservation, with each scenario encompassing a specific set of rules for energy use. Transportation Transporting people, goods, and services represented 29% of U.S. energy consumption in 2007. The transportation sector also accounted for about 33% of U.S. carbon dioxide emissions in 2006, with highway vehicles accounting for about 84% of that, making transportation an essential target for addressing global climate change (E.I.A., 2008). Suburban infrastructure evolved during an age of relatively easy access to fossil fuels, leading to transportation-dependent living systems. The amount of energy used to transport people to and from a facility, whether they are commuters, customers, vendors, or homeowners, is known as the transportation energy intensity of the building. Land is being developed at a faster rate than the population is growing, leading to urban sprawl and, therefore, high transportation energy intensity as more people commute longer distances to jobs. As a result, the location of a building is essential in decreasing embodied emissions. In transportation, state and local efforts in energy conservation and efficiency measures tend to be more targeted and smaller in scale. However, with more robust fuel economy standards, new targets for the use of alternative transportation fuels, and new efforts in electric and hybrid electric vehicles, EPAct05 and EISA provide a new set of national policy signals and financial incentives to the private sector and state and local governments for the transportation sector. Zoning reforms that allow greater urban density and designs for walking and bicycling can greatly reduce energy consumed for transportation. Many Americans work in jobs that allow for remote work instead of commuting daily, which is a significant opportunity to conserve energy. Intelligent transportation systems (ITS) provide a solution to traffic congestion and the carbon emissions caused by increasing numbers of vehicles. ITS combines improvements in information technology and systems, communications, sensors, controllers, and advanced mathematical methods with the traditional world of transportation infrastructure. It improves traffic safety and mobility, reduces environmental impact, promotes sustainable transportation, and increases productivity. ITS strengthens the connection and cooperation between people, vehicles, roads, and the environment while improving road capacity, reducing traffic accidents, and improving transportation efficiency and safety by alleviating traffic congestion and reducing pollution. It makes full use of traffic information as an application service, which can enhance the operational efficiency of existing traffic facilities. The most significant energy-saving potential lies in urban transportation, where various countries face the most problems, including management systems, policies and regulations, planning, technology, operation, and management mechanisms. Improvements in one or several of these aspects have a positive impact on road transportation efficiency, which leads to improvement of the urban traffic environment. In addition to ITS, transit-oriented development (T.O.D.)
significantly improves transportation in urban areas by emphasizing density, proximity to transit, diversity of uses, and streetscape design. Density optimizes location and is a way to cut down on driving. Planners can regulate development rights by transferring them from ecologically sensitive areas to growth-friendly zones through density transfer procedures. Distance refers to the accessibility of rail and bus transit, which serves as a deterrent to driving. For transit-oriented development to be feasible, transportation stops must be close to where people live. Diversity refers to mixed-use areas that offer essential services close to homes and offices and include residential space for different socioeconomic categories alongside commercial and retail uses. This creates a pedestrian shed in which one area can meet people's everyday needs on foot. Lastly, streetscape design involves minimal parking and walkable areas that calm traffic. Generous parking incentivizes people to use cars, whereas minimal and expensive parking deters commuters. At the same time, streetscapes can be designed to incorporate bicycling lanes and designated bicycle paths and trails. Covered bicycle storage lets people commute by bicycle without worrying about their bicycles getting wet, which encourages commuters to choose bicycles over other modes of transportation and contributes to energy saving. People will be happy to walk a few blocks from a train stop if there are attractive, pedestrian-friendly outdoor spaces nearby with good lighting, park benches, outdoor tables at cafés, shade tree plantings, pedestrian courts that are blocked off to cars, and public internet connection. Additionally, this strategy calms traffic, improving the intended pedestrian environment. New urban planning schemes can be designed to improve connectivity in cities through networks of interconnected streets that spread out traffic flow, slow down vehicles, and make walking more pleasant. The connectivity index is calculated by dividing the number of road links by the number of road nodes; the higher the connectivity index, the greater the route choices and the better the pedestrian access. Realizing the transportation impacts associated with buildings allows commuters to take steps toward energy conservation. Connectivity encourages energy-conserving behaviors as commuters use fewer cars, walk and bike more, and use public transportation. Commuters who do not have the option of public transportation can use smaller vehicles that are hybrid or have better mileage.
Consumer products
Homeowners implementing ECMs in their residential buildings often start with an energy audit, which shows them which areas of their homes are using, and possibly losing, energy. Residential energy auditors are accredited by the Building Performance Institute (BPI) or the Residential Energy Services Network (RESNET). Homeowners can hire a professional, do the audit themselves, or use a smartphone app to help. Energy conservation measures are often combined into larger guaranteed Energy Savings Performance Contracts to maximize energy savings while minimizing disruption to building occupants by coordinating renovations. Some ECMs cost less to implement yet return higher energy savings. Traditionally, lighting projects were a good example of "low hanging fruit" that could be used to drive implementation of more substantial upgrades to HVAC systems in large facilities.
Smaller buildings might combine window replacement with modern insulation using advanced building foams to improve energy performance. Energy dashboard projects are a newer kind of ECM that relies on behavioral change by building occupants to save energy. When implemented as part of a program, case studies, such as that for the DC Schools, report energy savings of up to 30%. Under the right circumstances, open energy dashboards can even be implemented for free to improve upon these savings even more. Consumers are often poorly informed about the savings offered by energy-efficient products. A prominent example is the energy savings that can be made by replacing an incandescent light bulb with a more modern alternative. When purchasing light bulbs, many consumers opt for cheap incandescent bulbs, failing to take into account their higher energy costs and shorter lifespans compared to modern compact fluorescent and LED bulbs. Although these energy-efficient alternatives have a higher upfront cost, their long lifespan and low energy use can save consumers a considerable amount of money. The price of LED bulbs has also been steadily decreasing in the past five years due to improvements in semiconductor technology. Many LED bulbs on the market qualify for utility rebates that further reduce the purchase price to the consumer. Estimates by the U.S. Department of Energy state that widespread adoption of LED lighting over the next 20 years could result in about $265 billion worth of savings in United States energy costs. The research one must put into conserving energy is often too time-consuming and costly for the average consumer when there are cheaper products and technology available using today's fossil fuels. Some governments and NGOs are attempting to reduce this complexity with Eco-labels that make differences in energy efficiency easy to research while shopping. To provide the kind of information and support people need to invest money, time, and effort in energy conservation, it is important to understand and link to people's topical concerns. For instance, some retailers argue that bright lighting stimulates purchasing, yet health studies have demonstrated that headache, stress, blood pressure, fatigue, and worker error all generally increase with the over-illumination common in many workplace and retail settings. It has been shown that natural daylighting increases productivity levels of workers while reducing energy consumption. In warm climates where air conditioning is used, any household device that gives off heat places a larger load on the cooling system. Items such as stoves, dishwashers, clothes dryers, hot water systems, and incandescent lighting all add heat to the home. Low-power or insulated versions of these devices give off less heat for the air conditioning to remove. The air conditioning system can also improve efficiency by using a heat sink that is cooler than the standard air heat exchanger, such as the ground (geothermal) or water. In cold climates, heating air and water is a major demand on household energy use. Significant energy reductions are possible by using different technologies. Heat pumps are a more efficient alternative to electrical resistance heaters for warming air or water. A variety of efficient clothes dryers are available, and a clothes line requires no energy, only time. Natural-gas (or biogas) condensing boilers and hot-air furnaces increase efficiency over standard hot-flue models.
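The bulb-replacement economics described above can be made concrete with a rough simple-payback calculation. The purchase prices, wattages, daily usage, and electricity rate in this sketch are illustrative assumptions, not figures from the cited DOE estimate.

```python
# Rough simple-payback comparison of an incandescent vs. an LED bulb.
# All input values are illustrative assumptions.

def annual_energy_cost(watts, hours_per_day, price_per_kwh):
    """Annual electricity cost for a single bulb."""
    return watts / 1000 * hours_per_day * 365 * price_per_kwh

HOURS_PER_DAY = 3        # assumed daily use
PRICE_PER_KWH = 0.15     # assumed electricity price in $/kWh

incandescent = {"price": 1.00, "watts": 60, "life_hours": 1_000}
led          = {"price": 4.00, "watts": 9,  "life_hours": 25_000}

savings_per_year = (annual_energy_cost(incandescent["watts"], HOURS_PER_DAY, PRICE_PER_KWH)
                    - annual_energy_cost(led["watts"], HOURS_PER_DAY, PRICE_PER_KWH))
extra_upfront = led["price"] - incandescent["price"]

print(f"Annual energy savings per bulb: ${savings_per_year:.2f}")
print(f"Simple payback on the extra purchase price: {extra_upfront / savings_per_year:.1f} years")
```

Under these assumed values the extra purchase price of the LED is recovered in well under a year, after which the lower wattage and longer lifespan are pure savings.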
Standard electric boilers can be made to run only at hours of the day when they are needed by means of a time switch. This decreases energy use vastly. In showers, a semi-closed-loop system could be used. New construction implementing heat exchangers can capture heat from wastewater or exhaust air in bathrooms, laundry, and kitchens. In both warm and cold climate extremes, airtight thermal insulated construction is the largest factor determining the efficiency of a home. Insulation is added to minimize the flow of heat to or from the home, but can be labor-intensive to retrofit to an existing home. Global Impact Energy conservation entails changing user behaviors to use electricity more efficiently, reducing the amount of fuel needed to generate electricity and, therefore, the amount of greenhouse gases emitted. This is achieved on a smaller, individual scale; however, its effects can be global when many people engage in individual action toward energy conservation. The growth of global energy use has raised concerns over supply, exhaustion of energy use, and severe environmental impacts. The global contributions from residential and commercial buildings towards energy consumption have steadily increased, reaching figures between 20% and 40% in developed countries. Coupled with rapid population growth, increasing pressure for building services, and enhanced comfort levels, an upward energy demand trend is expected. Therefore, energy efficiency and conservation is a prime objective for regional, national, and international energy policy. When users limit their energy usage, they decrease their environmental impact. The act of energy conservation can help slow global warming, therefore saving coastal cities from disappearing underwater, improving water quality and protecting reefs and other fragile ecosystems, improving air quality, and reducing allergens leading to a reduced risk of respiratory health issues, and decreasing the effects on mental health, injuries, and fatalities caused by severe weather. On an economic scale, energy conservation can also lower individual utility bills, create jobs, provide users with opportunities for tax credits and rebates and help stabilize electricity prices and volatility. Simple changes to the types of appliances used can significantly impact energy efficiency and cost. Changes to the electricity bill, natural gas bill, and water bill can reflect efforts toward energy conservation. Energy conservation and efficiency work hand in hand with improving the global impact. On a global basis, energy efficiency works behind the scenes to improve energy security, lower energy bills, and move countries closer to reaching climate goals. According to the IEA, some 40% of the global energy efficiency market is financed with debt and equity. Energy Performance Investment is one financing mechanism by which E.C.M.s can be implemented now and paid for by the savings realized over the project's life. While all 50 states, Puerto Rico and Washington, D.C., have statutes allowing companies to offer energy savings performance contracts, success varies because of variations in the approach, the state's degree of involvement, and other factors. 
Homes and businesses are implementing energy-efficiency measures that include low-energy lighting, insulation, and even high-tech energy dashboards to cut bills by avoiding waste and boosting productivity.[citation needed] Energy conservation can also keep developments that extract natural resources from expanding, thereby preserving natural areas. For instance, energy conservation benefits wildlife and natural regions by lessening the demand for new power plants. Reducing reliance on finite sources moves the economy towards large-scale energy independence: the more energy conserved, the more energy-independent the nation can become. Small steps towards energy conservation can have a positive impact, given the finite nature of energy sources. When users conserve energy and use it more efficiently, they prolong the existence of fossil fuels and directly reduce the greenhouse gas emissions entering the Earth's atmosphere. After limiting the access of cars to the city center in Madrid, nitrogen oxide levels fell by 38%, and carbon dioxide decreased by 14.2% in the city center. Energy conservation prolongs the existence of fossil fuels by limiting energy consumption. The slower non-renewable resources are consumed, the more time is available to develop alternative energy solutions. Slowing the depletion of fossil fuels also delays increases in the cost of drilling and mining, costs that would otherwise be passed on to consumers. The more societies rely on renewable energy sources, the longer fossil fuels will last and the more slowly their prices will rise. Many international energy conservation standards exist to reduce energy demand and increase efficiency. The standards also help reduce greenhouse gas emissions by reducing energy demand and use, slowing global warming. To encourage homeowners to conserve energy, the U.S. Department of Energy and numerous state governments offer rebate programs and tax credits connected to energy efficiency. The following are a few of the relevant policies, incentives, and reference documents:
American Council for an Energy-Efficient Economy National Energy Policy (2009)
Database of State Incentives for Renewables and Efficiency (2009)
Department of Energy, Energy Projections to the Year 2010 (1983)
Department of Energy, Energy Security Report (1987)
Energy Independence and Security Act of 2007
Energy Information Administration (E.I.A.) (1995, 2008, 2009)
Energy Policy Act (1992, 2005)
Federal Energy Administration National Energy Outlook (1976)
International Energy Agency Energy Policies of IEA Countries (2006a)
International Energy Agency Light's Labour's Lost: Policies for Energy-efficient Lighting (2006b)
National Energy Act (1992)
Energy conservation by countries
Asia
Although energy efficiency is expected to play a vital role in cost-effectively cutting energy demand, only a small part of its economic potential is exploited in Asia. Governments have implemented a range of subsidies such as cash grants, cheap credit, tax exemptions, and co-financing with public-sector funds to encourage energy-efficiency initiatives across several sectors. Governments in the Asia-Pacific region have implemented a range of information provision and labeling programs for buildings, appliances, and the transportation and industrial sectors.
Information programs can simply provide data, such as fuel-economy labels, or actively seek to encourage behavioral changes, such as Japan's Cool Biz campaign, which encourages setting air conditioners at 28 degrees Celsius and allowing employees to dress casually in the summer. China's government has launched a series of policies since 2005 to promote energy saving and emissions reduction; however, road transportation, the fastest-growing energy-consuming sector in the transportation industry, lacks specific, operational, and systematic energy-saving plans. Road transportation is the highest priority for achieving energy conservation and reducing emissions effectively, particularly since social and economic development has entered the "new normal" period. Generally speaking, the government should make comprehensive plans for conservation and emissions reduction in the road transportation industry along the three dimensions of demand, structure, and technology, for example by encouraging trips by public transportation and new modes such as car-sharing, and by increasing investment in new energy vehicles as part of structural reform.
European Union
At the end of 2006, the European Union (EU) pledged to cut its annual consumption of primary energy by 20% by 2020. The EU Energy Efficiency Directive 2012 mandates energy efficiency improvements within the EU. As part of the EU's SAVE program, aimed at promoting energy efficiency and encouraging energy-saving behavior, the Boiler Efficiency Directive specifies minimum levels of efficiency for boilers utilizing liquid or gaseous fuels. There is steady progress on energy regulation implementation in Europe, North America, and Asia, which have adopted and implemented the highest numbers of building energy standards. Europe's performance on energy standard activities is particularly encouraging: it recorded the highest percentage of mandatory energy standards compared to the other five regions. By 2050, energy savings in Europe could reach 67% of the 2019 baseline, corresponding to a demand of 361 Mtoe, under an "energy efficiency first" societal trend scenario. This assumes no rebound effect; otherwise the savings are only 32%, and energy use may even increase by 42% if techno-economic potentials are not realized.
India
The Petroleum Conservation Research Association (PCRA) is an Indian governmental body created in 1978 that engages in promoting energy efficiency and conservation in every walk of life. In the recent past, PCRA has organised mass media campaigns on television, radio, and print media. An impact-assessment survey by a third party revealed that, owing to these large campaigns, the public's overall awareness level has gone up, leading to savings of fossil fuels worth crores of rupees as well as reduced pollution. The Bureau of Energy Efficiency is an Indian government organization created in 2001 that is responsible for promoting energy efficiency and conservation. Protection and conservation of natural resources is carried out through Community Natural Resources Management (CNRM).
Iran
Iran's Supreme Leader Ali Khamenei has regularly criticized the country's energy administration and its high fuel consumption.
Japan
Since the 1973 oil crisis, energy conservation has been an issue in Japan. All oil-based fuel is imported, so domestic sustainable energy is being developed. The Energy Conservation Center promotes energy efficiency in every sector of Japan.
Public entities are implementing efficient energy use in industry and research, including projects such as the Top Runner Program, in which new appliances are regularly tested for efficiency and the most efficient ones are made the standard.
Middle East
The Middle East holds 40% of the world's crude oil reserves and 23% of its natural gas reserves. Conservation of domestic fossil fuels is, therefore, a legitimate priority for the Gulf countries, given domestic needs as well as the global market for these products. Energy subsidies are the chief barrier to conservation in the Gulf: residential electricity prices can be a tenth of U.S. rates. Increased tariff revenues from gas, electricity, and water sales would encourage investment in natural gas exploration, production, and generation capacity, helping to alleviate future shortages. Households in the MENA region are responsible for 53% of energy use in Saudi Arabia and 57% of the UAE's ecological footprint. This is partially due to poorly designed and constructed buildings, mainly built under a cheap-energy model that has left them without contemporary control technology or even proper insulation and efficient appliances. Building energy consumption can be cut by 20% through a combination of insulation, efficient windows and appliances, shading, reflective roofing, and a host of automated controls that adjust energy use. Governments could also set minimum energy efficiency and water use standards for imported appliances sold inside their countries, effectively banning the sale of inefficient air conditioners, dishwashers, and washing machines. Administration of the laws would essentially be a function of national customs services. Governments could go further, offering incentives, or mandates, that air conditioners of a certain age be replaced.
Lebanon
In Lebanon, since 2002, the Lebanese Center for Energy Conservation (LCEC) has been promoting the development of efficient and rational uses of energy and the use of renewable energy at the consumer level. It was created as a project financed by the Global Environment Facility (GEF) and the Ministry of Energy and Water (MEW) under the management of the United Nations Development Programme (UNDP) and gradually established itself as an independent technical national center, although it continues to be supported by UNDP as indicated in the Memorandum of Understanding (MoU) signed between MEW and UNDP on 18 June 2007.
Nepal
Until recently, Nepal has been focusing on the exploitation of its huge water resources to produce hydropower; demand-side management and energy conservation were not in the focus of government action. In 2009, bilateral development cooperation between Nepal and the Federal Republic of Germany led to agreement on the joint implementation of the "Nepal Energy Efficiency Programme". The lead executing agency for the implementation is the Water and Energy Commission Secretariat (WECS). The aim of the programme is the promotion of energy efficiency in policymaking, in rural and urban households, and in industry. Due to the lack of a government organization that promotes energy efficiency in the country, the Federation of Nepalese Chambers of Commerce and Industry (FNCCI) has established the Energy Efficiency Centre under its roof to promote energy conservation in the private sector. The Energy Efficiency Centre is a non-profit initiative that offers energy auditing services to industry.
The Centre is also supported by the Nepal Energy Efficiency Programme of the Deutsche Gesellschaft für Internationale Zusammenarbeit. A study conducted in 2012 found that Nepalese industries could save 160,000 megawatt-hours of electricity and 8,000 terajoules of thermal energy (such as diesel, furnace oil, and coal) every year. These savings are equivalent to an annual energy cost reduction of up to 6.4 billion Nepalese rupees. As a result of the Nepal Economic Forum 2014, an economic reform agenda for the priority sectors was declared, focusing on energy conservation among other areas. In the energy reform agenda, the government of Nepal committed to introducing incentive packages in the budget of the fiscal year 2015/16 for industries that practice energy efficiency or use efficient technologies (including cogeneration).
New Zealand
In New Zealand, the Energy Efficiency and Conservation Authority is the government agency responsible for promoting energy efficiency and conservation. The Energy Management Association of New Zealand is a membership-based organization representing the New Zealand energy services sector, providing training and accreditation services with the aim of ensuring that energy management services are credible and dependable.
Nigeria
In Nigeria, the Lagos State Government is encouraging Lagosians to adopt an energy conservation culture. In 2013, the Lagos State Electricity Board (LSEB) ran an initiative tagged "Conserve Energy, Save Money" under the Ministry of Energy and Mineral Resources. The initiative is designed to sensitize Lagosians to the theme of energy conservation by influencing their behavior through do-it-yourself tips. In September 2013, Governor Babatunde Raji Fashola of Lagos State and the campaign ambassador, rapper Jude "MI" Abaga, participated in the Governor's conference video call on the topic of energy conservation. In addition, during the month of October (the official energy conservation month in the state), LSEB hosted experience centers in malls around Lagos State where members of the public were encouraged to calculate their household energy consumption and discover ways to save money using a consumer-focused energy app. To get Lagosians started on energy conservation, solar lamps and energy-saving bulbs were also handed out. In Kaduna State, the Kaduna Power Supply Company (KAPSCO) ran a program to replace all light bulbs in public offices, fitting energy-saving bulbs in place of incandescent bulbs. KAPSCO is also embarking on an initiative to retrofit all conventional streetlights in the Kaduna Metropolis with LEDs, which consume much less energy.
Sri Lanka
Sri Lanka currently uses fossil fuels, hydropower, wind power, solar power, and dendro power for its day-to-day power generation. The Sri Lanka Sustainable Energy Authority plays a major role in energy management and energy conservation. Today, most industries are requested to reduce their energy consumption by using renewable energy sources and optimizing their energy usage.
Turkey
Turkey aims to decrease its energy intensity (the amount of energy consumed per unit of GDP) by at least 20% by 2023.
United Kingdom
The Department for Business, Energy and Industrial Strategy is responsible for promoting energy efficiency in the United Kingdom.
United States
The United States is currently the second-largest single consumer of energy, following China. The U.S.
Department of Energy categorizes national energy use in four broad sectors: transportation, residential, commercial, and industrial.About half of U.S. energy consumption in the transportation and residential sectors is primarily controlled by individual consumers. In the typical American home, space heating is the most significant energy use, followed by electrical technology (appliances, lighting, and electronics) and water heating. Commercial and industrial energy expenditures are determined by businesses entities and other facility managers. National energy policy has a significant effect on energy usage across all four sectors. Since the oil embargoes and price spikes of the 1970s, energy efficiency and conservation have been fundamental tenets of U.S. energy policy. The scope of energy conservation and efficiency measures has been broadened throughout time by U.S. energy policies and programs, including federal and state legislation and regulatory actions, to include all economic sectors and all geographical areas of the nation. Measurable energy conservation and efficiency gains in the 1980s led to the 1987 Energy Security Report to the President (DOE, 1987) that "the United States uses about 29 quads less energy in a year today than it would have if our economic growth since 1972 had been accompanied by the less- efficient trends in energy use we were following at that time" The DOE Strategy and the legislation included new strategies for strengthening conservation and efficiency in buildings, industry, and electric power, such as integrated resource planning for electric and natural gas utilities and efficiency and labeling standards for 13 residential appliances and equipment categories. Lack of a national consensus on how to proceed interfered with developing a consistent and comprehensive approach. Nevertheless, the Energy Policy Act of 2005 (EPAct05; 109th U.S. Congress, 2005) contained many new energy conservation and efficiency provisions in the transportation, buildings, and electric power sectors.The most recent federal law to increase and broaden U.S. energy conservation and efficiency laws, programs, and practices is the Energy Independence and Security Act of 2007 (EISA). Over the next few decades, it is anticipated that EISA will significantly reduce energy use because it has more standards and targets than previous legislation. Both acts reinforce the importance of lighting and appliance efficiency programs, targeting an additional 70% lighting efficiency by 2020, introducing 45 new standards for appliances, and setting up new standards for vehicle fuel economy. The Federal Government is also promoting a new 30% model code for efficient building practices in the construction industry. Additionally, according to the American Council for an Energy-Efficient Economy (ACEEE), the EISA's energy efficiency and conservation initiatives will cut carbon dioxide emissions by 9% in 2030. These requirements cover appliance and lighting efficiency, energy savings in homes, businesses, and public buildings, the effectiveness of industrial manufacturing facilities, and the efficiency of electricity supply and end use. Expectations are high for increased energy savings due to these initiatives, which have already started contributing to new federal, state, and local laws, programs, and practices across the U.S. 
The development and use of alternative transportation fuels (whose supply is expected to expand by 15% by 2022), renewable energy sources, and other clean energy technologies have also received more attention and financial incentives. Recent policies also emphasize growing the use of coal with carbon capture and sequestration, solar, wind, nuclear, and other clean energy sources. In February 2023, the United States Department of Energy proposed a set of new energy efficiency standards that, if implemented, would save users of electric appliances in the United States around $3.5 billion per year and, by 2050, would reduce carbon emissions by an amount equal to the annual emissions of 29 million homes.
Mechanisms to Promote Conservation
Governmental mechanisms
Governments at the national, regional, and local levels may implement policies to promote energy efficiency. Building energy rules can cover the energy consumption of an entire structure or of specific building components, like heating and cooling systems. They represent some of the most frequently used instruments for energy efficiency improvements in buildings and can play an essential role in improving energy conservation in buildings. There are multiple reasons for the growth of these policies and programs since the 2000s, including cost savings as energy prices increased, growing concern about the environmental impacts of energy use, and public health concerns. The policies and programs related to energy conservation are critical to establishing safety and performance levels, assisting in consumer decision-making, and explicitly identifying energy-conserving and energy-efficient products. Recent policies include new programs and regulatory incentives that call for electric and natural gas utilities to increase their involvement in delivering energy-efficiency products and services to their customers. For example, the National Action Plan for Energy Efficiency (NAPEE) is a public-private partnership created in response to EPAct05 that brings together senior executives from electric and natural gas utilities, state public utility commissions, other state agencies, and environmental and consumer groups representing every region of the country. The success of building energy regulation in effectively controlling energy consumption in the building sector will, to a great extent, depend on the adopted energy performance indicator and the promoted energy assessment tools. Regulation can help overcome significant market barriers and ensure that cost-effective energy efficiency opportunities are incorporated into new buildings. This is crucial in emerging nations, where new construction is developing rapidly and market and energy prices sometimes discourage efficient technologies. A survey of building energy standard development and adoption showed that 42% of the emerging and developing countries surveyed have no energy standard in place, 20% have mandatory standards, 22% have mixed standards, and 16% have proposed standards. The major impediments to implementing building energy regulations for energy conservation and efficiency in the building sector are institutional barriers and market failures rather than technical problems, as pointed out by Nature Publishing Group (2008). Among these, Santamouris (2005) includes owners' lack of awareness of the benefits of energy conservation and of building energy regulations, insufficient awareness and training of property managers, builders, and engineers, and a lack of specialized professionals to ensure compliance.
Based on the above information, the development and adoption of building energy regulations, such as energy standards, in developing countries is still far behind the adoption and implementation of building energy regulation in developed countries. Building energy standards are starting to appear in Africa, Latin America, and the Middle East, although, according to the results obtained in this study, this is a recent development. The level of progress on energy regulation activities in Africa, Latin America, and the Middle East is increasing, given the higher number of energy standard proposals recorded in these regions. According to the Royal Institute of Chartered Surveyors, several codes are being developed in developing countries with UNDP and GEF support. These typically include elemental and integrated routes to compliance, such as a fundamental method defining the performance requirements of specific building elements. However, these countries are still far behind in building energy regulation development, implementation, and compliance compared to developed nations. Also, decision-making regarding energy regulations still rests with government alone, with little or no input from non-governmental entities. As a result, lower energy regulation development is recorded in these regions compared to regions with integrated and consensus-based approaches. Additionally, there is growing government involvement in the development and implementation of energy standards; 62% of Middle Eastern respondents, 45% of African respondents, and 43% of Latin American respondents indicated that existing government agencies, such as building agencies and energy agencies, are involved in implementing building energy standards in their respective nations, as opposed to 20% of European respondents, 38% of Asian respondents, and 0% of North American respondents who indicated the involvement of existing agencies. Several North African nations, like Tunisia and Egypt, have programs relating to building energy standards, while Algeria and Morocco are now seeking to establish building energy standards, according to the Royal Institute of Chartered Surveyors. Egypt's residential energy standard became law in 2005, and its commercial standard was anticipated to follow. The standards provide minimum performance requirements for air conditioners and other appliances, with both elemental and integrated compliance pathways; however, it was claimed that enforcement legislation was still required in 2005. Additionally, Morocco launched a program in 2005 to create thermal energy requirements for construction, concentrating on the hospitality, healthcare, and communal housing industries.
Mandatory energy standards
Energy standards are the primary way governments foster energy efficiency as a public good. A standard is prepared by a recognized standard-setting organization, and standards developed by recognized organizations are often used as the basis for the development and updating of building codes. They allow innovative approaches and techniques to achieve effective energy utilization and optimum building performance. In addition, they encourage cost-effective energy use by building components, including the building envelope, lighting, HVAC, electrical installations, lifts and escalators, and other equipment. Energy-efficiency standards have been expanded and strengthened for appliances, building equipment, and lighting.
For example, appliance and equipment standards are being developed for a new range of devices, including reduction goals for "standby" power that keeps consumer electronic products in a ready-to-use mode. Some standards require certain levels of energy performance from a car, building, appliance, or other technical equipment; if the vehicle, building, appliance, or equipment does not meet these standards, there may be restrictions on its sale or rent. In the U.K., these are called "minimum energy efficiency standards" or MEES and were applied to privately rented accommodation in 2019. Energy codes and standards are vital in setting minimum energy-efficient design and construction requirements. Buildings should be developed in accordance with energy standards in order to save energy effectively. Standards specify uniform requirements for new buildings, additions, and modifications, and are published by national organizations like the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). State and municipal governments frequently use energy standards as the technical foundation for creating their energy regulations. Some energy standards are written in mandatory and enforceable language, making it simple for governments to add the standards' provisions directly to their laws or regulations. ASHRAE is a well-known example of a standard-making organization; it dates to the nineteenth century and is international in its membership (About ASHRAE 2018). Examples of ASHRAE standards that relate to energy conservation in the built environment are:
Standard 62.1-2016 Ventilation for Acceptable Indoor Air Quality
Standard 90.2-2007 Energy Efficient Design of Low-Rise Residential Buildings
Standard 100-2018 Energy Efficiency in Existing Buildings
Standard 189.1-2014 Standard for the Design of High-Performance Green Buildings
The Home Energy Rating System (HERS) of the Residential Energy Services Network (RESNET) is another crucial benchmark for energy reduction. HERS, which is based on the International Code Council's (ICC) energy code, is used to rate home energy consumption on a standard numerical scale that examines factors in home energy use (About HERS 2018). The American National Standards Institute (ANSI) has acknowledged the HERS assessment system as a national benchmark for evaluating energy efficiency. The International Energy Conservation Code (IECC) of the ICC requires an energy rating index, and the main index used in the residential building sector is HERS. The mortgage financing sector makes substantial use of the HERS index: a home's expected energy usage may affect the available mortgage funds based on the HERS score, with more energy-efficient, lower-energy homes potentially qualifying for a better mortgage rate or amount.
Mandatory energy labels
Many governments require that a car, building, or piece of equipment be labeled with its energy performance. This allows consumers and customers to see the energy implications of their choices, but it does not restrict their choices or regulate which products are available to choose from. It also does not make options easy to compare (such as being able to filter by energy efficiency in online stores) or ensure that the best energy-conserving options are accessible (such as being stocked in the frequented local store). (An analogy would be nutritional labeling on food.)
An online trial that displayed the estimated financial energy cost of refrigerators alongside EU energy-efficiency class (EEEC) labels found that labelling involves a trade-off: it addresses financial considerations, but it raises the effort or time required to select a product from the many available options, which are often unlabelled and carry no EEEC requirement in order to be bought, used, or sold within the EU. Moreover, in this one trial the labeling was ineffective in shifting purchases towards more sustainable options.
Energy taxes
Some countries employ energy or carbon taxes to motivate energy users to reduce their consumption. Carbon taxes can motivate consumption to shift to energy sources with fewer emissions of carbon dioxide, such as solar power, wind power, hydroelectricity, or nuclear power, while avoiding cars with combustion engines, jet fuel, oil, fossil gas, and coal. On the other hand, taxes on all energy consumption can reduce energy use across the board while reducing a broader array of environmental consequences arising from energy production. The state of California employs a tiered energy tax whereby every consumer receives a baseline energy allowance that carries a low tax. As usage increases above that baseline, the tax increases drastically (a simple illustration of this tiered structure appears at the end of this section). Such programs aim to protect poorer households while creating a larger tax burden for high energy consumers. Developing countries in particular are less likely to impose policy measures that slow carbon emissions, as this would slow their economic development; these growing countries may be more inclined to support their own economic growth and their citizens rather than to decrease their carbon emissions. The following pros and cons of a carbon tax illustrate some of the potential effects of such a policy.
Pros of a carbon tax include:
Making polluters pay the external cost of carbon emissions.
Enabling greater social efficiency, as all citizens pay the full social cost.
Raising revenue which can, in turn, be spent on mitigating the effects of pollution.
Encouraging firms and consumers to search for non-carbon-producing alternatives (e.g. solar power, wind power, hydroelectricity, or nuclear power).
Reducing environmental costs associated with excess carbon pollution.
Cons of a carbon tax include:
Higher taxes can discourage investment and economic growth, as businesses claim.
A carbon tax may encourage tax evasion, as firms may pollute in secret to avoid the tax.
It may be difficult to measure external costs and determine how high the carbon tax should truly be.
There are administration costs in measuring pollution and collecting the associated tax.
Firms may move production to countries in which there is no carbon tax.
Non-Governmental Mechanisms
Voluntary energy standards
Another aspect of promoting energy efficiency is the use of the Leadership in Energy and Environmental Design (LEED) voluntary building design standards, a program supported by the US Green Building Council. The "Energy and Atmosphere" prerequisite addresses energy issues; it focuses on energy performance, renewable energy, and related topics. See green building.
Reactions against conservation
Former US President Donald Trump opposed water-use regulation. His administration issued a rule easing shower head flow restrictions, which the Biden administration repealed, and it also allowed the creation of more powerful and faster dishwashers.
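The tiered, baseline-plus-surcharge structure described under "Energy taxes" above can be sketched as follows. The tier boundaries and tax rates are illustrative assumptions, not California's actual schedule.

```python
# Minimal sketch of a tiered (baseline-plus-surcharge) energy tax.
# Tier boundaries and rates are illustrative assumptions only.

TIERS = [
    (300, 0.01),           # first 300 kWh: low per-kWh tax (the "baseline allowance")
    (600, 0.05),           # next block, up to 600 kWh: higher tax
    (float("inf"), 0.12),  # everything above 600 kWh: highest tax
]

def tiered_tax(kwh_used):
    """Total tax owed, charging each tier's rate only on the consumption
    that falls inside that tier."""
    tax, lower_bound = 0.0, 0.0
    for upper_bound, rate in TIERS:
        in_tier = max(0.0, min(kwh_used, upper_bound) - lower_bound)
        tax += in_tier * rate
        lower_bound = upper_bound
    return tax

for usage in (250, 500, 900):
    print(f"{usage} kWh -> ${tiered_tax(usage):.2f} tax")
```

A household staying within the baseline pays very little, while heavy consumers face sharply rising marginal rates, which is the intended protective and deterrent effect of such schedules.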
See also References Further reading GA Mansoori, N Enayati, LB Agyarko (2016), Energy: Sources, Utilization, Legislation, Sustainability, Illinois as Model State, World Sci. Pub. Co., ISBN 978-981-4704-00-7 Alexeew, Johannes; Carolin Anders and Hina Zia (2015): Energy-efficient buildings – a business case for India? An analysis of incremental costs for four building projects of the Energy-Efficient Homes Programme. Berlin/New Delhi: Adelphi/TERI Gary Steffy, Architectural Lighting Design, John Wiley and Sons (2001) ISBN 0-471-38638-3 Lumina Technologies, Analysis of energy consumption in a San Francisco Bay Area research office complex, for the (confidential) owner, Santa Rosa, Ca. 17 May 1996 Robb, Drew (2 June 2007). "GSA paves way for IT-based buildings – Government Computer News". Gcn.com. Archived from the original on 25 December 2008. Retrieved 29 July 2010. External links bigEE – Your guide to energy efficiency in buildings Energy saving advice and grants for UK consumers Energy efficiency and renewable energy at the U.S. Department of Energy EnergyStar – for commercial buildings and plants Ulrich Hottelet: Want to Save the Earth? Pick a Clothesline, Atlantic Times, November 2007 Energy Efficiency in Asia and the Pacific Asian Development Bank Energy Saving Tips Save up to $100 on power bills per year by switching off any unused appliances.
landfill gas utilization
Landfill gas utilization is the process of gathering, processing, and treating the methane and other gases emitted from decomposing garbage to produce electricity, heat, fuels, and various chemical compounds. After fossil fuels and agriculture, landfill gas is the third-largest human-generated source of methane. Compared to CO2, methane is 25 times more potent as a greenhouse gas. It is important not only to control its emission but, where conditions allow, to use it to generate energy, thus offsetting the contribution of two major sources of greenhouse gases towards climate change. The number of landfill gas projects, which convert the gas into power, went from 399 in 2005 to 519 in 2009 in the United States, according to the US Environmental Protection Agency. These projects are popular because they control energy costs and reduce greenhouse gas emissions. They collect the methane gas and treat it so it can be used for electricity or upgraded to pipeline-grade gas, powering homes, buildings, and vehicles.
Generation
Landfill gas (LFG) is generated through the degradation of municipal solid waste (MSW) and other biodegradable waste by microorganisms. Aerobic conditions, in the presence of oxygen, lead to predominantly CO2 emissions; in anaerobic conditions, as is typical of landfills, methane and CO2 are produced in a ratio of 60:40. Methane (CH4) is the key component of landfill gas, as its calorific value of 33.95 MJ/Nm3 is what gives rise to energy generation benefits. The amount of methane produced varies significantly based on the composition of the waste. Most of the methane produced in MSW landfills is derived from food waste, composite paper, and corrugated cardboard, which on average comprise 19.4 ± 5.5%, 21.9 ± 5.2%, and 20.9 ± 7.1%, respectively, of MSW landfills in the United States. The rate of landfill gas production varies with the age of the landfill. There are four common phases that a section of an MSW landfill undergoes after placement; typically, in a large landfill, different areas of the site will be at different stages simultaneously. The landfill gas production rate reaches a maximum at around 5 years and then starts to decline. After the decline begins, landfill gas production follows first-order kinetic decay, with a k-value ranging from 0.02 yr-1 for arid conditions to 0.065 yr-1 for wet conditions. The Landfill Methane Outreach Program (LMOP) provides a first-order decay model, LandGEM (Landfill Gas Emissions Model), to aid in the determination of landfill gas production. Gas extraction rates from a municipal solid waste (MSW) landfill typically range from 25 to 10,000 m3/h, and landfill sites typically hold from 100,000 m3 to 10 million m3 of waste in place. MSW landfill gas typically has roughly 45 to 60% methane and 40 to 60% carbon dioxide, depending on the amount of air introduced to the site, either through active gas extraction or from inadequate sealing (capping) of the landfill site. Depending on the composition of the waste in place, many other minor components together comprise roughly 1% of the gas, including H2S, NOx, SO2, CO, non-methane volatile organic compounds (NMVOCs), polycyclic aromatic hydrocarbons (PAHs), polychlorinated dibenzodioxins (PCDDs), and polychlorinated dibenzofurans (PCDFs). All of the aforementioned agents are harmful to human health at high doses.
LFG collection systems
Landfill gas collection is typically accomplished through wells installed vertically and/or horizontally in the waste mass.
Design heuristics for vertical wells call for about one well per acre of landfill surface, whereas horizontal wells are normally spaced about 50 to 200 feet apart on center. Efficient gas collection can be accomplished at both open and closed landfills, but closed landfills have systems that are more efficient, owing to greater deployment of collection infrastructure since active filling is not occurring. On average, closed landfills have gas collection systems that capture about 84% of produced gas, compared to about 67% for open landfills. Landfill gas can also be extracted through horizontal trenches instead of vertical wells; both systems are effective at collecting gas. Landfill gas is extracted and piped to a main collection header, where it is sent to be treated or flared. The main collection header can be connected to the leachate collection system to collect condensate forming in the pipes. A blower is needed to pull the gas from the collection wells to the collection header and further downstream. A 40-acre (160,000 m2) landfill gas collection system with a flare designed for a 600 ft3/min extraction rate is estimated to cost $991,000 (approximately $24,000 per acre), with annual operation and maintenance costs of $166,000 per year, at $2,250 per well, $4,500 per flare, and $44,500 per year to operate the blower (2008). LMOP provides a software model to predict collection system costs.
Flaring
If gas extraction rates do not warrant direct use or electricity generation, the gas can be flared off in order to avoid uncontrolled release to the atmosphere. One hundred m3/h is a practical threshold for flaring in the US; in the U.K., gas engines are used at capacities of less than 100 m3/h. Flares are useful in all landfill gas systems as they can help control excess gas extraction spikes and maintenance down periods. In the U.K. and EU, enclosed flares, from which the flame is not visible, are mandatory at modern landfill sites. Flares can be either open or enclosed, but the latter are typically more expensive as they provide high combustion temperatures and specific residence times as well as limit noise and light pollution. Some US states require the use of enclosed flares over open flares. Higher combustion temperatures and residence times destroy unwanted constituents such as un-burnt hydrocarbons. Generally accepted values are an exhaust gas temperature of 1000°C with a retention time of 0.3 seconds, which is said to result in greater than 98% destruction efficiency. The combustion temperature is an important controlling factor: above 1100°C there is a danger of exponential formation of thermal NOx.
Landfill gas treatment
Landfill gas must be treated to remove impurities, condensate, and particulates. The treatment system depends on the end use: minimal treatment is needed for the direct use of gas in boilers, furnaces, or kilns, while using the gas for electricity generation typically requires more in-depth treatment. Treatment systems are divided into primary and secondary processing. Primary processing systems remove moisture and particulates; gas cooling and compression are common in primary processing. Secondary treatment systems employ multiple cleanup processes, physical and chemical, depending on the specifications of the end use. Two constituents that may need to be removed are siloxanes and sulfur compounds, which are damaging to equipment and significantly increase maintenance costs.
Adsorption and absorption are the most common technologies used in secondary treatment processing.
Use of landfill gas
Direct use
Boiler, dryer, and process heater
Pipelines transmit gas to boilers, dryers, or kilns, where it is used much in the same way as natural gas. Landfill gas is cheaper than natural gas but holds about half the heating value, at 16,785–20,495 kJ/m3 (450–550 Btu/ft3) compared to 35,406 kJ/m3 (950 Btu/ft3) for natural gas. Boilers, dryers, and kilns are used often because they maximize use of the gas, limited treatment is needed, and the gas can be mixed with other fuels. Boilers use the gas to transform water into steam for use in various applications; about 8,000 to 10,000 pounds per hour of steam can be generated for every 1 million metric tons of waste in place at the landfill. Most direct use projects use boilers. General Motors saves $500,000 per year on energy costs at each of its four plants that have implemented landfill gas boilers. Disadvantages of boilers, dryers, and kilns are that they need to be retrofitted to accept the gas and that the end user has to be nearby (within roughly 5 miles), since pipelines will need to be built.
Infrared heaters, greenhouses, artisan studios
In situations with low gas extraction rates, the gas can power infrared heaters in buildings local to the landfill, provide heat and power to local greenhouses, and power the energy-intensive activities of a studio engaged in pottery, metalworking, or glass-blowing. Heat is fairly inexpensive to supply with the use of a boiler; a microturbine would be needed to provide power in low gas extraction rate situations.
Leachate evaporation
The gas coming from the landfill can be used to evaporate leachate in situations where leachate is fairly expensive to treat. The system to evaporate the leachate costs $300,000 to $500,000 to put in place, with operations and maintenance costs of $70,000 to $95,000 per year. A 30,000-gallon-per-day evaporator costs $0.05–$0.06 per gallon; the cost per gallon increases as the evaporator size decreases, so a 10,000-gallon-per-day evaporator costs $0.18–$0.20 per gallon. Estimates are in 2007 dollars.
Pipeline-quality gas, CNG, LNG
Landfill gas can be converted to high-Btu gas by reducing its carbon dioxide, nitrogen, and oxygen content. The high-Btu gas can be piped into existing natural gas pipelines or used in the form of CNG (compressed natural gas) or LNG (liquefied natural gas). CNG and LNG can be used on site to power hauling trucks or equipment, or sold commercially. Three commonly used methods to remove the carbon dioxide from the gas are membrane separation, molecular sieves, and amine scrubbing. Oxygen and nitrogen are controlled by the proper design and operation of the landfill, since the primary cause of oxygen or nitrogen in the gas is intrusion of outside air into the landfill driven by a pressure difference. The high-Btu processing equipment can be expected to cost $2,600 to $4,300 per standard cubic foot per minute (scfm) of landfill gas. Annual costs to operate, maintain, and supply electricity to the equipment range from $875,000 to $3.5 million. Costs depend on the quality of the end-product gas as well as the size of the project. The first landfill gas to LNG facility in the United States was at the Frank R. Bowerman Landfill in Orange County, California. The same process is used for the conversion to CNG, but on a smaller scale.
The CNG project at Puente Hills Landfill in Los Angeles has realized $1.40 per gallon of gasoline equivalent with the flow rate of 250 scfm. Cost per gallon equivalent reduces as the flow rate of gas increases. LNG can be produced through the liquification of CNG. However, the oxygen content needs to be reduced to be under 0.5% to avoid explosion concerns, the carbon dioxide content must be as close to zero as possible to avoid freezing problems encountered in the production, and nitrogen must be reduced enough to achieve at least 96% methane. A $20 million facility is estimated to achieve $0.65/gallon for a plant producing 15,000 gallons/day of LNG (3,000 scfm). Estimates are in 2007 dollars. Electricity generation If the landfill gas extraction rate is large enough, a gas turbine or internal combustion engine could be used to produce electricity to sell commercially or use on site. Reciprocating piston engine More than 70 percent of all landfill electricity projects use reciprocating piston (RP) engines, a form of internal combustion engine, because of relatively low cost, high efficiency, and good size match with most landfills. RP engines usually achieve an efficiency of 25 to 35 percent with landfill gas. However, RP engines can be added or removed to follow gas trends. Each engine can achieve 150kW to 3 MW, depending on the gas flow. An RP engine (less than 1 MW) can typically cost $2,300 per kW with annual operation and maintenance costs of $210 per kW. An RP engine (greater than 800 kW) can typically cost $1,700 per kW with annual operation and maintenance costs of $180 per kW. Estimates are in 2010 dollars. Gas turbine Gas turbines, another form of internal combustion engine, usually meet an efficiency of 20 to 28 percent at full load with landfill gas. Efficiencies drop when the turbine is operating at partial load. Gas turbines have relatively low maintenance costs and nitrogen oxide emissions when compared to RP engines. Gas turbines require high gas compression, which uses more electricity to compress, therefore reducing the efficiency. Gas turbines are also more resistant to corrosive damage than RP engines. Gas turbines need a minimum of 1,300 cfm and typically exceed 2,100 cfm and can generate 1 to 10 MW. A gas turbine (greater than 3 MW) can typically cost $1,400 per kW with annual operation and maintenance costs of $130 per kW. Estimates are in 2010 dollars. Microturbine Microturbines can produce electricity with lower amounts of landfill gas than gas turbines or RP engines. Microturbines can operate between 20 and 200 cfm and emit less nitrogen oxides than RP engines. Also, they can function with less methane content (as little as 35 percent). Microturbines require extensive gas treatment and come in sizes of 30, 70, and 250 kW. A microturbine (less than 1 MW) can typically cost $5,500 per kW with annual operation and maintenance costs of $380 per kW. Estimates are in 2010 dollars. Fuel cell Research has been performed indicating that molten carbonate fuel cells could be fueled by landfill gas. Molten carbonate fuel cells require less purity than typical fuel cells, but still require extensive treatment. The separation of acid gases (HCl, HF, and SO2), VOC oxidation (H2S removal) and siloxane removal are required for molten carbonate fuel cells. Fuel cells are typically run on hydrogen and hydrogen can be produced from landfill gas. Hydrogen used in fuel cells have zero emissions, high efficiency, and low maintenance costs. 
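As a rough illustration of how the flow rates, heating values, and engine efficiencies quoted in this section translate into electrical output, the sketch below applies the article's figure of roughly 450–550 Btu/ft3 for landfill gas and a 25–35% reciprocating-engine efficiency. The 1,000 cfm flow rate is an arbitrary example for illustration, not a figure from the text.

```python
# Rough estimate of electric output from a landfill-gas-fired engine.
# Heating value (~500 Btu/ft3) and 25-35% efficiency come from figures quoted
# in this article; the 1,000 cfm flow rate is an arbitrary example.

BTU_PER_KWH = 3412.0

def electric_power_kw(flow_cfm, heating_value_btu_per_ft3=500.0, efficiency=0.30):
    """Electrical power (kW) from a given landfill gas flow (ft3/min)."""
    fuel_input_btu_per_hr = flow_cfm * 60 * heating_value_btu_per_ft3
    return fuel_input_btu_per_hr * efficiency / BTU_PER_KWH

if __name__ == "__main__":
    for eff in (0.25, 0.30, 0.35):
        kw = electric_power_kw(1000, efficiency=eff)
        print(f"1,000 cfm at {eff:.0%} efficiency ~ {kw:,.0f} kW")
```

At these assumed values, a 1,000 cfm flow yields roughly 2 to 3 MW, which is consistent with the 150 kW to 3 MW range the article gives for individual reciprocating piston engines.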
Project incentives
Various landfill gas project incentives exist for United States projects at the federal and state level. The Department of the Treasury, Department of Energy, Department of Agriculture, and Department of Commerce all provide federal incentives for landfill gas projects. Typically, incentives are in the form of tax credits, bonds, or grants. For example, the Renewable Electricity Production Tax Credit (PTC) gives a corporate tax credit of 1.1 cents per kWh for landfill projects above 150 kW. Various states and private foundations also give incentives to landfill gas projects. A Renewable Portfolio Standard (RPS) is a legislative requirement for utilities to sell or generate a percentage of their electricity from renewable sources, including landfill gas. Some states require all utilities to comply, while others require only public utilities to comply.
Environmental impact
In 2005, 166 million tons of MSW were discarded to landfills in the United States. Roughly 120 kg of methane is generated from every ton of MSW. Methane has a global warming potential 25 times that of carbon dioxide on a 100-year time horizon. It is estimated that more than 10% of all global anthropogenic methane emissions are from landfills. Landfill gas projects help reduce methane emissions; however, landfill gas collection systems do not collect all the gas generated, and around 4 to 10 percent of landfill gas escapes the collection system of a typical landfill that has one. The use of landfill gas is considered a green fuel source because it offsets the use of environmentally damaging fuels such as oil or natural gas, destroys the heat-trapping gas methane, and is generated by deposits of waste that are already in place. As of 2007, 450 of the 2,300 landfills in the United States had operational landfill gas utilization projects. LMOP has estimated that approximately 520 existing landfills could use landfill gas (enough to power 700,000 homes). Landfill gas projects also decrease local pollution and create jobs, revenues, and cost savings. The roughly 450 landfill gas projects operational in 2007 generated 11 billion kWh of electricity and supplied 78 billion cubic feet of gas to end users; in greenhouse gas terms, these totals are equivalent to roughly 17,500,000 acres (7,100,000 ha) of pine or fir forests or to the annual emissions of 14,000,000 passenger vehicles.
See also
Anaerobic digestion
Atmospheric methane
Biogas
Biodegradation
Cogeneration
Landfill gas migration
Landfill gas monitoring
Waste minimisation
Underground coal gasification
References
carbon budget
A carbon budget is a concept used in climate policy to help set emissions reduction targets in a fair and effective way. It looks at "the maximum amount of cumulative net global anthropogenic carbon dioxide (CO2) emissions that would result in limiting global warming to a given level" (p. 2220). When expressed relative to the pre-industrial period it is referred to as the total carbon budget, and when expressed from a recent specified date it is referred to as the remaining carbon budget (p. 2220). A carbon budget consistent with keeping warming below a specified limit is also referred to as an emissions budget, an emissions quota, or allowable emissions. An emissions budget may also be associated with objectives for other related climate variables, such as radiative forcing or sea level rise. Total or remaining carbon budgets are calculated by combining estimates of various contributing factors, including scientific evidence and value judgments or choices. Global carbon budgets can be further divided into national emissions budgets, so that countries can set specific climate mitigation goals. Emissions budgets are relevant to climate change mitigation because they indicate a finite amount of carbon dioxide that can be emitted over time before resulting in dangerous levels of global warming. Change in global temperature is independent of the geographic location of these emissions, and is largely independent of the timing of these emissions. Carbon budgets are applicable to the global level. To translate these global carbon budgets to the country level, a set of value judgments has to be made on how to distribute the total and remaining carbon budget. This involves the consideration of aspects of equity and fairness between countries as well as other methodological choices. There are many differences between nations, including but not limited to population, level of industrialisation, national emissions histories, and mitigation capabilities. For this reason, scientists have made attempts to allocate global carbon budgets among countries using methods that follow various principles of equity. Definition The IPCC Sixth Assessment Report defines the carbon budget as the following two concepts (p. 2220): "An assessment of carbon cycle sources and sinks on a global level, through the synthesis of evidence for fossil fuel and cement emissions, emissions and removals associated with land use and land-use change, ocean and natural land sources and sinks of carbon dioxide (CO2), and the resulting change in atmospheric CO2 concentration. This is referred to as the global carbon budget."; or "The maximum amount of cumulative net global anthropogenic CO2 emissions that would result in limiting global warming to a given level with a given probability, taking into account the effect of other anthropogenic climate forcers. This is referred to as the total carbon budget when expressed starting from the pre-industrial period, and as the remaining carbon budget when expressed from a recent specified date." Global carbon budgets can be further divided into national emissions budgets, so that countries can set specific climate mitigation goals. An emissions budget may be distinguished from an emissions target, as an emissions target may be internationally or nationally set in accordance with objectives other than a specific global temperature and is commonly applied to the annual emissions of a single year as well. 
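The relationship between the two budget concepts can be written compactly. This is an editorial illustration consistent with the definitions above, not wording from the IPCC: if B_total denotes the total carbon budget for a given warming limit and E_hist denotes the cumulative anthropogenic CO2 already emitted since the pre-industrial period, then the remaining carbon budget is approximately

    B_remaining ≈ B_total − E_hist,

with the approximation reflecting uncertainties in past emissions and in the other contributing factors discussed in the sections that follow.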
Estimations Recent and currently remaining carbon budget Several organisations provide annual updates to the remaining carbon budget, including the Global Carbon Project, the Mercator Research Institute on Global Commons and Climate Change (MCC) and the CONSTRAIN project. In March 2022, before formal publication of the 'Global Carbon Budget 2021' preprint, scientists reported, based on Carbon Monitor (CM) data, that after the record decline caused by the COVID-19 pandemic in 2020, global CO2 emissions rebounded sharply by 4.8% in 2021, indicating that on the current trajectory the carbon budget for a two-thirds likelihood of limiting warming to 1.5 °C would be used up within 9.5 years. In April 2022, the now peer-reviewed and officially published Global Carbon Budget 2021 concluded that fossil CO2 emissions rebounded from pandemic levels by around +4.8% relative to 2020 emissions, returning to 2019 levels. It identifies three major issues for improving the reliability and accuracy of monitoring, shows that China and India surpassed 2019 levels (by 5.7% and 3.2%) while the EU and the US stayed beneath 2019 levels (by 5.3% and 4.5%), quantifies various changes and trends, for the first time provides model estimates that are linked to the official country GHG inventory reporting, and suggests that the remaining carbon budget as of 1 January 2022 for a 50% likelihood of limiting global warming to 1.5 °C (albeit with a temporary exceedance to be expected) is 120 GtC (420 GtCO2), or 11 years of 2021 emission levels. This does not mean that 11 years likely remain to cut emissions; rather, if emissions stayed constant at 2021 levels instead of increasing, 11 years of emissions would remain in the hypothetical scenario that all emissions then ceased in the twelfth year. (The 50% likelihood can be regarded as a minimal requirement, since lower likelihoods would make the 1.5 °C goal "unlikely".) Moreover, other trackers show (or highlight) different amounts of carbon budget left, such as the MCC, which as of May 2022 showed '7 years 1 month left'; different likelihoods also imply different carbon budgets: an 83% likelihood would mean 6.6 ±0.1 years left (ending in 2028) according to CM data. In October 2023 a group of researchers updated the carbon budget, including the CO2 emitted in 2020–2022 and new findings about the role of the reduced presence of polluting particles in the atmosphere. They found that, starting from January 2023, 250 GtCO2 (about 6 years of emissions at the current level) could be emitted while retaining a 50% chance of staying below 1.5 degrees. Reaching this target would require humanity to bring CO2 emissions to zero by the year 2034. For a 50% chance of staying below 2 degrees, humanity can emit 1220 GtCO2, or 30 years of emissions at the current level. Carbon budget in gigatonnes and factors The finding of an almost linear relationship between global temperature rise and cumulative carbon dioxide emissions has encouraged the estimation of global emissions budgets in order to remain below dangerous levels of warming. From the pre-industrial period to 2019, approximately 2,390 gigatonnes of CO2 (Gt CO2) had already been emitted globally. Scientific estimations of the remaining global emissions budgets/quotas differ due to varied methodological approaches and considerations of thresholds. Estimations might not include all amplifying climate change feedbacks, although the most authoritative carbon budget assessments by the IPCC do account explicitly for these. 
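The "years remaining" figures quoted by these trackers follow from a single division. The sketch below is an editorial illustration of that arithmetic using the numbers cited above; it is not code from the Global Carbon Project, MCC or CONSTRAIN, and the assumption of constant emissions is a simplification.

```python
# Illustrative sketch of the "years remaining" arithmetic used by carbon budget
# trackers, based on the figures quoted above. The constant-emissions assumption
# and the derived annual emission rate are simplifications for the example.

REMAINING_BUDGET_GTCO2 = 420         # budget on 1 Jan 2022, 50% chance of 1.5 degC
ANNUAL_EMISSIONS_GTCO2 = 420 / 11    # implied 2021 emission level (~38 GtCO2/yr)

def years_remaining(budget_gtco2: float, annual_emissions_gtco2: float) -> float:
    """Years until the budget is exhausted if emissions stay constant."""
    return budget_gtco2 / annual_emissions_gtco2

print(round(years_remaining(REMAINING_BUDGET_GTCO2, ANNUAL_EMISSIONS_GTCO2), 1))  # 11.0
print(round(years_remaining(250, ANNUAL_EMISSIONS_GTCO2), 1))  # ~6.5, cf. the 2023 update
```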
The IPCC assesses the size of remaining carbon budgets using estimates of past warming caused by human activities, the amount of warming per cumulative unit of CO2 emissions (also known as the Transient Climate Response to cumulative Emissions of carbon dioxide, or TCRE), the amount of warming that could still occur once all emissions of CO2 are halted (known as the Zero Emissions Commitment), and the impact of Earth system feedbacks that would otherwise not be covered; the resulting estimates vary according to the global temperature target that is chosen, the probability of staying below that target, and the assumed emissions of other non-CO2 greenhouse gases (GHGs). This approach was first applied in the 2018 Special Report on Global Warming of 1.5 °C by the IPCC, and was also used in its 2021 Working Group I Contribution to the Sixth Assessment Report. (A simplified numerical sketch of this budget arithmetic appears at the end of this section.) Carbon budget estimates depend on the likelihood or probability of avoiding a temperature limit, and the assumed warming that is projected to be caused by non-CO2 emissions. The most widely cited carbon budget estimates are drawn from the latest assessment of the physical science basis of climate change by the Working Group I Contribution to the IPCC Sixth Assessment Report. These estimates assume non-CO2 emissions are also reduced in line with deep decarbonisation scenarios that reach global net zero CO2 emissions. Carbon budget estimates thus depend on how successful society is in reducing non-CO2 emissions together with carbon dioxide emissions. The IPCC Sixth Assessment Report estimated that remaining carbon budgets can be 220 Gt CO2 higher or lower depending on how successfully non-CO2 emissions are reduced. National emissions budgets Carbon budgets are applicable to the global level. To translate these global carbon budgets to the country level, a set of value judgments has to be made on how to distribute the total and remaining carbon budget. In light of the many differences between nations, including but not limited to population, level of industrialisation, national emissions histories, and mitigation capabilities, scientists have made attempts to allocate global carbon budgets among countries using methods that follow various principles of equity. Allocating national emissions budgets is comparable to sharing the effort to reduce global emissions, underlined by some assumptions of state-level responsibility for climate change. Many authors have conducted quantitative analyses which allocate emissions budgets, often simultaneously addressing disparities in historical GHG emissions between nations. One guiding principle that is used to allocate global emissions budgets to nations is the principle of "common but differentiated responsibilities and respective capabilities" that is included in the United Nations Framework Convention on Climate Change (UNFCCC). This principle is not defined in further detail in the UNFCCC but is broadly understood to recognize nations' different cumulative historical contributions to global emissions as well as their different development stages. From this perspective, those countries with greater emissions during a set time period (for example, from the pre-industrial era to the present) are the most responsible for addressing excess emissions, as are countries that are richer. Thus, their national emissions budgets have to be smaller than those from countries that have polluted less in the past, or are poorer. 
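The following sketch illustrates the budget arithmetic described at the start of this section. It is an editorial simplification, not the IPCC's own calculation: it expresses the remaining budget as the allowed further warming divided by the TCRE, with deductions for non-CO2 warming, the Zero Emissions Commitment and feedbacks. All numbers in the example call are round illustrative values, not AR6 results.

```python
# Editorial sketch of remaining-budget arithmetic (not the IPCC's code).
# remaining budget ~= (target - historical - non-CO2 - ZEC warming) / TCRE,
# minus an adjustment for Earth system feedbacks. Values below are illustrative.

def remaining_budget_gtco2(target_warming_c: float,
                           historical_warming_c: float,
                           non_co2_warming_c: float,
                           zero_emissions_commitment_c: float,
                           tcre_c_per_1000_gtco2: float,
                           feedback_adjustment_gtco2: float = 0.0) -> float:
    """Remaining CO2 budget (GtCO2) for a given warming target."""
    allowed_warming = (target_warming_c - historical_warming_c
                       - non_co2_warming_c - zero_emissions_commitment_c)
    return allowed_warming / tcre_c_per_1000_gtco2 * 1000 - feedback_adjustment_gtco2

# Illustrative call: 1.5 degC target, ~1.1 degC historical warming, ~0.1 degC of
# assumed future non-CO2 warming, zero ZEC, TCRE of 0.45 degC per 1000 GtCO2.
print(round(remaining_budget_gtco2(1.5, 1.1, 0.1, 0.0, 0.45)))  # ~667 GtCO2, illustrative only
```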
The concept of national historical responsibility for climate change has prevailed in the literature since the early 1990s and has been part of the key international agreements on climate change (the UNFCCC, the Kyoto Protocol and the Paris Agreement). Consequently, those countries with the highest cumulative historical emissions have the most responsibility to take the strongest actions and help developing countries to mitigate their emissions and adapt to climate change. This principle is recognized in international treaties and has been part of the diplomatic strategies of developing countries, which argue that they need larger emissions budgets to reduce inequity and achieve sustainable development. Another common equity principle for calculating national emissions budgets is the "egalitarian" principle. This principle stipulates that individuals should have equal rights, and therefore emissions budgets should be distributed proportionally according to state populations. Some scientists have thus reasoned for the use of national per-capita emissions in national emissions budget calculations. This principle may be favoured by nations with larger or rapidly growing populations, but raises the question of whether individuals can have a right to pollute. A third equity principle that has been employed in national budget calculations considers national sovereignty. The "sovereignty" principle highlights the equal right of nations to pollute. The grandfathering method for calculating national emissions budgets uses this principle. Grandfathering allocates these budgets proportionally according to emissions at a particular base year, and has been used under international regimes such as the Kyoto Protocol and the early phase of the European Union Emissions Trading Scheme (EU ETS). This principle is often favoured by developed countries, as it allocates larger emissions budgets to them. However, recent publications argue that grandfathering is unsupported as an equity principle, as it "creates 'cascading biases'" against poorer states and is not a "standard of equity". Other scholars have highlighted that "to treat states as the owners of emission rights has morally problematic consequences". Pathways to stay within carbon budget The steps that can be taken to stay within a carbon budget are described under the concept of climate change mitigation. See also Global Carbon Project References External links The CONSTRAIN Project Annual Report Nauels, Alex; Rosen, Debbie; Mauritsen, Thorsten; Maycock, Amanda; McKenna, Christine; Rogelj, Joeri; Schleussner, Carl-Friedrich; Smith, Ela; Smith, Chris; Forster, Piers (2019). "ZERO IN ON the remaining carbon budget and decadal warming rates. The CONSTRAIN Project Annual Report 2019". University of Leeds. doi:10.5518/100/20.
list of parties to the paris agreement
The Paris Agreement is an agreement within the United Nations Framework Convention on Climate Change (UNFCCC) dealing with greenhouse gas emissions mitigation, adaptation and finance starting in the year 2020. The Agreement aims to respond to the global climate change threat by keeping the global temperature rise this century well below 2 degrees Celsius above pre-industrial levels and pursuing efforts to limit the temperature increase even further to 1.5 degrees Celsius. History The language of the agreement was negotiated by representatives of 197 parties at the 21st Conference of the Parties of the UNFCCC in Paris and adopted by consensus on 12 December 2015. The Agreement was open for signature by States and regional economic integration organizations that are Parties to the UNFCCC (the Convention) from 22 April 2016 to 21 April 2017 at the UN Headquarters in New York. The agreement stated that it would enter into force (and thus become fully effective) only if 55 countries that produce at least 55% of the world's greenhouse gas emissions (according to a list produced in 2015) ratify, accept, approve or accede to the agreement. On 1 April 2016, the United States and China, which together represent almost 40% of global emissions, issued a joint statement confirming that both countries would sign the Paris Climate Agreement. 175 Parties (174 states and the European Union) signed the agreement on the first date it was open for signature. On the same day, more than 20 countries issued a statement of their intent to join as soon as possible, with a view to joining in 2016. With ratification by the European Union, the Agreement obtained enough parties to enter into effect as of 4 November 2016. Parties As of February 2023, 194 states and the EU, representing over 98% of global greenhouse gas emissions, have ratified or acceded to the Agreement, including China and the United States, the countries with the 1st and 2nd largest CO2 emissions among UNFCCC members. A further 3 states have signed the Agreement but not ratified it. All 198 UNFCCC members have either signed or acceded to the Paris Agreement. European Union and its member states Both the EU and its member states are individually responsible for ratifying the Paris Agreement. A strong preference was reported that the EU and its 28 member states deposit their instruments of ratification at the same time to ensure that neither the EU nor its member states commit themselves to fulfilling obligations that strictly belong to the other, and there were fears that disagreement over each individual member state's share of the EU-wide reduction target, as well as Britain's vote to leave the EU, might delay the Paris pact. However, the European Parliament approved ratification of the Paris Agreement on 4 October 2016, and the EU deposited its instruments of ratification on 5 October 2016, along with several individual EU member states. Withdrawal from Agreement Article 28 of the agreement enables parties to withdraw from the agreement after sending a withdrawal notification to the depositary, but notice can be given no earlier than three years after the agreement goes into force for the country. Withdrawal is effective one year after the depositary is notified. Alternatively, the Agreement stipulates that withdrawal from the UNFCCC, under which the Paris Agreement was adopted, would also withdraw the state from the Paris Agreement. The conditions for withdrawal from the UNFCCC are the same as for the Paris Agreement. 
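The Article 28 timing rules amount to two consecutive waiting periods, which the short sketch below encodes. This is an editorial illustration, not text from the agreement; the United States dates in the example match those discussed in the following paragraphs.

```python
# Illustrative sketch of the Article 28 withdrawal timeline described above:
# earliest notice is three years after entry into force for the party, and
# withdrawal takes effect one year after the notice is given.
from datetime import date

def earliest_withdrawal(entry_into_force: date) -> tuple:
    """Return (earliest notification date, earliest effective withdrawal date)."""
    earliest_notice = entry_into_force.replace(year=entry_into_force.year + 3)
    earliest_effective = earliest_notice.replace(year=earliest_notice.year + 1)
    return earliest_notice, earliest_effective

# The agreement entered into force for the United States on 4 November 2016,
# giving 4 November 2019 as the earliest notice date and 4 November 2020 as the
# earliest effective withdrawal date (see the discussion that follows).
print(earliest_withdrawal(date(2016, 11, 4)))
```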
The agreement contains no provisions for non-compliance. On 1 June 2017, then-US President Donald Trump announced that the United States would withdraw from the agreement. In accordance with Article 28, as the agreement entered into force in the United States on 4 November 2016, the earliest possible effective withdrawal date for the United States was 4 November 2020. If it had chosen to withdraw by way of withdrawing from the UNFCCC, notice could be given immediately (the UNFCCC entered into force for the US in 1994) and be effective one year later. On 4 August 2017, the Trump administration delivered an official notice to the United Nations that the US intended to withdraw from the Paris Agreement as soon as it was legally eligible to do so. The formal notice of withdrawal could not be submitted until the agreement had been in force for the US for 3 years, in 2019. According to a memo obtained by HuffPost, believed to have been written by the US State Department legal office, any "attempts to withdraw from the Paris Agreement outside of the above-described withdrawal provisions would be inconsistent with international law and would not be accepted internationally." On 4 November 2019, the United States notified the depositary of its withdrawal from the agreement, to be effective exactly one year from that date. In one of his first executive actions, President Joe Biden signed an order that would have the United States rejoin the agreement. He was greeted by French President Emmanuel Macron with a "Welcome back to the Paris Agreement!" Signatories A further three states have signed but not ratified the Paris Agreement. Notes and references Notes === References ===
greet model
GREET (Greenhouse gases, Regulated Emissions, and Energy use in Technologies) is a full life-cycle model sponsored by the Argonne National Laboratory (U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy). It fully evaluates the energy and emission impacts of advanced and new transportation fuels, the fuel cycle from well to wheel, and the vehicle cycle through material recovery and vehicle disposal. It allows researchers and analysts to evaluate various vehicle and fuel combinations on a full fuel-cycle/vehicle-cycle basis. The GREET model is specified in the Inflation Reduction Act of 2022 §45V as the methodology to calculate the lifecycle greenhouse gas emissions "through the point of production (well-to-gate)" when determining the level of tax credit for clean hydrogen production, until a successor is approved by the Secretary of the Treasury. The basic implementation of the model was made using Excel spreadsheets. However, a more practical and easy-to-use version of the software, developed with .NET and with a fully graphical toolbox, is also available. Content For a given vehicle and fuel system, GREET separately calculates the following: Consumption of total energy (energy in non-renewable and renewable sources), fossil fuels (petroleum, fossil natural gas, and coal together), petroleum, coal and natural gas; Emissions of CO2-equivalent greenhouse gases - primarily carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O); and Emissions of six criteria pollutants: volatile organic compounds (VOCs), carbon monoxide (CO), nitrogen oxides (NOx), airborne particulate matter with sizes smaller than 10 micrometres (PM10), particulate matter with sizes smaller than 2.5 micrometres (PM2.5), and sulfur oxides (SOx). GREET includes more than 100 fuel production pathways and more than 70 vehicle/fuel systems. Michael Wang, a Senior Scientist in the Energy Systems Division, is the primary developer of GREET. External links http://greet.anl.gov/ == References ==
air pollution in germany
Air pollution in Germany has significantly decreased over the past decade. Air pollution occurs when harmful substances are released into the Earth's atmosphere. These pollutants are released through human activity and natural sources. Germany took interest in reducing its greenhouse gas (GHG) emissions by switching to renewable energy sources. Renewable energy use rose from 6.3% in 2000 to 34% in 2016. Through the transition to renewable energy sources, some people believe Germany has become the climate change policy leader and renewable energy leader in the European Union (EU) and in the world with ambitious climate change programs, though Germany's CO2 emissions per capita are in fact among the highest in Europe, almost twice those of e.g. France. The current goal of the German government was approved on 14 November 2016 in the German Climate Action Plan 2050, which outlines measures by which Germany can meet its greenhouse gas emission reduction targets by 2050. By 2050, Germany wants to reduce its GHG emissions by 80 to 95%, and by 2030 by 55%, compared to the EU target of 40%. In order to achieve these goals, a variety of strategies and policies are used rather than legislation. The four strategies the German government bases air pollution control on are laying down environmental quality standards, emission reduction requirements according to the best available technology, production regulations, and laying down emission ceilings. Through these strategies, policy instruments have been put in place that have contributed to the success of the significant air pollution reduction in Germany. These instruments include the Federal Emission Control Act and Implementing Ordinances, Technical Instructions on Air Quality Control (TA Luft), the Amendment to the Ordinance on Small Firing Installations, implementation of the directive on industrial emissions, and transboundary air pollution control policy. The German feed-in tariff policy introduced in 2000 led to the significant increase in renewable energy use and decreasing air pollution. Feed-in tariffs were introduced in Germany to increase the use of renewables, such as wind power, biomass, hydropower, geothermal power, and photovoltaics, thereby reducing the GHG emissions that cause air pollution and combating climate change. The German government has been an agenda setter in international climate policy negotiations since the late 1980s. However, national and global climate policies have become a top priority since the conservative-social democratic government came into power in 2005, pushing both European and international climate negotiations. Positive path dependency in Germany's climate and energy policies has occurred over the past 20 years. There are three main triggers that put Germany on this positive path and led it to become a climate change policy leader. The first was the widespread damage to health, due to smog, and to nature, due to acid rain, caused by air pollution. The second was the shock of the two oil price crises, in 1973 and 1979, which highlighted the problem of the German economy's strong dependence on insecure foreign sources. The third was the growing opposition to the country's growing reliance on nuclear energy. Air pollution began to be seen as a problem in Germany due to these three triggers, causing Germany to put policies into place to control air pollution. This has now developed from controlling air pollution to being a leader in climate change politics. 
Background Air pollution may cause diseases, allergies, or death in humans. It may also cause harm to other living organisms, such as animals and crops, and may cause damage to the environment. Air pollution can be generated by both human activity and natural processes. An air pollutant is a substance in the air that can have adverse effects on humans and the ecosystem. The major primary pollutants produced by human activity include: Carbon dioxide (CO2) - The most emitted form of human-caused air pollution. Emitted by the burning of fossil fuels. Sulfur oxides (SOx) - Particularly sulfur dioxide. Produced by the combustion of coal and petroleum. NOx (nitrogen oxides, particularly nitrogen dioxide). Produced by high-temperature combustion. Carbon monoxide (CO) - Product of incomplete combustion of fuel such as natural gas, coal or wood. Vehicle exhaust is a major source of carbon monoxide. Volatile organic compounds (VOC) - Categorized as either methane or non-methane. Methane is an extremely efficient greenhouse gas which contributes to enhanced global warming. Ammonia (NH3) - Emitted from agricultural processes. Secondary pollutants include: Particulates created from gaseous primary pollutants and compounds in photochemical smog. Smog is a kind of air pollution; classic smog results from large amounts of coal burning in an area and is caused by a mixture of smoke and sulfur dioxide. Modern smog does not usually come from coal but from vehicular and industrial emissions that are acted on in the atmosphere by ultraviolet light from the sun to form secondary pollutants that also combine with the primary emissions to form photochemical smog. Ground-level ozone (O3) formed from NOx and VOCs. Abnormally high concentrations are brought about by human activities (mostly the combustion of fossil fuels); it is a pollutant and a constituent of smog. Sources There are various locations, activities or factors which are responsible for releasing pollutants. Anthropogenic (man-made) sources These are mostly related to the burning of multiple types of fuel. Stationary sources include smoke stacks of power plants, factories, waste incinerators, furnaces and other types of fuel-burning heating devices. Mobile sources include motor vehicles, marine vessels, and aircraft. Controlled burn practices in agriculture and forest management. Fumes from paint, hair spray, varnish, aerosol sprays, and other solvents. Waste deposition in landfills, which generate methane. Military resources, such as nuclear weapons, toxic gases, germ warfare, and rocketry. Historical roots of Germany's air pollution policies "Modern" environmental policies first began to be developed in the 1960s, with the United States, Japan, Sweden, and to some extent Great Britain at the forefront in establishing new environmental institutions, procedures, instruments, standards, and technologies. Learning from these countries, Germany quickly caught up, especially in the area of air pollution control policy. The largest triggers for change were: The widespread damage to health (smog) and nature (acid rain) caused by air pollution. The shock of the two oil price crises (1973 and 1979) that highlighted the problem of the German economy's strong dependence on insecure foreign sources. The growing opposition to the country's growing reliance on nuclear energy. As early as 1977, green groups participated in elections to district parliaments. 
In the European elections of 1979, several such groups put up candidates with a "green label", attracting almost one million votes. During the rise of the green party, environmental issues triggered major and partly violent conflicts. Since the late 1980s, a more cooperative policy style developed between the various actor groups and institutions. The high integrative capacity of the German political system and the willingness of the "organized" green movement to become more cooperative are important features explaining the mostly cooperative climate policy that followed in the 1980s. The integration of a green political party in the political-administrative system distinguishes Germany from the United States and Japan. The German election system is one of proportional representation. Once the 5% hurdle is overcome, a party achieves representation in parliament. The German federal government and the electoral system tend to promote cooperation. The system of proportional representation makes it difficult for any single party to gain enough seats in the parliament to form a government by itself. Therefore, coalition governments are a basic German feature. The system encourages negotiation and consensus politics on and between all levels of government because it applies to all federal, state, and local bodies. These very specific political-cultural preconditions have influenced Germany's policy style in the area of climate change. The German public has been largely supportive of the German government's initiatives. An increased perception of vulnerability to climate change appears to motivate German citizens to be willing to change in the hopes of spurring necessary global efforts. Germany's vulnerability to the physical effects of climate change is much lower than the risk to the United States, Japan, Australia and Spain. However, risk perceptions among the population are high. There is also the issue of the perceived economic costs of action. In Germany, in contrast with many other countries, climate action is not considered to be an economic burden. The additional financial burden on the average household has been rather small, although it has increased in the form of a growing tax burden. Although certain measures have clear and distinct costs, there is a growing belief that the broader efforts to move to cleaner technologies have created economic "winners" as well. Green technology and renewable energy sectors have created many new jobs. In addition, the dependency on the world energy market for fossil resources is decreasing, reducing Germany's economic-political vulnerability. Thus, a structural change toward a climate-sensitive energy policy has been adopted with almost no social conflicts. These positive employment and foreign energy policy effects of climate-related policies have played an important role in the public's climate change discourse. Germany's climate change policies Germany is well on the way to meeting the standards of air pollution control set by the EU. For sulfur dioxide and volatile organic compounds it is sufficient to apply the measures already adopted and implemented in the past. However, additional reductions are needed for nitrogen oxides and ammonia. The necessary reductions in nitrogen oxide emissions will be achieved in the transport sector and in stationary installations. The necessary reductions for ammonia emissions will be achieved in the agriculture sector. 
Strategies for controlling air pollution The German government bases air pollution control on four strategies: Laying down environmental quality standards Emission reduction requirements according to the best available technology Production regulations Laying down emission ceilings Policy instruments for controlling air pollution 1. Federal Emission Control Act and Implementing Ordinances Air quality control in Germany is mainly governed by the Act on the Prevention of Harmful Effects on the Environment caused by Air Pollution, Noise, Vibration and similar Phenomena. 2. Technical instructions on air quality control (TA Luft) A modern instrument for German authorities to control air pollution. They contain provisions to protect citizens from unacceptably high pollutant emissions from installations as well as requirements to prevent adverse effects on the environment. It lays down emission limit values for relevant air pollutants from installations. Existing installations must also be upgraded to the best available technology. 3. Amendment to ordinance on small firing installations This instrument entered into force in March 2010 and was an important step toward reducing particulate matter emissions from small firing installations, such as stoves. The amended requirements for new installations and modernization of existing installations will especially achieve a noticeable average reduction in particulate matter emissions of 5 to 10% in the residential areas concerned. 4. Implementation of the directive on industrial emissions A significant share of the necessary emissions reductions to meet the targets proposed by Germany and the EU will be achieved by the implementation of the directive on industrial emissions. 5. Transboundary air pollution control policy A large proportion of pollution in Germany is due to pollutants transported through the air over long distances from neighbouring countries. Therefore, Germany determined it to be important to develop a transboundary air pollution control policy in order to increase the air quality in Germany. For this reason, the German government is actively involved in the constructive dialogue on air pollution control measures at both the European and international level. Feed-in tariffs Feed-in tariffs (FiT) for electricity have been introduced in Germany to encourage the use of new energy technologies such as wind power, biomass, hydropower, geothermal power, and solar photovoltaics. Feed-in tariffs are policy mechanisms designed to accelerate investment in renewable energy technologies by providing them with remuneration (a "tariff") above the retail or wholesale rates of electricity. Germany was the first country to implement feed-in tariffs by passing its Energy Feed-in Law in 1990. Many early FiT policies set one rate for all renewable energy technologies; more recently, FiT programs use various rates depending on the type of technology, location, size of the project, and quality of resources, allowing FiTs to promote multiple renewable energy technologies. Although Germany's FiT program began in 1990, it has been amended and revised in 2000, 2004, 2006, 2008 and 2011. This allows Germany to alter the policies as the economy and technologies for renewable energy change. The mechanism provides long-term security to renewable energy producers, typically based on the cost of generation of each technology. 
For instance, technologies such as wind power are given a lower per-kWh price, while technologies such as solar PV and tidal power are given a higher price, reflecting their higher costs. As of July 2014, the feed-in tariffs range from 3.33 ¢/kWh for hydropower facilities over 50 MW to 12.88 ¢/kWh for solar installations on buildings up to 30 kWp and 19 ¢/kWh for offshore wind. In August 2014, a revised German Renewable Energy Sources Act (EEG) entered into force. Under this revised act, specific deployment corridors now stipulate the extent to which renewable energy is to be expanded in the future, and the feed-in tariffs will gradually no longer be fixed by the government but instead be determined by auction. Wind and solar power will be targeted over hydro, gas, geothermal and biomass. The goal of the feed-in tariffs is to meet Germany's renewable energy goals of 40 to 45% of electricity consumption by 2025 and 55 to 60% by 2035. The policy also aims to encourage the development of renewable energy technologies, reduce external costs, and increase security of energy supply. Future developments German Climate Action Plan 2050 The German Climate Action Plan 2050 was approved by the German government on 14 November 2016. It is a climate protection policy document that outlines the measures that Germany will take to meet its national greenhouse gas emissions reduction goals by 2050, as well as fulfil its international commitments under the 2016 Paris Climate Agreement. Projections from the Ministry of Environment in September 2016 indicate that Germany will likely miss its 2020 climate target. Climate targets On 28 September 2010, Germany announced its greenhouse gas emissions targets. In October 2014, the European Council decided on a target of at least a 40% reduction in domestic greenhouse gas emissions by 2030 relative to 1990 for the EU; this is less stringent than Germany's targets. Sector targets for greenhouse gas emission reductions for 2030 Commission for growth, structural change, and regional development The Climate Action Plan 2050 establishes a commission for growth, structural change, and regional development. However, unlike earlier versions of Germany's climate change plans, the commission will not set a date for an exit from coal. Instead, the commission will develop a mix of instruments that will bring together economic development, structural change, social acceptability and climate protection. The commission will be based at the economics and energy ministry, but will consult with other ministries, federal states, municipalities, and unions, as well as with representatives of companies and regions that may be affected. The commission is scheduled to begin work at the beginning of 2018 and report at the end of 2018. == References ==
sustainable communities and climate protection act of 2008
The Sustainable Communities and Climate Protection Act of 2008, also known as Senate Bill 375 or SB 375, is a State of California law targeting greenhouse gas emissions from passenger vehicles. The Global Warming Solutions Act of 2006 (AB 32) sets goals for the reduction of statewide greenhouse gas emissions. Passenger vehicles are the single largest source of greenhouse gas emissions statewide, accounting for 30% of total emissions. SB 375 therefore provides key support to achieve the goals of AB 32. SB 375 instructs the California Air Resources Board (CARB) to set regional emission reduction targets for passenger vehicles. The Metropolitan Planning Organization for each region must then develop a "Sustainable Communities Strategy" (SCS) that integrates transportation, land-use and housing policies to plan for achievement of the emissions target for their region. In a press release the day he signed the bill into law, Governor Arnold Schwarzenegger stated, "What this will mean is more environmentally-friendly communities, more sustainable developments, less time people spend in their cars, more alternative transportation options and neighborhoods we can safely and proudly pass on to future generations." Background Senate Bill 375 was introduced as a bill in order to meet the environmental standards set out by the Global Warming Solutions Act of 2006 (AB 32). Since its implementation in 2006, AB 32 has facilitated the passage of a cap-and-trade program in 2010 which placed an upper limit on greenhouse gas levels emitted by the state of California. AB 32 has contributed to its initial objectives of curbing climate change by establishing a program to reduce greenhouse gas emissions from various sources throughout California. AB 32 mandates that California reach 1990 levels of greenhouse gas emissions by 2020, which is a twenty-five percent decrease from current levels in the state. First, AB 32 provides for the adoption of a scoping plan to achieve the most practicable reductions in greenhouse gas emissions from different sources. This scoping plan outlines how actions will be taken to reduce these emissions and how particular regulations and strategies or plans can contribute to this goal. Also, AB 32 identifies the levels of emissions, sets feasible limits, and adopts a regulatory measure requiring mandatory reporting of these emissions. The main components of the AB 32 policy have been to institute the cap-and-trade program, increase fuel efficiency in vehicles, decrease the carbon content in fuel, and motivate communities to become energy efficient. In order to fulfill these objectives, SB 375 aims to reduce the amount of carbon emitted by vehicles, reduce the amount of carbon in fuel, and reduce the distance of vehicle trips. SB 375 serves as the nation's first-ever law to associate global warming with land use planning and transportation. SB 375 addresses these issues by tracking the levels of emissions from vehicles and by modifying the planning allocations of regional housing and transportation in order to create transportation and land use patterns such that the public will drive their vehicles less. Metropolitan Planning Organizations (MPOs) within California are now concerned with carrying out these roles in order to amend these patterns and incentivize the restructuring of plans that contribute to reducing greenhouse gas emissions. 
SB 375 went into effect on January 1, 2009 and underwent twelve amendments from various groups, which modified its initially more stringent mandates. SB 375 takes travel time into account by acknowledging that the development of transportation and land systems affects the amount of time that the public spends driving. The bill's objective is to lead each of California's regions to adopt more long-term sustainable investments across multiple sectors by lessening the extent to which Californians spend time driving and reducing air pollution through these efforts. These sustainable investments are meant to decrease driving distances in order to make driving less necessary. Multiple regional planning commissions, local governmental bodies, and state environmental groups are responsible for SB 375's implementation. Provisions for implementation Under the bill, each of California's 18 regions is required to generate a land use and transportation plan, which serves as the SCS for each region. The bill requires that every MPO include a 'Sustainable Communities Strategy' in its regional transportation plan to show how these targets will be met. As a means of integrating transportation, housing, and land-use plans, this SCS will assist Metropolitan Planning Organizations (MPOs) in meeting the greenhouse gas emission targets for 2020 and 2035 which are assigned by the California Air Resources Board (CARB). Each SCS adopted in California includes land use strategies and transportation investment plans to carry out reductions in greenhouse gas emissions. All the SCS plans developed are generated in accordance with the Regional Transportation Plan (RTP), which regulates transportation financing in each region, as well as with a Regional Housing Needs Allocation (RHNA), which establishes housing goals and housing allocations consistent with the SCS such that the housing and zoning of municipalities must accommodate the plans set out by the RHNA. CARB assigns emissions targets to each region in California, which is responsible for ensuring that these targets are met by 2020 and 2035, and CARB then verifies that each SCS will sufficiently fulfill its aims and meet the emissions targets. The SCS guides local governments with regard to zoning and transportation plans and also provides incentives to developers who develop projects that help to meet the emission targets. Each SCS includes maps which show the land uses in the region, a plan that considers the housing needs of everyone of all income levels living in the region, as well as an analysis of impacts on open spaces. SB 375 establishes a coordinative process between metropolitan planning organizations (MPOs) and the Air Resources Board (ARB) such that greenhouse gas emission targets are created for every region within California. Also, the bill makes it necessary for governmental decisions that are associated with transportation funding to be in line with the SCS. SB 375 builds on the California Environmental Quality Act (CEQA), a statute that requires state and local agencies to assess the environmental effects of their actions and to mitigate them where possible, by streamlining CEQA review for projects that are consistent with this strategy. SB 375 provides CEQA incentives and exceptions for particular development projects that parallel the SCS that the bill sets out. The bill proposes changes to housing law in order to develop common land usage expectations for regional transportation planning and housing. 
Lastly, the bill strengthens requirements for public input into the creation and review of MPO plans. As a strategy to reach the goals of AB 32, SB 375 requires that CARB establish the targets for reductions in greenhouse gas emissions for the eighteen MPOs in the state for 2020 and 2035. CARB assigned the 'Regional Targets Advisory Committee' to identify mechanisms for these reductions. After the targets are set, MPOs are required to update their Regional Transportation Plans (RTPs) such that the integrative patterns of planning across multiple sectors are in accordance with one another. If an MPO cannot feasibly meet the greenhouse gas emission reduction target set forth by the SCS, the MPO is required to prepare an 'Alternative Planning Strategy' to identify the impediments to reaching these targets and to demonstrate how emission reductions will take place through the adoption of alternative planning and development patterns. Obstacles One significant obstacle that SB 375 has faced relates to the lack of a permanent funding source. When the bill was enacted into law, there was no identified source of funding to finance the comprehensive set of tasks set out for regional agencies. The Southern California Association of Governments initially approximated that the bill's implementation would require $8 million. However, this estimate did not include the costs of local agencies in planning actions related to the bill. As of now, the only possible source of additional funding is $90 million derived from funds of Proposition 84, but these funds are assigned to be utilized for the development and design of sustainable communities in general. Because SB 375 requires constant and continual funding, this funding is unlikely to be enough to finance the long-term objectives of the bill. Although SB 406 was introduced by Senator Mark DeSaulnier to provide permanent financing for SB 375, this bill would require a surcharge on motor vehicle registration, and this additional cost is one possible reason why the bill was not enacted. In 2009, Governor Schwarzenegger vetoed SB 406 and cited the imposition of this fee as subject to the approval of California voters. As of now, SB 406 has not been passed. Additionally, increased financial incentives are likely to be needed in order to support infill developers, since increased tax incentives and reduced permit fees may increase infill development. AB 782 was also introduced by California State Assembly member Kevin Jeffries as a bill to apply CEQA exemptions to more kinds of development projects. This would alter the current patterns of infill development. Although AB 782 was not passed, it represents another bill that was introduced to amend the effects caused by SB 375. In addition, there has been a significant degree of skepticism associated with SB 375's effectiveness. This skepticism derives from the fact that the bill only mandates that a plan for emissions reductions be created, with no requirement for the implementation of this plan. Also, regional governmental bodies are responsible for developing these plans, and these bodies do not have the power to regulate the usage of land. However, the law sets a precursor for the creation of a regional carbon budget and puts into place the processes to reduce greenhouse gas emissions. Regional Targets On September 23, 2010, ARB adopted greenhouse gas emission targets for passenger vehicles for each of the state's eighteen MPOs for the years 2020 and 2035. 
These targets were developed in coordination with each of the MPOs. Targets for the eight San Joaquin Valley MPOs are placeholder targets pending the development of improved data, modeling, and target-setting scenarios. Targets for the remaining six Metropolitan Planning Organizations (the Monterey Bay, Butte, San Luis Obispo, Santa Barbara, Shasta and Tahoe Basin regions) generally match or improve upon their current plans for 2020 and 2035. MTC, SANDAG, SACOG, SCAG and the San Joaquin Valley MPOs comprise 95% of the State of California's current population, vehicle miles of travel, and passenger vehicle greenhouse gas emissions, with the remaining six MPOs comprising only 5%. The targets are expressed as a percent reduction in per capita greenhouse gas emissions, with 2005 as a base year. Regions that meet their targets may receive easier access to certain federal funding opportunities and streamlined environmental review of development and infrastructure projects. Final targets were adopted by ARB on February 15, 2011. Regional Targets - Percent Reduction in Per Capita Greenhouse Gas Emissions from Passenger Vehicles Sustainable Communities Strategies Every four years in areas that are not in attainment under the Clean Air Act, and every five years in areas of attainment, MPOs prepare a Regional Transportation Plan that serves as a blueprint for future investments in transportation in their region. SB 375 adds a new element to each RTP, called a Sustainable Communities Strategy, or SCS. The SCS will increase the integration of land use and transportation planning through more detailed allocation of land uses in the RTP. Local and regional governments and agencies are empowered to determine how the targets are met, through a combination of land use planning, transportation programs, projects and policies, and/or other strategies. ARB will review each SCS to determine whether it would, if implemented, achieve the greenhouse gas emission reduction target for its region. If the SCS will not meet the region's target, the MPO must prepare a separate "alternative planning strategy (APS)" that is expected to meet the target. The APS is not a part of the RTP. SCS Evaluation Methodology In July 2011, ARB published a description of the methodology that it will use to determine whether a region's SCS, if adopted, will be expected to meet the greenhouse gas reduction target for that region. MPOs develop models to estimate current and predict future transportation-related conditions in the region. Inputs to the model include population distribution, land uses, and transportation infrastructure and services. The model then converts these inputs into output values such as vehicle miles traveled, daily trips per household and percentage of trips by various modes of travel (auto, transit, bicycling and walking). These and other outputs of the model will be used to estimate total greenhouse gas emissions from motor vehicles for the region. Primary responsibility for transportation modeling remains with the MPOs, as will the evaluation of the impact of their SCS on greenhouse gas emissions. ARB's role will be to evaluate the technical analysis performed by the MPOs, including a review of model complexity, and consideration of available resources and unique characteristics of each region. ARB will confirm estimates of vehicle-related GHG emissions and make a determination of whether these emissions will meet regional targets. 
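Because the targets are defined as a percent reduction in per capita passenger-vehicle emissions against a 2005 base year, the underlying check is a short piece of arithmetic. The sketch below is an editorial illustration of that check, not ARB's evaluation methodology; the region name and all figures are hypothetical.

```python
# Editorial sketch of the target-checking arithmetic described above: targets are
# expressed as a percent reduction in per capita passenger-vehicle GHG emissions
# relative to a 2005 base year. All numbers are hypothetical.

def per_capita_reduction(base_emissions_mt: float, base_population: float,
                         future_emissions_mt: float, future_population: float) -> float:
    """Percent reduction in per capita emissions relative to the base year."""
    base_per_capita = base_emissions_mt / base_population
    future_per_capita = future_emissions_mt / future_population
    return (1 - future_per_capita / base_per_capita) * 100

# Hypothetical region: total emissions grow slightly, but population grows faster,
# so per capita emissions still fall relative to 2005.
reduction = per_capita_reduction(30.0, 3.0e6, 31.5, 3.9e6)
print(f"{reduction:.1f}% per capita reduction vs. 2005")  # ~19.2%
```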
Over time, ARB will revise its methodology for reviewing an SCS and work with MPOs to help them improve their modeling capabilities and evaluation of the impact of future Sustainable Communities Strategies on vehicle-related greenhouse gas emissions. Sustainable Communities Strategies by region Sacramento Area Council of Governments (SACOG) SACOG is currently preparing its 2035 Metropolitan Transportation Plan (MTP), which will include a Sustainable Communities Strategy as required by SB 375. The draft MTP is scheduled for release in Fall 2011. San Diego Association of Governments (SANDAG) In June 2011, SANDAG released its Draft Regional Transportation Plan for 2050, which includes its Draft Sustainable Communities Strategy. On Tuesday, September 13, 2011, ARB released an informational report on SANDAG's Draft SB 375 Sustainable Communities Strategy. San Francisco Bay Area Metropolitan Transportation Commission (MTC) As part of their long-term regional planning process, titled "One Bay Area," the MTC and the Association of Bay Area Governments (ABAG) are developing a 25-year transportation plan for the San Francisco Bay Area that is scheduled for adoption in 2013. The initial vision scenario for the plan, which will include a Sustainable Communities Strategy for the region, was released on March 11, 2011. Implications for environmental justice Since its passage, SB 375 has garnered some controversy in relation to its environmental justice implications. The development and drafting process of SCS plans allows minority and low-income communities to take advantage of opportunities to participate in the application of the bill so that ideas of equity are incorporated into its implementation. SB 375 specifies that regional planning agencies must implement a public participation plan for the drafting of the SCS. SB 375 requires that the SCS for each region in California include the RHNA requirement to provide housing to people of all income levels. The Regional Targets Advisory Committee, which recommends ways to reduce emissions to each of California's regions, is made up of local governmental representatives as well as members of the public, affected air districts, and regional coalitions. Also, the CEQA exemption and streamlining provision would possibly make particular projects more difficult to litigate, and the CEQA streamlining benefits have led multiple environmental groups to withdraw their support for the bill. Although SB 375 supports increased-density development around major transit stops, this does not guarantee an increase in affordable options for housing and may even increase land values in these places, which may lead to the displacement of the people who live there. Another way in which the bill contributes to environmental justice is that it requires each city to show where housing will be situated in order to meet housing allocations for residents of varying income levels, and SB 375 also provides direct action to curb urban sprawl. According to a research study on accessory dwelling units by the UC Berkeley College of Environmental Design, California's implementation of SB 375 has indeed placed more pressure on particular neighborhoods to promote affordable housing development and infill. 
For example, the San Francisco Bay Area is dealing with the challenges of infilling, which may lead to increases in the cost of housing and further escalate the economic crisis for the communities there. There have been claims that SB 375 increases pressure from gentrification and does not improve the livelihoods of low-income neighborhoods with higher levels of minority populations. The pressure from gentrification may lead to population migration such that poorer residents may be displaced by wealthy newcomers as a result of the SB 375 investments that fund particular infrastructure and projects in accordance with the bill. These claims further blame the bill for lacking funding as well as restrictions on sprawl. Moreover, opponents of the bill claim that while it may promote development near transit areas in urban neighborhoods, other factors such as crime rates and employment levels in these neighborhoods must not be ignored in the passage of such bills. In addition, environmental justice advocates claim that SB 375 could lead MPOs to allocate more resources to high-income areas as well as to suburban rail expansion, leading to inequitable transit systems and lower housing affordability. They also claim that equitable reforms will not take place under the bill because they believe that the bill may generate urban development patterns that displace low-income communities and communities of color. Another significant concern is that the CEQA exemptions can be used to weaken advocacy efforts in communities of color and low-income communities. Although SB 375 has an obligation to generate and reserve affordable housing for the public, these advocates are concerned with the implications that may arise from the implementation of each SCS that results from SB 375. See also Global Warming Solutions Act of 2006 References External links California Air Resources Board Climate Change Program California Air Resources Board Sustainable Communities Regional Targets Sacramento Region Transportation Plan 2035 San Diego Association of Governments Draft Regional Transportation Plan for 2050 San Francisco One Bay Area Southern California Association of Governments
blast furnace
A blast furnace is a type of metallurgical furnace used for smelting to produce industrial metals, generally pig iron, but also others such as lead or copper. Blast refers to the combustion air being supplied above atmospheric pressure. In a blast furnace, fuel (coke), ores, and flux (limestone) are continuously supplied through the top of the furnace, while a hot blast of air (sometimes with oxygen enrichment) is blown into the lower section of the furnace through a series of pipes called tuyeres, so that the chemical reactions take place throughout the furnace as the material falls downward. The end products are usually molten metal and slag phases tapped from the bottom, and waste gases (flue gas) exiting from the top of the furnace. The downward flow of the ore along with the flux in contact with an upflow of hot, carbon monoxide-rich combustion gases is a countercurrent exchange and chemical reaction process. In contrast, air furnaces (such as reverberatory furnaces) are naturally aspirated, usually by the convection of hot gases in a chimney flue. According to this broad definition, bloomeries for iron, blowing houses for tin, and smelt mills for lead would be classified as blast furnaces. However, the term has usually been limited to those used for smelting iron ore to produce pig iron, an intermediate material used in the production of commercial iron and steel, and the shaft furnaces used in combination with sinter plants in base metals smelting. Blast furnaces are estimated to have been responsible for over 4% of global greenhouse gas emissions between 1900 and 2015, but are difficult to decarbonize. Process engineering and chemistry Blast furnaces operate on the principle of chemical reduction whereby carbon monoxide converts iron oxides to elemental iron. Blast furnaces differ from bloomeries and reverberatory furnaces in that in a blast furnace, flue gas is in direct contact with the ore and iron, allowing carbon monoxide to diffuse into the ore and reduce the iron oxide. The blast furnace operates as a countercurrent exchange process whereas a bloomery does not. Another difference is that bloomeries operate as a batch process whereas blast furnaces operate continuously for long periods. Continuous operation is also preferred because blast furnaces are difficult to start and stop. Also, the carbon in pig iron lowers the melting point below that of steel or pure iron; in contrast, iron does not melt in a bloomery. Silica has to be removed from the pig iron. It reacts with calcium oxide (burned limestone) and forms silicates, which float to the surface of the molten pig iron as slag. Historically, to prevent contamination from sulfur, the best quality iron was produced with charcoal. The downward moving column of ore, flux, coke or charcoal and reaction products must be sufficiently porous for the flue gas to pass through. To ensure this permeability, the particle size of the coke or charcoal is of great relevance. Therefore, the coke must be strong enough so it will not be crushed by the weight of the material above it. 
Besides the physical strength of its particles, the coke must also be low in sulfur, phosphorus, and ash.
The main chemical reaction producing the molten iron is:

Fe2O3 + 3 CO → 2 Fe + 3 CO2

This reaction can be divided into multiple steps. First, preheated air blown into the furnace reacts with the carbon in the form of coke to produce carbon monoxide and heat:

2 C(s) + O2(g) → 2 CO(g)

The hot carbon monoxide is the reducing agent for the iron ore and reacts with the iron oxide to produce molten iron and carbon dioxide. Depending on the temperature in the different parts of the furnace (warmest at the bottom), the iron is reduced in several steps. At the top, where the temperature is usually in the range between 200 °C and 700 °C, the iron oxide is partially reduced to iron(II,III) oxide, Fe3O4:

3 Fe2O3(s) + CO(g) → 2 Fe3O4(s) + CO2(g)

At temperatures around 850 °C, further down in the furnace, the iron(II,III) oxide is reduced further to iron(II) oxide:

Fe3O4(s) + CO(g) → 3 FeO(s) + CO2(g)

Hot carbon dioxide, unreacted carbon monoxide, and nitrogen from the air pass up through the furnace as fresh feed material travels down into the reaction zone. As the material travels downward, the counter-current gases both preheat the feed charge and decompose the limestone to calcium oxide and carbon dioxide:

CaCO3(s) → CaO(s) + CO2(g)

The calcium oxide formed by decomposition reacts with various acidic impurities in the iron (notably silica) to form a slag which is essentially calcium silicate, CaSiO3:

SiO2 + CaO → CaSiO3

As the iron(II) oxide moves down to the area with higher temperatures, ranging up to 1200 °C, it is reduced further to iron metal:

FeO(s) + CO(g) → Fe(s) + CO2(g)

The carbon dioxide formed in this process is re-reduced to carbon monoxide by the coke:

C(s) + CO2(g) → 2 CO(g)

The temperature-dependent equilibrium controlling the gas atmosphere in the furnace is called the Boudouard reaction:

2 CO ⇌ CO2 + C

The pig iron produced by the blast furnace has a relatively high carbon content of around 4–5% and usually contains too much sulphur, making it very brittle and of limited immediate commercial use. Some pig iron is used to make cast iron. The majority of pig iron produced by blast furnaces undergoes further processing to reduce the carbon and sulphur content and produce various grades of steel used for construction materials, automobiles, ships and machinery. Desulphurisation usually takes place during the transport of the liquid iron to the steelworks. It is done by adding calcium oxide, which reacts with the iron sulfide contained in the pig iron to form calcium sulfide (lime desulfurization). In a further process step, basic oxygen steelmaking, the carbon is oxidized by blowing oxygen onto the liquid pig iron to form crude steel.
Although the efficiency of blast furnaces is constantly evolving, the chemical process inside the blast furnace remains the same. One of the biggest drawbacks of the blast furnace is the unavoidable production of carbon dioxide as iron is reduced from iron oxides by carbon, and as of 2016 there was no economical substitute – steelmaking is one of the largest industrial contributors of CO2 emissions in the world (see greenhouse gases). Several alternatives are being investigated, such as plastic waste, biomass or hydrogen as the reducing agent, which can substantially reduce carbon emissions.
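To make the carbon balance concrete, the reduction steps above can be summed (with the carbon monoxide regenerated from coke) into the net reaction 2 Fe2O3 + 3 C → 4 Fe + 3 CO2. The short Python sketch below uses only this stoichiometry to estimate the minimum CO2 released per tonne of iron; it is a back-of-the-envelope illustration, not a furnace model, and real furnaces emit considerably more because coke is also burned to supply heat.

```python
# A minimal stoichiometric sketch, not a furnace model: it computes only the
# chemical minimum of CO2 released per tonne of iron implied by the net
# reduction reaction 2 Fe2O3 + 3 C -> 4 Fe + 3 CO2 described above. Real
# furnaces emit roughly three times this, because coke is also burned for heat.

M_FE, M_C, M_O = 55.845, 12.011, 15.999    # molar masses in g/mol
M_CO2 = M_C + 2 * M_O                      # ~44.01 g/mol

def min_co2_per_tonne_iron() -> float:
    """Tonnes of CO2 per tonne of Fe from the net reduction stoichiometry."""
    fe_mass = 4 * M_FE     # grams of Fe per formula unit of the net reaction
    co2_mass = 3 * M_CO2   # grams of CO2 released alongside it
    return co2_mass / fe_mass   # a mass ratio, so g/g equals t/t

if __name__ == "__main__":
    print(f"Stoichiometric minimum: {min_co2_per_tonne_iron():.2f} t CO2 per t Fe")
    # prints about 0.59, versus roughly 1.8 t CO2 per tonne of steel in practice
```

The result, roughly 0.6 tonnes of CO2 per tonne of iron, is the chemical floor that the hydrogen and biomass alternatives mentioned above aim to avoid by replacing carbon as the reducing agent.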
The injection of, for example, hydrogen into blast furnaces can reduce carbon emissions by 20 percent.The challenge set by the greenhouse gas emissions of the blast furnace is being addressed in an ongoing European Program called ULCOS (Ultra Low CO2 Steelmaking). Several new process routes have been proposed and investigated in depth to cut specific emissions (CO2 per ton of steel) by at least 50%. Some rely on the capture and further storage (CCS) of CO2, while others choose decarbonizing iron and steel production, by turning to hydrogen, electricity and biomass. In the nearer term, a technology that incorporates CCS into the blast furnace process itself and is called the Top-Gas Recycling Blast Furnace is under development, with a scale-up to a commercial size blast furnace under way. History Cast iron has been found in China dating to the 5th century BC, but the earliest extant blast furnaces in China date to the 1st century AD and in the West from the High Middle Ages. They spread from the region around Namur in Wallonia (Belgium) in the late 15th century, being introduced to England in 1491. The fuel used in these was invariably charcoal. The successful substitution of coke for charcoal is widely attributed to English inventor Abraham Darby in 1709. The efficiency of the process was further enhanced by the practice of preheating the combustion air (hot blast), patented by Scottish inventor James Beaumont Neilson in 1828. China Archaeological evidence shows that bloomeries appeared in China around 800 BC. Originally it was thought that the Chinese started casting iron right from the beginning, but this theory has since been debunked by the discovery of 'more than ten' iron digging implements found in the tomb of Duke Jing of Qin (d. 537 BC), whose tomb is located in Fengxiang County, Shaanxi (a museum exists on the site today). There is however no evidence of the bloomery in China after the appearance of the blast furnace and cast iron. In China, blast furnaces produced cast iron, which was then either converted into finished implements in a cupola furnace, or turned into wrought iron in a fining hearth.Although cast iron farm tools and weapons were widespread in China by the 5th century BC, employing workforces of over 200 men in iron smelters from the 3rd century onward, the earliest blast furnaces constructed were attributed to the Han dynasty in the 1st century AD. These early furnaces had clay walls and used phosphorus-containing minerals as a flux. Chinese blast furnaces ranged from around two to ten meters in height, depending on the region. The largest ones were found in modern Sichuan and Guangdong, while the 'dwarf" blast furnaces were found in Dabieshan. In construction, they are both around the same level of technological sophistication The effectiveness of the Chinese human and horse powered blast furnaces was enhanced during this period by the engineer Du Shi (c. AD 31), who applied the power of waterwheels to piston-bellows in forging cast iron. Early water-driven reciprocators for operating blast furnaces were built according to the structure of horse powered reciprocators that already existed. That is, the circular motion of the wheel, be it horse driven or water driven, was transferred by the combination of a belt drive, a crank-and-connecting-rod, other connecting rods, and various shafts, into the reciprocal motion necessary to operate a push bellow. Donald Wagner suggests that early blast furnace and cast iron production evolved from furnaces used to melt bronze. 
Certainly, though, iron was essential to military success by the time the State of Qin had unified China (221 BC). Usage of the blast and cupola furnace remained widespread during the Song and Tang dynasties. By the 11th century, the Song dynasty Chinese iron industry made a switch of resources from charcoal to coke in casting iron and steel, sparing thousands of acres of woodland from felling. This may have happened as early as the 4th century AD.The primary advantage of the early blast furnace was in large scale production and making iron implements more readily available to peasants. Cast iron is more brittle than wrought iron or steel, which required additional fining and then cementation or co-fusion to produce, but for menial activities such as farming it sufficed. By using the blast furnace, it was possible to produce larger quantities of tools such as ploughshares more efficiently than the bloomery. In areas where quality was important, such as warfare, wrought iron and steel were preferred. Nearly all Han period weapons are made of wrought iron or steel, with the exception of axe-heads, of which many are made of cast iron.Blast furnaces were also later used to produce gunpowder weapons such as cast iron bomb shells and cast iron cannons during the Song dynasty. Medieval Europe The simplest forge, known as the Corsican, was used prior to the advent of Christianity. Examples of improved bloomeries are the Stuckofen, sometimes called wolf-furnace,) which remained until the beginning of the 19th century. Instead of using natural draught, air was pumped in by a trompe, resulting in better quality iron and an increased capacity. This pumping of air in with bellows is known as cold blast, and it increases the fuel efficiency of the bloomery and improves yield. They can also be built bigger than natural draught bloomeries. Oldest European blast furnaces The oldest known blast furnaces in the West were built in Durstel in Switzerland, the Märkische Sauerland in Germany, and at Lapphyttan in Sweden, where the complex was active between 1205 and 1300. At Noraskog in the Swedish parish of Järnboås, traces of even earlier blast furnaces have been found, possibly from around 1100. These early blast furnaces, like the Chinese examples, were very inefficient compared to those used today. The iron from the Lapphyttan complex was used to produce balls of wrought iron known as osmonds, and these were traded internationally – a possible reference occurs in a treaty with Novgorod from 1203 and several certain references in accounts of English customs from the 1250s and 1320s. Other furnaces of the 13th to 15th centuries have been identified in Westphalia.The technology required for blast furnaces may have either been transferred from China, or may have been an indigenous innovation. Al-Qazvini in the 13th century and other travellers subsequently noted an iron industry in the Alburz Mountains to the south of the Caspian Sea. This is close to the silk route, so that the use of technology derived from China is conceivable. Much later descriptions record blast furnaces about three metres high. As the Varangian Rus' people from Scandinavia traded with the Caspian (using their Volga trade route), it is possible that the technology reached Sweden by this means. 
The Vikings are known to have used double bellows, which greatly increases the volumetric flow of the blast.The Caspian region may also have been the source for the design of the furnace at Ferriere, described by Filarete, involving a water-powered bellows at Semogo in Valdidentro in northern Italy in 1226. In a two-stage process the molten iron was tapped twice a day into water, thereby granulating it. Cistercian contributions The General Chapter of the Cistercian monks spread some technological advances across Europe. This may have included the blast furnace, as the Cistercians are known to have been skilled metallurgists. According to Jean Gimpel, their high level of industrial technology facilitated the diffusion of new techniques: "Every monastery had a model factory, often as large as the church and only several feet away, and waterpower drove the machinery of the various industries located on its floor." Iron ore deposits were often donated to the monks along with forges to extract the iron, and after a time surpluses were offered for sale. The Cistercians became the leading iron producers in Champagne, France, from the mid-13th century to the 17th century, also using the phosphate-rich slag from their furnaces as an agricultural fertilizer.Archaeologists are still discovering the extent of Cistercian technology. At Laskill, an outstation of Rievaulx Abbey and the only medieval blast furnace so far identified in Britain, the slag produced was low in iron content. Slag from other furnaces of the time contained a substantial concentration of iron, whereas Laskill is believed to have produced cast iron quite efficiently. Its date is not yet clear, but it probably did not survive until Henry VIII's Dissolution of the Monasteries in the late 1530s, as an agreement (immediately after that) concerning the "smythes" with the Earl of Rutland in 1541 refers to blooms. Nevertheless, the means by which the blast furnace spread in medieval Europe has not finally been determined. Origin and spread of early modern blast furnaces Due to the increased demand for iron for casting cannons, the blast furnace came into widespread use in France in the mid 15th century.The direct ancestor of those used in France and England was in the Namur region, in what is now Wallonia (Belgium). From there, they spread first to the Pays de Bray on the eastern boundary of Normandy and from there to the Weald of Sussex, where the first furnace (called Queenstock) in Buxted was built in about 1491, followed by one at Newbridge in Ashdown Forest in 1496. They remained few in number until about 1530 but many were built in the following decades in the Weald, where the iron industry perhaps reached its peak about 1590. Most of the pig iron from these furnaces was taken to finery forges for the production of bar iron.The first British furnaces outside the Weald appeared during the 1550s, and many were built in the remainder of that century and the following ones. The output of the industry probably peaked about 1620, and was followed by a slow decline until the early 18th century. This was apparently because it was more economic to import iron from Sweden and elsewhere than to make it in some more remote British locations. Charcoal that was economically available to the industry was probably being consumed as fast as the wood to make it grew.The first blast furnace in Russia opened in 1637 near Tula and was called the Gorodishche Works. The blast furnace spread from there to central Russia and then finally to the Urals. 
Coke blast furnaces
In 1709, at Coalbrookdale in Shropshire, England, Abraham Darby began to fuel a blast furnace with coke instead of charcoal. Coke's initial advantage was its lower cost, mainly because making coke required much less labor than cutting trees and making charcoal, but using coke also overcame localized shortages of wood, especially in Britain and on the Continent. Metallurgical grade coke will bear heavier weight than charcoal, allowing larger furnaces. A disadvantage is that coke contains more impurities than charcoal, with sulfur being especially detrimental to the iron's quality. Coke's impurities were more of a problem before hot blast reduced the amount of coke required and before furnace temperatures were hot enough to make slag from limestone free flowing. (Limestone ties up sulfur; manganese may also be added to tie up sulfur.)
Coke iron was initially only used for foundry work, making pots and other cast iron goods. Foundry work was a minor branch of the industry, but Darby's son built a new furnace at nearby Horsehay and began to supply the owners of finery forges with coke pig iron for the production of bar iron. Coke pig iron was by this time cheaper to produce than charcoal pig iron. The use of a coal-derived fuel in the iron industry was a key factor in the British Industrial Revolution. Darby's original blast furnace has been archaeologically excavated and can be seen in situ at Coalbrookdale, part of the Ironbridge Gorge Museums. Cast iron from the furnace was used to make girders for the world's first cast iron bridge in 1779. The Iron Bridge crosses the River Severn at Coalbrookdale and remains in use for pedestrians.

Steam-powered blast
The steam engine was applied to power blast air, overcoming a shortage of water power in areas where coal and iron ore were located. This was first done at Coalbrookdale, where a steam engine replaced a horse-powered pump in 1742. Such engines were used to pump water to a reservoir above the furnace. The first engine used to blow cylinders directly was supplied by Boulton and Watt to John Wilkinson's New Willey Furnace. It powered a cast iron blowing cylinder, which had been invented by his father Isaac Wilkinson. Isaac patented such cylinders in 1736, to replace the leather bellows, which wore out quickly, and was granted a second patent, also for blowing cylinders, in 1757. The steam engine and cast iron blowing cylinder led to a large increase in British iron production in the late 18th century.

Hot blast
Hot blast was the single most important advance in fuel efficiency of the blast furnace and was one of the most important technologies developed during the Industrial Revolution. Hot blast was patented by James Beaumont Neilson at Wilsontown Ironworks in Scotland in 1828. Within a few years of its introduction, hot blast was developed to the point where fuel consumption was cut by one-third using coke or two-thirds using coal, while furnace capacity was also significantly increased. Within a few decades, the practice was to have a "stove" as large as the furnace next to it into which the waste gas (containing CO) from the furnace was directed and burnt. The resultant heat was used to preheat the air blown into the furnace.
Hot blast enabled the use of raw anthracite coal, which was difficult to light, in the blast furnace. Anthracite was first tried successfully by George Crane at Ynyscedwyn Ironworks in south Wales in 1837.
It was taken up in America by the Lehigh Crane Iron Company at Catasauqua, Pennsylvania, in 1839. Anthracite use declined when very high capacity blast furnaces requiring coke were built in the 1870s.

Modern applications of the blast furnace

Iron blast furnaces
The blast furnace remains an important part of modern iron production. Modern furnaces are highly efficient, using Cowper stoves to pre-heat the blast air and employing recovery systems to extract the heat from the hot gases exiting the furnace. Competition in industry drives higher production rates. The largest blast furnace in the world is in South Korea, with a volume around 6,000 m3 (210,000 cu ft); it can produce around 5,650,000 tonnes (5,560,000 LT) of iron per year. This is a great increase from the typical 18th-century furnaces, which averaged about 360 tonnes (350 long tons; 400 short tons) per year. Variations of the blast furnace, such as the Swedish electric blast furnace, have been developed in countries which have no native coal resources. According to Global Energy Monitor, the blast furnace is likely to become obsolete in order to meet climate change objectives of reducing carbon dioxide emissions, but BHP disagrees. An alternative process involving direct reduced iron is likely to succeed it, but this also needs to use a blast furnace to melt the iron and remove the gangue (impurities) unless the ore is very high quality.

Oxygen blast furnace
The oxygen blast furnace (OBF) process has been extensively studied theoretically because of its potential for energy conservation and CO2 emission reduction. This type may be the most suitable for use with CCS. The main blast furnace has three levels: the reduction zone (523–973 K (250–700 °C; 482–1,292 °F)), the slag formation zone (1,073–1,273 K (800–1,000 °C; 1,472–1,832 °F)), and the combustion zone (1,773–1,873 K (1,500–1,600 °C; 2,732–2,912 °F)).

Lead blast furnaces
Blast furnaces are currently rarely used in copper smelting, but modern lead smelting blast furnaces are much shorter than iron blast furnaces and are rectangular in shape. Modern lead blast furnaces are constructed using water-cooled steel or copper jackets for the walls, and have no refractory linings in the side walls. The base of the furnace is a hearth of refractory material (bricks or castable refractory). Lead blast furnaces are often open-topped rather than having the charging bell used in iron blast furnaces. The blast furnace used at the Nyrstar Port Pirie lead smelter differs from most other lead blast furnaces in that it has a double row of tuyeres rather than the single row normally used. The lower shaft of the furnace has a chair shape, with the lower part of the shaft being narrower than the upper and the lower row of tuyeres located in the narrow part of the shaft. This allows the upper part of the shaft to be wider than standard.

Zinc blast furnaces
The blast furnaces used in the Imperial Smelting Process ("ISP") were developed from the standard lead blast furnace, but are fully sealed. This is because the zinc produced by these furnaces is recovered as metal from the vapor phase, and the presence of oxygen in the off-gas would result in the formation of zinc oxide. Blast furnaces used in the ISP have a more intense operation than standard lead blast furnaces, with higher air blast rates per m2 of hearth area and a higher coke consumption. Zinc production with the ISP is more expensive than with electrolytic zinc plants, so several smelters operating this technology have closed in recent years.
However, ISP furnaces have the advantage of being able to treat zinc concentrates containing higher levels of lead than can electrolytic zinc plants. Manufacture of stone wool Stone wool or rock wool is a spun mineral fibre used as an insulation product and in hydroponics. It is manufactured in a blast furnace fed with diabase rock which contains very low levels of metal oxides. The resultant slag is drawn off and spun to form the rock wool product. Very small amounts of metals are also produced which are an unwanted by-product. Modern iron process Modern furnaces are equipped with an array of supporting facilities to increase efficiency, such as ore storage yards where barges are unloaded. The raw materials are transferred to the stockhouse complex by ore bridges, or rail hoppers and ore transfer cars. Rail-mounted scale cars or computer controlled weight hoppers weigh out the various raw materials to yield the desired hot metal and slag chemistry. The raw materials are brought to the top of the blast furnace via a skip car powered by winches or conveyor belts.There are different ways in which the raw materials are charged into the blast furnace. Some blast furnaces use a "double bell" system where two "bells" are used to control the entry of raw material into the blast furnace. The purpose of the two bells is to minimize the loss of hot gases in the blast furnace. First, the raw materials are emptied into the upper or small bell which then opens to empty the charge into the large bell. The small bell then closes, to seal the blast furnace, while the large bell rotates to provide specific distribution of materials before dispensing the charge into the blast furnace. A more recent design is to use a "bell-less" system. These systems use multiple hoppers to contain each raw material, which is then discharged into the blast furnace through valves. These valves are more accurate at controlling how much of each constituent is added, as compared to the skip or conveyor system, thereby increasing the efficiency of the furnace. Some of these bell-less systems also implement a discharge chute in the throat of the furnace (as with the Paul Wurth top) in order to precisely control where the charge is placed.The iron making blast furnace itself is built in the form of a tall structure, lined with refractory brick, and profiled to allow for expansion of the charged materials as they heat during their descent, and subsequent reduction in size as melting starts to occur. Coke, limestone flux, and iron ore (iron oxide) are charged into the top of the furnace in a precise filling order which helps control gas flow and the chemical reactions inside the furnace. Four "uptakes" allow the hot, dirty gas high in carbon monoxide content to exit the furnace throat, while "bleeder valves" protect the top of the furnace from sudden gas pressure surges. The coarse particles in the exhaust gas settle in the "dust catcher" and are dumped into a railroad car or truck for disposal, while the gas itself flows through a venturi scrubber and/or electrostatic precipitators and a gas cooler to reduce the temperature of the cleaned gas.The "casthouse" at the bottom half of the furnace contains the bustle pipe, water cooled copper tuyeres and the equipment for casting the liquid iron and slag. Once a "taphole" is drilled through the refractory clay plug, liquid iron and slag flow down a trough through a "skimmer" opening, separating the iron and slag. 
Modern, larger blast furnaces may have as many as four tapholes and two casthouses. Once the pig iron and slag have been tapped, the taphole is again plugged with refractory clay.
The tuyeres are used to implement a hot blast, which is used to increase the efficiency of the blast furnace. The hot blast is directed into the furnace through water-cooled copper nozzles called tuyeres near the base. The hot blast temperature can be from 900 °C to 1300 °C (1600 °F to 2300 °F) depending on the stove design and condition, and the temperature in the furnace around the tuyeres may reach 2000 °C to 2300 °C (3600 °F to 4200 °F). Oil, tar, natural gas, powdered coal and oxygen can also be injected into the furnace at tuyere level to combine with the coke to release additional energy and increase the percentage of reducing gases present, which is necessary to increase productivity.
The exhaust gases of a blast furnace are generally cleaned in a dust collector – such as an inertial separator, a baghouse, or an electrostatic precipitator. Each type of dust collector has strengths and weaknesses – some collect fine particles, some coarse particles, and some collect electrically charged particles. Effective exhaust cleaning relies on multiple stages of treatment. Waste heat is usually collected from the exhaust gases, for example by the use of a Cowper stove, a variety of heat exchanger.
The IEA Greenhouse Gas R&D Programme (IEAGHG) has shown that in an integrated steel plant, 70% of the CO2 comes directly from the blast furnace gas (BFG). It is possible to use carbon capture technology on the BFG before the BFG goes on to be used for heat exchange processes within the plant. In 2000, the IEAGHG estimated that using chemical absorption to capture CO2 from BFG would cost $35/t of CO2 (an additional $8–20/t of CO2 would be required for CO2 transportation and storage). This would make the entire steel production process in a plant 15–20% more expensive.

Environmental impact
Life-cycle studies of blast furnace ironmaking have found that global warming potential and acidification potential are the most significant environmental impacts. On average, producing a tonne of steel emits 1.8 tonnes of CO2. However, a steel mill using a top gas recycling blast furnace (TGRBF) will emit 0.8 to 1.3 tonnes of CO2 per tonne of steel, depending upon the recycle rate of the TGRBF.

Decommissioned blast furnaces as museum sites
For a long time, it was normal procedure for a decommissioned blast furnace to be demolished and either be replaced with a newer, improved one, or to have the entire site demolished to make room for follow-up use of the area. In recent decades, several countries have realized the value of blast furnaces as a part of their industrial history. Rather than being demolished, abandoned steel mills were turned into museums or integrated into multi-purpose parks. The largest number of preserved historic blast furnaces exists in Germany; other such sites exist in Spain, France, the Czech Republic, Great Britain, Japan, Luxembourg, Poland, Romania, Mexico, Russia and the United States.

See also
Basic oxygen furnace
Blast furnace zinc smelting process
Crucible steel
Extraction of iron
Water gas, produced by a "steam blast"
FINEX
Krupp-Renn Process
Flodin process
Steelmaking
Ironworks and steelworks in England, which covers ironworks of all kinds
Shaft furnace breather
Direct reduction
Direct reduction (blast furnace)

References

Bibliography

External links
American Iron and Steel Institute
Blast Furnace animation
Extensive picture gallery about all methods of making and shaping of iron and steel in North America and Europe. In German and English.
Schematic diagram of blast furnace and Cowper stove
california senate bill 32
The California Global Warming Solutions Act of 2016: emissions limit, or SB-32, is a California Senate bill expanding upon AB-32 to reduce greenhouse gas (GHG) emissions. The lead author is Senator Fran Pavley and the principal co-author is Assemblymember Eduardo Garcia. SB-32 was signed into law on September 8, 2016, by Governor Edmund Gerald “Jerry” Brown Jr. SB-32 sets into law the mandated reduction target in GHG emissions as written into Executive Order B-30-15. The Senate bill requires that there be a reduction in GHG emissions to 40% below the 1990 levels by 2030. Greenhouse gas emissions include carbon dioxide, methane, nitrous oxide, sulfur hexafluoride, hydrofluorocarbons, and perfluorocarbons. The California Air Resources Board (CARB) is responsible for ensuring that California meets this goal. The provisions of SB-32 were added to Section 38566 of the Health and Safety Code subsequent to the bill’s approval. The bill went into effect on January 1, 2017.
SB-32 builds onto Assembly Bill (AB) 32, written by Senator Fran Pavley and Assembly Speaker Fabian Nunez and passed into law on September 27, 2006. AB-32 required California to reduce greenhouse gas emissions to 1990 levels by 2020, and SB-32 continues that timeline to reach the targets set in Executive Order B-30-15. SB-32 provides another intermediate target between the 2020 and 2050 targets set in Executive Order S-3-05. SB-32 was contingent on the passing of AB-197, which increases legislative oversight of CARB and is intended to ensure that CARB reports to the Legislature. AB-197 also passed and was signed into law on September 8, 2016.

Background
The global community recognized global climate trends most recently at the 21st yearly session of the Conference of Parties (COP 21) in Paris, where a universal commitment was made to keep the average “global temperature rise this century well below 2 degrees Celsius and to drive efforts to limit the temperature increase even further to 1.5 degrees Celsius above pre-industrial levels.” Climate science dictates that global warming of more than 2 °C would have serious consequences, beyond those already being experienced, such as an increase in the number of extreme climate events. In order to avoid a global average surface temperature increase of 2 °C, global GHG emissions need to be reduced by 40–70% by 2050 and carbon neutrality (i.e. zero emissions) needs to be reached by the end of the century at the latest. The United States pledged at the Paris COP 21 “to achieve an economy-wide target of reducing its greenhouse gas emissions by 26–28% below its 2005 level in 2025 and to make best efforts to reduce its emissions by 28%”. Prior to the Paris agreements, which have not been domestically ratified by a two-thirds Senate vote, the US did not have any binding national GHG reduction targets. In light of the uncertain federal requirements for GHG reductions, California has moved forward with statewide GHG reduction legislation. AB-32 and SB-32 establish aggressive near- and mid-term GHG reduction goals, respectively. Within the US, California is the second largest GHG emitter, accounting for approximately 7.7 percent of the national GHG emissions in 2012. Accordingly, California’s climate legislation can result in real reductions to the national GHG inventory and provide the framework for other states to achieve simultaneous economic growth and GHG reductions. California has set reduction targets based on 1990 levels of GHG emissions.
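As an illustration of how these percentage-below-baseline targets translate into absolute emission limits, the short Python sketch below applies the statutory percentages to the revised 1990 baseline of 431 MMTCO2e discussed in the next paragraph. The calculation is illustrative only; it is not part of any statute or CARB tool.

```python
# A small illustrative calculation (not part of any statute or CARB tool):
# it applies the SB-32 and EO S-3-05 percentage reductions to the revised
# 1990 baseline of 431 MMTCO2e quoted in the following paragraph.

BASELINE_1990_MMTCO2E = 431.0

def target(baseline: float, percent_below: float) -> float:
    """Emissions target expressed as a given percentage below the baseline."""
    return baseline * (1.0 - percent_below / 100.0)

print(f"2030 target (40% below 1990): {target(BASELINE_1990_MMTCO2E, 40):.1f} MMTCO2e")
print(f"2050 target (80% below 1990): {target(BASELINE_1990_MMTCO2E, 80):.1f} MMTCO2e")
# 258.6 and 86.2 MMTCO2e, matching the figures given in the text
```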
The 1990 emissions limit was initially set at 427 million metric tons of carbon dioxide equivalent (MMTCO2e), but was revised in 2014 to 431 MMTCO2e based on updated scientific reporting. In 2014, California emitted a total of 441.5 MMTCO2e, a reduction of over 1.5 MMTCO2e since 2012. It is believed that California will continue to reduce GHG emissions and meet the 2020 target. The 2030 reduction goal of 40% below 1990 levels equates to a target emissions rate of 258.6 MMTCO2e by 2030. The long-term emission reduction goal set forth in EO S-3-05, to reach 80% below 1990 levels by 2050, equates to a target emissions rate of 86.2 MMTCO2e by 2050.

Related regulations

Assembly Bill (AB) 197
AB-197 was signed into legislation by Governor Jerry Brown on the same day as SB-32, September 8, 2016. AB-197 is directly related to SB-32 in that AB-197 contains language stating it is only operative if SB-32 is enacted and becomes law on or before January 1, 2017. The provisions of AB-197 are intended to provide more legislative oversight of CARB by adding two new legislatively appointed non-voting members to the CARB Board, increasing the Legislature's role in the Board's decisions. Additionally, AB-197 limits the term length of CARB Board members to six years. AB-197 also requires that CARB "protect the state's most impacted and disadvantaged communities … [and] consider the social costs of the emissions of greenhouse gases" in preparing plans to meet GHG reduction goals.

Executive Order B-30-15
Executive Order (EO) B-30-15 was signed by Governor Jerry Brown in April 2015 and set an executive greenhouse gas emissions target for 2030 of 40% below 1990 levels.

Executive Order S-3-05
EO S-3-05 was signed by Governor Arnold Schwarzenegger in June 2005 and set an executive greenhouse gas emissions target for 2050 of 80% below 1990 levels. While state legislation has not yet been passed regarding a 2050 statewide emissions target, this Executive Order holds some weight. See Executive Order for more details.

Senate bill requirements
SB-32 requires CARB to reduce greenhouse gas emissions to 40% below the 1990 levels by 2030. The bill gives CARB the authority to adopt regulations in order to achieve the maximum technologically feasible and most cost-effective reductions in greenhouse gas emissions. CARB is also required to meet these goals in a way that benefits the state’s most disadvantaged communities, as they are “disproportionately impacted” by the effects of climate change, such as drought and flooding.
SB-32 does not state how California will or should reach emission reduction targets, but rather leaves it up to CARB to adopt rules and regulations “in an open public process” to “achieve the maximum, technologically feasible, and cost-effective greenhouse gas emissions reductions.” Under AB-32, CARB is required to publish an Update to the Scoping Plan every five years, detailing how CARB plans to meet emission reduction targets. The Second Update to the Scoping Plan is currently being prepared and will be published in 2017. The Second Update to the Scoping Plan is required to incorporate 2030 target year goals.

Environmental justice
Environmental justice is an important aspect of SB-32. SB-32 emphasizes the need for protecting the state’s most disadvantaged communities. The concerns brought up in the bill are that those communities are affected first and most often by the negative impacts of climate change, such as drought, heat, and flooding.
SB-32 also emphasizes that disadvantaged communities are disproportionately affected in terms of health. While SB-32 does not specifically lay out a plan for addressing the environmental justice issues that the authors wrote into the bill, the companion bill AB-197 does. AB-197 requires the formation of a committee, the Joint Legislative Committee on Climate Change Policies (JLCCCP), which will be responsible, among other duties, for addressing and prioritizing the needs of disadvantaged communities in California.

Fate of Cap-and-Trade under SB-32 and AB-197
The California Cap-and-Trade program was created by CARB as a market mechanism to reach the GHG emission reduction targets established in AB-32. There currently is a Cap-and-Trade program in California, though it is not directly required under SB-32, which simply establishes a clear emissions reduction goal. In preparation for the Second Update to the Scoping Plan, CARB was presented with a study out of the University of Southern California and the University of California at Berkeley that found a tight correlation between the locations of polluting industries and of low-income communities. The environmental justice concerns associated with the cap-and-trade program could prompt CARB to consider alternatives to the cap-and-trade market mechanism in the upcoming Scoping Plan. Additionally, as part of AB-197, reports of emissions inventories for GHGs, criteria pollutants, and toxic air contaminants are required to be made public and updated at least once a year.

Criticisms
Critics of SB-32 note that CARB has too much unchecked power in regulating California, and that the legislative branch should have more involvement. However, the passage of AB-197 assuaged many of these critics. Other critics of SB-32 argue that the cap-and-trade program is actually a tax, levied without a two-thirds vote of the Legislature. See Global Warming Solutions Act of 2006 for a discussion of the current legal challenges facing the cap-and-trade program. Still further critics assert that the benefits of the CARB-favored cap-and-trade program are not shared equally by all Californians. One preliminary report asserts that polluters who can afford to are continuing to pollute, buying their way out of making actual reductions, and those polluters tend to be located in or near low-income communities that are frequently also communities of color, where their continued emissions of particulate matter and the like are degrading residents’ health.

See also
Climate Change in California
Climate Change
Kyoto Protocol
Paris Agreement

== References ==
carbon rift
Carbon rift is a theory attributing the input and output of carbon into the environment to human capitalistic systems. This is a derivative of Karl Marx's concept of metabolic rift. In practical terms, increased commodity production demands that greater levels of carbon dioxide (or CO2) be emitted into the biosphere via fossil fuel consumption. Carbon rift theory states that this ultimately disrupts the natural carbon cycle and that this "rift" has adverse effects on nearly every aspect of life. Many of the specifics regarding how this metabolic carbon rift interacts with capitalism are proposed by Brett Clark and Richard York in a 2005 article titled "Carbon Metabolism: Global capitalism, climate change, and the biospheric rift" in the journal Theory and Society. Researchers such as Jean P. Sapinski of the University of Oregon claim that, despite increased interest in closing the carbon rift, it is projected that as long as capitalism continues, there is little hope of reducing the rift.Both deforestation and the emission of greenhouse gases have been linked to increased atmospheric CO2 levels. Carbon rift theory states that these are the result of human production through capitalistic systems. There are proposed solutions to climate change such as geoengineering proposed in the December 2015 Paris Agreement. However, some argue that the capitalist mode of production is at fault for the emission of greenhouse gas and that solutions must be found to this issue before climate change itself can be addressed.Carbon rift theory shouldn't be confused with alternative explanations for climate change, which attribute the causes of the climate change to factors independent of human activity. Such explanations include the Chaotic Solar System Theory and that increased water vapor is responsible for climate change. Capitalism and human activity are not mutually exclusive explanations for climate change, because capitalism is a form of organization of human societies. Summary Greenhouse gas emissions Carbon rift is a result of CO2 gas being released into the environment by human sources, with the theory focusing specifically on capitalistic ones. In 2014, fossil fuel consumption resulted in nearly 36 billion metric tons of CO2 finding its way into natural sinks such as the atmosphere, land, and oceans. This transfer of carbon from the burning of fossil fuels into the biosphere is the primary human-driven cause of greenhouse gas emissions and is closely related to the unchecked behavior of capitalism. Deforestation Another contributing factor to carbon rift is the continual deforestation of the Earth's forests. In doing so, humankind is not only releasing carbon into the biosphere but removing one of the primary ways that carbon is naturally re-absorbed into the carbon cycle. Deforestation can both be tied to having large effects on greenhouse gas emissions (specifically, carbon dioxide) and to capitalism's continual disregard for its use of the truly limited resource represented by the forests. Thus, we have a tie between capitalism, deforestation, and carbon. This is the metabolic pathway defined by carbon rift. Negative effect on humanity As the carbon rift continues to grow, the ecosystems of the biosphere continue to experience detrimental effects. One of the readily observable examples is the acidification of the world's oceans. This occurs when carbon dioxide is absorbed by seawater, lowering its pH. 
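The link between dissolved carbon dioxide and acidity can be quantified with the definition pH = -log10[H+]. The short Python sketch below converts a given percentage rise in hydrogen-ion concentration into the corresponding pH drop; the pre-industrial surface-ocean pH of about 8.2 used here is an assumed reference value, and the snippet is illustrative rather than an ocean-chemistry model.

```python
import math

# Illustrative only, not an ocean-chemistry model: it converts a fractional
# rise in hydrogen-ion concentration into the corresponding pH change using
# pH = -log10[H+]. The pre-industrial surface-ocean pH of ~8.2 is an assumed
# reference value for the example.

def ph_change(h_ion_increase_fraction: float) -> float:
    """pH change implied by a fractional increase in [H+]."""
    return -math.log10(1.0 + h_ion_increase_fraction)

pre_industrial_ph = 8.2            # assumed reference value
delta = ph_change(0.30)            # ~ -0.11 pH units for a 30% rise in [H+]
print(f"pH shift: {delta:+.2f} -> about {pre_industrial_ph + delta:.2f} today")
```

On this arithmetic, the roughly 30% rise in hydrogen-ion concentration cited below for the period since the Industrial Revolution corresponds to a pH drop of about 0.11 units.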
Since the start of the Industrial Revolution, which Marx explicitly ties to capitalism, oceans have experienced a 30% increase in acidity. This acidification, and the resulting impairment of calcification in marine organisms, is in part responsible for a decline of fishing as an industry and viable food source. The enlarging carbon rift could result in poorer conditions for human society over time.

Political and economic implications
Carbon rift plays into a larger discussion of climate change caused by humans, a topic with stark political division. In the United States, the right end of the political spectrum tends to either deny or downplay climate change or attribute it to non-human causes, while people on the left stress the dangerous effects it has on the planet and society. While the theory of carbon rift is not particularly well known, these political divisions transfer to opinions on carbon rift because the theory operates under the belief that reliance on capitalist modes of production is the cause of increased carbon dioxide emissions.

Geoengineering
The small amount of political and economic analysis that has been done on carbon rift discusses the theory’s relation to geoengineering. While geoengineering is still in the development stage as both a topic and a solution to climate change, the December 2015 Paris Agreement highlighted “negative emissions technologies”. These technologies aim to either “remove carbon dioxide from the atmosphere” or “reduce the amount of solar radiation that hits the earth’s surface”.
Some scientists and advocacy groups warn that geoengineering will have dangerous, irreversible effects on human society. Furthermore, there is no way to fully test the accuracy of these technologies before launch, making the risk even greater. The 2013 film Snowpiercer offers a grim, politicized portrayal of the possible negative effects of climate engineering. However, other researchers support the development of such technologies, as they believe their necessity is inevitable. These researchers claim that the climate of capitalistic growth will not falter and greenhouse gas emissions will continue to rise. Critics of geoengineering emphasize that development of such technologies does not address the cause of carbon rift. Jean Sapinski from the University of Oregon defines the root cause as the “capitalist mode of production and the growth imperative it entails”. The extent of carbon rift relates directly to the dominant economic system and the political institutions that reinforce that system. Essentially, those who find fault in the capitalist system are more likely to believe carbon rift cannot be treated effectively without tackling capitalism first.

Counterarguments and opposition
Carbon rift theory, as a subtopic of both Marxist metabolic theory and climate change, has inherited dissenting viewpoints from both its parent topics. Detractors claim exactly the opposite of carbon rift theory: that human production does not have an appreciable effect on the carbon emissions in the biosphere. Since carbon rift theory has not yet made it into the mainstream lexicon, it is not often attacked directly by its detractors, but its concepts are. A notable individual who believes that climate change and human carbon emissions are unrelated is Patrick Moore, of Greenpeace fame.
Other theories that explain the growing carbon rift (but exclude capitalism as a contributing factor) are the Chaotic Solar System theory, the claim that carbon is wrongly blamed for the greenhouse effects of water vapor, and the claim that the sun is causing global warming. These are collectively referred to as non-consensus views, and lack reliable scientific evidence.

See also
Carbon cycle
Eco-capitalism
Industrial metabolism
Metabolic rift
Nature-culture divide

Further reading
Karl Marx and the Metabolic Rift Theory

== References ==
santos limited
Santos Ltd. (South Australia Northern Territory Oil Search) is an Australian oil and gas exploration and production company, with its headquarters in Adelaide, South Australia. It owns liquefied natural gas (LNG), pipeline gas, and oil assets. It is the biggest supplier of natural gas in Australia, with its plants in the Cooper Basin in South Australia and South West Queensland supplying the eastern states of Australia. Its operations also extend to the seas off Western Australia and the Northern Territory.
The company has been criticised by environmentalists and others for its high level of greenhouse gas emissions, its lobbying of political parties, and various incidents causing contamination. Santos provides sponsorship of several arts festivals and bodies, charities, and the University of Adelaide's Australian School of Petroleum.

History
Santos was incorporated on 18 March 1954, with its name an acronym of South Australia Northern Territory Oil Search. Its core business was initially built on gas discoveries in the Cooper Basin, South Australia, with the discovery and development of the Gidgealpa 2 well in 1963 and Moomba 1 in 1966. It signed supply contracts with the South Australian Gas Company, the Electricity Trust of South Australia and the Australian Gas Light Company, commencing supply in 1969.
After oil was discovered at Tirrawarra, near Moomba, South Australia, in the early 1970s, the company developed its liquid supply operations, which included a plant at Moomba and a fractionation and loading facility at Port Bonython.
In the 1990s, the company expanded, acquiring other companies and developing its operations both onshore and offshore in Australia, Indonesia, Malaysia, Vietnam and Papua New Guinea. Also in this decade it acquired interests in petroleum in the United States and United Kingdom, as well as operations in the Timor Sea and Western Australia. The Ballera gas plant in South West Queensland was established in 1991 and upgraded in 1997.
In 2015 Santos began producing LNG, shipping it to South Korea. In August 2018 Santos announced the acquisition of Australian oil and gas company Quadrant Energy for $2.15 billion. As part of the deal, Santos obtained Quadrant's 80% stake in Dorado in the Bedout Basin in northern Western Australia.
In May 2020 Santos completed its acquisition of ConocoPhillips' northern Australia and Timor-Leste assets for US$1.265 billion, as well as a contingent payment of US$200 million, which gave Santos control of ageing offshore assets in Bayu-Undan, subsea assets in the Timor Sea, and the onshore gas plant Darwin LNG (DLNG), which included the Bayu-Undan to Darwin Pipeline. At the completion of the deal with ConocoPhillips, the interest of Santos in these assets was increased to 68.4%. The sale was contingent on a final investment decision on the future Barossa project, which, once concluded, increased Santos' interest in that project to 62.5%.

Description and governance
Santos is one of Australia's domestic gas and oil producers, supplying sales gas to all mainland Australian states and territories, ethane to Sydney, and oil and liquids to domestic and international customers. It is the biggest supplier of natural gas in Australia, Australia's second-largest independent producer of oil and natural gas, and was slated to become the world's biggest liquefied natural gas (LNG) exporter by 2019. In the 2020 Forbes Global 2000, Santos was ranked as the 1583rd-largest public company in the world.
Santos has its headquarters at 60 Flinders Street, Adelaide.
It also has offices in Brisbane, Sydney, Perth and Jakarta. Since 1 February 2016, and as of June 2022, the company's CEO has been Kevin Gallagher. He was preceded by David Knox.

Operations
The South Australian and Queensland gas reserves are the main sources of natural gas for the eastern states of Australia. Santos is the primary venture partner and operator of natural gas processing facilities at Moomba in SA and Ballera in Queensland, and of pipelines connecting those facilities with Adelaide, Sydney, Melbourne, Brisbane, Rockhampton and Mount Isa.

Gas and LNG
Santos has made significant discoveries in the Browse Basin, off the northwest of Western Australia. On 22 August 2014 the company announced a major gas condensate discovery at the Lasseter-1 exploration well in WA-274-P in the basin, in which Santos held a 30% interest together with Chevron (50%) and Inpex (20%). It was the second major discovery by the company in the area in two years.
On 7 September 2017 Santos pledged to divert 30 petajoules of gas from the Gladstone LNG plant slated for export into Australia's east coast market in 2018 and 2019, as part of efforts to avert government-imposed restrictions on gas exports to solve local gas shortages. Because of shortages in its own supply of gas for export, Santos relies on purchasing gas from third parties to supply its overseas contracts.
Santos has an interest in the Darwin LNG project, which was the first liquefied natural gas project in the Northern Territory and the second in Australia. It has been supplied by the Bayu-Undan field, which is anticipated to be exhausted during the 2020s, hence the intention to develop the Barossa project to replace the dwindling reserves.

Barossa project
The Barossa project is a proposed gas field in the Timor Sea, intended to take over from the Bayu-Undan field after its reserves are exhausted, supplying LNG to the Darwin facility via a new pipeline which, for part of its length, runs parallel to the existing Bayu-Undan to Darwin Pipeline. Condensate oil will also be extracted. It is situated around 300 km (190 mi) north of Darwin, in Australian waters. Worth A$4.7 billion, the project was signed off in 2021, with gas production expected to commence in 2025. The project is expected to create about 600 jobs during construction and 350 ongoing jobs in Darwin over the following 20 years.
The project has been criticised for its future carbon emissions. If developed, Barossa would become the most carbon-intensive gas development in Australia. When the project was purchased from ConocoPhillips in 2020, it was projected to produce 1.5 tonnes of CO2 for every tonne of LNG. A 2021 report using the Darwin LNG project as a case study suggested that emissions could be greatly reduced by the use of solar power from Sun Cable's Australia-Asia Power Link, but the Institute for Energy Economics and Financial Analysis (IEEFA) described the project as an “emissions factory with a gas by-product”, saying that even if it employed carbon capture and storage, the project would continue to release financially risky carbon dioxide emissions at the site, onshore and across the whole supply chain.
A March 2022 legal challenge by leaders of the Jikilaruwu Tiwi Islands clan targeted the South Korean state-owned Export-Import Bank of Korea and the Korea Trade Insurance Corporation, which were planning to lend Santos approximately A$950m (£530m).
They hoped to prevent Santos from building the gas pipeline near Cape Fourcroy, a habitat for many marine species and a place where many Aboriginal people hunt, live, and camp. However, the case failed in the Seoul District Court.
In June 2022 traditional owners of the Tiwi Islands filed a lawsuit against Santos and the federal government, who they said had not properly consulted them. In the lawsuit Dennis Tipakalippa (senior lawman of the Munipi Clan) also argued that NOPSEMA, the federal offshore gas regulator, should not have approved Santos' plans to drill the Barossa gas field due to Santos' inadequate consultation. The traditional owners are concerned about the effect on the nesting areas of flatback and olive ridley turtles, which provide one of the Aboriginal people's traditional food sources. Four federal government marine parks, including Ashmore Reef, are also in the vicinity. Santos has submitted an environmental impact plan, which includes the potential impact of an oil spill and its plans for cleanup should one occur. In September 2022 Judge Mordecai Bromberg dismissed Santos' environmental plan, thus invalidating its authorisation for drilling. As a result Santos had to disconnect its drilling rig from the sea north of Melville Island and leave the Barossa field by 6 October 2022.

Financial results
Santos' production for 2008 was 54.4 million barrels (8,650,000 m3) of oil equivalent. Earnings before interest, taxes, depreciation, amortisation and exploration expenses for the period were A$2.8 billion, representing an after-tax profit of A$1.65 billion. On 22 August 2014 the company said its oil production was at its highest level in six years. For the first half of 2014, Santos recorded sales revenue of $1.8 billion, an increase of 20% on the comparable period the previous year. Sales volumes rose by 5% to 28.9 million barrels of oil equivalent. As a result of the company writing off its investment in a coal seam gas project in Indonesia, the 2014 first-half profit was down 24% at $206 million.
In 2015, Santos' financial troubles became more evident as the share price crashed to one third of its value from the previous year. It hit a 12-year low and has stayed low since. This occurred because of mounting debt and an oil price slump. CEO David Knox was forced to leave, with chairman Peter Coates stepping into the role and leading a strategic review of the gas company. Options including a partial asset sale and even a takeover were the subject of speculation. "No options will be ruled out from consideration, but neither is any particular option a preferred course at this time," Coates said.

Issues

Greenhouse gas emissions
In 2020, Santos was named on a list of Australia's 65 worst greenhouse gas emitting companies. Following additional pressure from ethical investors, Santos announced a goal to reduce greenhouse gas emissions to achieve net zero emissions by 2040 using a combination of carbon capture and storage, renewable energy and offsetting through tree planting programs.

Lobbying and political donations
Santos has engaged Adelaide-based consultancy Bespoke Approach to lobby the Australian Government and the state governments of New South Wales and Queensland. Other lobbyists that have represented Santos include Kreab Gavin Anderson (Australia) Ltd, Craig Emerson Economics and Australian Public Affairs.
In the financial year 2012–13, Santos Ltd gave donations directly to the Labor, Liberal, and National political parties at state and federal levels. Donations are tabled below.
Incidents Moomba explosions, South Australia On 1 January 2004 an explosion occurred at Santos' Moomba processing facility. The blast was traced to the Liquids Recovery Plant (LRP), where an inlet manifold and a related flange weld both failed after corrosion by mercury. Mercury was released along with a cloud of flammable gases including methane, ethane, propane and butane. Workers saw the cloud and raised the alarm, shutting down the plant and evacuating to designated safety points. Some workers allegedly did not hear the emergency alarms. The gas cloud ignited on contact with a heating unit 150 metres away, and an explosion followed. The plant was seriously damaged.Moomba workers who sought to remain anonymous told The Australian newspaper on 5 January that the company was running a "cowboy" operation, and that it was luck, not management that had prevented any loss of life. They also said that the emergency muster area was too close to the plant in the event of a major tank explosion.Gas supplies to South Australia and New South Wales were interrupted, leading to down-time in the manufacturing sector and short-term rationing measures in both states while repairs were made. Santos spent $40 million on remedial action following the incident. In 2011, the South Australian industrial relations court ruled that 13 employees had been placed at risk due to critical safety shortcomings. These included an inadequate risk assessment which failed to identify the likelihood of plant failing due to liquid metal rendering it brittle. The company pleaded guilty to breaching the Occupational Health Safety and Welfare Act after a SafeWork prosecution and was fined $84,000. Sidoarjo mud flow, Indonesia In May 2006, the Sidoarjo mud flow disaster occurred in East Java, Indonesia. Controversy exists surrounding the probable cause of the disaster which has displaced approximately 10,000 people and covered villages, farms and industrial areas with mud. The eruption is ongoing, though since 2011 the rate of flow has reduced.Santos had stated in June 2006 that it maintained "appropriate insurance coverage for these types of occurrences". Port Bonython groundwater contamination, South Australia In May 2008, groundwater contamination was reported to the Environment Protection Authority (EPA) following detection at Santos' Port Bonython site, Spencer Gulf, South Australia. Hydrocarbons were found floating on and in the groundwater. One hundred and fifty inspection wells were later established, and a 450-metre-long (1,480 ft) cement bentonite wall was constructed 'to stop the further spread of contamination off-site' including to the marine environment. In May 2012, Santos reported declining rates of hydrocarbon recovery from groundwater extraction wells and claimed that their remediation efforts were working. Pilliga CSG wastewater spill, New South Wales In 2011, a 10,000-litre spill of untreated coal seam gas water occurred impacting native vegetation and soil in the Pilliga forest. Coal seam gas extraction produces water that can contain lead, mercury, various salts and other heavy metals. Rehabilitation has been trying to restore this site to remediate elevated contamination in the soil. Jackson oil spill, Queensland In May 2013, an uncontrolled oil spill was reported in Santos' Zeus field near Jackson in Queensland's remote south-west. The flow lasted 'almost a week' before international experts were able to contain it. The rate of flow was estimated at 50,000 litres per day. 
Uranium contamination of Narrabri aquifers, New South Wales
In 2013, groundwater monitoring detected elevated levels of salinity and heavy metals near Santos' Tintsfield ponds in the Pilliga forest. It was also reported that uranium at 20 times the safe drinking water level was detected at the Bibblewindi ponds. A NSW Government investigation into the incident determined the leak was "small, localised and contained" and that drinking water sources and stock and domestic water sources were not impacted nor were they at risk. The investigation also found that the uranium detected was not from the pond's water, but from naturally occurring uranium in the surrounding soil that was mobilised by the leaking pond.
Climate activism
In March 2021, four Extinction Rebellion protesters glued themselves to the road outside the Santos building in Adelaide, and two scaled the building, painted messages on it, set off flares and glued themselves to the building. Police and firefighters had to remove them, and the protesters, who included three women aged over 64, were charged. They were protesting against fracking, and called upon Santos to invest more in renewable energy. Tiwi Islanders won a landmark case in September 2022 against drilling for gas by Santos in their traditional waters, after complaining that the company failed to consult them about the impact of the project. Judge Mordecai Bromberg set aside approval for the drilling, part of Santos' $4.7bn Barossa project, and gave Santos two weeks to shut down and remove its rig from the sea north of Melville Island. The judge said the offshore oil and gas regulator NOPSEMA had failed to assess whether Santos had consulted with everyone affected by the proposed drilling, as required by the law.
Pipeline explosions
In January 2023, a gas pipeline exploded due to material fatigue. A similar event occurred in 2020. Both events were reported to the Government of South Australia, but neither was made public until they were discussed at the company's 2023 AGM and subsequently reported on by ABC News.
Sponsorship
Santos sponsors many community activities, events, institutions and projects in jurisdictions where they operate commercially. In October 2014, The Advertiser claimed that Santos spends $10 million annually on South Australian community groups, events and institutions. Figures published in Santos' 2014 Sustainability Report state that $7,487,731 was spent on 'Community investment' in South Australia that financial year and $3,108,057 in Queensland. Other jurisdictions received between $5,000 (South Korea) and $775,255 (Western Australia), and the total 'community investment' spent across all regions during 2013–14 was $13,217,617.
South Australia
Recipients of financial support from Santos in South Australia have included:
Adelaide Symphony Orchestra
Art Gallery of South Australia
Adelaide Botanic Garden
Come Out Festival (2009) – People's Puppets Project, Whyalla
Committee for Adelaide – founding member
OzAsia Festival
RiAus – $5 million AUD foundation partner
Santos Conservation Centre at the Adelaide Zoo
Santos Stadium athletics venue
Santos Tour Down Under UCI World Tour cycling event
The Smith Family
University of Adelaide – Australian School of Petroleum – $25 million AUD over 10 years
Queensland
Recipients of financial support from Santos in Queensland have included:
Queensland Art Gallery – $1.5 million AUD over 5 years
Santos GLNG Food & Fire Fest
Queensland Police Service's Stay on Track Outback road safety campaign (2012–2014)
ANU divestment
In October 2014, the Australian National University (ANU) sold its shares in Santos and several other companies in the nation's most reported case of fossil fuel industry divestment. Santos responded by claiming that gas is necessary in the state's future energy mix, and The Advertiser reported on the company's economic value to South Australia. It was reported that Santos employed 3500 people nationally as well as thousands of contractors, and had a $13 billion market value. Politicians expressing their support for the company included the Prime Minister Tony Abbott and federal MPs Jamie Briggs, Christopher Pyne, James McGrath, Greg Hunt and Treasurer Joe Hockey. Several senior state ministers in South Australia and Queensland also spoke out against the decision to divest, including South Australian Treasurer Tom Koutsantonis. Former Liberal party leaders John Hewson and Malcolm Fraser both supported ANU's right to choose how and where to invest its money. ANU chancellor Gareth Evans said that the university had not described Santos as a "socially irresponsible" company, and in a letter to Santos CEO David Knox, Evans said the university regretted any embarrassment suffered by Santos over the decision to divest.
Opposition to Santos sponsorship
In December 2014, photographs showing Queensland police vehicles featuring Santos company logos were criticised by anti-coal seam gas group Lock the Gate Alliance. Santos contributed approximately $40,000 to the road safety program "Stay on Track Outback", which was described as a valuable addition to a road safety campaign. Queensland Police commissioner Ian Stewart described the vehicles as "PR vehicles that we use at shows, we use at expos, all of those sorts of things just as any PR machine would be used by a company or another government organisation." Online activists referred to the sponsorship as a "conflict of interest" and "a bloody disgrace", with Stop Brisbane Coal Trains spokesman John Gordon calling for the logos to be removed. Santos responded by stating that the company was "proud to support a program that promotes safe driving and is saving lives in outback Australia." Queensland's Police Minister Jack Dempsey defended the program and its sponsors, stating "The Queensland Police Service's 'Stay on Track Outback' is a road safety program aimed at keeping communities safer and reducing road trauma in regional Queensland.
It has been in place since 2012 thanks to support from a number of sponsors."In 2015, the Frack Free NT Alliance (a diverse group of opposing shale gas in the Northern Territory) called for the Darwin Festival to reject Santos sponsorship due to the company's involvement in shale gas exploration and development in the Northern Territory. Dayne Pratzky, a.k.a. Frackman, supported the call. On 19 October 2022, Santos announced they would not renew their 20 year funding agreement with the Darwin Festival.In 2021 and 2022, Extinction Rebellion held protests at the Adelaide Botanic Garden to denounce Santos' sponsorship of the Santos Museum of Economic Botany. Extinction Rebellion spokesperson Ben Brooker described the arrangement, slated to last until 2029, as "a terrible stain on this treasured institution" and one that "goes absolutely against the letter and the spirit of the Garden's own charter, which of course has biodiversity and conservation at its heart", adding that "we just do not feel that those values are compatible with taking money from an incredibly destructive fossil fuel company". See also Moomba Adelaide Pipeline System References Further reading Weidenbach, K. (2014): Blue flames, black gold: The story of Santos. Santos Pty Ltc. ISBN 9781921037399 External links Official website
Peatland
A peatland is a type of wetland whose soils consist of organic matter from decaying plants, forming layers of peat. Peatlands arise because of incomplete decomposition of organic matter, usually litter from vegetation, due to water-logging and subsequent anoxia. Like coral reefs, peatlands are unusual landforms that derive mostly from biological rather than physical processes, and can take on characteristic shapes and surface patterning. The formation of peatlands is primarily controlled by climatic conditions such as precipitation and temperature, although terrain relief is a major factor as waterlogging occurs more easily on flatter ground and in basins. Peat formation typically initiates as a paludification of a mineral soil forests, terrestrialisation of lakes, or primary peat formation on bare soils on previously glaciated areas. A peatland that is actively forming peat is called a mire. All types of mires share the common characteristic of being saturated with water, at least seasonally with actively forming peat, while having their own ecosystem.Peatlands are the largest natural carbon store on land. Covering around 3 million km2 globally, they sequester 0.37 gigatons (Gt) of carbon dioxide (CO2) a year. Peat soils store over 600Gt of carbon, more than the carbon stored in all other vegetation types, including forests. In their natural state, peatlands provide a range of ecosystem services, including minimising flood risk and erosion, purifying water and regulating climate. Peatlands are under threat by commercial peat harvesting, drainage and conversion for agriculture (notably palm oil in the tropics) and fires, which are predicted to become more frequent with climate change. The destruction of peatlands results in release of stored greenhouse gases into the atmosphere, further exacerbating climate change. Types For botanists and ecologists, the term peatland is a general term for any terrain dominated by peat to a depth of at least 30 cm (12 in), even if it has been completely drained (i.e., a peatland can be dry). A peatland that is still capable of forming new peat is called a mire, while drained and converted peatlands might still have a peat layer but are not considered mires as the formation of new peat has ceased.There are two types of mire: bog and fen. A bog is a mire that, due to its raised location relative to the surrounding landscape, obtains all its water solely from precipitation (ombrotrophic). A fen is located on a slope, flat, or in a depression and gets most of its water from the surrounding mineral soil or from groundwater (minerotrophic). Thus, while a bog is always acidic and nutrient-poor, a fen may be slightly acidic, neutral, or alkaline, and either nutrient-poor or nutrient-rich. All mires are initially fens when the peat starts to form, and may turn into bogs once the height of the peat layer reaches above the surrounding land. A quagmire is a floating (quaking) mire, bog, or any peatland being in a stage of hydrosere or hydrarch (hydroseral) succession, resulting in pond-filling yields underfoot. Ombrotrophic types of quagmire may be called quaking bog (quivering bog). Minerotrophic types can be named with the term quagfen.Some swamps can also be peatlands (e.g.: peat swamp forest), while marshes are generally not considered to be peatlands. Swamps are characterized by their forest canopy or the presence of other tall and dense vegetation like papyrus. Like fens, swamps are typically of higher pH level and nutrient availability than bogs. 
Some bogs and fens can support limited shrub or tree growth on hummocks. A marsh is a type of wetland within which vegetation is rooted in mineral soil.
Global distribution
Peatlands are found around the globe, although they are at their greatest extent at high latitudes in the Northern Hemisphere. Peatlands are estimated to cover around 3% of the globe's surface, although estimating the extent of their cover worldwide is difficult due to the varying accuracy and methodologies of land surveys from many countries. Mires occur wherever conditions are right for peat accumulation: largely where organic matter is constantly waterlogged. Hence the distribution of mires is dependent on topography, climate, parent material, biota, and time. The type of mire – bog, fen, marsh or swamp – also depends on each of these factors. The largest accumulation of mires constitutes around 64% of global peatlands and is found in the temperate, boreal and subarctic zones of the Northern Hemisphere. Mires are usually shallow in polar regions because of the slow rate of accumulation of dead organic matter, and often contain permafrost and palsas. Very large swathes of Canada, northern Europe and northern Russia are covered by boreal mires. In temperate zones mires are typically more scattered due to historical drainage and peat extraction, but can cover large areas. One example is blanket bog, which forms where precipitation is very high, i.e. in maritime climates inland near the coasts of the north-east and south Pacific, and the north-west and north-east Atlantic. In the sub-tropics, mires are rare and restricted to the wettest areas. Mires can be extensive in the tropics, typically underlying tropical rainforest (for example, in Kalimantan, the Congo Basin and Amazon Basin). Tropical peat formation is known to occur in coastal mangroves as well as in areas of high altitude. Tropical mires largely form where high precipitation is combined with poor conditions for drainage. Tropical mires account for around 11% of peatlands globally (more than half of which can be found in Southeast Asia), and are most commonly found at low altitudes, although they can also be found in mountainous regions, for example in South America, Africa and Papua New Guinea. In the early 21st century, the world's largest tropical mire was found in the Central Congo Basin, covering 145,500 km2 and storing on the order of 10^13 kg of carbon. The total area of mires has declined globally due to drainage for agriculture, forestry and peat harvesting. For example, more than 50% of the original European mire area – more than 300,000 km2 – has been lost. Some of the largest losses have been in Russia, Finland, the Netherlands, the United Kingdom, Poland and Belarus. A catalog of the peat research collection at the University of Minnesota Duluth provides references to research on worldwide peat and peatlands.
Biochemical processes
Peatlands have unusual chemistry that influences, among other things, their biota and water outflow. Peat has very high cation-exchange capacity due to its high organic matter content: cations such as Ca2+ are preferentially adsorbed onto the peat in exchange for H+ ions. Water passing through peat declines in nutrients and pH. Therefore, mires are typically nutrient-poor and acidic unless the inflow of groundwater (bringing in supplementary cations) is high. Generally, whenever the inputs of carbon into the soil from dead organic matter exceed the carbon outputs via organic matter decomposition, peat is formed.
This occurs due to the anoxic state of water-logged peat, which slows down decomposition. Peat-forming vegetation is typically also recalcitrant (poorly decomposing) due to high lignin and low nutrient content. Topographically, accumulating peat elevates the ground surface above the original topography. Mires can reach considerable heights above the underlying mineral soil or bedrock: peat depths of above 10m have been commonly recorded in temperate regions (many temperate and most boreal mires were removed by ice sheets in the last Ice Age), and above 25 m in tropical regions.[7] When the absolute decay rate of peat in the catotelm (the lower, water-saturated zone of the peat layer) matches the rate of input of new peat into the catotelm, the mire will stop growing in height.[8] Carbon storage and methanogenesis Despite accounting for just 3% of Earth's land surfaces, peatlands are collectively a major carbon store containing between 500 and 700 billion tonnes of carbon. Carbon stored within peatlands equates to over half the amount of carbon found in the atmosphere. Peatlands interact with the atmosphere primarily through the exchange of carbon dioxide, methane and nitrous oxide, and can be damaged by excess nitrogen from agriculture or rainwater. The sequestration of carbon dioxide takes place at the surface via the process of photosynthesis, while losses of carbon dioxide occur through living plants via autotrophic respiration and from the litter and peat via heterotrophic respiration. In their natural state, mires are a small atmospheric carbon dioxide sink through the photosynthesis of peat vegetation, which outweighs their release of greenhouse gases. On the other hand, most mires are generally net emitters of methane and nitrous oxide. Due to the continued CO2 sequestration over millennia, and because of the longer atmospheric lifespan of the CO2 molecules compared with methane and nitrous oxide, peatlands have had a net cooling effect on the atmosphere.The water table position of a peatland is the main control of its carbon release to the atmosphere. When the water table rises after a rainstorm, the peat and its microbes are submerged under water inhibiting access to oxygen, reducing CO2 release via respiration. Carbon dioxide release increases when the water table falls lower, such as during a drought, as this increases the availability of oxygen to the aerobic microbes thus accelerating peat decomposition. Levels of methane emissions also vary with the water table position and temperature. A water table near the peat surface gives the opportunity for anaerobic microorganisms to flourish. Methanogens are strictly anaerobic organisms and produce methane from organic matter in anoxic conditions below the water table level, while some of that methane is oxidised by methanotrophs above the water table level. Therefore, changes in water table level influence the size of these methane production and consumption zones. Increased soil temperatures also contribute to increased seasonal methane flux. A study in Alaska found that methane may vary by as much as 300% seasonally with wetter and warmer soil conditions due to climate change.Peatlands are important for studying past climate because they are sensitive to changes in the environment and can reveal levels of isotopes, pollutants, macrofossils, metals from the atmosphere, and pollen. For example, carbon-14 dating can reveal the age of the peat. 
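As a rough, illustrative cross-check of the figures quoted above (roughly 3 million km2 of peatland worldwide, more than 600 Gt of stored carbon, and an uptake of about 0.37 Gt of CO2 per year), the following short Python sketch converts them to per-square-metre values. The arithmetic is illustrative only; the inputs are simply the rounded estimates cited in this article, not new measurements.

```python
# Illustrative arithmetic only; input values are the rounded estimates
# quoted in this article, not new measurements.
PEATLAND_AREA_KM2 = 3.0e6        # ~3 million km2 of peatland worldwide
CARBON_STOCK_GT = 600.0          # >600 Gt of carbon stored in peat soils
CO2_UPTAKE_GT_PER_YR = 0.37      # ~0.37 Gt of CO2 sequestered per year

area_m2 = PEATLAND_AREA_KM2 * 1e6            # km2 -> m2
stock_g = CARBON_STOCK_GT * 1e15             # Gt -> grams
uptake_g_co2 = CO2_UPTAKE_GT_PER_YR * 1e15   # Gt CO2 -> grams CO2

carbon_per_m2_kg = stock_g / area_m2 / 1000            # average stored carbon density
uptake_c_per_m2_g = uptake_g_co2 * (12 / 44) / area_m2  # CO2 -> C, per m2

print(f"average carbon stock: ~{carbon_per_m2_kg:.0f} kg C per m2")
print(f"average uptake:       ~{uptake_c_per_m2_g:.0f} g C per m2 per year")
# -> roughly 200 kg C/m2 stored and ~34 g C/m2/yr taken up, which sits
#    within the Holocene accumulation range of 5.6-38 g C/m2/yr cited later.
```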
The dredging and destruction of a peatland will release the carbon dioxide that could reveal irreplaceable information about the past climatic conditions. Many kinds of microorganisms inhabit peatlands, due to the regular supply of water and abundance of peat forming vegetation. These microorganisms include but are not limited to methanogens, algae, bacteria, zoobenthos, of which sphagnum species are most abundant. Humic substances Peat contains a substantial amount of organic matter, where humic acid dominates. Humic materials are able to store very large amounts of water, making them an essential component in the peat environment, contributing to an increased amount of carbon storage due to the resulting anaerobic condition. If the peatland is dried from long-term cultivation and agricultural use, it will lower the water table and the increased aeration will subsequently release carbon. Upon extreme drying, the ecosystem can undergo a state shift, turning the mire into a barren land with lower biodiversity and richness. The formation of humic acid occurs during the biogeochemical degradation of vegetation debris, animal residue, and degraded segments. The loads of organic matter in the form of humic acid is a source of precursors of coal. Prematurely exposing the organic matter to the atmosphere promotes the conversion of organics to carbon dioxide to be released in the atmosphere. Use by humans Records of past human behaviour and environments can be contained within peatlands. These may take the form of human artefacts, or palaeoecological and geochemical records.Peatlands are used by humans in modern times for a range of purposes, the most dominant being agriculture and forestry, which accounts for around a quarter of global peatland area. This involves cutting drainage ditches to lower the water table with the intended purpose of enhancing the productivity of forest cover or for use as pasture or cropland. Agricultural uses for mires include the use of natural vegetation for hay crop or grazing, or the cultivation of crops on a modified surface. In addition, the commercial extraction of peat for energy production is widely practiced in Northern European countries, such as Russia, Sweden, Finland, Ireland and the Baltic states.Tropical peatlands comprise 0.25% of Earth's terrestrial land surface but store 3% of all soil and forest carbon stocks. The use of this land by humans, including draining and harvesting of tropical peat forests, results in the emission of large amounts of carbon dioxide into the atmosphere. In addition, fires occurring on peatland dried by the draining of peat bogs release even more carbon dioxide. The economic value of a tropical peatland was once derived from raw materials, such as wood, bark, resin, and latex, the extraction of which did not contribute to large carbon emissions. In Southeast Asia, peatlands are drained and cleared for human use for a variety of reasons, including the production of palm oil and timber for export in primarily developing nations. This releases stored carbon dioxide and preventing the system from sequestering carbon again. Tropical peatlands The global distribution of tropical peatlands is concentrated in Southeast Asia where agricultural use of peatlands has been increased in recent decades. Large areas of tropical peatland have been cleared and drained for the production of food and cash crops such as palm oil. 
Large-scale drainage of these plantations often results in subsidence, flooding, fire, and deterioration of soil quality. Small-scale encroachment, on the other hand, is linked to poverty and is so widespread that it also negatively impacts these peatlands. The biotic and abiotic factors controlling Southeast Asian peatlands are interdependent. Their soil, hydrology and morphology are created by the present vegetation through the accumulation of its own organic matter, building a favorable environment for this specific vegetation. This system is therefore vulnerable to changes in hydrology or vegetation cover. These peatlands are mostly located in developing regions with impoverished and rapidly growing populations. These lands have become targets for commercial logging, paper pulp production and conversion to plantations through clear-cutting, drainage and burning. Drainage of tropical peatlands alters the hydrology and increases their susceptibility to fire and soil erosion, as a consequence of changes in physical and chemical compositions. The change in soil strongly affects the sensitive vegetation, and forest die-off is common. The short-term effect is a decrease in biodiversity, but the long-term effect, since these encroachments are hard to reverse, is a loss of habitat. Poor knowledge about peatlands' sensitive hydrology and lack of nutrients often leads to failing plantations, resulting in increasing pressure on remaining peatlands.
Biology and peat characteristics
Tropical peatland vegetation varies with climate and location. Three broad types can be distinguished: the first is mangrove woodland, present in the littoral zones and deltas of salt water, followed inland by swamp forests. These forests occur on the margins of peatlands and have a palm-rich flora, with trees up to 70 m tall and 8 m in girth, accompanied by ferns and epiphytes. The third type, padang, from the Malay and Indonesian word for forest, consists of shrubs and tall thin trees and appears in the center of large peatlands. The diversity of woody species, such as trees and shrubs, is far greater in tropical peatlands than in peatlands of other types. Peat in the tropics is therefore dominated by woody material from trunks of trees and shrubs and contains little to none of the sphagnum moss that dominates in boreal peatlands. It is only partly decomposed, and the surface consists of a thick layer of leaf litter. Forestry in peatlands leads to drainage and rapid carbon losses, since it decreases inputs of organic matter and accelerates decomposition. In contrast to temperate wetlands, tropical peatlands are home to several species of fish. Many new, often endemic, species have been discovered, but many of them are considered threatened.
Greenhouse gases and fires
The tropical peatlands in Southeast Asia cover only 0.2% of Earth's land area, but their CO2 emissions are estimated to be 2 Gt per year, equal to 7% of global fossil fuel emissions. These emissions increase with drainage and burning of peatlands, and a severe fire can release up to 4,000 t of CO2/ha. Burning events in tropical peatlands are becoming more frequent due to large-scale drainage and land clearance, and in the past 10 years more than 2 million ha were burnt in Southeast Asia alone. These fires typically last for 1–3 months and release large amounts of CO2. Indonesia is one of the countries suffering from peatland fires, especially during years with ENSO-related drought, an increasing problem since 1982 as a result of developing land use and agriculture.
During the El Niño-event in 1997-1998 more than 24,400 km2 of peatland was lost to fires in Indonesia alone from which 10,000 km2 was burnt in Kalimantan and Sumatra. The output of CO2 was estimated to 0.81–2.57 Gt, equal to 13–40% of that year’s global output from fossil fuel burning. Indonesia is now considered the 3rd biggest contributor to global CO2 emissions, caused primarily by these fires. With a warming climate these burnings are expected to increase in intensity and number. This is a result of a dry climate together with an extensive rice farming project, called the Mega Rice Project, started in the 1990s, which converted 1 Mha of peatlands to rice paddies. Forest and land was cleared by burning and 4000 km of channels drained the area. Drought and acidification of the lands led to bad harvest and the project was abandoned in 1999. Similar projects in China have led to immense loss of tropical marshes and fens due to rice production.Drainage, which also increases the risk of burning, can cause additional emissions of CO2 by 30–100 t/ha/year if the water table is lowered by only 1 m. The draining of peatlands is likely the most important and long-lasting threat to peatlands globally, but is especially prevalent in the tropics.Peatlands release the greenhouse gas methane which has strong global warming potential. However, subtropical wetlands have shown high CO2 binding per mol of released methane, which is a function that counteracts global warming. Tropical peatlands are suggested to contain about 100 Gt carbon, corresponding to more than 50% of the carbon present as CO2 in the atmosphere. Accumulation rates of carbon during the last millennium were close to 40 g C/m2/yr. Northern peatlands Northern peatlands are associated with boreal and subarctic climates. Northern peatlands were mostly built up during the Holocene after the retreat of Pleistocene glaciers, but in contrast tropical peatlands are much older. Total northern peat carbon stocks are estimated to be 1055 Gt of carbon.Of all northern circumpolar countries, Russia has the largest area of peatlands and contains the largest peatland in the world, The Great Vasyugan Mire. Nakaikemi Wetland in southwest Honshu, Japan is more than 50,000 years old and has a depth of 45 m. The Philippi Peatland in Greece has probably one of the deepest peat layers with a depth of 190m. Impacts on global climate According to the IPCC Sixth Assessment Report, the conservation and restoration of wetlands and peatlands has large economic potential to mitigate greenhouse gas emissions, providing benefits for adaptation, mitigation, and biodiversity.Wetlands provide an environment where organic carbon is stored in living plants, dead plants and peat, as well as converted to carbon dioxide and methane. Three main factors give wetlands the ability to sequester and store carbon: high biological productivity, high water table and low decomposition rates. Suitable meteorological and hydrological conditions are necessary to provide an abundant water source for the wetland. Fully water-saturated wetland soils allow anaerobic conditions to manifest, storing carbon but releasing methane.Wetlands make up about 5-8% of Earth's terrestrial land surface but contain about 20-30% of the planet's 2500 Gt soil carbon stores. Peatlands contain the highest amounts of soil organic carbon of all wetland types. Wetlands can become sources of carbon, rather than sinks, as the decomposition occurring within the ecosystem emits methane. 
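Pulling together the per-hectare figures quoted in the tropical peatland sections above, a short back-of-the-envelope calculation shows how the 1997–1998 Indonesian fire emissions and ongoing drainage emissions scale up. All inputs are the article's rounded estimates; the arithmetic is illustrative only.

```python
# Illustrative arithmetic using the rounded figures quoted above.
burnt_area_ha = 24_400 * 100          # 24,400 km2 burnt in 1997-98 -> hectares
severe_fire_t_co2_per_ha = 4_000      # upper bound for a severe peat fire

max_release_gt = burnt_area_ha * severe_fire_t_co2_per_ha / 1e9
print(f"upper bound if every hectare burned severely: ~{max_release_gt:.1f} Gt CO2")
# ~9.8 Gt CO2, well above the estimated 0.81-2.57 Gt, implying the average
# release per hectare was far below the severe-fire maximum.

implied_t_per_ha = (0.81e9 / burnt_area_ha, 2.57e9 / burnt_area_ha)
print(f"implied average release: {implied_t_per_ha[0]:.0f}-{implied_t_per_ha[1]:.0f} t CO2/ha")

# Drainage alone: 30-100 t CO2/ha/yr over the ~1 Mha Mega Rice Project area.
drained_ha = 1e6
print(f"drainage emissions: {30*drained_ha/1e9:.2f}-{100*drained_ha/1e9:.2f} Gt CO2 per year")
```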
Natural peatlands do not always have a measurable cooling effect on the climate in a short time span as the cooling effects of sequestering carbon are offset by the emission of methane, which is a strong greenhouse gas. However, given the short "lifetime" of methane (12 years), it is often said that methane emissions are unimportant within 300 years compared to carbon sequestration in wetlands. Within that time frame or less, most wetlands become both net carbon and radiative sinks. Hence, peatlands do result in cooling of the Earth's climate over a longer time period as methane is oxidised quickly and removed from the atmosphere whereas atmospheric carbon dioxide is continuously absorbed. Throughout the Holocene (the past 12,000 years), peatlands have been persistent terrestrial carbon sinks and have had a net cooling effect, sequestering 5.6 to 38 grams of carbon per square metre per year. On average, it has been estimated that today northern peatlands sequester 20-30 grams of carbon per square meter per year.Peatlands insulate the permafrost in subarctic regions, thus delaying thawing during summer, as well as inducing the formation of permafrost. As the global climate continues to warm, wetlands could become major carbon sources as higher temperatures cause higher carbon dioxide emissions.Compared with untilled cropland, wetlands can sequester around two times the carbon. Carbon sequestration can occur in constructed wetlands as well as natural ones. Estimates of greenhouse gas fluxes from wetlands indicate that natural wetlands have lower fluxes, but man-made wetlands have a greater carbon sequestration capacity. The carbon sequestration abilities of wetlands can be improved through restoration and protection strategies, but it takes several decades for these restored ecosystems to become comparable in carbon storage to peatlands and other forms of natural wetlands. Drainage for agriculture and forestry The exchange of carbon between the peatlands and the atmosphere has been of current concern globally in the field of ecology and biogeochemical studies. The drainage of peatlands for agriculture and forestry has resulted in the emission of extensive greenhouse gases into the atmosphere, most notably carbon dioxide and methane. By allowing oxygen to enter the peat column within a mire, drainage disrupts the balance between peat accumulation and decomposition, and the subsequent oxidative degradation results in the release of carbon into the atmosphere. As such, drainage of mires for agriculture transforms them from net carbon sinks to net carbon emitters. Although the emission of methane from mires has been observed to decrease following drainage, the total magnitude of emissions from peatland drainage is often greater as rates of peat accumulation are low. Peatland carbon has been described as "irrecoverable" meaning that, if lost due to drainage, it could not be recovered within time scales relevant to climate mitigation.When undertaken in such a way that preserves the hydrological state of a mire, the anthropogenic use of mires' resources can avoid significant greenhouse gas emissions. However, continued drainage will result in increased release of carbon, contributing to global warming. As of 2016, it was estimated that drained peatlands account for around 10% of all greenhouse gas emissions from agriculture and forestry. Palm oil plantations Palm oil has increasingly become one of the world's largest crops. 
In comparison to alternatives, palm oil is considered to be among the most efficient sources of vegetable oil and biofuel, requiring only 0.26 hectares of land to produce 1 ton of oil. Palm oil has therefore become a popular cash crop in many low-income countries and has provided economic opportunities for communities. With palm oil as a leading export in countries such as Indonesia and Malaysia, many smallholders have found economic success in palm oil plantations. However, the land selected for plantations are typically substantial carbon stores that promote biodiverse ecosystems.Palm oil plantations have replaced much of the forested peatlands in Southeast Asia. Estimates now state that 12.9 Mha or about 47% of peatlands in Southeast Asia were deforested by 2006. In their natural state, peatlands are waterlogged with high water tables making for an inefficient soil. To create viable soil for plantation, the mires in tropical regions of Indonesia and Malaysia are drained and cleared. The peatland forests harvested for palm oil production serve as above- and below-ground carbon stores, containing at least 42,069 million metric tonnes (Mt) of soil carbon. Exploitation of this land raises many environmental concerns, namely increased greenhouse gas emissions, risk of fires, and a decrease in biodiversity. Greenhouse gas emissions for palm oil planted on peatlands is estimated to be between the equivalent of 12.4 (best case) to 76.6 t CO2/ha (worst case). Tropical peatland converted to palm oil plantation can remain a net source of carbon to the atmosphere after 12 years.In their natural state, peatlands are resistant to fire. Drainage of peatlands for palm oil plantations creates a dry layer of flammable peat. As peat is carbon dense, fires occurring in compromised peatlands release extreme amounts of both carbon dioxide and toxic smoke into the air. These fires add to greenhouse gas emissions while also causing thousands of deaths every year.Decreased biodiversity due to deforestation and drainage makes these ecosystem more vulnerable and less resilient to change. Homogenous ecosystems are at an increased risk to extreme climate conditions and are less likely to recover from fires. Fires Some peatlands are being dried out by climate change. Drainage of peatlands due to climatic factors may also increase the risk of fires, presenting further risk of carbon and methane to release into the atmosphere. Due to their naturally high moisture content, pristine mires have a generally low risk of fire ignition. The drying of this waterlogged state means that the carbon-dense vegetation becomes vulnerable to fire. In addition, due to the oxygen deficient nature of the vegetation, the peat fires can smolder beneath the surface causing incomplete combustion of the organic matter and resulting in extreme emissions events.In recent years, the occurrence of wildfires in peatlands has increased significantly worldwide particularly in the tropical regions. This can be attributed to a combination of drier weather and changes in land use which involve the drainage of water from the landscape. This resulting loss of biomass through combustion has led to significant emissions of greenhouse gasses both in tropical and boreal/temperate peatlands. Fire events are predicted to become more frequent with the warming and drying of the global climate. Management and rehabilitation The United Nations Convention of Biological Diversity highlights peatlands as key ecosystems to be conserved and protected. 
The convention requires governments at all levels to present action plans for the conservation and management of wetland environments. Wetlands are also protected under the 1971 Ramsar Convention.Often, restoration is done by blocking drainage channels in the peatland, and allowing natural vegetation to recover. Rehabilitation projects undertaken in North America and Europe usually focus on the rewetting of peatlands and revegetation of native species. This acts to mitigate carbon release in the short term before the new growth of vegetation provides a new source of organic litter to fuel the peat formation in the long term. UNEP is supporting peatland restoration in Indonesia. Global Peatlands Initiative References External links "Quagmire" . Encyclopædia Britannica. Vol. 22 (11th ed.). 1911. p. 703.
Climate TRACE
Climate TRACE (Tracking Real-Time Atmospheric Carbon Emissions) is an independent group which monitors and publishes greenhouse gas emissions within weeks. It launched in 2021 before COP26, and improves monitoring, reporting and verification (MRV) of both carbon dioxide and methane. The group monitors sources such as coal mines and power station smokestacks worldwide, with satellite data (but not their own satellites) and artificial intelligence.Time magazine named it as one of the hundred best inventions of 2020. Their emissions map is the largest global inventory and interactive map of greenhouse gas emission sources. According to Kelly Sims Gallagher it could influence the politics of climate change by reducing MRV disputes, and lead to more ambitious climate pledges.Developed countries' annual reports to the UNFCCC are submitted over a year after the end of the monitored year. Developing countries in the Paris Agreement will submit every two years. Some large emitters, such as Iran which has not ratified the agreement, have not submitted a greenhouse gas inventory in the 2020s.New data was released around the time of the 2022 United Nations Climate Change Conference. Methods Power plant emissions are tracked by training software with supervised learning to combine satellite imagery with other open data, such as government datasets, OpenStreetMap, and company reports. Similarly large ships will be tracked to better understand emissions from international shipping. Members As of 2021 the coalition consists of: Nonprofits: CarbonPlan, Earthrise Alliance, Hudson Carbon, OceanMind, Rocky Mountain Institute, TransitionZero, and WattTime Companies: Blue Sky Analytics and Hypervine Former U.S. Vice President Al Gore See also Glossary of climate change == References ==
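Climate TRACE's actual models and data pipeline are not published in this article; the following is a minimal, hypothetical sketch of the general approach described under Methods: training a supervised model that maps satellite-derived observations of a power plant to its reported emissions, then applying it to plants without timely or reliable reports. The file and feature names (plant_observations.csv, plume_index, thermal_anomaly, and so on) are invented for illustration and are not part of any real dataset.

```python
# Hypothetical sketch only: feature names and data files are invented for
# illustration; this is not Climate TRACE's actual model or pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Assumed input: one row per plant-month with satellite-derived features and,
# where available, reported CO2 emissions to use as training labels.
df = pd.read_csv("plant_observations.csv")

features = ["plume_index", "thermal_anomaly", "capacity_mw", "hours_visible"]
labelled = df.dropna(subset=["reported_tco2"])

X_train, X_test, y_train, y_test = train_test_split(
    labelled[features], labelled["reported_tco2"], test_size=0.2, random_state=0
)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)
print("MAE on held-out plants:", mean_absolute_error(y_test, model.predict(X_test)))

# Estimate emissions for plants that lack timely or reliable reports.
unlabelled = df[df["reported_tco2"].isna()]
df.loc[unlabelled.index, "estimated_tco2"] = model.predict(unlabelled[features])
```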
New South Wales Greenhouse Gas Abatement Scheme
The New South Wales Greenhouse Gas Abatement Scheme (also known as GGAS) was a mandatory greenhouse gas emissions trading scheme that commenced on 1 January 2003 and aimed to lower greenhouse gas emissions in New South Wales, Australia, to 7.27 tonnes of carbon dioxide per capita by the year 2007. The Scheme imposed obligations on NSW electricity retailers and certain other parties, including large electricity users who elected to manage their own benchmark, to abate a portion of the greenhouse gas emissions attributable to their sales/consumption of electricity in NSW. They did this by purchasing and acquitting NSW Greenhouse Abatement Certificates (also known as NGACs), a type of carbon credit, created by accredited "Abatement Certificate Providers" (ACPs). The NSW Minister for Energy, Chris Hartcher, announced the closure of the scheme in April 2012, and GGAS (also referred to as the Greenhouse Gas Reduction Scheme) closed on 30 June 2012. The NSW Government closed GGAS to avoid duplication with the Commonwealth's carbon price, which commenced on 1 July 2012.
See also
Jack's Gully
WSN Environmental Solutions
CO2 Australia Limited
References
External links
Fact Sheet on Greenhouse and Energy
The NSW Greenhouse Gas Office
NGAC Creation Certificate Providers
NGAC FAQ archived webpage
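The scheme's detailed benchmark rules were set out in legislation and are not reproduced here, but a simplified, hypothetical sketch of the basic arithmetic (a per-capita state benchmark apportioned to a retailer by its share of electricity sales, with any shortfall covered by surrendering NGACs) might look like the following. The apportionment rule and all numbers below are illustrative assumptions, not the scheme's actual parameters or data.

```python
# Simplified, hypothetical illustration of a GGAS-style benchmark calculation.
# The apportionment rule and all numbers below are assumptions for clarity,
# not the scheme's actual formula or data.

def ngac_shortfall(state_benchmark_t_per_capita: float,
                   state_population: float,
                   retailer_share_of_sales: float,
                   attributable_emissions_t: float) -> float:
    """Return the number of NGACs (1 t CO2-e each) a retailer would surrender."""
    allowance_t = (state_benchmark_t_per_capita * state_population
                   * retailer_share_of_sales)
    return max(0.0, attributable_emissions_t - allowance_t)

# Example: 7.27 t CO2 per capita (the 2007 benchmark), an assumed population
# of 6.8 million, a retailer with an assumed 10% of NSW electricity sales, and
# assumed attributable emissions of 5.5 Mt CO2-e.
print(ngac_shortfall(7.27, 6.8e6, 0.10, 5.5e6))   # -> ~556,400 certificates
```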
Paper recycling
The recycling of paper is the process by which waste paper is turned into new paper products. It has a number of important benefits: It saves waste paper from occupying homes of people and producing methane as it breaks down. Because paper fibre contains carbon (originally absorbed by the tree from which it was produced), recycling keeps the carbon locked up for longer and out of the atmosphere. Around two-thirds of all paper products in the US are now recovered and recycled, although it does not all become new paper. After repeated processing the fibres become too short for the production of new paper, which is why virgin fibre (from sustainably farmed trees) is frequently added to the pulp recipe.There are three categories of paper that can be used as feedstocks for making recycled paper: mill broke, pre-consumer waste, and post-consumer waste. Mill broke is paper trimmings and other paper scrap from the manufacture of paper, and is recycled in a paper mill. Pre-consumer waste is a material which left the paper mill but was discarded before it was ready for consumer use. Post-consumer waste is material discarded after consumer use, such as old corrugated containers (OCC), old magazines, and newspapers. Paper suitable for recycling is called "scrap paper", often used to produce moulded pulp packaging. The industrial process of removing printing ink from paper fibres of recycled paper to make deinked pulp is called deinking, an invention of the German jurist Justus Claproth. Process The process of waste paper recycling most often involves mixing used/old paper with water and chemicals to break it down. It is then chopped up and heated, which breaks it down further into strands of cellulose, a type of organic plant material; this resulting mixture is called pulp, or slurry. It is strained through screens, which remove plastic (especially from plastic-coated paper) that may still be in the mixture. It is then cleaned, de-inked (ink is removed), bleached, and mixed with water. Then it can be made into new recycled paper.The share of ink in a wastepaper stock is up to about 2% of the total weight. Rationale for recycling Industrialized paper making has an effect on the environment both upstream (where raw materials are acquired and processed) and downstream (waste-disposal impacts).Today, 40% of paper pulp is created from wood (in most modern mills only 9–16% of pulp is made from pulp logs; the rest comes from waste wood that was traditionally burnt). Paper production accounts for about 35% of felled trees. Recycling one ton of newsprint saves about 1 ton of wood while recycling 1 ton of printing or copier paper saves slightly more than 2 tons of wood. This is because kraft pulping requires twice as much wood since it removes lignin to produce higher quality fibres than mechanical pulping processes. Relating tons of paper recycled to the number of trees not cut is meaningless, since tree size varies tremendously and is the major factor in how much paper can be made from how many trees. In addition, trees raised specifically for pulp production account for 16% of world pulp production, old growth forests 9% and second- and third- and more generation forests account for the balance. Most pulp mill operators practice reforestation to ensure a continuing supply of trees. The Programme for the Endorsement of Forest Certification (PEFC) and the Forest Stewardship Council (FSC) certify paper made from trees harvested according to guidelines meant to ensure good forestry practices. 
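The wood-saving figures above (about 1 tonne of wood per tonne of recycled newsprint, and a bit over 2 tonnes for printing or copier paper) follow from the very different yields of mechanical and kraft pulping. The sketch below makes that arithmetic explicit, using round illustrative yield figures (roughly 90–95% for mechanical pulp and roughly 45–50% for kraft) rather than data from any particular mill.

```python
# Illustrative only: the yield figures are typical round numbers,
# not data for any particular mill.
MECHANICAL_PULP_YIELD = 0.93   # ~90-95% of wood mass ends up as pulp
KRAFT_PULP_YIELD = 0.47        # ~45-50%, because lignin is removed

def wood_needed_per_tonne_paper(pulp_yield: float) -> float:
    """Tonnes of wood needed to make roughly one tonne of pulp/paper."""
    return 1.0 / pulp_yield

print(f"newsprint (mechanical pulp): ~{wood_needed_per_tonne_paper(MECHANICAL_PULP_YIELD):.1f} t wood")
print(f"copier paper (kraft pulp):   ~{wood_needed_per_tonne_paper(KRAFT_PULP_YIELD):.1f} t wood")
# -> roughly 1.1 t and 2.1 t of wood respectively, consistent with the
#    "about 1 tonne" and "slightly more than 2 tonnes" savings quoted above.
```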
Energy Energy consumption is reduced by recycling, although there is debate concerning the actual energy savings realized. The Energy Information Administration claims a 40% reduction in energy when paper is recycled versus paper made with unrecycled pulp, while the Bureau of International Recycling (BIR) claims a 64% reduction. Some calculations show that recycling one ton of newspaper saves about 4,000 kWh (14 GJ) of electricity, although this may be too high (see comments below on unrecycled pulp). This is enough electricity to power a 3-bedroom European house for an entire year, or enough energy to heat and air-condition the average North American home for almost six months. Recycling paper to make pulp actually consumes more fossil fuels than making new pulp via the kraft process; these mills generate most of their energy from burning waste wood (bark, roots, sawmill waste) and byproduct lignin (black liquor). Pulp mills producing new mechanical pulp use large amounts of energy; a very rough estimate of the electrical energy needed is 10 gigajoules per tonne of pulp (2500 kW·h per short ton). Landfill use About 35% of municipal solid waste (before recycling) in the United States by weight is paper and paper products. 42.4% of that is recycled. Water and air pollution The United States Environmental Protection Agency (EPA) has found that recycling causes 35% less water pollution and 74% less air pollution than making virgin paper. Pulp mills can be sources of both air and water pollution, especially if they are producing bleached pulp. Modern mills produce considerably less pollution than those of a few decades ago. Recycling paper provides an alternative fibre for papermaking. Recycled pulp can be bleached with the same chemicals used to bleach virgin pulp, but hydrogen peroxide and sodium hydrosulfite are the most common bleaching agents. Recycled pulp, or paper made from it, is known as PCF (process chlorine free) if no chlorine-containing compounds were used in the recycling process. Greenhouse gas emissions Studies on paper and cardboard production estimate the emissions of recycling paper to be 0.2 to 1.5 kg CO₂-equivalent/kg material. This is about 70% of the CO₂ emissions connected with production of virgin material. Recycling statistics In the mid-19th century, there was an increased demand for books and writing material. Up to that time, paper manufacturers had used discarded linen rags for paper, but supply could not keep up with the increased demand. Books were bought at auctions for the purpose of recycling fiber content into new paper, at least in the United Kingdom, by the beginning of the 19th century.Internationally, about half of all recovered paper comes from converting losses (pre-consumer recycling), such as shavings and unsold periodicals; approximately one third comes from household or post-consumer waste.Some statistics on paper consumption: 1996: it was estimated that 95% of business information is still stored on paper. 2006: recycling 1 short ton (0.91 t) of paper saves 17 mature trees, 7 thousand US gallons (26 m3) of water, 3 cubic yards (2.3 m3) of landfill space, 2 barrels of oil (84 US gal or 320 L), and 4,100 kilowatt-hours (15 GJ) of electricity – enough energy to power the average American home for six months. 1993: although paper is traditionally identified with reading and writing, communications has now been replaced by packaging as the single largest category of paper use at 41% of all paper used. 
no date: 115 billion sheets of paper are used annually for personal computers. The average web user prints 16 pages daily. 1997: on that year, 299,044 metric tons of paper was produced (including paperboard). 1999: on that year, in the United States, the average consumption of paper per person was approximately 354 kilograms. This would be the same consumption for 6 people in Asia or 30 people in Africa. 2006–2007: Australia 5.5 million tons of paper and cardboard was used with 2.5 million tons of this recycled. 2009: Newspaper manufactured in Australia has 40% recycled content. By region European Union Paper recycling in Europe has a long history. The industry self-initiative European Recovered Paper Council (ERPC) was set up in 2000 to monitor progress towards meeting the paper recycling targets set out in the 2000 European Declaration on Paper Recycling. Since then, the commitments in the Declaration have been renewed every five years. In 2011, the ERPC committed itself to meeting and maintaining both a voluntary recycling rate target of 70% in the then E-27, plus Switzerland and Norway by 2015, as well as qualitative targets in areas such as waste prevention, ecodesign and research and development. In 2014, the paper recycling rate in Europe was 71.7%, as stated in the 2014 Monitoring Report. United States Recycling has long been practiced in the United States. In 1690, nearly a century before the American Revolution, the first paper mill to use recycled linen rags was established by the Rittenhouse family. In 1993, 300 years later, another milestone was reached when, for the first time, more paper was recycled than was landfilled.In 2018, paper and paperboard accounted for 67.39 million tons of municipal solid waste (MSW) generated in the U.S., down from more than 87.74 million tons in 2000. As of 2018, paper products are still the largest component of MSW generated in the United States, making up 23% by weight. While paper is the most commonly recycled material (68.2 percent of paper waste was recovered in 2018, up from 33.5 percent in 1990) it is being used less overall than at the turn of the century. As of 2018, paper accounted for a third of all recyclables collected in the US, by weight. The widespread adoption of the internet and email has led to a change in the composition of the waster paper stream, with junk mail becoming a larger part of the materials collected, as reading of newspapers and writing of personal letters declines.By 1998, some 9,000 curbside recycling programs and 12,000 recyclable drop-off centers existed nationwide. As of 1999, 480 materials recovery facilities had been established to process the collected materials.In 2008, the global financial crisis caused the price of old newspapers to drop in the U.S. from $130 to $40 per short ton ($140/t to $45/t) in October. India The foundation also makes eco-friendly Lord Ganesh idols from paper pulp which are worshiped in Indian homes every year during Ganesh Chaturthi. These paper recycling activities are carried out throughout the year by the volunteers of the foundation converting waste paper into "No Waste".In recent years, paper recycling has increased and Indian imports of waste paper have increased following stringent restrictions by China on waste imports. However, only 25–28% of local waste paper is recycled. Mexico In Mexico, recycled paper, rather than wood pulp, is the principal feedstock in paper mills accounting for about 75% of raw materials. 
South Africa In 2018, South Africa recovered 1.285 million tonnes of recyclable paper products, putting the country's paper recovery rate at 71.7%. More than 90% of this recovered paper is used for the local beneficiation of new paper packaging and tissue. Limitations and effects Along with fibres, paper can contain a variety of inorganic and organic constituents, including up to 10,000 different chemicals, which can potentially contaminate the newly manufactured paper products. As an example, bisphenol A (a chemical commonly found in thermal paper) has been verified as a contaminant in a variety of paper products resulting from paper recycling. Groups of chemicals as phthalates, phenols, mineral oils, polychlorinated biphenyls (PCBs) and toxic metals have all been identified in paper material. Although several measures might reduce the chemical load in paper recycling (e.g., improved decontamination, optimized collection of paper for recycling), even completely terminating the use of a particular chemical (phase-out) might still result in its circulation in the paper cycle for decades. See also References This article incorporates public domain material from the United States Government. Archived from the original on 8 March 2006. External links Paper recycling at Curlie U.S. Environmental Protection Agency: Paper and Paperboard Products How to Make Recycled Paper – A tutorial for making your own recycled paper
Biofuel
Biofuel is a fuel that is produced over a short time span from biomass, rather than by the very slow natural processes involved in the formation of fossil fuels, such as oil. Biofuel can be produced from plants or from agricultural, domestic or industrial biowaste. Biofuels are mostly used for transportation, but can also be used for heating and electricity. Biofuels (and bioenergy in general) are regarded as a renewable energy source. However, the use of biofuel has been controversial because of several associated disadvantages, which vary on a case-by-case basis: the "food vs fuel" debate, and questions over whether production methods are sustainable or instead lead to deforestation and loss of biodiversity. In general, biofuels emit fewer greenhouse gases when burned in an engine and are generally considered carbon-neutral fuels, as the carbon emitted has been captured from the atmosphere by the crops used in production. However, life-cycle assessments of biofuels have shown large emissions associated with the potential land-use change required to produce additional biofuel feedstocks. Estimates of the climate impact of biofuels vary widely based on the methodology and exact situation examined. Therefore, the climate change mitigation potential of biofuel varies considerably: in some scenarios emission levels are comparable to fossil fuels, and in other scenarios biofuels result in negative emissions. The two most common types of biofuel are bioethanol and biodiesel. Brazil is the largest producer of bioethanol, while the EU is the largest producer of biodiesel. The energy content in the global production of bioethanol and biodiesel is 2.2 and 1.8 EJ per year, respectively. Demand for aviation biofuel is forecast to increase. Bioethanol is an alcohol made by fermentation, mostly from carbohydrates produced in sugar or starch crops such as maize, sugarcane, or sweet sorghum. Cellulosic biomass, derived from non-food sources such as trees and grasses, is also being developed as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form (E100), but it is usually used as a gasoline additive to increase octane ratings and improve vehicle emissions. Biodiesel is produced from oils or fats using transesterification. It can be used as a fuel for vehicles in its pure form (B100), but it is usually used as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles.
Terminology
The term "biofuel" is used in different ways. One definition is "Biofuels are biobased products, in solid, liquid, or gaseous forms. They are produced from crops or natural products, such as wood, or agricultural residues, such as molasses and bagasse." Other publications reserve the term biofuel for liquid or gaseous fuels used for transportation.
Conventional biofuels (first generation)
First-generation biofuels (also denoted as "conventional biofuels") are made from food crops grown on arable land. The crop's sugar, starch, or oil content is converted into biodiesel or ethanol using transesterification or yeast fermentation.
Advanced biofuels (second generation)
To avoid a "food versus fuel" dilemma, second-generation biofuels (also called advanced biofuels or sustainable biofuels) are made from waste products.
These are derived from agriculture and forestry activities such as rice straw, rice husk, wood chips, and sawdust. The feedstocks used to make the fuels either grow on arable land as byproducts of the main crop, or are grown on marginal land. Second-generation feedstocks also include straw, bagasse, perennial grasses, jatropha, waste vegetable oil, municipal solid waste and so forth.
Types
Liquid
Ethanol
Biologically produced alcohols, most commonly ethanol, and less commonly propanol and butanol, are produced by the action of microorganisms and enzymes through the fermentation of sugars or starches (easiest), or cellulose (which is more difficult). The IEA estimates that ethanol production used 20% of sugar supplies and 13% of corn supplies in 2021. Ethanol fuel is the most common biofuel worldwide, particularly in Brazil. Alcohol fuels are produced by fermentation of sugars derived from wheat, corn, sugar beets, sugar cane, molasses and any sugar or starch from which alcoholic beverages such as whiskey can be made (such as potato and fruit waste, etc.). The ethanol production methods used are enzyme digestion (to release sugars from stored starches), fermentation of the sugars, distillation and drying. The distillation process requires significant energy input for heat: this is sometimes provided by unsustainable fossil natural gas, but cellulosic biomass such as bagasse – the waste left after sugar cane is pressed to extract its juice – is the most common fuel in Brazil, while pellets, wood chips and waste heat are more common in Europe. Waste steam can fuel an ethanol factory, and waste heat from the factories can also be used in a district heating grid. Concerns about diverting corn and other food stocks to ethanol have led to the development of cellulosic ethanol.
Other bioalcohols
Methanol is currently produced from natural gas, a non-renewable fossil fuel. In the future it is hoped that it can be produced from biomass as biomethanol. This is technically feasible, but production is currently being postponed over concerns about its economic viability. The methanol economy is an alternative to the hydrogen economy, to be contrasted with today's hydrogen production from natural gas. Butanol (C4H9OH) is formed by ABE fermentation (acetone, butanol, ethanol), and experimental modifications of the process show potentially high net energy gains with biobutanol as the only liquid product. Biobutanol is often claimed to provide a direct replacement for gasoline, because it will produce more energy than ethanol and allegedly can be burned "straight" in existing gasoline engines (without modification to the engine or car), and is less corrosive and less water-soluble than ethanol, and could be distributed via existing infrastructures. Escherichia coli strains have also been successfully engineered to produce butanol by modifying their amino acid metabolism. One drawback to butanol production in E. coli remains the high cost of nutrient-rich media; however, recent work has demonstrated that E. coli can produce butanol with minimal nutritional supplementation. Biobutanol is sometimes called biogasoline, which is not correct, as it is chemically different, being an alcohol rather than a hydrocarbon like gasoline.
Biodiesel
Biodiesel is the most common biofuel in Europe. It is produced from oils or fats using transesterification and is a liquid similar in composition to fossil/mineral diesel. Chemically, it consists mostly of fatty acid methyl (or ethyl) esters (FAMEs).
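As an illustration of the transesterification chemistry behind these FAMEs, the sketch below works through the idealised stoichiometry for a single representative triglyceride (triolein reacting with methanol to give methyl oleate and glycerol). Real feedstocks are mixtures and real plants run with excess methanol, so the numbers are indicative only.

```python
# Idealised mass balance for transesterification of triolein:
#   triolein + 3 methanol -> 3 methyl oleate (FAME) + glycerol
# Real oils are mixtures and plants use excess methanol; this is
# illustrative stoichiometry, not process design data.
M_TRIOLEIN = 885.4       # g/mol
M_METHANOL = 32.04       # g/mol
M_METHYL_OLEATE = 296.5  # g/mol
M_GLYCEROL = 92.09       # g/mol

oil_tonnes = 1.0
mol = oil_tonnes * 1e6 / M_TRIOLEIN          # tonnes -> grams -> moles of triglyceride
methanol_t = 3 * mol * M_METHANOL / 1e6
fame_t = 3 * mol * M_METHYL_OLEATE / 1e6
glycerol_t = mol * M_GLYCEROL / 1e6

print(f"per tonne of oil: {methanol_t:.3f} t methanol in, "
      f"{fame_t:.3f} t biodiesel and {glycerol_t:.3f} t glycerol out")
# -> ~0.109 t methanol consumed, ~1.005 t FAME and ~0.104 t glycerol produced,
#    so roughly one tonne of biodiesel plus a glycerol by-product per tonne of oil.
```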
Feedstocks for biodiesel include animal fats, vegetable oils, soy, rapeseed, jatropha, mahua, mustard, flax, sunflower, palm oil, hemp, field pennycress, Pongamia pinnata and algae. Pure biodiesel (B100, also known as "neat" biodiesel) currently reduces emissions by up to 60% compared to fossil diesel. As of 2020, researchers at Australia's CSIRO have been studying safflower oil as an engine lubricant, and researchers at Montana State University's Advanced Fuels Center in the US have been studying the oil's performance in a large diesel engine, with results described as a "game-changer". Biodiesel can be used in any diesel engine and modified equipment when mixed with mineral diesel. It can also be used in its pure form (B100) in diesel engines, but some maintenance and performance problems may then occur during wintertime use, since the fuel becomes somewhat more viscous at lower temperatures, depending on the feedstock used. Electronically controlled 'common rail' and 'Unit Injector' type systems from the late 1990s onwards may only use biodiesel blended with conventional diesel fuel. These engines have finely metered and atomized multiple-stage injection systems that are very sensitive to the viscosity of the fuel. Many current-generation diesel engines are made so that they can run on B100 without altering the engine itself, although this depends on the fuel rail design. Since biodiesel is an effective solvent and cleans residues deposited by mineral diesel, engine filters may need to be replaced more often, as the biofuel dissolves old deposits in the fuel tank and pipes. It also effectively cleans the engine combustion chamber of carbon deposits, helping to maintain efficiency. In many European countries, a 5% biodiesel blend is widely used and is available at thousands of gas stations. Biodiesel is also an oxygenated fuel, meaning it contains a reduced amount of carbon and higher hydrogen and oxygen content than fossil diesel. This improves its combustion and reduces the particulate emissions from unburnt carbon. However, using pure biodiesel may increase NOx emissions. Biodiesel is also safe to handle and transport because it is non-toxic and biodegradable, and it has a high flash point of about 300 °F (148 °C) compared to petroleum diesel fuel, which has a flash point of 125 °F (52 °C). In France, biodiesel is incorporated at a rate of 8% in the fuel used by all French diesel vehicles. Avril Group produces, under the brand Diester, a fifth of the 11 million tons of biodiesel consumed annually by the European Union, making it the leading European producer of biodiesel. Green diesel Green diesel is produced through hydrocracking biological oil feedstocks, such as vegetable oils and animal fats. Hydrocracking is a refinery method that uses elevated temperatures and pressure in the presence of a catalyst to break down larger molecules, such as those found in vegetable oils, into shorter hydrocarbon chains used in diesel engines. It may also be called renewable diesel, hydrotreated vegetable oil (HVO fuel) or hydrogen-derived renewable diesel. Unlike biodiesel, green diesel has exactly the same chemical properties as petroleum-based diesel. It does not require new engines, pipelines or infrastructure to distribute and use, but has not been produced at a cost that is competitive with petroleum. Gasoline versions are also being developed.
Green diesel is being developed in Louisiana and Singapore by ConocoPhillips, Neste Oil, Valero, Dynamic Fuels, and Honeywell UOP, as well as Preem in Gothenburg, Sweden, creating what is known as Evolution Diesel. Straight vegetable oil Straight unmodified edible vegetable oil is generally not used as fuel, but lower-quality oil has been used for this purpose. Used vegetable oil is increasingly being processed into biodiesel, or (more rarely) cleaned of water and particulates and then used as a fuel. The IEA estimates that biodiesel production used 17% of global vegetable oil supplies in 2021. Oils and fats can be reacted with a short-chain alcohol (usually methanol) in the presence of a catalyst (usually sodium hydroxide, NaOH) to produce biodiesel, or they can be hydrogenated to give a diesel substitute. The product of hydrogenation is a straight-chain hydrocarbon with a high cetane number, low in aromatics and sulfur, and containing no oxygen. Hydrogenated oils can be blended with diesel in all proportions. They have several advantages over biodiesel, including good performance at low temperatures, no storage stability problems and no susceptibility to microbial attack. Biogasoline A study led by Professor Lee Sang-yup at the Korea Advanced Institute of Science and Technology (KAIST) and published in the international science journal Nature used modified E. coli, fed with glucose found in plants or other non-food crops, to produce biogasoline by means of the enzymes the bacteria produced. The enzymes converted the sugar into fatty acids and then turned these into hydrocarbons that were chemically and structurally identical to those found in commercial gasoline fuel. Bioethers Bioethers (also referred to as fuel ethers or oxygenated fuels) are cost-effective compounds that act as octane rating enhancers. "Bioethers are produced by the reaction of reactive iso-olefins, such as iso-butylene, with bioethanol." Bioethers are created from wheat or sugar beets, and can also be produced from the waste glycerol that results from the production of biodiesel. They also enhance engine performance, while significantly reducing engine wear and toxic exhaust emissions. Although bioethers are likely to replace ethers produced from petroleum in the UK, it is highly unlikely they will become a fuel in and of themselves due to their low energy density. By greatly reducing the amount of ground-level ozone emissions, they contribute to improved air quality. When it comes to transportation fuel there are six ether additives: dimethyl ether (DME), diethyl ether (DEE), methyl tert-butyl ether (MTBE), ethyl tert-butyl ether (ETBE), tert-amyl methyl ether (TAME), and tert-amyl ethyl ether (TAEE). The European Fuel Oxygenates Association identifies MTBE and ETBE as the most commonly used ethers in fuel to replace lead. Ethers were introduced in Europe in the 1970s to replace the highly toxic compound. Although Europeans still use bioether additives, the U.S. Energy Policy Act of 2005 lifted a requirement for reformulated gasoline to include an oxygenate, leading to less MTBE being added to fuel. Aviation biofuel Gaseous Biogas and biomethane Biogas is a mixture composed primarily of methane and carbon dioxide produced by the process of anaerobic digestion of organic material by micro-organisms. Other trace components of this mixture include water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen.
It can be produced either from biodegradable waste materials or by the use of energy crops fed into anaerobic digesters to supplement gas yields. The solid byproduct, digestate, can be used as a biofuel or a fertilizer. When CO2 and other impurities are removed from biogas, it is called biomethane. Biogas can be recovered from mechanical biological treatment waste processing systems. Landfill gas, a less clean form of biogas, is produced in landfills through naturally occurring anaerobic digestion. If it escapes into the atmosphere, it acts as a greenhouse gas. Farmers can produce biogas from their cattle's manure by using anaerobic digesters. Syngas Syngas, a mixture of carbon monoxide, hydrogen and various hydrocarbons, is produced by partial combustion of biomass, that is, combustion with an amount of oxygen that is not sufficient to convert the biomass completely to carbon dioxide and water. Before partial combustion, the biomass is dried, and sometimes pyrolysed. Burning the resulting gas mixture, syngas, is more efficient than direct combustion of the original biofuel; more of the energy contained in the fuel is extracted. Syngas may be burned directly in internal combustion engines, turbines or high-temperature fuel cells. The wood gas generator, a wood-fueled gasification reactor, can be connected to an internal combustion engine. Syngas can be used to produce methanol, dimethyl ether and hydrogen, or converted via the Fischer–Tropsch process to produce a diesel substitute, or a mixture of alcohols that can be blended into gasoline. Gasification normally relies on temperatures greater than 700 °C. Lower-temperature gasification is desirable when co-producing biochar, but results in syngas polluted with tar. Solid The term "biofuels" is also used for solid fuels that are made from biomass, even though this is less common. Research into other types Algae-based biofuels Algae can be produced in ponds or tanks on land, and out at sea. Algal fuels have high yields, can be grown with minimal impact on fresh water resources, can be produced using saline water and wastewater, have a high ignition point, and are biodegradable and relatively harmless to the environment if spilled. Production requires large amounts of energy and fertilizer, the produced fuel degrades faster than other biofuels, and it does not flow well in cold temperatures. By 2017, due to economic considerations, most efforts to produce fuel from algae had been abandoned or changed to other applications. Algae has the potential to yield more than 5,000 gallons of biodiesel per acre per year, which is part of the reason that, in 2010, NAAB attempted to commercialize the use of algae-based biofuels widely. Eldorado Biofuels (an NAAB member) aimed to create commercial algae-based biofuels that repurposed wastewater and did not require food crops or freshwater in order to produce energy. This site, constructed by Alfonz Visozlay in southeastern New Mexico, uses a system that separates petroleum-related toxins and trace elements that aid in the growth of algae. Electrofuels and solar fuels This class of biofuels includes electrofuels and solar fuels. Electrofuels are made by storing electrical energy in the chemical bonds of liquids and gases. The primary targets are butanol, biodiesel, and hydrogen, but include other alcohols and carbon-containing gases such as methane and butane. A solar fuel is a synthetic chemical fuel produced from solar energy.
Light is converted to chemical energy, typically by reducing protons to hydrogen, or carbon dioxide to organic compounds.Third and fourth-generation biofuels also include biofuels that are produced by bioengineered organisms i.e. algae and cyanobacteria. Algae and cyanobacteria will use water, carbon dioxide, and solar energy to produce biofuels. This method of biofuel production is still at the research level. The biofuels that are secreted by the bioengineered organisms are expected to have higher photon-to-fuel conversion efficiency, compared to older generations of biofuels. One of the advantages of this class of biofuels is that the cultivation of the organisms that produce the biofuels does not require the use of arable land. The disadvantages include the cost of cultivating the biofuel-producing organisms being very high. Extent of production and use The following fuels can be produced using first, second, third or fourth-generation biofuel production procedures. Most of these can be produced using two or three of the different biofuel generation procedures. Global biofuel production was 81 Mtoe in 2017 which represented an annual increase of about 3% compared to 2010.: 12  Mtoe stands for Million Tonnes of Oil Equivalent. Furthermore: "the US is the largest producer in the world producing 37 Mtoe in 2017; Brazil and South America, 23 Mtoe; and Europe (mainly Germany) 12 Mtoe".: 12 An assessment from 2017 found that: "Biofuels will never be a major transport fuel as there is just not enough land in the world to grow plants to make biofuel for all vehicles. It can however, be part of an energy mix to take us into a future of renewable energy.": 11 In 2021, worldwide biofuel production provided 4.3% of the world's fuels for transport, including a very small amount of aviation biofuel. By 2027 worldwide biofuel production is expected to supply 5.4% of the world's fuels for transport including 1% of aviation fuel. The International Energy Agency (IEA) wants biofuels to make up 64% of the world demand for transportation fuels by 2050, in order to reduce dependency on petroleum. However, the production and consumption of biofuels are not on track to meet the IEA's sustainable development scenario. From 2020 to 2030 global biofuel output has to increase by 16% each year to reach IEA's goal. Issues Environmental impacts Estimates about the climate impact from biofuels vary widely based on the methodology and exact situation examined.In general, biofuels emit fewer greenhouse gas emissions when burned in an engine and are generally considered carbon-neutral fuels as the carbon emitted has been captured from the atmosphere by the crops used in production. However, life-cycle assessments of biofuels have shown large emissions associated with the potential land-use change required to produce additional biofuel feedstocks. A review of 179 studies published between 2009 and 2020 found that if no land-use change is involved, first-generation biofuels can—on average—have lower emissions than fossil fuels. However, there is an issue with competition with food. Up to 40% of corn produced in the United States is used to make ethanol, and worldwide 10% of all grain is turned into biofuel. A 50% reduction in grain used for biofuels in the US and Europe would replace all of Ukraine's grain exports. 
Also, several studies have shown that reductions in emissions from biofuels are achieved at the expense of other impacts, such as acidification, eutrophication, water footprint and biodiversity loss. The use of second-generation biofuels is thought to increase environmental sustainability, since the non-food part of plants is used to produce them instead of being discarded. But the use of this class of biofuels increases the competition for lignocellulosic biomass, increasing the cost of these biofuels. The European Commission officially approved a measure to phase out palm oil-based biofuels by 2030. During a meeting with European Commission President Ursula von der Leyen, Indonesian President Joko Widodo expressed concern about the EU Deforestation Regulation (EUDR), which aims to prevent products linked to deforestation from reaching the EU market. Indirect land use change impacts of biofuels See also References Sources Avril Group, ed. (2015). A new springtime for the oils and proteins sectors: Activity Report 2014 (PDF) (Report). Paris: Avril. p. 65. Archived from the original (PDF) on 26 October 2020. Retrieved 11 August 2022. EurObserv (July 2014). Biofuel barometer (PDF) (Report). External links Biofuels Journal Alternative Fueling Station Locator Archived 14 July 2008 at the Wayback Machine (EERE) Towards Sustainable Production and Use of Resources: Assessing Biofuels by the United Nations Environment Programme, October 2009. Biofuels guidance for businesses, including permits and licences required on NetRegs.gov.uk How Much Water Does It Take to Make Electricity?—Natural gas requires the least water to produce energy, some biofuels the most, according to a new study. International Conference on Biofuels Standards – European Union Biofuels Standardization Biofuels from Biomass: Technology and Policy Considerations Thorough overview from MIT The Guardian news on biofuels The US DOE Clean Cities Program – links to the 87 US Clean Cities coalitions, as of 2004. Biofuels Factsheet by the University of Michigan's Center for Sustainable Systems Learn Biofuels – Educational Resource for Students
environmental issues in alberta
The Canadian province of Alberta faces a number of environmental issues related to natural resource extraction—including the oil and gas industry and its oil sands—as well as endangered species, melting glaciers in Banff, floods and droughts, wildfires, and global climate change. While the oil and gas industry generates substantial economic wealth, the Athabasca oil sands, which are situated almost entirely in Alberta, are the "fourth most carbon intensive on the planet behind Algeria, Venezuela and Cameroon" according to an August 8, 2018 article in the American Association for the Advancement of Science's journal Science. This article details some of the environmental issues, including past ecological disasters in Alberta, and describes some of the efforts at the municipal, provincial and federal level to mitigate the risks and impacts. According to the 2019 report Canada's Changing Climate Report, which was commissioned by Environment and Climate Change Canada, Canada's annual average temperature over land has warmed by 1.7 °C since 1948. The rate of warming is even higher in Canada's North, in the Prairies and in northern British Columbia. The Intergovernmental Panel on Climate Change's (IPCC) October 8, 2018 Special Report on Global Warming of 1.5 °C set a target of 1.5 °C (2.7 °F) that would require "deep emissions reductions" and that "[g]lobal net human-caused emissions of carbon dioxide (CO2) would need to fall by about 45 percent from 2010 levels by 2030, reaching 'net zero' around 2050" for global warming to be limited to 1.5 °C. The Canadian oil and gas industry produces "60 per cent of all industrial emissions in Canada" and Alberta has the largest oil and gas industry in the country. By September 2017, Alberta had already begun "implementing broad climate change policies", including a "sophisticated two-tier carbon pricing system" for consumers and major emitters. This represented a "first step in broadening the tax base". The province set a "target cap for greenhouse gas emissions" and began the transformation to a lower-carbon economy, with coal being phased out for electricity production. Some involved in the energy industry were "voluntarily expanding into renewables and lower-carbon energy sources." The first act introduced by Premier Jason Kenney, as promised in his United Conservative Party (UCP) election platform, was An Act to Repeal the Carbon Tax, which received Royal Assent on June 4, 2019. Raw bitumen extracted from the oil sands in northern Alberta is shipped within Canada and to the United States by pipeline, rail, and truck. Environmental concerns about the unintended consequences of the oil sands industry are linked to environmental issues in the rest of Canada. While pipelines are considered to be the most efficient and safest of the three methods, concerns have been raised about pipeline expansion because of climate change, the risk of pipeline leaks, increased oil tanker traffic and a higher risk of oil tanker spills, and violations of First Nations' rights. Overview Environmental liabilities include emissions from a number of sources in the oil and gas industry, including oil sands tailings ponds, oil spills and tailings dam failures, pipelines, and reclamation liabilities such as orphan wells. Other environmental issues include melting glaciers, wildfires, extreme weather events (including floods and droughts), species at risk such as the boreal woodland caribou and bull trout, and invasive destructive species such as the mountain pine beetle.
Potential solutions include energy efficiency, reclamation, regulatory instruments for measuring, monitoring and managing greenhouse gases including methane and carbon dioxide, carbon pricing including a carbon tax, and wilderness and parks. Greenhouse gas emissions Environment Canada monitors greenhouse gas emissions, including "carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), perfluorocarbons (PFCs), hydrofluorocarbons (HFCs), sulphur hexafluoride (SF6) and nitrogen trifluoride (NF3)". The sources of GHG are grouped into five sectors: energy; industrial processes and product use (IPPU); agriculture; waste; and land use, land-use change and forestry (LULUCF). Air quality On September 9, 2015, then-Environment Minister Shannon Phillips warned that Alberta was "on track to have the worst air quality in Canada". The 2015 Canadian Ambient Air Quality Standards report showed that the Red Deer area had "exceeded the acceptable amount of particulate matter and ozone exposure" from 2011 through 2013. Although the health risk was low, Phillips called on the Red Deer area, along with "the lower Athabasca, upper Athabasca, North Saskatchewan and South Saskatchewan" regions whose air quality was also at risk, to develop plans to prevent their air quality levels from deteriorating. Todd Loewen, then-Wildrose environment critic, said Phillips was over-reacting. By 2018, Alberta Environment and Parks research on the composition of fine particulate matter, which endangers health at any level, indicated that nitrogen dioxide and volatile compounds associated with industry "make up a lot of the fine particulate matter in the Red Deer region". A May 14, 2019 Data Trending and Comparison Report by Fort Air Partnership (FAP) showed that in its study area, which includes a "4,500 square-kilometre airshed near Edmonton", "levels of sulphur dioxide, nitrogen oxide and carbon monoxide" have been decreasing since the late 1980s. From 2017 to May 2019, Bluesource's Methane Reduction Program retrofitted 4,000 high-bleed pneumatic controllers with lower-emitting units for 15 oil and gas producers, which cut estimated emissions by "180,000 tonnes of CO2e in 2018 and saved oil and gas producers over $4 million in capital expenditures." Greenhouse gas emissions in Alberta (1990-2017) According to federal data published in the National Observer on February 20, 2019, in 2016 the province's total emissions of CO2 equivalent amounted to 262.9 megatonnes (Mt), with 17 per cent from the electrical sector and 48 per cent from the oil and gas sector. Alberta's CO2-equivalent emissions increased to 273,000 kilotonnes (kt) in 2017 from 171,000 kt in 1990. From 2005 to 2017 they increased by 18%, mainly because of "the expansion of oil and gas operations." The total of CO2-equivalent emissions in 2017 for all of Canada was 714,000 kt. In contrast, Ontario, the second largest emitter, had a total of 159,000 kt of CO2 equivalent in 2017, representing a decrease from 1990 when it was 180,000 kt. Between 2005 and 2017, Ontario saw a decrease of 22%, largely because of the closing of "coal-fired electricity generation plants". According to the Alberta government, the impact of methane as a greenhouse gas is "25 times greater than carbon dioxide over a 100-year period". In 2014, Alberta's oil and gas sector emitted 31.4 megatonnes of methane (measured in carbon dioxide equivalent). Alberta has set a target of reducing methane emissions by 45 per cent by 2025.
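The conversion behind figures like these is simple arithmetic: a mass of methane is multiplied by its global warming potential (GWP) to express it in carbon dioxide equivalent. The sketch below illustrates this using the 100-year GWP of 25 cited above; the example tonnage is a hypothetical input, not a reported Alberta figure.

```python
# Minimal sketch: convert a methane emission into CO2-equivalent terms using the
# 100-year global warming potential (GWP) of 25 cited above. The example input
# below is hypothetical, not an official Alberta figure.

GWP_CH4_100YR = 25  # tonnes CO2e per tonne of CH4, per the value cited in the text

def methane_to_co2e(tonnes_ch4: float) -> float:
    """Express a methane emission in tonnes of CO2 equivalent."""
    return tonnes_ch4 * GWP_CH4_100YR

def co2e_to_methane(tonnes_co2e: float) -> float:
    """Back out the raw methane tonnage implied by a CO2e figure."""
    return tonnes_co2e / GWP_CH4_100YR

if __name__ == "__main__":
    example_mt_ch4 = 1.2  # hypothetical annual methane emission, in megatonnes of CH4
    print(f"{example_mt_ch4} Mt CH4 is about {methane_to_co2e(example_mt_ch4):.1f} Mt CO2e over 100 years")
    # A reported 31.4 Mt CO2e figure would correspond to roughly this much raw methane:
    print(f"31.4 Mt CO2e is about {co2e_to_methane(31.4):.2f} Mt CH4")
```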
The oil and gas sector The oil and gas industry produces "60 per cent of all industrial emissions in Canada" and Alberta has the largest oil and gas industry.According to Natural Resources Canada (NRCAN), because of increased oil and gas production from 2005 through 2016, GHG emissions in Canada increased 16%, particularly through in-situ extraction.By 2015, Venezuela accounted for 18%, Saudi Arabia for 16.1%, and Canada for 10.3% of the world's proven oil reserves, according to NRCAN. Based on a May 2019 report, Alberta's total oil production in March, 2019 was 17.09 million cubic metres and 17.088 million cubic metres in March 2018. Oil sands tailings ponds By 2016, NRCAN reported that the growth of annual production of oil sands, in spite of significant technological advances, presents several environmental challenges to land, water, air, and energy conservation. One of the most difficult environmental challenges facing the oil industry is the management of the oil sands tailings ponds, which hold large volumes of tailings, the byproduct of bitumen extraction from the oil sands, which contain a mixture of salts, suspended solids and other dissolvable chemical compounds such as acids, benzene, hydrocarbons residual bitumen, fine silts and water.Tailings ponds in Alberta held c. 732 billion litres in 2008 and by 2013 they covered about 77 square kilometres (30 sq mi). By 2017 this increased to c."1.2 trillion litres of contaminated water" and then covered about 220 square kilometres (85 sq mi). In 2009, as tailing ponds continued to proliferate and volumes of fluid tailings increased, the Energy Resources Conservation Board of Alberta issued Directive 074 to force oil companies to manage tailings based on aggressive criteria. In 2015, regulators replaced 074 with Directive 085, which allowed the oil industry to release fluid fine tailings (FFT) into tailings ponds. In a June 3, 2019 The Globe and Mail article, limnologist David Schindler expressed concerns about new regulations at both the provincial and federal level authorizing the "discharge of treated effluence" from oil sands tailings ponds into the Athabasca River.The industry has been fined under the federal Migratory Birds Convention Act (MBCA) and Alberta's Environmental Protection and Enhancement Act in 2018 and 2010 for the deaths of great blue herons at the MLSB, and over 1,606 ducks in Syncrude's oil sands tailings ponds. Syncrude's fine of $3 million was the largest to date. Oil sands emissions The Athabasca oil sands, which are situated almost entirely in Alberta, are the "fourth most carbon intensive on the planet behind Algeria, Venezuela and Cameroon" according to an August 8, 2018 article in the American Association for the Advancement of Science's journal Science. Their research concluded that "Canada's rating was nearly twice the global average".Scientists from Environment Canada and Queen's University published their research in the January 2013 issue of the Proceedings of the National Academy of Sciences journal (PNAS) in which they described innovative methods to measure the polycyclic aromatic hydrocarbons (PAH) in core samples from lakes including a remote lake, Namur Lake, which is situated 50 km from the sampling site, AR6, on the Athabasca River, and had a "high atmospheric PAH deposition. They found that the sedimentary profiles from the core samples revealed "striking PAH trajectories" that "reflect the decades-long impacts of oil sands development on lake ecosystems, including remote Namur Lake. 
This temporal PAH pattern was not recognized previously by industry-funded oil sands monitoring programs." Alberta's oil sands "emit high levels of air pollutants", according to a May 25, 2016 article in Nature entitled "Oil sands operations as a large source of secondary organic aerosols" by lead author John Liggio and a team of Environment Canada scientists. Oil sands emissions are one of the largest sources of "anthropogenic secondary organic aerosols in North America". The Environment Canada researchers defined secondary organic aerosols (SOAs) as "gases and particles that interact with sunlight in complex ways and that are released by both the globe's plant matter as well as fossil-burning machines and industries". According to the article in The Verge, citing Environment Canada researchers, emissions from the oil sands "equal what's produced by the entire city of Toronto". The scientists from Environment Canada said that Alberta oil sands greenhouse gas emissions may be much higher than the four main mines were reporting. For example, Suncor's mine was 13 per cent higher than it reported, Canadian Natural Resources Ltd.'s Horizon and Jackpine mines were about 37 per cent more, and Syncrude's Mildred Lake mine (MLSB) emitted 2 1/4 times more than it reported to the federal pollutant registry. Their "data from airborne measurements over the bitumen-producing region in August 2013 found that oilsands production generates at least 45 to 84 tonnes per day of the tiny particulate matter." According to the University of Calgary's Joule Bergerson, a co-author of an August 31, 2018 Natural Sciences and Engineering Research Council (NSERC)-funded Science article entitled "Global carbon intensity of crude oil production", "if oil-producing countries adopted regulations similar to Canada's that limit the amount of gas flared or vented into the air, it could cut greenhouse gas emissions from oil production by almost a quarter." Oil sands emissions cap In May 2016, the NDP provincial government introduced the Climate Leadership Act, which "included a 100-megatonne annual emissions cap on oilsands operations in Alberta". The Oil Sands Emissions Limit Act passed in December 2016. Since Alberta's oil sands emitted approximately 70 megatonnes a year in 2016, the emissions cap would not negatively affect the oil industry for many years. Without an emissions cap, however, the federal government has promised that future in-situ oil projects would have to go through approvals—not through the provincial rules under the Alberta Energy Regulator—but under the new federal regulations under development in Bill C-69, known as the "Impact Assessment Act", which will "change the regulatory process for new energy projects." Although Premier Kenney did not approve of the NDP's 100-megatonne annual emissions cap on the oil sands, and had initially planned on eliminating the cap along with the carbon tax, within days of winning the election he "soften[ed] his stance." In May he said that the "whole question of the emissions cap is academic" because [Alberta] was "nowhere close to hitting [the cap], so for us that is not a fight that we're going to get into at this point." On June 13, 2019, federal Environment Minister Catherine McKenna announced that, because An Act to Repeal the Carbon Tax had become law in Alberta, the federal carbon tax would be imposed on Alberta as of January 1, 2020. On June 18, the Governor in Council (GIC) approved the Trans Mountain Expansion Project.
The November 2016 initial federal support for the controversial expansion of the existing Trans Mountain Pipeline was conditional on Alberta having a "climate plan that included the key ingredients of a carbon tax and a cap on emissions from the oilsands". According to CBC, now that there is a forced federal carbon tax on Alberta, both of the "key conditions for the project" have been fulfilled. Oil sands industry's technological solutions Open pit mining is used for extracting only 20% of Alberta's bitumen reserves—those that are not too deep to access. According to Vicki Lightbrown of Alberta Innovates, the remaining 80% of bitumen reserves are deep underground and can only be recovered in situ, which involves drilling down to extract the oil using methods such as steam-assisted gravity drainage (SAGD) and cyclic steam stimulation (CSS). Drilling involves "minimal land disturbance and does not require tailings ponds." Lightbrown reported that "Greenhouse gas emissions for SAGD projects are around 0.06 tonnes of CO2 equivalent per barrel of bitumen produced.": 1  Orphan wells In the fall of 2018, a provincial government pilot project found that the "vast majority of extractive industrial [sites]" that no longer have any productive value, and that are therefore ready for reclamation, failed to meet the standards required by law for adequate reclamation. The number of orphan wells, according to the oil industry-led Orphan Well Association's (OWA) inventory, increased from 1,200 to over 3,700 between 2014 and 2018. By February 2018, there were 1,800 orphan wells that had been licensed by the Alberta Energy Regulator (AER), with combined liabilities of over $110 million. Pipelines Alberta's Western Canadian Select, one of North America's largest heavy crude oil streams, is landlocked and has faced significant obstacles to reaching tidewater. Approvals for pipeline expansions, including the Trans Mountain Pipeline expansion, the Enbridge Northern Gateway Pipelines, the Energy East pipeline, and the Keystone XL pipeline, have been prevented or delayed. Crude oil has been shipped by rail as an alternative. Oil spills and tailings dam failures On April 28, 2011, 4.5 million litres of oil (28,000 barrels) leaked from the Rainbow Pipeline, owned by the American company Plains Midstream Canada, near Little Buffalo, a Lubicon Cree First Nation community northeast of Peace River, Alberta. Alberta's Energy Resources Conservation Board (ERCB) published its report on the leak on February 26, 2013. Greenpeace sent an advance copy of its April 24, 2013 report to the Alberta government. The report, "Rainbow Pipeline Spill", was based on "confidential internal government documents". On April 24, 2013, the Environment Minister laid charges against Plains Midstream in connection with this spill. The Energy Resources Conservation Board was dissolved in 2013. On January 17, 2001, a rupture occurred on the Enbridge Pipeline System near Hardisty, Alberta, and about 3,800 cubic metres of crude oil spilled.
By May 1, 2001, 3,760 cubic metres of crude oil had been recovered. In June 2012, almost half a million litres of sour crude oil leaked into a creek that flows into the Red Deer River near Sundre, approximately 100 kilometres north of Calgary. On June 19, 2012, an Enbridge pipeline spilled approximately 1,400 barrels of crude oil near Elk Point, Alberta. On April 2, 2014, a pipeline spilled 70,000 litres of oil northwest of Slave Lake, Alberta. In November 2014, a pipeline leaked 60,000 litres of crude oil into muskeg at Red Earth Creek in northern Alberta. On March 1, 2015, in northern Alberta, a pipeline leak spilled about 17,000 barrels of condensate. On May 5, 2015, an undetermined volume of sweet natural gas and associated hydrocarbon liquid leaked onto agricultural land from a gas transmission pipeline 36 kilometres southeast of Drumheller, Alberta. On July 15, 2015, about 31,500 barrels of oil emulsion leaked from a pipeline at the Long Lake oil sands facility in northern Alberta. On August 14, 2015, 100,000 litres of an oil, water, and gas emulsion leaked on the Hay Lake First Nation, about 100 kilometres northwest of High Level, Alberta. On February 17, 2017, a third party struck one of Enbridge's pipelines during construction operations in Strathcona County, Alberta, releasing about 200,000 litres of oil condensate. A new boat launch was created at Seba Beach, in Parkland County. On August 3, 2005, 43 cars of a Canadian National (CN) freight train derailed near Wabamun Lake, spilling up to 1.3 million litres (286,000 Imp gallons or 343,000 US gallons) of heavy bunker C fuel oil. High winds spread about 734,000 litres (161,500 Imp gal/194,000 US gal) of the oil across the lake. On October 31, 2013, the tailings dam collapsed at the Obed Mountain coal mine, near the town of Hinton, Alberta, spilling about one billion litres (260 million US gal) of wastewater into the Athabasca River. It may have been "the largest coal slurry spill in Canadian history". Eight people were killed in an explosion on a gas pipeline owned by Piggot Pipelines on January 17, 1962, about 50 kilometres northwest of Edson, Alberta. The electricity sector As of 2008, Alberta's electricity sector was the most carbon-intensive of all Canadian provinces and territories, with total emissions of 55.9 million tonnes of CO2 equivalent in 2008, accounting for 47% of all Canadian emissions in the electricity and heat generation sector. According to the National Observer, 17 per cent of Alberta's total emissions in 2016 were from the electrical sector. The oil and gas sector accounted for almost 48 per cent of the province's total carbon pollution in that year, according to federal data. Water resource management In 2003, the province of Alberta set out a strategic 10-year action plan, "Water for Life: Alberta's Strategy for Sustainability" (WFL), under then-Minister of the Environment Lorne Taylor, which guides water resource management. According to the Alberta Energy Regulator (AER), about 10 billion cubic metres (or 7 per cent) of the "140 billion cubic metres of nonsaline water available in Alberta" are "allocated for use through Water Act licenses for municipal, agricultural, forestry, industrial and other commercial use." In 2017, almost 10 per cent of the licensed-for-use water went to the energy industry, with over 70 per cent of that used for oil sands mining.
The rest was used in "enhanced oil recovery, hydraulic fracturing, in situ recovery operations." Melting glaciers As glaciers melt and lose mass, there is less fresh water for irrigation and domestic use. Glaciers are an important part of national and provincial parks in Alberta, such as Jasper Park, and their loss affects mountain recreation and the animals and plants that depend on glacier melt. The Rocky Mountains and other mid-latitude ranges are showing some of the largest glacial losses. Glaciers in the Canadian Rockies, such as those of the 325 km2 (125 sq mi) Columbia Icefield in Jasper National Park, which includes the Athabasca Glacier, one of the Icefield's outlet glaciers, are often larger and more widespread than those in the United States Rocky Mountains. Mount Athabasca is easily accessible. Since the late 19th century, the Athabasca Glacier has retreated 1,500 m (4,900 ft), with an increase in its rate of retreat since 1980; from 1950 to 1980 the rate of retreat had slowed. The 12 km2 (4.6 sq mi) Peyto Glacier retreated rapidly during the first half of the 20th century, stopped retreating for a period, and resumed retreating in 1976. Floods and droughts Alberta's Environment ministry reported in October 2009 that a trend of high summer temperatures and low summer precipitation in the province had contributed to drought conditions, which were harming Alberta's agriculture sector, mainly in cattle-ranching areas. During a drought there is a shortage of cattle feed (hay and grain). With crops in short supply, ranchers are forced to purchase feed at increased prices while they can; those who cannot afford to pay top prices for feed are forced to sell their herds. When Alberta experienced a severe drought in 2002, the province of Ontario, which had had a good season with high hay production, was able to send a vast amount of hay to Alberta ranchers hit by the drought. Droughts like the one in 2002 create an income deficit for many ranchers, as they are forced to buy high and sell low. The costliest disaster in Canadian history, according to the Insurance Bureau of Canada, was the 2013 Alberta floods, which, at over $1.7 billion, cost more than the North American Ice Storm of 1998 at $1.6 billion. According to the May 2019 Canada's Changing Climate Report, scientists concluded that they had "low confidence" that "anthropogenic climate change" had caused the "extreme precipitation" that resulted in the 2013 southern Alberta flood, compared to "medium confidence" that "anthropogenic climate change" had contributed to the 2016 Fort McMurray wildfire.: 117  Athabasca River According to an April 23, 2019 article in the PLOS One journal, Wood Buffalo National Park (WBNP), which is designated as a UNESCO World Heritage Site, is being investigated as a potential World Heritage Site in Danger because of a number of environmental stressors, including the presence of mercury (Hg). The report built on previous research that concluded that "oil sands industrial operations release mercury into the local environment" and that spring snowmelt could potentially release Hg and other chemicals into the aquatic environment of the north-flowing Athabasca River and the "Peace-Athabasca Delta and Lake Athabasca in northern Alberta".
Wildfires Canada's wildfire season, including in Alberta, starts earlier, the frequency of wildfires has increased, and by 2016 the annual area burned was twice as much as in 1970. El Niño and global warming contributed to the 2016 Fort McMurray wildfire, which led to the evacuation of Fort McMurray, at the centre of the oil sands industry. By the afternoon of June 3, 2019, there had been 558 wildfires in Alberta's Forest Protection Area, with 656,842.84 hectares (1,623,094 acres) burned, up from 595,726.23 hectares (1,472,072 acres) by that morning, compared to the five-year average of 590 wildfires with 136,335.82 hectares (336,893 acres) burned. Wilderness and parks The NDP government created the Bighorn Wildland Provincial Park and the new Castle park area, which, "when combined with existing protected areas, create the world's largest boreal forest protected area, including key caribou habitat." Syncrude, the Tallcree First Nation, the Nature Conservancy of Canada (NCC), and the governments of Alberta and Canada partnered to create new wildland provincial parks (WPPs). The northern WPPs—Kazan, Richardson and Birch River—add about 1.36 million hectares to Alberta's protected area network and connect Wood Buffalo National Park with wildland provincial parks. The boreal woodland caribou is a threatened species, and one of the threats to its survival is habitat fragmentation of the boreal forest. Invasive species Mountain Pine Beetle By 2007, Alberta Sustainable Resource Development (ASRD) reiterated that the mountain pine beetle (MPB) is the "most damaging insect pest of [mature] pine trees in western North America." From about 2006 to 2017, Alberta spent $484 million, including financial support from both Saskatchewan and the federal government, to fight the invasive mountain pine beetle and to "prevent damage in specific locations and to protect valuable resources, such as watersheds." An extremely frigid cold spell in February 2019 was expected to kill off 90 per cent of MPB larvae in Alberta, particularly in and around Jasper National Park, where the beetle has had the most damaging effect on the forest. In the 1940s there were outbreaks in Banff National Park and Kootenay National Park that also spread to the Kananaskis area. In the 1920s and again in the late 1950s there were outbreaks in Waterton Lakes National Park. In the 1970s and 1980s an outbreak spread from Montana into Alberta, into the Castle River valley and Waterton Lakes National Park. There was a "massive unprecedented outbreak" in the early 1990s in British Columbia and in west-central Alberta.: 14  Species at risk The list of species at risk in Alberta includes the boreal woodland caribou and the bull trout—Alberta's official provincial fish—both of which are on the IUCN Red List of Threatened Species. According to a March 25, 2019 article by the Alberta Wilderness Association, the bull trout, which is popular in sport fishing, is listed as threatened and Alberta's Athabasca rainbow trout as endangered on a list of aquatic species proposed by the federal government under the Species at Risk Act (SARA). According to the Canada Gazette, the bull trout (Salvelinus confluentus), which is native to western Canada, is an "indicator species of general ecosystem health". In Alberta in particular, the bull trout's range has become restricted, resulting in the isolation and fragmentation of populations.
According to the March 2019 federal report, "[t]he most serious threats to Bull Trout are from human disturbance, including habitat loss through degradation and fragmentation; commercial forestry; hydroelectric, oil, gas and mining development; agriculture; urbanization; road development; and climate change.": 20  Alberta's public policy Climate Change Action Plan Alberta released a "Climate Change Action Plan" in 2008. Energy efficiency Prior to 2017, Alberta was the "only jurisdiction in North America without an energy efficiency organization". In 2017, the NDP government created Energy Efficiency Alberta (EEA). It used revenues from Alberta's carbon tax to help municipalities, businesses and homeowners improve energy efficiency by funding programs and rebates. According to the NDP, in nine months in 2017, EEA saved Albertans $510 million and avoided adding "three million tonnes of GHG emissions". By May 2019, EEA—with an annual budget of $132 million—offered 20 different programs. By May 2019, Premier Jason Kenney and his Environment and Parks Minister Jason Nixon were examining which of these programs would remain under the new UCP government. EEA programs included "instant in-store savings, residential and community solar, a business energy savings program or a host of education and training grants". When it opened the Energy Efficiency Alberta office in 2017, the NDP government used "money from [Alberta's] carbon tax to fund rebates and programs aimed at increasing energy efficiency and decreasing greenhouse gas emissions." Carbon pricing In 2007, the provincial government's Specified Gas Emitters Regulation (SGER), which "priced carbon from large emitters and use[d] the resulting revenue for investments in low-carbon technology", made Alberta the "first jurisdiction in North America to have a price on carbon". The SGER was renewed to 2017 with increased stringency. It requires "large final emitters", defined as facilities emitting more than 100,000 tCO2e per year, to comply with an emission intensity reduction which increases over time and caps at 12% in 2015, 15% in 2016 and 20% in 2017. Facilities have several options for compliance: they may make actual reductions, pay into the Climate Change and Emission Management Fund (CCEMF), purchase credits from other large final emitters, or purchase offset credits from non-large final emitters. Criticisms of the intensity-based approach to pricing carbon include the fact that there is no hard cap on emissions, so actual emissions may continue to rise even though carbon has a price. Benefits of an intensity-based system include the fact that during economic recessions the carbon intensity reduction remains equally stringent and challenging, while hard caps tend to become easily met and irrelevant, and so do not work to reduce emissions. Alberta has also been criticized on the grounds that its goals are too weak and that the measures enacted are not likely to achieve them. In 2015, the newly elected government committed to revising the climate change strategy. In November 2015, Premier Rachel Notley and then-Alberta Environment Minister Shannon Phillips unveiled plans to increase the province's carbon tax to $20 per tonne in 2017, increasing further to $30 per tonne by 2018. By 2017 there was a Pan-Canadian Framework for Clean Growth and Climate Change in place, which heavily leaned on carbon pricing.
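The intensity-based compliance mechanism described above lends itself to a simple worked sketch. In the code below, only the 12%/15%/20% reduction schedule and the 100,000 tCO2e threshold come from the text; the baseline intensity, production figures and fund price are hypothetical placeholders, not values for any actual facility.

```python
# Minimal sketch of an intensity-based compliance calculation, loosely following the
# SGER mechanism described above. All facility numbers are hypothetical; only the
# 12%/15%/20% reduction schedule and the 100,000 tCO2e threshold come from the text.

REDUCTION_SCHEDULE = {2015: 0.12, 2016: 0.15, 2017: 0.20}
FUND_PRICE = 20.0  # $ per tonne CO2e paid into the fund (hypothetical placeholder)

def compliance_obligation(year: int,
                          baseline_intensity: float,  # tCO2e per unit of product (historical baseline)
                          production: float,          # units of product in the compliance year
                          actual_emissions: float):   # tCO2e actually emitted in the year
    """Return (allowed_emissions, shortfall, cost_if_shortfall_paid_into_fund)."""
    allowed_intensity = baseline_intensity * (1 - REDUCTION_SCHEDULE[year])
    allowed_emissions = allowed_intensity * production
    shortfall = max(0.0, actual_emissions - allowed_emissions)
    return allowed_emissions, shortfall, shortfall * FUND_PRICE

if __name__ == "__main__":
    # Hypothetical facility: baseline of 0.50 tCO2e per unit, 400,000 units produced,
    # 190,000 tCO2e emitted in 2017 (above the 100,000 tCO2e "large final emitter" threshold).
    allowed, short, cost = compliance_obligation(2017, 0.50, 400_000, 190_000)
    print(f"Allowed: {allowed:,.0f} tCO2e; shortfall: {short:,.0f} tCO2e; "
          f"fund payment if no credits are bought: ${cost:,.0f}")
```

Note that the shortfall could equally be covered by actual reductions or by purchasing emission or offset credits, as listed above; paying into the fund is only one of the compliance options.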
By February 2017, Alberta, Manitoba, Nova Scotia, British Columbia, Ontario and Quebec had announced their own carbon-pricing policies. By May 2019, following changes in government, Alberta, Manitoba, and Ontario had abandoned their carbon pricing policies. In December 2018, the federal government passed the Greenhouse Gas Pollution Pricing Act (GHGPPA)—a revenue-neutral tax which applied only to provinces and territories whose carbon pricing system did not meet federal requirements. By 2018, Alberta, Quebec (2007), British Columbia (2008), Ontario, Manitoba and Nova Scotia had carbon-pricing policies in place. Eric Denhoff, who was Alberta's deputy minister of environment and climate change under Notley's NDP government, met with members of a major New York City-based "investment house that is heavily involved in financing the Alberta oil patch" in Calgary in 2018. Against the backdrop of the "growing ESG (environment, social, and governance) responsibility industry", the investment house conveyed its shareholders' message telling the company to "stop investing in the Alberta oil sands." Premier Kenney joined like-minded premiers, including Ontario Premier Doug Ford and Manitoba Premier Brian Pallister (PC), as well as Saskatchewan, in a lawsuit against the federal Liberal government on April 15, 2019. The court ruled 3-2 in favour of the constitutionality of the carbon tax. The four provinces are appealing the decision. Renewable energy In 2015, Notley's provincial NDP government committed to purchasing "one-third of domestic power from renewables." Wind power Alberta purchased "thousands of megawatt hours of wind power at the lowest recorded price in Canadian history, much of it from Indigenous partnerships." Indigenous communities were also undertaking a "special solar power program for their communities". Solar energy In 2017, the NDP government introduced the Residential and Commercial Solar Program, which encouraged the use of solar energy through a solar rebate program. The program was intended to "invest $36 million to generate 48 megawatts of electricity by 2020." By May 2019, over 1,500 residential and commercial solar projects had been completed and nine hundred were still being developed; there were 2,200 residential projects. By May 2019, $134 million had already been invested in solar projects in Alberta. The solar energy industry has added 500 jobs, with an estimated workforce of 2,000 in 2019. With Jason Kenney as Premier, the future of Energy Efficiency Alberta and the solar rebate program is uncertain. Phasing out coal Coal power generation is the most polluting source of electricity. In 2012, then-Canadian Prime Minister Stephen Harper introduced legislation that would phase out coal-fired generating units at the end of their useful life, which is generally 50 years after the unit was first commissioned. For example, units commissioned before 1975 would be decommissioned no later than 2019. Those commissioned from about 1975 to before 1986 would be decommissioned by the end of 2029. In 2012, Alberta had 18 coal-fired generation units. Environment Canada reported in 2012, in a backgrounder to the new legislation introduced by then-Environment Minister Peter Kent, that coal-fired generating units were "responsible for 77% of greenhouse gas (GHG) emissions from the electricity sector in Canada". Alberta's new climate policies introduced in November 2015 also include phasing out coal-fired power plants by 2030, and cutting emissions of methane by 45% by 2025.
At the time, Notley "lobbied Trudeau to allow coal-to-gas conversions as a short-term solution, extending the life of the infrastructure with fewer emissions. The carbon tax introduced by Notley's government changed the daily electricity market. All three coal-burning owners signed deals with the provincial government to "cover losses from the faster phase-out".By April 2019, Alberta's coal industry provided 1,200 jobs. Coal phase out programs include "carbon capture and storage technology, retrofitted on existing coal plants." Municipalities Edmonton Edmonton passed legislation in January 2019, to launch a pilot project of the Clean Energy Improvement Program in October 2019. Edmonton is "one of the worst per-capita carbon emitters in Canada". With the change in government, Mayor Don Iveson said they were investigating ways to find partners, and to band together with other municipalities or to work with the federal government to achieve Edmonton's climate goals. By April 2019, Energy Efficiency Alberta had invested $40 million in Edmonton with the majority of the funds going to the "residential solar program and a home energy program." As part of their community energy transition strategy, the committee on ... unanimously decided to move the Energy Efficiency Alberta program forward while developing a contingency plan with the "city becoming the administrator of the program if the provincial office is slashed by the new [Kenney] government." Calgary Calgary began developing its Light Rail Transit (LRT) systems in 1979.: 3  By November 2016, Calgary's LRT was "one of the largest and well used public transit systems in North America".: 3  By 2016 Calgary had added the Bus Rapid Transit (BRT) lines and had begun working on the Green Line.: 3  The Green Line was to be partially funded with " $1.53 billion over eight years" from the carbon levy. See also Environmental issues in Canada Environmental impact of the Athabasca oil sands Hard Choices: Climate Change in Canada (2004 book) Regional effects of global warming 2012 North American drought Summer 2012 North American heat wave List of articles about Canadian tar sands 2011 Little Buffalo oil spill Enbridge Pipeline System Mountain pine beetle Obed Mountain coal mine spill Oil sands tailings ponds Orphan wells (Alberta) Seba Beach Wabamun Lake Notes == References ==
climate change in south sudan
South Sudan is one of the countries most vulnerable to climate change in the world. The country is facing the impacts of climate change, including droughts and flooding, which have indirect and interlinked implications for peace and security. Mean annual temperatures across South Sudan have increased by more than 0.4 °C every decade in the past 30 years and are projected to increase by between 1 °C and 1.5 °C by 2060, creating a warmer and drier climate. In the northeast, rainfall has decreased by 15–20%, while other regions have experienced more frequent and severe floods. Greenhouse Gas Emissions South Sudan's greenhouse gas (GHG) emissions in 2020 saw a significant 13.97% increase compared to 2019, reaching a total of 56,051.36. The previous year, 2019, saw a notable decline of 12.8%, with a recorded value of 49,180.53. In 2018, South Sudan observed a 9.86% increase in GHG emissions, totaling 56,397.92. Agriculture and livestock Climate change is having a significant impact on agriculture and livestock in South Sudan. Droughts have killed livestock and disrupted crop cycles, leading to food insecurity. To address this issue, the United Nations High Commissioner for Refugees (UNHCR) is distributing early-maturing and drought-resistant seed varieties, as well as supporting the introduction of irrigation systems. The losses of livestock attributed to climate change, coupled with existing rivalries, heighten the probability of cattle theft, which can result in retaliatory actions, communal conflicts, population displacement, exacerbation of inter-communal animosity, and the emergence of armed factions. See also Climate change by country Health in South Sudan Agriculture in South Sudan Geography of South Sudan Climate change adaptation Climate change in Africa == References ==
nabers
NABERS, the National Australian Built Environment Rating System, is an initiative by the government of Australia to measure and compare the environmental performance of Australian buildings and tenancies. There are NABERS rating tools for commercial office buildings to measure greenhouse gas emissions, energy efficiency, water efficiency, waste efficiency and indoor environment quality. There are also energy/greenhouse and water rating tools for hotels, shopping centres and data centres. Accredited Assessors A key feature of the initiative is the use of independent 'Accredited Assessors' to conduct ratings. Assessors are required to attend training, pass an exam and complete two supervised assessments before they receive full accreditation. While there are no formal pre-requisites to attend the training, most Assessors have experience in the building services, property or energy management industries. Building owners and tenants can use the online 'self-assessment' tool, however they cannot promote these results. Only ratings that have been certified by the NABERS National Administrator can be promoted using the NABERS trademark. Calculating a rating The vision statement of NABERS is 'To support a more sustainable built environment through a relevant, reliable and practical measure of building performance'. Offices The NABERS tools attempt to provide an accurate measurement of how efficiently building owners and tenants are providing their services without penalising them for factors that are beyond their control. For example, if the primary service that an office building owner provides is safe, lit and comfortable office space the NABERS Energy for offices tool would consider how much space is being used, how much energy is being used to supply services to the space, and then statistically adjusts for factors like the climate - which will influence how much energy is used for heating and cooling. Energy To obtain a NABERS Energy for offices rating, consumption data for the building (such as electricity and gas bills) is collected by Accredited Assessors along with data about a number of other aspects of the building such as its size, hours of occupation, climate location and density of occupation. Data requirements are set out in a document called 'The NABERS Energy and Water for Offices Rules for Collecting and Using Data v.3.0'. This data is then input into the NABERS rating calculator which statistically adjusts for these factors so that the building can have its consumption fairly benchmarked against its peers. The result of this calculation is a star rating on a six-star scale, where zero is very poor performance and six is market-leading. Water The procedure for an office water rating is similar to conducting an office energy rating. The main differences are that it is water rather than energy bills that are used, and some data such as the hours of operation are not required. Unlike office energy ratings, which can either be for the base building, tenancies or whole building, office water ratings are only available for whole buildings. Data centres Energy Like NABERS for offices, NABERS Energy for data centres has three distinct rating types to reflect the different interests and responsibilities from data centres owners, operators and tenants – Infrastructure (co-location owner), Whole Facility (data centre owner) and IT Equipment (data centre tenant) ratings. 
The tool is designed to rate the majority of data centres in Australia, provide a direct comparison with other rateable data centres, and allow an individual data centre to measure and compare performance over time. The NABERS Data Centre IT Equipment Rating is designed for organisations that control and manage their own IT equipment (servers, storage and networking devices). The IT Equipment rating measures features that are closely related to the primary functions of a data centre (processing, storage and networking) and that all data centres provide, regardless of how they provide them. NABERS uses two IT equipment metrics:
Processing capacity: number of server cores × clock speed in gigahertz (GHz)
Storage capacity: total unformatted storage capacity in terabytes.
The NABERS performance benchmark model predicts the industry median greenhouse gas emissions for a given amount of data centre processing and storage capacity. This means that if a data centre consumes more energy than the benchmark model predicts, the site is less energy efficient than the industry median (set at 3 stars), while if it consumes less energy it is more efficient than the median. To obtain a NABERS Energy for data centres IT Equipment rating, energy consumption data for the IT equipment over a 28- to 40-day period is collected by Accredited Assessors, along with data about the total unformatted storage capacity and total processing capacity as above. The Infrastructure Rating measures the energy efficiency of delivering support services to the IT equipment, using the widely accepted industry Power Usage Effectiveness (PUE) ratio, which is converted into kilograms of emissions with some modification for climate and shared cooling services. To obtain an Infrastructure rating, 12 months of energy consumption data for IT equipment and infrastructure services is collected by Accredited Assessors, along with the climate location of the data centre. The Whole Facility rating measures the energy efficiency of the whole data centre by assessing the processing and storage capacity and the industry median energy efficiency for infrastructure services compared with the overall energy consumption of the data centre. It is a combination of both the IT Equipment and Infrastructure rating benchmarks. To obtain a NABERS Energy for data centres Whole Facility rating, 12 months of energy consumption data for the data centre is collected by Accredited Assessors, along with the processing and storage capacity and climate location of the data centre.
Comparison to other building rating systems
There are a number of building environmental certification systems across the world, such as LEED, Green Star, the BRE Environmental Assessment Method (BREEAM) and Display Energy Certificates (DECs). The key features of NABERS as a system are that it is based on performance rather than design, assessments are carried out by third-party 'Accredited Assessors', it is based on third-party verifiable data (such as utility bills), ratings undergo government quality assurance checks, and it distinguishes between the environmental impact of a building's shared services and its tenancies. While other rating systems across the world share some of these features, none share all of them.
Program success
NABERS Energy for offices is considered by many to have been successful, as over 82% of the Australian national office market has now been rated with either a base building or whole building rating.
The success of the tool is largely attributed to its ability to differentiate between base building and tenant energy end uses, and to strong government support. Far fewer tenancy energy ratings have been conducted, however, and there has also been far less uptake of the other tools.
Use in Australian energy programs & policy
While NABERS Energy is a voluntary rating scheme for buildings, its success has been at least partly driven by its extensive use in energy initiatives by government and industry throughout Australia. Some programs include:
The NSW Government Resource Efficiency Policy (GREP): the most recent iteration of a series of NSW government procurement policies that set out NABERS targets in government leasing criteria. Under the GREP, government tenants require a building to have a 4.5-star NABERS rating. It also sets a 4.5-star energy rating as a minimum criterion for government data centres. Similar policies are in place in other states and territories, as well as at the Australian government level (the 'Energy Efficiency in Government Operations' policy).
Emissions Reduction Fund: the centrepiece of Australia's carbon abatement strategy, which began operating in early 2015. NABERS Energy is used in the commercial buildings methodology to calculate and verify that the carbon abatement achieved by project proponents is real.
Energy Savings Scheme (ESS): a New South Wales state energy program where commercial buildings can obtain Energy Saving Certificates (ESCs) for energy efficiency projects, which can be sold to the market. NABERS Energy ratings are used to demonstrate the energy savings achieved by the project.
Green Building Fund: a former Australian Government program, where commercial buildings could obtain up to 50% of capital funding for energy efficiency projects. The program used NABERS Energy ratings to ensure the savings effectively occurred, as well as to calculate the total amount of energy and emissions saved.
City Switch: an initiative that supports commercial office tenants to improve energy efficiency, run by a coalition of local councils throughout Australia. City Switch uses NABERS Energy as its key indicator of energy performance and provides assistance to its members to achieve a rating of 4 stars or higher.
Use in Australian legislation
The Building Energy Efficiency Disclosure Act 2010: Australian government legislation that requires owners of office buildings to disclose the energy efficiency of the building to prospective tenants or buyers. Known operationally as the Commercial Building Disclosure (CBD) program, a certified NABERS Energy rating is the main energy efficiency indicator required of building owners.
Use internationally
NABERSNZ: The Energy Efficiency and Conservation Authority (EECA) in New Zealand licensed NABERS in 2013 to create NABERSNZ.
The Global Real Estate Sustainability Benchmark (GRESB): a global standard for portfolio-level sustainability assessment in real estate. The GRESB benchmark addresses issues including corporate sustainability strategy, policies and objectives, environmental performance monitoring, and the use of high-quality voluntary rating tools such as NABERS.
The Climate Bonds Initiative (CBI): the CBI creates Climate Bonds Standards, which provide a Fair Trade-like labelling system for bonds, designed to make it easier for investors to work out what sorts of investments genuinely contribute to addressing climate change.
Data from NABERS Energy rating reports can be used in Climate Bond reporting under the Climate Bonds Standard for Low Carbon Commercial Buildings.
NABERS IE in India: One NABERS Indoor Environment rating has been conducted in India, at the Paraharpur Business Centre. The rating was certified in May 2015.
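The rating mechanics described earlier in this article, normalising measured consumption for factors outside an owner's control and benchmarking it against peers on a zero-to-six star scale, can be illustrated with a short sketch. Every coefficient, adjustment factor and star-band boundary below is a hypothetical placeholder, not the published NABERS benchmarks, which rely on calibrated statistical models, jurisdiction-specific emission factors and the official Rules for Collecting and Using Data.

```python
# Illustrative sketch only: all coefficients, adjustments and star bands are invented.

def office_energy_stars(annual_kwh, net_lettable_area_m2, weekly_hours, climate_factor):
    """Normalise an office building's energy use and map it onto a 0-6 star scale."""
    intensity = annual_kwh / net_lettable_area_m2      # kWh per m2 per year
    intensity *= 50.0 / weekly_hours                   # scale to a 50-hour reference week
    intensity *= climate_factor                        # adjust for climate location
    # Hypothetical star bands: lower adjusted intensity earns more stars.
    for limit, stars in [(60, 6), (80, 5), (100, 4), (130, 3), (170, 2), (220, 1)]:
        if intensity <= limit:
            return stars
    return 0

def data_centre_stars(it_energy_kwh, facility_energy_kwh, server_cores, clock_ghz,
                      storage_tb, emission_factor_kg_per_kwh=0.8):
    """Compare a data centre's emissions against a hypothetical industry-median benchmark
    predicted from its processing capacity (cores x GHz) and raw storage (TB)."""
    processing_capacity = server_cores * clock_ghz
    actual_t_co2e = facility_energy_kwh * emission_factor_kg_per_kwh / 1000.0
    predicted_median_t_co2e = 0.05 * processing_capacity + 0.2 * storage_tb  # invented model
    pue = facility_energy_kwh / it_energy_kwh          # Power Usage Effectiveness
    ratio = actual_t_co2e / predicted_median_t_co2e    # 1.0 = industry median (3 stars)
    for limit, stars in [(0.5, 6), (0.75, 5), (0.9, 4), (1.1, 3), (1.5, 2)]:
        if ratio <= limit:
            return stars, pue
    return 1, pue

# A 10,000 m2 office using 1.1 GWh a year, occupied 55 hours a week, mild climate:
print(office_energy_stars(1_100_000, 10_000, 55, 0.95))            # -> 4

# A data centre with 4,000 cores at 2.5 GHz, 500 TB of storage, 0.36 GWh of IT energy
# and 0.54 GWh of total facility energy over a year:
print(data_centre_stars(360_000, 540_000, 4_000, 2.5, 500))        # -> (5, 1.5)
```

In the actual scheme the adjustments come from regression benchmarks against the rated building stock, office ratings are reported in half-star increments, and the Infrastructure rating works from PUE converted into emissions; the sketch only mirrors the overall shape of the calculation.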
rio tinto (corporation)
Rio Tinto Group is a British-Australian multinational company that is the world's second-largest metals and mining corporation (behind BHP). It was founded in 1873 when a group of investors purchased a mine complex on the Rio Tinto, in Huelva, Spain, from the Spanish government. It has grown through a long series of mergers and acquisitions. Although primarily focused on extraction of minerals, it also has significant operations in refining, particularly the refining of bauxite and iron ore. It has joint head offices in London, England and Melbourne, Australia.
Rio Tinto is a dual-listed company traded on both the London Stock Exchange, where it is a component of the FTSE 100 Index, and the Australian Securities Exchange, where it is a component of the S&P/ASX 200 index. American depositary shares of Rio Tinto's British branch are also traded on the New York Stock Exchange, giving it listings on three major stock exchanges. In the 2020 Forbes Global 2000, it was ranked the world's 114th-largest public company.
In May 2020, to expand the Brockman 4 mine, Rio Tinto demolished a sacred cave in Juukan Gorge, Western Australia, which had evidence of 46,000 years of continual human occupation and was considered the only inland site in Australia to show signs of continual human occupation through the last Ice Age. The company later apologised for the demolition, and CEO Jean-Sébastien Jacques subsequently stepped down.
Rio Tinto has been widely criticised by environmental groups as well as the government of Norway for the environmental impacts of its mining activities. Claims of severe environmental damage related to its engagement in the Grasberg mine in Indonesia led the Government Pension Fund of Norway to exclude it from its investment portfolio. Academic observers have also expressed concern regarding Rio Tinto's operations in Papua New Guinea, which they allege were one catalyst of the Bougainville separatist crisis. There have also been corruption concerns: in July 2017 the UK's Serious Fraud Office (SFO) announced an investigation of the company's business practices in Guinea.
Formation
Since antiquity, a site along the Rio Tinto in Huelva, Spain, has been mined for copper, silver, gold and other minerals. Around 3000 BC, Iberians and Tartessians began mining the site, followed by the Phoenicians, Greeks, Romans, Visigoths and Moors. After a period of abandonment, the mines were rediscovered in 1556 and the Spanish government began operating them once again in 1724. However, Spain's mining operations there were inefficient, and the government itself was otherwise distracted by political and financial crises, leading the government to sell the mines in 1873 at a price later determined to be well below actual value. The purchasers of the mine were led by Hugh Matheson's Matheson & Company, which ultimately formed a syndicate consisting of Deutsche Bank (56% ownership), Matheson (24%) and the civil engineering firm Clark, Punchard and Company (20%). At an auction held by the Spanish government to sell the mine on 14 February 1873, the group won with a bid of £3.68 million (ESP 92.8 million). The bid also specified that Spain would permanently relinquish any right to claim royalties on the mine's production. Following purchase of the mine, the syndicate launched the Rio Tinto Company, registering it on 29 March 1873.
At the end of the 1880s, control of the firm passed to the Rothschild family, who increased the scale of its mining operations.
Operating history
Following their purchase of the Rio Tinto Mine, the new ownership constructed a number of new processing facilities, innovated new mining techniques, and expanded mining activities. From 1877 to 1891, the Rio Tinto Mine was the world's leading producer of copper. From 1870 through 1925, the company was inwardly focused on fully exploiting the Rio Tinto Mine, with little attention paid to expansion or exploration activities outside of Spain. The company enjoyed strong financial success until 1914, colluding with other pyrite producers to control market prices. However, World War I and its aftermath effectively eliminated the United States as a viable market for European pyrites, leading to a decline in the firm's prominence.
The company's failure to diversify during this period led to its slow decline among the ranks of international mining firms. However, this changed in 1925, when Sir Auckland Geddes succeeded Lord Alfred Milner as chairman. Geddes and the new management team he installed focused on diversification of the company's investment strategy and the introduction of organisational and marketing reforms. Geddes led the company into a series of joint ventures with customers in the development of new technologies, as well as exploration and development of new mines outside of Spain. Between 1925 and 1931, Geddes recruited two directors, JN Buchanan (finance director) and RM Preston (commercial director), as well as other executives involved with technical and other matters.
Perhaps most significant was the company's investment in copper mines in Northern Rhodesia, later Zambia, which it eventually consolidated into the Rhokana Corporation. These and later efforts at diversification eventually allowed the company to divest from the Rio Tinto mine in Spain. By the 1950s, Franco's nationalistic government had made it increasingly difficult to exploit Spanish resources for the profit of foreigners. Rio Tinto Company, supported by its international investments, was able to divest two-thirds of its Spanish operations in 1954 and the remainder over the following years.
Major mergers and acquisitions
Like many major mining companies, Rio Tinto has historically grown through a series of mergers and acquisitions.
Early acquisitions
The company's first major acquisition occurred in 1929, when the company issued stock for the purpose of raising 2.5 million pounds to invest in Northern Rhodesian copper mining companies, a sum that was fully invested by the end of 1930. The Rio Tinto company consolidated its holdings of these various firms under the Rhokana Corporation by forcing the various companies to merge. Rio Tinto's investment in Rhodesian copper mines did much to support the company through troubled times at its Spanish Rio Tinto operations spanning the Spanish Civil War, World War II and Franco's nationalistic policies. In the 1950s, the political situation made it increasingly difficult for mostly British and French owners to extract profits from Spanish operations, and the company decided to dispose of the mines from which it took its name. Thus, in 1954, Rio Tinto Company sold two-thirds of its stake in the Rio Tinto mines, disposing of the rest over the following years. The sale of the mines financed extensive exploration activities over the following decade.
Merger with Consolidated Zinc
The company's exploration activities presented it with an abundance of opportunities, but it lacked sufficient capital and operating revenue to exploit those opportunities. This situation precipitated the next, and perhaps most significant, merger in the company's history. In 1962, Rio Tinto Company merged with the Australian firm Consolidated Zinc to form the Rio Tinto – Zinc Corporation (RTZ) and its main subsidiary, Conzinc Riotinto of Australia (CRA). The merger gave Rio Tinto the ability to exploit its new-found opportunities, and gave Consolidated Zinc a much larger asset base. RTZ and CRA were separately managed and operated, with CRA focusing on opportunities within Australasia and RTZ taking the rest of the world. However, the companies continued to trade separately, and RTZ's ownership of CRA dipped below 50% by 1986. The two companies' strategic needs eventually led to conflicts of interest regarding new mining opportunities, and shareholders of both companies determined a merger was in their mutual best interest. In 1995, the companies merged into a dual-listed company, in which management was consolidated into a single entity and shareholder interests were aligned and equivalent, although maintained as shares in separately named entities. The merger also precipitated a name change; after two years as RTZ-CRA, RTZ became Rio Tinto plc and CRA became Rio Tinto Limited, referred to collectively as Rio Tinto.
Recent mergers, acquisitions and events
Major acquisitions following the Consolidated Zinc merger included U.S. Borax, a major producer of borax, bought in 1968; Kennecott Utah Copper and BP's coal assets, both bought from BP in 1989; and a 70.7% interest in the New South Wales operations of Coal & Allied, also in 1989. In 1993, the company acquired Nerco and the United States coal mining businesses of Cordero Mining Company. In 2000, Rio Tinto acquired North Limited, an Australian company with iron ore and uranium mines, for $2.8 billion. The takeover was partially motivated as a response to North Limited's 1999 bid to have Rio Tinto's Pilbara railway network declared open access. The Australian Competition & Consumer Commission regulatory body approved the acquisition in August 2000, and the purchase was completed in October of the same year. That year, Rio Tinto's purchases of North Limited and Ashton Mining, together totalling US$4 billion, added additional resources in aluminium, iron ore, diamonds and coal. In 2001, it bought (under Coal & Allied) the Australian coal businesses of Peabody Energy. On 14 November 2007, Rio Tinto completed its largest acquisition to date, purchasing Canadian aluminium company Alcan for $38.1 billion, as of 2014 "the largest mining deal ever completed". Alcan's chief executive, Jacynthe Côté, led the new division, renamed Rio Tinto Alcan, with its headquarters situated in Montreal.
Activity in 2008 and 2009 was focused on divestments of assets to raise cash and refocus on core business opportunities. The company sold three major assets in 2008, raising about $3 billion in cash. In the first quarter of 2009, Rio Tinto reached agreements to sell its interests in the Corumbá iron ore mine and the Jacobs Ranch coal mine, and completed sales of an aluminium smelter in China and the company's potash operations, for an additional estimated $2.5 billion. On 5 July 2009, four Rio Tinto employees were arrested in Shanghai for corruption and espionage.
One of the arrested, Australian citizen Stern Hu, was "suspected of stealing Chinese state secrets for foreign countries and was detained on criminal charges", according to a spokesman for the Chinese foreign ministry. Stern Hu was also accused of accepting bribes from Chinese steel mill executives in exchange for sensitive information during the iron ore contract negotiations.
On 19 March 2010, Rio Tinto and its biggest shareholder, Chinalco, signed a memorandum of understanding to develop Rio Tinto's iron ore project at the Simandou mine in Guinea. On 29 July 2010, Rio Tinto and Chinalco signed a binding agreement to establish this joint venture covering the development and operation of the Simandou mine. Under the terms of the agreement, the joint venture maintains Rio Tinto's 95% interest in the Simandou project as follows: by providing US$1.35 billion on an earn-in basis through sole funding of ongoing development over a two-to-three-year period, Chalco, a subsidiary of Chinalco, would acquire a 47% interest in the joint venture. Once the full sum was paid, Rio Tinto would be left with a 50.35% interest in the project and Chalco would have 44.65% (47% of the joint venture's 95% holding). The remaining 5% would be owned by the International Finance Corporation (IFC), the financing arm of the World Bank. On 22 April 2011, Rio Tinto, its subsidiary Simfer S.A. (Simfer), and the Guinean Government signed a settlement agreement that secured Rio Tinto's mining rights in Guinea to the southern concession of Simandou, known as blocks 3 and 4. According to the agreement, Simfer would pay US$700 million and receive the mining concession and government approval of the proposed Chalco and Rio Tinto Simandou joint venture.
In April 2011, Rio Tinto gained a majority stake in Riversdale Mining. In 2011, the company rekindled its interest in potash when it entered a joint venture with Acron Group to develop the Albany potash development in southern Saskatchewan, Canada. Following an exploration program, Acron in a June 2014 statement described Albany as "one of the best potash development opportunities in the world".
On 13 December 2011, an independent arbitrator cleared the way for Rio Tinto, which had owned 49% of Ivanhoe Mines (now known as Turquoise Hill Resources), to take it over: he said the $16 billion Canadian group's "poison pill" defence was not valid. Ivanhoe had developed Oyu Tolgoi in Mongolia, one of the world's largest-known copper deposits. On 28 January 2012, Rio Tinto gained control of Ivanhoe Mines and removed the management.
In October 2013, Rio Tinto agreed to sell its majority stake in Australia's third-largest coal mine to Glencore and Sumitomo for a little over US$1 billion, as part of the firm's plans to focus on larger operations. Less than a year later, Rio Tinto rejected two merger proposals from Glencore, proffered in July and August 2014; the merger of Glencore and Rio Tinto would have created the world's largest mining company. In May 2015, Rio Tinto announced plans to sell some of its aluminium assets as part of a possible $1 billion deal, two years after a similar but failed attempt.
In September 2020, it was announced that the company's chief executive Jean-Sébastien Jacques, along with two executives, would resign because of Rio Tinto's role in destroying two ancient rock shelters in the Pilbara region of Australia.
The company's chief financial officer, Jakob Stausholm, became the new chief executive on 1 January 2021. In March 2022, Rio Tinto completed the acquisition of Rincon Mining's lithium project in Argentina for $825 million, following approval by Australia's Foreign Investment Review Board. In July 2023, it was announced Rio Tinto had acquired a 15% stake in the Australian exploration and development company Sovereign Metals for US$27.6 million.
Subsidiaries
The company has operations on six continents, but is mainly concentrated in Australia and Canada, and owns its mining operations through a complex web of wholly and partly owned subsidiaries.
Energy Resources of Australia – 68.4%
Hathor Exploration – 100%
QIT-Fer et Titane – 100%
Dampier Salt – 68.4%
Iron Ore Company of Canada – 58.7%
Pacific Aluminium – 100%
Richards Bay Minerals – 74%
Corporate status
Rio Tinto is primarily organised into four operational businesses, divided by product type:
Iron ore
Aluminium – aluminium, bauxite and alumina
Copper & Diamonds – copper and by-products such as gold, silver, molybdenum and sulphuric acid, and the company's diamond interests
Energy & Minerals – uranium interests, and industrial minerals such as borax, salt and titanium dioxide. The corporation previously held coal production assets.
These operating groups are supported by separate divisions providing exploration and function support.
Stock structure and ownership
Rio Tinto is structured as a dual-listed company, with listings on both the London Stock Exchange (symbol: RIO), under the name "Rio Tinto Plc", and the Australian Securities Exchange (symbol: RIO) in Sydney, under the name "Rio Tinto Limited". The dual-listed company structure grants shareholders of the two companies the same proportional economic interests and ownership rights in the consolidated Rio Tinto, in such a way as to be equivalent to all shareholders of the two companies actually being shareholders in a single, unified entity. This structure was implemented to avoid adverse tax consequences and regulatory burdens. To eliminate currency exchange issues, the company's accounts are kept, and dividends paid, in United States dollars.
Rio Tinto is one of the largest companies listed on either exchange. As such, it is included in the widely quoted indices for each market: the FTSE 100 Index of the London Stock Exchange, and the S&P/ASX 200 index of the Australian Securities Exchange. LSE-listed shares in Rio Tinto plc can also be traded indirectly on the New York Stock Exchange via an American Depositary Receipt. As of 4 March 2009, Rio Tinto was the fourth-largest publicly listed mining company in the world, with a market capitalisation of around $134 billion. As of mid-February 2009, shareholders were geographically distributed 42% in the United Kingdom, 18% in North America, 16% in Australia, 14% in Asia and 10% in continental Europe.
BHP Billiton bid
On 8 November 2007, rival mining company BHP Billiton announced it was seeking to purchase Rio Tinto in an all-share deal. This offer was rejected by the board of Rio Tinto as "significantly undervalu[ing]" the company. Another attempt by BHP Billiton at a hostile takeover, valuing Rio Tinto at $147 billion, was rejected on the same grounds. Meanwhile, the Chinese government-owned resources group Chinalco and the U.S. aluminium producer Alcoa purchased 12% of Rio Tinto's London-listed shares in a move that would block or severely complicate BHP Billiton's plans to buy the company.
BHP Billiton's bid was withdrawn on 25 November 2008, with BHP citing market instability from the global financial crisis of 2008–2009.
Chinalco investment
On 1 February 2009, Rio Tinto management announced it was in talks to receive a substantial equity infusion from Chinalco, a major Chinese state-controlled mining enterprise, in exchange for ownership interest in certain assets and bonds. Chinalco was already a major shareholder, having bought up 9% of the company in a surprise move in early 2008; its ownership stake had risen to 9.8% by 2014, making it Rio Tinto's biggest investor. The proposed investment structure reportedly involved $12.3 billion for the purchase of ownership interests in Rio Tinto assets in its iron ore, copper and aluminium operations, plus $7.2 billion for convertible bonds. The transaction would bring Chinalco's ownership of the company to roughly 18.5%. As of 2009, the deal was still pending approval from regulators in the United States and China and had not yet been approved by shareholders, although regulatory approval had been received from Germany and the Australian Competition & Consumer Commission. The largest barrier to completing the investment was expected to come from Rio Tinto's shareholders; support for the deal among shareholders was never overwhelming and reportedly declined in 2009, as other financing options (such as a more traditional bond issuance) began to appear more realistic as a viable alternative funding source. A shareholder vote on the proposed deal was expected in the third quarter of 2009. Rio Tinto is believed to have pursued this combined asset and convertible bond sale to raise cash to satisfy its debt obligations, which required payments of $9.0 billion in October 2009 and $10.5 billion by the end of 2010. The company also noted China's increasing appetite for commodities, and the potential for increased opportunities to exploit these market trends, as a key factor in recommending the transaction to its shareholders.
In March 2010, it was announced that Chinalco would invest $1.3 billion for a 44.65% stake in Rio Tinto's iron ore project in Simandou, Guinea. Rio Tinto retained 50.35% ownership at Simandou. In November 2011, Rio joined with Chinalco to explore for copper resources in China's complex landscape by setting up a new company, CRTX, 51% owned by Chinalco and 49% by Rio Tinto.
Management
Under the company's dual-listed company structure, management powers of Rio Tinto are consolidated in a single senior management group led by a board of directors and executive committee. The board of directors has both executive and non-executive members, while the executive committee is composed of the heads of major operational groups.
Board of Directors
Executive Directors
Dominic Barton, chairman
Jakob Stausholm, chief executive
Non-Executive Directors
Megan Clark
Hinda Gharbi
Simon McKeon
Simon Henry
Jennifer Nason
Sam Laidlaw
Ngaire Woods
Rio Tinto engages professional lobbyists to represent its interests in various jurisdictions. In South Australia, the company is represented by DPG Advisory Solutions.
Operations
Rio Tinto's main business is the production of raw materials including copper, iron ore, bauxite, diamonds, uranium and industrial minerals including titanium dioxide, salt, gypsum and borates. Rio Tinto also performs processing on some of these materials, with plants dedicated to processing bauxite into alumina and aluminium, and smelting iron ore into iron.
The company also produces other metals and minerals as by-products from the processing of its main resources, including gold, silver, molybdenum, sulphuric acid, nickel, potash, lead and zinc. Rio Tinto controls gross assets of $81 billion in value across the globe, with main concentrations in Australia (35%), Canada (34%), Europe (13%) and the United States (11%), and smaller holdings in South America (3%), Africa (3%) and Indonesia (1%).
Iron ore: Rio Tinto Iron Ore
The Australian operations of Rio Tinto Iron Ore (RTIO) comprise integrated iron ore operations in the Pilbara, Western Australia. The Pilbara iron ore operations include 16 iron ore mines, four independent port terminals, a 1,700-kilometre rail network and related infrastructure. The corporation has also had a majority stake in Iron Ore Company of Canada since its 2000 hostile takeover of North Limited.
Copper and by-products: Rio Tinto Copper
Copper was one of Rio Tinto's main products from its earliest days operating at the Rio Tinto complex of mines in Spain. Since that time, the company has divested itself from its original Spanish mines and grown its copper-mining capacity through acquisitions of major copper resources around the world. The copper group's main active mining interests are the Oyu Tolgoi mine in Mongolia, Kennecott Utah Copper in the United States, and Minera Escondida in Chile. Most of these mines are joint ventures with other major mining companies, with Rio Tinto's ownership ranging from 30% to 80%; only Kennecott is wholly owned. Operations typically include mining of ore through to production of 99.99% purified copper, including extraction of economically valuable by-products. Together, Rio Tinto's share of copper production at its mines totalled nearly 700,000 tonnes, making the company the fourth-largest copper producer in the world.
Rio Tinto Copper continues to seek new opportunities for expansion, with major exploration activities at the Resolution Copper project in the United States, Winu in Australia, and the Oyu Tolgoi underground mine in Mongolia. In addition, the company is seeking to become a major producer of nickel, with exploration projects currently underway in the United States and Indonesia. Although not the primary focus of Rio Tinto Copper's operations, several economically valuable by-products are produced during the refining of copper ore into purified copper. Gold, silver, molybdenum and sulphuric acid are all removed from copper ore during processing. Due to the scale of Rio Tinto's copper mining and processing facilities, the company is also a leading producer of these materials, which drive substantial revenues to the company.
Sales of copper generated 8% of the company's 2008 revenues, and copper and by-product operations accounted for 16% of underlying earnings. Rio Tinto exclusively provided the metal to produce the 4,700 gold, silver and bronze medals at the London 2012 Olympic and Paralympic Games. This was the second time Rio Tinto had done so for Olympic medals, having previously provided the metals for the Salt Lake City 2002 Winter Olympics. Rio Tinto was the naming rights sponsor of Utah Soccer Stadium until 2022.
Aluminium
Rio Tinto consolidated its aluminium-related businesses into its aluminium product group (originally named Rio Tinto Alcan), formed in late 2007, when Rio Tinto purchased the Canadian company Alcan for $38.1 billion. Combined with Rio Tinto's existing aluminium-related assets, the new aluminium division became the world's number-one producer of bauxite, alumina and aluminium. The aluminium division kept key leadership from Alcan, and its headquarters remain in Montreal. Rio Tinto divides its aluminium operations into three main areas: bauxite, alumina and primary metal. The Bauxite and Alumina unit mines raw bauxite from locations in Australia, Brazil and Africa. The unit then refines the bauxite into alumina at refineries located in Australia, Brazil, Canada and France. The Primary Metal business unit's operations consist of smelting aluminium from alumina, with smelters located in 11 countries around the world. The Primary Metal group also operates several power plants to support the energy-intensive smelting process. The aluminium division has interests in seven bauxite mines and deposits, six alumina refineries and six speciality alumina plants, 26 aluminium smelters, 13 power plants and 120 facilities for the manufacture of speciality products. The acquisition of Alcan operations in 2007 substantially increased Rio Tinto's asset base, revenues and profits: in 2008, 41% of company revenues and 10% of underlying earnings were attributable to the aluminium division.
Uranium: Rio Tinto Energy
Rio Tinto Energy is a business group of Rio Tinto that was dedicated to the mining and sale of uranium. Rio Tinto's uranium operations were located at two mines: the Ranger Uranium Mine of Energy Resources of Australia and the Rössing uranium mine in Namibia. The unit is now focused on mine rehabilitation. The company was the third-largest producer of uranium in the world. According to Rio Tinto's website, the company instituted strict controls and contractual limitations on uranium exports, limiting its use to peaceful, non-explosive purposes only. Such controls are intended to limit use of the company's uranium production to fuel for nuclear power plants, and not to the production of nuclear weapons. Rio Tinto Energy was responsible for 12% of revenues and 18% of underlying earnings in 2008.
Rio Tinto has divested or closed its remaining uranium operations since 2019. In 2019 it sold its remaining holdings in the Rössing uranium mine to China National Uranium Corporation Limited (CNUC) for an initial cash payment of $6.5 million plus a contingent payment of up to $100 million. Mining finished at Ranger in late 2012 and the mine plant processed stockpiled ore until January 2021. Rio has tenure and access to the site, principally for rehabilitation activities, until 8 January 2026.
Diamonds: Rio Tinto Diamonds
Rio Tinto Diamonds operates three diamond mines: the Argyle Diamond Mine in Western Australia (100% ownership), the Diavik Diamond Mine in the Northwest Territories of Canada (60% ownership), and the Murowa diamond mine located in Zimbabwe (78% ownership). Together, these three mines produce 20% of the world's annual production of rough diamonds, making Rio Tinto the world's third-largest producer of mined diamonds. The diamond business unit's most advanced exploration project is the Falcon Project in Saskatchewan, Canada, where Rio Tinto owns 60% of the project in a joint venture with Star Diamond Corp.
Rio Tinto Diamonds generated 1% of revenues and earnings for Rio Tinto in 2008.
Industrial minerals: Rio Tinto Minerals
Rio Tinto Minerals is a diverse business group with mining and processing interests in borates, salt and gypsum. Rio Tinto Borax, with operations in California, supplies nearly one-third of the world's annual demand for refined borates. The Minerals group is also majority owner of Dampier Salt, which produces over 9 million tonnes of salt and 1.5 million tonnes of gypsum annually from its three facilities in north-west Australia. Rio Tinto Minerals accounted for 6% of company revenues and contributed 3% to earnings in 2008.
On 31 January 2010, the management of U.S. Borax locked out its hourly workforce, replacing the workers with nonunion workers and managers from other Rio Tinto operations. The 560 International Longshore and Warehouse Union Local 30 members immediately began a fireside vigil that garnered national and international labour attention. The ILWU filed several unfair labour practice complaints against the company, including an illegal lock-out claim.
Iron products and titanium: Rio Tinto Iron and Titanium
Titanium dioxide is mined at three locations in Canada, South Africa and Madagascar, and refined at QIT-Fer et Titane's Canadian facilities. A media report in October 2013 revealed that the corporation planned to establish a fully automated railroad system for the transportation of iron ore across the Australian outback by 2015, thereby replacing the corporation's train drivers. The United Kingdom-based transport historian Christian Wolmar stated at the time that the train drivers were most likely the highest-paid members of the occupation in the world. As part of an overall strategy to increase profit margins, the corporation was spending US$518 million on the project.
Development of autonomous technologies
Rio Tinto is a global leader in the development of autonomous technologies for use in the mining sector. As of 2018, Rio Tinto's fleet of 80 autonomous Komatsu vehicles had moved over 1 billion tonnes of ore and waste material in Western Australia's Pilbara region. Furthermore, in late 2017 Rio Tinto announced funding for their Koodaideri Mine in Western Australia, which Rio Tinto had dubbed their "intelligent mine."
Financial results
Rio Tinto's revenues and earnings grew substantially between 2003 and 2010, with one of the largest increases attributable to the company's acquisition of Alcan in 2007. Although its operating margin is significantly affected by the market prices of the commodities it produces, it has remained profitable over its recent history.
Controversies
Poor working conditions
The United Steelworkers of America has claimed that mine workers at Kennecott Copper Mine worked for eight months without stopping during a labour dispute about how these workers were treated. This dispute ended with a settlement including a six-year labour agreement, only for Rio Tinto to lay off over 120 workers just two days later. Tom Johnson, a spokesperson for the United Steelworkers of America, said, "Rio Tinto does not make an effort to use technologies that are more sustainable. They do not discuss with local communities their environmental impact. They operate in secret with governments and with groups that are friendly to them."
London Olympic Games
The metals for the medals for the 2012 Summer Olympics were supplied from the Bingham Canyon Mine located in Utah and the Oyu Tolgoi mine in Mongolia, which caused uproar among many activist groups, especially in Utah, due to concern about the impact on local cities. One person particularly bothered by this decision was the Commissioner of the London Games, Meredith Alexander, who quit her position and led a coalition of human rights and environmental groups during the "Greenwashed Gold Campaign".
Panguna Mine, Papua New Guinea
In 2000, Rio Tinto faced a federal lawsuit on behalf of Papua New Guinea due to the harm the company's mining operations at the Panguna Mine had caused to the environment over decades. Local communities filed this suit, claiming that the local Kawerong-Jaba river delta was used as a dumping site for "more than one billion tons of mine waste". According to the lawsuit, the citizens claimed that the mining giant used harmful chemicals and bulldozers to destroy the environment, and specifically the rainforest, and used their waterways as a dumping site for the chemicals and runoff caused by its mining operations. These citizens believe that Rio Tinto was targeting them due to their race and culture and was even paying what they referred to as "slave wages" to the company's black workers.
Rio Tinto massacre
During the first years of the company's operation in Spain, the company practiced open-air pyrite calcination in blast furnaces. The toxic fumes released by this process had a negative impact on the farmland and the local agriculturists, which led to the company's workers and some local anarchists coming together to protest against this practice. On February 4, 1888, several thousand rank and file (agriculturalists, anarchists and mineworkers) marched to the Rio Tinto town hall (ayuntamiento) to deliver their petitions to the mayor. While the mayor spoke with the crowd's representatives, the Huelva military governor and civil guards watched over the protest. The military governor's attempts to disperse the crowd only incensed it further. The civil guards, under perceived threat of mob violence, fired on the crowd, killing at least 13 and injuring 35.
Interference from Axis powers during World War II
Rio Tinto's status as a mainly British-owned company, located in Spain and producing pyrites, an important material for military applications, created a complicated set of circumstances for the company's operation in the 1930s and 1940s. During the Spanish Civil War, the region in which Rio Tinto's mines were located came under the control of General Franco's Nationalists in 1936. However, Franco increasingly intervened in the company's operations, at times requisitioning pyrite supplies for use by Spain and its Axis allies Germany and Italy, forcing price controls on the company's production, restricting exports, and threatening nationalisation of the mines. Although company management (and indirectly, the British government) managed to counteract some of these efforts by Franco, much of the mine's pyrite production was channelled to Axis powers before and during World War II. Nonetheless, Franco's meddling caused the mine's production and profitability to fall precipitously during and after the war, leading the company to ultimately exit from its Spanish operations in 1954.
Guinean iron ore
In 2015, Rio Tinto was criticised by the Guinean government for the many mining delays at the local Simandou mine.
Cece Noramou, a government official, said the government was "running out of patience". President Alpha Conde himself said that "there have been people at Simandou for 15 years, 20 years, and they've never produced a ton of iron". Even before 2015, the Guinean government had expressed its displeasure and dissatisfaction with Rio Tinto; in 2008, the Guinean government annulled half of the company's Simandou rights and gave them to BSGR, a French–Israeli-owned mining company. In late 2016, Rio Tinto agreed to sell its stake in the Simandou iron ore mine to Chinalco and exit the deal. The deal was negotiated after the company's case against Vale and BSGR was dismissed in US District Court.
Racism, bullying and sexual harassment
In 2022, Rio Tinto released a report that described a work culture of bullying, harassment and racism at the global mining giant, including twenty-one complaints by women of actual or attempted rape or sexual assault in the past five years. Elizabeth Broderick, who surveyed more than 10,000 of Rio Tinto's 45,000 employees, released an independent report which found that systemic bullying, sexism and racism were common. According to the report, these harmful behaviours were often tolerated or normalised. "Harmful behaviour by serial perpetrators is often an open secret," Elizabeth Broderick said. On the whole, about 28% of women and 7% of men had experienced sexual harassment at Rio, but this rate rose to 41% for female workers at fly-in fly-out (FIFO) sites. Most women who responded had experienced "everyday sexism". Nearly half of the workforce reported being bullied, and described the resultant loss of confidence, declining performance, anxiety and depression. According to Broderick, LGBTIQ+ employees had experienced "elevated rates of bullying, sexual harassment and racism". A "culture of silence" had kept workers from reporting unacceptable behaviour. People who worked in a country they weren't born in had experienced high rates of racism, while almost 40% of men who identify as Aboriginal or Torres Strait Islander had endured racism in Australia. The Australian Human Rights Commission found that, between 2015 and 2020, approximately three in four women had experienced at least some form of sexual harassment while in the mining industry, in part due to a gender imbalance.
Juukan Gorge destruction
In May 2020, to expand the Brockman 4 mine, Rio Tinto demolished an Australian Aboriginal sacred site in Juukan Gorge, Western Australia, which had evidence of 46,000 years of continual human occupation and was considered the only inland prehistoric site in Australia to show signs of continual human occupation through the last Ice Age. The company later revealed it had three alternative options to preserve the site, but chose to destroy it without informing the traditional owners of the alternatives. Permission to destroy the site had been given in 2013 under the state Aboriginal Heritage Act 1972, which, however, had been under review since 2018. The Puutu Kunti Kurrama and Pinikura peoples, who are the local land custodians, had fought the decision. The destruction brought widespread criticism. On 31 May, Rio Tinto apologised for the distress caused.
According to 35 Aboriginal and Torres Strait Islander and human rights organisations, Rio Tinto's qualified apology was "far from an adequate response to an incident of this magnitude". On 9 June, Reconciliation Australia revoked its endorsement of Rio Tinto as a partner in reconciliation action plans, describing the behaviour of the corporation as "a breathtaking breach of a respectful relationship" which was "devastating for the Traditional Owners and robbed the world of a uniquely valuable cultural heritage site". On 9 July, the Corporate Human Rights Benchmark (CHRB) and the World Benchmarking Alliance (WBA) condemned "the destruction of invaluable cultural heritage at Juukan Gorge", adding that this "incident is a severe adverse impact on cultural rights that has engendered extreme concern and outrage among the Puutu Kunti Kurrama and Pinikura traditional owners of the site as well as Aboriginal and Torres Strait Islander communities and their allies". The CHRB and WBA also called "on Rio Tinto to take appropriate action to carry out an independent investigation of the incident, involving affected stakeholders, to provide effective remedy and to prevent similar impacts in the future, in Australia and elsewhere". The statement was attached to the company's listing in the 2019 Benchmark Report.
On 4 August, in its submission to a parliamentary inquiry looking at the destruction of the sacred rock caves, the company said it "missed opportunities" to alter its mine plan. A dig in 2014 and a final report on the archaeological excavations in 2018 had underlined the cultural and historical significance of the caves. Rio Tinto said it did not "clearly communicate" its plan for destroying the sacred site to the native land owners. Although no executives were fired, on 24 August the company announced that three senior executives would lose a combined £3.8 million ($5 million) from their expected bonus payments. On 11 September 2020, it was announced that, as a result of the destruction at Juukan Gorge, CEO Jean-Sébastien Jacques and two other Rio Tinto executives would step down. The National Native Title Council (NNTC) welcomed the move, but said that there should be an independent review into the company's procedures and culture to ensure that such an incident could never happen again. Rio Tinto admitted its error, issued an apology via the media and on its website, and committed to building relationships with the traditional owners as well as getting Indigenous people into leadership roles in the company. One analysis of what went wrong in Rio Tinto to allow the destruction to occur suggested that processes failed at several levels, mainly due to its "segmented organisational structure", a poor reporting structure, and Indigenous relations not being represented at a high enough level.
In response to this disaster, the Western Australian government introduced the Aboriginal Cultural Heritage Act 2021. After the Act was repealed, Rio Tinto wrote to traditional owners promising not to backtrack. Rio Tinto was one of thirteen ASX 20 companies in support of the Yes campaign for the 2023 Australian Indigenous Voice referendum.
2021 Serbian protests
During 2021, a series of mass protests broke out in Serbia against the construction of a lithium mine in western Serbia by the Rio Tinto corporation. Protesters blocked major roads and bridges in Belgrade and other major cities.
In the town of Šabac, there was an incident when a member of the ruling party attacked the protesters with an excavator, and the protesters were then beaten by an armed group of hooligans. The Jadar lithium project is driven by a "significant" supply gap for lithium, as demand for the metal used in electric vehicles (EVs) and green technologies continues to soar, particularly in North America and Europe. The project would make Serbia the biggest producer of lithium globally, and provide raw materials for more than 1 million electric cars. As of 11 December 2021, protests were still ongoing, with demands to stop and permanently prohibit any mining-related activity in the Jadar region. An activist from Ekološki Ustanak (Environmental Uprising), one of the prominent organisers of the protests, told local media that "protests will be continued until the basic demand is met, which is the expulsion of Rio Tinto from Serbia and the adoption of a law banning lithium exploitation in Serbia". In December 2021, Rio Tinto said it was considering the concerns of residents in western Serbia after Loznica's municipal assembly scrapped a plan to allocate land for a lithium project.
Misplaced radioactive capsule
In January 2023, the company announced that it had misplaced a capsule of radioactive material that was being transported from its Gudai-Darri mine in Western Australia. The capsule is an 8 by 6 mm cylinder containing a 19-gigabecquerel caesium-137 ceramic source. It is capable of causing serious illness if it is not handled correctly. According to the company, the capsule was lost somewhere between Newman and Perth, a distance of 1,400 km. The company launched an investigation into the disappearance and worked alongside authorities. Later the same month the capsule was recovered by investigators and verified by the Australian Defence Force.
Environment
Mining
Rio Tinto has been widely criticised by environmental groups and at least one national government for the environmental impacts of its mining activities. The most high-level environmental criticism to date has come from the government of Norway, which divested itself from Rio Tinto shares and banned further investment due to environmental concerns. Claims of severe environmental damage related to Rio Tinto's engagement in the Grasberg mine in Indonesia led the Government Pension Fund of Norway to exclude Rio Tinto from its investment portfolio. The fund, which is said to be the world's second-largest pension fund, sold shares in the company valued at 4.85 billion kr (US$855 million) to avoid contributing to environmental damage caused by the company. Exclusion of a company from the Fund reflects our unwillingness to run an unacceptable risk of contributing to grossly unethical conduct. The Council on Ethics has concluded that Rio Tinto is directly involved, through its participation in the Grasberg mine in Indonesia, in the severe environmental damage caused by that mining operation.
Rio Tinto disputes the claims of environmental damage at the Grasberg mine, and states that the company has long maintained an excellent record on environmental issues. After the former Panguna copper and gold mine in Bougainville, Papua New Guinea, which was abandoned by Rio Tinto in 1989, caused flooding, pollution of water wells and river poisoning, residents of the region filed a request for investigation with the Australian government in September 2020.
As part of a rider to the National Defense Authorization Act, the United States agreed in 2014 to hand over Oak Flat in Arizona to a joint venture of Rio Tinto and BHP Billiton to build the Resolution Copper mine. The proposal has faced significant backlash from environmentalists and the Apache tribe, who argue that the project, if it goes forward, would collapse a region two miles (3.2 km) wide around Oak Flat into a sinkhole 1,100 feet (340 m) deep, destroying sacred and ecologically sensitive land. The project would also deplete and contaminate Arizona's already limited groundwater supply. U.S. Representative Raúl Grijalva has introduced on four occasions a proposal for the land transfer to be halted, most recently in March 2021 after the Joe Biden administration paused the land transfer.
Carbon dioxide emissions
According to The Guardian, Rio Tinto is one of the top 100 industrial greenhouse gas producers in the world, accounting for 0.75 per cent of global industrial greenhouse gas emissions between 1988 and 2015. In its own climate change report, Rio Tinto estimated that it produced 32 million tonnes of carbon dioxide equivalent in 2016. In March 2018, Rio Tinto was urged by institutional investors to set new rules requiring the company to adhere to the goals of the Paris Agreement to limit global warming to 1.5 °C, including detailed plans to reduce scope 1 to 3 emissions. Rio Tinto's top executives rejected the resolution, arguing that the company had made a lot of progress in reducing its greenhouse gas emissions and that appropriate plans were in place to deal with climate change. Rio also argued that scope 3 emissions, those of its customers, were beyond the firm's control. Nevertheless, the corporation in September 2019 signed a partnership with Chinese steelmaker China Baowu Steel Group to find ways to reduce greenhouse gas emissions from steel making, in an attempt to tackle the scope 3 issue.
In 2021, Rio unveiled plans to spend $7.5 billion in direct capital expenditure on efforts to decarbonise, announcing new targets of cutting Scope 1 and 2 emissions (from their 2018 baseline) by 15% before 2025 and by 50% by 2030, and scope 3 emissions by 30% before 2030. This is to be achieved through 5 GW of wind and solar projects for the Boyne Island and Tomago smelters and 1 GW for Pilbara mining, full electrification of the Pilbara system including all trucks, mobile equipment and rail operations, replacing gas, and investments into green steel and aluminium, joining fellow iron ore giants Fortescue Metals and BHP in the effort to transition to renewable-powered operations. Rio Tinto reported total CO2e emissions (direct and indirect) for the twelve months ending 31 December 2020 of 31,500 kt, an increase of 100 kt (0.3%) year on year.
Labour and human rights
Academic observers have also expressed concern regarding Rio Tinto's operations in Papua New Guinea, which they allege were one catalyst of the Bougainville separatist crisis.
The British antipoverty charity War on Want has also criticised Rio Tinto for its complicity in the serious human rights violations which have occurred near the mines it operates in Indonesia and Papua New Guinea. On 31 January 2010, Rio Tinto locked out nearly 600 workers from a mine in Boron, California, U.S. Rio Tinto was also accused of planning and funding the murder of RTI activist Shehla Masood in Bhopal, India. Apparently, she was protesting illegal diamond mining done by Rio Tinto in connivance with government officers. The case was, however, solved and no connection to Rio Tinto was established, though popular opinion still perceives them as the possible culprit.
Rio Tinto is not, however, universally condemned for its ethical behaviour. The company won an award for ethical behaviour, the Worldaware Award for Sustainable Development, in 1993. The award, although given by an independent committee, is sponsored by another multinational corporation (in this case, the sponsor was Tate and Lyle). Rio Tinto has, in turn, sponsored its own WorldAware award, the Rio Tinto Award for Long-term Commitment. The British charity Worldaware ceased to exist in March 2005. These awards, awarded to extractive industries which make some environmental commitments to deflect the more general criticisms of their operations, are referred to by corporate watchdog groups as "greenwashing".
Corruption allegations
In China
In 2009, Chinese authorities began investigating allegations against Rio Tinto. These included bribing executives from 16 of China's biggest steel mill companies to get hold of secret information. On 29 March 2010, four Rio Tinto employees, including Australian citizen Stern Hu, were found guilty of these charges and of accepting millions of dollars in bribes. They were ordered to pay hundreds of thousands of dollars in fines, and sentenced to 7 to 14 years in jail.
In Guinea
Rio Tinto has been embroiled in a number of corruption allegations over its acquisition of stakes in the Simandou iron ore mine in Guinea. The allegations center around the payment of a $10.5 million bribe to François de Combret, a French banking consultant who was a friend and adviser of President Alpha Condé. Rio Tinto launched an internal probe into the matter run by an independent law firm, and on 9 November 2016 announced it would report the findings to the Securities & Exchange Commission (SEC), the Serious Fraud Office (United Kingdom), the Australian Securities & Investments Commission, and the United States Department of Justice. Rio Tinto also declared it would cooperate with all related investigations and fired two top executives in connection with the matter, one of whom was the head of energy and minerals, Alan Davies, who led the Simandou operation in 2011. He was suspended after the investigators discovered suspicious emails discussing contractual payments from that year.
Davies claimed that there were no grounds for the termination of his employment. The President of Guinea, Alpha Condé, denied having any knowledge of the illegal transactions, but recordings obtained by France24 indicate otherwise. Sam Walsh, the retiring CEO of the company, had 80% of his pay withheld while the investigation continued.

Also in early November 2016, the former mining minister of Guinea, Mahmoud Thiam, revealed that the head of Rio Tinto's operation in Guinea had offered him a bribe in 2010 to win back control of the Simandou mine, and that the offer was supported by senior members of the company.

Rio Tinto is currently facing at least four class action suits in the U.S. demanding damages over the corruption allegations in Guinea. The suits state that Rio Tinto made "materially false and misleading statements" that "deceived" investors.

In July 2017 the Serious Fraud Office (SFO) announced the launch of a fraud and corruption investigation into the company's business practices in Guinea. Following the news of the investigation, Rio Tinto shares in the U.S. dropped by 1.4%. The Australian Federal Police is also investigating the allegations. Rio Tinto has announced it would cooperate fully. After the SFO investigation announcement, and amid a search for a new CEO, Rio director John Varley was forced to resign from his role in the company.

On 6 March 2023, the U.S. SEC announced charges against Rio Tinto plc for violations of the Foreign Corrupt Practices Act (FCPA) arising out of a bribery scheme involving a consultant in Guinea. The company agreed to pay a $15 million civil penalty to settle the SEC's charges.

SEC Investigation

The Securities & Exchange Commission investigated a $3 billion impairment charge taken by Rio Tinto over a coal deal it made in Mozambique. Rio acquired Riversdale Mining, an Australian coal mining company with significant interests in Mozambique, in 2011 for $2.9 billion in an all-cash deal. Two years later the company wrote down the value of the assets by $3 billion. Following the impairment charge, which came alongside an additional $11 billion in asset write-downs, Rio Tinto's chief executive officer, Tom Albanese, stepped down from his post and left the company. Rio later sold the assets for $50 million.

See also

Riotinto Railway Tourist Mining Train

References

Further reading

Avery, David (1974). Not on Queen Victoria's Birthday; the Story of the Rio Tinto Mines. London: Collins. OCLC 1086684067.
Harvey, Charles E. The Rio Tinto Company: an economic history of a leading international mining concern, 1873–1954. (Alison Hodge, 1981).

External links

Official website
Business data for Rio Tinto:
Rio Tinto Coal Australia (RTCA) corporate website
Rio Tinto (corporation) companies grouped at OpenCorporates
MBendi Rio Tinto information page, including a detailed list of related companies and Rio Tinto activity worldwide
Unsustainable: The Ugly Truth about Rio Tinto
environmental impact of the oil shale industry
The environmental impact of the oil shale industry includes the consideration of issues such as land use, waste management, and water and air pollution caused by the extraction and processing of oil shale. Surface mining of oil shale deposits causes the usual environmental impacts of open-pit mining. In addition, the combustion and thermal processing generate waste material, which must be disposed of, and harmful atmospheric emissions, including carbon dioxide, a major greenhouse gas. Experimental in-situ conversion processes and carbon capture and storage technologies may reduce some of these concerns in the future, but may raise others, such as the pollution of groundwater.

Surface mining and retorting

Land use and waste management

Surface mining and in-situ processing require extensive land use. Mining, processing, and waste disposal require land to be withdrawn from traditional uses, and should therefore avoid densely populated areas. Oil shale mining reduces the original ecosystem diversity, displacing habitats that support a variety of plants and animals. After mining, the land has to be reclaimed, a process that takes time and cannot necessarily re-establish the original biodiversity. The impact of sub-surface mining on the surroundings will be less than that of open-pit mines. However, sub-surface mining may also cause subsidence of the surface due to the collapse of mined-out areas and abandoned stone drifts.

Disposal of mining wastes, spent oil shale (including semi-coke) and combustion ashes requires additional land. According to a study by the European Academies Science Advisory Council, the waste material occupies a greater volume after processing than the material extracted, and therefore cannot be wholly disposed of underground. According to this study, production of a barrel of shale oil can generate up to 1.5 tonnes of semi-coke, which may occupy up to 25% greater volume than the original shale. This is not confirmed by the results of Estonia's oil shale industry: the mining and processing of about one billion tonnes of oil shale in Estonia has created about 360–370 million tonnes of solid waste, of which about 90 million tonnes is mining waste, 70–80 million tonnes is semi-coke, and 200 million tonnes is combustion ash.

The waste material may contain several pollutants, including sulfates, heavy metals, and polycyclic aromatic hydrocarbons (PAHs), some of which are toxic and carcinogenic. To avoid contamination of groundwater, the solid waste from the thermal treatment process is disposed of in open dumps (landfills or "heaps") rather than underground, where it could potentially reach clean groundwater. In addition to minerals, semi-coke consists of up to 10% organics, which may pose a hazard to the environment owing to the leaching of toxic compounds as well as the possibility of self-ignition.

Water management

Mining influences the water runoff pattern of the affected area. In some cases it requires the lowering of groundwater levels below the level of the oil shale strata, which may have harmful effects on the surrounding arable land and forest. In Estonia, for each cubic meter of oil shale mined, 25 cubic meters of water must be pumped from the mine area. At the same time, the thermal processing of oil shale needs water for quenching hot products and controlling dust. Water concerns are a particularly sensitive issue in arid regions, such as the western part of the United States and Israel's Negev Desert, where there are plans to expand the oil shale industry.
Depending on the technology, above-ground retorting uses between one and five barrels of water per barrel of produced shale oil. In-situ processing, according to one estimate, uses about one-tenth as much water.

Water is the main transmitter of oil shale industry pollutants. One environmental challenge is preventing noxious materials from leaching out of spent shale into the water supply. Oil shale processing is accompanied by the formation of process waters and waste waters containing phenols, tar and several other products that are difficult to separate out and are toxic to the environment. A 2008 programmatic environmental impact statement issued by the United States Bureau of Land Management stated that surface mining and retort operations produce 2 to 10 U.S. gallons (7.6 to 37.9 L; 1.7 to 8.3 imp gal) of waste water per 1 short ton (0.91 t) of processed oil shale.

Air pollution management

The main source of air pollution is oil shale-fired power plants. These plants emit gaseous products such as nitrogen oxides, sulfur dioxide and hydrogen chloride, as well as airborne particulate matter (fly ash), which includes particles of different types (carbonaceous and inorganic) and different sizes. The concentration of air pollutants in flue gas depends primarily on the combustion technology and burning regime, while the emissions of solid particles are determined by the efficiency of fly ash-capturing devices. Open deposition of semi-coke spreads pollutants not only through water but also through the air as dust. Living in an oil shale area has been tentatively linked to a higher risk of asthma and lung cancer than living in other areas.

Greenhouse gas emissions

Carbon dioxide emissions from the production of shale oil and shale gas are higher than those from conventional oil production, and a report for the European Union warns that increasing public concern about the adverse consequences of global warming may lead to opposition to oil shale development. Emissions arise from several sources. These include CO2 released by the decomposition of the kerogen and carbonate minerals during extraction, by the generation of the energy needed to heat the shale and to run other oil and gas processing operations, and by the fuel used in mining the rock and disposing of waste. Because the mineral composition and calorific value of oil shale deposits vary widely, actual emission values vary considerably. At best, the direct combustion of oil shales produces carbon emissions similar to those from the lowest form of coal, lignite, at 2.15 moles CO2/MJ (a back-of-the-envelope conversion of this figure into mass units appears below); lignite is an energy source which is also politically contentious due to its high emission levels. For both power generation and oil extraction, the CO2 emissions can be reduced by better utilization of waste heat from the product streams.

In-situ processing

Currently, the in-situ process is the most attractive proposition due to the reduction in standard surface environmental problems. However, in-situ processes may involve significant environmental costs to aquifers, especially since in-situ methods may require ice-capping or some other form of barrier to restrict the flow of the newly released oil into groundwater aquifers. After the removal of the freeze wall, these methods can still cause groundwater contamination, as the hydraulic conductivity of the remaining shale increases, allowing groundwater to flow through and leach salts from the newly toxic aquifer.
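For readers unfamiliar with the molar units used in the greenhouse gas comparison above, the 2.15 mol CO2/MJ figure can be converted to mass-based units with simple arithmetic. The short sketch below is illustrative only: the 2.15 mol/MJ value is the figure quoted above, the molar mass of CO2 is a standard constant, and the result refers to fuel energy, not to electricity delivered.

```python
# Convert the lignite-like emission figure quoted above (2.15 mol CO2 per MJ
# of fuel energy) into mass-based units for easier comparison.
MOL_CO2_PER_MJ = 2.15        # figure quoted in the text
CO2_MOLAR_MASS = 44.01       # g of CO2 per mol (standard value)

g_co2_per_mj = MOL_CO2_PER_MJ * CO2_MOLAR_MASS    # ~94.6 g CO2 per MJ
g_co2_per_kwh_fuel = g_co2_per_mj * 3.6           # 1 kWh = 3.6 MJ -> ~341 g per kWh of fuel energy

print(f"{g_co2_per_mj:.1f} g CO2 per MJ of fuel energy")
print(f"{g_co2_per_kwh_fuel:.0f} g CO2 per kWh of fuel energy (before any conversion losses)")
```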
See also Oil shale geology Oil shale industry Oil shale economics References External links and further reading Oil Shale and Tar Sands Draft Programmatic Environmental Impact Statement (EIS) Concerning potential leases of Federal oil sands lands in Utah and oil shale lands in Utah, Wyoming, and Colorado
bush burning in nigeria
Bush burning is the practice of setting fire to vegetation, either intentionally or accidentally, in Nigeria. It is a common occurrence during the dry season when the grasses and weeds are dry and flammable. Bush burning is mainly done for agricultural purposes, such as clearing land for cultivation, controlling pests, and enhancing soil fertility. It is also done for hunting, as some hunters use fire to drive out animals from their hiding places. However, bush burning has many negative effects on the environment, health, and economy. It causes air pollution, soil degradation, loss of biodiversity, greenhouse gas emissions, damage to infrastructure and livelihoods, and increased vulnerability to climate change. Causes Bush burning in Nigeria is caused by various factors, including: Agricultural practices: Some farmers use fire to clear land for cultivation, especially in the savanna and forest zones. They believe that burning the vegetation will make the land easier to till, kill weeds and pests, and add nutrients to the soil. However, this practice is often done without proper planning, control, or monitoring, and can result in uncontrolled fires that spread to other areas. Hunting: Some hunters use fire to flush out animals from their hiding places, such as rodents, reptiles, and birds. They also use fire to create paths and access points in the bush. However, this practice can also result in uncontrolled fires that destroy the habitat and food sources of the wildlife. Pest control: Some livestock owners use fire to control pests that affect their animals, such as ticks, fleas, and tsetse flies. They also use fire to stimulate the growth of fresh grass for grazing. However, this practice can also result in uncontrolled fires that damage the vegetation and soil. Accidental ignition: Some fires are caused by accidental ignition from various sources, such as lightning, sparks from vehicles or machines, discarded cigarettes, fireworks, or cooking stoves. These fires can also spread rapidly and cause extensive damage. Effects Bush burning in Nigeria has many negative effects on the environment, health, and economy, such as: Air pollution: Bush burning produces large amounts of smoke, ash, and particulate matter that pollute the air and reduce visibility. The smoke can also contain harmful substances, such as carbon monoxide, nitrogen oxides, sulfur dioxide, and volatile organic compounds, that can affect human and animal health. The smoke can also contribute to the formation of ozone and smog, which can aggravate respiratory and cardiovascular diseases. Soil degradation: Bush burning destroys the organic matter and nutrients in the soil, making it less fertile and productive. It also reduces the soil moisture and increases the soil temperature, making it more susceptible to erosion and compaction. It also alters the soil pH and microbial activity, affecting the soil quality and health. Loss of biodiversity: Bush burning destroys the habitat and food sources of many plants and animals, reducing their diversity and abundance. It also kills or displaces many species, especially those that are rare, endangered, or endemic. It also affects the genetic diversity and evolutionary processes of the surviving species. Greenhouse gas emissions: Bush burning releases large amounts of greenhouse gases, such as carbon dioxide, methane, and nitrous oxide, that contribute to global warming and climate change. 
These gases can also affect the regional and global climate patterns, such as rainfall, temperature, and wind. Damage to infrastructure and livelihoods: Bush burning can damage or destroy infrastructure, such as roads, bridges, buildings, power lines, and communication networks, affecting the transportation, communication, and service delivery. It can also damage or destroy livelihood assets, such as crops, livestock, food stocks, and equipment, affecting the food security, income, and well-being of the people. Increased vulnerability to climate change: Bush burning reduces the resilience and adaptive capacity of the ecosystems and communities to cope with the impacts of climate change, such as drought, flood, heat wave, and disease outbreak. It also reduces the potential of the ecosystems and communities to mitigate climate change, such as by sequestering carbon, regulating water, and providing ecosystem services. Regulations and alternatives Bush burning in Nigeria is regulated by various laws and policies, such as the National Environmental Standards and Regulations Enforcement Agency (NESREA) Act, the National Policy on Environment, the National Forest Policy, and the National Climate Change Policy. These laws and policies aim to prevent, control, and manage bush burning and its effects, as well as to promote sustainable land management and environmental protection.However, the enforcement and implementation of these laws and policies are often weak and ineffective, due to various challenges, such as lack of awareness, resources, coordination, and political will. Moreover, some of these laws and policies are outdated and do not reflect the current realities and challenges of bush burning and climate change.Therefore, there is a need for more effective and efficient regulations and alternatives for bush burning in Nigeria, such as: Awareness and education: There is a need to raise awareness and educate the public, especially the farmers and hunters, about the causes and effects of bush burning, as well as the benefits and methods of alternative practices. This can be done through various channels, such as media, campaigns, workshops, and extension services. Alternative practices: There is a need to promote and adopt alternative practices that can achieve the same or better results as bush burning, without causing harm to the environment and health. Some of these practices include: mechanical clearing, mulching, composting, crop rotation, intercropping, agroforestry, integrated pest management, controlled burning, and firebreaks. Incentives and sanctions: There is a need to provide incentives and sanctions to encourage and discourage certain behaviors and actions related to bush burning. Some of these incentives and sanctions include: subsidies, loans, grants, awards, recognition, penalties, fines, prosecution, and confiscation. Monitoring and evaluation: There is a need to monitor and evaluate the implementation and impact of the laws, policies, and practices related to bush burning. This can be done through various methods, such as remote sensing, field surveys, interviews, and feedback. This can also help to identify and address the gaps, challenges, and opportunities for improvement. See also Deforestation in Nigeria Environmental issues in Nigeria Wildfire References Citations Bibliography Sanyaolu, V.T. (4 July 2015). "EFFECT OF BUSH BURNING ON HERBACEOUS PLANT DIVERSITY IN LAGOS STATE POLYTECHNIC, IKORODU CAMPUS, LAGOS". Science World Journal. 10 (1): 1–6. ISSN 1597-6343. 
Retrieved 3 November 2023. Caillault, Sebastien; Ballouche, Aziz; Delahaye, Daniel (2015). "Where are the 'bad fires' in West African savannas? Rethinking burning management through a space-time analysis in Burkina Faso". The Geographical Journal. [Wiley, The Royal Geographical Society (with the Institute of British Geographers)]. 181 (4): 375–387. ISSN 0016-7398. JSTOR 43868669. Retrieved 3 November 2023. External links NESREA Federal Ministry of Environment Food and Agriculture Organization of the United Nations in Nigeria ReliefWeb Nigeria
corporate average fuel economy
Corporate average fuel economy (CAFE) standards are regulations in the United States, first enacted by the United States Congress in 1975, after the 1973–74 Arab Oil Embargo, to improve the average fuel economy of cars and light trucks (trucks, vans and sport utility vehicles) produced for sale in the United States. More recently, efficiency standards were developed and implemented for heavy-duty pickup trucks and commercial medium-duty and heavy-duty vehicles. CAFE neither directly offers incentives for customers to choose fuel-efficient vehicles nor directly affects fuel prices. Rather, it attempts to accomplish these goals indirectly, by penalizing automakers whose fleets fall short of the standards, which makes inefficient vehicles more expensive to build. The original CAFE standards sought to drive automotive innovation to curtail fuel consumption, and now the aim is also to create domestic jobs and cut global warming. Stringent CAFE standards, together with government incentives for fuel-efficient vehicles in the United States, are expected to accelerate demand for electric vehicles. CAFE standards are administered by the Secretary of Transportation via the National Highway Traffic Safety Administration.

Overview

The Energy Policy and Conservation Act (EPCA), as amended by the 2007 Energy Independence and Security Act (EISA), requires that the U.S. Department of Transportation (DOT) establish standards separately for passenger automobiles (passenger cars) and nonpassenger automobiles (light trucks) at the maximum feasible levels in each model year, and requires that DOT enforce compliance with the standards. DOT has delegated these responsibilities to the National Highway Traffic Safety Administration (NHTSA). Through EPCA and EISA, U.S. law (49 U.S. Code § 32919) also preempts state or local laws: "a State or a political subdivision of a State may not adopt or enforce a law or regulation related to fuel economy standards or average fuel economy standards."

The CAFE achieved by a given fleet of vehicles in a given model year is the production-weighted harmonic mean fuel economy, expressed in miles per US gallon (mpg), of a manufacturer's fleet of current model year passenger cars or light trucks with a gross vehicle weight rating (GVWR) of 8,500 pounds (3,856 kg) or less (but also including medium-duty passenger vehicles, such as large sport-utility vehicles and passenger vans, with GVWR up to 10,000 pounds), produced for sale in the United States. The CAFE standards in a given model year define the CAFE levels that manufacturers' fleets are required to meet in that model year, with the specific levels depending on the characteristics and mix of vehicles produced by each manufacturer. If the average fuel economy of a manufacturer's annual fleet of vehicle production falls below the applicable requirement, the manufacturer must either apply sufficient CAFE credits (see below) to cover the shortfall or pay a penalty, currently $14 per 0.1 mpg under the standard, multiplied by the manufacturer's total production for the U.S. domestic market. Congress established both of these provisions explicitly in EPCA, as amended in 2007 by EISA. In addition, a Gas Guzzler Tax is levied on individual passenger car models (but not trucks, vans, minivans, or SUVs) that get less than 22.5 miles per US gallon (10.5 L/100 km).

Starting in 2011, the CAFE standards have been expressed as mathematical functions depending on vehicle footprint, a measure of vehicle size determined by multiplying the vehicle's wheelbase by its average track width.
A complicated 2011 mathematical formula was replaced starting in 2012 with a simpler inverse-linear formula with cutoff values. CAFE footprint requirements are set up such that a vehicle with a larger footprint has a lower fuel economy requirement than a vehicle with a smaller footprint. For example, the fuel economy target for the 2012 Honda Fit, with a footprint of 40 sq ft (3.7 m2), is 36 miles per US gallon (6.5 L/100 km), equivalent to a published fuel economy of 27 miles per US gallon (8.7 L/100 km) (published figures are lower than the values used for CAFE compliance; see the Calculation section below), while a Ford F-150, with its footprint of 65–75 sq ft (6.0–7.0 m2), has a fuel economy target of 22 miles per US gallon (11 L/100 km), i.e., 17 miles per US gallon (14 L/100 km) published. Individual vehicles do not have to meet their fuel economy targets; CAFE compliance is enforced at the fleet level (a schematic sketch of the footprint curve appears at the end of this section). The 2016 CAFE target fuel economy of 34.0 mpg (for a 44 sq ft footprint) compares with 2012 advanced-vehicle performance on the compliance test cycles: 70.7 mpg for the Prius hybrid, 69.8 MPGe for the plug-in Prius hybrid, and 141.7 MPGe for the LEAF electric vehicle. The compliance fuel economy of plug-in electric vehicles such as the plug-in Prius or LEAF is complicated by accounting for the energy used in generating electricity.

CAFE has separate standards for "passenger cars" and "light trucks", even though the majority of "light trucks" are used as passenger vehicles. The market share of "light trucks" grew steadily from 9.7% in 1979 to 47% in 2001, and remained at roughly 50% through 2011. More than 500,000 vehicles in the 1999 model year exceeded the 8,500 lb (3,900 kg) GVWR cutoff and were thus omitted from CAFE calculations. More recently, coverage of medium-duty trucks was added to the CAFE regulations starting in 2012, and heavy-duty commercial trucks starting in 2014.

The National Highway Traffic Safety Administration (NHTSA) regulates CAFE standards and the U.S. Environmental Protection Agency (EPA) measures vehicle fuel efficiency. Congress specifies that CAFE standards must be set at the "maximum feasible level" given consideration for: technological feasibility; economic practicality; the effect of other standards on fuel economy; and the need of the nation to conserve energy.

Historically, the EPA has encouraged consumers to buy more fuel-efficient vehicles, while NHTSA expressed concerns that smaller, more fuel-efficient vehicles may lead to increased traffic fatalities. Thus higher fuel efficiency was associated with lower traffic safety, intertwining the issues of fuel economy, road-traffic safety, air pollution, and carbon emissions. In the mid-2000s, the increasing safety of smaller cars and the poor safety record of light trucks began to reverse this association. Nevertheless, in 2008 the on-road vehicle fleets in the United States and Canada had the lowest overall average fuel economy among first world nations: 25 miles per US gallon (9.4 L/100 km) in North America, versus 45 miles per US gallon (5.2 L/100 km) in the European Union, with the average even higher in Japan, according to 2008 data. Furthermore, despite the general opinion that larger and heavier (and therefore relatively fuel-uneconomical) vehicles are safer, the U.S. traffic fatality rate, and its trend over time, is higher than in some other western nations, although it has recently started to decline at a faster rate than in previous years.
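To make the footprint-based targets described above more concrete, the following is a minimal sketch of an inverse-linear target curve with cutoffs. The coefficients are illustrative assumptions only, chosen so the curve roughly reproduces the 36 mpg and 22 mpg targets quoted above; the official standards use separate curves and coefficients for passenger cars and light trucks, published by NHTSA for each model year.

```python
def footprint_target_mpg(footprint_sqft, lo_mpg, hi_mpg, slope, intercept):
    """Inverse-linear footprint target: fuel consumption (gallons per mile)
    grows linearly with footprint, is clipped between the small- and
    large-footprint cutoffs, and is then inverted back to mpg."""
    gpm = slope * footprint_sqft + intercept           # gallons per mile
    gpm = min(max(gpm, 1.0 / hi_mpg), 1.0 / lo_mpg)    # apply the cutoff values
    return 1.0 / gpm

# Hypothetical coefficients (not NHTSA's): roughly 36 mpg at 40 sq ft,
# flattening to a 22 mpg floor for very large footprints.
CURVE = dict(lo_mpg=22.0, hi_mpg=36.0, slope=0.0006, intercept=0.0038)

for fp in (40, 50, 60, 70):
    print(fp, "sq ft ->", round(footprint_target_mpg(fp, **CURVE), 1), "mpg target")
```

Run as-is, the sketch yields targets of roughly 36, 30, 25 and 22 mpg for footprints of 40, 50, 60 and 70 sq ft, illustrating how the curve flattens at the cutoffs so that very small and very large vehicles face fixed targets.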
Effect on automotive fuel economy

In 2002, a committee of the National Academy of Sciences wrote a report on the effects of the CAFE standard. The report's conclusions include a finding that in the absence of CAFE, and with no other fuel economy regulation substituted, motor vehicle fuel consumption would have been approximately 14 percent higher than it actually was in 2002. However, by affecting the types and weights of vehicles sold, the standards increased the costs of vehicles and may have led to an estimated 1,300 to 2,600 additional fatalities in the year 1993 alone, though certain members of the committee dissented from the latter conclusion.

A plot of the average overall vehicle fuel economy (CAFE) for new model year passenger cars, the legally required target fuel economy (the CAFE standard) for new model year passenger cars, and fuel prices, adjusted for inflation, shows that there has been little variation over the past 20 years. Within this period, there are three distinct periods of fuel economy change: from 1979 to 1982 fuel economy rose as the CAFE standard rose dramatically and the price of fuel increased; from 1984 to 1986 fuel economy rose as the CAFE standard rose and the price of fuel decreased rapidly; and from 1986 to 1988 fuel economy rose at a significantly subdued rate and eventually leveled off as the price of fuel fell and the CAFE standard was relaxed, before returning to 1986 levels in 1990. These were followed by an extended period during which the passenger car CAFE standard, the observed average passenger car fuel economy, and the price of gasoline remained stable, and finally a period starting about 2003 when prices rose dramatically and fuel economy slowly responded.

The law of supply and demand would predict that an increase in gasoline prices would lead in the long run to an increase in the average fuel economy of the U.S. passenger car fleet, and that a drop in gasoline prices would be associated with a reduction in the average fuel economy of the entire U.S. fleet. There is some evidence that this happened, with an increase in the market share of lower-fuel-economy light trucks and SUVs and a decline in passenger car sales, as a percentage of total fleet sales, as car-buying trends changed during the 1990s; the impact of this shift is not reflected in this chart. In the case of passenger cars, U.S. average fuel economy did not fall as economic theory would predict, suggesting that CAFE standards maintained the higher fuel economy of the passenger car fleet during the long period from the end of the 1979 energy crisis to the rise of gasoline prices in the early 2000s. Most recently, fuel economy increased by about one mpg from 2006 to 2007. This increase is due primarily to the increased fuel efficiency of imported cars.

Similarly, the law of supply and demand predicts that, because the United States consumes a large percentage of the world's oil supply, increasing fuel economy would drive down the gasoline prices that U.S. consumers would otherwise have to pay. Reductions in petroleum demand in the United States helped create the collapse of OPEC market power in 1986.

The "CAFE" and "CAFE standard" shown here regard only new model passenger car fuel economy and target fuel economy (respectively), rather than the overall U.S. fuel economy average, which tends to be dominated by used vehicles manufactured in previous years, new model light truck CAFE standards, light truck CAFE averages, or aggregate data.
Calculation

Under CAFE regulations, a light vehicle's fuel economy, $f$, is determined as the weighted harmonic average of the values measured on the "city" (FTP-75) and "highway" (HWFET) drive cycles. $f$ has long been known to overestimate real-world fuel economy, which, as of the 2022 model year, is typically 76 percent of $f$, and the overestimate has grown over the decades of its use. $f$ is not the same as the Monroney window sticker value for consumer information.

Fleet fuel economy is calculated using a harmonic mean, not a simple arithmetic mean (average), namely the reciprocal of the average of the reciprocal values. For a fleet composed of four different kinds of vehicle A, B, C and D, produced in numbers $n_A$, $n_B$, $n_C$ and $n_D$, with fuel economies $f_A$, $f_B$, $f_C$ and $f_D$, the CAFE would be:

$$\mathrm{CAFE} = \frac{n_A + n_B + n_C + n_D}{\frac{n_A}{f_A} + \frac{n_B}{f_B} + \frac{n_C}{f_C} + \frac{n_D}{f_D}}$$

For example, a fleet of 4 vehicles getting 15, 13, 17, and 100 mpg has a CAFE of slightly less than 19 mpg:

$$\frac{4}{\frac{1}{15} + \frac{1}{13} + \frac{1}{17} + \frac{1}{100}} \approx 18.8$$

while the arithmetic mean fuel economy of the fleet is just over 36 mpg:

$$\frac{15 + 13 + 17 + 100}{4} = 36.25$$

The harmonic mean captures the fuel economy of driving each car in the fleet for the same number of miles, while the arithmetic mean captures the fuel economy of driving each car using the same amount of gas (i.e., the 13 mpg vehicle would travel 13 miles (21 km) with one gallon while the 100 mpg vehicle would travel 100 miles).

For the purposes of CAFE, a manufacturer's car output is divided into a domestic fleet (vehicles with more than 75 percent U.S., Canadian or post-NAFTA Mexican content) and a foreign fleet (everything else). Each of these fleets must separately meet the requirements. The two-fleet requirement was developed by the United Automobile Workers (UAW) as a means to ensure job creation in the United States. The UAW successfully lobbied Congress to write this provision into the enabling legislation, and continues to advocate this position. The two-fleet rule for light trucks was removed in 1996.

For the fuel economy calculation for alternative fuel vehicles, a gallon of alternative fuel is deemed to contain 15% fuel (which is approximately the amount of gasoline in a gallon of E85), as an incentive to develop alternative fuel vehicles. The mileage for dual-fuel vehicles, such as E85-capable models and plug-in hybrid electric vehicles, is computed as the average of the alternative-fuel rating, divided by 0.15 (equivalent to multiplying by about 6.67), and the gasoline rating. Thus an E85-capable vehicle that gets 15 mpg on E85 and 25 mpg on gasoline might logically be rated at 20 mpg. But in fact, for CAFE purposes, even though perhaps only one percent of the fuel used in E85-capable vehicles is actually E85, the average is computed from 100 mpg for E85 and the standard 25 mpg for gasoline, giving 62.5 mpg. However, the total increase in a manufacturer's average fuel economy rating due to dual-fueled vehicles cannot exceed 1.2 mpg, and Section 32906 phases the allowable increase for dual-fueled vehicles down to zero through 2020. Electric vehicles are also incentivized by the 0.15 fuel divisor, but are not subject to the 1.2 mpg cap like dual-fuel vehicles.

Manufacturers are also allowed to earn CAFE "credits" in any year they exceed CAFE requirements, which they may use to offset deficiencies in other years. CAFE credits can be applied to the three years before or the five years after the year in which they are earned. The reason for this flexibility is so that manufacturers are penalized only for persistent failure to meet the requirements, not for transient non-compliance due to market conditions.
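The fleet harmonic mean, the dual-fuel adjustment, and the penalty arithmetic quoted earlier ($14 per 0.1 mpg of shortfall, multiplied by production) can be tied together in a short worked example. The sketch below is illustrative only, not an official compliance tool; the four-vehicle fleet and the 15/25 mpg dual-fuel figures are the examples from the text, and the shortfall scenario at the end is hypothetical.

```python
def fleet_cafe(volumes, mpgs):
    """Production-weighted harmonic mean fuel economy of a fleet (mpg)."""
    return sum(volumes) / sum(n / f for n, f in zip(volumes, mpgs))

def dual_fuel_rating(alt_mpg, gasoline_mpg):
    """CAFE rating of a dual-fuel vehicle: the alternative-fuel figure is
    divided by the 0.15 factor before being averaged with the gasoline figure."""
    return 0.5 * (alt_mpg / 0.15) + 0.5 * gasoline_mpg

def cafe_penalty(achieved_mpg, standard_mpg, production, rate_per_tenth=14.0):
    """Civil penalty: $14 per 0.1 mpg of shortfall, times total production."""
    shortfall_tenths = max(0, round((standard_mpg - achieved_mpg) * 10))
    return shortfall_tenths * rate_per_tenth * production

# Four-vehicle example from the text: the harmonic mean is just under 19 mpg,
# while the simple arithmetic mean would be 36.25 mpg.
print(fleet_cafe([1, 1, 1, 1], [15, 13, 17, 100]))   # ~18.8

# E85-capable vehicle: 15 mpg on E85 and 25 mpg on gasoline -> 62.5 mpg for CAFE.
print(dual_fuel_rating(15, 25))                       # 62.5

# Hypothetical shortfall: a fleet at 26.0 mpg against a 27.5 mpg standard,
# with 100,000 vehicles produced (15 tenths x $14 x 100,000).
print(cafe_penalty(26.0, 27.5, 100_000))              # 21000000.0
```

The rounding to whole tenths in the penalty function mirrors the per-0.1-mpg framing of the statutory rate; as noted above, a manufacturer may also apply banked CAFE credits to cover a shortfall before any penalty is assessed.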
History

Fuel economy regulations were first introduced in 1978, only for passenger vehicles. The following year, a second category was defined for light trucks, which were distinguished from heavy-duty vehicles by a gross vehicle weight rating (GVWR) of 6,000 pounds or less. The GVWR threshold was raised to 8,500 pounds in 1980 and remained at that level through 2010; thus certain large trucks and SUVs, such as the Hummer and the Ford Excursion, were exempt. NHTSA kept CAFE standards for cars the same from 1985 to 2010, except for a slight decrease in required mpg from 1986 to 1989. From 1979 to 1991, separate standards were established for two-wheel drive (2WD) and four-wheel drive (4WD) light trucks, but for most of this period, car makers were allowed to choose between these separate standards or a combined standard applied to the entire fleet of light trucks they sold that model year. In 1980 and 1981, respectively, a manufacturer whose light truck fleet was powered exclusively by basic engines which were not also used in passenger cars could meet standards of 14 mpg and 14.5 mpg.

Standards by model year, 1978–2020

Performance in practice

Since 1980, the traditional Japanese manufacturers have increased their combined fleet average fuel economy by 1.6 miles per gallon, according to the March 30, 2009, Summary of Fuel Economy Performance published annually by NHTSA. During this time, they also increased their sales in the United States by 221%. The traditional European manufacturers actually decreased their fleet average fuel economy by 2 miles per gallon while increasing their sales volume by 91%. The traditional U.S. manufacturers, Chrysler, Ford, and General Motors, increased their fleet average fuel economy by 4.1 miles per gallon since 1980, according to the government figures. During this time the sales of U.S. manufacturers decreased by 29%.

A number of manufacturers choose to pay CAFE penalties rather than attempt to comply with the regulations. These tend to be companies with small U.S. market share and expensive, high-performance vehicles, such as Porsche, Mercedes, and Fiat. In model year 2012, Jaguar (Land Rover) and Volvo did not meet CAFE requirements; they paid fines totaling 15 million dollars for the year. For the 2014 model year, Mercedes SUVs, followed by GM and Ford light trucks, had the lowest fleet average, while Tesla, followed by Toyota and Mazda, had the highest.

Before the oil price increases of the 2000s, overall fuel economy for both cars and light trucks in the U.S. market reached its highest level in 1987, when manufacturers managed 26.2 mpg (8.98 L/100 km). The average in 2004 was 24.6 mpg. In that time, vehicles increased in weight from an average of 3,220 pounds to 4,066 pounds (1,461 kg to 1,844 kg), in part due to an increase in truck ownership from 28% to 53%.

2006 reform attempt and lawsuit

The CAFE rules for trucks were officially amended at the end of March 2006. However, the 9th Circuit Court of Appeals overturned the rules, returning them to NHTSA, as discussed below. These changes would have segmented truck fleets by vehicle size and class as of 2011. All SUVs and passenger vans up to 10,000 pounds GVWR would have had to comply with CAFE standards regardless of size, but pickup trucks and cargo vans over 8,500 pounds gross vehicle weight rating (GVWR) would have remained exempt.
The United States Court of Appeals for the Ninth Circuit agreed with NHTSA that economic benefit-cost analysis (maximizing net economic benefits to the Nation) is, under the Energy Policy and Conservation Act (EPCA), an appropriate method to select the maximum feasible stringency of CAFE standards, but nonetheless found that NHTSA incorrectly set a value of zero dollars to the global warming damage caused by CO2 emissions; failed to set a "backstop" to prevent trucks from emitting more CO2 than in previous years; failed to set standards for vehicles in the 8,500 to 10,000 lb (4,500 kg) range; and failed to prepare a full Environmental Impact Statement (EIS) rather than a more abbreviated environmental impact assessment. The Court directed NHTSA to prepare a new standard as quickly as possible and to fully evaluate that new standard's impact on the environment. Energy Independence and Security Act of 2007 In 2007, the House and Senate passed the Energy Independence and Security Act (EISA) with broad support, setting a goal for the national fuel economy standard of 35 miles per gallon (mpg) by 2020 and rendering the court judgment obsolete. On December 19, 2007, President George W. Bush signed the bill. The bill's standard would increase the fuel economy standards by 40 percent and save the United States billions of gallons of fuel. The requirement applies to all passenger automobiles, including "light trucks." President Bush faced serious pressure to reduce the Nation's dependency on oil and this was part of his initiative to do so. New "footprint" model Under the new final light truck CAFE standard 2008–2011, fuel economy standards would have been restructured so that they are based on a measure of vehicle size called "footprint", the product of multiplying a vehicle's wheelbase by its track width. A target level of fuel economy would have been established for each increment in footprint using a continuous mathematical formula. Smaller footprint light trucks had higher fuel economy targets and larger trucks lower targets. Manufacturers who made more large trucks would have been allowed to meet a lower overall CAFE target, manufacturers who make more small trucks would have needed to meet a higher standard. Unlike previous CAFE standards there was no requirement for a manufacturer or the industry as a whole to meet any particular overall actual MPG target, since that will depend on the mix of sizes of trucks manufactured and ultimately purchased by consumers. Some critics pointed out that this might have had the unintended consequence of pushing manufacturers to make ever-larger vehicles to avoid strict economy standards. However, the equation used to calculate the fuel economy target had a built in mechanism that provides an incentive to reduce vehicle size to about 52 square feet (the approximate midpoint of the current light truck fleet.) Increases and light truck standard reform In 2006, the rule making for light trucks for model years 2008–2011 included a reform to the structure for CAFE standards for light trucks and gave manufacturers the option for model years 2008–2010 to comply with the reformed standard or to comply with the unreformed standard. The reformed standard was based on the vehicle footprint. The unreformed standard for MY 2008 was set to be 22.5mpg, 23.1mpg for MY 2009, and 23.5mpg for MY 2010. 
To achieve the target of 35 mpg authorized under EISA for the combined fleet of passenger cars and light trucks for MY 2020, NHTSA is required to continue raising the CAFE standards. In determining a new CAFE standard, NHTSA must assess the environmental impacts of each new standard and the effect of the standard on employment. With EISA, NHTSA needed to conduct new analysis, including taking a fresh look at the potential impacts under the National Environmental Policy Act (NEPA) and assessing whether or not those impacts are significant within the meaning of NEPA. NHTSA has to issue its new standards eighteen months before the model year to which they apply. According to the NHTSA report, to achieve an industry-wide combined fleet average of at least 35 mpg, NHTSA must set new standards well in advance of the model year so as to give automobile manufacturers enough lead time to make the extensive changes needed in their vehicles. The EISA also called for a reform under which the standards set by the Transportation Department would be "attribute based", so as to ensure that the safety of vehicles is not compromised for higher standards.

CAFE credit trading provisions

The 2007 Energy Independence and Security Act also instructed NHTSA to establish a credit trading and transferring scheme to allow manufacturers to transfer credits between categories, as well as sell them to other manufacturers or non-manufacturers. In addition, the period over which credits could be carried forward was extended from three years to five. Traded or transferred credits may not be used to meet the minimum standard in the domestic passenger car fleet; however, they may be used to meet the "attribute standard". This latter allowance has drawn criticism from the UAW, which fears it will lead manufacturers to increase the importation of small cars to offset shortfalls in the domestic market. These new flexibilities were implemented by regulation on March 23, 2009, in the Final Rule for 2011 Model Year Passenger Cars and Light Trucks.

Calculations using official CAFE data and the newly proposed credit trading flexibility contained in the September 28, 2009, Notice of Proposed Rulemaking show that ninety-eight percent of the benefit derived from just the cross-fleet credit trading provision flows to Toyota. According to these calculations, 75% of the benefit from the two new CAFE credit trading provisions, cross-fleet trading and five-year carry-forward, falls to foreign manufacturers. Toyota can use the provision to avoid or reduce compliance on average by 0.69 mpg per year through 2020; Hyundai (1.01 mpg), Nissan (0.65 mpg), Honda (0.83 mpg), Mitsubishi (0.13 mpg), Subaru (0.08 mpg), Chrysler (0.14 mpg), GM (0.09 mpg), and Ford (0.18 mpg) also benefit. The estimated value of the CAFE exemption gained by Toyota is $2.5 billion; Honda's benefit is worth $800 million, and Nissan's benefit is valued at $900 million in reduced CAFE compliance costs. Foreign companies gained $5.5 billion in benefits, compared with the $1.8 billion that went to the Detroit Three.

Out-year and alternative fuel standard changes

In the years 2021 to 2030, the standard requires fuel economy to be at the "maximum feasible" level. The law allows NHTSA to issue additional requirements for cars and trucks based on the footprint model or another mathematical standard. Additionally, each manufacturer must meet a minimum standard of the higher of either 27.5 mpg for passenger automobiles or 92% of the projected average for all manufacturers.
The National Highway Traffic Safety Administration (NHTSA) is directed, based on National Academy of Sciences studies, to set medium- and heavy-duty truck fuel economy standards at the "maximum feasible" level. Additionally, the law phases out the mpg credit previously granted to E85 flexible-fuel vehicle manufacturers, adds one for biodiesel, and requires NHTSA to publish replacement tire fuel efficiency ratings. The bill also adds support for initial state and local infrastructure for plug-in electric vehicles.

Implementing regulations

On April 22, 2008, NHTSA responded to the Energy Independence and Security Act of 2007 with proposed new fuel economy standards for cars and trucks effective model year 2011. The new rules also introduce the "footprint" model for cars as well as trucks, under which a manufacturer that makes more large cars and trucks is allowed to meet a lower fuel economy standard. This means that neither the overall fuel efficiency for a particular manufacturer nor that of the fleet as a whole can be predicted with certainty, since it will depend on the actual product mix manufactured. However, if the product mix is as NHTSA predicts, car fuel economy would increase from the current standard of 27.5 mpg‑US (8.6 L/100 km; 33.0 mpg‑imp) to 31.0 mpg‑US (7.6 L/100 km; 37.2 mpg‑imp) in 2011.

The new regulations are designed to be "optimized" with respect to a certain set of assumptions, which include: that gas prices in 2016 will be $2.25 a U.S. gallon (59.4¢/L); that all new car purchasers will pay 7% interest on their vehicle purchases and will only consider fuel costs for the first 5 years of a vehicle's life; and that the social cost of carbon is $7 per tonne of CO2. This corresponds to a global warming benefit valued at $4.31 in savings per car per year under the new regulations. Further, the new regulations assume that no advanced hybrids (Toyota Prius), plug-in hybrids or extended-range electric vehicles (Chevrolet Volt), electric cars (Th!nk City), or alternative fuel vehicles (Honda Civic GX) will be used to achieve these fuel economies.
The National Highway Traffic Safety Administration has done significant work that will position the next Transportation Secretary to finalize a rule before the April 1, 2009 deadline." 2009 Obama administration directive On January 27, 2009, President Barack Obama directed the Department of Transportation to review relevant legal, technological, and scientific considerations associated with establishing more stringent fuel economy standards, and to finalize the 2011 model year standard by the end of March. This single-model year standard was issued March 27, 2009, and is about one mpg lower than the fuel economy standards previously recommended under the Bush Administration. "These standards are important steps in the nation's quest to achieve energy independence and bring more fuel efficient vehicles to American families", said Secretary LaHood. The new standards will raise the industry-wide combined average to 27.3 miles per US gallon (8.6 L/100 km; 32.8 mpg‑imp) (a 2.0 mpg‑US (2.4 mpg‑imp) increase over the 2010 model year average), as estimated by the National Highway Traffic Safety Administration (NHTSA). It will save about 887,000,000 U.S. gallons (3.36×109 L) of fuel and reduce carbon dioxide emissions by 8.3 million metric tons. This 2011 single-year standard will use an attribute-based system, which sets fuel economy standards for individual vehicle models, based on the footprint model. Secretary LaHood also noted that work on the multi-year fuel economy plan for model years after 2011 is already well underway. The review will include an evaluation of fuel-saving technologies, market conditions and future product plans from the manufacturers. The effort will be coordinated with interested stakeholders and other federal agencies, including the Environmental Protection Agency. The new rules were immediately challenged in court again by the Center for Biological Diversity as not addressing the inadequacies found by the previous court rulings. Model year 2012–2016 Obama administration proposal On May 19, 2009, President Barack Obama proposed a new national fuel economy program which adopts uniform federal standards to regulate both fuel economy and greenhouse gas emissions while preserving the legal authorities of DOT, EPA and California. The program covered model year 2012 to model year 2016 and ultimately required an average fuel economy standard of 35.5 miles per US gallon (6.63 L/100 km; 42.6 mpg‑imp) in 2016 (of 39 miles per gallon for cars and 30 mpg for trucks), a jump from the 2009 average for all vehicles of 25 miles per gallon. Obama said, "The status quo is no longer acceptable." The higher fuel economy was projected to reduce oil consumption by approximately 1.8 billion barrels (290,000,000 m3) over the life of the program and reduce greenhouse gas emissions by approximately 900 million metric tons; the expected consumer costs in terms of higher car prices was unknown. Ten car companies and the UAW embraced the national program because it provided certainty and predictability to 2016 and included flexibilities that would significantly reduce the cost of compliance. 
Stated goals for the program included: saving consumers money over the long term in increased fuel efficiency, preserving consumer choice (the new rules do not dictate the size of cars, trucks and SUVs that manufacturers can produce; rather it requires that all sizes of vehicles become more energy efficient), reduced air pollution in the form of greenhouse gas emissions and other conventional pollutants, one national policy for all automakers instead of three standards (a DOT standard, an EPA standard and a California standard that would apply to 13 other states), and industry desires: clarity, predictability and certainty concerning the rules while giving them flexibility on how to meet the expected outcomes and the lead time they need to innovate. The policy was expected to result in yearly 5% increases in efficiency from 2012 through 2016, 1.8 billion barrels (290,000,000 m3) of oil saved cumulatively over the lifetime of the program and significant reductions in greenhouse gas emissions equivalent to taking 177 million of today's cars off the road.By model year 2014, many of the program's goals were being met. The average new vehicle fuel economy was 30.7 mpg (35.6 mpg for cars and 25.5 mpg for trucks) and for the years 2012–2015, auto industry outperformed the GHG standard by a substantial margin. Consumers are expected to save an estimated 16.6 billion gallons of fuel over the lifetime of model year 2011 to 2014 vehicles due to the manufacturers exceeding the CAFE standards in those years. 2011 agreement for model years 2017–2025 On July 29, 2011, President Obama announced an agreement with thirteen large automakers to increase fuel economy to 54.5 miles per gallon for cars and light-duty trucks by model year 2025. He was joined by Ford, GM, Chrysler, BMW, Honda, Hyundai, Jaguar/Land Rover, Kia, Mazda, Mitsubishi, Nissan, Toyota, and Volvo—which together accounted for over 90% of all vehicles sold in the United States—as well as the United Auto Workers (UAW), and the State of California, who were all participants in the deal. The agreement resulted in new CAFE regulations for model year 2017–2025 vehicles, which were finalized on August 28, 2012. The major increases in stringency and the changes in the structure of CAFE create a need for research that incorporates the demand and supply sides of the new vehicle market in a more detailed manner than was needed with static fuel economy standards.Volkswagen responded to the July 29, 2011, agreement with the following statement: "Volkswagen does not endorse the proposal under discussion. It places an unfairly high burden on passenger cars, while allowing special compliance flexibility for heavier light trucks. Passenger cars would be required to achieve 5% annual improvements, and light trucks 3.5% annual improvements. The largest trucks carry almost no burden for the 2017–2020 timeframe, and are granted numerous ways to mathematically meet targets in the outlying years without significant real-world gains. The proposal encourages manufacturers and customers to shift toward larger, less efficient vehicles, defeating the goal of reduced greenhouse gas emissions." Additionally, Volkswagen has since approached U.S. lawmakers about lowering their proposal to double fuel efficiency for passenger cars by 2025. Volkswagen at the time claimed that the new plan was unfair, but the company was later revealed to have been systematically cheating emissions tests. 
As a result, Volkswagen is one of the only major auto manufacturers not to sign the agreement that led to the current proposal from the Obama administration. Daimler, producer of Mercedes-Benz brand automobiles, expressed similar views, saying the proposal "clearly favors large SUVs and pickup trucks."

2016 mid-term review

The 2011 agreement set up requirements for a mid-term review to look at how the industry was progressing with the new standards. On July 18, 2016, the EPA, NHTSA and the California Air Resources Board (CARB) released a technical paper assessing whether or not the auto industry would be able to reach the 2022 to 2025 mpg standards. The Draft Technical Assessment Report, as the paper is called, is the first step in the mid-term evaluation process.

The government groups found that the auto industry had been successfully innovating and pushing towards lowering greenhouse gas emissions. The paper said that the technology was cheaper than, or about in line with, what was expected in terms of cost, and that automakers were adopting new technologies more quickly than expected. Still, the paper said that the 54.5 mpg-equivalent projection is unrealistic. That goal was based on a market that was 67 percent cars and 33 percent trucks and SUVs, and on higher fuel prices; but American customers were not buying that many cars, as the market was still about 50/50 and was likely to stay that way. The paper said more realistic projections are 50 mpg to 52.6 mpg if the 2012 standards are maintained.

Agreed standards by model year, 2012–2025

NB: Real-world fuel economy values are about 20 percent lower than the laboratory values used for CAFE. Use of E10 decreases fuel economy by a further 3 percent or so.

Additionally, since EISA there have been minimum standards for domestically produced passenger automobiles equal to the greater of 27.5 mpg or 92 percent of the CAFE projected by the Secretary of Transportation for the combined domestic and non-domestic passenger automobile fleets manufactured for that model year.

2020 rollback

In early August 2018, the EPA and Department of Transportation, then operating under the Presidency of Donald Trump, issued a proposed ruling that, if enacted, would roll back some of the goals set in 2012 under President Obama. It proposed freezing the fuel economy goals at the 2021 target of 37 mpg, would halt requirements on the production of hybrid and electric cars, and would eliminate the legal waiver that allows states like California to set more stringent standards. The EPA acting administrator Andrew R. Wheeler and the Transportation Secretary Elaine Chao issued a joint statement stating that the rule change was needed as the current rules "impose significant costs on American consumers and eliminate jobs", while the new rules "give consumers greater access to safer, more affordable vehicles, while continuing to protect the environment". The proposal included a withdrawal of the waiver that granted California the ability to set its own GHG and ZEV (Zero Emission Vehicle) standards and that allowed other states to adopt that standard instead of the federal standard. Following publication of the proposed rule changes, California and eighteen other states announced that, should the rule be enacted, they would sue the government to reject the rule.

The new ruling proposed by the EPA and NHTSA was named the Safer Affordable Fuel-Efficient (SAFE) Vehicle Rules.
It aimed to set new CAFE standards for MY 2022–2026 passenger car and light trucks and amend the 2021 MY CAFE standards because they are "no longer maximum feasible standards." The safety reason provided by the government was to shift people to buying new vehicles once the vehicles become more affordable under SAFE standards, with a government study conducted to show new model year vehicles were associated with lower fatality rates. After releasing the proposal on August 2, 2018, NHTSA and EPA held a comment hearing period for 60 days. The deadline was later extended to October 26, 2018, after requests from 32 US Senators, 18 State Attorneys General, and others for a 120-day or longer comment period were received.Researchers described in a December 2018 article in Science fundamental flaws and inconsistencies in the analysis justifying the proposed rule including miscalculating changes in the size of the automobile fleet and ignoring international benefits of reduced greenhouse gas emissions, thereby discarding at least $112 billion in benefits, and also by overestimating compliance costs and characterized such changes in the Notice of Proposed Rulemaking as misleading.New CAFE targets went into effect in June 2020 beginning with the 2021 model year, increasing at a rate of 1.5 percent per year, far lower than the nearly 5 percent increase they replace. Additionally, the minimum standard for domestic passenger cars was lowered from the 2020 model year level until the 2023 MY. Updates under Biden administration Upon taking office, the administration of President Biden stated an intention to set new fuel efficiency standards. In August 2021 NHTSA released its Notice of Proposed Rulemaking offering new standards for the 2024–2026 model years. The final rule covering the 2024–2026 model years was signed on March 31, 2022. Fuel economy targets for cars and light trucks each increase 8 percent for 2024 MY, 8 percent for 2025 MY, and 10 percent for 2026 MY. NHTSA projects that the updated targets lead to an industry-wide average of 49 MPG by the 2026 model year given a fleet mix of 48 percent passenger cars and 52 percent light trucks. Additionally, since by law, the minimum domestic passenger car standard (MDPCS) is "92 percent of the average fuel economy projected by the Secretary for the combined domestic and non-domestic passenger automobile fleets," they are also updated. However, NHTSA is retaining a "1.9 percent offset" to the MDPCS because of past undercompliance with the standard, keeping a roll back of the Trump administration. On July 28, 2023 NHTSA proposed new fuel economy targets for light-duty vehicles for the 2027—2031 model years, as well as heavy-duty trucks and vans for the 2030—2035 model years. The preferred alternative calls for 2 percent annual increases for cars, 4 percent for light trucks, and 10 percent for heavy-duty trucks and vans, among other regulatory changes. Active debate There continues to be an active debate on the safety, costs, and impact of the CAFE program. Effect on traffic safety NHTSA has expressed concerns that automotive manufacturers would increase mileage by reducing vehicle weight, which might lead to weight disparities in the vehicle population and increased danger for occupants of lighter vehicles. 
According to the Insurance Institute for Highway Safety (IIHS) in May 2020, "the smallest late-model cars remain the most dangerous, according to the most recent driver death rates." A National Research Council report found that the standards implemented in the 1970s and 1980s "probably resulted in an additional 1,300 to 2,600 traffic fatalities in 1993." A Harvard Center for Risk Analysis study found that CAFE standards led to "2,200 to 3,900 additional fatalities to motorists per year." The Insurance Institute for Highway Safety's 2007 data show a correlation of about 250–500 fatalities per year per MPG. In a 2007 analysis, IIHS found that 50 percent of fatalities in small four-door vehicles were single-vehicle crashes, compared to 83 percent in very large SUVs. The Mini Cooper had a driver fatality rate of 68 per million vehicle-years (multi-vehicle, single-vehicle, and rollover) compared to 115 for the Ford Excursion, which has a high proportion of fatalities from vehicle rollover. The Toyota Matrix was even lower at 44, while the rollover-prone Chevrolet S-10 Blazer two-door was 232. The Nissan 350Z sports car (193) and the mechanically similar Nissan Altima sedan (79) show that driving style cannot be isolated from engineering in these results. The conclusions of the analysis include findings that death rates generally are higher in lighter vehicles, but cars almost always have lower death rates than SUVs or pickup trucks of comparable weight. Against this evidence, proponents of higher CAFE standards argue that it is the "footprint" model of CAFE for trucks that encourages production of larger trucks with concomitant increases in vehicle weight disparities. A 2005 IIHS plot shows that in collisions between SUVs weighing 3,500 lb (1,600 kg) and cars, the car driver is more than 4 times more likely to be killed, and if the SUV weighs over 5,000 lb (2,300 kg) the car driver is 9 times more likely to be killed, with 16 percent of deaths occurring in car-to-car crashes and 18 percent in car-to-truck crashes. Recent studies find about 75 percent of two-vehicle fatalities involve a truck, and about half of these fatalities involve a side-impact crash. Risk to the driver of the other vehicle is almost 10 times higher when the vehicle is a one-ton pickup compared to an imported car. Proponents of higher CAFE standards also argue that the quality of the engineering design is the prime determinant of vehicular safety, not the vehicle's mass. In 2006, IIHS found that some of the smallest cars have good crash safety, and others do not. A 2003 Transportation Research Board study (pp. 17–21) shows greater safety disparities among vehicles of differing price, country of origin, and quality than among vehicles of different size and weight. A 2006 study discounts the importance of vehicle mass to traffic safety, pointing instead to the quality of engineering design as the primary factor. Economic arguments A key argument is that economic forces are responsible for fuel economy gains, and that higher fuel prices already drove customers to seek more fuel-efficient vehicles. The law of supply and demand predicts an increase in gasoline prices would lead in the long run to an increase in the average fuel economy of the U.S. passenger car fleet, and that a drop in gasoline prices would be associated with a reduction in the average fuel economy of the entire U.S.
fleet. Rather than mandating fuel economy increases, Charles Krauthammer advocated using a significant increase in gasoline taxes that would be revenue-neutral for the government. CAFE advocates assert that most of the gains in fuel economy over the past 30 years can be attributed to the standard itself. Economic research published in 2015 concludes that the standards give firms stronger incentives to innovate on fuel economy, while the associated costs for other safety considerations remain undetermined. According to the Transportation Research Board, the weakening of the 2022–2025 CAFE standards would make it much harder for the U.S. to avoid a two-degree-Celsius global warming scenario under the Paris Agreement, meaning substantially more effort would have to be made between 2025 and 2050 if the SAFE standard is administered in place of the original CAFE regulations. A study has found that the adoption of CAFE standards, if supported by government incentives, would accelerate the electric vehicle market. The U.S. could become less dependent on fossil fuels as a result of this shift toward EV adoption. Automaker viewpoints In the May 6, 2007, edition of Autoline Detroit, GM vice-chairman Bob Lutz, an automobile designer/executive of BMW and Big Three fame, asserted that the CAFE standard was a failure and said it was like trying to fight obesity by requiring tailors to make only small-sized clothes. In late 2007, Lutz called hybrid gasoline-electric vehicles the "ideal solution". Automakers have said that small, fuel-efficient vehicles cost the auto industry billions of dollars. They cost almost as much to design and market but cannot be sold for as much as larger vehicles such as SUVs, because consumers expect small cars to be inexpensive. Former GM chairman Rick Wagoner admitted in 2008 that he did not know which fuel efficiency technologies consumers really want, saying "we are moving fast with technologies like E‑85 (ethanol), all-electric, fuel cells, and a wide range of hybrid offers". Ethanol fuel, being studied by GM and other manufacturers, has a "gasoline gallon equivalency" (GGE) value of 1.5, i.e. to replace the energy of 1 volume of gasoline, 1.5 times the volume of ethanol is needed. To overcome this fact, Congress enacted the Alternative Motor Fuels Act (AMFA) in 1988, which grants CAFE credits for the manufacture of flexible-fuel vehicles. As an example of the formula, an alternative fuel vehicle that achieves 15 mpg fuel economy while operating on alcohol would have a CAFE calculated as follows: Fuel Economy = (1 / 0.15) × 15 mpg = 100 miles per gallon, where 0.15 is the AMFA factor, providing a substantial economic incentive for manufacturers of ethanol vehicles. NHTSA's public records show in 2005 that automakers publicly expressed doubts as to the economic practicality and feasibility of increased light truck CAFE standards. Toyota has invested heavily in developing the complex Hybrid Synergy Drive system, which allows the company to meet CAFE targets. Volkswagen embraced the rising CAFE standards and tailored its US product line with a fleet of economical, popular, inexpensive diesel vehicles, beginning in 2009. In 2014 Volkswagen registered an impressive CAFE of 34 mpg‑US (6.9 L/100 km; 41 mpg‑imp). The company even received green car subsidies and tax exemptions in the US.
This result was achieved by installing a defeat device in the electronic control unit of each vehicle, in what is now known as the 2015 Volkswagen emissions scandal. Tesla, a firm that makes vehicles like the 142 miles per gallon gasoline equivalent Tesla Model 3 Standard Range Plus, earned $428 million in AMFA CAFE credits paid to it by other manufacturers in Q2 2020, a new record. Consumer preferences Proponents of CAFE state that automobile-purchasing decisions that may have global effects should not be left up to a free society of individuals operating in a free market. The Insurance Companies' Highway Loss Data Institute publishes data showing that larger vehicles are more expensive to insure, so forcing consumers to purchase smaller vehicles is in their best interest. Automotive enthusiasts decry the Malaise era of auto design, partially brought on by CAFE. Some consumers felt so strongly that, by 1985, 66,900 individuals had purchased vehicles in the grey market to avoid the sluggish, unreliable vehicles mandated by the government. In 2003, Alliance of Automobile Manufacturers spokesman Eron Shosteck noted that automakers produce more than 30 models rated at 30 mpg or more for the U.S. market, and they are poor sellers, indicating that consumers do not prioritize fuel economy. In 2004, GM retiree Charles Amann said that consumers do not pick the weak-performing vehicle when given a choice of engines. Vehicle safety ratings are now made available to consumers by NHTSA and by the Insurance Institute for Highway Safety. A 2006 Consumer Reports survey concluded that fuel economy is the most important consideration in consumers' choice of vehicle, and a 2007 Pew Charitable Trusts survey found that nine out of ten Americans favor tougher CAFE standards, including 91% of Democrats and 85% of Republicans. In 2007, the 55 mpg Toyota Prius outsold the top-selling SUV, the 17 mpg Ford Explorer. In 1999, USA Today reported that small cars tend to depreciate faster than larger cars, so they are worth less to the consumer over time. However, 2007 Edmunds depreciation data show that some small cars, primarily premium models, are among the best in holding their value. SUVs and minivans created due to original mandate CAFE standards signaled the end of the traditional long station wagon, but Chrysler CEO Lee Iacocca developed the idea of marketing the minivan as a station wagon alternative, while certifying it in the separate truck category to allow compliance with less-strict CAFE standards. Eventually, this same idea led to the promotion of the SUV. The definitions for cars and trucks are not the same for fuel economy and emission standards. For example, a Chrysler PT Cruiser was defined as a car for emissions purposes and a truck for fuel economy purposes. Under the light truck fuel economy rules then in force, the PT Cruiser had a lower fuel economy target (28.05 mpg beginning in 2011) than it would if it were classified as a passenger car. Increased automobile usage As fuel efficiency rises, people may drive their cars more, which can offset some of the fuel savings and the decrease in carbon dioxide emissions from the higher standards. According to the National Academies report (page 19), a 10% improvement in fuel efficiency leads to an average increase in travel distance of 1–2%. This phenomenon is referred to as the "rebound effect". The report stated (page 20) that the fuel efficiency improvements of light-duty vehicles have reduced the overall U.S. emissions of CO2 by 7%.
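The rebound arithmetic quoted above can be made concrete with a short sketch. It is a minimal illustration only, using the National Academies figures cited in the text (a 10% efficiency gain and 1–2% extra travel); the function name and structure are illustrative rather than drawn from any cited source.

```python
# Sketch of the "rebound effect" arithmetic described above: a fuel-efficiency gain
# cuts fuel use per mile, but the extra driving it induces claws back part of the
# saving. Percentages follow the National Academies figures quoted in the text.

def net_fuel_savings(efficiency_gain: float, rebound_travel_increase: float) -> float:
    """Return the net fractional reduction in fuel consumed."""
    fuel_per_mile = 1.0 / (1.0 + efficiency_gain)   # a 10% mpg gain -> ~9.1% less fuel per mile
    miles_driven = 1.0 + rebound_travel_increase    # 1-2% more driving (the rebound)
    return 1.0 - fuel_per_mile * miles_driven

for rebound in (0.01, 0.02):
    print(f"10% mpg gain, {rebound:.0%} rebound -> "
          f"{net_fuel_savings(0.10, rebound):.1%} net fuel saved")
# Without any rebound the saving would be 1 - 1/1.1 = 9.1%; with 1-2% extra driving
# it shrinks to roughly 8.2-7.3%.
```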
Technological considerations There are a large number of technologies that manufacturers can apply to improve fuel efficiency short of implementing hybrid or plug-in hybrid technologies. The National Research Council estimated that, applied aggressively at a cost of a few thousand dollars per vehicle, these technologies can almost double fuel economy versus a 2008 model year baseline vehicle. Some technologies, such as multi-valve cylinders, are already widely applied in cars, but not trucks. Manufacturers dispute how effective these technologies are, their retail price, and how willing customers are to pay for these improvements. Payback on these improvements is highly dependent on fuel prices. Calculations of MPG overestimated The United States Environmental Protection Agency (EPA) laboratory measurements of MPG had consistently overestimated the fuel economy of gasoline vehicles and underestimated that of diesel vehicles. John DeCicco, an automotive expert for the Environmental Defense Fund (EDF), estimated that this results in about 20% higher actual consumption than measured CAFE goals. Starting with 2008-model vehicles, the EPA has adopted a new protocol for estimating the MPG figures presented to consumers. The new protocol includes driving cycles more closely representative of today's traffic and road conditions, as well as increased air conditioner usage. This change does not affect how the EPA calculates CAFE ratings; the new protocol changes only the mileage estimates provided for consumer information. Low penalty Some critics argue that CAFE fines do not seem to have much impact on the drive for fuel economy. As noted in the 2007 United States Government Accountability Office Report to the Chairman of the U.S. Senate Committee on Commerce, Science, and Transportation (page 23), "Several experts stated that this [the penalties] is not enough of a monetary incentive for manufacturers to comply with CAFE." For example, over 25 years, from 1983 to 2008, Mercedes-Benz paid penalties 21 times and BMW paid penalties 20 times. From the 1997 through the 2018 model year, the CAFE penalty was US$55 per vehicle for every 1 MPG under the standard. For 2006, Mercedes-Benz drew a $30.3 million penalty for violating fuel economy standards by 2.2 MPG, or $122 per vehicle. According to the "fueleconomy.gov" website operated by the EPA, violating CAFE by 2.42 MPG means consuming an extra 27 barrels (4.3 m3) (1,134 US gallons (4,290 L)) of mostly imported fuel in 10 years, worth $3,490 (based on 45% highway, 55% city driving for 15,000 mi (24,140 km) annually, at a fuel price of $2.95 per gallon), which is 13.4% more than the target, and emitting an extra 14 tons of CO2 in 10 years, which is 12.7% more. These numbers are based on a comparison of the 2010 Mercedes ML 350 4MATIC, with a CAFE unadjusted average fuel economy of 21.64 MPG (which met the 2006 CAFE requirement of 21.6 MPG), and the 2010 Mercedes ML 550 4MATIC, with a CAFE unadjusted average fuel economy of 19.22 MPG. So spending an extra $3,490 on mostly imported fuel and emitting an extra 14 tons of CO2 draws a penalty of only $122 for a single luxury car buyer, which is only 0.3% of the price of a $40,000 car (the average 2010 price of a luxury car). Several experts stated that this is not enough of a monetary incentive to comply with CAFE. The CAFE penalty had increased only 10% since 1983, the year it was first implemented, while cumulative inflation has exceeded 150%.
Thus, in real terms, the CAFE penalty in 2019 was less than 40% of what it was in 1983. NHTSA officials stated that, in addition to its authority under the Federal Civil Penalties Inflation Adjustment Act of 1990, NHTSA has authority under the EPCA to raise CAFE penalties to $100 per mpg of shortfall. However, NHTSA currently does not exercise this authority. In fact, in 2015 Congress required federal agencies to adjust civil penalties for inflation (Public Law 114–74), but NHTSA under Heidi King unlawfully delayed its implementation. In 2022 NHTSA reinstated the inflation adjustment that was made in 2016 and began making annual inflation adjustments as required by law. For the 2019 through 2021 model years the rate is $14 per 0.1 MPG, increasing to $15 per 0.1 MPG for the 2022 model year. Stellantis was the first manufacturer fined under the new rate.
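A minimal sketch of the penalty arithmetic discussed in this section, using the per-mpg rates quoted above; the fleet size and shortfall are illustrative inputs rather than figures from the cited sources.

```python
# Sketch of the CAFE civil-penalty arithmetic. The $55-per-mpg historical rate and
# the $15 per 0.1 mpg post-2022 rate are the figures quoted in the text; the fleet
# size and shortfall below are hypothetical inputs for illustration only.

def cafe_penalty(shortfall_mpg: float, fleet_size: int, rate_per_tenth_mpg: float) -> float:
    """Total civil penalty: the rate applies per 0.1 mpg of shortfall, per vehicle."""
    return (shortfall_mpg / 0.1) * rate_per_tenth_mpg * fleet_size

# Historical rate of $55 per full mpg, i.e. $5.50 per 0.1 mpg:
print(cafe_penalty(2.2, 250_000, 5.5))    # ~ $30.25 million, close to the $30.3 million
                                          # 2006 Mercedes-Benz figure cited above
# Post-2022 rate of $15 per 0.1 mpg, same hypothetical shortfall and fleet:
print(cafe_penalty(2.2, 250_000, 15.0))   # ~ $82.5 million
```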
nitrous oxide
Nitrous oxide (dinitrogen oxide or dinitrogen monoxide), commonly known as laughing gas, nitrous, nitro, or nos, is a chemical compound, an oxide of nitrogen with the formula N2O. At room temperature, it is a colourless non-flammable gas, and has a slightly sweet scent and taste. At elevated temperatures, nitrous oxide is a powerful oxidiser similar to molecular oxygen. Nitrous oxide has significant medical uses, especially in surgery and dentistry, for its anaesthetic and pain-reducing effects. Its colloquial name, "laughing gas", coined by Humphry Davy, is due to the euphoric effects upon inhaling it, a property that has led to its recreational use as a dissociative anaesthetic. It is on the World Health Organization's List of Essential Medicines. It is also used as an oxidiser in rocket propellants, and in motor racing to increase the power output of engines. Nitrous oxide's atmospheric concentration reached 333 parts per billion (ppb) in 2020, increasing at a rate of about 1 ppb annually. It is a major scavenger of stratospheric ozone, with an impact comparable to that of CFCs. Global accounting of N2O sources and sinks over the decade ending 2016 indicates that about 40% of the average 17 TgN/yr (teragrams, or million metric tons, of nitrogen per year) of emissions originated from human activity, and shows that emissions growth chiefly came from expanding agriculture. Being the third most important greenhouse gas, nitrous oxide also substantially contributes to global warming.Nitrous oxide is used as a propellant, and has a variety of applications from rocketry to making whipped cream. It is used as a recreational drug for its potential to induce a brief "high". Most recreational users are unaware of its neurotoxic effects when abused. When used chronically, nitrous oxide has the potential to cause neurological damage through inactivation of vitamin B12. Uses Rocket motors Nitrous oxide may be used as an oxidiser in a rocket motor. It has advantages over other oxidisers in that it is much less toxic, and because of its stability at room temperature, it is also easier to store and relatively safe to carry on a flight. As a secondary benefit, it may be decomposed readily to form breathing air. Its high density and low storage pressure (when maintained at low temperatures) enable it to be highly competitive with stored high-pressure gas systems.In a 1914 patent, American rocket pioneer Robert Goddard suggested nitrous oxide and gasoline as possible propellants for a liquid-fuelled rocket. Nitrous oxide has been the oxidiser of choice in several hybrid rocket designs (using solid fuel with a liquid or gaseous oxidiser). The combination of nitrous oxide with hydroxyl-terminated polybutadiene fuel has been used by SpaceShipOne and others. It also is notably used in amateur and high power rocketry with various plastics as the fuel. Nitrous oxide also may be used in a monopropellant rocket. In the presence of a heated catalyst, N2O will decompose exothermically into nitrogen and oxygen, at a temperature of approximately 1,070 °F (577 °C). Because of the large heat release, the catalytic action rapidly becomes secondary, as thermal autodecomposition becomes dominant. In a vacuum thruster, this may provide a monopropellant specific impulse (Isp) of as much as 180 s. While noticeably less than the Isp available from hydrazine thrusters (monopropellant or bipropellant with dinitrogen tetroxide), the decreased toxicity makes nitrous oxide an option worth investigating. 
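As a rough sense of what the specific impulse figure above means, the standard relation between specific impulse and effective exhaust velocity (ve = Isp × g0) can be applied; the nitrous oxide value is the one quoted in the text, while the hydrazine comparison value is an assumed typical figure, not taken from this article.

```python
# Effective exhaust velocity from specific impulse via the standard relation
# v_e = Isp * g0. The ~180 s nitrous oxide figure is from the text; the ~230 s
# hydrazine figure is an assumed typical value for comparison.

G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds: float) -> float:
    """Effective exhaust velocity in m/s for a given specific impulse in seconds."""
    return isp_seconds * G0

print(f"N2O monopropellant (~180 s): {exhaust_velocity(180):.0f} m/s")            # ~1765 m/s
print(f"Hydrazine monopropellant (~230 s, assumed): {exhaust_velocity(230):.0f} m/s")
```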
Nitrous oxide is said to deflagrate at approximately 600 °C (1,112 °F) at a pressure of 309 psi (21 atmospheres). At 600 psi, for example, the required ignition energy is only 6 joules, whereas N2O at 130 psi a 2,500-joule ignition energy input is insufficient. Internal combustion engine In vehicle racing, nitrous oxide (often called "nitrous") allows the engine to burn more fuel by providing more oxygen during combustion. The increase in oxygen allows an increase in the injection of fuel, allowing the engine to produce more engine power. The gas is not flammable at a low pressure/temperature, but it delivers more oxygen than atmospheric air by breaking down at elevated temperatures, about 570 degrees F (~300C). Therefore, it often is mixed with another fuel that is easier to deflagrate. Nitrous oxide is a strong oxidising agent, roughly equivalent to hydrogen peroxide, and much stronger than oxygen gas. Nitrous oxide is stored as a compressed liquid; the evaporation and expansion of liquid nitrous oxide in the intake manifold causes a large drop in intake charge temperature, resulting in a denser charge, further allowing more air/fuel mixture to enter the cylinder. Sometimes nitrous oxide is injected into (or prior to) the intake manifold, whereas other systems directly inject, right before the cylinder (direct port injection) to increase power. The technique was used during World War II by Luftwaffe aircraft with the GM-1 system to boost the power output of aircraft engines. Originally meant to provide the Luftwaffe standard aircraft with superior high-altitude performance, technological considerations limited its use to extremely high altitudes. Accordingly, it was only used by specialised planes such as high-altitude reconnaissance aircraft, high-speed bombers and high-altitude interceptor aircraft. It sometimes could be found on Luftwaffe aircraft also fitted with another engine-boost system, MW 50, a form of water injection for aviation engines that used methanol for its boost capabilities. One of the major problems of using nitrous oxide in a reciprocating engine is that it can produce enough power to damage or destroy the engine. Very large power increases are possible, and if the mechanical structure of the engine is not properly reinforced, the engine may be severely damaged or destroyed during this type of operation. It is important with nitrous oxide augmentation of petrol engines to maintain proper operating temperatures and fuel levels to prevent "pre-ignition", or "detonation" (sometimes referred to as "knock"). Most problems that are associated with nitrous oxide do not come from mechanical failure due to the power increases. Since nitrous oxide allows a much denser charge into the cylinder, it dramatically increases cylinder pressures. The increased pressure and temperature can cause problems such as melting the pistons or valves. It also may crack or warp the piston or cylinder head and cause pre-ignition due to uneven heating. Automotive-grade liquid nitrous oxide differs slightly from medical-grade nitrous oxide. A small amount of sulfur dioxide (SO2) is added to prevent substance abuse. Aerosol propellant The gas is approved for use as a food additive (E number: E942), specifically as an aerosol spray propellant. Its most common uses in this context are in aerosol whipped cream canisters and cooking sprays. The gas is extremely soluble in fatty compounds. 
In aerosol whipped cream, it is dissolved in the fatty cream until it leaves the can, when it becomes gaseous and thus creates foam. Used in this way, it produces whipped cream which is four times the volume of the liquid, whereas whipping air into cream only produces twice the volume. If air were used as a propellant, oxygen would accelerate rancidification of the butterfat, but nitrous oxide inhibits such degradation. Carbon dioxide cannot be used for whipped cream because it is acidic in water, which would curdle the cream and give it a seltzer-like "sparkling" sensation. The whipped cream produced with nitrous oxide is unstable, and will return to a more liquid state within half an hour to one hour. Thus, the method is not suitable for decorating food that will not be served immediately. In December 2016, some manufacturers reported a shortage of aerosol whipped creams in the United States due to an explosion at the Air Liquide nitrous oxide facility in Florida in late August. With a major facility offline, the disruption caused a shortage resulting in the company diverting the supply of nitrous oxide to medical clients rather than to food manufacturing. The shortage came during the Christmas and holiday season when canned whipped cream use is normally at its highest.Similarly, cooking spray, which is made from various types of oils combined with lecithin (an emulsifier), may use nitrous oxide as a propellant. Other propellants used in cooking spray include food-grade alcohol and propane. Medicine Nitrous oxide has been used in dentistry and surgery, as an anaesthetic and analgesic, since 1844. In the early days, the gas was administered through simple inhalers consisting of a breathing bag made of rubber cloth. Today, the gas is administered in hospitals by means of an automated relative analgesia machine, with an anaesthetic vaporiser and a medical ventilator, that delivers a precisely dosed and breath-actuated flow of nitrous oxide mixed with oxygen in a 2:1 ratio. Nitrous oxide is a weak general anaesthetic, and so is generally not used alone in general anaesthesia, but used as a carrier gas (mixed with oxygen) for more powerful general anaesthetic drugs such as sevoflurane or desflurane. It has a minimum alveolar concentration of 105% and a blood/gas partition coefficient of 0.46. The use of nitrous oxide in anaesthesia can increase the risk of postoperative nausea and vomiting.Dentists use a simpler machine which only delivers an N2O/O2 mixture for the patient to inhale while conscious but must still be a recognised purpose designed dedicated relative analgesic flowmeter with a minimum 30% of oxygen at all times and a maximum upper limit of 70% nitrous oxide. The patient is kept conscious throughout the procedure, and retains adequate mental faculties to respond to questions and instructions from the dentist.Inhalation of nitrous oxide is used frequently to relieve pain associated with childbirth, trauma, oral surgery and acute coronary syndrome (including heart attacks). Its use during labour has been shown to be a safe and effective aid for birthing women. Its use for acute coronary syndrome is of unknown benefit.In Britain and Canada, Entonox and Nitronox are used commonly by ambulance crews (including unregistered practitioners) as rapid and highly effective analgesic gas. 
Fifty percent nitrous oxide can be considered for use by trained non-professional first aid responders in prehospital settings, given the relative ease and safety of administering 50% nitrous oxide as an analgesic. The rapid reversibility of its effect would also prevent it from precluding diagnosis. Recreational use Recreational inhalation of nitrous oxide, with the purpose of causing euphoria and/or slight hallucinations, began as a phenomenon for the British upper class in 1799, known as "laughing gas parties".Starting in the 19th century, the widespread availability of the gas for medical and culinary purposes allowed for recreational use to expand greatly globally. In the UK as of 2014, nitrous oxide was estimated to be used by almost half a million young people at nightspots, festivals and parties.Widespread recreational use of the drug throughout the UK was featured in the 2017 Vice documentary Inside The Laughing Gas Black Market, in which journalist Matt Shea met with dealers of the drug who stole it from hospitals.A significant issue cited in London's press is the effect of nitrous oxide canister littering, which is highly visible and causes significant complaints from communities.While casual use of nitrous oxide is understood by most recreational users to be a route to a "safe high", many are unaware that excessive consumption has the potential to cause neurological harm which, if left untreated, can result in permanent neurological damage. In Australia, recreation use became a public health concern following a rise in reported cases of neurotoxicity and a rise in emergency room admissions, and in (the state of) South Australia legislation was passed in 2020 to restrict canister sales. Safety Nitrous oxide is a significant occupational hazard for surgeons, dentists and nurses. Because nitrous oxide is minimally metabolised in humans (with a rate of 0.004%), it retains its potency when exhaled into the room by the patient, and can pose an intoxicating and prolonged exposure hazard to the clinic staff if the room is poorly ventilated. Where nitrous oxide is administered, a continuous-flow fresh-air ventilation system or N2O scavenger system is used to prevent a waste-gas buildup.The National Institute for Occupational Safety and Health recommends that workers' exposure to nitrous oxide should be controlled during the administration of anaesthetic gas in medical, dental and veterinary operators. It set a recommended exposure limit (REL) of 25 ppm (46 mg/m3) to escaped anaesthetic. Mental and manual impairment Exposure to nitrous oxide causes short-term decreases in mental performance, audiovisual ability and manual dexterity. These effects coupled with the induced spatial and temporal disorientation could result in physical harm to the user from environmental hazards. Neurotoxicity and neuroprotection Nitrous oxide is neurotoxic and there is evidence that medium or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated.Like other NMDA receptor antagonists, it has been suggested that N2O produces neurotoxicity in the form of Olney's lesions in rodents upon prolonged (several hour) exposure. It has been argued that, because N2O is rapidly expelled from the body under normal circumstances, it is less likely to be neurotoxic than other NMDAR antagonists. 
Indeed, in rodents, short-term exposure results in only mild injury that is rapidly reversible, and neuronal death occurs only after constant and sustained exposure. Nitrous oxide also may cause neurotoxicity after extended exposure because of hypoxia. This is especially true of non-medical formulations such as whipped-cream chargers (also known as "whippets" or "nangs"), which never contain oxygen, since oxygen makes cream rancid.In heavy (≥400 g or ≥200 L of N2O gas in one session) or frequent (regular, i.e., daily or weekly) users reported to poison control centers, signs of peripheral neuropathy have been noted: the presence of ataxia (gait abnormalities) or paresthesia (perception of abnormal sensations, e.g. tingling, numbness, prickling, mostly in the extremities). These are considered an early sign of neurological damage and indicates chronic toxicity.Nitrous oxide at 75% by volume reduces ischemia-induced neuronal death induced by occlusion of the middle cerebral artery in rodents, and decreases NMDA-induced Ca2+ influx in neuronal cell cultures, a critical event involved in excitotoxicity. DNA damage Occupational exposure to ambient nitrous oxide has been associated with DNA damage, due to interruptions in DNA synthesis. This correlation is dose-dependent and does not appear to extend to casual recreational use; however, further research is needed to confirm the duration and quantity of exposure needed to cause damage. Oxygen deprivation If pure nitrous oxide is inhaled without oxygen, oxygen deprivation can occur, resulting in low blood pressure, fainting, and even heart attacks. This can occur if the user inhales large quantities continuously, as with a strap-on mask connected to a gas canister. It can also happen if the user engages in excessive breath-holding or uses any other inhalation system that cuts off a supply of fresh air. Vitamin B12 deficiency Long-term exposure to nitrous oxide may cause vitamin B12 deficiency. This can cause serious neurotoxicity if the user has preexisting vitamin B12 deficiency. It inactivates the cobalamin form of vitamin B12 by oxidation. Symptoms of vitamin B12 deficiency, including sensory neuropathy, myelopathy and encephalopathy, may occur within days or weeks of exposure to nitrous oxide anaesthesia in people with subclinical vitamin B12 deficiency. Symptoms are treated with high doses of vitamin B12, but recovery can be slow and incomplete.People with normal vitamin B12 levels have stores to make the effects of nitrous oxide insignificant, unless exposure is repeated and prolonged (nitrous oxide abuse). Vitamin B12 levels should be checked in people with risk factors for vitamin B12 deficiency prior to using nitrous oxide anaesthesia. Prenatal development Several experimental studies in rats indicate that chronic exposure of pregnant females to nitrous oxide may have adverse effects on the developing fetus. Chemical/physical risks At room temperature (20 °C [68 °F]) the saturated vapour pressure is 50.525 bar, rising up to 72.45 bar at 36.4 °C (97.5 °F)—the critical temperature. The pressure curve is thus unusually sensitive to temperature.As with many strong oxidisers, contamination of parts with fuels have been implicated in rocketry accidents, where small quantities of nitrous/fuel mixtures explode due to "water hammer"-like effects (sometimes called "dieseling"—heating due to adiabatic compression of gases can reach decomposition temperatures). 
Some common building materials such as stainless steel and aluminium can act as fuels with strong oxidisers such as nitrous oxide, as can contaminants that may ignite due to adiabatic compression.There also have been incidents where nitrous oxide decomposition in plumbing has led to the explosion of large tanks. Mechanism of action The pharmacological mechanism of action of N2O in medicine is not fully known. However, it has been shown to directly modulate a broad range of ligand-gated ion channels, and this likely plays a major role in many of its effects. It moderately blocks NMDAR and β2-subunit-containing nACh channels, weakly inhibits AMPA, kainate, GABAC and 5-HT3 receptors, and slightly potentiates GABAA and glycine receptors. It also has been shown to activate two-pore-domain K+ channels. While N2O affects quite a few ion channels, its anaesthetic, hallucinogenic and euphoriant effects are likely caused predominantly, or fully, via inhibition of NMDA receptor-mediated currents. In addition to its effects on ion channels, N2O may act to imitate nitric oxide (NO) in the central nervous system, and this may be related to its analgesic and anxiolytic properties. Nitrous oxide is 30 to 40 times more soluble than nitrogen. The effects of inhaling sub-anaesthetic doses of nitrous oxide have been known to vary, based on several factors, including settings and individual differences; however, from his discussion, Jay (2008) suggests that it has been reliably known to induce the following states and sensations: Intoxication Euphoria/dysphoria Spatial disorientation Temporal disorientation Reduced pain sensitivityA minority of users also will present with uncontrolled vocalisations and muscular spasms. These effects generally disappear minutes after removal of the nitrous oxide source. Anxiolytic effect In behavioural tests of anxiety, a low dose of N2O is an effective anxiolytic, and this anti-anxiety effect is associated with enhanced activity of GABAA receptors, as it is partially reversed by benzodiazepine receptor antagonists. Mirroring this, animals that have developed tolerance to the anxiolytic effects of benzodiazepines are partially tolerant to N2O. Indeed, in humans given 30% N2O, benzodiazepine receptor antagonists reduced the subjective reports of feeling "high", but did not alter psychomotor performance, in human clinical studies. Analgesic effect The analgesic effects of N2O are linked to the interaction between the endogenous opioid system and the descending noradrenergic system. When animals are given morphine chronically, they develop tolerance to its pain-killing effects, and this also renders the animals tolerant to the analgesic effects of N2O. Administration of antibodies that bind and block the activity of some endogenous opioids (not β-endorphin) also block the antinociceptive effects of N2O. Drugs that inhibit the breakdown of endogenous opioids also potentiate the antinociceptive effects of N2O. Several experiments have shown that opioid receptor antagonists applied directly to the brain block the antinociceptive effects of N2O, but these drugs have no effect when injected into the spinal cord. Apart from an indirect action, nitrous oxide, like morphine also interacts directly with the endogenous opioid system by binding at opioid receptor binding sites.Conversely, α2-adrenoceptor antagonists block the pain-reducing effects of N2O when given directly to the spinal cord, but not when applied directly to the brain. 
Indeed, α2B-adrenoceptor knockout mice or animals depleted in norepinephrine are nearly completely resistant to the antinociceptive effects of N2O. Apparently N2O-induced release of endogenous opioids causes disinhibition of brainstem noradrenergic neurons, which release norepinephrine into the spinal cord and inhibit pain signalling. Exactly how N2O causes the release of endogenous opioid peptides remains uncertain. Properties and reactions Nitrous oxide is a colourless gas with a faint, sweet odour. Nitrous oxide supports combustion by releasing the dipolar bonded oxygen radical, and can thus relight a glowing splint. N2O is inert at room temperature and has few reactions. At elevated temperatures, its reactivity increases. For example, nitrous oxide reacts with NaNH2 at 460 K (187 °C) to give NaN3:
2 NaNH2 + N2O → NaN3 + NaOH + NH3
The above reaction is the route adopted by the commercial chemical industry to produce azide salts, which are used as detonators. History The gas was first synthesised in 1772 by English natural philosopher and chemist Joseph Priestley, who called it dephlogisticated nitrous air (see phlogiston theory) or inflammable nitrous air. Priestley published his discovery in the book Experiments and Observations on Different Kinds of Air (1775), where he described how to prepare "nitrous air diminished" by heating iron filings dampened with nitric acid. Early use The first important use of nitrous oxide was made possible by Thomas Beddoes and James Watt, who worked together to publish the book Considerations on the Medical Use and on the Production of Factitious Airs (1794). This book was important for two reasons. First, James Watt had invented a novel machine to produce "factitious airs" (including nitrous oxide) and a novel "breathing apparatus" to inhale the gas. Second, the book also presented the new medical theories of Thomas Beddoes, that tuberculosis and other lung diseases could be treated by inhalation of "Factitious Airs". The machine to produce "Factitious Airs" had three parts: a furnace to burn the needed material, a vessel with water through which the produced gas passed in a spiral pipe (for impurities to be "washed off"), and finally the gas cylinder with a gasometer where the gas produced, the "air", could be tapped into portable air bags (made of airtight oily silk). The breathing apparatus consisted of one of the portable air bags connected with a tube to a mouthpiece. With this new equipment engineered and produced by 1794, the way was paved for clinical trials, which began in 1798 when Thomas Beddoes established the "Pneumatic Institution for Relieving Diseases by Medical Airs" in Hotwells (Bristol). In the basement of the building, a large-scale machine produced the gases under the supervision of a young Humphry Davy, who was encouraged to experiment with new gases for patients to inhale. Davy's first important work was an examination of nitrous oxide and the publication of his results in the book Researches, Chemical and Philosophical (1800). In that publication, Davy notes the analgesic effect of nitrous oxide on page 465 and its potential to be used for surgical operations on page 556. Davy coined the name "laughing gas" for nitrous oxide. Despite Davy's discovery that inhalation of nitrous oxide could relieve a conscious person from pain, another 44 years elapsed before doctors attempted to use it for anaesthesia.
The use of nitrous oxide as a recreational drug at "laughing gas parties", primarily arranged for the British upper class, became an immediate success beginning in 1799. While the effects of the gas generally make the user appear stuporous, dreamy and sedated, some people also "get the giggles" in a state of euphoria, and frequently erupt in laughter. One of the earliest commercial producers in the U.S. was George Poe, cousin of the poet Edgar Allan Poe, who also was the first to liquefy the gas. Anaesthetic use The first time nitrous oxide was used as an anaesthetic drug in the treatment of a patient was when dentist Horace Wells, with assistance from Gardner Quincy Colton and John Mankey Riggs, demonstrated insensitivity to pain from a dental extraction on 11 December 1844. In the following weeks, Wells treated the first 12 to 15 patients with nitrous oxide in Hartford, Connecticut, and, according to his own record, only failed in two cases. Although Wells reported these convincing results to the medical society in Boston in December 1844, the new method was not immediately adopted by other dentists. The reason for this was most likely that Wells, in January 1845 at his first public demonstration to the medical faculty in Boston, had been partly unsuccessful, leaving his colleagues doubtful regarding its efficacy and safety. The method did not come into general use until 1863, when Gardner Quincy Colton successfully started to use it in all his "Colton Dental Association" clinics that he had just established in New Haven and New York City. Over the following three years, Colton and his associates successfully administered nitrous oxide to more than 25,000 patients. Today, nitrous oxide is used in dentistry as an anxiolytic, as an adjunct to local anaesthetic. Nitrous oxide was not found to be a strong enough anaesthetic for use in major surgery in hospital settings, however. Instead, diethyl ether, being a stronger and more potent anaesthetic, was demonstrated and accepted for use in October 1846, along with chloroform in 1847. When Joseph Thomas Clover invented the "gas-ether inhaler" in 1876, however, it became common practice at hospitals to initiate all anaesthetic treatments with a mild flow of nitrous oxide, and then gradually increase the anaesthesia with the stronger ether or chloroform. Clover's gas-ether inhaler was designed to supply the patient with nitrous oxide and ether at the same time, with the exact mixture being controlled by the operator of the device. It remained in use by many hospitals until the 1930s. Although hospitals today use a more advanced anaesthetic machine, these machines still use the same principle introduced with Clover's gas-ether inhaler: to initiate the anaesthesia with nitrous oxide before the administration of a more powerful anaesthetic. As a patent medicine Colton's popularisation of nitrous oxide led to its adoption by a number of less than reputable quacksalvers, who touted it as a cure for consumption, scrofula, catarrh and other diseases of the blood, throat and lungs. Nitrous oxide treatment was administered and licensed as a patent medicine by the likes of C. L. Blood and Jerome Harris in Boston and Charles E. Barney of Chicago. Production A review of various methods of producing nitrous oxide has been published. Industrial methods Nitrous oxide is prepared on an industrial scale by carefully heating ammonium nitrate at about 250 °C, which decomposes into nitrous oxide and water vapour:
NH4NO3 → 2 H2O + N2O
The addition of various phosphate salts favours formation of a purer gas at slightly lower temperatures. This reaction may be difficult to control, resulting in detonation. Laboratory methods The decomposition of ammonium nitrate is also a common laboratory method for preparing the gas. Equivalently, it can be obtained by heating a mixture of sodium nitrate and ammonium sulfate:
2 NaNO3 + (NH4)2SO4 → Na2SO4 + 2 N2O + 4 H2O
Another method involves the reaction of urea, nitric acid and sulfuric acid:
2 (NH2)2CO + 2 HNO3 + H2SO4 → 2 N2O + 2 CO2 + (NH4)2SO4 + 2 H2O
Direct oxidation of ammonia with a manganese dioxide–bismuth oxide catalyst has been reported (cf. Ostwald process):
2 NH3 + 2 O2 → N2O + 3 H2O
Hydroxylammonium chloride reacts with sodium nitrite to give nitrous oxide. If the nitrite is added to the hydroxylamine solution, the only remaining by-product is salt water. If the hydroxylamine solution is added to the nitrite solution (nitrite is in excess), however, then toxic higher oxides of nitrogen also are formed:
NH3OHCl + NaNO2 → N2O + NaCl + 2 H2O
Treating HNO3 with SnCl2 and HCl also has been demonstrated:
2 HNO3 + 8 HCl + 4 SnCl2 → 5 H2O + 4 SnCl4 + N2O
Hyponitrous acid decomposes to N2O and water with a half-life of 16 days at 25 °C at pH 1–3:
H2N2O2 → H2O + N2O
Atmospheric occurrence Nitrous oxide is a minor component of Earth's atmosphere and is an active part of the planetary nitrogen cycle. Based on analysis of air samples gathered from sites around the world, its concentration surpassed 330 ppb in 2017. The growth rate of about 1 ppb per year has also accelerated during recent decades. Nitrous oxide's atmospheric abundance has grown more than 20% from a base level of about 270 ppb in year 1750. In 2022 the IPCC reported that: "The human perturbation of the natural nitrogen cycle through the use of synthetic fertilizers and manure, as well as nitrogen deposition resulting from land-based agriculture and fossil fuel burning has been the largest driver of the increase in atmospheric N2O of 31.0 ± 0.5 ppb (10%) between 1980 and 2019." Emissions by source 17.0 (12.2 to 23.5) million tonnes total annual average nitrogen in N2O was emitted in 2007–2016. About 40% of N2O emissions are from humans and the rest are part of the natural nitrogen cycle. The N2O emitted each year by humans has a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide: for comparison humans emitted 37 billion tonnes of actual carbon dioxide in 2019, and methane equivalent to 9 billion tonnes of carbon dioxide. Most of the N2O emitted into the atmosphere, from natural and anthropogenic sources, is produced by microorganisms such as denitrifying bacteria and fungi in soils and oceans. Soils under natural vegetation are an important source of nitrous oxide, accounting for 60% of all naturally produced emissions.
Other natural sources include the oceans (35%) and atmospheric chemical reactions (5%). Wetlands can also be emitters of nitrous oxide. Emissions from thawing permafrost may be significant, but as of 2022 this is not certain. The main components of anthropogenic emissions are fertilised agricultural soils and livestock manure (42%), runoff and leaching of fertilisers (25%), biomass burning (10%), fossil fuel combustion and industrial processes (10%), biological degradation of other nitrogen-containing atmospheric emissions (9%) and human sewage (5%). Agriculture enhances nitrous oxide production through soil cultivation, the use of nitrogen fertilisers and animal waste handling. These activities stimulate naturally occurring bacteria to produce more nitrous oxide. Nitrous oxide emissions from soil can be challenging to measure as they vary markedly over time and space, and the majority of a year's emissions may occur when conditions are favorable during "hot moments" and/or at favorable locations known as "hotspots". Among industrial emissions, nitric acid and adipic acid production are the largest sources of nitrous oxide emissions. The adipic acid emissions specifically arise from the degradation of the nitrolic acid intermediate derived from the nitration of cyclohexanone. Biological processes Natural processes that generate nitrous oxide may be classified as nitrification and denitrification. Specifically, they include:
aerobic autotrophic nitrification, the stepwise oxidation of ammonia (NH3) to nitrite (NO2−) and then to nitrate (NO3−)
anaerobic heterotrophic denitrification, the stepwise reduction of NO3− to NO2−, nitric oxide (NO), N2O and ultimately N2, in which facultative anaerobic bacteria use NO3− as an electron acceptor in the respiration of organic material under conditions of insufficient oxygen (O2)
nitrifier denitrification, which is carried out by autotrophic NH3-oxidising bacteria and is the pathway whereby ammonia (NH3) is oxidised to nitrite (NO2−), followed by the reduction of NO2− to nitric oxide (NO), N2O and molecular nitrogen (N2)
heterotrophic nitrification
aerobic denitrification by the same heterotrophic nitrifiers
fungal denitrification
non-biological chemodenitrification
These processes are affected by soil chemical and physical properties such as the availability of mineral nitrogen and organic matter, acidity and soil type, as well as climate-related factors such as soil temperature and water content. The emission of the gas to the atmosphere is limited greatly by its consumption inside the cells, by a process catalysed by the enzyme nitrous oxide reductase. Environmental impact Greenhouse effect Nitrous oxide has significant global warming potential as a greenhouse gas. Considered over a 100-year period and per unit mass, nitrous oxide has 265 times the atmospheric heat-trapping ability of carbon dioxide (CO2). However, because of its low concentration (less than 1/1,000 of that of CO2), its contribution to the greenhouse effect is less than one third that of carbon dioxide, and also less than that of water vapour and methane. On the other hand, since about 40% of the N2O entering the atmosphere is the result of human activity, control of nitrous oxide is considered part of efforts to curb greenhouse gas emissions. Most human-caused nitrous oxide released into the atmosphere is from agriculture, when farmers add nitrogen-based fertilizers onto the fields, and through the breakdown of animal manure.
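The CO2-equivalent comparison quoted earlier, that human N2O emissions have a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide per year, can be reproduced from the figures given in the text. The short sketch below combines the roughly 17 TgN/yr total, the ~40% human share and the 100-year warming potential of 265; it is an illustrative back-of-the-envelope check, not a published calculation.

```python
# Back-of-the-envelope CO2-equivalent of human N2O emissions, using only the
# figures quoted in the text (17 TgN/yr total, ~40% anthropogenic, GWP-100 of 265).

TOTAL_N_EMISSIONS_TG = 17.0      # Tg of nitrogen per year, 2007-2016 average
HUMAN_SHARE = 0.40               # roughly 40% attributed to human activity
N2O_PER_N = 44.013 / 28.014      # convert a mass of nitrogen to a mass of N2O
GWP_100 = 265                    # 100-year global warming potential relative to CO2

human_n2o_tg = TOTAL_N_EMISSIONS_TG * HUMAN_SHARE * N2O_PER_N
co2e_gt = human_n2o_tg * GWP_100 / 1000   # Tg -> Gt (billion tonnes)

print(f"Human N2O emissions: {human_n2o_tg:.1f} Tg/yr")   # ~10.7 Tg of N2O
print(f"CO2-equivalent:      {co2e_gt:.1f} Gt CO2e/yr")   # ~2.8 Gt, i.e. about 3 billion tonnes
```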
Reduction of emissions can be a hot topic in the politics of climate change. Nitrous oxide is also released as a by-product of burning fossil fuel, though the amount released depends on which fuel was used. It is also emitted through the manufacture of nitric acid, which is used in the synthesis of nitrogen fertilizers. The production of adipic acid, a precursor to nylon and other synthetic clothing fibres, also releases nitrous oxide. A rise in atmospheric nitrous oxide concentrations has been implicated as a possible contributor to the extremely intense global warming during the Cenomanian–Turonian boundary event. Ozone layer depletion Nitrous oxide has also been implicated in thinning the ozone layer. A 2009 study suggested that N2O emission was the single most important ozone-depleting emission and was expected to remain the largest throughout the 21st century. Legality In the United States, possession of nitrous oxide is legal under federal law and is not subject to DEA purview. It is, however, regulated by the Food and Drug Administration under the Food, Drug, and Cosmetic Act; prosecution is possible under its "misbranding" clauses, prohibiting the sale or distribution of nitrous oxide for the purpose of human consumption. Many states have laws regulating the possession, sale and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount of nitrous oxide that may be sold without a special license. For example, in the state of California, possession for recreational use is prohibited and qualifies as a misdemeanor. In August 2015, the Council of the London Borough of Lambeth (UK) banned the use of the drug for recreational purposes, making offenders liable to an on-the-spot fine of up to £1,000. In New Zealand, the Ministry of Health has warned that nitrous oxide is a prescription medicine, and its sale or possession without a prescription is an offense under the Medicines Act. This statement would seemingly prohibit all non-medicinal uses of nitrous oxide, although it is implied that only recreational use will be targeted legally. In India, transfer of nitrous oxide from bulk cylinders to smaller, more transportable E-type, 1,590-litre-capacity tanks is legal when the intended use of the gas is for medical anaesthesia. In September 2023, the UK Government announced that nitrous oxide would be made illegal by the end of the year, with possession potentially carrying up to a two-year prison sentence or an unlimited fine.
carbon border adjustment mechanism
The Carbon Border Adjustment Mechanism (CBAM) is a carbon tariff on carbon intensive products, such as cement and some electricity, imported by the European Union. Legislated as part of the European Green Deal, it takes effect in 2026, with reporting starting in 2023. CBAM was passed by the European Parliament with 450 votes for, 115 against, and 55 abstentions and entered into force on 17 May 2023. Contents The price of CBAM certificates is linked to price of EU allowances under the European Union Emissions Trading System introduced in 2005. The CBAM is designed to stem carbon leakage to countries without a carbon price.After the political (provisional) agreement between the Council and the European Parliament reached in December 2022, the CBAM is expected to enter into force on October 1, 2023; it will apply to products in six carbon intensive sectors highly exposed to international trade: aluminium, cement, iron and steel, electricity, hydrogen and fertilisers. During the transitional phase, the regulators will be checking if other products can be added to the list like for example some downstream products.To address the ‘lose-lose’ scenario of carbon leakage, characterised by a general loss of competitiveness of EU industries with no gain from the perspective of climate protection, the CBAM will require importers of the targeted goods to purchase a sufficient amount of ‘CBAM certificates’ to cover the emissions embedded in their products. Since the main purpose of the CBAM is to avoid carbon leakage, the mechanism tries to subject covered imports to the same carbon price imposed on internal producers under the EU ETS. In other words, the EU is trying to make importers bear an equivalent burden, for what concerns regulatory costs, to the costs of European producers. Under article 6, importers must make a "CBAM declaration" with the quantity of goods, embedded emissions, and certificates for payment of the carbon import tax. Annex I sets out the goods that attract the import tax, including cement, electricity, fertilisers (such as nitric acid, ammonia, potassium), iron and steel (including tanks, drums, containers), and aluminium. Annex II specifies that the CBAM does not apply to four non-EU member states that are included in the European Economic Area, namely Iceland, Liechtenstein, Norway and Switzerland. Annex III sets out the methods for calculating embedded greenhouse gas emissions. Exporters will be required to report their emissions and purchase CBAM certificates, which will increase their costs and reduce their profitability. Debate The implementation of the Carbon Border Adjustment Mechanism (CBAM) by the European Union (EU) is a major step towards addressing the issue of carbon leakage and ensuring a level playing field for European businesses worldwide against cheaper goods without carbon taxation. The import partners most affected will be Russia, China, Turkey, Ukraine, the Balkans, as well as Mozambique, Zimbabwe, and Cameroon. This mechanism allows the EU to unilaterally impose a levy on imports from countries that do not meet the environmental standards set by the European Union. Compliance and monitoring However, enforcing the CBAM requires a robust compliance framework that would ensure transparency, accuracy, and effectiveness, as argued by two law professors. Firstly, the EU should establish clear and objective environmental standards that businesses must meet to avoid tariffs. These standards should be based on internationally recognized methodologies and benchmarks. 
Moreover, they should be regularly reviewed and updated to reflect the latest scientific and technological advances in the field of climate change mitigation. By setting clear and objective standards, the EU can ensure that businesses clearly understand what they need to do to comply with the CBAM. Secondly, the EU should require businesses to submit detailed data on their carbon emissions and energy consumption. This data should be verified by an independent third party to ensure its accuracy and reliability. Businesses that fail to provide accurate data should be subject to penalties and fines. The EU should also establish a reporting framework that would enable businesses to report their carbon emissions and energy consumption in a standardized and consistent manner. This framework should be compatible with existing international reporting standards. Thirdly, the EU should establish a robust verification and enforcement mechanism to ensure compliance with the CBAM. This mechanism should include regular audits of businesses' emission data, as well as on-site inspections of their production facilities. Non-compliant businesses should be subject to sanctions, such as fines, product seizures, or exclusion from the EU market. Additionally, the EU should establish a complaint mechanism that would allow stakeholders, such as NGOs or competitors, to raise concerns about non-compliance with the CBAM. WTO compatibility and non-discrimination The EU should ensure that the CBAM is compatible with its international obligations under the World Trade Organization (WTO), according to two legal scholars at the University of Ottawa. This means that the mechanism should not discriminate against any particular country or violate the principles of free trade. The EU should also engage in constructive dialogue with its trading partners, including major emitters such as China and the United States, to ensure that the CBAM is consistent with global climate goals and does not create unnecessary tensions or trade disputes. Incentivisation of carbon pricing in countries outside the EU If countries outside the European Union have, or go on to create, their own carbon pricing policies, "they will avoid the EU’s carbon border tax and keep the revenues for their own decarbonization projects". The carbon import tax is not yet proposed to apply to a wide range of other products or services, such as automobiles, clothing, food and animal products (such as ones that lead to deforestation), shipping, aviation, or the importation of gas, oil and coal. There are suggestions that the mechanism will help reduce emissions not only by making companies reduce emissions but also by incentivising other countries (like the United States, which lacks federal carbon pricing) to create similar mechanisms. Developing countries According to one Amsterdam legal scholar, the EU should provide adequate support to the least developed countries (LDCs) to help them comply with the CBAM. This support could include technical assistance, capacity building, or financial incentives for investments in low-carbon technologies. By providing such support, the EU can ensure that businesses have the necessary resources and knowledge to transition to a low-carbon economy and avoid the risk of carbon leakage. Another author has suggested that the transition to a low-carbon economy requires technology and investment, some of which may need to be directed to countries in the Global South. Proposed solutions include technology transfer and green finance.
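As a rough illustration of the certificate arithmetic described under Contents, the sketch below computes the cost of covering a consignment's embedded emissions at the EU ETS allowance price. All product data and prices are illustrative assumptions, and the deduction for a carbon price already paid in the exporting country is included as an assumption consistent with the mechanism's stated aim of charging importers a carbon cost equivalent to that borne by EU producers.

```python
# Sketch of CBAM certificate cost for one consignment. All numbers are illustrative;
# the foreign-carbon-price deduction is an assumption reflecting the goal of
# equalising carbon costs, not a statement of the regulation's exact rules.

def cbam_cost(tonnes_of_goods: float,
              embedded_tco2e_per_tonne: float,
              eua_price_eur: float,
              foreign_carbon_price_eur: float = 0.0) -> float:
    """Cost of CBAM certificates for one consignment, in euros."""
    embedded_emissions = tonnes_of_goods * embedded_tco2e_per_tonne   # tCO2e to cover
    effective_price = max(eua_price_eur - foreign_carbon_price_eur, 0.0)
    return embedded_emissions * effective_price

# 1,000 t of steel at an assumed 1.9 tCO2e/t embedded, an EUA price of EUR 80/tCO2e,
# and EUR 20/tCO2e already paid in the country of origin:
print(cbam_cost(1_000, 1.9, 80.0, 20.0))   # EUR 114,000
```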
ethanol fuel in brazil
Brazil is the world's second largest producer of ethanol fuel. Brazil and the United States have led the industrial production of ethanol fuel for several years, together accounting for 85 percent of the world's production in 2017. Brazil produced 26.72 billion liters (7.06 billion U.S. liquid gallons), representing 26.1 percent of the world's total ethanol used as fuel in 2017.

Between 2006 and 2008, Brazil was considered to have the world's first "sustainable" biofuels economy and to be the biofuel industry leader, a policy model for other countries, and its sugarcane ethanol "the most successful alternative fuel to date." However, some authors consider that the successful Brazilian ethanol model is sustainable only in Brazil, due to its advanced agri-industrial technology and its enormous amount of available arable land, while according to other authors it is a solution only for some countries in the tropical zone of Latin America, the Caribbean, and Africa. More recently, however, later-generation biofuels have emerged that use crops grown explicitly for fuel production and not suitable for use as food.

Brazil's 40-year-old ethanol fuel program is based on the most efficient agricultural technology for sugarcane cultivation in the world. It uses modern equipment and cheap sugar cane as feedstock, and the residual cane waste (bagasse) is used to produce heat and power. The result is a very competitive price and a high energy balance (output energy/input energy), which varies from 8.3 for average conditions to 10.2 for best-practice production. In 2010, the U.S. EPA designated Brazilian sugarcane ethanol as an advanced biofuel due to its 61% reduction of total life-cycle greenhouse gas emissions, including direct and indirect land use change emissions.

There are no longer any light vehicles in Brazil running on pure gasoline. Since 1976 the government has made it mandatory to blend anhydrous ethanol with gasoline, with the blend fluctuating between 10% and 22% and requiring only a minor adjustment to regular gasoline engines. In 1993 the mandatory blend was fixed by law at 22% anhydrous ethanol (E22) by volume in the entire country, but with leeway for the Executive to set different percentages of ethanol within pre-established boundaries. In 2003 these limits were set at a minimum of 20% and a maximum of 25%. Since July 1, 2007, the mandatory blend has been 25% anhydrous ethanol and 75% gasoline, or E25. The lower limit was reduced to 18% in April 2011 due to recurring ethanol supply shortages and high prices between harvest seasons. By mid-March 2015 the government temporarily raised the ethanol blend in regular gasoline from 25% to 27%.

The Brazilian car manufacturing industry developed flexible-fuel vehicles that can run on any proportion of gasoline (E20-E25 blend) and hydrous ethanol (E100). Introduced in the market in 2003, flex vehicles became a commercial success, dominating the passenger vehicle market with a 94% share of all new cars and light vehicles sold in 2013. By mid-2010 there were 70 flex models available in the market, and as of December 2013 a total of 15 car manufacturers produced flex-fuel engines, dominating all light vehicle segments except sports cars, off-road vehicles and minivans. The cumulative production of flex-fuel cars and light commercial vehicles reached the milestone of 10 million vehicles in March 2010, and the 20 million-unit milestone was reached in June 2013.
As of June 2015, flex-fuel light-duty vehicle cumulative sales totaled 25.5 million units, and production of flex motorcycles totaled 4 million in March 2015.

The success of "flex" vehicles, together with the mandatory E25 blend throughout the country, allowed ethanol fuel consumption to reach a 50% market share of the gasoline-powered fleet in February 2008. In terms of energy equivalent, sugarcane ethanol represented 17.6% of the country's total energy consumption by the transport sector in 2008.

History
Sugarcane has been cultivated in Brazil since 1532, as sugar was one of the first commodities exported to Europe by the Portuguese settlers. The first use of sugarcane ethanol as a fuel in Brazil dates back to the late 1920s and early 1930s, with the introduction of the automobile in the country. Ethanol fuel production peaked during World War II and, as German submarine attacks threatened oil supplies, the mandatory blend reached as high as 50% in 1943. After the end of the war cheap oil caused gasoline to prevail, and ethanol blends were used only sporadically, mostly to take advantage of sugar surpluses, until the 1970s, when the first oil crisis resulted in gasoline shortages and awareness of the dangers of oil dependence. As a response to this crisis, the Brazilian government began promoting bioethanol as a fuel. The National Alcohol Program -Pró-Álcool- (Portuguese: Programa Nacional do Álcool), launched in 1975, was a nationwide program financed by the government to phase out automobile fuels derived from fossil fuels, such as gasoline, in favor of ethanol produced from sugar cane.

The first phase of the program concentrated on the production of anhydrous ethanol for blending with gasoline. The Brazilian government made blending ethanol fuel with gasoline mandatory, with the blend fluctuating between 10% and 22% from 1976 until 1992. Because of this mandatory minimum blend, pure gasoline (E0) is no longer sold in the country. A federal law passed in October 1993 established a mandatory blend of 22% anhydrous ethanol (E22) in the entire country. This law also authorized the Executive to set different percentages of ethanol within pre-established boundaries, and since 2003 these limits have been fixed at a maximum of 25% (E25) and a minimum of 20% (E20) by volume. Since then, the government has set the percentage of the ethanol blend according to the results of the sugarcane harvest and the levels of ethanol production from sugarcane, resulting in blend variations even within the same year.

Since July 2007 the mandatory blend has been 25% anhydrous ethanol and 75% gasoline, or E25. However, in 2010, as a result of supply concerns and high ethanol fuel prices, the government mandated a temporary 90-day blend reduction from E25 to E20 beginning February 1, 2010. By mid-March 2015 the government temporarily raised the ethanol blend in regular gasoline from 25% to 27%. The blend in premium gasoline was kept at 25% at the request of ANFAVEA, the Brazilian association of automakers, because of concerns about the effects of the higher blend on cars built for E25, as opposed to flex-fuel cars. The government approved the higher blend as an economic incentive for ethanol producers, due to an existing overstock of over 1 billion liters (264 million US gallons) of ethanol.
The implementation of E27 was expected to allow the consumption of the overstock before the end of 2015.

After testing in government fleets with several prototypes developed by the local carmakers, and compelled by the second oil crisis, the Fiat 147, the first modern commercial neat ethanol car (E100 only), was launched on the market in July 1979. The Brazilian government provided three important initial drivers for the ethanol industry: guaranteed purchases by the state-owned oil company Petrobras, low-interest loans for agro-industrial ethanol firms, and fixed gasoline and ethanol prices under which hydrous ethanol sold for 59% of the government-set gasoline price at the pump. Subsidising ethanol production in this manner and setting an artificially low price established ethanol as an alternative to gasoline.

After reaching more than 4 million cars and light trucks running on pure ethanol by the late 1980s, representing one third of the country's motor vehicle fleet, ethanol production and sales of ethanol-only cars tumbled due to several factors. Gasoline prices fell sharply as a result of the 1980s oil glut, but mainly a shortage of ethanol fuel supply in the local market left thousands of vehicles in line at gas stations or out of fuel in their garages by mid-1989. As supply could not keep pace with the increasing demand required by the now significant ethanol-only fleet, the Brazilian government began importing ethanol in 1991. From 1979 until December 2010, neat ethanol vehicles totaled 5.7 million units. The number of neat ethanol vehicles still in use was estimated at between 2 and 3 million vehicles by 2003, and at 1.22 million as of December 2011.

Confidence in ethanol-powered vehicles was restored only with the introduction of flexible-fuel vehicles in the Brazilian market. In March 2003 Volkswagen launched the Gol 1.6 Total Flex in Brazil, the first commercial flexible-fuel vehicle capable of running on any blend of gasoline and ethanol. By 2010 the manufacturers building flexible-fuel vehicles included Chevrolet, Fiat, Ford, Peugeot, Renault, Volkswagen, Honda, Mitsubishi, Toyota, Citroën, Nissan, and Kia Motors. In 2013, Ford launched the first flex-fuel car with direct injection: the Focus 2.0 Duratec Direct Flex.

Flexible-fuel cars accounted for 22% of car sales in 2004, 73% in 2005, 87.6% in July 2008, and a record 94% in August 2009. The cumulative production of flex-fuel cars and light commercial vehicles reached the milestone of 10 million vehicles in March 2010, and 15 million in January 2012. Registrations of flex-fuel cars and light trucks represented 87.0% of all passenger and light-duty vehicles sold in the country in 2012. Production passed the 20 million-unit mark in June 2013. By the end of 2014, flex-fuel cars represented 54% of the Brazilian registered stock of light-duty vehicles, while gasoline-only vehicles represented 34.3%. As of June 2015, flex-fuel light-duty vehicle cumulative sales totaled 25.5 million units.

The rapid adoption and commercial success of "flex" vehicles, as they are popularly known, together with the mandatory blending of alcohol with gasoline as E25 fuel, increased ethanol consumption to the point that, in February 2008, ethanol retail sales surpassed a 50% market share of the gasoline-powered fleet. This level of ethanol fuel consumption had not been reached since the end of the 1980s, at the peak of the Pró-Álcool Program.
Under the auspices of the BioEthanol for Sustainable Transport (BEST) project, the first ethanol-powered (ED95) bus began operating in São Paulo city in December 2007 as a one-year trial project. A second ED95 trial bus began operating in São Paulo city in November 2009. Based on the satisfactory results obtained during the three-year trial operation of the two buses, in November 2010 the municipal government of São Paulo city signed an agreement with UNICA, Cosan, Scania and Viação Metropolitana, the local bus operator, to introduce a fleet of 50 ethanol-powered ED95 buses by May 2011. The local government's objective is for the city's entire bus fleet, which consists of 15,000 diesel-powered buses, to use only renewable fuels by 2018. The first ethanol-powered buses were delivered in May 2011, and the 50 ethanol-powered ED95 buses were scheduled to begin regular service in São Paulo in June 2011.

Another innovation of the Brazilian flexible-fuel technology was the development of flex-fuel motorcycles. The first flex motorcycle was launched by Honda in March 2009. Produced by its Brazilian subsidiary Moto Honda da Amazônia, the CG 150 Titan Mix is sold for around US$2,700. In order to avoid cold-start problems, the fuel tank must contain at least 20% gasoline at temperatures below 15 °C (59 °F). In September 2009, Honda launched a second flexible-fuel motorcycle, the on-off road NXR 150 Bros Mix. By December 2010 both Honda flexible-fuel motorcycles had reached cumulative sales of 515,726 units, representing an 18.1% share of new motorcycle sales in Brazil in 2010. Two other flex-fuel motorcycles manufactured by Honda were launched in October 2010 and January 2011, the CG 150 FAN and the Honda BIZ 125 Flex. During 2011 a total of 956,117 flex-fuel motorcycles were produced, raising their market share to 56.7%. Production reached the 2 million mark in August 2012. Flexible-fuel motorcycle production passed the 3 million-unit milestone in October 2013, and the 4 million mark in March 2015.

Production
Economic and production indicators
Ethanol production in Brazil uses sugarcane as feedstock and relies on first-generation technologies based on the use of the sucrose content of sugarcane. Ethanol yield has grown 3.77% per year since 1975, and productivity gains have been based on improvements in the agricultural and industrial phases of the production process. Further improvements in best practices are expected to allow, in the short to mid term, an average ethanol productivity of 9,000 liters per hectare.

There were 378 ethanol plants operating in Brazil by July 2008, 126 dedicated to ethanol production and 252 producing both sugar and ethanol; 15 additional plants were dedicated exclusively to sugar production. These plants had an installed capacity for crushing 538 million metric tons of sugarcane per year, and 25 plants under construction, expected to come online by 2009, were to add capacity to crush a further 50 million tons of sugarcane per year. A typical plant costs approximately US$150 million and requires a nearby sugarcane plantation of 30,000 hectares. Ethanol production is concentrated in the Central and Southeast regions of the country, led by São Paulo state with around 60% of the country's total ethanol production, followed by Paraná (8%), Minas Gerais (8%) and Goiás (5%). These two regions have been responsible for 90% of Brazil's ethanol production since 2005, and their harvest season runs from April to November.
The Northeast region accounts for the remaining 10% of ethanol production, led by Alagoas with 2% of total production. The harvest season in the North-Northeast region runs from September to March, and average productivity in this region is lower than in the South-Central region. Because of the difference between the two main harvest seasons, Brazilian statistics for sugar and ethanol production are commonly reported on a two-year harvest basis rather than by calendar year.

For the 2008/09 harvest it was expected that about 44% of the sugarcane would be used for sugar, 1% for alcoholic beverages, and 55% for ethanol production. Between 24.9 billion liters (6.58 billion U.S. liquid gallons) and 27.1 billion liters (7.16 billion gallons) of ethanol were expected to be produced in the 2008/09 harvest year, with most of the production destined for the internal market and only 4.2 billion liters (1.1 billion gallons) for export, of which an estimated 2.5 billion liters (660 million gallons) were destined for the US market. The sugarcane cultivated area grew from 7 million to 7.8 million hectares from 2007 to 2008, mainly on abandoned pasture land. In 2008 Brazil had 276 million hectares of arable land, 72% used for pasture, 16.9% for grain crops, and 2.8% for sugarcane; since about 55% of the cane went to ethanol, ethanol required only around 1.5% of all arable land available in the country.

As sugar and ethanol share the same feedstock and their industrial processing is fully integrated, formal employment statistics are usually presented together. In 2000 there were 642,848 workers employed by these industries, and as ethanol production expanded, by 2005 there were 982,604 workers employed in sugarcane cultivation and industrialization, including 414,668 workers in the sugarcane fields, 439,573 workers in the sugar mills, and 128,363 workers in the ethanol distilleries. While employment in the ethanol distilleries grew 88.4% from 2000 to 2005, employment in the sugarcane fields grew only 16.2%, a direct result of the expansion of mechanical harvesting in place of manual harvesting, which avoids burning the sugarcane fields before manual cutting and also increases productivity. The states with the most employment in 2005 were São Paulo (39.2%), Pernambuco (15%), Alagoas (14.1%), Paraná (7%), and Minas Gerais (5.6%).

2009–2014 crisis
Since 2009 the Brazilian ethanol industry has experienced a crisis with multiple causes: the economic crisis of 2008; poor sugarcane harvests due to unfavorable weather; high sugar prices in the world market that made it more attractive to produce sugar rather than ethanol; and a freeze imposed by the Brazilian government on petrol and diesel prices. Brazilian ethanol fuel production in 2011 was 21.1 billion liters (5.6 billion U.S. liquid gallons), down from 26.2 billion liters (6.9 billion gallons) in 2010, while in 2012 ethanol production was 26% lower than in 2008. By 2012 a total of 41 ethanol plants out of about 400 had closed, and sugarcane crop yields had dropped from 115 tonnes per hectare in 2008 to 69 tonnes per hectare in 2012.

A supply shortage lasted several months during 2010 and 2011, and prices climbed to the point that ethanol fuel was no longer attractive for owners of flex-fuel vehicles; the government reduced the minimum ethanol blend in gasoline to reduce demand and keep ethanol fuel prices from rising further; and, for the first time since the 1990s, (corn) ethanol fuel was imported from the United States.
The imports totaled around 1.5 billion litres in 2011–2012. The ethanol share of the transport fuel market decreased from 55% in 2008 to 35% in 2012. As a result of higher ethanol prices combined with government subsidies to keep the gasoline price below the international market value, by November 2013 only 23% of flex-fuel car owners were using ethanol regularly, down from 66% in 2009.

During 2014 Brazil produced 23.4 billion liters (6.19 billion U.S. liquid gallons) of ethanol fuel; nevertheless, during that year Brazil also imported ethanol from the United States, ranking as the second largest U.S. export market in 2014 after Canada and representing about 13% of total American exports. Production has recovered since 2015, and Brazil produced 26.72 billion liters (7.06 billion U.S. liquid gallons) in 2017, representing 26.1 percent of the world's total ethanol used as fuel.

Agricultural technology
A key aspect of the development of the ethanol industry in Brazil was the investment in agricultural research and development by both the public and private sectors. The work of EMBRAPA, the state-owned company in charge of applied agricultural research, together with research carried out by state institutes and universities, especially in the State of São Paulo, has allowed Brazil to become a major innovator in biotechnology and agronomic practices, resulting in the most efficient agricultural technology for sugarcane cultivation in the world. Efforts have concentrated on increasing the efficiency of inputs and processes to optimize output per hectare of feedstock, and the result has been a threefold increase in sugarcane yields over 29 years: average Brazilian ethanol yields went from 2,024 liters per hectare in 1975 to 5,917 liters per hectare in 2004, allowing the efficiency of ethanol production to grow at a rate of 3.77% per year.

Brazilian biotechnologies include the development of sugarcane varieties with a larger sugar or energy content, one of the main drivers of high ethanol yields per unit of planted area. The increase in total recoverable sugar (TRS) from sugarcane has been very significant, 1.5% per year over the period 1977 to 2004, resulting in an increase from 95 to 140 kg/ha. Innovations in the industrial process allowed an increase in sugar extraction over the period 1977 to 2003; the average annual improvement was 0.3%, and some mills have already reached extraction efficiencies of 98%.

Biotechnology research and genetic improvement have led to the development of strains that are more resistant to disease, bacteria and pests, and that can respond to different environments, allowing the expansion of sugarcane cultivation to areas previously considered inadequate for such crops. By 2008 more than 500 sugarcane varieties were cultivated in Brazil, 51 of them released during the preceding ten years. Four research programs, two private and two public, are devoted to further genetic improvement. Since the mid-1990s, Brazilian biotechnology laboratories have developed transgenic varieties, though these have not yet been commercialized.
Identification of 40,000 cane genes was completed in 2003, and a couple of dozen research groups are working on the functional genome, still at the experimental stage, with commercial results expected within five years.

There is also ongoing research on biological nitrogen fixation in sugarcane, with the most promising plant varieties showing yields three times the national average in soils of very low fertility, thus avoiding nitrogenous fertilization. Research is also under way on second-generation or cellulosic ethanol. In São Paulo state an increase of 12% in sugarcane yield and 6.4% in sugar content is expected over the next decade. This advance, combined with an expected 6.2% improvement in fermentation efficiency and 2% in sugar extraction, may increase ethanol yields by 29%, raising average ethanol productivity to 9,000 liters/ha. Approximately US$50 million has recently been allocated for research and projects focused on advancing the production of ethanol from sugarcane in São Paulo state.

Production process
Sucrose extracted from sugarcane accounts for little more than 30% of the chemical energy stored in the harvested parts of the mature plant; 35% is in the leaves and stem tips, which are left in the fields during harvest, and 35% is in the fibrous material (bagasse) left over from pressing. Most industrial processing of sugarcane in Brazil is done through a highly integrated production chain, combining sugar production, industrial ethanol processing, and electricity generation from byproducts. The typical steps for large-scale production of sugar and ethanol include milling, electricity generation, fermentation, distillation of ethanol, and dehydration.

Milling and refining
Once harvested, sugarcane is usually transported to the plant by semi-trailer trucks. After quality control, the sugarcane is washed, chopped, and shredded by revolving knives; the feedstock is fed through a set of mill combinations to extract a juice, called garapa in Brazil, that contains 10–15% sucrose, and bagasse, the fiber residue. The main objective of the milling process is to extract the largest possible amount of sucrose from the cane; a secondary but important objective is the production of bagasse with a low moisture content for use as boiler fuel, as bagasse is burned for electricity generation (see below), allowing the plant to be self-sufficient in energy and to supply electricity to the local power grid. The cane juice, or garapa, is then filtered, treated with chemicals and pasteurized. Before evaporation, the juice is filtered once again, producing vinasse, a fluid rich in organic compounds. The syrup resulting from evaporation is then precipitated by crystallization, producing a mixture of clear crystals surrounded by molasses. A centrifuge is used to separate the sugar from the molasses, and the crystals are washed by the addition of steam, after which they are dried by an airflow. Upon cooling, sugar crystallizes out of the syrup. From this point, the sugar refining process continues to produce different grades of sugar, while the molasses continues through a separate process to produce ethanol.

Fermentation, distillation and dehydration
The resulting molasses is treated to become sterilized molasses free of impurities, ready to be fermented. In the fermentation process sugars are transformed into ethanol by the addition of yeast. Fermentation time varies from four to twelve hours, resulting in an alcohol content of 7–10% by total volume (°GL), called fermented wine.
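The fermentation step described above follows the standard sucrose-to-ethanol stoichiometry (textbook chemistry, not figures from the studies cited in this article): sucrose is first hydrolysed into hexoses, which yeast then converts into ethanol and carbon dioxide:

\[ \mathrm{C_{12}H_{22}O_{11} + H_2O \;\rightarrow\; 2\,C_6H_{12}O_6} \]
\[ \mathrm{C_6H_{12}O_6 \;\rightarrow\; 2\,C_2H_5OH + 2\,CO_2} \]

The theoretical mass yield is therefore 4 × 46 g of ethanol per 342 g of sucrose, about 0.54 g per gram of sucrose (0.51 g per gram of hexose); industrial fermentations typically recover on the order of 90% of this theoretical value.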
The yeast is recovered from this wine by centrifuge. Taking advantage of the different boiling points, the alcohol in the fermented wine is separated by distillation from the main remaining solid components. The product is hydrous ethanol with a concentration of 96°GL, the highest concentration attainable by ordinary distillation because ethanol and water form an azeotrope; by national specification it may contain up to 4.9% water by volume. This hydrous ethanol is the fuel used by ethanol-only and flex vehicles in the country. Further dehydration, normally done by the addition of chemicals, brings the ethanol up to the specified 99.7°GL to produce anhydrous ethanol, which is blended with pure gasoline to obtain the country's mandatory E25 blend. The additional processing required to convert hydrous into anhydrous ethanol increases the cost of the fuel; in 2007 the average producer price difference between the two was around 14% for São Paulo state. This production price difference, though small, contributes to the competitiveness of the hydrous ethanol (E100) used in Brazil, not only with regard to local gasoline prices but also compared with other countries, such as the United States and Sweden, that use only anhydrous ethanol for their flex-fuel fleets.

Electricity generation from bagasse
Since the early days of the industry, bagasse has been burned in the plant to provide the energy required for the industrial part of the process. Today, Brazilian best practice uses high-pressure boilers that increase energy recovery, allowing most sugar-ethanol plants to be energetically self-sufficient and even to sell surplus electricity to utilities. By 2000, the total amount of sugarcane bagasse produced per year was 50 million tons (dry basis) out of more than 300 million tons of harvested sugarcane. Several authors have estimated a potential power generation from the use of sugarcane bagasse ranging from 1,000 to 9,000 MW, depending on the technology used and the use of harvest trash. One utility in São Paulo buys more than 1% of its electricity from sugar mills, which have a production capacity of 600 MW for self-use and 100 MW for sale. According to analysis by Frost & Sullivan, Brazil's power generation from sugarcane bagasse reached 3.0 GW in 2007 and was expected to reach 12.2 GW in 2014. The analysis also found that sugarcane bagasse cogeneration accounts for 3% of the total Brazilian energy matrix. The energy is especially valuable to utilities because it is produced mainly in the dry season, when hydroelectric dams are running low. According to a study commissioned by the Dutch government in 2006 to evaluate the sustainability of Brazilian bioethanol, "there are also substantial gains possible in the efficiency of electricity use and generation: The electricity used for distillery operations has been estimated at 12.9 kWh/tonne cane, with a best available technology rate of 9.6 kWh/tonne cane. For electricity generation the efficiency could be increased from 18 kWh/tonne cane presently, to 29.1 kWh/tonne cane maximum. The production of surplus electricity could in theory be increased from 5.3 kWh/tonne cane to 19 kWh/tonne cane."

Electric generation from ethanol
Brazil has several experimental programs for the production of electricity using sugarcane ethanol as fuel. A joint venture of General Electric and Petrobras operates a commercial pilot plant in Juiz de Fora, Minas Gerais.
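The per-tonne figures quoted in the Dutch study above imply a simple electricity balance per tonne of cane crushed: the surplus available for sale is the electricity generated from bagasse minus the distillery's own consumption. The sketch below is illustrative only; the function name and the 2-million-tonne mill size are hypothetical, and the inputs are the study's quoted figures, so small rounding differences from its 5.3 and 19 kWh/tonne surplus values remain.

def bagasse_electricity_surplus(generated_kwh_per_t, distillery_use_kwh_per_t, cane_tonnes):
    """Electricity left over for sale after covering the distillery's own use."""
    surplus_per_tonne = generated_kwh_per_t - distillery_use_kwh_per_t
    return surplus_per_tonne * cane_tonnes  # kWh available for the grid

# Current practice vs. best available technology, for a mill crushing 2 million t of cane per year
print(bagasse_electricity_surplus(18.0, 12.9, 2_000_000))   # about 10.2 GWh/year
print(bagasse_electricity_surplus(29.1, 9.6, 2_000_000))    # about 39 GWh/year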
Overall energy use
Energy use associated with the production of sugarcane ethanol derives from three primary sources: the agricultural sector, the industrial sector, and the distribution sector. In the agricultural sector, 35.98 GJ of energy are used to plant, maintain, and harvest one hectare (10,000 m2) of sugarcane for usable biofuel. This includes energy from numerous inputs, including nitrogen, phosphate, potassium oxide, lime, seed, herbicides, insecticides, labor and diesel fuel. The industrial sector, which includes the milling and refining of sugarcane and the production of ethanol fuel, uses 3.63 GJ of energy and generates 155.57 GJ of energy per hectare of sugarcane plantation. Scientists estimate that the potential power generated from the cogeneration of bagasse could range from 1,000 to 9,000 MW, depending on harvest and technology factors; in Brazil, this is about 3% of the total energy needed. The burning of bagasse can generate 18 kilowatt-hours, or 64.7 MJ, per Mg of sugarcane. Distillery facilities require about 45 MJ to operate, leaving a surplus energy supply of 19.3 MJ, or 5.4 kWh. In terms of distribution, researchers calculate sugarcane ethanol's transport energy requirement to be 0.44 GJ per cubic meter, so one hectare of land would require 2.82 GJ of energy for transport and distribution. Taking all three sectors into account, the EROEI (energy return on energy invested) for sugarcane ethanol is about 8.

There are several improvements to the industrial processes, such as adopting a hydrolysis process to produce ethanol instead of surplus electricity, using advanced boiler and turbine technology to increase the electricity yield, or making greater use of the excess bagasse and harvest trash currently left behind in the fields, which, together with various other efficiency improvements in sugarcane farming and the distribution chain, have the potential to allow further efficiency increases, translating into higher yields, lower production costs, and further improvements in the energy balance and the reduction of greenhouse gas emissions.

Exports
Brazil is the world's largest exporter of ethanol. In 2007 it exported 933.4 million gallons (3,532.7 million liters), representing almost 20% of its production and accounting for almost 50% of global exports. Since 2004 the main customers of Brazilian exporters have been the United States, the Netherlands, Japan, Sweden, Jamaica, El Salvador, Costa Rica, Trinidad and Tobago, Nigeria, Mexico, India, and South Korea.

The countries of the Caribbean Basin import relatively large quantities of Brazilian ethanol, but not much of it is destined for domestic consumption. These countries reprocess the product, usually converting Brazilian hydrous ethanol into anhydrous ethanol, and then re-export it to the United States, gaining value added and avoiding the 2.5% duty and the US$0.54 per gallon tariff, thanks to the trade agreements and benefits granted by the Caribbean Basin Initiative (CBI). This process is limited by a quota set at 7% of U.S. ethanol consumption. Although direct Brazilian exports to the U.S. fell in 2007, imports from four CBI countries almost doubled, increasing from 15.5% in 2006 to 25.8% in 2007, reflecting increasing re-exports to the U.S. and thus partially compensating for the loss of Brazilian direct exports. This situation has caused some concern in the United States, as it and Brazil are trying to build a partnership to increase ethanol production in Latin America and the Caribbean. As the U.S.
is encouraging "new ethanol production in other countries, production that could directly compete with U.S.-produced ethanol".

The U.S., potentially the largest market for Brazilian ethanol imports, long imposed a tariff on Brazilian ethanol of US$0.54 per gallon in order to encourage domestic ethanol production and protect the budding ethanol industry in the United States. Historically, this tariff was intended to offset the 45-cent-per-gallon blender's federal tax credit applied to ethanol regardless of its country of origin. Exports of Brazilian ethanol to the U.S. reached a total of US$1 billion in 2006, an increase of 1,020% over 2005 (US$98 million), but fell significantly in 2007 due to sharp increases in American ethanol production from corn. The United States remains the largest single importer of Brazilian ethanol, though collectively the European Union and the CBI countries now import a similar amount.

A 2010 study by Iowa State University's Center for Agricultural and Rural Development found that removing the U.S. import tariff would result in less than 5% of the United States' ethanol being imported from Brazil. A 2010 study by the Congressional Budget Office (CBO) found that the costs to American taxpayers of using a biofuel to reduce gasoline consumption by one gallon are $1.78 for corn ethanol and $3.00 for cellulosic ethanol. Similarly, and without considering potential indirect land use effects, the costs to taxpayers of reducing greenhouse gas emissions through the tax credits are about $750 per metric ton of CO2-equivalent for corn ethanol and around $275 per metric ton for cellulosic ethanol.

After being renewed several times, the tax credit was set to expire on December 31, 2011, and both the US$0.54-per-gallon tariff and the US$0.45-per-gallon blender's credit were the subject of contentious debate in Washington, D.C., with ethanol interest groups and politicians staking positions on both sides of the issue. On June 16, 2011, the U.S. Congress approved an amendment to an economic development bill to repeal both the tax credit and the tariff on ethanol; although that bill had an uncertain future, it was considered a signal that the tax credits would not be renewed when they expired at the end of 2011. The eventual elimination of the import tariff was not expected to have significant short-term effects, because the Brazilian ethanol industry had been having trouble meeting its own domestic demand for ethanol during 2010 and 2011, and Brazil imported some corn ethanol from the U.S. The shortage in supply was due in part to high sugar prices, which made it more profitable for Brazilian producers to sell sugar than to convert it to ethanol fuel. Also, as a result of the credit crunch caused by the financial crisis of 2007–2010, the expansion of the Brazilian ethanol industry was not able to keep pace with the accelerated growth of the flex-fuel fleet.

As the U.S. EPA's 2010 final ruling for the Renewable Fuel Standard designated Brazilian sugarcane ethanol as an advanced biofuel, Brazilian ethanol producers hope this classification will help lift import tariffs both in the U.S. and in the rest of the world.
They also expect to increase exports to the U.S., as the blending mandate requires an increasing quota of advanced biofuels that is unlikely to be met with cellulosic ethanol, which would force blenders to import more Brazilian sugarcane-based ethanol, despite the existing 54¢-per-gallon tariff on ethanol imported directly from Brazil or duty-free from the CBI countries that convert Brazilian hydrous ethanol into anhydrous ethanol.

Prices and effect on oil consumption
Most automobiles in Brazil run either on hydrous alcohol (E100) or on gasohol (E25 blend), as the mixture of 25% anhydrous ethanol with gasoline is mandatory in the entire country. Since 2003, dual-fuel ethanol flex vehicles that run on any proportion of hydrous ethanol and gasoline have been gaining popularity. These have electronic sensors that detect the type of fuel and adjust the engine combustion to match, so users can choose the cheapest available fuel. Sales of flex-fuel vehicles reached 9.3 million by December 2009, representing 39% of the passenger vehicle fleet. By mid-2010 there were 70 flex models available in the market, and production by December 2010 had reached more than 12.5 million flex vehicles, including more than 500 thousand flex-fuel motorcycles.

Because of the lower energy content of ethanol fuel, flex-fuel vehicles travel fewer miles per gallon on ethanol, so its price has to be 25 to 30% lower per gallon than gasoline's to reach the break-even point. As a rule of thumb, Brazilian consumers are frequently advised by the media to favor ethanol over gasoline only when ethanol prices are 30% or more below gasoline prices, as ethanol prices fluctuate heavily with harvest yields and the seasonality of the sugarcane harvest.

Since 2005, ethanol prices have been very competitive without subsidies, even with gasoline prices kept constant in local currency since mid-2005, at a time when oil was just approaching US$60 a barrel. However, Brazilian gasoline taxes are high, around 54 percent, while ethanol fuel taxes are lower, varying between 12% and 30% depending on the state. As of October 2008 the average price of E25 gasoline was US$4.39 per gallon, while the average price of ethanol was US$2.69 per gallon. This differential in taxation favors ethanol fuel consumption, and by the end of July 2008, when oil prices were close to their latest peak and the Brazilian real's exchange rate against the US dollar was close to its most recent minimum, the average gasoline retail price at the pump in Brazil reached US$6.00 per gallon. The price ratio between gasoline and ethanol fuel was well above 30 percent during this period in most states, except during the low sugarcane supply between harvests and in states located far away from the ethanol production centers. According to Brazilian producers, ethanol can remain competitive if the price of oil does not fall below US$30 a barrel.

By 2008, consumption of ethanol fuel by the Brazilian fleet of light vehicles, as pure ethanol and in gasohol, was replacing gasoline at a rate of about 27,000 cubic meters per day, and by February 2008 the combined consumption of anhydrous and hydrous ethanol fuel surpassed 50 percent of the fuel that would be needed to run the light vehicle fleet on pure gasoline alone. Monthly consumption of anhydrous ethanol for the mandatory E25 blend, together with hydrous ethanol used by flex vehicles, reached 1.432 billion liters, while pure gasoline consumption was 1.411 billion liters.
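The 30% rule of thumb above reflects hydrous ethanol's lower volumetric energy content, roughly 70% of that of E25 gasohol. The helper below is only an illustration of that arithmetic; the 0.70 factor is an assumption consistent with the rule of thumb quoted in this article, not a figure taken from the cited sources.

def cheaper_fuel(ethanol_price, gasohol_price, energy_ratio=0.70):
    """Pick the cheaper fuel per unit of energy (roughly, per km driven).

    energy_ratio is hydrous ethanol's energy content relative to E25 gasohol.
    """
    ethanol_cost_per_energy = ethanol_price / energy_ratio
    return "E100 (hydrous ethanol)" if ethanol_cost_per_energy < gasohol_price else "E25 gasohol"

# October 2008 averages quoted above: E25 at US$4.39/gal and ethanol at US$2.69/gal
print(cheaper_fuel(2.69, 4.39))   # E100, since 2.69/4.39 is about 0.61, below the 0.70 threshold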
Despite this volumetric parity, when expressed in terms of energy equivalent (toe), sugarcane ethanol represented 17.6 percent of the country's total energy consumption by the transport sector in 2008, while gasoline represented 23.3 percent and diesel 49.2 percent.

For the first time since 2003, sales of hydrous ethanol fell in 2010, with a decrease of 8.5 percent compared with 2009. Total consumption of both hydrous and anhydrous ethanol fell by 2.9 percent while gasoline consumption increased by 17.5 percent. Despite the reduction in ethanol consumption, total ethanol sales reached 22.2 billion liters while pure gasoline consumption was 22.7 billion liters, keeping the market share of each fuel close to 50 percent. The decrease in hydrous ethanol consumption was due mainly to high sugar prices in the international markets, which reached a 30-year high in 2010. This peak in sugar prices led sugarcane processing plants to produce more sugar than ethanol, and as supply contracted, E100 prices rose to the point that several times during 2010 hydrous ethanol was less than 30 percent cheaper than gasoline. Another factor that contributed to this shift was the increase in sales of imported gasoline-only vehicles during 2010.

Comparison with the United States
Brazil's sugarcane-based industry is more efficient than the U.S. corn-based industry. Sugarcane ethanol has an energy balance seven times greater than ethanol produced from corn. Brazilian distillers are able to produce ethanol for 22 cents per liter, compared with 30 cents per liter for corn-based ethanol. U.S. corn-derived ethanol costs 30% more because the corn starch must first be converted to sugar before being distilled into alcohol. Despite this cost differential in production, the U.S. did not import more Brazilian ethanol because of a 54-cent-per-gallon tariff, first imposed in 1980 but kept in place to offset the 45-cent-per-gallon blender's federal tax credit applied to ethanol regardless of its country of origin. In 2011 the U.S. Congress decided not to extend the tariff and the tax credit, and as a result both ended on December 31, 2011. Over those three decades the ethanol industry was awarded an estimated US$45 billion in subsidies, including US$6 billion in 2011 alone.

Sugarcane cultivation requires a tropical or subtropical climate, with a minimum of 600 mm (24 in) of annual rainfall. Sugarcane is one of the most efficient photosynthesizers in the plant kingdom, able to convert up to 2% of incident solar energy into biomass. Sugarcane production in the United States occurs in Florida, Louisiana, Hawaii, and Texas. The first three plants to produce sugarcane-based ethanol were expected to go online in Louisiana by mid-2009: sugar mill plants in Lacassine, St. James and Bunkie were converted to sugarcane-based ethanol production using Colombian technology in order to make profitable ethanol production possible. These three plants were to produce 100 million gallons (378.5 million liters) of ethanol within five years. By 2009 two other sugarcane ethanol production projects were being developed in Kauai, Hawaii, and the Imperial Valley, California.

Ethanol diplomacy
In March 2007, "ethanol diplomacy" was the focus of President George W. Bush's Latin American tour, in which he and Brazil's president, Luiz Inácio Lula da Silva, sought to promote the production and use of sugarcane-based ethanol throughout Latin America and the Caribbean.
The two countries also agreed to share technology and set international standards for biofuels. The Brazilian sugarcane technology transfer will permit various Central American countries, such as Honduras, Nicaragua, Costa Rica and Panama, several Caribbean countries, and various Andean countries tariff-free trade with the U.S. thanks to existing concessionary trade agreements.

Even though the U.S. has imposed a US$0.54 tariff on every gallon of imported ethanol since 1980, the Caribbean nations and Central American countries are exempt from such duties under the benefits granted by the Caribbean Basin Initiative (CBI). CBI provisions allow tariff-free access to the US market for ethanol produced from foreign feedstock (from outside the CBI countries) up to 7% of the previous year's US consumption. Additional quotas, up to a further 35 million gallons (132.5 million liters), are allowed if the beneficiary countries produce at least 30% of the ethanol from local feedstocks. Thus, several countries have been importing hydrous ethanol from Brazil, processing it at local distilleries to dehydrate it, and then re-exporting it as anhydrous ethanol. American farmers have complained about this loophole that legally bypasses the tariff. The 2005 Dominican Republic–Central America Free Trade Agreement (CAFTA) maintained the benefits granted by the CBI, and CAFTA provisions established country-specific shares for Costa Rica and El Salvador within the overall quota, with an initial annual allowance for each country and gradually increasing levels of access to the US market. The expectation is that, using Brazilian technology for refining sugarcane-based ethanol, such countries could become net exporters to the United States in the short term. In August 2007, Brazil's president toured Mexico and several countries in Central America and the Caribbean to promote Brazilian ethanol technology.

The Memorandum of Understanding (MOU) that the American and Brazilian presidents signed in March 2007 may bring Brazil and the United States closer on energy policy, but it is not clear whether there has been substantive progress in implementing the three pillars of that agreement.

Brazil has also extended its technical expertise to several African countries, including Ghana, Mozambique, Angola, and Kenya. This effort is led by EMBRAPA, the state-owned company in charge of applied agricultural research and responsible for most of the achievements in increasing sugarcane productivity over the last thirty years. Another 15 African countries have shown interest in receiving Brazilian technical aid to improve sugarcane productivity and to produce ethanol efficiently. Brazil also has bilateral cooperation agreements with several other countries in Europe and Asia. As President Lula wrote for The Economist regarding Brazil's global agenda:

Brazil's ethanol and biodiesel programmes are a benchmark for alternative and renewable fuel sources. Partnerships are being established with developing countries seeking to follow Brazil's achievements—a 675m-tonne reduction of greenhouse-gas emissions, a million new jobs and a drastic reduction in dependence on imported fossil fuels coming from a dangerously small number of producer countries. All of this has been accomplished without compromising food security, which, on the contrary, has benefited from rising agricultural output ... We are setting up offices in developing countries interested in benefiting from Brazilian know-how in this field.
Environmental and social impacts
Environmental effects
Benefits
Ethanol produced from sugarcane provides energy that is renewable and less carbon-intensive than oil. Bioethanol reduces air pollution thanks to its cleaner emissions, and also contributes to mitigating global warming by reducing greenhouse gas emissions.

Energy balance
One of the main concerns about bioethanol production is the energy balance, the total amount of energy input into the process compared with the energy released by burning the resulting ethanol fuel. This balance considers the full cycle of producing the fuel, as cultivation, transportation and production require energy, including the use of oil and fertilizers. A comprehensive life-cycle assessment commissioned by the State of São Paulo found that Brazilian sugarcane-based ethanol has a favorable energy balance, varying from 8.3 for average conditions to 10.2 for best-practice production. This means that, under average conditions, every unit of fossil-fuel energy invested yields 8.3 units of energy in the resulting ethanol. These findings have been confirmed by other studies.

Greenhouse gas emissions
Another benefit of bioethanol is the reduction of greenhouse gas emissions compared with gasoline, because as much carbon dioxide is taken up by the growing plants as is produced when the bioethanol is burnt, giving a theoretical net contribution of zero. Several studies have shown that sugarcane-based ethanol reduces greenhouse gases by 86 to 90% if there is no significant land use change, and ethanol from sugarcane is regarded as the most efficient biofuel currently in commercial production in terms of GHG emission reduction.

However, two studies published in 2008 were critical of previous assessments of greenhouse gas emission reductions, as their authors considered that earlier studies did not take into account the effect of land use changes. Assessments carried out in 2009 by the U.S. Environmental Protection Agency (EPA) and the California Air Resources Board (CARB) included the impact of indirect land use changes (ILUC) as part of the lifecycle analysis of crop-based biofuels. Brazilian sugarcane ethanol meets both the adopted California Low-Carbon Fuel Standard (LCFS) and the proposed federal Renewable Fuel Standard (RFS2), despite the additional carbon emissions associated with ILUC. On February 3, 2010, the EPA issued its final ruling regarding the RFS2 for 2010 and beyond, and determined that Brazilian ethanol produced from sugarcane complies with the applicable 50% GHG reduction threshold for the advanced fuel category. The EPA's modelling shows that sugarcane ethanol from Brazil reduces greenhouse gas emissions compared with gasoline by 61%, using a 30-year payback period for indirect land use change (ILUC) emissions. By September 2010, five Brazilian sugarcane ethanol mills had been approved by the EPA to export their ethanol to the U.S. under the advanced biofuel category.

A report commissioned by the United Nations, based on a detailed review of published research up to mid-2009 as well as the input of independent experts worldwide, found that ethanol from sugarcane as produced in Brazil "in some circumstances does better than just 'zero emission.' If grown and processed correctly, it has negative emission, pulling CO2 out of the atmosphere, rather than adding it." In contrast, the report found that U.S. use of maize for biofuel is less efficient, as sugarcane can lead to emissions reductions of between 70% and well over 100% when substituted for gasoline.
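The energy balance quoted in this section can be written explicitly. The notation below is a simple formalization offered for the reader, not the notation used by the cited studies:

\[ R \;=\; \frac{E_{\text{renewable output}}}{E_{\text{fossil input}}} \;=\; \frac{E_{\text{ethanol}} + E_{\text{surplus electricity}}}{E_{\text{agriculture}} + E_{\text{industry}} + E_{\text{distribution}}} \]

so R = 8.3 means that each joule of fossil energy spent across cultivation, processing and distribution returns about 8.3 joules of renewable fuel energy. The corresponding ratio for corn ethanol is commonly reported at around 1.3, which underlies the roughly "seven times greater" energy balance cited in the comparison with the United States above.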
A 2010 study commissioned by the European Commission found that the emission reduction effects of first-generation biofuels are positive, even after discounting indirect land use change effects, particularly for the "more emission-efficient" sugarcane ethanol from Brazil, which would have to be imported to assure the environmental viability of the EU's biofuels mandate.

Another 2010 study, published by the World Bank, found that "Brazil's transport sector has a lower carbon intensity compared to that of most other countries because of its widespread use of ethanol as a fuel for vehicles." The study also concluded that, despite this already low emission intensity, urban transportation was responsible for 51% of CO2 emissions within the Brazilian transport sector in 2008, originating mainly from the growing use of private cars, traffic congestion and inefficient public transportation systems. Nevertheless, the study concluded that the increased use of flexible-fuel vehicles and the switch from gasoline to sugarcane ethanol are expected to stabilize GHG emissions from the light vehicle fleet over the next 25 years despite an expected increase in the number of kilometers traveled. Furthermore, the study found that if bioethanol's share of the gasoline-powered vehicle market reaches 80% in 2030, this switch from gasoline "could deliver more than one-third of total emissions reduction targeted for the transport sector over the period" (2008–2030). The study also concluded that by increasing Brazilian ethanol exports to meet the growing international demand for low-carbon fuels, Brazil's trade partners would benefit from reduced GHG emissions. However, for this opportunity to be realized, trade barriers and subsidies in many countries will have to be reduced or eliminated.

A 2009 study published in Energy Policy found that the use of ethanol fuel in Brazil has avoided over 600 million tons of CO2 emissions since 1975, when the Pró-Álcool Program began. The study also concluded that the carbon released due to land-use change had been neutralized by 1992. In another estimate, UNICA, the main Brazilian ethanol industry organization, estimated that the use of ethanol fuel in flex-fuel vehicles alone avoided 83.5 million tons of CO2 emissions in Brazil between March 2003 and January 2010.

Air pollution
The widespread use of ethanol has brought several environmental benefits to urban centers with respect to air pollution. Lead additives to gasoline were reduced through the 1980s as the amount of ethanol blended into the fuel was increased, and these additives were eliminated by 1991. The addition of ethanol blends to gasoline, in place of lead, significantly lowered total emissions of carbon monoxide (CO), hydrocarbons, sulfur and particulate matter. The use of ethanol-only vehicles has also reduced CO emissions drastically. Before the Pró-Álcool Program started, when gasoline was the only fuel in use, CO emissions were higher than 50 g/km driven; by 1995 they had been reduced to less than 5.8 g/km. Several studies have also shown that São Paulo has benefited from significantly less air pollution thanks to ethanol's cleaner emissions.
Furthermore, Brazilian flex-fuel engines are being designed with higher compression ratios, taking advantage of the higher ethanol blends and maximizing the benefits of the higher oxygen content of ethanol, resulting in lower emissions and improved fuel efficiency.

Even though all automotive fossil fuels emit aldehydes, one drawback of the use of hydrous ethanol in ethanol-only engines is the increase in aldehyde emissions compared with gasoline or gasohol. However, present ambient concentrations of aldehydes in São Paulo city are below the reference levels recommended in the literature as adequate for human health. Another concern is that formaldehyde and acetaldehyde emissions are significantly higher; although both aldehydes occur naturally and are frequently found in the open environment, the additional emissions may be important because of their role in smog formation. More research is required to establish the extent and direct consequences, if any, for health.

Issues
Water use and fertilizers
Ethanol production has also raised concerns regarding water overuse and pollution, soil erosion and possible contamination from excessive use of fertilizers. A study commissioned by the Dutch government in 2006 to evaluate the sustainability of Brazilian bioethanol concluded that there is sufficient water to supply all foreseeable long-term water requirements for sugarcane and ethanol production. Also, as a result of legislation and technological progress, the amount of water collected for ethanol production has decreased considerably in recent years. The overuse of water resources appears to be a limited problem in general in São Paulo, particularly because of the relatively high rainfall, although some local problems may occur. Regarding water pollution due to sugarcane production, Embrapa classifies the industry as level 1, which means "no impact" on water quality.

The evaluation also found that the consumption of agrochemicals for sugarcane production is lower than in citrus, corn, coffee and soybean cropping. Disease and pest control, including the use of agrochemicals, is a crucial element in all cane production. The study found that the development of resistant sugarcane varieties is a crucial aspect of disease and pest control and is one of the primary objectives of Brazil's cane genetic improvement programs. Disease control is one of the main reasons for the replacement of a commercial variety of sugarcane.

Field burning
Advancements in fertilizers and natural pesticides have all but eliminated the need to burn fields. Sugarcane fields were traditionally burned just before harvest to avoid harm to the workers, by removing the sharp leaves and killing snakes and other harmful animals, and also to fertilize the fields with ash. Burning has declined due to pressure from the public and health authorities, and as a result of the recent development of effective harvesting machines. In the mid-1990s it was very common to experience quite dense ash rains during the harvest season in cities near the sugarcane fields. A 2001 state law banned burning in sugarcane fields in São Paulo state by 2021, and machines will gradually replace human labor as the means of harvesting cane, except where steep terrain does not allow mechanical harvesting. However, 150 of São Paulo's 170 sugarcane processing plants signed a voluntary agreement with the state government in 2007 to comply by 2014.
Independent growers signed the voluntary agreement in 2008, and the deadline was extended to 2017 for sugarcane fields located on steeper terrain. By the 2009/10 harvest season more than 50% of the cane in São Paulo was collected with harvesting machines. Mechanization will reduce pollution from burning fields and has higher productivity than manual cutting, but it will also create unemployment among these seasonal workers, many of whom come from the poorest regions of Brazil. Due to mechanization the number of temporary workers in the sugarcane plantations has already declined, as each harvester machine replaces about 100 cane cutters a day while creating 30 jobs, including operators and maintenance teams.

Effects of land use change
Two studies published in 2008 questioned the benefits estimated in previous assessments of the reduction of greenhouse gas emissions from sugarcane-based ethanol, as their authors considered that earlier studies did not take into account the direct and indirect effects of land use changes. The authors found that a "biofuel carbon debt" is created when Brazil and other developing countries convert land in undisturbed ecosystems, such as rainforests, savannas or grasslands, to biofuel production, and to crop production when agricultural land is diverted to biofuel production. This land use change releases more CO2 than the annual greenhouse gas (GHG) reductions these biofuels would provide by displacing fossil fuels. Among other cases, the study analyzed the conversion of Brazilian Cerrado for sugarcane ethanol production. The biofuel carbon debt on converted Cerrado was estimated to be repaid in 17 years, the shortest payback time of the scenarios analyzed; ethanol from US corn, for example, was estimated to have a 93-year payback time. The study's conclusion was that the net effect of biofuel production via the clearing of carbon-rich habitats is to increase CO2 emissions for decades or centuries relative to fossil fuel use.

Regarding this concern, previous studies conducted in Brazil have shown that there are 355 million ha of arable land in Brazil, of which only 72 million ha are in use. Sugarcane occupies only 2% of the arable land available, of which ethanol production represented 55% in 2008. Embrapa estimates that there is enough agricultural land available to expand the existing sugarcane area at least 30-fold without endangering sensitive ecosystems or taking land destined for food crops. Most future growth is expected to take place on abandoned pasture lands, as has been the historical trend in São Paulo state. Productivity is also expected to improve even further based on current biotechnology research, genetic improvement and better agronomic practices, thus helping to reduce the land demand of future sugarcane cultivation. This trend is demonstrated by the increases in agricultural production that took place in São Paulo state between 1990 and 2004, where coffee, oranges, sugarcane and other food crops were grown on an almost constant area.

Also regarding the potential negative impacts of land use changes on carbon emissions, a study commissioned by the Dutch government concluded that "it is very difficult to determine the indirect effects of further land use for sugar cane production (i.e. sugar cane replacing another crop like soy or citrus crops, which in turn causes additional soy plantations replacing pastures, which in turn may cause deforestation), and also not logical to attribute all these soil carbon losses to sugar cane."
Other authors have also questioned these indirect effects, as cattle pastures are displaced to the cheaper land near the Amazon. Studies rebutting this concern claim that land devoted to free grazing cattle is shrinking, as the density of cattle on pasture land increased from 1.28 to 1.41 heads of cattle/ha between 2001 and 2005, and further improvements are expected in cattle feeding practices.
A paper published in February 2010 by a team led by Lapola from the University of Kassel found that the planned expansion of biofuel plantations (sugarcane and soybean) in Brazil up to 2020 will have a small direct land-use impact on carbon emissions, but indirect land-use changes could offset the carbon savings from biofuels due to the expansion of the rangeland frontier into the Amazonian forests, particularly due to displacement of cattle ranching. "Sugarcane ethanol and soybean biodiesel each contribute to nearly half of the projected indirect deforestation of 121,970 km2 by 2020, creating a carbon debt that would take about 250 years to be repaid using these biofuels instead of fossil fuels." The analysis also showed that intensification of cattle ranching, combined with efforts to promote high-yielding oil crops, is required to achieve effective carbon savings from biofuels in Brazil, "while still fulfilling all food and bioenergy demands."
The main Brazilian ethanol industry organization (UNICA) commented that this study and other calculations of land-use impacts are missing a key factor, the fact that in Brazil "cattle production and pasture has been intensifying already and is projected to do so in the future."
Deforestation
Other criticisms have focused on the potential for clearing rain forests and other environmentally valuable land for sugarcane production, such as the Amazon, the Pantanal or the Cerrado. Embrapa and UNICA have rebutted this concern by explaining that 99.7% of sugarcane plantations are located at least 2,000 kilometres (1,200 mi) from the Amazon, and that expansion during the last 25 years took place in the Center-South region, also far away from the Amazon, the Pantanal or the Atlantic forest. In São Paulo state growth took place on abandoned pasture lands.
The impact assessment regarding future changes in land use, forest protection and risks to biodiversity conducted as part of the study commissioned by the Dutch government concluded that "the direct impact of cane production on biodiversity is limited, because cane production replaces mainly pastures and/or food crop and sugar cane production takes place far from the major biomes in Brazil (Amazon Rain Forest, Cerrado, Atlantic Forest, Caatinga, Campos Sulinos and Pantanal)." However, "the indirect impacts from an increase of the area under sugar cane production are likely more severe. The most important indirect impact would be an expansion of the area of agricultural land at the expense of cerrados. The cerrados are an important biodiversity reserve. These indirect impacts are difficult to quantify and there is a lack of practically applicable criteria and indicators."
To guarantee the sustainable development of ethanol production, in September 2009 the government issued by decree a countrywide agroecological land use zoning to restrict sugarcane growth in or near environmentally sensitive areas such as the Pantanal wetlands, the Amazon Rainforest and the Upper Paraguay River Basin.
The installation of new ethanol production plants will not be permitted in these locations, and only existing plants, and new ones with environmental licenses already approved before September 17, 2009, will be allowed to remain operating in these sensitive areas. According to the new criteria, 92.5% of the Brazilian territory is not suitable for sugarcane plantation. The government considers that the suitable areas are more than enough to meet the future demand for ethanol and sugar in the domestic and international markets foreseen for the next decades.
Social implications
Sugarcane has made an important social contribution to some of the poorest people in Brazil by providing income usually above the minimum wage, and a formal job with fringe benefits. Formal employment in Brazil averages 45% across all sectors, while the sugarcane sector had a share of 72.9% formal jobs in 2007, up from 53.6% in 1992, and in the more developed sugarcane ethanol industry in São Paulo state formal employment reached 93.8% in 2005. Average wages in sugar cane and ethanol production are above the official minimum wage, but minimum wages may be insufficient to avoid poverty. The North-Northeast region stands out for having much lower levels of education among workers and lower monthly income. The average share of workers with 3 or fewer years of schooling in Brazil is 58.8%; in the Southeast this percentage is 46.2%, while in the Northeast region it is 76.4%. Earnings in the Center-South are therefore, not surprisingly, higher than those in the North-Northeast for comparable levels of education. In 2005 sugarcane harvesting workers in the Center-South region received an average wage 58.7% higher than the average wage in the North-Northeast region. The main social problems are related to cane cutters, who do most of the low-paid work related to ethanol production.
The total number of permanent employees in the sector fell by one-third between 1992 and 2003, in part due to the increasing reliance on mechanical harvesting, especially among the richer and more mature sugarcane producers of São Paulo state. During the same period, the share of temporary or seasonal workers has fluctuated, first declining and then increasing in recent years to about one-half of the total jobs in the sector, but in absolute terms the number of temporary workers has also declined. The sugarcane sector in the poorer Northeast region is more labor-intensive, as production in this region represents only 18.6% of the country's total production but employs 44.3% of the workforce in the sugarcane sector.
The manual harvesting of sugarcane has been associated with hardship and poor working conditions. In this regard, the study commissioned by the Dutch government confirmed that the main problem is indeed related to manual cane harvesting. A key problem in working conditions is the high workload. As a result of mechanization, the workload per worker has increased from 4 to 6 tons per day in the 1980s, to 8 to 10 tons per day in the 1990s, and up to 12 to 15 tons per day in 2007. If the quota is not fulfilled, workers can be fired. Producers say this problem will disappear with greater mechanization in the next decade.
Also, as mechanization of the harvest increases and is only feasible in flat terrain, more workers are being used in areas where conditions are not suitable for mechanized harvesting equipment, such as rough areas where the crops are planted irregularly, making working conditions harder and more hazardous.
Unhealthy working conditions, and even cases of slavery and deaths from overwork (cane cutting), have also been reported, but these are likely worst-case examples. Even though sufficiently strict labor laws are present in Brazil, enforcement is weak. Displacement and seasonal labor also imply physical and cultural disruption of multifunctional family farms and traditional communities.
Regarding social responsibility, the ethanol production sector maintains more than 600 schools, 200 nursery centers and 300 day care units, as legislation requires that 1% of the net sugar cane price and 2% of the net ethanol price must be devoted to medical, dental, pharmaceutical, sanitary, and educational services for sugar cane workers. In practice more than 90% of the mills provide health and dental care, transportation and collective life insurance, and over 80% provide meals and pharmaceutical care. However, for the temporary low-wage workers in cane cutting these services may not be available.
Effect on food prices
Some environmentalists, such as George Monbiot, have expressed fears that the marketplace will convert crops to fuel for the rich, while the poor starve and biofuels cause environmental problems. Environmental groups have raised concerns about this trade-off for several years. The food vs fuel debate reached a global scale in 2008 as a result of the international community's concerns regarding the steep increase in food prices. In April 2008, Jean Ziegler, then United Nations Special Rapporteur on the Right to Food, called biofuels a "crime against humanity", a claim he had previously made in October 2007, when he called for a 5-year ban on the conversion of land for the production of biofuels. Also in April 2008, the World Bank's President, Robert Zoellick, stated that "While many worry about filling their gas tanks, many others around the world are struggling to fill their stomachs. And it's getting more and more difficult every day."
Luiz Inácio Lula da Silva gave a strong rebuttal, calling these claims "fallacies resulting from commercial interests", putting the blame instead on U.S. and European agricultural subsidies, and describing the problem as restricted to U.S. ethanol produced from maize. The Brazilian President has also claimed on several occasions that his country's sugar cane–based ethanol industry has not contributed to the food price crisis.
A report released by Oxfam in June 2008 criticized biofuel policies of rich countries as neither a solution to the climate crisis nor the oil crisis, while contributing to the food price crisis. The report concluded that of all the biofuels available in the market, Brazilian sugarcane ethanol is "far from perfect" but it is the most favorable biofuel in the world in terms of cost and greenhouse gas balance. The report discusses some existing problems and potential risks, and asks the Brazilian government for caution to avoid jeopardizing its environmental and social sustainability.
The report also says that "Rich countries spent up to $15 billion last year supporting biofuels while blocking cheaper Brazilian ethanol, which is far less damaging for global food security."
A World Bank research report published in July 2008 found that from June 2002 to June 2008 "biofuels and the related consequences of low grain stocks, large land use shifts, speculative activity and export bans" accounted for 70–75% of total price rises. The study found that higher oil prices and a weak dollar explain 25–30% of the total price rise. The study said that "large increases in biofuels production in the United States and Europe are the main reason behind the steep rise in global food prices" and also stated that "Brazil's sugar-based ethanol did not push food prices appreciably higher." The report argues that the increased production of biofuels in these developed regions was supported by subsidies and tariffs on imports, and considers that without such policies, price increases worldwide would have been smaller. This research paper also concluded that Brazil's sugar cane–based ethanol has not raised sugar prices significantly, and recommends removing tariffs on ethanol imports by both the U.S. and EU, to allow more efficient producers such as Brazil and other developing countries, including many African countries, to produce ethanol profitably for export to meet the mandates in the EU and U.S.
An economic assessment report also published in July 2008 by the OECD agrees with the World Bank report regarding the negative effects of subsidies and trade restrictions, but found that the impact of biofuels on food prices is much smaller. The OECD study is also critical of the limited reduction of GHG emissions achieved by biofuels produced in Europe and North America, concluding that the current biofuel support policies would reduce greenhouse gas emissions from transport fuel by no more than 0.8% by 2015, while Brazilian ethanol from sugar cane reduces greenhouse gas emissions by at least 80% compared to fossil fuels. The assessment calls on governments for more open markets in biofuels and feedstocks in order to improve efficiency and lower costs.
A study by the Brazilian research unit of the Fundação Getúlio Vargas regarding the effects of biofuels on grain prices concluded that the major driver behind the 2007–2008 rise in food prices was speculative activity on futures markets under conditions of increased demand in a market with low grain stocks. The study also concluded that the expansion of biofuel production was not a relevant factor, and that there is no correlation between Brazilian sugarcane cultivated area and average grain prices; on the contrary, the spread of sugarcane was accompanied by rapid growth of grain crops in the country.
Sea level rise
Between 1901 and 2018, the average global sea level rose by 15–25 cm (6–10 in), or an average of 1–2 mm per year. This rate accelerated to 4.62 mm/yr for the decade 2013–2022. Climate change due to human activities is the main cause. Between 1993 and 2018, thermal expansion of water accounted for 42% of sea level rise. Melting temperate glaciers accounted for 21%, with Greenland accounting for 15% and Antarctica 8%.: 1576  Sea level rise lags changes in the Earth's temperature. So sea level rise will continue to accelerate between now and 2050 in response to warming that is already happening. What happens after that will depend on what happens with human greenhouse gas emissions. Sea level rise may slow down between 2050 and 2100 if there are deep cuts in emissions. It could then reach a little over 30 cm (1 ft) from now by 2100. With high emissions it may accelerate. It could rise by 1 m (3+1⁄2 ft) or even 2 m (6+1⁄2 ft) by then. In the long run, sea level rise would amount to 2–3 m (7–10 ft) over the next 2000 years if warming amounts to 1.5 °C (2.7 °F). It would be 19–22 metres (62–72 ft) if warming peaks at 5 °C (9.0 °F).: 21 Rising seas ultimately impact every coastal and island population on Earth. This can be through flooding, higher storm surges, king tides, and tsunamis. These have many knock-on effects. They lead to loss of coastal ecosystems like mangroves. Crop production falls because of salinization of irrigation water and damage to ports disrupts sea trade. The sea level rise projected by 2050 will expose places currently inhabited by tens of millions of people to annual flooding. Without a sharp reduction in greenhouse gas emissions, this may increase to hundreds of millions in the latter decades of the century. Areas not directly exposed to rising sea levels could be affected by large scale migrations and economic disruption. At the same time, local factors like tidal range or land subsidence, as well as the varying resilience and adaptive capacity of individual ecosystems, sectors, and countries will greatly affect the severity of impacts. For instance, sea level rise in the United States (particularly along the US East Coast) is already higher than the global average, and is expected to be 2 to 3 times greater than the global average by the end of the century. Yet, of the 20 countries with the greatest exposure to sea level rise, 12 are in Asia. Bangladesh, China, India, Indonesia, Japan, the Philippines, Thailand and Vietnam collectively account for 70% of the global population exposed to sea level rise and land subsidence. Finally, the greatest near-term impact on human populations will occur in the low-lying Caribbean and Pacific islands—many of those would be rendered uninhabitable by sea level rise later this century.Societies can adapt to sea level rise in three ways: by managed retreat, by accommodating coastal change, or by protecting against sea level rise through hard-construction practices like seawalls or soft approaches such as dune rehabilitation and beach nourishment. Sometimes these adaptation strategies go hand in hand; at other times choices must be made among different strategies. A managed retreat strategy is difficult if an area's population is quickly increasing: this is a particularly acute problem for Africa, where the population of low-lying coastal areas is projected to increase by around 100 million people within the next 40 years. 
Poorer nations may also struggle to implement the same approaches to adapt to sea level rise as richer states, and sea level rise at some locations may be compounded by other environmental issues, such as subsidence in so-called sinking cities. Coastal ecosystems typically adapt to rising sea levels by moving inland, but they may not always be able to do so due to natural or artificial barriers.
Observations
Between 1901 and 2018, the global mean sea level rose by about 20 cm (or 8 inches). More precise data gathered from satellite radar measurements found a rise of 7.5 cm (3 in) from 1993 to 2017 (an average of 2.9 mm/yr), accelerating to 4.62 mm/yr for the decade 2013–2022.
Regional variations
Sea level rise is not uniform around the globe. Some land masses are moving up or down as a consequence of subsidence (land sinking or settling) or post-glacial rebound (land rising due to the loss of weight from ice melt). Therefore, local relative sea level rise may be higher or lower than the global average. Gravitational effects of changing ice masses also add to differences in the distribution of sea water around the globe.
When a glacier or an ice sheet melts, the loss of mass reduces its gravitational pull. In some places near current and former glaciers and ice sheets, this has caused local water levels to drop, even as water levels increase more than average farther away from the ice sheet. Consequently, ice loss in Greenland has a different fingerprint on regional sea level than the equivalent loss in Antarctica. On the other hand, the Atlantic is warming at a faster pace than the Pacific. This has consequences for Europe and the U.S. East Coast, which receives a sea level rise 3–4 times the global average. The downturn of the Atlantic meridional overturning circulation (AMOC) has also been tied to extreme regional sea level rise on the US Northeast Coast.
Many ports, urban conglomerations, and agricultural regions are built on river deltas, where subsidence of land contributes to a substantially increased relative sea level rise. This is caused both by unsustainable extraction of groundwater, oil and gas, and by levees and other flood management practices preventing the accumulation of sediments which otherwise compensates for the natural settling of deltaic soils.: 638 : 88  Total human-caused subsidence in the Rhine-Meuse-Scheldt delta (Netherlands) is estimated at 3–4 m (10–13 ft), over 3 m (10 ft) in urban areas of the Mississippi River Delta (New Orleans), and over 9 m (30 ft) in the Sacramento–San Joaquin River Delta.: 81–90  On the other hand, post-glacial isostatic rebound causes relative sea level fall around the Hudson Bay in Canada and the northern Baltic.
Projections
There are two complementary ways of modeling sea level rise and making future projections. In the first approach, scientists use process-based modeling, where all relevant and well-understood physical processes are included in a global physical model. An ice-sheet model is used to calculate the contributions of ice sheets, and a general circulation model is used to compute the rising sea temperature and its expansion. While some of the relevant processes may be insufficiently understood, this approach can predict non-linearities and long delays in the response, which studies of the recent past will miss.
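As a rough illustration of how a process-based assessment assembles its result, the sketch below adds separately modelled component contributions into a total rate of rise. The component values are illustrative round numbers chosen to resemble the satellite-era budget discussed in this article, not output from any particular model.

```python
# Toy sea level budget: the global mean trend is treated as the sum of
# independently estimated components. Values are illustrative (mm/yr).

contributions_mm_per_yr = {
    "thermal expansion": 1.3,
    "glaciers": 0.7,
    "Greenland ice sheet": 0.5,
    "Antarctic ice sheet": 0.3,
    "land water storage": 0.2,
}

total = sum(contributions_mm_per_yr.values())
for name, value in contributions_mm_per_yr.items():
    print(f"{name:>22}: {value:.1f} mm/yr ({value / total:.0%})")
print(f"{'total':>22}: {total:.1f} mm/yr")
```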
In the other approach, scientists employ semi-empirical techniques using historical geological data to determine likely sea level responses to a warming world, in addition to some basic physical modeling. These semi-empirical sea level models rely on statistical techniques, using relationships between observed past contributions to global mean sea level and global mean temperature. This type of modeling was partially motivated by most physical models in previous Intergovernmental Panel on Climate Change (IPCC) literature assessments having underestimated the amount of sea level rise compared to observations of the 20th century.
Projections for the 21st century
The Intergovernmental Panel on Climate Change provides multiple plausible scenarios of 21st century sea level rise in each report, starting from the IPCC First Assessment Report in 1990. The differences between scenarios are primarily due to the uncertainty about future greenhouse gas emissions, which are subject to hard-to-predict political action, as well as economic developments. The scenarios used in the 2013–2014 Fifth Assessment Report (AR5) were called Representative Concentration Pathways, or RCPs. An estimate for sea level rise is given with each RCP, presented as a range with a lower and upper limit, to reflect the unknowns. The RCP2.6 pathway would see GHG emissions kept low enough to meet the Paris climate agreement goal of limiting warming by 2100 to 2 °C. Estimated SLR by 2100 for RCP2.6 was about 44 cm (the range given was 28–61 cm). For RCP8.5 the sea level would rise between 52 and 98 cm (20+1⁄2 and 38+1⁄2 in). The report did not estimate the possibility of global SLR being accelerated by the outright collapse of the marine-based parts of the Antarctic ice sheet, due to the lack of reliable information, only stating with medium confidence that if such a collapse occurred, it would not add more than several tens of centimeters to 21st century sea level rise. Since its publication, multiple papers have questioned this decision and presented higher estimates of SLR after attempting to better incorporate ice sheet processes in Antarctica and Greenland and to compare current events with paleoclimate data. For instance, a 2017 study by University of Melbourne researchers estimated that ice sheet processes would increase the AR5 sea level rise estimate for the low emission scenario by about one quarter, but that they would add nearly half under the moderate scenario and practically double the estimated sea level rise under the high emission scenario. The 2017 Fourth United States National Climate Assessment presented estimates comparable to the IPCC for the low emission scenarios, yet found that SLR of up to 2.4 m (8 ft) by 2100 relative to 2000 is physically possible if the high emission scenario triggers Antarctic ice sheet instability, greatly increasing the 130 cm (4 ft 3 in) estimate for the same scenario without instability.
A 2016 study led by Jim Hansen presented a hypothesis of vulnerable ice sheet collapse leading to near-term exponential sea level rise acceleration, with a doubling time of 10, 20 or 40 years, thus leading to multi-metre sea level rise in 50, 100 or 200 years, respectively. However, it remains a minority view amongst the scientific community. For comparison, two expert elicitation papers were published in 2019 and 2020, both looking at low and high emission scenarios.
The former combined the projections of 22 ice sheet experts to estimate a median SLR of 30 cm (12 in) by 2050 and 70 cm (27+1⁄2 in) by 2100 in the low emission scenario, and a median of 34 cm (13+1⁄2 in) by 2050 and 110 cm (43+1⁄2 in) by 2100 in a high emission scenario. They also estimated a small chance of sea levels exceeding 1 metre by 2100 even in the low emission scenario and of going beyond 2 metres in the high emission scenario, with the latter causing the displacement of 187 million people. The other paper surveyed 106 experts, who estimated a median of 45 cm (17+1⁄2 in) by 2100 for RCP2.6, with a 5%–95% range of 21–82 cm (8+1⁄2–32+1⁄2 in). For RCP8.5, the experts estimated a median of 93 cm (36+1⁄2 in) by 2100, with a 5%–95% range of 45–165 cm (17+1⁄2–65 in).
By 2020, the observed ice-sheet losses in Greenland and Antarctica were found to track the upper-end range of the AR5 projections. Consequently, the updated SLR projections in the 2019 IPCC Special Report on the Ocean and Cryosphere in a Changing Climate were somewhat larger than in AR5, and they were far more plausible when compared to an extrapolation of observed sea level rise trends.
The main set of sea level rise projections used in the IPCC Sixth Assessment Report (AR6) was ultimately only slightly larger than the one in SROCC, with SSP1-2.6 resulting in a 17–83% range of 32–62 cm (12+1⁄2–24+1⁄2 in) by 2100, SSP2-4.5 resulting in a 44–76 cm (17+1⁄2–30 in) range by 2100 and SSP5-8.5 leading to 65–101 cm (25+1⁄2–40 in). The report also provided extended projections on both the lower and the upper end, adding the SSP1-1.9 scenario, which represents meeting the 1.5 °C (2.7 °F) goal and has a likely range of 28–55 cm (11–21+1⁄2 in), as well as a "low-confidence" narrative involving processes like marine ice sheet and marine ice cliff instability under SSP5-8.5. For that scenario, it cautioned that sea level rise of over 2 m (6+1⁄2 ft) by 2100 "cannot be ruled out". As of 2022, NOAA suggests a 50% probability of 0.5 m (19+1⁄2 in) of sea level rise by 2100 under 2 °C (3.6 °F) of warming, increasing to >80% and >99% under 3–5 °C (5.4–9.0 °F).
Post-2100 sea level rise
Models consistent with paleo records of sea level rise: 1189  indicate that substantial long-term SLR will continue for centuries even if the temperature stabilizes. After 500 years, sea level rise from thermal expansion alone may have reached only half of its eventual level, which models suggest may lie within ranges of 0.5–2 m (1+1⁄2–6+1⁄2 ft). Additionally, tipping points of the Greenland and Antarctic ice sheets are expected to play a larger role over such timescales, with very long-term SLR likely to be dominated by ice loss from Antarctica, especially if the warming exceeds 2 °C (3.6 °F). Continued carbon dioxide emissions from fossil fuel sources could cause additional tens of metres of sea level rise over the next millennia. The available fossil fuel on Earth is enough to ultimately melt the entire Antarctic ice sheet, causing about 58 m (190 ft) of sea level rise.
In the next 2,000 years the sea level is predicted to rise by 2–3 m (6+1⁄2–10 ft) if the temperature rise peaks at its current 1.5 °C (2.7 °F), by 2–6 m (6+1⁄2–19+1⁄2 ft) if it peaks at 2 °C (3.6 °F) and by 19–22 m (62+1⁄2–72 ft) if it peaks at 5 °C (9.0 °F).: SPM-28  If temperature rise stops at 2 °C (3.6 °F) or at 5 °C (9.0 °F), the sea level would still continue to rise for about 10,000 years.
In the first case it will reach 8–13 m (26–42+1⁄2 ft) above the pre-industrial level, and in the second 28–37 m (92–121+1⁄2 ft).
As both the models and the observational records have improved, a range of studies has attempted to project SLR for the centuries immediately after 2100, which remains largely speculative. For instance, when the April 2019 expert elicitation asked its 22 experts about total sea level rise projections for the years 2200 and 2300 under its high, 5 °C warming scenario, it ended up with 90% confidence intervals of −10 cm (−4 in) to 740 cm (24+1⁄2 ft) and −9 cm (−3+1⁄2 in) to 970 cm (32 ft), respectively (negative values represent the extremely low probability of very large increases in ice sheet surface mass balance due to a climate change-induced increase in precipitation). The elicitation of 106 experts led by Stefan Rahmstorf also included 2300 for RCP2.6 and RCP8.5: the former had a median of 118 cm (46+1⁄2 in), a 17%–83% range of 54–215 cm (21+1⁄2–84+1⁄2 in) and a 5%–95% range of 24–311 cm (9+1⁄2–122+1⁄2 in), while the latter had a median of 329 cm (129+1⁄2 in), a 17%–83% range of 167–561 cm (65+1⁄2–221 in) and a 5%–95% range of 88–783 cm (34+1⁄2–308+1⁄2 in).
By 2021, AR6 was also able to provide estimates for year 2150 SLR alongside the 2100 estimates for the first time. According to it, keeping warming at 1.5 °C under the SSP1-1.9 scenario would result in sea level rise in the 17–83% range of 37–86 cm (14+1⁄2–34 in) by 2150, SSP1-2.6 in a range of 46–99 cm (18–39 in), SSP2-4.5 in a range of 66–133 cm (26–52+1⁄2 in) and SSP5-8.5 in a range of 98–188 cm (38+1⁄2–74 in). Moreover, it stated that if the "low-confidence" processes could result in over 2 m (6+1⁄2 ft) by 2100, sea level rise would then accelerate further to potentially approach 5 m (16+1⁄2 ft) by 2150. The report provided lower-confidence estimates for year 2300 sea level rise under SSP1-2.6 and SSP5-8.5 as well: the former had a range between 0.5 m (1+1⁄2 ft) and 3.2 m (10+1⁄2 ft), while the latter ranged from just under 2 m (6+1⁄2 ft) to just under 7 m (23 ft). Finally, the version of SSP5-8.5 involving low-confidence processes has a chance of exceeding 15 m (49 ft) by then.
In 2018, it was estimated that for every 5 years CO2 emissions are allowed to increase before finally peaking, the median 2300 SLR increases by 20 cm (8 in), with a 5% likelihood of a 1 m (3+1⁄2 ft) increase due to the same. The same estimate found that if the temperature stabilized below 2 °C (3.6 °F), 2300 sea level rise would still exceed 1.5 m (5 ft), while early net zero and slowly falling temperatures could limit it to 70–120 cm (27+1⁄2–47 in).
Measurements
Sea level changes can be driven by variations in the amount of water in the oceans, by changes in the volume of that water, or by varying land elevation compared to the sea surface. Over a consistent time period, assessments can attribute contributions to sea level rise by source and provide early indications of change in trajectory, which helps to inform adaptation plans. The different techniques used to measure changes in sea level do not measure exactly the same level. Tide gauges can only measure relative sea level, whilst satellites can also measure absolute sea level changes. To get precise measurements for sea level, researchers studying the ice and the oceans on our planet factor in ongoing deformations of the solid Earth, in particular due to landmasses still rising from past ice masses retreating, and also the Earth's gravity and rotation.
Satellites Since the launch of TOPEX/Poseidon in 1992, an overlapping series of altimetric satellites has been continuously recording the sea level and its changes. Those satellites can measure the hills and valleys in the sea caused by currents and detect trends in their height. To measure the distance to the sea surface, the satellites send a microwave pulse towards Earth and record the time it takes to return after reflecting off the ocean's surface. Microwave radiometers measure and correct the additional delay caused by water vapor in the atmosphere. Combining these data with the precisely known location of the spacecraft determines the sea-surface height to within a few centimetres (about one inch). Rates of sea level rise for the period 1993–2017 have been estimated from satellite altimetry to be 3.0 ± 0.4 millimetres (1⁄8 ± 1⁄64 in) per year.Satellites are useful for measuring regional variations in sea level, such as the substantial rise between 1993 and 2012 in the western tropical Pacific. This sharp rise has been linked to increasing trade winds, which occur when the Pacific Decadal Oscillation (PDO) and the El Niño–Southern Oscillation (ENSO) change from one state to the other. The PDO is a basin-wide climate pattern consisting of two phases, each commonly lasting 10 to 30 years, while the ENSO has a shorter period of 2 to 7 years. Tide gauges The global network of tide gauges is another important source of sea-level observations. Compared to the satellite record, this record has major spatial gaps but covers a much longer period of time. Coverage of tide gauges started primarily in the Northern Hemisphere, with data for the Southern Hemisphere remaining scarce up to the 1970s. The longest running sea-level measurements, NAP or Amsterdam Ordnance Datum established in 1675, are recorded in Amsterdam, Netherlands. In Australia, record collection is also quite extensive, including measurements by an amateur meteorologist beginning in 1837 and measurements taken from a sea-level benchmark struck on a small cliff on the Isle of the Dead near the Port Arthur convict settlement in 1841.This network was used, in combination with satellite altimeter data, to establish that global mean sea-level rose 19.5 cm (7.7 in) between 1870 and 2004 at an average rate of about 1.44 mm/yr (1.7 mm/yr during the 20th century). By 2018, data collected by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) had shown that the global mean sea level was rising by 3.2 mm (1⁄8 in) per year, at double the average 20th century rate, while the 2023 World Meteorological Organization report found further acceleration to 4.62 mm/yr over the 2013–2022 period. Thus, these observations help to check and verify predictions from climate change simulations. Regional differences are also visible in the tide gauge data. Some are caused by the local sea level differences, while others are due to vertical land movements. In Europe for instance, only some land areas are rising while the others are sinking. Since 1970, most tidal stations have measured higher seas, but sea levels along the northern Baltic Sea have dropped due to post-glacial rebound. Past sea level rise An understanding of past sea level is an important guide to where current changes in sea level will end up once these processes conclude. In the recent geological past, thermal expansion from increased temperatures and changes in land ice are the dominant reasons of sea level rise. 
The last time that the Earth was 2 °C (3.6 °F) warmer than pre-industrial temperatures was 120,000 years ago, when warming due to Milankovitch cycles (changes in the amount of sunlight due to slow changes in the Earth's orbit) caused the Eemian interglacial; sea levels during that warmer interglacial were at least 5 m (16 ft) higher than now. The Eemian warming was sustained over a period of thousands of years, and the magnitude of the rise in sea level implies a large contribution from the Antarctic and Greenland ice sheets.: 1139  According to the Royal Netherlands Institute for Sea Research, levels of atmospheric carbon dioxide similar to today's ultimately increased temperatures by over 2–3 °C (3.6–5.4 °F) around three million years ago. This temperature increase eventually melted one third of Antarctica's ice sheet, causing sea levels to rise 20 metres above present values.
Since the Last Glacial Maximum, about 20,000 years ago, sea level has risen by more than 125 metres (410 ft), with rates varying from less than 1 mm/year during the pre-industrial era to 40+ mm/year when the major ice sheets over Canada and Eurasia melted. Meltwater pulses are periods of fast sea level rise caused by the rapid disintegration of these ice sheets. The rate of sea level rise started to slow down about 8,200 years before present; sea level was almost constant for the last 2,500 years. The recent trend of rising sea level started at the end of the 19th century or at the beginning of the 20th.
Causes
The three main reasons warming causes global sea level to rise are the thermal expansion of the oceans and water inflow from melting ice sheets and glaciers. Sea level rise since the start of the 20th century has been dominated by retreat of glaciers and expansion of the ocean, but the contributions of the two large ice sheets (Greenland and Antarctica) are expected to increase in the 21st century. The ice sheets store most of the land ice (~99.5%), with a sea-level equivalent (SLE) of 7.4 m (24 ft 3 in) for Greenland and 58.3 m (191 ft 3 in) for Antarctica.
Each year about 8 mm (5⁄16 in) of precipitation (liquid equivalent) falls on the ice sheets in Antarctica and Greenland, mostly as snow, which accumulates and over time forms glacial ice. Much of this precipitation began as water vapor evaporated from the ocean surface. Some of the snow is blown away by wind or disappears from the ice sheet by melt or by sublimation (directly changing into water vapor). The rest of the snow slowly changes into ice. This ice can flow to the edges of the ice sheet and return to the ocean by melting at the edge or in the form of icebergs. If precipitation, surface processes and ice loss at the edge balance each other, sea level remains the same. However, scientists have found that ice is being lost, and at an accelerating rate.
Ocean heating
The oceans store more than 90% of the extra heat added to Earth's climate system by climate change and act as a buffer against its effects. The amount of heat needed to increase the average temperature of the entire world ocean by 0.01 °C (0.018 °F) would increase atmospheric temperature by approximately 10 °C (18 °F): a small change in the mean temperature of the ocean represents a very large change in the total heat content of the climate system. When the ocean gains heat, the water expands and sea level rises. The amount of expansion varies with both water temperature and pressure.
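A back-of-the-envelope sketch of this thermosteric effect is shown below. It assumes a single, uniformly warmed ocean layer and a fixed expansion coefficient (about 0.0002 per °C, a typical upper-ocean value chosen for illustration); as the next paragraph notes, the real coefficient varies with temperature and pressure, so this is only an order-of-magnitude estimate.

```python
# Thermosteric sea level rise, simplest possible form: a warmed layer of
# ocean expands in proportion to its thickness, the warming, and the
# thermal expansion coefficient of seawater (illustrative value below).

def thermosteric_rise_mm(layer_thickness_m: float,
                         warming_c: float,
                         expansion_coeff_per_c: float = 2.0e-4) -> float:
    """Sea level rise (mm) from uniformly warming one ocean layer."""
    return layer_thickness_m * warming_c * expansion_coeff_per_c * 1000.0

# e.g. warming the top 700 m of the ocean by 0.1 degC -> roughly 14 mm
print(thermosteric_rise_mm(700, 0.1))
```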
For each degree of warming, warmer water and water under greater pressure (due to depth) expand more than cooler water and water under less pressure.: 1161  Consequently, cold Arctic Ocean water will expand less than warm tropical water. Because different climate models present slightly different patterns of ocean heating, their predictions do not agree fully on the contribution of ocean heating to SLR. Heat gets transported into deeper parts of the ocean by winds and currents, and some of it reaches depths of more than 2,000 m (6,600 ft).
Antarctic ice loss
The large volume of ice on the Antarctic continent stores around 70% of the world's fresh water. There is constant ice discharge along the periphery, yet also constant accumulation of snow atop the ice sheet: together, these processes determine the Antarctic ice sheet's mass balance. Warming increases melting at the base of the ice sheet, but it is also likely to increase snowfall, helping offset the periphery melt even if the greater weight on the surface also accelerates ice flow into the ocean. While snowfall increased over the last two centuries, no increase was found in the interior of Antarctica over the last four decades. Further, sea ice, particularly in the form of ice shelves, blocks warmer waters around the continent from coming into direct contact with the ice sheet, so any loss of ice shelves substantially increases melt and instability. Different satellite methods for measuring ice mass and change are in good agreement, and combining methods leads to more certainty about how the East Antarctic Ice Sheet, the West Antarctic Ice Sheet, and the Antarctic Peninsula evolve. A 2018 systematic review study estimated that the average annual ice loss across the entire continent was 43 gigatons (Gt) during the period from 1992 to 2002, accelerating to an annual average of 220 Gt from 2012 to 2017. The sea level rise due to Antarctica has been estimated to be 0.25 mm per year from 1993 to 2005, and 0.42 mm per year from 2005 to 2015, although there are significant year-to-year variations.
In 2021, limiting global warming to 1.5 °C (2.7 °F) was projected to reduce the total land ice contribution to sea level rise by 2100 from 25 cm to 13 cm (from 10 in to 5 in) compared to current mitigation pledges, with mountain glaciers responsible for half the sea level rise contribution and the fate of Antarctica the source of the largest uncertainty. By 2019, several studies had attempted to estimate 2300 sea level rise caused by ice loss in Antarctica alone: they suggest a median of 16 cm (6+1⁄2 in) and a maximum of 37 cm (14+1⁄2 in) under the low-emission scenario, but a median of 1.46 m (nearly 5 ft) (with a minimum of 60 cm (2 ft) and a maximum of 2.89 m (9+1⁄2 ft)) under the highest-emission scenario.
East Antarctica
The world's largest potential source of sea level rise is the East Antarctic Ice Sheet (EAIS). It holds enough ice to raise global sea levels by 53.3 m (174 ft 10 in). Historically, it has been less studied than West Antarctica because it was considered relatively stable, an impression that was backed up by satellite observations and modelling of its surface mass balance. However, a 2019 study employed a different methodology and concluded that East Antarctica is already losing ice mass overall. All methods agree that the Totten Glacier has lost ice in recent decades in response to ocean warming and possibly a reduction in local sea ice cover.
Totten Glacier is the primary outlet of the Aurora Subglacial Basin, a major ice reservoir in East Antarctica that could rapidly retreat due to hydrological processes. The global sea level potential of 3.5 m (11 ft 6 in) flowing through Totten Glacier alone is of similar magnitude to the entire probable contribution of the West Antarctic Ice Sheet.The other major ice reservoir on East Antarctica that might rapidly retreat is the Wilkes Basin which is subject to marine ice sheet instability. Ice loss from these outlet glaciers is possibly compensated by accumulation gains in other parts of Antarctica. In 2022, it was estimated that the Wilkes Basin, Aurora Basin and other nearby subglacial basins are likely to have a collective tipping point around 3 °C (5.4 °F) of global warming, although it may be as high as 6 °C (11 °F), or as low as 2 °C (3.6 °F). Once this tipping point is crossed, the collapse of these subglacial basins could take place as little as 500 or as much as 10,000 years: the median timeline is 2000 years. On the other hand, the entirety of the EAIS would not be committed to collapse until global warming reaches 7.5 °C (13.5 °F) (range between 5 °C (9.0 °F) and 10 °C (18 °F)), and would take at least 10,000 years to disappear. It is also suggested that the loss of two-thirds of its volume may require at least 6 °C (11 °F) of warming. West Antarctica Even though East Antarctica contains the largest potential source of sea level rise, West Antarctica ice sheet (WAIS) is substantially more vulnerable. In contrast to East Antarctica and the Antarctic Peninsula, temperatures on West Antarctica have increased significantly with a trend between 0.08 °C (0.14 °F) per decade and 0.96 °C (1.73 °F) per decade between 1976 and 2012. Consequently, satellite observations recorded a substantial increase in WAIS melting from 1992 to 2017, resulting in 7.6 ± 3.9 mm (19⁄64 ± 5⁄32 in) of Antarctica sea level rise, with a disproportionate role played by outflow glaciers in the Amundsen Sea Embayment.In 2021, AR6 estimated that while the median increase in sea level rise from the West Antarctic ice sheet melt by 2100 is ~11 cm (5 in) under all emission scenarios (since the increased warming would intensify the water cycle and increase snowfall accumulation over the ice sheet at about the same rate as it would increase ice loss), it can conceivably contribute as much as 41 cm (16 in) by 2100 under the low-emission scenario and 57 cm (22 in) under the highest-emission one. This is because WAIS is vulnerable to several types of instability whose role remains difficult to model. These include hydrofracturing (meltwater collecting atop the ice sheet pools into fractures and forces them open), increased contact of warm ocean water with ice shelves due to climate-change induced ocean circulation changes, marine ice sheet instability (warm water entering between the seafloor and the base of the ice sheet once it is no longer heavy enough to displace the flow, causing accelerated melting and collapse) and even marine ice cliff instability (ice cliffs with heights greater than 100 m (330 ft) collapsing under their own weight once they are no longer buttressed by ice shelves). These processes do not have equal influence and are not all equally likely to happen: for instance, marine ice cliff instability has never been observed and was ruled out by some of the more detailed modelling. The Thwaites and Pine Island glaciers are considered the most prone to ice sheet instability processes. 
Both glaciers' bedrock topography gets deeper farther inland, exposing them to more warm water intrusion into the grounding zone. Their contribution to global sea levels has already accelerated since the beginning of the 21st century, with the Thwaites Glacier now accounting for 4% of global sea level rise. At the end of 2021, it was estimated that the Thwaites Ice Shelf could collapse within three to five years, which would then make the destabilization of the entire Thwaites Glacier inevitable. The Thwaites Glacier itself would raise sea level by 65 cm (25+1⁄2 in) if it were to collapse completely, although this process is estimated to unfold over several centuries.
Since most of the bedrock underlying the West Antarctic Ice Sheet lies well below sea level, it is currently buttressed by the Thwaites and Pine Island Glaciers, meaning that their loss would likely destabilize the entire ice sheet. This possibility was first proposed back in the 1970s, when a 1978 study predicted that anthropogenic CO2 emissions doubling by 2050 would cause 5 m (16+1⁄2 ft) of SLR from rapid WAIS loss alone. Since then, improved modelling has concluded that the ice within the WAIS would raise the sea level by 3.3 m (10 ft 10 in). In 2022, the collapse of the entire West Antarctic Ice Sheet was estimated to unfold over a period of about 2000 years, with an absolute minimum of 500 years (and a potential maximum of 13,000 years). At the same time, this collapse was considered likely to be triggered at around 1.5 °C (2.7 °F) of global warming and to become unavoidable at 3 °C (5.4 °F). At worst, it may even have been triggered already: subsequent (2023) research made that possibility more likely, suggesting that temperatures in the Amundsen Sea are likely to increase at triple the historical rate even with low or "medium" atmospheric warming, and even faster with high warming. Without unexpected strong negative feedbacks emerging, the collapse of the ice sheet would become inevitable.
While it would take a very long time from start to end for the ice sheet to disappear, it has been suggested that the only way to stop it once triggered is by lowering the global temperature to 1 °C (1.8 °F) below the preindustrial level, i.e. 2 °C (3.6 °F) below the temperature of 2020. Other researchers have suggested that a climate engineering intervention aiming to stabilize the ice sheet's glaciers may delay its loss by centuries and give more time to adapt, although this is an uncertain proposal and would necessarily end up as one of the most expensive projects ever attempted by humanity.
Greenland ice sheet loss
Most ice on Greenland is part of the Greenland ice sheet, which is 3 km (10,000 ft) thick at its thickest. Other Greenland ice forms isolated glaciers and ice caps. The sources of Greenland's contribution to sea level rise are ice sheet melting (70%) and glacier calving (30%). Average annual ice loss in Greenland more than doubled in the early 21st century compared to the 20th century, and there was a corresponding increase in the SLR contribution, from 0.07 mm per year between 1992 and 1997 to 0.68 mm per year between 2012 and 2017. Total ice loss from the Greenland Ice Sheet between 1992 and 2018 amounted to 3,902 gigatons (Gt) of ice, equivalent to an SLR of 10.8 mm. The contribution for the 2012–2016 period was equivalent to 37% of sea level rise from land ice sources (excluding thermal expansion). This rate of ice sheet melting is also associated with the higher end of predictions from past IPCC assessment reports.
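The conversion between ice-sheet mass loss and global mean sea level rise quoted above follows from simple arithmetic: spreading the meltwater evenly over the ocean surface, roughly 362 Gt of ice corresponds to 1 mm of global mean rise. The sketch below uses this standard back-of-the-envelope approximation and ignores the gravitational "fingerprint" effects described earlier.

```python
# Rough conversion from ice-sheet mass loss to global mean sea level rise,
# assuming the meltwater spreads evenly over the ocean surface.

OCEAN_AREA_M2 = 3.618e14   # about 361.8 million km^2
WATER_DENSITY = 1000.0     # kg/m^3 (fresh meltwater)

def gt_to_mm_slr(mass_gt: float) -> float:
    """Global mean sea level rise (mm) from a mass loss given in gigatonnes."""
    volume_m3 = mass_gt * 1e12 / WATER_DENSITY   # 1 Gt = 1e12 kg
    return volume_m3 / OCEAN_AREA_M2 * 1000.0    # metres -> millimetres

# 3,902 Gt lost from Greenland over 1992-2018 -> about 10.8 mm, matching
# the figure quoted above.
print(gt_to_mm_slr(3902))
```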
In 2021, AR6 estimated that under the SSP1-2.6 emission scenario, which largely fulfils the Paris Agreement goals, Greenland ice sheet melt adds around 6 cm (2+1⁄2 in) to global sea level rise by the end of the century, with a plausible maximum of 15 cm (6 in) (and even a very small chance of the ice sheet reducing sea levels by around 2 cm (1 in) through gaining mass via the surface mass balance feedback). The scenario associated with the highest global warming, SSP5-8.5, would see Greenland add a minimum of 5 cm (2 in) to sea level rise, a likely median of 13 cm (5 in) and a plausible maximum of 23 cm (9 in).
Certain parts of the Greenland ice sheet are already known to be committed to unstoppable sea level rise. Greenland's peripheral glaciers and ice caps crossed an irreversible tipping point around 1997, and will continue to melt. A subsequent study found that the climate of the past 20 years (2000–2019) would already result in the future loss of ~3.3% of the ice sheet's volume in this manner, committing it to an eventual 27 cm (10+1⁄2 in) of SLR, independent of any future temperature change. There is also a global warming threshold beyond which a near-complete melting of the Greenland ice sheet occurs. Earlier research put this threshold value as low as 1 °C (1.8 °F), and definitely no higher than 4 °C (7.2 °F), above pre-industrial temperatures.: 1170  A 2021 analysis of sub-glacial sediment at the bottom of a 1.4 km Greenland ice core found that the Greenland ice sheet melted away at least once during the last million years, even though temperatures over that period never exceeded 2.5 °C (4.5 °F) above today's. In 2022, it was estimated that the tipping point of the Greenland Ice Sheet may be as low as 0.8 °C (1.4 °F) and is certainly no higher than 3 °C (5.4 °F): there is a high chance that it will be crossed around 1.5 °C (2.7 °F). Once crossed, it would take between 1,000 and 15,000 years for the ice sheet to disintegrate entirely, with 10,000 years the most likely estimate.
Mountain glacier loss
There are roughly 200,000 glaciers on Earth, spread out across all continents. Less than 1% of glacier ice is in mountain glaciers, compared to 99% in Greenland and Antarctica. However, this small size also makes mountain glaciers more vulnerable to melting than the larger ice sheets. This means they have made a disproportionate contribution to historical sea level rise and are set to contribute a smaller, but still significant, fraction of sea level rise in the 21st century. Observational and modelling studies of mass loss from glaciers and ice caps indicate a contribution to sea level rise of 0.2–0.4 mm per year, averaged over the 20th century. The contribution for the 2012–2016 period was nearly as large as that of Greenland: 0.63 mm of sea level rise per year, equivalent to 34% of sea level rise from land ice sources. Glaciers contributed around 40% to sea level rise during the 20th century, with estimates for the 21st century of around 30%. The IPCC Fifth Assessment Report estimated that glaciers would contribute 7–24 cm (3–9+1⁄2 in) to global sea levels.: 1165
In 2023, a Science paper estimated that at 1.5 °C (2.7 °F), one quarter of mountain glacier mass would be lost by 2100 and nearly half would be lost at 4 °C (7.2 °F), contributing ~9 cm (3+1⁄2 in) and ~15 cm (6 in) to sea level rise, respectively.
Because glacier mass is disproportionately concentrated in the most resilient glaciers, this would in practice remove between 49% and 83% of glacier formations. It further estimated that the current likely trajectory of 2.7 °C (4.9 °F) would result in an SLR contribution of ~11 cm (4+1⁄2 in) by 2100.
Mountain glaciers are even more vulnerable over the longer term. In 2022, another Science paper estimated that almost no mountain glaciers can be expected to survive once warming crosses 2 °C (3.6 °F), and that their complete loss is largely inevitable around 3 °C (5.4 °F): there is even a possibility of complete loss after 2100 at just 1.5 °C (2.7 °F). This could happen as early as 50 years after the tipping point is crossed, although 200 years is the most likely value, and the maximum is around 1000 years.
Sea ice loss
Sea ice loss contributes very slightly to global sea level rise. If the melt water from ice floating in the sea were exactly the same as sea water, then, according to Archimedes' principle, no rise would occur. However, melted sea ice contains less dissolved salt than sea water and is therefore less dense, with a slightly greater volume per unit of mass. If all floating ice shelves and icebergs were to melt, sea level would rise by only about 4 cm (1+1⁄2 in).
Changes to land water storage
Human activity impacts how much water is stored on land. Dams retain large quantities of water, which is stored on land rather than flowing into the sea (even though the total quantity stored will vary somewhat from time to time). On the other hand, humans extract water from lakes, wetlands and underground reservoirs for food production, which often causes subsidence. Furthermore, the hydrological cycle is influenced by climate change and deforestation, which can lead to further positive and negative contributions to sea level rise. In the 20th century, these processes roughly balanced, but dam building has slowed down and is expected to stay low for the 21st century.: 1155  Groundwater extraction for irrigation between 1993 and 2010 redistributed enough water to shift Earth's rotational pole by 78.48 centimetres (30.90 in); the groundwater depletion involved is equivalent to a global sea level rise of 6.24 millimetres (0.246 in).
Impacts
The impacts of sea level rise include higher and more frequent high-tide and storm-surge flooding, increased coastal erosion, inhibition of primary production processes, more extensive coastal inundation, and changes in surface water quality and groundwater. These can lead to a greater loss of property and coastal habitats, loss of life during floods and loss of cultural resources. Agriculture and aquaculture can also be impacted. There can also be loss of tourism, recreation, and transport-related functions.: 356  Coastal flooding impacts are exacerbated by land use changes such as urbanisation or deforestation of low-lying coastal zones. Regions that are already vulnerable to rising sea level also struggle with coastal flooding washing away land and altering the landscape.
Because the projected extent of sea level rise by 2050 will be only slightly affected by any changes in emissions, there is confidence that 2050 levels of SLR combined with the 2010 population distribution (i.e. absent the effects of population growth and human migration) would result in ~150 million people under the water line during high tide and ~300 million in places which are flooded every year, an increase of 40 million and 50 million people respectively relative to the 2010 values.
By 2100, there would be another 40 million people under the water line during high tide if sea level rise remains low, and 80 million for a high estimate of the median sea level rise. If ice sheet processes under the highest emission scenario result in sea level rise of well over one metre (3+1⁄4 ft) by 2100, with a chance of levels over two metres (6+1⁄2 ft),: TS-45  then as many as 520 million additional people would end up under the water line during high tide and 640 million in places which are flooded every year, when compared to the 2010 population distribution. Over the longer term, coastal areas are particularly vulnerable to rising sea levels, changes in the frequency and intensity of storms, increased precipitation, and rising ocean temperatures. Ten percent of the world's population live in coastal areas that are less than 10 metres (33 ft) above sea level. Furthermore, two-thirds of the world's cities with over five million people are located in these low-lying coastal areas. In total, approximately 600 million people live directly on the coast around the world. Cities such as Miami, Rio de Janeiro, Osaka and Shanghai will be especially vulnerable later in the century under the warming of 3 °C (5.4 °F), which is close to the current trajectory. Altogether, LiDAR-based research had established in 2021 that 267 million people worldwide lived on land less than 2 m (6+1⁄2 ft) above sea level and that with a 1 m (3+1⁄2 ft) sea level rise and zero population growth, that number could increase to 410 million people.Even populations who live further inland may be impacted by a potential disruption of sea trade, and by migrations. In 2023, United Nations secretary general António Guterres warned that sea level rises risk causing human migrations on a "biblical scale". Sea level rise will inevitably affect ports, but the current research into this subject is limited. Not enough is known about the investments required to protect the ports currently in use, and for how they may be protected before it becomes more reasonable to build new port facilities elsewhere. Moreover, some coastal regions are rich agricultural lands, whose loss to the sea can result in food shortages elsewhere. This is a particularly acute issue for river deltas such as Nile Delta in Egypt and Red River and Mekong Deltas in Vietnam, which are disproportionately affected by saltwater intrusion into the soil and irrigation water. Ecosystems When seawater reaches inland, coastal plants, birds, and freshwater/estuarine fish are threatened with habitat loss due to flooding and soil/water salinization. So-called ghost forests emerge when coastal forest areas become inundated with saltwater to the point no trees can survive. Starting around 2050, some nesting sites in Florida, Cuba, Ecuador and the island of Sint Eustatius for leatherback, loggerhead, hawksbill, green and olive ridley turtles are expected to be flooded, and the proportion would only increase over time. And in 2016, Bramble Cay islet in the Great Barrier Reef was inundated, flooding the habitat of a rodent named Bramble Cay melomys. In 2019, it was officially declared extinct. While some ecosystems can move land inward with the high-water mark, many are prevented from migrating due to natural or artificial barriers. This coastal narrowing, sometimes called 'coastal squeeze' when considering human-made barriers, could result in the loss of habitats such as mudflats and tidal marshes. 
Mangrove ecosystems on the mudflats of tropical coasts nurture high biodiversity, yet they are particularly vulnerable due to mangrove plants' reliance on breathing roots or pneumatophores, which might grow to be half a metre tall. While mangroves can adjust to rising sea levels by migrating inland and building vertically using accumulated sediment and organic matter, they will be submerged if the rate is too rapid, resulting in the loss of an ecosystem. Both mangroves and tidal marshes protect against storm surges, waves and tsunamis, so their loss makes the effects of sea level rise worse. Human activities, such as dam building, may restrict sediment supplies to wetlands, and thereby prevent natural adaptation processes. The loss of some tidal marshes is unavoidable as a consequence.Likewise, corals, important for bird and fish life, need to grow vertically to remain close to the sea surface in order to get enough energy from sunlight. The corals have so far been able to keep up the vertical growth with the rising seas, but might not be able to do so in the future. Regional impacts Africa In Africa, risk from sea level rise is amplified by the future population growth. It is believed that 54.2 million people lived in the highly exposed low elevation coastal zones (LECZ) around 2000, but this number will effectively double to around 110 million people by 2030, and by 2060 it will be around 185 to 230 million people, depending on the extent of population growth. While the average regional sea level rise by 2060 will be around 21 cm (with climate change scenarios making little difference at that point), local geography and population trends interact to increase the exposure to hazards like 100-year floods in a complex way. In the near term, some of the largest displacement is projected to occur in the East Africa region, where at least 750,000 people are likely to be displaced from the coasts between 2020 and 2050. It was also estimated that by 2050, 12 major African cities (Abidjan, Alexandria, Algiers, Cape Town, Casablanca, Dakar, Dar es Salaam, Durban, Lagos, Lomé, Luanda and Maputo) would collectively sustain cumulative damages of USD 65 billion for the "moderate" climate change scenario RCP4.5 and USD 86.5 billion for the high-emission scenario RCP8.5: the version of the high-emission scenario with additional impacts from high ice sheet instability would involve up to 137.5 billion USD in damages. Additional accounting for the "low-probability, high-damage events" may increase aggregate risks to USD 187 billion for the "moderate" RCP4.5, USD 206 billion for RCP8.5 and USD 397 billion under the high-end instability scenario. In all of these estimates, the Egyptian city of Alexandria alone amounts for around half of this figure: hundreds of thousands of people in its low-lying areas may already have to be relocated in the coming decade. Across sub-Saharan Africa as a whole, damages from sea level rise could reach 2–4% of GDP by 2050, although this is strongly affected by the extent of future economic growth and adaptation. In the longer term, Egypt, Mozambique and Tanzania are also projected to have the largest number of people affected by annual flooding amongst all African countries if global warming reaches 4 °C by the end of the century (a level associated with the RCP8.5 scenario). 
Under RCP8.5, 10 important cultural sites (Casbah of Algiers, Carthage Archaeological site, Kerkouane, Leptis Magna Archaeological site, Medina of Sousse, Medina of Tunis, Sabratha Archaeological site, Robben Island, Island of Saint-Louis and Tipasa) would be at risk of flooding and erosion by the end of the century, along with a total of 15 Ramsar sites and other natural heritage sites (Bao Bolong Wetland Reserve, Delta du Saloum National Park, Diawling National Park, Golfe de Boughrara, Kalissaye, Lagune de Ghar el Melh et Delta de la Mejerda, Marromeu Game Reserve, Parc Naturel des Mangroves du Fleuve Cacheu, Seal Ledges Provincial Nature Reserve, Sebkhet Halk Elmanzel et Oued Essed, Sebkhet Soliman, Réserve Naturelle d'Intérêt Communautaire de la Somone, Songor Biosphere Reserve, Tanbi Wetland Complex and Watamu Marine National Park). Asia As of 2022, it is estimated that 63 million people in East and South Asia are already at risk from a 100-year flood, in large part due to inadequate coastal protection in many countries. This will be greatly exacerbated in the future: Asia has the largest population at risk from sea level rise, and Bangladesh, China, India, Indonesia, Japan, Pakistan, the Philippines, Thailand and Vietnam alone account for 70% of the people exposed to sea level rise during the 21st century. This is entirely due to the region's densely populated coasts, as the rate of sea level rise in Asia is generally similar to the global average. Exceptions include the Indo-Pacific region, where it had been around 10% faster since the 1990s, and the coast of China, where sea level rise considered globally "extreme" had been detected since the 1980s, and where the difference between lower and higher levels of global warming is believed to have a disproportionate impact on flood frequency. It is also estimated that future sea level rise along the Japanese island of Honshu would be up to 25 cm faster than the global average under RCP8.5, the most intense climate change scenario. RCP8.5 is additionally associated with the loss of at least a third of Japanese beaches and 57–72% of Thai beaches. One estimate finds that Asia will suffer direct economic damages of 167.6 billion USD at 0.47 meters of sea level rise, 272.3 billion USD at 1.12 meters and 338.1 billion USD at 1.75 meters (along with an indirect impact of 8.5, 24 or 15 billion USD from population displacement at those levels), with China, India, the Republic of Korea, Japan, Indonesia and Russia experiencing the largest economic losses. Out of the 20 coastal cities expected to see the highest flood losses by 2050, 13 are in Asia. For nine of those (Bangkok, Guangzhou, Ho Chi Minh City, Jakarta, Kolkata, Nagoya, Tianjin, Xiamen and Zhanjiang) sea level rise would be compounded by subsidence. By 2050, Guangzhou would see 0.2 meters of sea level rise and estimated annual economic losses of 254 million USD – the highest in the world. One estimate calculates that in the absence of adaptation, cumulative economic losses caused by sea level rise in Guangzhou under RCP8.5 would reach ~331 billion USD by 2050, ~660 billion USD by 2070 and 1.4 trillion USD by 2100, while the impact of high-end ice sheet instability would increase these figures to ~420 billion USD, ~840 billion USD and ~1.8 trillion USD, respectively. In Shanghai, coastal inundation currently amounts to ~0.03% of local GDP, but this would increase to 0.8% (confidence interval of 0.4–1.4%) by 2100 even under the "moderate" RCP4.5 scenario in the absence of adaptation.
Likewise, failing to adapt to sea level rise in Mumbai would result in damages of 112–162 billion USD by 2050, which would nearly triple by 2070. As a result, efforts like the Mumbai Coastal Road are being implemented, although they are likely to affect coastal ecosystems and fishing livelihoods. Nations with extensive rice production along the coasts, like Bangladesh, Vietnam and China, are already seeing adverse impacts from saltwater intrusion. It is estimated that sea level rise in Bangladesh may force the relocation of up to one-third of power plants as early as 2030, while a similar proportion would have to deal with the increased salinity of their cooling water by then. Research from the 2010s indicates that by 2050, between 0.9 and 2.1 million people would be displaced by sea level rise alone: this would likely necessitate the creation of ~594,000 additional jobs and ~197,000 housing units in the areas receiving the displaced persons, as well as securing the supply of an additional ~783 billion calories' worth of food. In 2021, another paper estimated that 816,000 people would be directly displaced by sea level rise by 2050, but that this would increase to 1.3 million when the indirect effects are taken into account. Both studies assume that the majority of the displaced people would travel to other areas of Bangladesh, and attempt to estimate population changes in different localities. In an attempt to address these challenges, the Bangladesh Delta Plan 2100 was launched in 2018. As of 2020, it was seen as falling short of most of its initial targets, and its progress continues to be monitored. In 2019, the president of Indonesia, Joko Widodo, declared that the city of Jakarta was sinking to such a degree that the capital would have to be moved to another city. A study conducted between 1982 and 2010 found that some areas of Jakarta have been sinking by as much as 28 cm (11 inches) per year due to groundwater drilling and the weight of its buildings, and the problem is now exacerbated by sea level rise. However, there are concerns that building in a new location will increase tropical deforestation. Other so-called sinking cities, such as Bangkok and Tokyo, are similarly vulnerable to the combination of subsidence and sea level rise. Australasia In Australia, erosion and flooding of Queensland's Sunshine Coast beaches are projected to intensify by 60% by 2030, with severe impacts on tourism in the absence of adaptation. Adaptation costs to sea level rise under the high-emission RCP8.5 scenario are projected to be three times greater than the adaptation costs under the low-emission RCP2.6 scenario. For a 0.2- to 0.3-m sea level rise (set to occur by 2050), what is currently a 100-year flood would occur every year in the New Zealand cities of Wellington and Christchurch. Under 0.5 m of sea level rise, the current 100-year flood in Australia would be likely to occur several times a year, while in New Zealand, buildings with a collective worth of NZ$12.75 billion would become exposed to new 100-year floods. A metre or so of sea level rise would threaten New Zealand assets worth NZD$25.5 billion (with a disproportionate impact on Maori-owned holdings and cultural heritage objects), and Australian assets worth AUD$164–226 billion (including many unsealed roads and railway lines). The latter represents a 111% rise in Australia's inundation costs between 2020 and 2100. Central and South America By 2100, a minimum of 3–4 million people in South America would be directly affected by coastal flooding and erosion.
6% of the population of Venezuela, 56% of the population of Guyana (including in the capital, Georgetown, much of which is already below the sea level) and 68% of the population of Suriname are already living in low-lying areas exposed to sea level rise. In Brazil, the coastal ecoregion of Caatinga is responsible for 99% of its shrimp production, yet its unique conditions are threatened by a combination of sea level rise, ocean warming and ocean acidification. The port complex of Santa Catarina had been interrupted by extreme wave or wind behavior 76 times in one 6-year period in 2010s, with a 25,000-50,000 USD loss for each idle day. In Port of Santos, storm surges were three times more frequent between 2000 and 2016 than between 1928 and 1999. Europe Many sandy coastlines in Europe are vulnerable to erosion caused by sea level rise. In Spain, Costa del Maresme is anticipated to retreat by 16 meters by 2050 relative to 2010, and potentially by 52 meters by 2100 under RCP8.5 Other vulnerable coastlines include Tyrrhenian Sea coast of Italy's Calabria region, Barra-Vagueira coast in Portugal and Nørlev Strand in Denmark.In France, it was estimated that 8,000-10,000 people would be forced to migrate away from the coasts by 2080. The Italian city of Venice is located on islands. It is highly vulnerable to flooding and has already spent $6 billion on a barrier system. A quarter of the German state of Schleswig-Holstein, inhabited by over 350,000 people, is at low elevation and has been vulnerable to flooding since the preindustrial times. Many levees already exist, but to its complex geography, a flexible mix of hard and soft measures was chosen, which is intended to support a safety margin of >1 meter rise per century. In the United Kingdom, sea level at the end of the century would increase by 53 to 115 centimetres at the mouth of river Thames and 30 to 90 centimetres at Edinburgh. To address this reality, it has divided its coast into 22 areas, each covered by a Shoreline Management Plan. Those are further sub-divided into 2000 management units in total, spanning across three "epochs" (0–20 years, 20-50 and 50–100 years).The Netherlands is a country that sits partially below sea level and is subsiding. It has responded by extending its Delta Works program. Drafted in 2008, the Delta Commission report said that the country must plan for a rise in the North Sea up to 1.3 m (4 ft 3 in) by 2100 and plan for a 2–4 m (7–13 ft) rise by 2200. It advised annual spending between €1.0 and €1.5 billion for measures such as broadening coastal dunes and strengthening sea and river dikes. Worst-case evacuation plans were also drawn up. North America As of 2017, around 95 million Americans lived on the coast: for Canada and Mexico, this figure amounts to 6.5 million and 19 million people. Increased chronic nuisance flooding and king tide flooding is already an issue in the highly vulnerable state of Florida, as well as alongside the US East Coast. On average, the number of days with tidal flooding in the USA increased 2 times in the years 2000-2020, reaching 3–7 days per year. In some areas the increase was much stronger: 4 times in the Southeast Atlantic and 11 times in the Western Gulf. By the year 2030 the average number is expected to be 7–15 days, reaching 25–75 days by 2050. U.S. 
coastal cities have responded to that through beach nourishment or beach replenishment, where mined sand is trucked in and added, in addition to other adaptation measures such as zoning, restrictions on state funding, and building code standards. Along an estimated 15% of the US coastline, the majority of local groundwater levels are already below the sea level. This places those groundwater reservoirs at risk of sea water intrusion, which renders fresh water unusable once its concentration exceeds 2-3%. The damages are also widespread in Canada and will affect both major cities like Halifax and the more remote locations like Lennox Island, whose Mi'kmaq community is already considering relocation due to widespread coastal erosion. In Mexico, the damages from SLR to tourism hotspots like Cancun, Isla Mujeres, Playa del Carmen, Puerto Morelos and Cozumel could amount to 1.4–2.3 billion USD. The increase in storm surge due to sea level rise is also a problem. For example, due to this effect Hurricane Sandy caused additional 8 billion dollars in damage, impacted 36,000 more houses and 71,000 more people.In the future, northern Gulf of Mexico, Atlantic Canada and the Pacific coast of Mexico would experience the greatest sea level rise. By 2030, flooding along the US Gulf Coast could cause economic losses of up to 176 billion USD: around 50 billion USD may be avoided through nature-based solutions like wetland restoration and oyster reef restoration. By 2050, the frequency of coastal flooding in the US is expected to rise tenfold to four "moderate" flooding events per year, even without storms or heavy rainfall. In the New York City, current 100-year flood would occur once in 19–68 years by 2050 and 4–60 years by 2080. By 2050, 20 million people in the greater New York City area would be threatened, as 40% of the existing water treatment facilities would be compromised and 60% of power plants will need to be relocated. By 2100, sea level rise of 0.9 m (3 ft) and 1.8 m (6 ft) would threaten 4.2 and 13.1 million people in the US, respectively. In California alone, 2 m (6+1⁄2 ft) of SLR could affect 600,000 people and threaten over 150 billion USD in property with inundation, potentially representing over 6% of the state's GDP. In North Carolina, a meter of SLR inundates 42% of the Albemarle-Pamlico Peninsula, costing up to 14 billion USD (at 2016 value of the currency). In nine southeast US states, the same level of sea level rise would claim up to 13,000 historical and archaeological sites, including over 1000 sites eligible for inclusion in the National Register for Historic Places. Island nations Small island states are nations whose populations are concentrated on atolls and other low islands. Atolls on average reach 0.9–1.8 m (3–6 ft) above sea level. This means that no other place is more vulnerable to coastal erosion, flooding and salt intrusion into soils and freshwater caused by sea level rise. The latter may render an island uninhabitable well before it is completely flooded. Already, children in small island states are encountering hampered access to food and water and are suffering an increased rate of mental and social disorders due to these stressors. At current rates, sea level would be high enough to make the Maldives uninhabitable by 2100, while five of the Solomon Islands have already disappeared due to the combined effects of sea level rise and stronger trade winds that were pushing water into the Western Pacific. 
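Several of the regional estimates above describe today's "100-year flood" recurring every few years, or even annually, once mean sea level rises by a few tens of centimetres. The following is a minimal sketch of how such return-period amplification can be computed, assuming annual maximum water levels follow a Gumbel distribution; the location and scale parameters here are purely illustrative and are not taken from any of the cited studies.

```python
import math

# Assumed Gumbel parameters for annual maximum water levels (illustrative only).
MU = 0.0     # location (m above local datum)
BETA = 0.15  # scale (m)

def gumbel_cdf(x: float) -> float:
    return math.exp(-math.exp(-(x - MU) / BETA))

def return_level(period_years: float) -> float:
    """Water level exceeded on average once per `period_years`."""
    p_non_exceed = 1.0 - 1.0 / period_years
    return MU - BETA * math.log(-math.log(p_non_exceed))

def new_return_period(old_period: float, slr_m: float) -> float:
    """Return period of the old extreme level after mean sea level rises by slr_m."""
    level = return_level(old_period)
    p_exceed = 1.0 - gumbel_cdf(level - slr_m)  # mean sea level shifted up by slr_m
    return 1.0 / p_exceed

# With these toy parameters, 0.3 m of sea level rise turns a 100-year event into
# roughly a 14-year event, and 0.6 m turns it into roughly a 2-year event.
print(round(new_return_period(100, 0.3), 1))
print(round(new_return_period(100, 0.6), 1))
```

Sites whose extremes are more variable (a larger scale parameter) see less amplification for the same rise, which is one reason the change in flood frequency differs so much between regions.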
Adaptation to sea level rise is costly for small island nations as a large portion of their population lives in areas that are at risk. Nations like Maldives, Kiribati and Tuvalu are already forced to consider controlled international migration of their population in response to rising seas, since the alternative of uncontrolled migration threatens to exacerbate the humanitarian crisis of climate refugees. In 2014, Kiribati had purchased 20 square kilometers of land (about 2.5% of Kiribati's current area) on the Fijian island of Vanua Levu to relocate its population there once their own islands are lost to the sea.While Fiji is also impacted by sea level rise, it is in a comparatively safer position, and its residents continue to rely on local adaptation like moving further inland and increasing sediment supply to combat erosion instead of relocating entirely. Fiji has also issued a green bond of $50 million to invest in green initiatives and use the proceeds to fund adaptation efforts, and it is restoring coral reefs and mangroves to protect itself flooding and erosion as a more cost-efficient alternative to building sea walls, with the nations of Palau and Tonga adopting similar efforts. At the same time, even when an island is not threatened with complete disappearance due to flooding, tourism and local economies may end up devastated. For instance, a sea level rise of 1.0 m (3 ft 3 in) would cause partial or complete inundation of 29% of coastal resorts in the Caribbean, while a further 49–60% of coastal resorts would be at risk from resulting coastal erosion. Adaptation Cutting greenhouse gas emissions can slow and stabilize the rate of sea level rise after 2050, greatly reducing its costs and damages, but cannot stop it outright. Thus, climate change adaptation to sea level rise is inevitable.: 3–127  The most straightforward approach is to first cease development in vulnerable areas and ultimately move the people and infrastructure away from them. Such retreat from sea level rise often results in the loss of livelihoods, and the displacement of newly impoverished people could burden their new homes and accelerate social tensions.It is possible to avoid or at least delay the retreat from sea level rise with enhanced protections like dams, levees or improved natural defenses, or through accommodation like building standards updated to reduce damage from floods, addition of storm water valves to address more frequent and severe flooding at high tide, or cultivating crops more tolerant of saltwater mixing into the soil, even at an increased cost. These options can be further divided into hard and soft adaptation. The former generally involves large-scale changes to human societies and ecological systems, often through the construction of capital-intensive infrastructure. Soft adaptation involves strengthening natural defenses and local community adaptation, usually with simple, modular and locally owned technology. The two types of adaptation might be complementary or mutually exclusive. Adaptation options often require significant investment, but the costs of doing nothing are far greater. For instance, effective adaptation measures are predicted to reduce future annual costs of flooding in 136 of the world's largest coastal cities from $1 trillion by 2050 if no adaptation was done, to a little over $60 billion annually, while costing $50 billion per year. 
However, it has been suggested that in the case of very high sea level rise, retreat away from the coast would have a lower impact on the GDP of India and Southeast Asia then attempting to protect every coastline. To be successful, adaptation needs to anticipate sea level rise well ahead of time. As of 2023, the global state of adaptation planning is mixed. A survey of 253 planners from 49 countries found that while 98% are aware of sea level rise projections, 26% have not yet formally integrated them into their policy documents. Only around a third of respondents from Asian and South American countries have done so, compared to 50% in Africa, and >75% in Europe, Australasia and North America. 56% of all surveyed planners have structured plans which account for 2050 and 2100 sea level rise, but 53% only plan using a single projection, rather than a range of two or three projections. Just 14% plan using four projections, including that of the "extreme" or "high-end" sea level rise. Another study found that while >75% of regional sea level rise assessments from the West and Northeastern United States included at least three estimates (usually RCP2.6, RCP4.5 and RCP8.5), and sometimes included extreme scenarios, 88% of projections from the American South had only a single estimate. Similarly, no assessment from the South went beyond 2100, while 14 assessments from the West went up to 2150, and three from the Northeast went to 2200. 56% of all localities were also found to underestimate the upper end of sea level rise relative to IPCC Sixth Assessment Report. See also Sea level drop Climate emergency declaration Climate engineering Coastal development hazards Effects of climate change on oceans Effects of climate change on small island countries Hydrosphere Islands First List of countries by average elevation References External links NASA Satellite Data 1993-present Fourth National Climate Assessment Sea Level Rise Key Message Incorporating Sea Level Change Scenarios at the Local Level Outlines eight steps a community can take to develop site-appropriate scenarios The Global Sea Level Observing System (GLOSS) USA Sea Level Rise Viewer (NOAA)
train
A train (from Old French trahiner, from Latin trahere, "to pull, to draw") is a series of connected vehicles that run along a railway track and transport people or freight. Trains are typically pulled or pushed by locomotives (often known simply as "engines"), though some are self-propelled, such as multiple units. Passengers and cargo are carried in railroad cars, also known as wagons. Trains are designed to a certain gauge, or distance between rails. Most trains operate on steel tracks with steel wheels, the low friction of which makes them more efficient than other forms of transport. Trains have their roots in wagonways, which used railway tracks and were powered by horses or pulled by cables. Following the invention of the steam locomotive in the United Kingdom in 1802, trains rapidly spread around the world, allowing freight and passengers to move over land faster and cheaper than ever possible before. Rapid transit and trams were first built in the late 1800s to transport large numbers of people in and around cities. Beginning in the 1920s, and accelerating following World War II, diesel and electric locomotives replaced steam as the means of motive power. Following the development of cars, trucks, and extensive networks of highways which offered greater mobility, as well as faster airplanes, trains declined in importance and market share, and many rail lines were abandoned. The spread of buses led to the closure of many rapid transit and tram systems during this time as well. Since the 1970s, governments, environmentalists, and train advocates have promoted increased use of trains due to their greater fuel efficiency and lower greenhouse gas emissions compared to other modes of land transport. High-speed rail, first built in the 1960s, has proven competitive with cars and planes over short to medium distances. Commuter rail has grown in importance since the 1970s as an alternative to congested highways and a means to promote development, as has light rail in the 21st century. Freight trains remain important for the transport of bulk commodities such as coal and grain, as well as being a means of reducing road traffic congestion by freight trucks. While conventional trains operate on relatively flat tracks with two rails, a number of specialized trains exist which are significantly different in their mode of operation. Monorails operate on a single rail, while funiculars and rack railways are uniquely designed to traverse steep slopes. Experimental trains such as high speed maglevs, which use magnetic levitation to float above a guideway, are under development in the 2020s and offer higher speeds than even the fastest conventional trains. Trains which use alternative fuels such as natural gas and hydrogen are another 21st-century development. History Early history Trains are an evolution of wheeled wagons running on stone wagonways, the earliest of which were built by Babylon circa 2,200 BCE. Starting in the 1500s, wagonways were introduced to haul material from mines; from the 1790s, stronger iron rails were introduced. Following early developments in the second half of the 1700s, in 1804 a steam locomotive built by British inventor Richard Trevithick powered the first ever steam train. Outside of coal mines, where fuel was readily available, steam locomotives remained untried until the opening of the Stockton and Darlington Railway in 1825. British engineer George Stephenson ran a steam locomotive named Locomotion No. 
1 on this 40-kilometer (25-mile) long line, hauling over 400 passengers at up to 13 kilometers per hour (8 mph). The success of this locomotive, and Stephenson's Rocket in 1829, convinced many of the value in steam locomotives, and within a decade the stock market bubble known as "Railway Mania" started across the United Kingdom. News of the success of steam locomotives quickly reached the United States, where the first steam railroad opened in 1829. American railroad pioneers soon started manufacturing their own locomotives, designed to handle the sharper curves and rougher track typical of the country's railroads.The other nations of Europe also took note of British railroad developments, and most countries on the continent constructed and opened their first railroads in the 1830s and 1840s, following the first run of a steam train in France in late 1829. In the 1850s, trains continued to expand across Europe, with many influenced by or purchases of American locomotive designs. Other European countries pursued their own distinct designs. Around the world, steam locomotives grew larger and more powerful throughout the rest of the century as technology advanced.Trains first entered service in South America, Africa, and Asia through construction by imperial powers, which starting in the 1840s built railroads to solidify control of their colonies and transport cargo for export. In Japan, which was never colonized, railroads first arrived in the early 1870s. By 1900, railroads were operating on every continent besides uninhabited Antarctica. New technologies Even as steam locomotive technology continued to improve, inventors in Germany started work on alternative methods for powering trains. Werner von Siemens built the first train powered by electricity in 1879, and went on to pioneer electric trams. Another German inventor, Rudolf Diesel, constructed the first diesel engine in the 1890s, though the potential of his invention to power trains was not realized until decades later. Between 1897 and 1903, tests of experimental electric locomotives on the Royal Prussian Military Railway in Germany demonstrated they were viable, setting speed records in excess of 160 kilometers per hour (100 mph).Early gas powered "doodlebug" self-propelled railcars entered service on railroads in the first decade of the 1900s. Experimentation with diesel and gas power continued, culminating in the German "Flying Hamburger" in 1933, and the influential American EMD FT in 1939. These successful diesel locomotives showed that diesel power was superior to steam, due to lower costs, ease of maintenance, and better reliability. Meanwhile, Italy developed an extensive network of electric trains during the first decades of the 20th century, driven by that country's lack of significant coal reserves. Dieselization and increased competition World War II brought great destruction to existing railroads across Europe, Asia, and Africa. Following the war's conclusion in 1945, nations which had suffered extensive damage to their railroad networks took the opportunity provided by Marshall Plan funds (or economic assistance from the USSR and Comecon, for nations behind the Iron Curtain) and advances in technology to convert their trains to diesel or electric power. France, Russia, Switzerland, and Japan were leaders in adopting widespread electrified railroads, while other nations focused primarily on dieselization. 
By 1980, the majority of the world's steam locomotives had been retired, though they continued to be used in parts of Africa and Asia, along with a few holdouts in Europe and South America. China was the last country to fully dieselize, due to its abundant coal reserves; steam locomotives were used to haul mainline trains as late as 2005 in Inner Mongolia.Trains began to face strong competition from automobiles and freight trucks in the 1930s, which greatly intensified following World War II. After the war, air transport also became a significant competitor for passenger trains. Large amounts of traffic shifted to these new forms of transportation, resulting in a widespread decline in train service, both freight and passenger. A new development in the 1960s was high-speed rail, which runs on dedicated rights of way and travels at speeds of 240 kilometers per hour (150 mph) or greater. The first high-speed rail service was the Japanese Shinkansen, which entered service in 1964. In the following decades, high speed rail networks were developed across much of Europe and Eastern Asia, providing fast and reliable service competitive with automobiles and airplanes. The first high-speed train in the Americas was Amtrak's Acela in the United States, which entered service in 2000. To the present day Towards the end of the 20th century, increased awareness of the benefits of trains for transport led to a revival in their use and importance. Freight trains are significantly more efficient than trucks, while also emitting far fewer greenhouse gas emissions per ton-mile; passenger trains are also far more energy efficient than other modes of transport. According to the International Energy Agency, "On average, rail requires 12 times less energy and emits 7–11 times less GHGs per passenger-km travelled than private vehicles and airplanes, making it the most efficient mode of motorised passenger transport. Aside from shipping, freight rail is the most energy-efficient and least carbon-intensive way to transport goods." As such, rail transport is considered an important part of achieving sustainable energy. Intermodal freight trains, carrying double-stack shipping containers, have since the 1970s generated significant business for railroads and gained market share from trucks. Increased use of commuter rail has also been promoted as a means of fighting traffic congestion on highways in urban areas. Types and terminology Trains can be sorted into types based on whether they haul passengers or freight (though mixed trains which haul both exist), by their weight (heavy rail for regular trains, light rail for lighter rapid transit systems), by their speed, and by what form of track they use. Conventional trains operate on two rails, but several other types of track systems are also in use around the world. Terminology The railway terminology that is used to describe a train varies between countries. The two primary systems of terminology are International Union of Railways terms in much of the world, and Association of American Railroads terms in North America.Trains are typically defined as one or more locomotives coupled together, with or without cars. A collection of passenger or freight carriages connected together (not necessarily with a locomotive) is (especially in British and Indian English) typically referred to as a rake. A collection of rail vehicles may also be called a consist. 
A set of vehicles that are permanently or semi-permanently coupled together (such as the Pioneer Zephyr) is called a trainset. The term rolling stock is used to describe any kind of train vehicle. Components Bogies Bogies, also known in North America as trucks, support the wheels and axles of trains. Trucks range from just one axle to as many as four or more. Two-axle trucks are in the widest use worldwide, as they are better able to handle curves and support heavy loads than single axle trucks. Couplers Train vehicles are linked to one another by various systems of coupling. In much of Europe, India, and South America, trains primarily use buffers and chain couplers. In the rest of the world, Janney couplers are the most popular, with a few local variations persisting (such as Wilson couplers in the former Soviet Union). On multiple units all over the world, Scharfenberg couplers are common. Brakes Because trains are heavy, powerful brakes are needed to slow or stop trains, and because steel wheels on steel rails have relatively low friction, brakes must be distributed among as many wheels as possible. Early trains could only be stopped by manually applied hand brakes, requiring workers to ride on top of the cars and apply the brakes when the train went downhill. Hand brakes are still used to park cars and locomotives, but the predominant braking system for trains globally is air brakes, invented in 1869 by George Westinghouse. Air brakes are applied at once to the entire train using air hoses. Warning devices For safety and communication, trains are equipped with bells, horns, and lights. Steam locomotives typically use steam whistles rather than horns. Other types of lights may be installed on locomotives and cars, such as classification lights, Mars Lights, and ditch lights. Cabs Locomotives are in most cases equipped with cabs, also known as driving compartments, where a train driver controls the train's operation. They may also be installed on unpowered train cars known as cab or control cars, to allow for a train to operate with the locomotive at the rear. Operations Scheduling and dispatching To prevent collisions or other accidents, trains are often scheduled, and almost always are under the control of train dispatchers. Historically, trains operated based on timetables; most trains (including nearly all passenger trains), continue to operate based on fixed schedules, though freight trains may instead run on an as-needed basis, or when enough freight cars are available to justify running a train. Maintenance Simple repairs may be done while a train is parked on the tracks, but more extensive repairs will be done at a motive power depot. Similar facilities exist for repairing damaged or defective train cars. Maintenance of way trains are used to build and repair railroad tracks and other equipment. Crew Train drivers, also known as engineers, are responsible for operating trains. Conductors are in charge of trains and their cargo, and help passengers on passenger trains. Brakeman, also known as trainmen, were historically responsible for manually applying brakes, though the term is used today to refer to crew members who perform tasks such as operating switches, coupling and uncoupling train cars, and setting handbrakes on equipment. Steam locomotives require a fireman who is responsible for fueling and regulating the locomotive's fire and boiler. 
On passenger trains, other crew members assist passengers, such as chefs to prepare food, and service attendants to provide food and drinks to passengers. Other passenger train specific duties include passenger car attendants, who assist passengers with boarding and alighting from trains, answer questions, and keep train cars clean, and sleeping car attendants, who perform similar duties in sleeping cars. Gauge Around the world, various track gauges are in use for trains. In most cases, trains can only operate on tracks that are of the same gauge; where different gauge trains meet, it is known as a break of gauge. Standard gauge, defined as 1,435 mm (4 ft 8.5 in) between the rails, is the most common gauge worldwide, though both broad-gauge and narrow-gauge trains are also in use. Trains also need to fit within the loading gauge profile to avoid fouling bridges and lineside infrastructure with this being a potential limiting factor on loads such as intermodal container types that may be carried. Safety Train accidents sometimes occur, including derailments (when a train leaves the tracks) and train wrecks (collisions between trains). Accidents were more common in the early days of trains, when railway signal systems, centralized traffic control, and failsafe systems to prevent collisions were primitive or did not yet exist. To prevent accidents, systems such as automatic train stop are used; these are failsafe systems that apply the brakes on a train if it passes a red signal and enters an occupied block, or if any of the train's equipment malfunctions. More advanced safety systems, such as positive train control, can also automatically regulate train speed, preventing derailments from entering curves or switches too fast.Modern trains have a very good safety record overall, comparable with air travel. In the United States between 2000 and 2009, train travel averaged 0.43 deaths per billion passenger miles traveled. While this was higher than that of air travel at 0.07 deaths per billion passenger miles, it was also far below the 7.28 deaths per billion passenger miles of car travel. In the 21st century, several derailments of oil trains caused fatalities, most notably the Canadian Lac-Mégantic rail disaster in 2013 which killed 47 people and leveled much of the town of Lac-Mégantic.The vast majority of train-related fatalities, over 90 percent, are due to trespassing on railroad tracks, or collisions with road vehicles at level crossings. Organizations such as Operation Lifesaver have been formed to improve safety awareness at railroad crossings, and governments have also launched ad campaigns. Trains cannot stop quickly when at speed; even an emergency brake application may still require more than a mile of stopping distance. As such, emphasis is on educating motorists to yield to trains at crossings and avoid trespassing. Motive power Before steam The first trains were rope-hauled, gravity powered or pulled by horses. Steam Steam locomotives work by burning coal, wood or oil fuel in a boiler to heat water into steam, which powers the locomotive's pistons which are in turn connected to the wheels. In the mid 20th century, most steam locomotives were replaced by diesel or electric locomotives, which were cheaper, cleaner, and more reliable. Steam locomotives are still used in heritage railways operated in many countries for the leisure and enthusiast market. Diesel Diesel locomotives are powered with a diesel engine, which generates electricity to drive traction motors. 
This is known as a diesel–electric transmission, and is used on most larger diesels. On smaller diesels, hydraulic transmission is common. Diesel power replaced steam for a variety of reasons: diesel locomotives were less complex, far more reliable, cheaper, cleaner, easier to maintain, and more fuel efficient. Electric Electric trains receive their current via overhead lines or through a third rail electric system, which is then used to power traction motors that drive the wheels. Electric traction offers a lower cost per mile of train operation but at a higher initial cost, which can only be justified on high traffic lines. Even though the cost per mile of construction is much higher, electric traction is cheaper to operate thanks to lower maintenance and purchase costs for locomotives and equipment. Compared to diesel locomotives, electric locomotives produce no direct emissions and accelerate much faster, making them better suited to passenger service, especially underground. Other types Various other types of train propulsion have been tried, some more successful than others. In the mid 1900s, gas turbine locomotives were developed and successfully used, though most were retired due to high fuel costs and poor reliability.In the 21st century, alternative fuels for locomotives are under development, due to increasing costs for diesel and a desire to reduce greenhouse gas emissions from trains. Examples include hydrail (trains powered by hydrogen fuel cells) and the use of compressed or liquefied natural gas. Train cars Train cars, also known as wagons, are unpowered rail vehicles which are typically pulled by locomotives. Many different types exist, specialized to handle various types of cargo. Some common types include boxcars (also known as covered goods wagons) that carry a wide variety of cargo, flatcars (also known as flat wagons) which have flat tops to hold cargo, hopper cars which carry bulk commodities, and tank cars which carry liquids and gases. Examples of more specialized types of train cars include bottle cars which hold molten steel, Schnabel cars which handle very heavy loads, and refrigerator cars which carry perishable goods.Early train cars were small and light, much like early locomotives, but over time they have become larger as locomotives have become more powerful. Passenger trains A passenger train is used to transport people along a railroad line. These trains may consist of unpowered passenger railroad cars (also known as coaches or carriages) hauled by one or more locomotives, or may be self-propelled; self propelled passenger trains are known as multiple units or railcars. Passenger trains travel between stations or depots, where passengers may board and disembark. In most cases, passenger trains operate on a fixed schedule and have priority over freight trains.Passenger trains can be divided into short and long distance services. Long distance trains Long distance passenger trains travel over hundreds or even thousands of miles between cities. The longest passenger train service in the world is Russia's Trans-Siberian Railway between Moscow and Vladivostok, a distance of 9,289 kilometers (5,772 mi). In general, long distance trains may take days to complete their journeys, and stop at dozens of stations along their routes. For many rural communities, they are the only form of public transportation available. Short distance trains Short distance or regional passenger trains have travel times measured in hours or even minutes, as opposed to days. 
They run more frequently than long distance trains, and are often used by commuters. Short distance passenger trains specifically designed for commuters are known as commuter rail. High speed trains High speed trains are designed to be much faster than conventional trains, and typically run on their own separate tracks than other, slower trains. The first high speed train was the Japanese Shinkansen, which opened in 1964. In the 21st century, services such as the French TGV and German Intercity Express are competitive with airplanes in travel time over short to medium distances.A subset of high speed trains are higher speed trains, which bridge the gap between conventional and high speed trains, and travel at speeds between the two. Examples include the Northeast Regional in the United States, the Gatimaan Express in India, and the KTM ETS in Malaysia. Rapid transit trains A number of types of trains are used to provide rapid transit to urban areas. These are distinct from traditional passenger trains in that they operate more frequently, typically do not share tracks with freight trains, and cover relatively short distances. Many different kinds of systems are in use globally.Rapid transit trains that operate in tunnels below ground are known as subways, undergrounds, or metros. Elevated railways operate on viaducts or bridges above the ground, often on top of city streets. "Metro" may also refer to rapid transit that operates at ground level. In many systems, two or even all three of these types may exist on different portions of a network. Trams Trams, also known in North America as streetcars, typically operate on or parallel to streets in cities, with frequent stops and a high frequency of service. Light rail Light rail is a catchall term for a variety of systems, which may include characteristics of trams, passenger trains, and rapid transit systems. Specialized trains There are a number of specialized trains which differ from the traditional definition of a train as a set of vehicles which travels on two rails. Monorail Monorails were developed to meet medium-demand traffic in urban transit, and consist of a train running on a single rail, typically elevated. Monorails represent a small proportion of the train systems in use worldwide. Almost all monorail trains use linear induction motors Maglev Maglev technology uses magnets to levitate the train above the track, reducing friction and allowing higher speeds. The first commercial maglev train was an airport shuttle introduced in 1984 at Birmingham Airport in England.The Shanghai maglev train, opened in 2002, is the fastest commercial train service of any kind, operating at speeds of up to 431 km/h (268 mph). Japan's L0 Series maglev holds the record for the world's fastest train ever, with a top speed of 603.0 kilometers per hour (374.7 mph). Maglev has not yet been used for inter-city mass transit routes, with only a few examples in use worldwide as of 2019. Mine trains Mine trains are operated in large mines and carry both workers and goods. They are usually powered by electricity, to prevent emissions which would pose a health risk to workers underground. Militarized trains While they have long been important in transporting troops and military equipment, trains have occasionally been used for direct combat. Armored trains have been used in a number of conflicts, as have railroad based artillery systems. Railcar-launched ICBM systems have also been used by nuclear weapon states. 
Rack railway For climbing steep slopes, specialized rack railroads are used. In order to avoid slipping, a rack and pinion system is used, with a toothed rail placed between the two regular rails, which meshes with a drive gear under the locomotive. Funicular Funiculars are also used to climb steep slopes, but instead of a rack use a rope, which is attached to two cars and a pulley. The two funicular cars travel up and down the slope on parallel sets of rails when the pulley is rotated. This design makes funiculars an efficient means of moving people and cargo up and down slopes. The earliest funicular railroad, the Reisszug, opened around 1500. Rubber-Tired train Rubber tire trains, or rubber-tired metro systems, employ rubber tires for traction and guidance, offering advantages like better acceleration and reduced noise. However, they come with disadvantages, including higher costs for installation and maintenance, faster tire wear, and complex tire inflation mechanisms that require regular upkeep. Nonetheless, these systems are utilized in many urban rapid transit networks worldwide, enhancing passenger comfort and urban transportation efficiency. Freight trains Freight trains are dedicated to the transport of cargo (also known as goods), rather than people, and are made up of freight cars or wagons. Longer freight trains typically operate between classification yards, while local trains provide freight service between yards and individual loading and unloading points along railroad lines. Major origin or destination points for freight may instead be served by unit trains, which exclusively carry one type of cargo and move directly from the origin to the destination and back without any intermediate stops.Under the right circumstances, transporting freight by train is less expensive than other modes of transport, and also more energy efficient than transporting freight by road. In the United States, railroads on average moved a ton of freight 702 kilometers (436 mi) per gallon of fuel, as of 2008, an efficiency four times greater than that of trucks. The Environmental and Energy Study Institute estimates that train transportation of freight is between 1.9 and 5.5 times more efficient than by truck, and also generates significantly less pollution. Rail freight is most economic when goods are being carried in bulk and over large distances, but it is less suited to short distances and small loads. With the advent of containerization, freight rail has become part of an intermodal freight network linked with trucking and container ships.The main disadvantage of rail freight is its lack of flexibility and for this reason, rail has lost much of the freight business to road competition. Many governments are trying to encourage more freight back on to trains because of the community benefits that it would bring. Cultural impact From the dawn of railroading, trains have had a significant cultural impact worldwide. Fast train travel made possible in days or hours journeys which previously took months. Transport of both freight and passengers became far cheaper, allowing for networked economies over large areas. Towns and cities along railroad lines grew in importance, while those bypassed declined or even became ghost towns. Major cities such as Chicago became prominent because they were places where multiple train lines met. 
In the United States, the completion of the first transcontinental railroad played a major role in the settling of the western part of the nation by non-indigenous migrants and its incorporation into the rest of the country. The Russian Trans-Siberian Railway had a similar impact by connecting the vast country from east to west, and making travel across the frozen Siberia possible.Trains have long had a major influence on music, art, and literature. Many films heavily involve or are set on trains. Toy train sets are commonly used by children, traditionally boys. Railfans are found around the world, along with hobbyists who create model train layouts. Train enthusiasts generally have a positive relationship with the railroad industry, though sometimes cause issues by trespassing. See also List of railway companies Lists of named passenger trains Lists of rail accidents Overview of train systems by country References Bibliography Glancey, Jonathan (2005). The Train. Carlton Publishing Group. ISBN 978-1-84442-345-3. Herring, Peter (2000). Ultimate Train. Dorling Kindersley. ISBN 0-7894-4610-3. OCLC 42810706. OL 8155464M. External links The dictionary definition of train at Wiktionary Media related to Trains at Wikimedia Commons tips for rail travel travel guide from Wikivoyage
electricity sector in mexico
As required by the Constitution, the electricity sector is federally owned, with the Federal Electricity Commission (Comisión Federal de Electricidad or CFE) essentially controlling the whole sector; private participation and foreign companies are allowed to operate in the country only through specific service contracts. Attempts to reform the sector have traditionally faced strong political and social resistance in Mexico, where subsidies for residential consumers absorb substantial fiscal resources. The electricity sector in Mexico relies heavily on thermal sources (75% of total installed capacity), followed by hydropower generation (19%). Although exploitation of solar, wind, and biomass resources has a large potential, geothermal energy is the only renewable source (excluding hydropower) with a significant contribution to the energy mix (2% of total generation capacity). Expansion plans for the period 2006-2015 estimate the addition of some 14.8 GW of new generation capacity by the public sector, with a predominance of combined cycles. Electricity Supply and Demand Installed capacity Installed electricity capacity in 2008 was 58 GW. Of the installed capacity, 75.3% is thermal, 19% hydro, 2.4% nuclear (the single nuclear power plant Laguna Verde) and 3.3% renewable other than hydro. The general trend in thermal generation is a decline in petroleum-based fuels and a growth in natural gas and coal. Since Mexico is a net importer of natural gas, higher levels of natural gas consumption (i.e. for power generation) will likely depend upon higher imports from either the United States or via liquefied natural gas (LNG). Gross generation was 234 TWh that same year (not including cogeneration and autogeneration), with 79.2% coming from conventional thermal sources, 16.6% from hydroelectricity, 4.2% from nuclear power and 3% from geothermal sources. The expansion program contemplated by SENER for the period 2008-2017 includes the addition of 14,794 MW by the public service: 14,033 MW by CFE and 761 MW by LFC (Luz y Fuerza del Centro). Self-supply and cogeneration will add another 2,490 MW in new capacity. Total public installed capacity in 2017 is estimated at 61,074 MW, 40% and 21% of which would be combined-cycle and hydroelectric plants respectively. However, the deactivation of LFC on October 10, 2009, is likely to change this figure. In 2009, 4,000 MW were already committed (i.e. with secured financing). SENER publishes a summary of the projects under construction as of September 2009, as well as statistics on effective energy generation compiled by the Secretaría de Energía with data from Comisión Federal de Electricidad and Luz y Fuerza del Centro; the generation statistics distinguish thermoelectric plants (residual fuel oil, natural gas and diesel), the installed capacity of Independent Power Producers, and dual power plants that can operate with either coal or fuel oil, with some figures preliminary. Imports and exports The external electricity trade is carried out through nine interconnections between the United States and Mexico and one interconnection with Belize. These connections have primarily been used to import and export electricity during emergencies. In 2007, Mexico exported 1.3 TWh of electricity to the United States, while importing 0.6 TWh. Companies have built power plants near the United States - Mexico border with the aim of exporting generation to the United States. There are also plans to connect Mexico with Guatemala and Belize as part of the Central American Interconnection System.
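The installed capacity and gross generation figures quoted above can be related through an average capacity factor, the ratio of actual annual output to what the fleet would produce running at full capacity all year. The following is a minimal sketch of that arithmetic, using the 2008 figures from the text as inputs.

```python
# Relating installed capacity to annual generation via the average capacity factor.
# Inputs are the 2008 figures quoted above; the calculation itself is generic.
installed_gw = 58.0        # total installed capacity, GW
generation_twh = 234.0     # gross generation, TWh
hours_per_year = 8760

max_possible_twh = installed_gw * hours_per_year / 1000.0   # GWh -> TWh
capacity_factor = generation_twh / max_possible_twh

print(f"Theoretical maximum: {max_possible_twh:.0f} TWh")   # ~508 TWh
print(f"Average capacity factor: {capacity_factor:.0%}")    # ~46%
```

The same arithmetic can be applied to the projected 2017 capacity and demand figures as a consistency check.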
The 400 kV Mexico-Guatemala interconnection line was commissioned in April 2009 and has an estimated transmission capacity of 200 MW from Mexico to Guatemala and 70 MW in the opposite direction. CFE is not a part of the North American Electric Reliability Corporation, though its transmission system in northern Baja California is part of the Western Electricity Coordinating Council, and it also has a few other interconnections across the border with the United States. Demand Consumption of electricity in 2008 was 184 TWh, which corresponds to 1,655 kWh per capita. Consumption share by sector was as follows: Residential: 26% Industrial: 59% (38% for mid-sized industry and 21% for large industry) Commercial: 7% Agriculture: 4% Services: 4% Demand and supply projections Electricity demand has grown steadily in the last decade and the Energy Secretariat (SENER) forecasts that consumption will grow by 3.3% a year for the next ten years, reaching 281.5 TWh in 2017. Demand growth forecasts have been revised down, from an estimated 4.8% a year in the projections from 2006, due to the expected effects of the economic crisis on energy demand. Reserve margin In 2008, the installed reserve margin (RM) in the National Interconnected System (SIN) was 45.8%, while the operating reserve margin (ORM) was 21.3% (an illustrative sketch of how these margins are computed is given below). It is estimated that both reserve margins will remain high during the 2009-2013 period. However, from 2014, the RM is expected to decrease to 29.2%, with the ORM reaching 8.3%. Those values would be about 25% and 6% respectively in 2017. The commissioning of the Agua Prieta II, Norte II, Norte III, Noreste and Valle de México II and III plants is essential to avoid power deficits in the northern and central parts of the country. However, irrespective of the reserve margins in the SIN, there are restrictions in transmission capacity that generate bottlenecks or the need to import power. Access to electricity Total electricity coverage in Mexico is 98.7% (2015), with 99.7% coverage in urban areas with more than 100,000 inhabitants, 99.3% in locales with 15,000-99,999 inhabitants, 98.8% in areas with 2,500-14,999 inhabitants and 96.1% in locales with fewer than 2,500 inhabitants. Service Quality Interruption frequency and duration In 2008, the average number of interruptions per subscriber was 2.3 for CFE and 4.2 for LFC. Duration of interruptions per subscriber was 2.2 hours for CFE and 3 hours for LFC. Total losses Total electricity losses in 2008 were 11% for CFE and as high as 32% for LFC. Responsibilities in the Electricity Sector Policy and Regulation The Energy Secretariat (SENER) is in charge of defining the energy policy of the country within the framework defined by the Constitution. The Energy Regulatory Commission (CRE) has been, since 1995, the main regulatory agency of the electricity and gas sector. However, CRE's powers are limited since CFE (Federal Electricity Commission) and LFC (Central Light and Power) are outside its scope. Generation The generation sector was opened to private participation in 1992. However, the Comisión Federal de Electricidad (CFE), the state-owned utility, is still the dominant player in the generation sector, with two-thirds of installed capacity. As of the end of 2008, private generators held about 23 GW of generation capacity, mostly consisting of combined-cycle, gas-fired turbines (CCGFT). Private generators have to sell all their output to CFE since they are not allowed to sell directly to users; CFE thus holds a de facto monopoly on commercialization.
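The reserve-margin figures cited above follow from a simple relationship between capacity and peak demand: the margin is the spare capacity expressed as a fraction of peak demand. The sketch below is a minimal illustration of that arithmetic; the capacity and peak-demand numbers are assumptions chosen only to produce percentages of the same order as those reported, not official SENER data.

```python
def reserve_margin(capacity_mw: float, peak_demand_mw: float) -> float:
    """Spare capacity expressed as a fraction of peak demand."""
    return (capacity_mw - peak_demand_mw) / peak_demand_mw

# Hypothetical figures for illustration only (not SENER data).
installed_capacity_mw = 51_000   # total installed capacity in the SIN
available_capacity_mw = 42_500   # capacity actually available at the time of peak
peak_demand_mw = 35_000          # annual peak demand

print(f"Installed reserve margin: {reserve_margin(installed_capacity_mw, peak_demand_mw):.1%}")
print(f"Operating reserve margin: {reserve_margin(available_capacity_mw, peak_demand_mw):.1%}")
```

The operating margin is lower than the installed margin because it counts only capacity expected to be available at the time of peak, after maintenance, forced outages and derating are taken into account.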
Between 1997 and 2009, CRE awarded 22 permits for Independent Power Producers (IPP), for a total of 13 GW. Total private generation permits awarded by CRE as of September 2009 are summarized in a table not reproduced here. Transmission and Distribution CFE holds a monopoly on electricity transmission and distribution in the country. Through one of its departments, the Centro Nacional de Control de la Energía (CENACE), CFE operates the national transmission grid, composed of 27,000 miles (43,000 km) of high voltage lines, 28,000 miles (45,000 km) of medium voltage lines, and 370,000 miles (600,000 km) of low voltage distribution lines. Renewable Energy Resources The two main government agencies in charge of developing renewable energy resources are SEMARNAT and SENER. The Environment and Natural Resources Secretariat (SEMARNAT) is responsible for environmental policy and the preservation of renewable and non-renewable resources, while SENER defines the national energy policy. CONAE, the National Commission for Energy Savings, is responsible for promoting energy savings and energy efficiency. Finally, SEDESOL, the National Secretariat for Social Development, includes the promotion and use of renewable energy in some of its projects. The Renewable Energy Development and Financing for Energy Transition Law (LAERFTE), which entered into force on November 28, 2008, mandated SENER to produce a National Strategy for Energy Transition and Sustainable Energy Use and a Special Program for Renewable Energy. The Special Program contains tentative targets for renewable generation for different technologies. Those targets will be revised as SENER and CRE advance in the completion of the activities included in the law. Hydro About 19% of the electricity produced in Mexico comes from hydroelectric resources. The largest hydro plant in Mexico is the 2,400 MW Manuel Moreno Torres (Chicoasén Dam) in Chicoasén, Chiapas, on the Grijalva River. This is the world's fourth most productive hydroelectric plant. The 750 MW El Cajón hydroelectric plant in Nayarit, which started operations in November 2006, is the latest completed large project. The country has an important mini-hydro potential, estimated at 3,250 MW (in the states of Chiapas, Veracruz, Puebla and Tabasco). In 2009, there were 22 private mini-hydro installations (12 in operation, 2 inactive and 8 under construction), adding up to a total of 83.5 MW in operation, with 105 MW under development. The number of publicly owned mini-hydro plants in 2009 was 42: 31 of them (270 MW) belong to CFE, while the remaining 11 (23.4 MW) belong to LFC. Solar Mexico is the country with the world's third largest solar potential. The country's gross solar potential is estimated at 5 kWh/m2 daily, which corresponds to 50 times the national electricity generation. Currently, there are over 1 million square meters of solar thermal panels installed in Mexico, while in 2005 there were 115,000 square meters of solar photovoltaic (PV) panels. It is expected that in 2012 there will be 1.8 million square meters of installed solar thermal panels. The project named SEGH-CFE 1, located in Puerto Libertad, Sonora, in northwestern Mexico, will have a capacity of 46.8 MW from an array of 187,200 solar panels when completed in 2013. Wind Wind power production is still very limited in Mexico, although the country's potential is estimated to be very high.
Three main areas for wind generation have been identified: the Isthmus of Tehuantepec, in the state of Oaxaca; La Rumorosa, in the state of Baja California; and the area of the Gulf of California, which includes Baja California, Baja California Sur, Sonora and Sinaloa. There is additional potential in the Yucatán Peninsula and the states of Zacatecas, Hidalgo and Veracruz. According to SENER, CFE was expected to have 593 MW of installed wind generation capacity in Mexico by 2012. Currently, there are several projects in operation and under development. The committed wind farms and some of the potential ones are listed in tables (source: SENER 2009, Programa especial para el aprovechamiento de energías renovables) not reproduced here. In Mexico, some groups are promoting wind power through outreach activities to increase public awareness of renewable energies. Geothermal Mexico has a large geothermal potential due to its intense tectonic and volcanic activity. This potential has been estimated at 1,395 MW by CFE, although the actual figure is likely to be much higher. Mexico ranks third in geothermal power production worldwide. In 2009, geothermal installed capacity was 964.5 MW and total production was 7.1 TWh. There are four geothermal fields under exploitation: Cerro Prieto, Los Azufres, Los Humeros and Las Tres Vírgenes (source: SENER 2009, Programa especial para el aprovechamiento de energías renovables). Biomass Mexico also has a large potential to produce energy from biomass. It is estimated that, taking into account agricultural and forest waste with energy potential and solid urban waste from the ten main cities, the country has a potential capacity of 803 MW and could generate 4,507 GWh per year. In the sugarcane industry, the estimated power generation potential from bagasse is over 3,000 GWh per year. History of the electricity sector 20th Century The electricity sector in Mexico underwent its first major reorganization during the 1930s, under the mandate of the Institutional Revolutionary Party (PRI). The National Electricity Code was created and the Federal Electricity Commission (CFE), a newly created state-owned and state-financed enterprise, came to dominate all investment in new capacity. In 1960, a constitutional amendment nationalized the electricity industry and formally gave the government exclusive "responsibility" for generating, transmitting, transforming, and distributing electricity. During this decade, the government also created Compañía de Luz y Fuerza del Centro (LFC) to supply electricity to Mexico City and the neighboring states. During the 1960s and the 1970s, Mexico discouraged private investment and prevented market forces from entering the power system. In addition, the surge in oil prices of the 1970s provided a windfall to oil-rich Mexico, which allowed the country to maintain substantial subsidies for electricity generation. Only in the late 1980s and early 1990s did the Mexican government implement market reforms in several economic sectors, including electricity. In 1992, president Carlos Salinas reformed the electricity law, establishing that private electricity production was not a public service. This modification, which allowed for private participation in generation, was long debated as potentially unconstitutional; in fact, in 2002 the Mexican Supreme Court ruled that it may have been unconstitutional.
The Energy Regulatory Commission (CRE) was created in 1993 as an autonomous agency in charge of regulating the natural gas and electricity industries. However, its functions were only related to private power producers (e.g. award of permits, arbitration, tariff studies) and did not cover CFE and LFC. In this period the CRE's functions were mainly focused on the gas sector rather than on electricity. Reform attempts 1990s and 2000s Attempts in the late 1990s by president Ernesto Zedillo, and after 2000 by the National Action Party (PAN) government of president Vicente Fox, to carry out a comprehensive reform of the electricity sector in Mexico faced strong political resistance. In 1999, President Zedillo sent an ambitious bill to Congress requesting a change of the Constitution and allowing for the unbundling of the sector, including the creation of distribution companies under 3-year concessions. Existing power plants would also be sold, except for nuclear and hydro power plants. In 2001, President Fox issued a reform decree that would allow independent power producers to sell directly to industrial customers and would also allow the sale of private power to CFE under long-term contracts without competitive bidding. Among other issues, the decree also specified that electricity is not a public service of general interest but a commercial service. Both reform attempts failed, opposed on grounds that the electricity and, more broadly, the energy sector is strategic for national sovereignty. As required by the Constitution, the electricity sector remained federally owned, with the CFE essentially controlling the whole sector. Among the different proposals for the reform of the electricity sector, the main ones are the creation of CFE's Fundamental Law, the modification of the firm's operations, and the extension of the Energy Regulatory Commission's (CRE) competencies. Also important is the promotion of private independent power production and the discussion of the role played by Pidiregas (see Financing below) in the financing of large projects. Renewable Energy and Energy Efficiency laws 2008 During the term of president Felipe Calderón, two decrees published in the Official Journal of the Federation on 28 November 2008 created two laws, one addressing renewable energy and the other energy efficiency. The Renewable Energy Development and Energy Transition Financing Law (LAERFTE) mandated the Secretary of Energy (SENER) to produce a Special Program for Development of Renewable Energy (PEAER), and a National Strategy for Energy Transition and Sustainable Energy Use (ENTEASE), to be updated yearly. The main objective of the law is to regulate the use of renewable energy resources and clean technology, as well as to establish financing instruments to allow Mexico to scale up electricity generation based on renewable resources. SENER and the Energy Regulatory Commission (CRE) are responsible for defining those mechanisms and establishing legal instruments.
The following functions are the responsibility of SENER, among others: (a) defining a national program for ensuring sustainable energy development in both the short and the longer term, (b) creating and coordinating the necessary instruments to enforce the law, (c) preparing a national renewable energy inventory, (d) establishing a methodology to determine the extent to which renewable energies may contribute to total electricity generation, (e) defining transmission expansion plans to connect power generation from renewable energy to the national grid, and (f) promoting the development of renewable energy projects to increase access in rural areas. The CRE is responsible for developing rules and norms regarding the implementation of LAERFTE, including provisions for promotion, production, purchase and exchange of electricity from renewable sources. The CRE, in coordination with the Secretary of Finance (SHCP) and SENER, will determine the price that suppliers will pay to the renewable energy generators. Payments will be based on technology and geographic location. In addition, CRE will set rules for contracting between energy generators and suppliers, obliging the latter to establish long-term contracts for electricity from renewable sources. The Sustainable Energy Use Law (LASE) aims to provide incentives for the sustainable use of energy in all processes and activities related to its exploitation, production, transformation, distribution and consumption, including energy efficiency measures. More specifically, the law proposes: The creation of a National Program for Sustainable Energy Use (PRONASE), which targets energy efficiency promotion in the public sector, as well as research and diffusion of sustainable energy use. The establishment of the National Commission for Efficient Energy Use (CONUEE) as a decentralized body of SENER that (i) will advise the national public administration and (ii) promote the implementation of best practices related to energy efficiency. This entity replaced the National Commission for Energy Saving (CONAE), which had been the leading government energy efficiency body. The creation of an Advisory Committee for Sustainable Energy Use (CCASE) as part of the CONUEE to evaluate the compliance of objectives, strategies, actions and goals of the program, consisting of the Energy Minister (SENER) and six academic researchers with extensive experience in the field. The creation of the National Subsystem of Information for Energy Use to register, organize, update and disseminate information about (i) energy consumption, its end-uses in distinct industries and geographical regions of the country, (ii) factors that impel these uses, and (iii) indicators of energy efficiency. In this context, the Government carries out the following specific activities: (i) a program aimed at replacing incandescent bulbs (IBs) with compact fluorescent lamps (CFLs) in the residential sector (an illustrative savings estimate is sketched below), (ii) an appliance replacement program, (iii) the modernization of long-distance and suburban public transport, (iv) a program for energy efficiency in municipalities, (v) industrial and commercial energy efficiency programs, (vi) supply-side energy efficiency in the electricity sector, and (vii) energy efficiency in the national oil company, PEMEX.
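To give a sense of the arithmetic behind the incandescent-to-CFL replacement programme listed above, the following sketch estimates household and programme-level electricity savings. All wattages, usage hours, bulb counts and programme sizes are illustrative assumptions, not figures from PRONASE or SENER.

```python
# Rough estimate of savings from swapping incandescent bulbs (IBs) for CFLs.
# All inputs are illustrative assumptions, not official programme figures.
incandescent_w = 60          # typical incandescent bulb wattage
cfl_w = 14                   # CFL giving roughly equivalent light output
bulbs_per_household = 4      # bulbs replaced per household
hours_per_day = 4            # average daily use per bulb
households = 1_000_000       # hypothetical programme size

watts_saved = (incandescent_w - cfl_w) * bulbs_per_household
kwh_per_household_year = watts_saved * hours_per_day * 365 / 1000
programme_gwh_year = kwh_per_household_year * households / 1e6

print(f"Savings per household: {kwh_per_household_year:.0f} kWh/year")
print(f"Programme total: {programme_gwh_year:.0f} GWh/year")
```

Under these assumptions each household saves roughly 270 kWh a year, and a million participating households would avoid on the order of 0.3 TWh of consumption annually; the real programme's impact depends on the actual number of bulbs replaced and their hours of use.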
Takeover of Luz y Fuerza del Centro 2009 On 12 October 2009, the police seized the offices of the state-owned Luz y Fuerza del Centro, dissolving the company, laying off the workers, and putting its operations, which supply power to 25 million Mexicans, under the control of the CFE. According to the government, spending at the company was increasingly outpacing sales. Reforms from 2013 onwards The energy sector in Mexico was reformed by an initiative that president Enrique Peña Nieto presented to the Congress of the Union on 12 August 2013. The reform was approved by the Senate on 11 December of that year, and by the Chamber of Deputies one day later. On 18 December the reform was declared constitutional, and it was signed into effect on 20 December by its publication in the Official Journal of the Federation. The initiative proposed that Article 27 of the Constitution of Mexico return to the wording it had in 1938, when president Lázaro Cárdenas made a first reform, which affirmed that the natural resources belong exclusively to the nation but allowed the participation of private enterprises in the extraction and handling of these resources on behalf of the state. In 1960 a protectionist reform had made it impossible for any private company to participate in the energy sector, so the 2013 decree restored Article 27 to its previous state and included a similar provision for developing the electrical sector: a market for electricity generation would be established in which private entities could participate, but the control, transmission and distribution would remain an exclusive task of the state as a public service. By the end of 2014, several decrees transformed the Mexican national electric sector. On 11 August 2014 the following laws were published: Electric Industry Law (LIE). Its objective is to regulate the generation, transmission, distribution, and commercialization of electricity, the planning and control of the national electric system (SEN), and the operation of the wholesale electricity market (MEM). The law gives further powers to the Secretary of Energy (SENER) and the Energy Regulatory Commission (CRE) to execute the energy policies. It also entails the transfer of certain obligations from the Federal Electricity Commission (CFE) to the National Center for Energy Control (CENACE), which becomes independent from CFE, in order to manage the electric system (SEN) and the market (MEM). Federal Electricity Commission Law (LCFE). The firm is established as a "productive company" of the state, in order to produce additional value and return on investment through industrial, commercial, or entrepreneurial activities. This contrasts with its previous ordinance, under which it only provided electricity as a public service with a fixed budget. This entails structural changes in the company and the creation of new subsidiaries, resulting in more transparency, better bookkeeping, and increased operational efficiency. Regulating Institutions in Energy Matters Law. Establishes the collaboration between the most important government bodies, such as SENER and CENACE, in order to implement the energy policies and recommend changes to them. Other associated laws, such as the Geothermal Energy Law, the Hydrocarbons Law and the Pemex Law. During this time the SENER published the general rules to obtain and assign clean energy certificates (CEL), which are equivalent to 1 MWh of clean energy, no matter the specific technology.
The SENER also mandated that qualified users (big consumers) acquire 5% of their own consumption in CELs for 2018 and 2019. Furthermore, the SENER published simplified rules for the interconnection of private generators. On 30 June 2015 the government presented the Program for Development of the National Electric System (PRODESEN), which establishes a master plan for the electrical system until 2030. On 8 September 2015 the SENER published the first Rules for the Electricity Market, establishing the new rights and obligations for the generators, resellers, and qualified users of the market, to be overseen by the CRE and the CENACE. The wholesale electricity market officially commenced operations on 1 January 2016. These reforms meant that in November 2015 the first public offering for private generation and CELs was made, with the winning bidders announced on 30 March 2016. After a first round of evaluation, 227 offers were made by 69 private companies, which translated into 18 winning projects from 11 companies, covering 84.6% of the requested CELs. The winning projects are scheduled to begin operation on 28 March 2018. The sole buyer of the energy is the CFE. On 24 December 2015 the Energy Transition Law (LTE) was published, strengthening the integration of renewables into the generation mix. It also establishes ambitious plans for reaching 35% renewable energy by 2024, up from 28% in 2015 (which includes 18% hydroelectric energy). Shortly after the winners of the first bid were announced, a second public offering for energy was launched; the resulting decision was made in October 2016, with 28% awarded to renewable energy, mostly photovoltaic and wind. The Supreme Court of Justice of the Nation (SCJN) ruled that a May 2020 order by the Secretariat of Energy (SENER) limiting connections to the CFE distribution grid by private renewable energy producers was unconstitutional. On 1 February 2021, President Andrés Manuel López Obrador (AMLO) sent an initiative to reform the Electricity Industry Law to the Congress of the Union. The proposal, which must be approved within 30 days, would reverse the energy reform approved under former president Enrique Peña Nieto. There are four priorities: 1) hydroelectric energy, 2) other energy produced by CFE (nuclear, geothermal, thermoelectric, and combined cycle gas turbines), 3) wind and solar energy produced by individuals, and 4) other. AMLO insists that previous reforms were made with the intention of privatizing the energy sector and will require either massive subsidies or huge price increases for consumers. Tariffs, Cost Recovery and Subsidies Tariffs During the last decade, average electricity tariffs in Mexico have been held below cost with the aim of maintaining macroeconomic and social stability. For all tariffs, an interagency group composed of CFE (Federal Electricity Commission, or Comisión Federal de Electricidad), LFC (Central Light and Power, or Luz y Fuerza del Centro), SHCP (Ministry of Finance and Public Credit, or Secretaría de Hacienda y Crédito Público), SENER (Ministry of Energy, or Secretaría de Energía), CRE (Energy Regulatory Commission, or Comisión Reguladora de la Energía), and CNA (National Water Commission, or Comisión Nacional del Agua) meets regularly and once a year prepares a tariff proposal for the subsequent year.
Tariffs are approved by SHCP and not by the energy sector regulator. Except for the tariff set for the agricultural sector, average electricity prices have followed an upward trend since the year 2002. In 2008, average tariffs for the different sectors were: Residential: US$0.106/kWh Commercial: US$0.255/kWh Services: US$0.172/kWh Agriculture: US$0.051/kWh Industrial: medium industry US$0.153/kWh, large industry US$0.118/kWh. The average tariff, US$0.137/kWh, was 16.5% higher in 2008 when compared to 2007. Subsidies For the industrial and commercial sectors, electricity supply is priced on a rational cost basis for large firms. As a result, they receive no government subsidy, while subsidies for small firms are relatively small. On the other hand, agricultural and residential customers have traditionally received large subsidies since the electricity they consume is significantly underpriced. Extensive subsidies have contributed to a rapid growth in demand. In 2000, the average residential tariff covered only 43% of the costs, while the average tariff for agricultural use covered 31%. Total subsidies amounted to 46% of total electricity sales. In addition, residential subsidies were mostly captured by medium and high income classes, as the amount of the subsidy rose with consumption. In 2002, a restructuring of residential tariffs significantly raised the infra-marginal tariffs paid by middle and especially high consumers of electricity. Currently, billing schedules vary by temperature, season and consumption level. In spite of this reform, the price/cost ratio was still under 40% in 2002, even after the 21% increase in price due to the reform. In addition, the share of subsidies going to the non-poor population remained high, estimated at 64%. Agricultural tariffs were also modified in 2003, when a fixed price per kWh was established. The new tariffs sought to charge higher prices for excess energy use. The low tariffs, together with LFC's inefficiencies, absorb a large amount of fiscal resources. For 2008, it was estimated that the subsidies paid through electricity tariffs to final CFE and LFC consumers by the Federal Government amounted to US$10 billion (close to 1% of GDP). Investment and Financing Investment by sub-sector Necessary investment to carry out the 2008-2017 expansion plan amounts to MXN 629,106 million (US$47 billion). The breakdown of the investment is: 41.2% for generation, 21.2% for transmission, 23.9% for distribution, 11.8% for maintenance and 1.9% for other needs. From the required total, 33.9% corresponds to OFP (Obras Públicas Financiadas, or Financed Public Works), 8.8% to Independent Power Production, 51.5% to budgeted works and the remaining 5.9% to financial schemes still to be defined. Financing Pidiregas In 1995-1996 the Mexican government created Pidiregas ("Proyectos de Inversión Diferida en el Registro del Gasto" – Investment Projects with Deferred Expenditure Registration) to finance long-term productive infrastructure projects. Due to budgetary restrictions, the government realized that it could not provide all the resources needed and decided to complement the public sector's efforts with Pidiregas, a deferred financing scheme. This mechanism, which only applied to investments carried out by PEMEX (Petróleos Mexicanos) and CFE, aimed to create the conditions for the penetration of private initiatives in hydrocarbon exploration and electricity generation.
Pidiregas have been extended and have also grown in amount (PEMEX uses them for as much as four times the amount of CFE), although the original motivation for their existence is gone. Following a project finance scheme, for a project to be executed under Pidiregas, the resources that it generates from the sale of goods and services have to be enough to cover the financial obligations it incurs (a simple numerical illustration of this condition is given at the end of this section). Projects are paid with the revenues generated during their operation and require the signature of a contract in which a product or work is involved. The State assumes the risk, since PEMEX or CFE sign the contract as guarantor, while the investors recover their investment in the agreed time. As a result, Pidiregas cannot be considered as true private investment since, under true private sector participation, firms would make investment decisions and bear the full risk. The viability of the program has been questioned, as its effect on the public budget is similar to the issuance of public debt. Furthermore, until 2006, the Pidiregas scheme resulted in losses. Grid extension Since 1995, states and municipalities have held the responsibility for the planning and financing of grid extension and off-grid supply. A large part of the investment is financed through FAIS (Fund to Support Social Infrastructure). The National Commission for Indigenous People and SEDESOL (Secretariat for Social Development) also finance an important share of grid extension. Once a particular system has been constructed, its assets and operational and financial responsibility are transferred to CFE. Recent studies have concluded that interconnecting Baja California with the National Interconnected System (SIN) would be both a technically and economically sound decision. This interconnection would make it possible to serve peak demand in the Baja California system with generation resources from the SIN. Conversely, in periods of low demand in Baja California, surplus electricity and base load (i.e. geothermal and combined cycles) could be exported to the SIN. As a result, there would be a reduction of investment costs in generation infrastructure and of total production costs. In addition, the interconnection would open new opportunities for electricity exchanges with power utilities in the Western United States through the existing transmission links with California. It is expected that the interconnection will be commissioned in 2013. Renewable energy The Renewable Energy Law creates a Fund for the Energy Transition and the Sustainable Use of Energy. This fund will ensure the financing of projects evaluated and approved by the Technical Committee, chaired by SENER. The fund will begin with US$200 million for each year between 2009 and 2011. Summary of private participation in the electricity sector As required by the Constitution, the electricity sector in Mexico remains federally owned, with the Comisión Federal de Electricidad (CFE) essentially controlling the whole sector. Although generation was opened to private participation in 1992, CFE is still the dominant player, with two-thirds of installed capacity. Electricity and the environment Responsibility for the environment The Secretariat of Environment and Natural Resources (SEMARNAT), created in 2000 from the previous Secretariat of Environment, Natural Resources and Fishing (SEMARNAP), holds the responsibilities for the environment in Mexico. SEMARNAT was one of the government agencies within the Intersectoral Commission for Climate Change that elaborated Mexico's Climate Change Strategy.
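As flagged above, the Pidiregas self-financing condition (a project's own revenues must cover the financial obligations it incurs) can be illustrated with a simple coverage calculation. The revenue, cost and debt-service figures below are invented for illustration and do not describe any actual CFE or PEMEX project.

```python
# Illustrative check of the Pidiregas self-financing condition: revenues from
# the project must be sufficient to cover its own debt service.
# All figures are hypothetical and in millions of US dollars per year.
annual_revenue = 120.0      # sales of electricity from the project
annual_opex = 45.0          # operating costs
annual_debt_service = 60.0  # principal and interest falling due that year

cash_available = annual_revenue - annual_opex
coverage_ratio = cash_available / annual_debt_service

print(f"Cash available for debt service: {cash_available:.1f} MUSD")
print(f"Debt service coverage ratio: {coverage_ratio:.2f}")
print("Self-financing" if coverage_ratio >= 1.0 else "Shortfall falls back on the guarantor")
```

A ratio below 1 would mean the obligations fall back on CFE or PEMEX as guarantor, and ultimately on the public budget, which is why critics argue that Pidiregas resemble public debt rather than genuine private investment.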
Greenhouse gas emissions According to Mexico's Third National Communication to the UNFCCC, the country emitted 643 million tons of carbon dioxide equivalent (Mt CO2e) in 2002, of which almost 400 Mt CO2e resulted from the combustion of fossil fuels (over 60 percent of total emissions). The sources of Mexico's GHG emissions are energy generation (24%), transport (18%), forests and land-use change (14%), waste management (10%), manufacturing and construction (8%), industrial processes (8%), agriculture (7%), fugitive emissions (6%), and other uses (5%). Climate change mitigation Although the Kyoto Protocol does not require Mexico to reduce its GHG emissions, the country has committed to reduce its emissions voluntarily. In May 2007, President Calderón announced the National Climate Change Strategy (ENACC), which focuses on climate change as a central part of Mexico's national development policy. The ENACC sets the long-term climate change agenda for the country, together with medium to long-term goals for adaptation and mitigation. In December 2008, Mexico announced that it would reduce its GHG emissions by 50% below 2002 levels by 2050. In June 2009, the Government of Mexico formally committed itself to a detailed long-term plan for emission reductions embedded in the Special Climate Change Program (PECC), which provides an accounting of emissions by sector, creates a framework for monitoring improvements and establishes a legally binding blueprint for emission reduction initiatives, sector by sector. The PECC sets out a four-pillar program that includes (i) a long-term vision for government action; (ii) sectoral plans for GHG mitigation; (iii) plans for adaptation; and (iv) cross-cutting policy initiatives. Carbon capture and storage CDM projects in electricity As of September 2009, there were 47 energy-related registered CDM projects in Mexico with a total emission reduction potential of about 3.5 million tons of CO2 equivalent. The number of projects by type is presented in a table (source: United Nations Framework Convention on Climate Change) not reproduced here. External assistance World Bank Currently, the World Bank is contributing funds and assistance through several projects related to the energy sector in Mexico. A Rural Electrification Project with a US$15 million grant from GEF and a US$15 million World Bank loan is currently in the pipeline. This US$110 million project is focused on the design and implementation of sustainable energy models for areas without access to the electricity network. The project includes 50,000 households in Oaxaca, Guerrero and Veracruz. In October 2006, GEF financing was approved for the US$49.35 million Agua Prieta Hybrid Solar Thermal Power Plant. This project, located in the northern state of Sonora, will contribute to reducing GHG emissions through the installation of an Integrated Solar Combined Cycle System (ISCCS) using solar parabolic technology. A Large-Scale Renewable Energy Development Project was approved in June 2006. This two-phase project will receive a US$25.35 million grant from GEF, while the remaining US$125 million will be financed by local and private sources. The project seeks to assist Mexico in developing initial experience in commercially based, grid-connected renewable energy applications. It will do so by supporting the construction of an approximately 101 megawatt independent power producer (IPP) wind farm, designated as "La Venta III". The Prototype Carbon Fund approved in December 2006 a US$12.29 million investment loan for a Wind Umbrella Project.
A US$5.8 million GEF grant was approved in October 2002 for the Introduction of Climate Friendly Measures in Transport. The project, with a total budget of US$12.2 million, will contribute to the establishment of policies that assist a long-term modal shift towards climate-friendly, more efficient, less polluting and less carbon-intensive transport for the Mexico City Metropolitan Area (MCMA). IDB Currently, four IDB-financed energy activities are under implementation in Mexico. In August 2009, a US$1 million non-reimbursable technical cooperation was approved to support the National Program for Sustainable Energy Use. In September 2008, a US$749,000 non-reimbursable technical cooperation was approved to support the implementation of a pilot initiative to use alternative energy sources and implement energy efficiency measures. This technical cooperation is still awaiting implementation. In May 2007, US$200,000 was approved to finance a project that aims at providing Assistance to CFE on Environmental and Social Aspects of Hydroelectric Projects. This US$1,168,000 project aims to assess CFE performance and management capability in dealing with environmental and social impacts of large hydroelectric projects. Financing for the Support for the Energy Secretariat on the Feasibility of Bio-Ethanol as Fuel was approved in August 2005. The US$146,000 provided by the IDB is complemented with US$30,000 from the country. The broad objective of the project is to assess the competitiveness of ethanol as a fuel. Sources Carreón and Jimenez, 2003. The Mexican Electricity Sector: Economic, Legal and Political Issues. SENER, 2006. Prospectiva del sector eléctrico 2006-2015. SENER & GTZ, 2006. Energías renovables para el desarrollo sustentable en México. SENER, 2009. Prospectiva del sector eléctrico 2008-2017. SENER, 2009. Programa Especial para el Aprovechamiento de Energías Renovables. SENER, 2015. Prospectiva del sector eléctrico 2015-2029. World Bank, 2004. Energy Policies and the Mexican Economy. World Bank, 2004. Mexico: public expenditure review, Volume II: Main report. World Bank, 2005. Mexico: infrastructure public expenditure review (IPER). See also Mexico Economy of Mexico Energy in Mexico Index of Mexico-related articles Renewable energy in Mexico Wind power in Mexico Solar power in Mexico Petroleum industry in Mexico Water supply and sanitation in Mexico Renewable energy by country References External links Environment and Natural Resources Secretariat (SEMARNAT) (archived) List of World Bank projects in Mexico List of Inter-American Development Bank projects in Mexico
climate change in belgium
Climate change in Belgium describes the issues related to global warming in Belgium. Belgium has the 7th largest CO2 emissions per capita in the EU. CO2 emissions have dropped 19.0% in comparison with 1990 levels. The average temperature has risen 1.9 degrees Celsius since measurements began in 1890, with an acceleration since 1954. Greenhouse gas emissions In 2021, greenhouse gas (GHG) emissions were 146.9 million tons of CO2 equivalent (Mt CO2 eq), of which 88 Mt came from the Flemish Region, 54.8 Mt from the Walloon Region and 4 Mt from the Brussels-Capital Region. Targets by region Flemish Region The target of the Flemish Region is a reduction of 5.2% in GHG emissions in the period 2008-2012 compared to 1990. That means average emissions of 83.4 million tons CO2 equivalent in the 2008-2012 period. The 2008–2012 Flemish allocation plan deals with installations consuming more than 0.5 PJ (139 GWh) annually. 17% of GHG emissions come from transportation and 21% from electricity and heat production (excluding heat for buildings). There are 178 installations listed. The largest emitters are, with their emissions in tons of CO2 equivalent (t CO2 eq) per year: Sidmar owned by ArcelorMittal in Ghent: 8,918,495 Total refinery in Antwerp: 4,323,405 BASF in Antwerp: 2,088,422 Zandvliet Power, a joint venture of BASF and GDF Suez, in Zandvliet: 1,119,158 Esso refinery in Antwerp: 1,933,000 Fina Olefins in Antwerp: 1,414,550 Electrabel in Herdersbrug: 990,397 Electrabel in Drogenbos: 998,794 E.ON Benelux in Vilvoorde: 828,920 SPE in Ringvaarts: 807,066 Electrabel in Ruien: 730,332 E.ON Benelux in Langerloo: 586,961 Degussa in Antwerp: 526,949 Brussels-Capital Region Belgium being a federal state, the Brussels-Capital Region also made a second allocation plan for 2008–2012, based on the decree of June 3, 2004 that implements the European directive 2003/87/CE. In that plan, Brussels' objective is to limit the increase in greenhouse gas emissions to a maximum of 3.475% compared to 1990. In 2004, the Brussels-Capital Region emitted 4.4 million tons CO2 equivalent, an increase of 9% compared to 1990, when emissions were 4.083 Mt CO2 eq. The emissions come from domestic use (45%), the tertiary sector (25%), transportation (19%) and energy/industry (2%). The 4.4 Mt CO2 eq do not take into account GHG emissions due to electricity production outside the region. The 2008–2012 allocation plan includes only eight facilities: the Audi (former Volkswagen) auto production plant in Forest; a BNP Paribas (former Fortis) facility; the Bruda asphalt plant; Electrabel turbo-jet power plants in Schaerbeek, at Buda and at Volta; an RTBF television facility; and the World Trade Center building. Walloon Region In the second allocation plan (for the period 2008–2012), the Walloon Region is planning a reduction of 7.5% in GHG emissions compared to 1990, when 54.84 million tons CO2 equivalent were emitted. The plan for 2008-2012 includes 172 installations. In 2005, the largest emitters were (emissions in tons CO2 equivalent per year): CCB cement plant in Gaurain-Ramecroix: 1,515,543 Holcim cement plant in Obourg: 1,508,060 Electrabel power plant in Monceau: 1,260,520 CBR cement plant in Lixhe: 1,059,929 Dumont Wautier lime plant in Saint-Georges: 1,294,087. Other large emitters are cast iron and steel producers in Charleroi and Liège. On October 22, 2009, BASF announced that it would close its plant at Feluy at the end of 2009.
That plant had a yearly allocation of 36,688 tons of CO2 equivalent. Mitigation and adaptation Policies and legislation As a member of the European Union, Belgium applies the European Union Emission Trading Scheme set up by Directive 2003/87/EC. Under the Kyoto Protocol, Belgium's target is a 7.5% reduction in greenhouse gas emissions compared to 1990. Belgium set up a National Allocation Plan at the federal level with targets for each of the three regions. On 14 November 2002, Belgium signed the Cooperation Agreement for the implementation of a National Climate Plan and reporting in the context of the UNFCCC and the Kyoto protocol. The first National Allocation Plan was for the period from 2005 to 2007. The European Commission approved it on 20 October 2004. The second allocation plan was for the period 2008-2012 and aimed for a reduction of 7.5% in greenhouse gas emissions compared to 1990. Paris Agreement The Paris Agreement is a legally binding international agreement. Its main goal is to limit global warming to well below 2 degrees Celsius, and preferably to 1.5 degrees Celsius, compared to pre-industrial levels. The nationally determined contributions (NDCs) are the plans to fight climate change adapted for each country. Every party to the agreement has different targets based on its own historical record and national circumstances, and all the targets for each country are stated in its NDC. In the case of member countries of the European Union, the goals are very similar and the European Union works with a common strategy within the Paris Agreement. See also Plug-in electric vehicles in Belgium References
nationally determined contribution
A nationally determined contribution (NDC) or intended nationally determined contribution (INDC) is a non-binding national plan highlighting climate change mitigation, including climate-related targets for greenhouse gas emission reductions. These plans also include the policies and measures governments aim to implement in response to climate change and as a contribution to achieving the global targets set out in the Paris Agreement. NDCs are the first greenhouse gas targets under the UNFCCC that apply equally to both developed and developing countries. Process The establishment of NDCs combines the top-down system of a traditional international agreement with bottom-up elements through which countries put forward their own goals and policies in the context of their own national circumstances, capabilities, and priorities, with the goal of reducing global greenhouse gas emissions enough to limit anthropogenic temperature rise to well below 2 °C (3.6 °F) above pre-industrial levels, and to pursue efforts to limit the increase to 1.5 °C (2.7 °F). NDCs contain steps taken towards emissions reductions, steps taken to adapt to climate change impacts, and the support the country needs, or will provide, to address climate change. After the initial submission of INDCs in March 2015, an assessment phase followed to review the impact of the submitted INDCs before the 2015 United Nations Climate Change Conference. NDCs are established independently by the parties (countries or regional groups of countries) in question. However, they are set within a binding iterative "catalytic" framework designed to ratchet up climate action over time. Once states have set their initial NDCs, these are expected to be updated on a 5-year cycle. Biennial progress reports are to be published that track progress toward the objectives set out in states' NDCs. These will be subjected to technical review and will feed into a global stocktaking exercise, itself operating on an offset 5-year cycle, where the overall sufficiency of NDCs collectively will be assessed. The information gathered from parties' individual reports and reviews, along with the more comprehensive picture attained through the "global stocktake", will in turn feed back into and shape the formulation of states' subsequent pledges. The logic, overall, is that this process will offer numerous avenues where domestic and transnational political processes can play out, facilitating the making of more ambitious commitments and putting pressure on states to comply with their nationally determined goals. All the goals for each country are stated in its NDC, which is based on the points below: Climate neutrality by 2050 Limiting global warming to well below 2 °C and pursuing efforts to limit it to 1.5 °C Reduction in emissions of greenhouse gases (GHG) Increased adaptation to the harmful effects of climate change Adjusting financial flows so that they are consistent with reduced GHG emissions Global goals The Sustainable Development Goal 13 on climate action has an indicator related to NDCs for its second target: Indicator 13.2.1 is the "Number of countries with nationally determined contributions, long-term strategies, national adaptation plans, strategies as reported in adaptation communications and national communications". As of 31 March 2020, 186 parties (185 countries plus the European Union) had communicated their first NDCs to the United Nations Framework Convention on Climate Change Secretariat.
A report by the UN stated in 2020 that "the world is way off track in meeting this target at the current level of nationally determined contributions." The COVID-19 pandemic was thought to offer an opportunity for countries to "reassess priorities and to rebuild their economies to be greener and more resilient to climate change". History NDCs have an antecedent in the pledge and review system that had been considered by international climate negotiators back in the early 1990s. All countries that were parties to the United Nations Framework Convention on Climate Change (UNFCCC) were asked to publish their intended nationally determined contributions at the 2013 United Nations Climate Change Conference held in Warsaw, Poland, in November 2013. The intended contributions were determined without prejudice to the legal nature of the contributions. The term was intended as a compromise between "quantified emissions limitation and reduction objective" (QELROs) and "Nationally Appropriate Mitigation Actions" (NAMAs), which the Kyoto Protocol used to describe the different legal obligations of developed and developing countries. After the Paris Agreement entered into force in 2016, a country's INDC became its first NDC when it ratified the agreement, unless it decided to submit a new NDC at the same time. NDCs are the first greenhouse gas targets under the UNFCCC that apply equally to both developed and developing countries. INDC Submissions On 27 February 2015, Switzerland became the first nation to submit its INDC. Switzerland said that it had experienced a temperature rise of 1.75 °C since 1864, and aimed to reduce greenhouse gas emissions 50% by 2030. India submitted its INDC to the UNFCCC in October 2015, committing to cut the emissions intensity of GDP by 33–35% by 2030 from 2005 levels. In its submission, India wrote that it needs "at least USD 2.5 trillion" to achieve its 2015–2030 goals, and that its "international climate finance needs" will be the difference over "what can be made available from domestic sources." Of surveyed countries, 85% reported that they were challenged by the short time frame available to develop INDCs. Other challenges reported include difficulty in securing high-level political support, a lack of certainty and guidance on what should be included in INDCs, and limited expertise for the assessment of technical options. However, despite the challenges, less than a quarter of countries said they had received international support to prepare their INDCs, and more than a quarter indicated they are still applying for international support. The INDC process and the challenges it presents are unique to each country and there is no "one-size-fits-all" approach or methodology. Results and current status Emission reductions offered by current NDCs Through the Climate Change Performance Index, the Climate Action Tracker and the Climate Clock, people can see online how well each individual country is currently on track to achieve its Paris Agreement commitments. These tools, however, only give a general insight into current collective and individual country emission reductions. They do not show the emission reductions offered per country for each measure proposed in the NDC. Achievement status and sufficiency for Paris Agreement warming thresholds The rates of emissions reductions need to increase by 80% beyond NDCs to likely meet the 2 °C upper target range of the Paris Agreement.
The probabilities of major emitters meeting their NDCs without such an increase are very low. Therefore, with current trends the probability of staying below 2 °C of warming is only 5%; if NDCs were met and continued post-2030 by all signatories, the probability would be 26%. Experts have recommended fundamental structural changes to the socioeconomics of global civilization for systematic decarbonization, including changes to the mechanisms of work, accountability and resource allocation, as well as pursuing a path towards a maximum of 1.5 degrees of warming rather than 2 degrees. By country References External links Registry of submitted NDCs by country "Synthesis report on the aggregate effect of the intended nationally determined contributions" (PDF), United Nations Climate Change Commission, 30 October 2015 "NDC Synthesis Report", United Nations Climate Change Commission, 26 February 2021
carmichael coal mine
The Carmichael coal mine is a coal mine in Queensland, Australia, which produced its first shipment of coal in December 2021. The mine has drawn criticism for its environmental impacts on the Great Barrier Reef, water usage and carbon emissions. Other contentious issues are its claimed economic benefits, financial viability and use of taxpayer funding. The mine was initially planned to produce 60 million tonnes of coal per year; however, funding difficulties resulted in the planned mine being downsized to produce 10 million tonnes per year. The thermal coal produced by the mine is predicted to consist of 11% ash and have a weighted average energy content of 5,000–5,500 kcal/kg. The coal is planned to be transported by rail (including the Goonyella railway line) to the ports at Hay Point and Abbot Point. Construction of the mine started in early 2019 and commercial-scale coal mining began approximately three years later. On 29 December 2021, it was widely reported that the first coal shipment from the Carmichael Mine was ready for export. Location The mine is located in Central Queensland, with the majority of the site being within the Isaac Region and a small portion being within the Charters Towers Region. The coal was formed as part of the Galilee Basin, a 247,000 square kilometre inland region which includes aquifers that are part of the Great Artesian Basin underground fresh water source. Mining operations in the basin currently consist of small-scale barite, bentonite, calcite, gypsum, limestone, opal, phosphate and potassium mines; there is no history of coal mining in the basin. Environmental impacts Greenhouse gas emissions According to the environmental impact statement, the mine was expected to produce 200 million tonnes of carbon dioxide, based on a 60-year lifespan. In April 2019 Bob Brown led a convoy of vehicles to protest against the proposed coal mine. The protest was criticised by pro-coal lobby groups and is considered a factor in the Queensland voters' swing away from progressive parties in the 2019 Australian federal election. Greta Thunberg drew international attention to the mine in January 2020, when she called on the large German-based industrial corporation Siemens AG (which claimed to be one of the first companies to have pledged to be carbon neutral by 2030) to stop the delivery of railway equipment for the mine. Siemens responded that it "should have been wiser about this project beforehand", but declined to cancel the contract. Campaigners continued to oppose the project with hundreds of rallies and other actions targeting 145 companies. Water: rivers and underground sources Adani had initially planned to use 12,500 megalitres per year from the Belyando River for the Carmichael coal mine. Regarding the lowering of the local water table, the company's Supplementary Environmental Impact Statement stated that "maximum impacts in excess of 300 metres are predicted". Beyond the mine boundary, Adani's groundwater model predicted water table levels to drop "typically between 20 and 50m" and "up to around 4m in the vicinity of the [Carmichael] river". During a hearing in the QLD Land Court, Adani's representatives defended predictions drawn from drilling data, despite allegations that this data was insufficient to determine risks of underground collapses that could impact groundwater systems. Endangered species The mine site area is home to a number of species, including the yakka skink, ornamental snake, and the waxy cabbage palm.
The mine site is home to the largest known community of black-throated finches, and the operation of the mine is subject to a Black-Throated Finch Management Plan. The finches' population is in decline, and the southern subspecies is threatened, having vanished from 80% of its former range. Adani Australia produced a management plan for the finch, proposing to gradually clear land around the mine and force the finches to move away. The plan was heavily criticised by some ecologists, who highlighted the plan to graze cattle on protected land and noted the land was tagged to be used for other projects. There was also a lack of transparency and consultation with experts in the field. History The mine was announced in 2010, initially with a forecast mining duration of 90 years, which was later reduced to 60 years. The federal government approved the project in July 2014. Associated works included new port terminals and seabed dredging at Abbot Point, and it was planned to dump the dredge spoil on land. However, the approval was set aside in August 2015, when the Federal Court of Australia found that environment minister Greg Hunt did not correctly follow requirements under the Environment Protection and Biodiversity Conservation Act 1999 regarding the endangered yakka skink and ornamental snake. This led to considerable controversy and the project was re-approved in October 2015. In July 2019 Adani Australia commenced construction of the Carmichael mine. The company aimed to start commercial-scale coal mining by the end of 2021. Mine approvals and construction were delayed by campaigns run by Traditional Owners and environmentalists, which included non-violent direct action at the construction site. In November 2020 Adani changed the name of its Australian subsidiary, which operates the mine, from Adani Mining to Bravus Mining & Resources. The company was the "jersey sleeve" sponsor of the North Queensland Cowboys NRL rugby league team for the 2021 season. Mine and associated facilities The mine is intended to be an open-cut mine (unlike earlier designs, which also included underground mines) with 279 km2 (108 sq mi) of land being excavated. The total area of the mine site is planned to be 447 km2 (173 sq mi). The original plan included a new 388-kilometre (241 mi) standard-gauge railway, which was proposed to be paid for by taxpayers. In September 2018, Adani announced that it had abandoned plans to build the standard-gauge line in favour of a 200 km (124 mi) extension to a nearby existing narrow-gauge railway. This railway is planned to connect the mine to the maritime freight terminal at Abbot Point, and construction began in mid-2020. Construction of the approximately 200 km Carmichael Rail Network was completed in September 2021. In January 2020, in response to protests in Berlin by Extinction Rebellion, Siemens announced it would re-evaluate its $20m contract to supply signalling systems for the rail link, but decided to continue with the contract, saying there was "practically no legally and economically responsible way to unwind the contract without neglecting fiduciary duties." The fly-in fly-out (FIFO) workers for the mine during the construction phase are planned to be based in the regional cities of Rockhampton and Townsville. A new airstrip close to the mine was proposed to be constructed, at a cost to taxpayers of $31 million to $34 million.
Following public criticism and the project delays, government funding for the proposed airstrip was cancelled and the airstrip was paid for by Adani. Flights to the airstrip began in June 2020. The subcontract for the construction of a 189 km (117 mi) railway line to the mine was signed in 2020. The company claimed to have approximately 2000 employees in November 2020 and approximately 2600 employees in June 2021. Financial issues In 2015, a number of major international banks publicly ruled out financing the coal mine, railway line or shipping terminal. This included more than half of the top 20 coal-financing banks globally, such as Citigroup, JP Morgan Chase, Goldman Sachs, Deutsche Bank, Royal Bank of Scotland, HSBC, Barclays, Credit Agricole and Société Générale. Standard Chartered provided financing to the project, but Adani ended the bank's advisory contract. Large coal projects in Australia typically engage with one or more of the 'big four' Australian banks in arranging or providing debt. However, National Australia Bank announced in September 2015 that it would not fund the project, with Westpac (April 2015) and ANZ Bank (August 2017) also ruling out funding. Before the mine's construction, some analysts doubted the mine was viable given the price of coal at the time. In November 2014, one analyst predicted that a price of about $100–$110 a tonne was required for the project to be financially viable. The price of coal fluctuated significantly over the following years: from US$60/t in 2015 to US$115/t in 2018 and US$170/t in May 2021. The company claims that it has agreements in place to sell 10 million tonnes of coal from the mine. Taxpayer funding In 2014, the Queensland Liberal National Party state government proposed reduced taxes for the project in the form of an open-ended royalty rate "holiday", and for taxpayers to pay for the sediment dumping facility in the Caley Valley wetlands. The Labor opposition criticised the secrecy surrounding the costs and suggested that up to $1.08 billion of public money would be required. Following the Queensland Labor Party's victory in the 2015 state election, the Labor party vowed not to use state funds for the railway line; however, the party called on the federal Liberal National government to use federal taxpayer money for the railway line. The federal government considered the proposal, but in the end it was not successful. In October 2020, the Queensland Labor Party government announced that the royalties deal for the coal mine had been signed. The deal includes deferring royalty payments for an unspecified period. Indigenous community impacts Native Title court cases In 2016, a group of Indigenous landholders launched a case in the Queensland Supreme Court against the granting of Adani's mining lease, on the basis that they had not been properly consulted. The court ruled against overturning the mining lease. A 2018 appeal upheld the 2016 decision. Land use agreements At a 2017 meeting, the majority of Wangan and Jagalingou (W&J) people present voted to accept the proposed mining rights deal presented by Adani. However, several W&J people have stated that they were paid by Adani to attend, or that their vote against the deal was not counted. The resulting indigenous land use agreement was accepted by 7 of 12 W&J representatives.
A 2019 appeal by the other 5 W&J representatives was dismissed by the Federal Court, with Adani seeking $600,000 in court costs from the W&J representatives. As of August 2022, members of the Wangan and Jagalingou people continue to occupy a cultural ceremony and camp site near the mine.
Legal issues
2015: Federal EPBC approval (initial case)
In January 2015, the Mackay Conservation Group challenged the July 2014 federal approval of the Carmichael project by Greg Hunt, Environment Minister, under the Environment Protection and Biodiversity Conservation Act 1999. The Federal Court of Australia case involved three main contentions: that the Minister did not take into account the greenhouse gas emissions, the company's environmental record in India, and the "approved conservation advice" for the yakka skink and the ornamental snake. The court set aside the approval due to concerns regarding the yakka skink and ornamental snake, effectively overturning the approval. Following this decision, the attorney-general at the time, George Brandis, stated his intention to disallow third parties from challenging the minister's approvals under the Environment Protection and Biodiversity Conservation Act 1999. This amendment to the Act did not occur. In October 2015, the coal mine was re-approved by the federal environment minister.
2015–2016: Queensland mining leases and environmental authority
In 2015, the Land Services of Coast and Country (LSCC) group launched a case in the Queensland Land Court challenging the Queensland coordinator-general's Mining Lease and Environmental Authority approvals. LSCC contended the approval was flawed regarding the economic, environmental and financial impact of the mine. The court's verdict in December 2015 did not overturn the approval, but placed extra environmental requirements on the mine. LSCC applied in 2016 for an appeal in the Supreme Court of Queensland, and this application was unsuccessful.
2016–2017: Federal EPBC approval regarding impacts on Great Barrier Reef
The Australian Conservation Foundation (ACF) made a judicial review challenge under the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act) in late 2015, arguing that proper procedure was not followed when the Minister did not take into account the impact of emissions on the Great Barrier Reef. The ACF appealed the decision in the Federal Court of Australia, which was unsuccessful. The court ruled in favour of the government, based on the argument that only considering the direct emissions from mining operations (and not the emissions caused by the burning of its coal) is a decision at the minister's discretion. The case hinged on a market substitution defence, which has been criticised as "the only significant barrier remaining to a successful climate change case".
2018: Federal EPBC approval – exemption for water usage from the Suttor River
The Australian Conservation Foundation (ACF), represented by Environmental Defenders Office Queensland (EDO), lodged a judicial review challenge in the Federal Court in December 2018 of the Federal Environment Minister's decision not to apply appropriate legislation regarding water. The review challenged then-Environment Minister Melissa Price's decision to waive a full environmental assessment for water use from the Suttor River in central Queensland.
Adani notified the government that the action was a controlled action, but the government decided that it was not a controlled action under the EPBC Act and that no environmental impact statement (EIS) was needed for it to proceed. Under amendments to the EPBC Act known as the water trigger, the Minister must obtain advice from the Independent Expert Scientific Committee on Coal Seam Gas and Large Coal Mining Development if an activity is likely to have a significant impact on water resources or on protected ecological communities, species, World Heritage Sites, national heritage sites, or protected wetlands. The water trigger was added to the EPBC Act in 2013 by Tony Windsor. In June 2019 the application was allowed by consent, meaning the Australian Government would reassess the project's water use.
2020: Federal EPBC approval – exemption for water pipeline project
In March 2020, the Australian Conservation Foundation (ACF) launched a judicial review application regarding the water pipeline for the project (called the North Galilee Water Scheme Infrastructure Project). The application argued that the water pipeline should require assessment under the EPBC water trigger. The Federal Court ruled in favour of the ACF in May 2021, thereby requiring the minister to reconsider the environmental impacts associated with the mine's water pipeline before it is approved.
See also
Coal mining in Australia
List of mines in Australia
== References ==
oil and gas climate initiative
The Oil and Gas Climate Initiative (OGCI) is an international industry-led organization with 12 member companies from the oil and gas industry: BP, Chevron, CNPC, Eni, Equinor, ExxonMobil, Occidental, Petrobras, Repsol, Saudi Aramco, Shell and TotalEnergies. Together, these companies represent over "30% of global operated oil and gas production." It was established in 2014 and has a mandate to work together to "accelerate the reduction of greenhouse gas emissions" in full support of the Paris Agreement and its aims. Their mandate says that they will "seek actions" to "accelerate and participate in the energy transition." On November 4, 2016, OGCI announced the creation of the OGCI Climate Investments fund, which will invest $1 billion over 10 years in companies or projects that reduce "methane emissions from gas production", in "technologies to capture and either use or store carbon emissions", and in energy efficiency. The fund invests in reducing the carbon footprint of the oil and gas industry and other emitting sectors, rather than in renewable energy. Former BP CEO Bob Dudley is the chair of the OGCI CEO steering committee and Bjørn Otto Sverdrup is the chair of the OGCI executive committee. Pratima Rangarajan is the CEO of OGCI Climate Investments.
Background
The industry-led Oil and Gas Climate Initiative (OGCI) was created in 2014 by CEOs of the world's largest energy companies to "seek action" to support the Paris Agreement. The member companies, which include BP, Chevron, CNPC, Eni, Equinor, ExxonMobil, Occidental, Pemex, Petrobras, Repsol, Saudi Aramco, Shell and Total, represent over "32% of global operated oil and gas production." As part of their initiative to "improve their environmental reputation", the OGCI announced at an event held in London that they would be investing $1bn over the next decade in "innovative low emissions technologies". The announcement was planned to "coincide" with the Paris Agreement, which came into effect on the same day. The Telegraph had predicted that the OGCI could face fierce scrutiny and accusations of "greenwashing" over the announcement, with some environmentalists saying that "the 'big oil' business model is fundamentally incompatible with avoiding dangerous climate change." The Telegraph said that OGCI chair and BP CEO Bob Dudley makes more than 1 billion a year. On September 22, 2019, the OGCI hosted an "invitation-only forum" on the "sidelines" of the September 23 climate action summit organized by the UN secretary-general, António Guterres, which was held in New York. The day before the summit, which brought together "[W]orld leaders, academics, government representatives and environmentalists", OGCI oil and gas executives held their own "closed high-level discussion" with key stakeholders. According to The Guardian, critics said that the OGCI was "attempt[ing] to influence negotiations in favour of fossil fuel companies." On September 23, the OGCI held their formal forum at the Morgan Library and Museum, New York.
Targets
In 2017, OGCI members developed a baseline of "aggregated upstream oil and gas operations emissions" of 24 kg CO2e/boe.
Methane intensity
In 2018, OGCI set a methane intensity target. In 2018, member companies had "reduced collective methane intensity by 9%" and were "on track to meet the 2025 target of below 0.25%."
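As a rough illustration of how aggregated intensity figures of this kind are computed, the sketch below divides total upstream emissions by total production, and methane emitted by marketed gas. It is only a sketch: the input numbers are invented placeholders and the function names are not part of any OGCI methodology.

```python
# Illustrative only: not OGCI's official reporting methodology.
# All input figures below are made-up placeholders.

def carbon_intensity(total_upstream_kg_co2e: float, total_production_boe: float) -> float:
    """Aggregated upstream carbon intensity in kg CO2e per barrel of oil equivalent."""
    return total_upstream_kg_co2e / total_production_boe

def methane_intensity(methane_emitted: float, marketed_gas: float) -> float:
    """Methane intensity: methane emitted as a fraction of marketed natural gas."""
    return methane_emitted / marketed_gas

# Hypothetical aggregated member data
emissions_kg_co2e = 2.4e11   # about 240 million tonnes CO2e, expressed in kg
production_boe = 1.0e10      # 10 billion barrels of oil equivalent
print(f"carbon intensity: {carbon_intensity(emissions_kg_co2e, production_boe):.0f} kg CO2e/boe")

methane_volume = 2.0e9       # cubic metres of methane emitted
marketed_volume = 1.0e12     # cubic metres of gas marketed
print(f"methane intensity: {methane_intensity(methane_volume, marketed_volume):.2%}")
```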
Carbon intensity
At their September 2019 forum, OGCI said that they were "working on a carbon intensity target to reduce by 2025 the collective average carbon intensity of member companies' aggregated upstream oil and gas operations." Among the actions to meet their targets of reducing carbon intensity, the OGCI listed "improving energy efficiency, minimizing flaring, upgrading facilities and co-generating electricity and useful heat."
Carbon pricing
At their September 2019 forum, member companies pledged to "support policies that attribute an explicit or implicit value to carbon" as "one of the most cost-efficient ways to achieve the low carbon transition as early as possible".
Reduce flaring
By October 2019, the fossil-fuel executives said that until recently they had been making progress in cutting back on routine flaring, where vast amounts of natural gas are burned off as a waste "by-product" during the extraction of crude oil. Oil extraction companies focus on drilling and pumping oil, which is highly lucrative, but the less-valuable gas accompanying the oil is more difficult to transport to consumers. Production growth has "far outpaced pipeline construction" during the boom in the Permian and Bakken oil fields. Both gas flaring and gas venting waste a primary energy resource and release potent greenhouse gases into the atmosphere. Regulation of the amount of each type of waste varies from one jurisdiction to another. Instances also occur where companies have access to transport capacity but are allowed to flare rather than pay pipeline costs.
Investments
The OGCI Climate Investments fund will not invest in renewables. It will focus on action that will reduce "methane emissions from gas production" and on "technologies to capture and either use or store carbon emissions". At their New York September 2019 forum, OGCI said that it had "15 investments in its portfolio", which include Kelvin, SeekOps, Boston Metal, 75F, Norsepower, and XL. OGCI Climate Investments focuses on "innovative companies that are ready to be commercialized" in collaboration with "global co-investors and industrials to achieve speed and scale."
Wabash Valley Resources
On May 20, 2019, OGCI announced its funding support of the Terre Haute, Indiana-based Wabash Valley Resources, which will "capture and sequester 1.5–1.75 million tons of CO2 annually from Wabash Valley Resources' co-located ammonia plant", making it "the largest carbon sequestration project in the United States".
SeekOps
In September 2019, OGCI Climate Investments and Equinor Technology Ventures provided funding for SeekOps, a technology spinoff from NASA's Jet Propulsion Laboratory. SeekOps uses "integrated drone-based systems" capable of "detect[ing], localiz[ing], and quantify[ing]" natural gas emissions.
Initiatives
In September 2019, OGCI announced a KickStarter initiative to increase the deployment of carbon capture, use and storage (CCUS) globally to "achieve net zero emissions."
External links
https://ogci.com
== References ==
bitcoin
Bitcoin (abbreviation: BTC or XBT; sign: ₿) is a decentralized cryptocurrency. Nodes in the bitcoin network verify transactions through cryptography and record them in a public distributed ledger called a blockchain. Based on a free market ideology, bitcoin was invented in 2008 by Satoshi Nakamoto, an unknown person.Use of bitcoin as a currency began in 2009, with the release of its open-source implementation.: ch. 1  In 2021, El Salvador adopted it as legal tender. Still, bitcoin is rarely used for transactions with merchants and is mostly seen as an investment. For this reason, it has been widely described as an economic bubble.As bitcoin is pseudonymous, its use by criminals has attracted the attention of regulators, leading to its ban by 51 countries as of 2021. The environmental effects of bitcoin are also substantial. Its proof-of-work algorithm for bitcoin mining is computationally difficult and requires increasing quantities of electricity, so that, as of 2022, bitcoin is estimated to be responsible for 0.2% of world greenhouse gas emissions. Design Units and divisibility The unit of account of the bitcoin system is the bitcoin. Currency codes for representing bitcoin are BTC and XBT.: 2  Its Unicode character is ₿. One bitcoin is divisible to eight decimal places.: ch. 5  Units for smaller amounts of bitcoin are the millibitcoin (mBTC), equal to 1⁄1000 bitcoin, and the satoshi (sat), which is the smallest possible division, and named in homage to bitcoin's creator, representing 1⁄100000000 (one hundred millionth) bitcoin. 100,000 satoshis are one mBTC.No uniform convention for bitcoin capitalization exists; some sources use Bitcoin, capitalized, to refer to the technology and network and bitcoin, lowercase, for the unit of account. The Oxford English Dictionary advocates the use of lowercase bitcoin in all cases. Blockchain The bitcoin blockchain is a public ledger that records bitcoin transactions. It is implemented as a chain of blocks. Each block contains a SHA-256 cryptographic hash of the previous block, linking them and giving the blockchain its name.: ch. 7  A network of communicating nodes running bitcoin software maintains the blockchain.: 215–219  Transactions of the form payer X sends Y bitcoins to payee Z are broadcast to this network using readily available software applications. Individual blocks, public addresses, and transactions within blocks can be examined using a blockchain explorer.Network nodes can validate transactions, add them to their copy of the ledger, and then broadcast them to other nodes. To achieve independent verification of the chain of ownership, each network node stores its own copy of the blockchain. Every 10 minutes on average, a new group of accepted transactions, called a block, is created, added to the blockchain, and quickly published to all nodes, without requiring central oversight. This allows bitcoin software to determine when a particular bitcoin was spent, which is needed to prevent double-spending. A conventional ledger records the transfers of actual bills or promissory notes that exist apart from it, but as a digital ledger, bitcoins only exist by virtue of the blockchain; they are represented by the unspent outputs of transactions.: ch. 5 Transactions Transactions are defined using a Forth-like scripting language.: ch. 5  Transactions consist of one or more inputs and one or more outputs. 
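The hash-linking just described can be illustrated with a small, self-contained sketch. This is a toy chain for intuition only, not Bitcoin's actual block format (real blocks use a binary header, a Merkle root of the transactions, and double SHA-256); the field names here are invented.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    """Toy block: commits to the previous block's hash and a list of transactions."""
    content = {"prev_hash": prev_hash, "transactions": transactions}
    return {**content, "hash": sha256_hex(json.dumps(content, sort_keys=True).encode())}

# Each block stores the hash of the block before it, forming the chain.
genesis = make_block("0" * 64, ["coinbase to Alice"])
block1 = make_block(genesis["hash"], ["Alice pays Bob 1.5"])
block2 = make_block(block1["hash"], ["Bob pays Carol 0.7"])

# Tampering with an earlier block changes its hash and breaks every later link.
tampered = {"prev_hash": genesis["prev_hash"], "transactions": ["coinbase to Mallory"]}
tampered_hash = sha256_hex(json.dumps(tampered, sort_keys=True).encode())
print(tampered_hash == block1["prev_hash"])  # False: the chain no longer validates
```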
When a user sends bitcoins, they designate each address and the amount of bitcoin being sent to that address in an output, allowing users to send bitcoins to multiple recipients in one transaction. To prevent double-spending, each input must refer to a previous unspent output in the blockchain. The use of multiple inputs corresponds to the use of multiple coins in a cash transaction. As in a cash transaction, the sum of inputs can exceed the intended sum of payments. In such a case, an additional output is used, returning the change back to the payer. Any input satoshis not accounted for in the transaction outputs become the transaction fee.The blocks in the blockchain were originally limited to 32 megabytes in size. The block size limit of one megabyte was introduced by Satoshi Nakamoto in 2010. Eventually, the block size limit of one megabyte created problems for transaction processing, such as increasing transaction fees and delayed processing of transactions. Andreas Antonopoulos has stated Lightning Network is a potential scaling solution and referred to lightning as a second-layer routing network.: ch. 8 Ownership In the blockchain, bitcoins are linked to specific addresses that are hashes of a public key. Creating an address involves generating a random private key and then computing the corresponding address. This process is almost instant, but the reverse–finding the private key for a given address–is nearly impossible.: ch. 4  Publishing a bitcoin address does not risk its private key, and it is extremely unlikely to accidentally generate a used key with funds. To use bitcoins, owners need their private key to digitally sign transactions, which are verified by the network using the public key, keeping the private key secret.: ch. 5 Losing a private key means losing access to the bitcoins, with no other proof of ownership accepted by the network. For instance, in 2013, a user lost ₿7,500, valued at $7.5 million, by accidentally discarding a hard drive with the private key. It is estimated that around 20% of all bitcoins, worth about $20 billion in July 2018, are lost.To ensure the security of bitcoins, the private key must be kept secret.: ch. 10  Exposure of the private key, such as through a data breach, can lead to theft of the associated bitcoins. As of December 2017, approximately ₿980,000 had been stolen from cryptocurrency exchanges. Mining and supply Mining is a record-keeping service done through the use of computer processing power. Miners keep the blockchain consistent, complete, and unalterable by repeatedly grouping newly broadcast transactions into a block, which is then broadcast to the network and verified by recipient nodes.To be accepted by the rest of the network, a new block must contain a proof-of-work (PoW). The PoW requires miners to find a number called a nonce (a number used just once), such that when the block content is hashed along with the nonce, the result is numerically smaller than the network's difficulty target.: ch. 8  This PoW is easy for any node in the network to verify, but extremely time-consuming to generate. Miners must try many different nonce values before a result happens to be less than the difficulty target. Because the difficulty target is extremely small compared to a typical SHA-256 hash, block hashes have many leading zeros: ch. 
8  as can be seen in this example block hash: 0000000000000000000590fc0f3eba193a278534220b2b37e9849e1a770ca959By adjusting this difficulty target, the amount of work needed to generate a block can be changed. Every 2,016 blocks (approximately two weeks), nodes deterministically adjust the difficulty target to keep the average time between new blocks at ten minutes. In this way the system automatically adapts to the total amount of mining power on the network.: ch. 8  As of April 2022, it takes on average 122 sextillion (122 thousand billion billion) attempts to generate a block hash smaller than the difficulty target. Computations of this magnitude are extremely expensive and utilize specialized hardware.The successful miner finding the new block is allowed to collect for themselves all transaction fees from transactions they included in the block, as well as a predetermined reward of newly created bitcoins. To claim this reward, a special transaction called a coinbase is included in the block, with the miner as the payee. All bitcoins in existence have been created through this type of transaction.: ch. 8  This reward is reduced by half every 210,000 blocks (approximately every four years), until ₿21 million are generated. Under this schedule, new bitcoin issuance will halt around the year 2140. After that, miners would be rewarded by transaction fees only. Though transaction fees are optional, miners can choose which transactions to process and prioritize those that pay higher fees. Miners may choose transactions based on the fee paid relative to their storage size, not the absolute amount of money paid as a fee. These fees are generally measured in satoshis per byte (sat/b). The size of transactions is dependent on the number of inputs used to create the transaction and the number of outputs.: ch. 8 The proof-of-work system, alongside the chaining of blocks, makes modifications to the blockchain extremely hard, as an attacker must modify all subsequent blocks in order for the modifications of one block to be accepted. As new blocks are being generated continuously, the difficulty of modifying an old block increases as time passes and the number of subsequent blocks (also called confirmations of the given block) increases. Decentralization Bitcoin is decentralized due to its lack of a central authority and of a single administrator. Anybody can store the ledger on a computer,: ch. 1  which is maintained by a network of equally privileged miners.: ch. 1  Anybody can create a new bitcoin address without needing any approval.: ch. 1  The issuance of bitcoins is decentralized, in that they are issued as a reward for the creation of a new block.Conversely, researchers have pointed out a "trend towards centralization". Although bitcoin can be sent directly from user to user, in practice intermediaries are widely used.: 220–222  Bitcoin miners join large mining pools to minimize the variance of their income.: 215, 219–222 : 3  Because transactions on the network are confirmed by miners, decentralization of the network requires that no single miner or mining pool obtains 51% of the hashing power, which would allow them to double-spend coins, prevent certain transactions from being verified and prevent other miners from earning income. As of 2013 just six mining pools controlled 75% of overall bitcoin hashing power. In 2014 mining pool Ghash.io obtained 51% hashing power which raised significant controversies about the safety of the network. 
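The nonce search and difficulty target described in the mining section above can be sketched as follows. This is a toy example with an artificially easy target; real Bitcoin mining hashes an 80-byte block header twice with SHA-256 against a far smaller target that is retargeted every 2,016 blocks, and the header layout here is invented.

```python
import hashlib

DIFFICULTY_BITS = 20                   # toy difficulty: ~1 million attempts on average
TARGET = 2 ** (256 - DIFFICULTY_BITS)  # smaller target = more leading zero bits required

def toy_mine(header: str, max_nonce: int = 2**32):
    """Try nonces until SHA-256(header + nonce), read as an integer, is below TARGET."""
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{header}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < TARGET:
            return nonce, digest.hex()
    raise RuntimeError("no valid nonce found in range")

header = "prev_hash|merkle_root|timestamp"  # placeholder header fields
nonce, block_hash = toy_mine(header)
print(nonce, block_hash)                    # the hash starts with several hex zeros

# Verification costs a single hash, no matter how long the search took.
check = hashlib.sha256(f"{header}{nonce}".encode()).digest()
assert int.from_bytes(check, "big") < TARGET
```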
The pool has voluntarily capped its hashing power at 39.99% and requested other pools to act responsibly for the benefit of the whole network. According Capkun & Capkun writing for IEEE Security & Privacy, other parts of the ecosystem are also "controlled by a small set of entities", notably the maintenance of the client software, online wallets, and simplified payment verification (SPV) clients. Privacy and fungibility Bitcoin is pseudonymous, meaning that funds are not tied to real-world entities but rather bitcoin addresses. Owners of bitcoin addresses are not explicitly identified, but all transactions on the blockchain are public. In addition, transactions can be linked to individuals and companies through "idioms of use" (e.g., transactions that spend coins from multiple inputs indicate that the inputs may have a common owner) and corroborating public transaction data with known information on owners of certain addresses. Additionally, bitcoin exchanges, where bitcoins are traded for traditional currencies, may be required by law to collect personal information. To heighten financial privacy, a new bitcoin address can be generated for each transaction.While the Bitcoin network treats each bitcoin the same, thus establishing the basic level of fungibility, applications and individuals who use the network are free to break that principle. For instance, wallets and similar software technically handle all bitcoins equally, none is different from another. Still, the history of each bitcoin is registered and publicly available in the blockchain ledger, and that can allow users of chain analysis to refuse to accept bitcoins coming from controversial transactions. For example, in 2012, Mt. Gox froze accounts of users who deposited bitcoins that were known to have just been stolen. Wallets A wallet stores the information necessary to transact bitcoins. While wallets are often described as a place to hold or store bitcoins, due to the nature of the system, bitcoins are inseparable from the blockchain transaction ledger. A wallet is more correctly defined as something that "stores the digital credentials for your bitcoin holdings" and allows one to access (and spend) them. : ch. 1, glossary  Bitcoin uses public-key cryptography, in which two cryptographic keys, one public and one private, are generated. At its most basic, a wallet is a collection of these keys. Software wallets The first wallet program, simply named Bitcoin, and sometimes referred to as the Satoshi client, was released in 2009 by Satoshi Nakamoto as open-source software. In version 0.5 the client moved from the wxWidgets user interface toolkit to Qt, and the whole bundle was referred to as Bitcoin-Qt. After the release of version 0.9, the software bundle was renamed Bitcoin Core to distinguish itself from the underlying network. Bitcoin Core is, perhaps, the best known implementation or client. Forks of Bitcoin Core exist, such as Bitcoin XT, Bitcoin Unlimited, and Parity Bitcoin.There are several modes in which wallets can operate. They have an inverse relationship with regard to trustlessness and computational requirements. Full clients verify transactions directly by downloading a full copy of the blockchain (over 150 GB as of January 2018). They do not require trust in any external parties. Full clients check the validity of mined blocks, preventing them from transacting on a chain that breaks or alters network rules.: ch. 
1  Because of its size and complexity, downloading and verifying the entire blockchain is not suitable for all computing devices. Lightweight clients consult full nodes to send and receive transactions without requiring a local copy of the entire blockchain (see simplified payment verification – SPV). This makes lightweight clients much faster to set up and allows them to be used on low-power, low-bandwidth devices such as smartphones. When using a lightweight wallet, however, the user must trust full nodes, as it can report faulty values back to the user. Lightweight clients follow the longest blockchain and do not ensure it is valid, requiring trust in full nodes.Third-party internet services called online wallets or webwallets offer similar functionality but may be easier to use. In this case, credentials to access funds are stored with the online wallet provider rather than on the user's hardware. As a result, the user must have complete trust in the online wallet provider. A malicious provider or a breach in server security may cause entrusted bitcoins to be stolen. An example of such a security breach occurred with Mt. Gox in 2011. Cold storage Wallet software is targeted by hackers because of the lucrative potential for stealing bitcoins. A technique called "cold storage" keeps private keys out of reach of hackers; this is accomplished by keeping private keys offline at all times: ch. 4  by generating them on a device that is not connected to the internet.: 39  The credentials necessary to spend bitcoins can be stored offline in a number of different ways, from specialized hardware wallets to simple paper printouts of the private key.: ch. 10 Paper wallets A paper wallet is created with a keypair generated on a computer with no internet connection; the private key is written or printed onto the paper and then erased from the computer.: ch. 4  The paper wallet can then be stored in a safe physical location for later retrieval.: 39 Physical wallets can also take the form of metal token coins with a private key accessible under a security hologram in a recess struck on the reverse side.: 38  The security hologram self-destructs when removed from the token, showing that the private key has been accessed. Originally, these tokens were struck in brass and other base metals, but later used precious metals as bitcoin grew in value and popularity.: 80  Coins with stored face value as high as ₿1,000 have been struck in gold.: 102–104  The British Museum's coin collection includes four specimens from the earliest series: 83  of funded bitcoin tokens; one is currently on display in the museum's money gallery. In 2013, a Utah manufacturer of these tokens was ordered by the Financial Crimes Enforcement Network (FinCEN) to register as a money services business before producing any more funded bitcoin tokens.: 80 Hardware wallets A hardware wallet is a computer peripheral that signs transactions as requested by the user. These devices store private keys and carry out signing and encryption internally, and do not share any sensitive information with the host computer except already signed (and thus unalterable) transactions. Because hardware wallets never expose their private keys, even computers that may be compromised by malware do not have a vector to access or steal them.: 42–45 The user sets a passcode when setting up a hardware wallet. As hardware wallets are tamper-resistant,: ch. 10  the passcode will be needed to extract any money. 
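A minimal sketch of the key handling described in this section is shown below. It only illustrates generating a random private key (as one would on an offline machine for cold storage) and the one-way relationship between the secret and the published identifier; it is not Bitcoin's actual address derivation, which uses the secp256k1 elliptic curve followed by SHA-256, RIPEMD-160 and Base58Check or Bech32 encoding.

```python
import secrets
import hashlib

# Generate a 256-bit private key from a cryptographically secure source.
# For cold storage this would be done on a machine that never touches the internet.
private_key = secrets.token_bytes(32)

# Stand-in for address derivation: hashing here only illustrates the one-way
# property (easy to compute forward, infeasible to reverse). Real Bitcoin first
# derives a public key on the secp256k1 curve before any hashing or encoding.
pseudo_address = hashlib.sha256(private_key).hexdigest()[:40]

print("private key (keep secret, e.g. on paper or a hardware wallet):", private_key.hex())
print("derived identifier (safe to publish):", pseudo_address)
```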
History
Background
Before bitcoin, several digital cash technologies were released, starting with the issuer-based ecash protocols of David Chaum and Stefan Brands. The idea that solutions to computational puzzles could have some value was first proposed by cryptographers Cynthia Dwork and Moni Naor in 1992. The concept was independently rediscovered by Adam Back, who developed hashcash, a proof-of-work scheme for spam control, in 1997. The first proposals for distributed digital scarcity-based cryptocurrencies came from cypherpunks Wei Dai (b-money) and Nick Szabo (bit gold). In 2004, Hal Finney developed reusable proof of work using hashcash as its proof-of-work algorithm.
Creation
The domain name bitcoin.org was registered on 18 August 2008. On 31 October 2008, a link to a white paper authored by Satoshi Nakamoto titled Bitcoin: A Peer-to-Peer Electronic Cash System was posted to a cryptography mailing list. Nakamoto implemented the bitcoin software as open-source code and released it in January 2009. Nakamoto's identity remains unknown. On 3 January 2009, the bitcoin network was created when Nakamoto mined the starting block of the chain, known as the genesis block. Embedded in the coinbase of this block was the text "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks", which is the date and headline of an issue of The Times newspaper. On 12 January 2009, Hal Finney received the first bitcoin transaction: ten bitcoins from Nakamoto. Wei Dai and Nick Szabo were also early supporters. In 2010, the first known commercial transaction using bitcoin occurred when programmer Laszlo Hanyecz bought two Papa John's pizzas for ₿10,000.
2010–2012
Blockchain analysts estimate that Nakamoto had mined about one million bitcoins before disappearing in 2010 when he handed the network alert key and control of the code repository over to Gavin Andresen. Andresen later became lead developer at the Bitcoin Foundation, an organization founded in September 2012 to promote bitcoin. After early "proof-of-concept" transactions, the first major users of bitcoin were black markets, such as the dark web Silk Road. During its 30 months of existence, beginning in February 2011, Silk Road exclusively accepted bitcoins as payment, transacting ₿9.9 million, worth about $214 million.: 222
2013–2016
In March 2013, the US Financial Crimes Enforcement Network (FinCEN) established regulatory guidelines for "decentralized virtual currencies" such as bitcoin, classifying American bitcoin miners who sell their generated bitcoins as money services businesses, subject to registration and other legal obligations. In May 2013, US authorities seized accounts associated with the exchange Mt. Gox after discovering it had not registered as a money transmitter. In June 2013, the US Drug Enforcement Administration seized ₿11.02 from a man attempting to use them to buy illegal substances. This marked the first time a government agency had seized bitcoins. The FBI seized about ₿30,000 in October 2013 from Silk Road, following the arrest of its founder Ross Ulbricht. These bitcoins were sold at blind auction by the United States Marshals Service to venture capitalist Tim Draper. In December 2013, the People's Bank of China prohibited Chinese financial institutions from using bitcoin. After the announcement, the value of bitcoin dropped, and Baidu no longer accepted bitcoins for certain services. Buying real-world goods with any virtual currency had been illegal in China since at least 2009.
2017–2019
Research produced by the University of Cambridge estimated that in 2017, there were 2.9 to 5.8 million unique users using a cryptocurrency wallet, most of them using bitcoin. In August 2017, the SegWit software upgrade was activated. SegWit was intended to support the Lightning Network as well as improve scalability. The bitcoin price rose almost 50% in the week following SegWit's approval. SegWit opponents, who supported larger blocks as a scalability solution, forked to create Bitcoin Cash, one of many forks of bitcoin. In February 2018, China imposed a complete ban on Bitcoin trading. The bitcoin price then crashed. The percentage of bitcoin trading in the Chinese renminbi fell from over 90% in September 2017 to less than 1% in June 2018. Bitcoin prices were negatively affected by several hacks or thefts from cryptocurrency exchanges, including thefts from Coincheck in January 2018, Bithumb in June, and Bancor in July. For the first six months of 2018, $761 million worth of cryptocurrencies was reported stolen from exchanges. Bitcoin's price was affected even though other cryptocurrencies were stolen at Coinrail and Bancor, as investors worried about the security of cryptocurrency exchanges. In September 2019, the Intercontinental Exchange began trading of bitcoin futures on its exchange Bakkt.
2020–present
On 13 March 2020, bitcoin fell below $4,000 during a broad market selloff, after trading above $10,000 in February 2020. On 11 March 2020, 281,000 bitcoins were sold that had been held by their owners for only thirty days. This compared to ₿4,131 that had lain dormant for a year or more, indicating that the vast majority of the bitcoin volatility on that day was from recent buyers. During the week of 11 March 2020, cryptocurrency exchange Kraken experienced an 83% increase in the number of account signups over the week of bitcoin's price collapse, a result of buyers looking to capitalize on the low price. These events were attributed to the onset of the COVID-19 pandemic. In August 2020, MicroStrategy invested $250 million in bitcoin as a treasury reserve asset. In October 2020, Square, Inc. placed approximately 1% of total assets ($50 million) in bitcoin. In November 2020, PayPal announced that US users could buy, hold, or sell bitcoin. Alexander Vinnik, founder of BTC-e, was convicted and sentenced to five years in prison for money laundering in France while refusing to testify during his trial. In December 2020, Massachusetts Mutual Life Insurance Company announced a bitcoin purchase of US$100 million, or roughly 0.04% of its general investment account. In September 2021, Bitcoin became legal tender in El Salvador, alongside the US dollar. In October 2021, the SEC approved the ProShares Bitcoin Strategy ETF, a cash-settled futures exchange-traded fund (ETF). The first bitcoin ETF in the United States gained 5% on its first trading day on 19 October 2021. In November 2021, the Taproot network software upgrade was activated, adding support for Schnorr signatures and improved functionality of smart contracts and the Lightning Network. Before, Bitcoin only used a custom elliptic curve with the ECDSA algorithm to produce signatures.: 101  In May 2022, the bitcoin price fell following the collapse of Terra, a stablecoin experiment. By 13 June 2022, the Celsius Network (a decentralized finance loan company) had halted withdrawals, and the bitcoin price fell below $20,000. Ordinals, non-fungible tokens (NFTs) on Bitcoin, went live in 2023.
Economics and usage Use as a currency Bitcoin is a digital asset designed to work in peer-to-peer transactions as a currency. Bitcoins have three qualities useful in a currency, according to The Economist in January 2015: they are "hard to earn, limited in supply and easy to verify". Per some researchers, as of 2015, bitcoin functions more as a payment system than as a currency.Economists define money as serving the following three purposes: a store of value, a medium of exchange, and a unit of account. According to The Economist in 2014, bitcoin functions best as a medium of exchange. However, this is debated, and a 2018 assessment by The Economist stated that cryptocurrencies met none of these three criteria. In 2014 Yale economist Robert J. Shiller wrote that bitcoin has potential as a unit of account for measuring the relative value of goods, as with Chile's Unidad de Fomento, but that "Bitcoin in its present form ... doesn't really solve any sensible economic problem". François R. Velde, Senior Economist at the Chicago Fed, described bitcoin as "an elegant solution to the problem of creating a digital currency". David Andolfatto, Vice President at the Federal Reserve Bank of St. Louis, stated that bitcoin is a threat to the establishment, which he argues is a good thing for the Federal Reserve System and other central banks, because it prompts these institutions to operate sound policies.: 33 As of 2018, the overwhelming majority of bitcoin transactions took place on cryptocurrency exchanges, rather than being used in transactions with merchants. Prices are not usually quoted in units of bitcoin and many trades involve one, or sometimes two, conversions into conventional currencies. Commonly cited reasons for not using Bitcoin include high costs, the inability to process chargebacks, high price volatility, long transaction times, transaction fees (especially for small purchases). Still, Bloomberg reported that bitcoin was being used for large-item purchases on the site Overstock.com, and for cross-border payments to freelancers. As of 2015, there was little sign of bitcoin use in international remittances despite high fees charged by banks and Western Union who compete in this market. The South China Morning Post, however, mentions the use of bitcoin by Hong Kong workers to transfer money home.In September 2021, the Bitcoin Law made bitcoin legal tender in El Salvador, alongside the US dollar. The adoption has been criticized both internationally and within El Salvador. In particular, in 2022, the International Monetary Fund (IMF) urged El Salvador to reverse its decision. As of 2022, the use of Bitcoin in El Salvador remains low: 80% of businesses refused to accept it despite being legally required to. In April 2022, the Central African Republic (CAR) adopted Bitcoin as legal tender alongside the CFA franc, but repealed the reform one year later. Use as an investment Bitcoins can be bought on digital currency exchanges. Other methods of investment are bitcoin funds. The first regulated bitcoin fund was established in Jersey in July 2014 and approved by the Jersey Financial Services Commission. On 10 December 2017, the Chicago Board Options Exchange started trading bitcoin futures, followed by the Chicago Mercantile Exchange, which started trading bitcoin futures on 17 December 2017.Individuals such as Winklevoss twins have purchased large amounts of bitcoin. In 2013, The Washington Post reported a claim that they owned 1% of all the bitcoins in existence at the time. 
Similarly, Elon Musk's companies SpaceX and Tesla, and Michael Saylor's company MicroStrategy have invested massively in bitcoin. Forbes named bitcoin the best investment of 2013. In 2014, Bloomberg named bitcoin one of its worst investments of the year. In 2015, bitcoin topped Bloomberg's currency tables.The price of bitcoins has gone through cycles of appreciation and depreciation referred to by some as bubbles and busts. According to Mark T. Williams, as of 2014, bitcoin had volatility seven times greater than gold, eight times greater than the S&P 500, and 18 times greater than the US dollar. hodl is a term created in December 2013 for holding Bitcoin rather than selling it during periods of volatility. It has been described as a mantra of the Bitcoin community. Use by governments In 2020, Iran announced pending regulations that would require bitcoin miners in Iran to sell bitcoin to the Central Bank of Iran, and the central bank would use it for imports. Iran, as of October 2020, had issued over 1,000 bitcoin mining licenses. The Iranian government initially took a stance against cryptocurrency, but later changed it after seeing that digital currency could be used to circumvent sanctions.Some constituent states accept tax payments in bitcoin, including Colorado (US) and Zug (Switzerland). Ideology According to the European Central Bank, the decentralization of money offered by bitcoin has its theoretical roots in the Austrian school of economics, especially with Friedrich von Hayek's book The Denationalization of Money, in which he advocates a complete free market in the production, distribution and management of money to end the monopoly of central banks.: 22  Sociologist Nigel Dodd, citing the crypto-anarchist Declaration of Bitcoin's Independence, argues that the essence of the bitcoin ideology is to remove money from social, as well as governmental, control. The Economist describes bitcoin as "a techno-anarchist project to create an online version of cash, a way for people to transact without the possibility of interference from malicious governments or banks".Libertarians and anarchists were attracted to these philosophical ideas. David Golumbia says that the ideas influencing bitcoin advocates emerge from right-wing extremist movements such as the Liberty Lobby and the John Birch Society and their anti-central bank rhetoric, or, more recently, Ron Paul and Tea Party-style libertarianism. Steve Bannon, who owns bitcoins, considers it to be "disruptive populism. It takes control back from central authorities. It's revolutionary." Economist Paul Krugman argues that cryptocurrencies like bitcoin are "something of a cult" based in "paranoid fantasies" of government power. Economic, legal and environmental concerns Legal status The legal status of bitcoin varies substantially from one jurisdiction to another, and is still undefined or changing in many of them. Because of its decentralized nature and its global presence, regulating bitcoin is difficult. However, the use of bitcoin can be criminalized, and shutting down exchanges and the peer-to-peer economy in a given country would constitute a de facto ban. As of 2021, nine countries applied an "absolute ban" on trading or using cryptocurrencies (Algeria, Bolivia, Egypt, Iraq, Morocco, Nepal, Pakistan, Vietnam, and the United Arab Emirates) while another 42 countries had an "implicit ban". Bitcoin is only legal tender in El Salvador. 
Alleged bubble and Ponzi scheme
Bitcoin, along with other cryptocurrencies, has been described as an economic bubble by several economists, including at least eight Nobel Prize in Economics laureates, such as Robert Shiller, Joseph Stiglitz, Richard Thaler, and Paul Krugman. Shiller describes Bitcoin's price growth as an "epidemic", driven by contagious ideas and narratives, where its media-fueled price volatility perpetuates its value and popularity. Journalists, economists, investors, and the central bank of Estonia have voiced concerns that bitcoin is a Ponzi scheme. However, according to law professor Eric Posner, "a real Ponzi scheme takes fraud; bitcoin, by contrast, seems more like a collective delusion." A 2014 World Bank report concluded that bitcoin was not a deliberate Ponzi scheme.: 7  Also in 2014, the Swiss Federal Council concluded that bitcoin was not a pyramid scheme as "the typical promises of profits are lacking".: 21  Bitcoin wealth is highly concentrated, with 0.01% of holders controlling 27% of the currency in circulation, as of 2021.
Price manipulation investigations
In May 2018, the United States Department of Justice investigated bitcoin traders for possible price manipulation, focusing on practices like spoofing and wash trades. The investigation, which involved key exchanges like Bitstamp, Coinbase, and Kraken, led to subpoenas from the Commodity Futures Trading Commission after these exchanges failed to fully comply with information requests. In 2018, research published in the Journal of Monetary Economics concluded that price manipulation occurred during the Mt. Gox bitcoin theft and that the market remained vulnerable to manipulation. Research published in The Journal of Finance also suggested that trading associated with increases in the amount of the Tether cryptocurrency and associated trading at the Bitfinex exchange accounted for about half of the price increase in bitcoin in late 2017. Bitfinex and Tether denied the claims of price manipulation.
Use in illegal transactions
The use of bitcoin by criminals has attracted the attention of financial regulators, legislative bodies, and law enforcement. Several news outlets have asserted that the popularity of bitcoins hinges on the ability to use them to purchase illegal goods. Nobel Prize-winning economist Joseph Stiglitz says that bitcoin's anonymity encourages money laundering and other crimes.
Environmental effects
See also
Alternative currency
Notes
References
Further reading
Nakamoto, Satoshi (31 October 2008). "Bitcoin: A Peer-to-Peer Electronic Cash System" (PDF). bitcoin.org. Archived (PDF) from the original on 20 March 2014. Retrieved 28 April 2014.
External links
Official website
alternate wetting and drying
Alternate wetting and drying (AWD) is a water management technique practiced to cultivate irrigated lowland rice with much less water than the usual system of maintaining continuous standing water in the crop field. It is a method of controlled and intermittent irrigation. A periodic drying and re-flooding irrigation scheduling approach is followed in which the fields are allowed to dry for a few days before re-irrigation, without stressing the plants. This method reduces water demand for irrigation and greenhouse gas emissions without reducing crop yields.
History
Drying and flooding practices have been used for several decades as a water-saving measure, but in many cases, farmers were practicing an uncontrolled or unplanned drying and re-flooding method. Farmers were practicing 'forced' AWD as early as 2006 in the AMRIS region. Some water management practices, especially keeping non-flooded conditions in the rice field for short intervals, are common among about 40% of rice farmers in China and more than 80% of rice farmers in north-western India, as well as in Japan. However, nowadays farmers follow a 'safe' AWD in which they maintain the 15-cm subsurface water level threshold for re-flooding. This method has become a recommended practice in water-scarce irrigated rice areas in South and Southeast Asia. In the Philippines, the adoption of safe AWD started in Tarlac Province in 2002 with farmers using deep-well pump systems. The International Rice Research Institute (IRRI) has been promoting alternate wetting and drying as a smart water-saving technology for rice cultivation through national agricultural research and extension in Bangladesh, the Philippines, and Vietnam.
Implementation and operation
AWD is suitable for lowland rice-growing areas where soils can be drained at 5-day intervals. The field will be unable to dry during the rice season if rainfall exceeds evapotranspiration and seepage. Therefore, AWD is suitable for dry-season rice cultivation.
Implementation method
A water tube or pipe made of PVC is usually used to practice the AWD method. The main purpose of the tube is to monitor the water depth; it allows the water level below the soil surface to be measured. The usual practice is to use a pipe 7–10 cm in diameter and 30 cm long, with perforations in the bottom 20 cm. The pipe is installed so that the perforated bottom 20 cm remains below the soil surface and the non-perforated 10 cm remains above the surface. The perforations allow water to enter the tube from the soil, and a scale is used to measure the water depth below the soil surface inside the tube. However, there are variations in preparing the tube or pipe for the implementation of AWD. Some farmers use a bamboo pipe instead of a PVC pipe. Some farmers use a 30 cm tube with 15 cm perforated at the bottom.
Operation technique
After irrigation, the water depth in the crop field gradually decreases because of evapotranspiration, seepage, and percolation. The installed tubes make it possible to monitor the water depth down to 15–20 cm below the soil surface. When the water level drops 15 cm below the soil surface, irrigation should be applied to re-flood the field to a depth of 5 cm. During the flowering stage of the rice, the field should be kept flooded. After flowering, during the mid-season and late season (grain filling and ripening stages), the water level is allowed to drop to 15 cm below the soil surface before re-irrigation.
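The thresholds described above can be written down as a simple decision rule. The sketch below is illustrative only; the stage names and numeric thresholds follow the 'safe' AWD guidance in this section (re-irrigate at 15 cm below the surface, re-flood to about 5 cm, keep the field flooded during flowering), and any real schedule would also account for rainfall and local soil conditions.

```python
def awd_irrigation_decision(water_level_cm: float, growth_stage: str) -> str:
    """Toy decision rule for 'safe' AWD.

    water_level_cm: water level relative to the soil surface as read from the field
                    tube (positive = ponded water above the surface, negative = below).
    growth_stage:   e.g. "vegetative", "flowering", "grain_filling", "ripening".
    """
    SAFE_THRESHOLD_CM = -15   # re-irrigate once the level is 15 cm below the surface
    REFLOOD_DEPTH_CM = 5      # re-flood to roughly 5 cm of standing water

    if growth_stage == "flowering":
        return "keep the field flooded"               # no drying during flowering
    if water_level_cm <= SAFE_THRESHOLD_CM:
        return f"irrigate to re-flood to {REFLOOD_DEPTH_CM} cm"
    return "no irrigation needed; let the field continue to dry"

# Example readings from the monitoring tube
print(awd_irrigation_decision(-16, "grain_filling"))  # irrigate to re-flood to 5 cm
print(awd_irrigation_decision(-8, "vegetative"))      # no irrigation needed
print(awd_irrigation_decision(-12, "flowering"))      # keep the field flooded
```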
To suppress the growth of weeds in the rice field, the AWD method should be started 1–2 weeks after transplantation. If there are many weeds in the field, AWD should be started three weeks after transplantation. Usually, the fertilizer recommendations are the same as for the continuous flooding method. Application of nitrogen fertilizer is preferable on dry soil just before re-irrigation. To ensure a similar dry or wet condition throughout the crop field, which is essential to maintain good yield, it is important to level the rice field properly.
Advantages and disadvantages
Advantages
The AWD method can save about 38% of water without adversely affecting rice yields. This method increases water productivity by 16.9% compared with continuous flood irrigation. High-yielding rice varieties developed for the continuously flooded rice system still produce high yields under safe AWD. This method can even increase grain yield because of enhancements in grain-filling rate, root growth and remobilization of carbon reserves from vegetative tissues to grains. AWD can reduce the cost of irrigation by reducing pumping costs and fuel consumption. This method can also reduce labor costs by improving field conditions at harvest, allowing mechanical harvesting. AWD leads to firmer soil conditions at harvest, which makes it easier to operate machinery in the field. Therefore, AWD increases net returns for farmers. Several studies also indicate that AWD reduces methane (CH4) emissions. AWD practice has reduced seasonal CH4 emissions by up to 85%. CH4 is produced by the anaerobic decomposition of organic material in the wet or flooded paddy field. Allowing the water level to drop below the soil surface removes the anaerobic condition until the field is re-flooded, repeatedly pausing the production of CH4 from the rice field and hence reducing the total amount of CH4 released during the rice-growing season. In the 2006 IPCC methodology, this practice is assumed to reduce CH4 emissions by an average of 48% compared to continuous flooding. Alternate wetting and moderate soil drying reduces cadmium accumulation in rice grains. AWD can dramatically reduce the concentration of arsenic in harvested rice grains. A variant of AWD, such as the e-AWD practice, can reduce grain arsenic, lead and cadmium levels by up to 66%, 73% and 33%, respectively. This method can also reduce insect pests and diseases. Periodic soil drying may reduce the incidence of fungal diseases.
Disadvantages
The major disadvantage of the AWD method is increased N2O emissions. Rice yields can also fall when AWD is practiced by untrained farmers. A high weed growth rate in the crop field is a major problem from the farmers' point of view.
See also
Irrigation
Irrigation management
Conservation agriculture
Paddy field
Surface irrigation
Environmental impact of irrigation
Water conservation
== References ==
plug-in electric vehicle
A plug-in electric vehicle (PEV) is any road vehicle that can utilize an external source of electricity (such as a wall socket that connects to the power grid) to store electrical energy within its onboard rechargeable battery packs, to power an electric motor and help propel the wheels. PEV is a subset of electric vehicles, and includes all-electric/battery electric vehicles (BEVs) and plug-in hybrid electric vehicles (PHEVs). Sales of the first series production plug-in electric vehicles began in December 2008 with the introduction of the plug-in hybrid BYD F3DM, and then with the all-electric Mitsubishi i-MiEV in July 2009, but global retail sales only gained traction after the introduction of the mass production all-electric Nissan Leaf and the plug-in hybrid Chevrolet Volt in December 2010. Plug-in electric cars have several benefits compared to conventional internal combustion engine vehicles. All-electric vehicles have lower operating and maintenance costs, and produce little or no air pollution when in all-electric mode, thus (depending on the electricity source) reducing societal dependence on fossil fuels and significantly decreasing greenhouse gas emissions, but recharging takes longer than refueling and is heavily reliant on sufficient charging infrastructure to remain operationally practical. Plug-in hybrid vehicles are an in-between option that provides most of electric cars' benefits when operating in electric mode, though typically with shorter all-electric ranges; they also have the auxiliary option of driving as a conventional hybrid vehicle when the battery is low, using the internal combustion engine (usually a gasoline engine) to alleviate the range anxiety that accompanies current electric cars. Cumulative global sales of highway-legal plug-in electric passenger cars and light utility vehicles achieved the 1 million unit mark in September 2015, 5 million in December 2018, and the 10 million unit milestone in 2020. Despite the rapid growth experienced, however, the stock of plug-in electric cars represented just 1% of all passenger vehicles on the world's roads by the end of 2020, of which pure electrics constituted two thirds. As of June 2021, the Tesla Model 3 ranked as the world's top-selling highway-capable plug-in electric car in history, and was also the first electric car to achieve global sales of more than 1,000,000 units. The Mitsubishi Outlander P-HEV is the world's all-time best-selling plug-in hybrid, with global sales of around 300,000 units through January 2022. As of December 2021, China had the world's largest stock of highway-legal plug-in electric passenger cars with 7.84 million units, representing 46% of the world's stock of plug-in cars. Europe ranked next with about 5.6 million light-duty plug-in cars and vans at the end of 2021, accounting for around 32% of the global stock. The U.S. cumulative sales totaled about 2.32 million plug-in cars through December 2021. As of July 2021, Germany is the leading European country, with cumulative sales of 1 million plug-in vehicles on the road, and it has also led the continent's plug-in sales since 2019. Norway has the highest market penetration per capita in the world, and in 2021 it also achieved the world's largest annual plug-in market share ever registered, 86.2% of new car sales.
Terminology Plug-in electric vehicle A plug-in electric vehicle (PEV) is any motor vehicle with rechargeable battery packs that can be charged from the electric grid, and the electricity stored on board drives or contributes to drive the wheels for propulsion. Plug-in electric vehicles are also sometimes referred to as grid-enabled vehicles (GEV), and the European Automobile Manufacturers Association (ACEA) calls them electrically chargeable vehicles (ECV).PEV is a subcategory of electric vehicles that includes battery electric vehicles (BEVs), plug-in hybrid vehicles (PHEVs), and electric vehicle conversions of hybrid electric vehicles and conventional internal combustion engine vehicles. Even though conventional hybrid electric vehicles (HEVs) have a battery that is continually recharged with power from the internal combustion engine and regenerative braking, they can not be recharged from an off-vehicle electric energy source, and therefore, they do not belong to the category of plug-in electric vehicles."Plug-in electric drive vehicle" is the legal term used in U.S. federal legislation to designate the category of motor vehicles eligible for federal tax credits depending on battery size and their all-electric range. In some European countries, particularly in France, "electrically chargeable vehicle" is the formal term used to designate the vehicles eligible for these incentives. While the term "plug-in electric vehicle" most often refers to automobiles or "plug-in cars", there are several other types of plug-in electric vehicle, including battery electric multiple units, electric motorcycles and scooters, neighborhood electric vehicles or microcars, city cars, vans, buses, electric trucks or lorries, and military vehicles. New energy vehicles In China, the term new energy vehicles (NEVs) refers to automobile vehicles that are fully powered or predominantly powered by alternative energy other than fossil fuels, typically referring to electric motor-propelled vehicles such as battery electric vehicles (BEVs), plug-in hybrids (PHEVs) and fuel cell electric vehicles (FCEVs, mainly hydrogen vehicles). The Chinese Government began implementation of its NEV program in 2009 to foster the development and introduction of new energy vehicles, aiming to reduce the country's reliance on petroleum imports and better achieve energy security by promoting more electrified transportation powered by domestically generated, sustainable energy supply (see Energy policy of China). Battery electric vehicles A battery electric vehicle (BEV) uses chemical energy stored in rechargeable battery packs as its only source for propulsion. BEVs use electric motors and motor controllers instead of internal combustion engines (ICEs) for propulsion.A plug-in hybrid operates as an all-electric vehicle or BEV when operating in charge-depleting mode, but it switches to charge-sustaining mode after the battery has reached its minimum state of charge (SOC) threshold, exhausting the vehicle's all-electric range (AER).BEVs are all-electric vehicles that use only electricity to power the motor, with no internal combustion engine. The energy stored in the rechargeable battery pack is used to power the electric motor, which propels the vehicle. BEVs have no tailpipe emissions and can be charged by plugging into a charging station or using a home charging system. They are known for their quiet operation, instant torque, and smooth ride, and are also more environmentally friendly than traditional gasoline-powered vehicles. 
However, the range of a BEV is limited by the size of its battery, so it may not be suitable for long-distance travel. Plug-in hybrid electric vehicles A plug-in hybrid electric vehicle (PHEV or PHV), also known as a plug-in hybrid, is a hybrid electric vehicle with rechargeable batteries that can be restored to full charge by connecting a plug to an external electric power source. A plug-in hybrid shares the characteristics of both a conventional hybrid electric vehicle and an all-electric vehicle: it uses a gasoline engine and an electric motor for propulsion, but a PHEV has a larger battery pack that can be recharged, allowing operation in all-electric mode until the battery is depleted. PHEVs are a hybrid between a traditional gasoline-powered vehicle and an all-electric vehicle. They have both an internal combustion engine and an electric motor powered by a rechargeable battery pack. PHEVs operate as all-electric vehicles when the battery has a sufficient charge, and switch to gasoline power when the battery is depleted. This provides a longer driving range compared to a pure electric vehicle, as the gasoline engine acts as a range extender. PHEVs can be charged by plugging into a charging station or using a home charging system. They are a good option for those who want the benefits of electric driving, but also need the longer driving range and refueling convenience of a gasoline-powered vehicle. Aftermarket conversions An aftermarket electric vehicle conversion is the modification of a conventional internal combustion engine vehicle (ICEV) or hybrid electric vehicle (HEV) to electric propulsion, creating an all-electric or plug-in hybrid electric vehicle. There are several companies in the U.S. offering conversions. The most common conversions have been from hybrid electric cars to plug-in hybrids, but due to the different technology used in hybrids by each carmaker, the easiest conversions are for the 2004–2009 Toyota Prius and the Ford Escape/Mercury Mariner Hybrid. Advantages compared to ICE vehicles PEVs have several advantages. These include improved air quality, reduced greenhouse gas emissions, noise reduction, and national security benefits. According to the Center for American Progress, PEVs are an important part of the group of technologies that will help the U.S. meet its goal under the Paris Agreement, which is a 26–28 percent reduction in greenhouse gas emissions by the year 2025. Improved air quality Electric cars, as well as plug-in hybrids operating in all-electric mode, emit no harmful tailpipe pollutants from the onboard source of power, such as particulates (soot), volatile organic compounds, hydrocarbons, carbon monoxide, ozone, lead, and various oxides of nitrogen. However, like ICE cars, electric cars emit particulates from brakes and tires. Depending on the source of the electricity used to recharge the batteries, air pollutant emissions are shifted to the location of the generation plants, where they can be more easily captured from flue gases. Cities with chronic air pollution problems, such as Los Angeles, Mexico City, Santiago, Chile, São Paulo, Beijing, Bangkok, and Kathmandu may also gain local clean air benefits by shifting the harmful emissions to electric generation plants located outside the cities. 
Lower greenhouse gas emissions Plug-in electric vehicles operating in all-electric mode do not emit greenhouse gases from the onboard source of power, but from the point of view of a well-to-wheel assessment, the extent of the benefit also depends on the fuel and technology used for electricity generation. This fact has been referred to as the long tailpipe of plug-in electric vehicles. From the perspective of a full life cycle analysis, the electricity used to recharge the batteries must be generated from renewable or clean sources such as wind, solar, hydroelectric, or nuclear power for PEVs to have near-zero well-to-wheel emissions. In the case of plug-in hybrid electric vehicles operating in hybrid mode with the assistance of the internal combustion engine, tailpipe and greenhouse gas emissions are lower in comparison to conventional cars because of their higher fuel economy. The magnitude of the potential advantage depends on the mix of generation sources and therefore varies by country and by region. For example, France can obtain significant emission benefits from electric cars and plug-in hybrids because most of its electricity is generated by nuclear power plants; similarly, most regions of Canada are primarily powered with hydroelectricity, nuclear, or natural gas, which have no or very low emissions at the point of generation; and the state of California, where most energy comes from natural gas, hydroelectric, and nuclear plants, can also secure substantial emission benefits. The United Kingdom also has significant potential to benefit from PEVs as low-carbon sources such as wind turbines dominate the generation mix. Nevertheless, the location of the plants is not relevant when considering greenhouse gas emissions because their effect is global. Lifecycle GHG emissions are complex to calculate, but in general, although the EV battery causes higher emissions during vehicle manufacture than an ICE car, this excess carbon debt is paid back after several months of driving. Lower operating and maintenance costs Internal combustion engines are relatively inefficient at converting on-board fuel energy to propulsion, as most of the energy is wasted as heat, with further losses while the engine is idling. Electric motors, on the other hand, are more efficient at converting stored energy into driving a vehicle. Electric drive vehicles do not consume energy while at rest or coasting, and modern plug-in cars can capture and reuse as much as one fifth of the energy normally lost during braking through regenerative braking. Typically, conventional gasoline engines effectively use only 15% of the fuel energy content to move the vehicle or to power accessories, and diesel engines can reach on-board efficiencies of 20%, while electric drive vehicles typically have on-board efficiencies of around 80%. All-electric and plug-in hybrid vehicles also have lower maintenance costs compared to internal combustion vehicles, since electronic systems break down much less often than the mechanical systems in conventional vehicles, and the fewer mechanical systems on board last longer because the electric motor has far fewer moving parts. PEVs do not require oil changes and other routine maintenance checks. 
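As a rough illustration of what these efficiency figures imply for running costs, the following sketch compares per-mile energy use and fuel cost for a gasoline car and an EV. The fuel price, electricity rate, and consumption figures are illustrative assumptions, not values from the sources discussed above.

```python
# Illustrative comparison of per-mile energy use and fuel cost for a gasoline
# car and an EV. All input figures are assumptions chosen for the example.

GASOLINE_KWH_PER_GALLON = 33.7  # approximate energy content of a US gallon of gasoline

def gasoline_energy_per_mile(mpg: float) -> float:
    """Fuel energy (kWh) burned per mile by a conventional gasoline car."""
    return GASOLINE_KWH_PER_GALLON / mpg

def gasoline_cost_per_mile(mpg: float, dollars_per_gallon: float) -> float:
    """Fuel cost per mile for a conventional gasoline car."""
    return dollars_per_gallon / mpg

def ev_cost_per_mile(kwh_per_100mi: float, dollars_per_kwh: float) -> float:
    """Electricity cost per mile for a plug-in car charged from the grid."""
    return kwh_per_100mi / 100.0 * dollars_per_kwh

if __name__ == "__main__":
    # Assumed: a 30 mpg gasoline car with fuel at $3.50/gal, versus an EV that
    # draws 30 kWh per 100 miles from the wall at a residential rate of $0.13/kWh.
    print(f"gasoline: {gasoline_energy_per_mile(30):.2f} kWh and "
          f"${gasoline_cost_per_mile(30, 3.50):.3f} per mile")
    print(f"electric: 0.30 kWh and ${ev_cost_per_mile(30, 0.13):.3f} per mile")
```

Under these assumed figures the gasoline car burns roughly four times as much energy per mile as the EV draws from the wall, which is the practical consequence of the efficiency gap described above.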
Less dependence on imported oil For many developing countries, and particularly for the poorest African countries, oil imports have an adverse impact on the government budget and deteriorate their terms of trade, thus jeopardizing their balance of payments, all leading to lower economic growth. Through the gradual replacement of internal combustion engine vehicles with electric cars and plug-in hybrids, electric drive vehicles can contribute significantly to lessening the transport sector's dependence on imported oil, as well as contributing to the development of a more resilient energy supply. Vehicle-to-grid Plug-in electric vehicles offer users the opportunity to sell electricity stored in their batteries back to the power grid, thereby helping utilities to operate more efficiently in the management of their demand peaks. A vehicle-to-grid (V2G) system would take advantage of the fact that most vehicles are parked an average of 95 percent of the time. During such idle times the electricity stored in the batteries could be transferred from the PEV to the power lines and back to the grid. In the U.S., this transfer back to the grid has an estimated value to the utilities of up to $4,000 per year per car. In a V2G system it would also be expected that battery electric vehicles (BEVs) and plug-in hybrids (PHEVs) would have the capability to communicate automatically with the power grid to sell demand response services by either delivering electricity into the grid or by throttling their charging rate. Disadvantages Cost of batteries As of 2020, plug-in electric vehicles are significantly more expensive than conventional internal combustion engine vehicles and hybrid electric vehicles due to the additional cost of their lithium-ion battery pack. Cost reductions through advances in battery technology and higher production volumes will allow plug-in electric vehicles to be more competitive with conventional internal combustion engine vehicles. Bloomberg New Energy Finance (BNEF) concludes that battery costs are on a trajectory to make electric vehicles without government subsidies as affordable as internal combustion engine cars in most countries by 2022. BNEF projects that by 2040, long-range electric cars will cost less than US$22,000 expressed in 2016 dollars. BNEF expects electric car battery costs to be well below US$120 per kWh by 2030, and to fall further thereafter as new chemistries become available. Availability of recharging infrastructure Despite the widespread assumption that plug-in recharging will take place overnight at home, many residents of cities, apartments, dormitories, and townhouses do not have garages or driveways with available power outlets, and they might be less likely to buy plug-in electric vehicles unless recharging infrastructure is developed. Electrical outlets or charging stations near their places of residence, in commercial or public parking lots, streets, and workplaces are required for these potential users to gain the full advantage of PHEVs, and in the case of EVs, to avoid the fear of the batteries running out of energy before reaching their destination, commonly called range anxiety. Even house dwellers might need to charge at the office or take advantage of opportunity charging at shopping centers. However, this infrastructure is not yet in place, and it will require investment by both the private and public sectors. 
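To make the infrastructure question concrete, the sketch below estimates how long a recharge takes at common charger power levels. The 60 kWh pack size, the power levels, and the charging efficiency are assumptions for illustration; real charging also slows as the battery approaches full, which this simple model ignores.

```python
# Rough recharge-time estimate for a plug-in car battery at a constant power.
# Pack size, charger power levels, and efficiency below are illustrative assumptions.

def hours_to_charge(pack_kwh: float, charger_kw: float, efficiency: float = 0.9) -> float:
    """Approximate hours to refill an empty pack, ignoring charge tapering."""
    return pack_kwh / (charger_kw * efficiency)

if __name__ == "__main__":
    pack_kwh = 60.0  # assumed battery capacity
    chargers = [("household outlet, ~1.4 kW", 1.4),
                ("Level 2 home/public charger, ~7.2 kW", 7.2),
                ("DC fast charger, ~50 kW", 50.0)]
    for label, kw in chargers:
        print(f"{label}: about {hours_to_charge(pack_kwh, kw):.1f} hours")
```

The spread between an overnight household charge and a sub-two-hour fast charge is one reason public charging infrastructure matters so much for drivers without home outlets.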
Battery swapping A different approach to resolving the problems of range anxiety and lack of recharging infrastructure for electric vehicles was developed by Better Place, but it was not successful in the United States in the 2010s. As of 2020, battery swapping is available in China and is planned for other countries. Potential overload of local electrical grids The existing low-voltage network, and local transformers in particular, may not have enough capacity to handle the additional power load that might be required in certain areas with high plug-in electric car concentrations. As recharging a single electric-drive car could consume three times as much electricity as a typical home, overloading problems may arise when several vehicles in the same neighborhood recharge at the same time, or during normal summer peak loads. To avoid such problems, utility executives recommend that owners charge their vehicles overnight, when the grid load is lower, or use smarter electric meters that help control demand. When market penetration of plug-in electric vehicles begins to reach significant levels, utilities will have to invest in improvements to local electrical grids in order to handle the additional recharging loads and avoid blackouts due to grid overload. Also, some experts have suggested that by implementing variable time-of-day rates, utilities can provide an incentive for plug-in owners to recharge mostly overnight when rates are lower. In the five years from 2014 to 2019, EVs increased in number and range, and doubled power draw and energy per session. Charging increased after midnight, and decreased in the peak hours of early evening. Risks associated with noise reduction Electric cars and plug-in hybrids operating in all-electric mode at low speeds produce less roadway noise than vehicles propelled by an internal combustion engine, thereby reducing harmful noise health effects. However, blind people or the visually impaired consider the noise of combustion engines a helpful aid while crossing streets, hence plug-in electric cars and conventional hybrids could pose an unexpected hazard when operating at low speeds. Several tests conducted in the U.S. have shown that this is a valid concern, as vehicles operating in electric mode can be particularly hard to hear below 20 mph (32 km/h) for all types of road users, not only the visually impaired. At higher speeds, the sound created by tire friction and the air displaced by the vehicle starts to make sufficient audible noise. Therefore, in the 2010s laws were passed in many countries mandating warning sounds at low speeds. Risks of battery fire Lithium-ion batteries may suffer thermal runaway and cell rupture if overheated or overcharged, and in extreme cases this can lead to combustion. Reignition may occur days or months after the original fire has been extinguished. To reduce these risks, lithium-ion battery packs contain fail-safe circuitry that shuts down the battery when its voltage is outside the safe range. Several plug-in electric vehicle fire incidents have taken place since the introduction of mass-production plug-in electric vehicles in 2008. Most of them have been thermal runaway incidents related to the lithium-ion batteries. General Motors, Tesla, and Nissan have published guides for firefighters and first responders on how to properly handle a crashed plug-in electric-drive vehicle and safely disable its battery and other high-voltage systems. 
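A minimal sketch of the kind of voltage-window check that such fail-safe circuitry performs is shown below. The thresholds and data structure are illustrative assumptions; a real battery management system also monitors temperature, current, and cell balance before opening the pack contactors.

```python
# Minimal illustration of a voltage-window safety check on a lithium-ion pack.
# Thresholds are assumed values for the example, not specifications of any real pack.

from dataclasses import dataclass

SAFE_MIN_VOLTS = 2.5   # assumed lower cutoff per cell
SAFE_MAX_VOLTS = 4.2   # assumed upper cutoff per cell

@dataclass
class Cell:
    voltage: float  # measured cell voltage in volts

def pack_within_safe_window(cells: list[Cell]) -> bool:
    """Return True only if every cell sits inside the assumed safe voltage window."""
    return all(SAFE_MIN_VOLTS <= c.voltage <= SAFE_MAX_VOLTS for c in cells)

if __name__ == "__main__":
    pack = [Cell(3.70), Cell(3.68), Cell(4.25)]  # last cell is overcharged
    if not pack_within_safe_window(pack):
        print("Fault detected: shutting down the pack (contactors opened)")
```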
Car dealers' reluctance to sell With the exception of Tesla Motors, almost all new cars in the United States are sold through dealerships, so they play a crucial role in the sales of electric vehicles, and negative attitudes can hinder early adoption of plug-in electric vehicles. Dealers decide which cars they want to stock, and a salesperson can have a big impact on how someone feels about a prospective purchase. Salespeople have ample knowledge of internal combustion cars, but often do not have time to learn about a technology that represents a fraction of overall sales. As with any new technology, and particularly in the case of advanced technology vehicles, retailers are central to ensuring that buyers, especially those switching from conventional cars, have the information and support they need to gain the full benefits of adopting the new technology. There are several reasons for the reluctance of some dealers to sell plug-in electric vehicles. PEVs do not offer car dealers the same profits as gasoline-powered cars. Plug-in electric vehicles take more time to sell because of the explaining required, which hurts overall sales and salespeople's commissions. Electric vehicles also may require less maintenance, resulting in a loss of service revenue and thus undermining the biggest source of dealer profits, their service departments. According to the National Automobile Dealers Association (NADA), dealers on average make three times as much profit from service as they do from new car sales. Government incentives and policies Several national, provincial, and local governments around the world have introduced policies to support the mass market adoption of plug-in electric vehicles. A variety of policies have been established to provide direct financial support to consumers and manufacturers; non-monetary incentives; subsidies for the deployment of charging infrastructure; procurement of electric vehicles for government fleets; and long-term regulations with specific targets. Financial incentives for consumers aim to make plug-in electric car purchase prices competitive with those of conventional cars, given the still higher up-front cost of electric vehicles. Among the financial incentives there are one-time purchase incentives such as tax credits, purchase grants, exemptions from import duties, and other fiscal incentives; exemptions from road, bridge, and tunnel tolls, and from congestion pricing fees; and exemption from registration and annual vehicle use fees. Some countries, like France, also introduced a bonus–malus CO2-based tax system that penalizes fossil-fuel vehicle sales. As of 2020, monetary incentives for electrically chargeable vehicles are available, among others, in several European Union member states, China, the United States, the UK, Japan, Norway, some provinces in Canada, South Korea, India, Israel, Colombia, and Costa Rica. Among the non-monetary incentives there are several perks, such as allowing plug-in vehicles access to bus lanes and high-occupancy vehicle lanes, free parking, and free charging. 
In addition, in some countries or cities that restrict private car ownership (purchase quota systems for new vehicles) or have implemented permanent driving restrictions (no-drive days), the schemes often exclude electric vehicles from the restrictions to promote their adoption. For example, in Beijing, the license plate lottery scheme specifies a fixed number of vehicle purchase permits each year, but to promote the electrification of its fleet, the city government split the number of purchase permits into two lots, one for conventional vehicles and another dedicated to all-electric vehicle applicants. In cities with alternate-day driving restrictions based on the license plate number, such as San José, Costa Rica (since 2012), São Paulo and Bogotá (since 2014), and Mexico City (since 2015), all-electric vehicles have been excluded from the driving restrictions. Some governments have also established long-term regulatory signals with specific target timeframes, such as ZEV mandates, national or regional CO2 emissions regulations, stringent fuel economy standards, and the phase-out of internal combustion engine vehicle sales. For example, Norway set a national goal that all new car sales by 2025 should be zero-emission vehicles (electric or hydrogen). Also, some cities are planning to establish zero-emission zones (ZEZ), restricting traffic access into an urban cordon area or city center where only zero-emission vehicles (ZEVs) are allowed access. In such areas, all internal combustion engine vehicles are banned. As of June 2020, Oxford is slated to be the first city to implement a ZEZ scheme, beginning with a small area scheduled to go into effect by mid-2021. The plan is to expand the clean air zone gradually into a much larger zone, until the ZEZ encompasses the majority of the city center by 2035. As of May 2020, other cities planning to gradually introduce a ZEZ, or a partial or total ban on fossil fuel-powered vehicles, include, among others, Amsterdam (2030), Athens (2025), Barcelona (2030), Brussels (2030/2035), Copenhagen (2030), London (2020/2025), Los Angeles (2030), Madrid (2025), Mexico City (2025), Milan (2030), Oslo (2024/2030), Paris (2024/2030), Quito (2030), Rio de Janeiro (2030), Rome (2024/2030), Seattle (2030), and Vancouver (2030). Production plug-in electric vehicles available During the 1990s, several highway-capable plug-in electric cars were produced in limited quantities; all were battery electric vehicles. PSA Peugeot Citroën launched several electric "Électrique" versions of its models starting in 1991. Other models were available through leasing, mainly in California. Popular models included the General Motors EV1 and the Toyota RAV4 EV. Some of the latter were sold to the public and were still in use by the early 2010s. In the late 2000s, a new wave of mass-production plug-in electric cars, motorcycles, and light trucks began. However, as of 2011, most electric vehicles on the world's roads were low-speed, low-range neighborhood electric vehicles (NEVs) or electric quadricycles. Sales of low-speed electric vehicles experienced considerable growth in China between 2012 and 2016, with an estimated NEV stock of between 3 million and 4 million units, most powered by lead-acid batteries. As of December 2019, according to the International Energy Agency, there were about 250 models of highway-capable plug-in electric passenger cars available for sale in the world, up from 70 in 2014. 
Several models of electric light commercial vehicles, plug-in motorcycles, all-electric buses, and plug-in heavy-duty trucks are also available. The Renault–Nissan–Mitsubishi Alliance is the world's leading light-duty electric vehicle manufacturer. Since 2010, the Alliance's global all-electric vehicle sales totaled almost 725,000 units through December 2018, including those manufactured by Mitsubishi Motors, now part of the Alliance. Its best selling all-electric Nissan Leaf was the world's top selling plug-in electric car in 2013 and 2014. Tesla is the world's second largest plug-in electric passenger car manufacturer, with global sales since 2012 of over 532,000 units as of December 2018. Its Model S was the world's top selling plug-in car in 2015 and 2016, and its Model 3 was the world's best selling plug-in electric car in 2018. Ranking next is BYD Auto, with more than 529,000 plug-in passenger cars delivered in China through December 2018. Its Qin plug-in hybrid is the company's top selling model, with almost 137,000 units sold in China through December 2018. As of December 2018, the BMW Group had sold more than 356,000 plug-in cars since the inception of the BMW i3, accounting for global sales of its BMW i cars, BMW iPerformance plug-in hybrid models, and MINI brand plug-ins. BYD Auto ended 2015 as the world's best selling manufacturer of highway legal light-duty plug-in electric vehicles, with 61,722 units sold, followed by Tesla, with 50,580 units. BYD was again the world's top selling plug-in car manufacturer in 2016, with 101,183 units delivered, again followed by Tesla with 76,243 units. In 2017, BYD ranked for the third consecutive year as the global top plug-in car manufacturer, with 113,669 units delivered. BAIC ranked second with 104,520 units. The BAIC EC-Series all-electric city car ranked as the world's top selling plug-in car in 2017, with 78,079 units sold in China. After 10 years in the market, Tesla was the world's top selling plug-in passenger car manufacturer in 2018, both as a brand and by automotive group, with 245,240 units delivered and a market share of 12% of all plug-in cars sold globally in 2018. BYD Auto ranked second with 227,152 plug-in passenger cars sold worldwide, representing a market share of 11%. Sales and main markets The global stock of plug-in electric vehicles between 2005 and 2009 consisted exclusively of all-electric cars, totaling about 1,700 units in 2005 and almost 6,000 in 2009. The plug-in stock rose to about 12,500 units in 2010, of which only 350 were plug-in hybrids. By comparison, during the Golden Age of the electric car at the beginning of the 20th century, the EV stock peaked at approximately 30,000 vehicles. After the introduction of the first mass-production plug-in cars by major carmakers in late 2010, global plug-in car sales went from about 50,000 units in 2011 to 125,000 in 2012, almost 213,000 in 2013, and over 315,000 units in 2014. By mid-September 2015, the global stock of highway legal plug-in electric passenger cars and utility vans reached the one million sales milestone, almost twice as fast as hybrid electric vehicles (HEVs). Light-duty plug-in electric vehicle sales increased to more than 565,000 units in 2015, up about 80% from 2014, driven mainly by China and Europe. Both markets passed the U.S. in 2015 as the largest plug-in electric car markets in terms of total annual sales, with China ranking as the world's best-selling plug-in electric passenger car market in 2015. 
About 775,000 plug-in cars and vans were sold in 2016, and cumulative global sales passed the 2 million milestone by the end of 2016. The global market share of the light-duty plug-in vehicle segment achieved a record 0.86% of total new car sales in 2016, up from 0.62% in 2015 and 0.38% in 2014. Cumulative global light-duty plug-in vehicle sales passed the 3 million milestone in November 2017. About 1.2 million plug-in cars and vans were sold worldwide in 2017, with China accounting for about half of global sales. The plug-in car segment achieved a 1.3% market share. Plug-in passenger car sales totaled just over 2 million in 2018, with a market share of 2.1%. The global stock reached 5.3 million light-duty plug-in vehicles in December 2018. By the end of 2019, the stock of light-duty plug-in vehicles totaled 7.55 million units, consisting of 4.79 million all-electric cars, 2.38 million plug-in hybrid cars, and 377,970 electric light commercial vehicles. Plug-in passenger cars still represented less than 1% of the world's car fleet in use. In addition, there were about half a million electric buses in circulation in 2019, most of them in China. In 2020, global cumulative sales of light-duty plug-in vehicles passed the 10 million unit milestone. Despite the rapid growth experienced, the plug-in electric car segment represented just about 1 out of every 250 vehicles on the world's roads by the end of 2018 (0.4%), rose to 1 out of every 200 motor vehicles (0.48%) in 2019, and by the end of 2020, the stock share of plug-in cars on the world's roads was 1%. All-electric cars have outsold plug-in hybrids for several years, and by the end of 2019, the shift towards battery electric cars continued. The global ratio between all-electrics (BEVs) and plug-in hybrids (PHEVs) went from 56:44 in 2012, to 60:40 in 2015, increased to 66:34 in 2017, rose to 69:31 in 2018, and reached 74:26 in 2019. Out of the 7.2 million plug-in passenger cars in use at the end of 2019, two thirds were all-electric cars (4.8 million). Since 2016, China has had the world's largest fleet of light-duty plug-in electric vehicles, after having overtaken during 2016 both the U.S. and Europe in terms of cumulative sales. The fleet of Chinese plug-in passenger cars represented 46.7% of the global stock of plug-in cars at the end of 2019. Europe ranked next with 24.8%, followed by the U.S. with 20.2% of the global stock in use. As of December 2017, 25 cities accounted for 44% of the world's stock of plug-in electric cars, while representing just 12% of world passenger vehicle sales. Shanghai led the world with cumulative sales of over 162,000 electric vehicles since 2011, followed by Beijing with 147,000 and Los Angeles with 143,000. Among these cities, Bergen has the highest market share of the plug-in segment, with about 50% of new car sales in 2017, followed by Oslo with 40%. China As of December 2021, China had the world's largest stock of highway legal plug-in cars with 7.84 million units, corresponding to about 46% of the global plug-in car fleet in use. Of these, all-electric cars accounted for 81.6% of all new energy passenger cars in circulation. Plug-in passenger cars represented 2.6% of all cars on Chinese roads at the end of 2021. Domestically produced cars dominate new energy car sales in China, accounting for about 96% of sales in 2017. 
Another particular feature of the Chinese passenger plug-in market is the dominance of small entry-level vehicles. China also dominates plug-in light commercial vehicle and electric bus deployment, with its stock reaching over 500,000 buses in 2019, 98% of the global stock, and 247,500 electric light commercial vehicles, 65% of the global fleet. In addition, the country also leads sales of medium- and heavy-duty electric trucks, with over 12,000 trucks sold, nearly all of them battery electric. Since 2011, combined sales of all classes of new energy vehicles (NEVs) totaled almost 5.5 million through the end of 2020. The BAIC EC-Series all-electric city car was China's top selling plug-in car in 2017 and 2018, and also the world's top selling plug-in car in 2017. BYD Auto was the world's top selling plug-in car manufacturer in 2016 and 2017. In 2020, the Tesla Model 3 was China's best-selling plug-in car, with 137,459 units. Europe Europe had about 5.67 million plug-in electric passenger cars and light commercial vehicles in circulation at the end of 2021, consisting of 2.9 million fully electric passenger cars, 2.5 million plug-in hybrid cars, and about 220,000 light commercial all-electric vehicles. The European stock of plug-in passenger cars is the world's second largest after China, accounting for 30% of the global plug-in car stock in 2020. Europe also has the second largest electric light commercial vehicle stock, 33% of the global stock in 2019. In 2020, despite the strong decline in global car sales brought by the COVID-19 pandemic, annual sales of plug-in passenger cars in Europe surpassed the 1 million mark for the first time. In addition, Europe outsold China in 2020 as the world's largest plug-in passenger car market for the first time since 2015. The plug-in car segment had a market share of 1.3% of new car registrations in 2016, rose to 3.6% in 2019, and achieved 11.4% in 2020. The largest country markets in the European region in terms of EV stock and annual sales are Germany, Norway, France, the UK, the Netherlands, and Sweden. Germany surpassed Norway in 2019 as the best selling plug-in market, leading both the all-electric and the plug-in hybrid segments in Europe, and in 2020 was the top selling European country market for the second consecutive year. Germany The stock of plug-in electric cars in Germany is the largest in Europe: there were 1,184,416 plug-in cars in circulation on January 1, 2022, representing 2.5% of all passenger cars on German roads, up from 1.2% the previous year. As of December 2021, cumulative sales totaled 1.38 million plug-in passenger cars since 2010. Germany had a stock of 21,890 light-duty electric commercial vehicles in 2019, the second largest in Europe after France. Germany was the top selling plug-in car market on the European continent in 2019, with a market share of 3.10%. Despite the global decline in car sales brought by the COVID-19 pandemic, the segment's market share achieved a record 13.6% in 2020, with a record volume of 394,632 plug-in passenger cars registered, up 263% from 2019, making Germany the best selling European plug-in market for a second year running. 
In both years, the German market led both the fully electric and plug-in hybrid segments. Despite the continued global decline in car sales brought by shortages related to the COVID-19 pandemic, of computer chips in particular, a record 681,410 plug-in electric passenger cars were registered in Germany in 2021, consisting of 325,449 plug-in hybrids and 355,961 all-electric cars, pushing the segment's market share to 26.0%. France As of December 2021, a total of 786,274 light-duty plug-in electric vehicles had been registered in France since 2010, consisting of 512,178 all-electric passenger cars and commercial vans, and 274,096 plug-in hybrids. Of these, over 60,000 were all-electric light commercial vehicles. The market share of all-electric passenger cars increased from 0.30% of new cars registered in 2012 to 0.59% in 2014. After the introduction of the super-bonus for the scrappage of old diesel-powered cars in 2015, sales of both pure electric cars and plug-in hybrids surged, raising the market share to 1.17% in 2015; it climbed to 2.11% in 2018 and reached 2.8% in 2019. Despite the strong global decline in car sales brought by the COVID-19 pandemic and the related global computer chip shortage, plug-in electric car sales in France rose to a record 315,978 light-duty plug-in vehicles in 2021, up 62% from 2020. The plug-in electric passenger car market share rose to 11.2% in 2020 and achieved a record 18.3% in 2021. The combined light-duty plug-in vehicle segment (cars and utility vans) achieved a market share of 15.1% in 2021. As of December 2019, France ranked as the country with the world's second largest stock of light-duty electric commercial vehicles after China, with 49,340 utility vans in use. The market share of all-electric utility vans reached 1.22% in 2014 and 1.77% in 2018. United Kingdom About 745,000 light-duty plug-in electric vehicles had been registered in the UK up until December 2021, consisting of 395,000 all-electric vehicles and 350,000 plug-in hybrids. A surge of plug-in car sales took place in Britain beginning in 2014. Total registrations went from 3,586 in 2013 to 37,092 in 2016, and rose to 59,911 in 2018. Sales surged again to 175,339 units in 2020, despite the strong global decline in car sales brought by the COVID-19 pandemic, and reached record sales of 305,281 units in 2021. The market share of the plug-in segment went from 0.16% in 2013 to 0.59% in 2014, and reached 2.6% in 2018. The segment's market share was 3.1% in 2019, rose to 10.7% in 2020, and achieved a record 18.6% in 2021. As of June 2020, the Mitsubishi Outlander P-HEV was the all-time top selling plug-in car in the UK, with over 46,400 units registered, followed by the Nissan Leaf with more than 31,400 units. Norway As of 31 December 2021, the stock of light-duty plug-in electric vehicles in Norway totaled 647,000 units in use, consisting of 470,309 all-electric passenger cars and vans (including used imports), and 176,691 plug-in hybrids. Norway was the top selling plug-in country market in Europe for three consecutive years, from 2016 to 2018. 
Until 2019, Norway was the European country with the largest stock of plug-in cars and vans, and had the third largest in the world. The Norwegian plug-in car segment's market share has been the highest in the world for several years, rising from 29.1% in 2016 to 39.2% in 2017, 49.1% in 2018, 55.9% in 2019, and 74.7% in 2020, meaning that three out of every four new passenger cars sold in Norway in 2020 were plug-in electrics. In September 2021, the combined market share of the plug-in car segment achieved a new record of 91.5% of new passenger car registrations, 77.5% for all-electric cars and 13.9% for plug-in hybrids, the world's highest monthly plug-in car market share ever attained by any country. The plug-in segment's market share rose to 86.2% for the full year 2021. In October 2018, Norway became the first country where 1 in every 10 passenger cars in use was a plug-in electric vehicle, and by the end of 2021, plug-in electric cars made up 22.1% of all passenger cars on Norwegian roads. The country's fleet of plug-in electric cars is one of the cleanest in the world because 98% of the electricity generated in the country comes from hydropower. Norway is the country with the largest EV ownership per capita in the world. Netherlands As of 31 December 2021, there were 390,454 highway-legal light-duty plug-in electric vehicles in use in the Netherlands, consisting of 137,663 fully electric cars, 243,664 plug-in hybrid cars, and 9,127 light-duty plug-in commercial vehicles. Plug-in passenger cars represented 4.33% of all cars on Dutch roads at the end of 2021. As of July 2016, the Netherlands had the second largest plug-in ownership per capita in the world after Norway. Plug-in sales fell sharply in 2016 due to changes in tax rules, and as a result of the change in government incentives, the plug-in market share declined from 9.9% in 2015 to 6.7% in 2016, and fell to 2.6% in 2017. The uptake rate rose to 6.5% in 2018 in anticipation of another change in tax rules that went into effect in January 2019, increased to 14.9% in 2019, rose to 24.6% in 2020, and achieved a record 29.8% in 2021. United States As of December 2021, cumulative sales of highway legal plug-in electric passenger cars in the U.S. totaled 2,322,291 units since 2010. Since December 2016, the U.S. has had the world's third largest stock of plug-in passenger cars, after being overtaken by Europe in 2015 and by China during 2016. California is the largest regional plug-in market in the country, with 1 million plug-in cars sold there by November 2021. The Tesla Model 3 electric car was the best selling plug-in car in the U.S. for two consecutive years, 2018 and 2019. Cumulative sales of the Model 3 surpassed those of the discontinued Chevrolet Volt plug-in hybrid in 2019, making it the all-time best selling plug-in car in the country, with an estimated 300,471 units delivered since inception, followed by the Tesla Model S all-electric car with about 157,992 and the Chevrolet Volt with 157,054. Japan As of December 2020, Japan had a stock of 293,081 plug-in passenger cars on the road, consisting of 156,381 all-electric cars and 136,700 plug-in hybrids. The fleet of electric light commercial vehicles in use totaled 9,904 units in 2019. Plug-in sales totaled 24,660 units in 2015 and 24,851 units in 2016. The rate of growth of the Japanese plug-in segment slowed from 2013, with annual sales falling behind Europe, the U.S., and China during 2014 and 2015. 
The segment's market share fell from 0.68% in 2014 to 0.59% in 2016. Sales recovered in 2017, with almost 56,000 plug-in cars sold, and the segment's market share reached 1.1%. Sales fell slightly in 2018 to 52,000 units with a market share of 1.0%. The market share continued declining to 0.7% in 2019 and 0.6% in 2020. The decline in plug-in car sales reflects the decision of the Japanese government and the major domestic carmakers to adopt and promote hydrogen fuel cell vehicles instead of plug-in electric vehicles. Top selling PEV models All-electric cars and vans The Tesla Model 3 surpassed the Nissan Leaf in early 2020 to become the world's all-time best selling electric car, and, in June 2021, became the first electric car to sell 1 million units globally. The Model 3 has been the world's best selling plug-in electric car for four consecutive years, from 2018 to 2021. The United States is the leading market with an estimated 395,000 units sold, followed by the European market with over 180,000 units delivered, both through December 2020. The Model 3 was the best-selling plug-in car in China in 2020, with 137,459 units sold. Ranking second is the Nissan Leaf, with cumulative global sales of 577,000 units since inception through February 2022. As of September 2021, Europe was the Leaf's biggest market with more than 208,000 units sold, of which 72,620 units had been registered in Norway, the leading European country market. U.S. sales totaled 165,710 units and Japanese sales 157,059 units, both through December 2021. The Wuling Hongguang Mini EV dominated sales in the Chinese market in 2021, selling over 550,000 units since its release in July 2020. Europe's all-time top selling all-electric light utility vehicle is the Renault Kangoo Z.E., with global sales of over 57,500 units through November 2020. The following table presents global sales of the top selling highway-capable electric cars and light utility vehicles produced between the introduction of the first modern production all-electric car, the Tesla Roadster, in 2008 and December 2021. The table includes all-electric passenger cars and utility vans with cumulative sales of about or over 200,000 units. Plug-in hybrids The Mitsubishi Outlander P-HEV is the world's all-time best selling plug-in hybrid according to JATO Dynamics. Global sales achieved the 250,000 unit milestone in May 2020, and the 300,000 mark in January 2022. Europe is the Outlander P-HEV's leading market with 126,617 units sold through January 2019, followed by Japan with 42,451 units through March 2018. European sales are led by the UK with 36,237 units delivered, followed by the Netherlands with 25,489 units, both through March 2018. Ranking second is the Toyota Prius Plug-in Hybrid (Toyota Prius Prime), with about 225,000 units of both generations sold worldwide through December 2020. The United States is the market leader with over 93,000 units delivered through December 2018. Japan ranks next with about 61,200 units through December 2018, followed by Europe with almost 14,800 units through June 2018. Combined global sales of the Chevrolet Volt and its rebadged models totaled about 186,000 units by the end of 2018, including about 10,000 Opel/Vauxhall Amperas sold in Europe through June 2016. Volt sales are led by the United States with 157,054 units delivered through December 2019, followed by Canada with 17,311 units through November 2018. 
Until the end of 2018, the Chevrolet Volt family listed as the world's top selling plug-in hybrid, when it was surpassed by the Mitsubishi Outlander P-HEV.The following table presents plug-in hybrid models with cumulative global sales of around or more than 100,000 units since the introduction of the first modern production plug-in hybrid car, the BYD F3DM, in 2008 up until December 2021: See also Electric car Electric car use by country Electric vehicle battery Electric vehicle warning sounds Hybrid tax credit (U.S.) List of electric cars currently available List of modern production plug-in electric vehicles Plug In America RechargeIT (Google.org PHEV program) Renewable energy by country References External links Clean Vehicle Rebate Project website Competitive Electric Town Transport, Institute of Transport Economics (TØI), Oslo, August 2015. Cradle-to-Grave Lifecycle Analysis of U.S. Light-Duty Vehicle-Fuel Pathways: A Greenhouse Gas Emissions and Economic Assessment of Current (2015) and Future (2025–2030) Technologies Archived 2020-08-12 at the Wayback Machine (includes BEVs and PHEVs), Argonne National Laboratory, June 2016. Driving Electrification – A Global Comparison of Fiscal Incentive Policy for Electric Vehicles, International Council on Clean Transportation, May 2014. Effects of Regional Temperature on Electric Vehicle Efficiency, Range, and Emissions in the United States, Tugce Yuksel and Jeremy Michalek, Carnegie Mellon University. 2015 eGallon Calculator: Compare the costs of driving with electricity, U.S. Department of Energy From Fiction to Reality: The Evolution of Electric Vehicles 2013 – 2015, JATO Dynamics, November 2015. Influence of driving patterns on life cycle cost and emissions of hybrid and plug-in electric vehicle powertrains, Carnegie MellonVehicle Electrification Group Modernizing vehicle regulations for electrification, International Council on Clean Transportation, October 2018. NHTSA Interim Guidance Electric and Hybrid Electric Vehicles Equipped with High Voltage Batteries – Vehicle Owner/General Public Archived 2013-12-09 at the Wayback Machine NHTSA Interim Guidance Electric and Hybrid Electric Vehicles Equipped with High Voltage Batteries – Law Enforcement/Emergency Medical Services/Fire Department Archived 2013-12-09 at the Wayback Machine New Energy Tax Credits for Electric Vehicles purchased in 2009 Overview of Tax Incentives for Electrically Chargeable Vehicles in the E.U. PEVs Frequently Asked Questions Plug-in Electric Vehicles: Challenges and Opportunities, American Council for an Energy-Efficient Economy, June 2013 Powering Ahead – The future of low-carbon cars and fuels, the RAC Foundation and UK Petroleum Industry Association, April 2013. Plugging In: A Consumer's Guide to the Electric Vehicle Electric Power Research Institute Plug-in America website Plug-in Electric Vehicle Deployment in the Northeast Georgetown Climate Center Plug-in List of Registered Charging Stations in the USA RechargeIT plug-in driving experiment (Google.org) Shades of Green – Electric Car's Carbon Emissions Around the Globe, Shrink that Footprint, February 2013. State of the Plug-in Electric Vehicle Market, Electrification Coalition, July 2013. The Great Debate – All-Electric Cars vs. Plug-In Hybrids, April 2014 UK Plug-in Car Grant website U.S. Federal & State Incentives & Laws US Tax Incentives for Plug-in Hybrids and Electric Cars Books David B. Sandalow, ed. (2009). Plug-In Electric Vehicles: What Role for Washington? (1st. ed.). The Brookings Institution. 
ISBN 978-0-8157-0305-1. Mitchell, William J.; Borroni-Bird, Christopher; Burns, Lawrence D. (2010). Reinventing the Automobile: Personal Urban Mobility for the 21st Century (1st. ed.). The MIT Press. ISBN 978-0-262-01382-6. Archived from the original on 2010-06-09. Retrieved 2010-07-18.
climate action tracker
Climate Action Tracker (CAT) is a research group that monitors government action on reducing greenhouse gas emissions and measures it against commitments made under international agreements. It tracks climate action in 32 countries responsible for over 80% of global emissions. COP26 Toward the end of the COP26 climate conference, CAT produced a report concluding that the current "wave of net‑zero emission goals [are] not matched by action on the ground" and that the world is likely headed for more than 2.4°C of warming by the end of the century. References External links Climate Action Tracker website
the long tailpipe
The long tailpipe is an argument stating that the use of electric vehicles does not always result in fewer emissions (e.g. greenhouse gas emissions) compared to those from non-electric vehicles. While the argument acknowledges that plug-in electric vehicles operating in all-electric mode have no greenhouse gas emissions from the onboard source of power, it claims that these emissions are shifted from the vehicle tailpipe to the location of the electrical generation plants. From the point of view of a well-to-wheel assessment, the extent of the actual carbon footprint depends on the fuel and technology used for electricity generation, as well as the impact of additional electricity demand on the phase-out of fossil fuel power plants. Description Plug-in electric vehicles (PEVs) operating in all-electric mode do not emit greenhouse gases from the onboard source of power, but emissions are shifted to the location of the generation plants. From the point of view of a well-to-wheel assessment, the extent of the actual carbon footprint depends on the fuel and technology used for electricity generation. From the perspective of a full life cycle analysis, the electricity used to recharge the batteries must be generated from renewable or clean sources such as wind, solar, hydroelectric, or nuclear power for PEVs to have near-zero well-to-wheel emissions. On the other hand, when PEVs are recharged from coal-fired plants, they usually produce slightly more greenhouse gas emissions than internal combustion engine vehicles, and more than hybrid electric vehicles. Because plug-in electric vehicles do not produce emissions at the point of operation, they are often perceived as being environmentally friendlier than vehicles driven by internal combustion. Assessing the validity of that perception is difficult due to the greenhouse gases generated by the power plants that provide the electricity to charge the vehicles' batteries. For example, the New York Times reported that a Nissan Leaf driving in Los Angeles would have the same environmental impact as a gasoline-powered car achieving 79 mpg‑US (3.0 L/100 km; 95 mpg‑imp), while the same trip in Denver would be equivalent to only 33 mpg‑US (7.1 L/100 km; 40 mpg‑imp). The U.S. Department of Energy published a concise description of the problem: "Electric vehicles (EVs) themselves emit no greenhouse gases (GHGs), but substantial emissions can be produced 'upstream' at the electric power plant." A study by the German IfW concluded that the increased electricity demand, and the resulting delay in the shutdown of coal-fired power plants in Germany, causes electric vehicles to have 73% higher CO2 emissions than diesel vehicles. Carbon footprint in selected countries A study published in the UK in April 2013 assessed the carbon footprint of plug-in electric vehicles in 20 countries. As a baseline, the analysis established that manufacturing emissions account for 70 g CO2/km. The study found that in countries with coal-intensive generation, PEVs are no different from conventional petrol-powered vehicles. Among these countries are China, Indonesia, Australia, South Africa, and India. A pure electric car in India generates emissions comparable to a 20 mpg‑US (12 L/100 km; 24 mpg‑imp) petrol car. The country ranking was led by Paraguay, where all electricity is produced from hydropower, and Iceland, where electricity production relies on renewable power, mainly hydro and geothermal power. 
Resulting carbon emissions from an electric car in both countries are 70 g CO2/km, which is equivalent to a 220 mpg‑US (1.1 L/100 km; 260 mpg‑imp) petrol car, and correspond to manufacturing emissions. Next in the ranking are other countries with similarly low-carbon electricity generation, including Sweden (mostly hydro and nuclear power), Brazil (mainly hydropower), and France (predominantly nuclear power). Countries ranking in the middle include Japan, Germany, the UK, and the United States. The following table shows the emission intensity estimated in the study for each of the 20 countries, and the corresponding emissions equivalent in miles per US gallon of a petrol-powered car: Carbon footprint in the United States In the case of the United States, the Union of Concerned Scientists (UCS) conducted a study in 2012 to assess average greenhouse gas emissions resulting from charging plug-in car batteries from the perspective of the full life-cycle (well-to-wheel analysis) and according to the fuel and technology used to generate electric power in each region. The study used the Nissan Leaf all-electric car to establish the analysis baseline, and electric-utility emissions are based on EPA's 2007 estimates. The UCS study expressed the results in terms of miles per gallon instead of the conventional unit of grams of greenhouse gases or carbon dioxide equivalent emissions per year in order to make the results more accessible to consumers. The study found that in areas where electricity is generated from natural gas, nuclear, hydroelectric, or renewable sources, the potential of plug-in electric cars to reduce greenhouse gas emissions is significant. On the other hand, in regions where a high proportion of power is generated from coal, hybrid electric cars produce lower CO2-equivalent emissions than plug-in electric cars, and the most fuel-efficient gasoline-powered subcompact cars produce slightly lower emissions than a PEV. In the worst-case scenario, the study estimated that for a region where all energy is generated from coal, a plug-in electric car would emit greenhouse gas emissions equivalent to a gasoline car rated at a combined city/highway driving fuel economy of 30 mpg‑US (7.8 L/100 km; 36 mpg‑imp). In contrast, in a region that is completely reliant on natural gas, the PEV would be equivalent to a gasoline-powered car rated at 50 mpg‑US (4.7 L/100 km; 60 mpg‑imp). The following table shows a representative sample of cities within each of the three categories of emissions intensity used in the UCS study, with the corresponding miles-per-gallon equivalent for each city compared to the greenhouse gas emissions of a gasoline-powered car: An analysis of EPA power plant data from 2016 showed improvement in mpg-equivalent ratings of electric cars for nearly all regions, with a national weighted average of 80 mpg for electric vehicles. The regions with the highest ratings include upstate New York, New England, and California at over 100 mpg, while only Oahu, Wisconsin, and parts of Illinois and Missouri are below 40 mpg, though still higher than nearly all gasoline cars. Criticism The long tailpipe argument has been the target of criticism, ranging from claims that many estimates are methodologically flawed to projections that electricity generation in the United States will become less carbon-intensive over time. Tesla Motors CEO Elon Musk published his own criticism of the long tailpipe. 
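As a rough illustration of the kind of conversion behind the mpg-equivalent figures quoted in the studies above, the sketch below turns an assumed grid carbon intensity into the fuel economy a petrol car would need to match an EV's charging emissions per mile. The EV consumption figure and the grid intensities are assumptions chosen for the example, and the calculation ignores manufacturing and upstream fuel emissions.

```python
# Convert a grid carbon intensity (g CO2/kWh) into the mpg a gasoline car would
# need to match an EV's per-mile charging emissions. Input figures are assumptions
# for illustration; manufacturing and upstream fuel emissions are ignored.

GASOLINE_CO2_G_PER_GALLON = 8887.0  # approximate tailpipe CO2 per US gallon of gasoline

def ev_gco2_per_mile(grid_g_per_kwh: float, ev_kwh_per_100mi: float = 30.0) -> float:
    """Charging-related CO2 per mile for an EV on a grid of the given intensity."""
    return grid_g_per_kwh * ev_kwh_per_100mi / 100.0

def equivalent_mpg(grid_g_per_kwh: float, ev_kwh_per_100mi: float = 30.0) -> float:
    """MPG at which a gasoline car would emit the same CO2 per mile as the EV."""
    return GASOLINE_CO2_G_PER_GALLON / ev_gco2_per_mile(grid_g_per_kwh, ev_kwh_per_100mi)

if __name__ == "__main__":
    # Assumed grid intensities, roughly spanning coal-heavy to low-carbon grids.
    for label, intensity in [("coal-heavy grid (~900 g/kWh)", 900.0),
                             ("natural-gas grid (~400 g/kWh)", 400.0),
                             ("hydro/nuclear grid (~50 g/kWh)", 50.0)]:
        print(f"{label}: ~{equivalent_mpg(intensity):.0f} mpg-equivalent")
```

With these assumed inputs the coal-heavy case lands near the low-30s mpg-equivalent range, which is broadly consistent in spirit with the worst-case figures reported in the studies described above, while low-carbon grids push the equivalence far beyond what any gasoline car can reach.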
The extraction, refining, and distribution of carbon-based fuels is itself an energy-intensive industry that contributes to CO2 emissions. In 2007, U.S. refineries consumed 39,353 million kWh, 70,769 million lbs of steam, and 697,593 million cubic feet of natural gas, and the refining energy efficiency for gasoline is estimated to be, at best, 87.7%. References External links Greenhouse Gas Emissions for Electric and Plug-In Hybrid Electric Vehicles, web tool to estimate GHG by car model and zip code, U.S. Department of Energy and U.S. Environmental Protection Agency. Shades of Green - Electric Car's Carbon Emissions Around the Globe, Shrink that Footprint, February 2013. State of Charge: Electric Vehicles' Global Warming Emissions and Fuel-Cost Savings across the United States, Union of Concerned Scientists, April 2012.
arctic methane emissions
Arctic methane release is the release of methane from seas and soils in permafrost regions of the Arctic. While it is a long-term natural process, methane release is exacerbated by global warming. This results in a positive feedback cycle, as methane is itself a powerful greenhouse gas. The Arctic region is one of the many natural sources of the greenhouse gas methane. Global warming could potentially accelerate its release, both through the release of methane from existing stores and through methanogenesis in rotting biomass. Large quantities of methane are stored in the Arctic in natural gas deposits and as undersea clathrates. When permafrost thaws as a consequence of warming, large amounts of organic material can become available for methanogenesis and may ultimately be released as methane. Clathrates also degrade on warming and release methane directly. Atmospheric methane concentrations are 8–10% higher in the Arctic than in the Antarctic atmosphere. During cold glacial epochs, this gradient decreases to practically insignificant levels. Land ecosystems are considered the main sources of this asymmetry, although it was suggested in 2007 that "the role of the Arctic Ocean is significantly underestimated." Soil temperature and moisture levels have been found to be significant variables in soil methane fluxes in tundra environments. Contribution to climate change (Figure: main sources of global methane emissions, 2008–2017, according to the Global Carbon Project.) Due to the relatively short lifetime of atmospheric methane, its global trends are more complex than those of carbon dioxide. NOAA annual records have been updated since 1984, and they show substantial growth during the 1980s, a slowdown in annual growth during the 1990s, a plateau (including some years of declining atmospheric concentrations) in the early 2000s, and another consistent increase beginning in 2007. Since around 2018, there has been a consistent acceleration in annual methane increases, with the 2020 increase of 15.06 parts per billion breaking the previous record increase of 14.05 ppb set in 1991, and 2021 setting an even larger increase of 18.34 ppb. These trends alarm climate scientists, with some suggesting that they represent a climate change feedback increasing natural methane emissions well beyond their preindustrial levels. However, there is currently no evidence connecting the Arctic to this recent acceleration. In fact, a 2021 study indicated that the role of the Arctic was typically overestimated in global methane accounting, while the role of tropical regions was consistently underestimated. The study suggested that tropical wetland methane emissions were the culprit behind the recent growth trend, and this hypothesis was reinforced by a 2022 paper connecting tropical terrestrial emissions to 80% of the global atmospheric methane trends between 2010 and 2019. Nevertheless, the Arctic's role in global methane trends is considered very likely to increase in the future. There is evidence of increasing methane emissions into the atmosphere since 2004 from a Siberian permafrost site, linked to warming. Causes Loss of permafrost Arctic sea ice decline A 2015 study concluded that Arctic sea ice decline accelerates methane emissions from the Arctic tundra, with the emissions for 2005–2010 being around 1.7 million tonnes higher than they would have been with the sea ice at 1981–1990 levels. 
One of the researchers noted, "The expectation is that with further sea ice decline, temperatures in the Arctic will continue to rise, and so will methane emissions from northern wetlands." Clathrate breakdown Ice sheets A 2014 study found evidence for methane cycling below the ice sheet of the Russell Glacier, based on subglacial drainage samples which were dominated by Pseudomonadota. During the study, the most widespread surface melt on record for the past 120 years was observed in Greenland; on 12 July 2012, unfrozen water was present on almost the entire ice sheet surface (98.6%). The findings indicate that methanotrophs could serve as a biological methane sink in the subglacial ecosystem, and the region was, at least during the sample time, a source of atmospheric methane. Scaled dissolved methane flux during the 4 months of the summer melt season was estimated at 990 Mg CH4. Because the Russell-Leverett Glacier is representative of similar Greenland outlet glaciers, the researchers concluded that the Greenland Ice Sheet may represent a significant global methane source. A study in 2016 concluded that methane clathrates may exist below Greenland's and Antarctica's ice sheets, based on past evidence. Reducing methane emissions Mitigation of methane emissions has greatest potential to preserve Arctic sea ice if it is implemented within the 2020s. Use of flares ARPA-E has funded a research project from 2021-2023 to develop a "smart micro-flare fleet" to burn off methane emissions at remote locations.A 2012 review article stated that most existing technologies "operate on confined gas streams of 0.1% methane", and were most suitable for areas where methane is emitted in pockets.If Arctic oil and gas operations use Best Available Technology (BAT) and Best Environmental Practices (BEP) in petroleum gas flaring, this can result in significant methane emissions reductions, according to the Arctic Council. See also References External links Arctic permafrost is thawing fast. That affects us all. National Geographic, 2019 Why the Arctic is smouldering, BBC Future, by Zoe Cormier, 2019
Energy policy of the United States
The energy policy of the United States is determined by federal, state, and local entities. It addresses issues of energy production, distribution, consumption, and modes of use, such as building codes, mileage standards, and commuting policies. Energy policy may be addressed via legislation, regulation, court decisions, public participation, and other techniques. Federal energy policy acts were passed in 1974, 1992, 2005, 2007, 2008, 2009, 2020, 2021, and 2022, although energy-related policies have appeared in many other bills. State and local energy policies typically relate to efficiency standards and/or transportation.
Federal energy policies since the 1973 oil crisis have been criticized over an alleged crisis mentality, promoting expensive quick fixes and single-shot solutions that ignore market and technology realities.
Americans constitute less than 5% of the world's population, but consume 26% of the world's energy to produce 26% of the world's industrial output. Technologies such as fracking and horizontal drilling allowed the United States in 2014 to become the world's top fossil fuel producer. In 2018, US exports of coal, natural gas, crude oil and petroleum products exceeded imports, achieving a degree of energy independence for the first time in decades. In the second half of 2019, the US was the world's top producer of oil and gas. This energy surplus ended in 2020. Various multinational groups have attempted to establish goals and timetables for energy and other climate-related policies, such as the 1997 Kyoto Protocol and the 2015 Paris Agreement.
History
In the early days of the Republic, energy policy allowed free use of standing timber for heating and industry. Wind and water provided energy for tasks such as milling grain. In the 19th century, coal became widely used. Whales were rendered into lamp oil. Coal gas was fractionated for use as lighting and town gas. Natural gas was first used in America for lighting in 1816. It has grown in importance, especially for electricity generation, but US natural gas production peaked in 1973 and the price has risen significantly since then.
Coal provided the bulk of US energy needs well into the 20th century. Most urban homes had a coal bin and a coal-fired furnace. Over the years these were replaced with oil furnaces that were easier and safer to operate.
From the early 1940s, the US government and the oil industry entered into a mutually beneficial collaboration to control global oil resources. By 1950, oil consumption exceeded that of coal. Abundant oil in California, Texas, and Oklahoma, as well as in Canada and Mexico, coupled with its low cost, ease of transportation, high energy density, and use in internal combustion engines, led to its increasing use.
Following World War II, oil heating boilers took over from coal burners along the Eastern Seaboard; diesel locomotives took over from coal-fired steam engines; oil-fired power plants dominated; petroleum-burning buses replaced electric streetcars; and citizens bought gasoline-powered cars. Interstate Highways helped make cars the major means of personal transportation. As oil imports increased, US foreign policy was drawn into Middle East politics, seeking to maintain steady supply via actions such as protecting Persian Gulf sea lanes.
Hydroelectricity was the basis of Nikola Tesla's introduction of the US electricity grid, starting at Niagara Falls, New York, in 1883.
Electricity generated by major dams such as the TVA Project, Grand Coulee Dam and Hoover Dam still produces some of the lowest-priced ($0.08/kWh) electricity. Rural electrification strung power lines to many more areas.
A National Maximum Speed Limit of 55 mph (88 km/h) was imposed in 1974 (and repealed in 1995) to help reduce consumption. Corporate Average Fuel Economy (CAFE) standards were enacted in 1975 and progressively tightened over time to compel manufacturers to improve vehicle mileage. Year-round Daylight Saving Time was imposed in 1974 and repealed in 1975. The United States Strategic Petroleum Reserve was created in 1975. The Weatherization Assistance Program was enacted in 1977. On average, low-cost weatherization reduces heating bills by 31% and overall energy bills by $358 per year at 2012 prices. Increased energy efficiency and weatherization spending has a high return on investment.
On August 4, 1977, President Jimmy Carter signed into law the Department of Energy Organization Act of 1977 (Pub. L. 95–91, 91 Stat. 565, enacted August 4, 1977), which created the United States Department of Energy (DOE). The new agency, which began operations on October 1, 1977, consolidated the Federal Energy Administration, the Energy Research and Development Administration, the Federal Power Commission, and programs of various other agencies. Former Secretary of Defense James Schlesinger, who served under Presidents Nixon and Ford during the Vietnam War, was appointed as the first secretary.
On June 30, 1980, Congress passed the Energy Security Act, which reauthorized the Defense Production Act of 1950 and enabled it to cover domestic energy supplies. It also obligated the federal government to promote and reform the Strategic Petroleum Reserve, biofuels, geothermal power, acid rain prevention, solar power, and synthetic fuel commercialization. The Defense Production Act was further reauthorized in 2009, with modifications requiring the federal government to promote renewable energy, energy efficiency, and improved grid and grid storage installations with its defense procurements.
The federal government provided substantially larger subsidies to fossil fuels than to renewables in the 2002–2008 period. Subsidies to fossil fuels totaled approximately $72 billion over the study period, a direct cost to taxpayers. Subsidies for renewable fuels totaled $29 billion over the same period.
In some cases, the US used energy policy to pursue other international goals. Richard Heinberg claimed that a declassified CIA document showed that the US used oil prices as leverage against the economy of the Soviet Union by working with Saudi Arabia during the Reagan administration to keep oil prices low, thus decreasing the value of the USSR's petroleum export industry.
The 2005 Energy Policy Act (EPA) addressed (1) energy efficiency; (2) renewable energy; (3) oil and gas; (4) coal; (5) tribal energy; (6) nuclear matters; (7) vehicles and motor fuels, including ethanol; (8) hydrogen; (9) electricity; (10) energy tax incentives; (11) hydropower and geothermal energy; and (12) climate change technology. The Act also started the Department of Energy's Loan Guarantee Program.
The Energy Independence and Security Act of 2007 provided funding to help improve building codes and outlawed the sale of incandescent light bulbs in favor of fluorescents and LEDs.
It also includes funding to increase photovoltaics, and a solar air conditioning program, created the Energy Efficiency and Conservation Block Grant, and set the CAFE standard to 35 mpg by 2020. In February 2009, the American Recovery and Reinvestment Act was passed, with an initial projection of $45 billion in funding levels going to energy. $11 billion went to the Weatherization Assistance Program, the Energy Efficiency and Conservation Block Grant, and the State Energy Program, $11 billion went to federal buildings and vehicles, $8 billion went to research and development programs, $2.4 billion went to new technology and facility development projects, $14 billion went to the electric grid, and $21 billion was projected to go to tax credits for renewable energy and electric vehicles, among others. Due in part to the design of the tax credits, the final amount of energy spending and incentives reached over $90 billion, funded 180 advanced manufacturing projects, and created more than 900,000 job-years.In December 2009, the United States Patent and Trademark Office announced the Green Patent Pilot Program. The program was initiated to accelerate the examination of patent applications relating to certain green technologies, including the energy sector. The pilot program was initially designed to accommodate 3,000 applications related to certain green technology categories, and the program was originally set to expire on December 8, 2010. In May 2010, the USPTO announced that it would expand the pilot program.In 2016, federal government energy-specific subsidies and support for renewables, fossil fuels, and nuclear energy amounted to $6,682 million, $489 million and $365 million, respectively.On June 1, 2017, then-President Donald Trump announced that the U.S. would cease participation in the 2015 Paris Agreement on climate change mitigation agreed to under the President Barack Obama administration. On November 3, 2020, incoming President Joe Biden announced that the U.S. would resume its participation.The Energy Information Administration (EIA) predicted that the reduction in energy consumption in 2020 due to the COVID-19 pandemic would take many years to recover. The US imported much of its oil for many decades but in 2020 became a net exporter.In December 2020, Trump signed the Consolidated Appropriations Act, 2021, which contained the Energy Act of 2020, the first major revision package to U.S. energy policy in over a decade. The bill contains increased incentives for energy efficiency particularly in federal government buildings, improved funding for weatherization assistance, standards to phase out the use of hydrofluorocarbons, plans to rebuild the nation's energy research sector including fossil fuel research, and $7 billion in demonstration projects for carbon capture and storage.Under President Joe Biden, one-third of the Strategic Petroleum Reserve was tapped to reduce energy prices during the COVID-19 pandemic. He also invoked the Defense Production Act to boost manufacturing of solar cells and other renewable energy generators, fuel cells and other electricity-dependent clean fuel equipment, building insulation, heat pumps, critical power grid infrastructure, and electric vehicle batteries.Biden also signed the Infrastructure Investment and Jobs Act to invest $73 billion in the energy sector. 
$11 billion of that amount will be invested in power grid infrastructure, with the first selected recipients for $3.46 billion of the funds announced in October 2023, the largest such investment in the grid since the Recovery Act. (In November 2022 the Biden administration announced $550 million would be distributed from a grant program for clean energy generators for low-income and minority communities, created by the 2007 Energy Independence and Security Act and last funded by the Recovery Act.) $6 billion of the former amount will go to domestic nuclear power. From the $73 billion, the IIJA invests $45 billion in innovation and industrial policy for key emerging technologies in energy; $430 million–21 billion in new demonstration projects at the DOE; and nearly $24 billion in onshoring, supply chain resilience, and bolstering competitive advantages in energy, divided into an $8.6 billion investment in carbon capture and storage, $3 billion in battery material reprocessing, $3 billion in battery recycling, $1 billion in rare-earth minerals stockpiling, and $8 billion in new research hubs for green hydrogen. $4.7 billion will go to plugging orphan wells abandoned by oil and gas companies.In August 2022, Biden signed the CHIPS and Science Act to boost DOE and National Science Foundation research activities by $174 billion and the Inflation Reduction Act to create assistance programs for utility cooperatives and a $27 billion green bank, including $6 billion to lower the cost of solar power in low-income communities and $7 billion to capitalize smaller green banks, and appropriate $270–663 billion in clean energy and energy efficiency tax credits, including at least $158 billion for investments in clean energy, and $36 billion for home energy upgrades from public utilities. The Biden administration itself claimed that as of November 2023, the IIJA, CaSA, and IRA together catalyzed over $614 billion in private investment (including $231 billion in electronics, $142 billion in electric vehicles and batteries, and $133 billion in clean energy generators) and over $302.4 billion in public infrastructure spending (including $22.7 billion in energy aside from tax credits in the IRA). Department of Energy The Energy Department's mission statement is "to ensure America's security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions."As of January 2023, its elaboration of the mission statement is as follows: "Catalyze timely, material, and efficient transformation of the nation's energy system and secure US leadership in clean energy technologies. "Maintain a vibrant US effort in science and engineering as a cornerstone of our economic prosperity with clear leadership in strategic areas. "Enhance nuclear security through defense, nonproliferation, and environmental efforts. "Establish an operational and adaptable framework that combines the best wisdom of all Department stakeholders to maximize mission success." Import policies Petroleum The US bans energy imports from countries such as Russia (because of the Russo-Ukrainian War), and Venezuela. The US limits exports of oil from Iran. The US imports energy from multiple countries, led by Canada, although it is a net exporter. Export In 1975, the United States implemented a crude oil export ban, which limited most of the crude oil exports to other countries. It came two years after an OPEC oil embargo that banned oil sales to the U.S. had sent gas prices skyrocketing. 
Newspaper photographs of long lines of cars outside of gas stations became a common and worrisome image. Congress voted in 2015 to repeal the 40-year ban on exporting U.S. crude oil. Since that year, crude exports have skyrocketed nearly 600%, to 3.2 million barrels per day in 2020, according to data from the U.S. Energy Information Administration.
Strategic petroleum reserve
The United States Strategic Petroleum Reserve stores as much as 600M barrels of oil.
Energy consumption
Sources
Energy in the United States came mostly from fossil fuels in 2021: 36% originated from petroleum, 32% from natural gas, and 11% from coal. Renewable energy supplied the rest: hydropower, biomass, wind, geothermal, and solar supplied 12%, while nuclear supplied 8%.
Utilities
Utility rates are typically set to provide a constant 10–13% rate of return on their investments in a process called rate-of-return regulation. Operating cost changes are typically passed directly through to consumers.
Energy efficiency
Opportunities for increased energy efficiency are available across the economy, including buildings/appliances, transportation, and manufacturing. Some opportunities require new technology. Others require behavior change by individuals or at the community level or above. Building-related energy efficiency innovation takes many forms, including improvements in water heaters; refrigerators and freezers; building control technologies; heating, ventilation, and cooling (HVAC); adaptive windows; building codes; and lighting.
Energy-efficient technologies may allow superior performance (e.g. higher quality lighting, heating and cooling with greater controls, or improved reliability of service through greater ability of utilities to respond to times of peak demand). More efficient vehicles save on fuel purchases, emit fewer pollutants, improve health and save on medical costs. Heat engines are only 20% efficient at converting oil into work.
Energy budget, initiatives and incentives
Most energy policy incentives are financial. Examples of these include tax breaks, tax reductions, tax exemptions, rebates, loans and subsidies. The Energy Policy Act of 2005, the Energy Independence and Security Act of 2007, the Emergency Economic Stabilization Act of 2008, and the Inflation Reduction Act all provided such incentives.
Tax incentives
The US Production Tax Credit (PTC) reduces the federal income taxes of qualified owners of renewable energy projects based on grid-connected output. The Investment Tax Credit (ITC) reduces federal income taxes for qualified taxpayers based on capital investment in renewable energy projects. The Advanced Energy Manufacturing Tax Credit (MTC) awards tax credits to selected domestic manufacturing facilities that support clean energy development.
Loan guarantees
The Department of Energy's Loan Guarantee Program guarantees financing up to 80% of a qualifying project's cost.
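The arithmetic behind these two kinds of support can be sketched in a few lines. The example below is illustrative only: the project cost and the 30% credit rate are assumed round numbers chosen for demonstration, not figures taken from the statutes above; only the 80% guarantee share comes from the Loan Guarantee Program description in the text.
# Illustrative sketch (assumed numbers) of how an investment-style tax credit
# and a partial loan guarantee change the financing picture of a project.
capex = 100_000_000            # assumed project capital cost, in dollars
itc_rate = 0.30                # assumed illustrative investment tax credit rate
guarantee_share = 0.80         # DOE Loan Guarantee Program covers up to 80% of cost
tax_credit = capex * itc_rate              # reduction in federal income taxes
net_owner_cost = capex - tax_credit        # owner's cost after the credit
guaranteed_debt = capex * guarantee_share  # maximum financing the guarantee can back
print(f"Tax credit:        ${tax_credit:,.0f}")
print(f"Net owner cost:    ${net_owner_cost:,.0f}")
print(f"Guaranteed amount: ${guaranteed_debt:,.0f}")
Run as written, this prints a $30 million credit, a $70 million net cost, and an $80 million guaranteed amount for the assumed $100 million project.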
Renewable energy
In the United States, the share of renewable energy in electricity generation has grown to 21% (2020). Oil use is expected to decline in the US owing to the increasing efficiency of the vehicle fleet and the replacement of crude oil by natural gas as a feedstock for the petrochemical sector. One forecast is that the rapid uptake of electric vehicles will reduce oil demand drastically, to the point where it is 80% lower in 2050 compared with today.
A Renewable Portfolio Standard (RPS) is a state/local mandate that requires electricity providers to supply a minimum amount of power from renewable sources, usually defined as a percentage of total energy production.
Biofuels
The federal government offers many programs to support the development and implementation of biofuel-based replacements for fossil fuels. Landowners and operators who establish, produce, and deliver biofuel crops may qualify for partial reimbursement of startup costs as well as annual payments. Loan guarantees help finance development, construction, and retrofitting of commercial-scale biorefineries. Grants aid the building of demonstration-scale biorefineries and the scaling up of existing biorefineries. Loan guarantees and grants support the purchase of pumps that dispense fuels containing ethanol. Production support helps makers expand output. Tax credits support the purchase of fueling equipment (gas pumps) for specific fuels, including some biofuels. Education grants support training the public about biodiesel. Research, development, and demonstration grants support feedstock development and biofuel development. Grants support research, demonstration, and deployment projects to replace buses and other petroleum-fueled vehicles with biofuel or other alternative fuel-based vehicles, including the necessary fueling infrastructure.
Producer subsidies
The 2005 Energy Policy Act offered incentives including billions in tax reductions for nuclear power, fossil fuel production, clean coal technologies, renewable electricity, and conservation and efficiency improvements.
Federal leases
The US leases federal land to private firms for energy production. The volume of leases has varied by presidential administration. During the first 19 months of the Joe Biden administration, 130k acres were leased, compared to 4M under the Donald Trump administration, 7M under the Obama administration, and 13M under the George W. Bush administration.
Net metering
Electricity distribution
Electric power transmission results in energy loss, through electrical resistance, heat generation, electromagnetic induction and less-than-perfect electrical insulation. Electric transmission (production to consumer) loses over 23% of the energy due to generation, transmission, and distribution. In 1995, long-distance transmission losses were estimated at 7.2% of the power transported. Reducing transmission distances reduces these losses. Of five units of energy going into typical large fossil fuel power plants, only about one unit reaches the consumer in a usable form.
A similar situation exists in natural gas transport, which requires compressor stations along pipelines that use energy to keep the gas moving. Gas liquefaction/cooling/regasification in the liquefied natural gas supply chain uses a substantial amount of energy. Distributed generation and distributed storage are a means of reducing total and transmission losses as well as reducing costs for electricity consumers.
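How these stage losses compound can be illustrated with a rough back-of-the-envelope calculation. The sketch below is illustrative only: the plant and distribution efficiencies are assumed round numbers, and only the 7.2% long-distance transmission figure and the "about one usable unit out of five" comparison come from the text above; the one-in-five figure implies an even lower end-to-end fraction once additional conversion and end-use losses are counted.
# Rough sketch of compounding losses from fuel input to delivered electricity.
# Assumed illustrative values, except the 7.2% transmission-loss estimate quoted above.
fuel_energy_in = 5.0          # units of primary energy into the plant
plant_efficiency = 0.33       # assumed thermal efficiency of a large fossil plant
transmission_loss = 0.072     # 1995 estimate for long-distance transmission losses
distribution_loss = 0.05      # assumed local distribution losses
generated = fuel_energy_in * plant_efficiency
delivered = generated * (1 - transmission_loss) * (1 - distribution_loss)
print(f"Delivered: {delivered:.2f} of {fuel_energy_in} units "
      f"({delivered / fuel_energy_in:.0%} end-to-end)")
With these assumptions roughly 1.5 of 5 units are delivered; counting further conversion losses at the point of use brings the figure toward the one-in-five ratio cited above.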
In October 2023, the Biden administration announced the largest investments in the grid since the Recovery Act in 2009. The DOE announced the results of a mandated triennial study that, for the first time in its history, included anticipation of future grid transmission needs. The DOE also announced the first three recipients of a new $2.5 billion loan program it called the Transmission Facilitation Program, created to provide funding to help build up the interstate power grid. They are the 1.2-gigawatt Twin States Clean Energy Link between Quebec, New Hampshire and Vermont; the 1.5-gigawatt Cross-Tie Transmission Line between Utah and Nevada; and the 1-gigawatt Southline Transmission Project between Arizona and New Mexico.
Greenhouse gas emissions
While the United States has cumulatively emitted the most greenhouse gases of any country, it represents a declining fraction of ongoing emissions, long superseded by China. Since their peak in 1973, per capita US emissions have declined by 40%, resulting from improved technology, the shift in economic activity from manufacturing to services, changing consumer preferences and government policy.
State and local governments have launched initiatives. Cities in 50 states endorsed the Kyoto Protocol. Northeastern US states established the Regional Greenhouse Gas Initiative (RGGI), a state-level emissions cap-and-trade program. On February 16, 2007, the United States, together with leaders from Canada, France, Germany, Italy, Japan, Russia, the United Kingdom, Brazil, China, India, Mexico and South Africa, agreed in principle on the outline of a successor to the Kyoto Protocol known as the Washington Declaration. They envisaged a global cap-and-trade system that would apply to both industrialized nations and developing countries. The system did not come to pass.
Arjun Makhijani argued that in order to limit global warming to 2 °C, the world would need to reduce CO2 emissions by 85% and the US by 95%. He developed a model by which such changes could occur. Effective delivered energy is modeled to increase from about 75 quadrillion Btu in 2005 to about 125 quadrillion in 2050, but due to efficiency increases, the actual energy input increases from about 99 quadrillion Btu in 2005 to about 103 quadrillion in 2010 and then decreases to about 77 quadrillion in 2050. Petroleum use is assumed to increase until 2010 and then linearly decrease to zero by 2050. The roadmap calls for nuclear power to decrease to zero, with the reduction also beginning in 2010.
Joseph Romm called for the rapid deployment of existing technologies to decrease carbon emissions. He argued that "If we are to have confidence in our ability to stabilize carbon dioxide levels below 450 p.p.m. emissions must average less than [5 billion metric tons of carbon] per year over the century. This means accelerating the deployment of the 11 wedges so they begin to take effect in 2015 and are completely operational in much less time than originally modeled by Socolow and Pacala."
In 2012, the National Renewable Energy Laboratory assessed the technical potential for renewable electricity for each of the 50 states, and concluded that each state had the technical potential for renewable electricity, mostly from solar and wind, that could exceed its current electricity consumption. The report cautions: "Note that as a technical potential, rather than economic or market potential, these estimates do not consider availability of transmission infrastructure, costs, reliability or time-of-dispatch, current or future electricity loads, or relevant policies."
In 2022, the EPA received funding for a green bank called the Greenhouse Gas Reduction Fund to drive down carbon dioxide emissions, as part of the Inflation Reduction Act, the largest decarbonization incentives package in U.S. history.
The Fund will award $14 billion to a select few green banks nationwide for a broad variety of decarbonization investments, $6 billion to green banks in low-income and historically disadvantaged communities for similar investments, and $7 billion to state and local energy funds for decentralized solar power in communities with no financing alternatives. The EPA set the application deadline for the first two award initiatives at October 12, 2023, and for the latter initiative at September 26, 2023.
See also
References
Further reading
Matto Mildenberger & Leah C. Stokes (2021). "The Energy Politics of North America". The Oxford Handbook of Energy Politics.
Oil and Natural Gas Industry Tax Issues in the FY2014 Budget Proposal, Congressional Research Service
External links
US Department of Energy
Energy Information Administration – Official Energy Statistics from the US government
Residential Electricity Prices
USDA energy
United States Energy Association (USEA)
US energy stats
ISEA – Database of U.S. International Energy Agreements
Retail sales of electricity and associated revenue by end-use sectors through June 2007 (Energy Information Administration)
International Energy Agency 2007 Review of US Energy Policies
Massachusetts v. EPA
Massachusetts v. Environmental Protection Agency, 549 U.S. 497 (2007), is a 5–4 U.S. Supreme Court case in which Massachusetts, along with eleven other states and several cities of the United States, represented by James Milkey, brought suit against the Environmental Protection Agency (EPA) represented by Gregory G. Garre to force the federal agency to regulate the emissions of carbon dioxide and other greenhouse gases (GHGs) that pollute the environment and contribute to climate change. Under the Clean Air Act, Massachusetts argued that the Environmental Protection Agency was required by law to regulate "any air pollutant" which could "endanger public health or welfare." The EPA denied the petition, claiming that federal law does not authorize the agency to regulate greenhouse gas emissions. Background Section 202(a)(1) of the Clean Air Act (CAA), 42 U.S.C. § 7521(a)(1), requires the Administrator of the Environmental Protection Agency to set emission standards for "any air pollutant" from motor vehicles or motor vehicle engines "which in his judgment cause[s], or contribute[s] to, air pollution which may reasonably be anticipated to endanger public health or welfare."In 2003, the EPA made two determinations: The EPA lacked authority under the CAA to regulate carbon dioxide and other GHGs for climate change purposes. Even if the EPA did have such authority, it would decline to set GHG emissions standards for vehicles. Parties The petitioners were the states of California, Connecticut, Illinois, Maine, Massachusetts, New Jersey, New Mexico, New York, Oregon, Rhode Island, Vermont and Washington, the cities of New York, Baltimore, and Washington, D.C., the territory of American Samoa, and the organizations Center for Biological Diversity, Center for Food Safety, Conservation Law Foundation, Environmental Advocates, Environmental Defense, Friends of the Earth, Greenpeace, International Center for Technology Assessment, National Environmental Trust, Natural Resources Defense Council, Sierra Club, Union of Concerned Scientists, and the U.S. Public Interest Research Group. James Milkey of the Massachusetts Attorney General's Office represented the petitioners in oral arguments before the U.S. Supreme Court.Respondents were the Environmental Protection Agency, the Alliance of Automobile Manufacturers, National Automobile Dealers Association, Engine Manufacturers Association, Truck Manufacturers Association, CO2 Litigation Group, Utility Air Regulatory Group, and the states of Michigan, Alaska, Idaho, Kansas, Nebraska, North Dakota, Ohio, South Dakota, Texas, and Utah. Appeals court The U.S. Court of Appeals for the District of Columbia Circuit decided on September 13, 2005, to uphold the decision of the EPA. However, appellate judges were sharply at odds in their reasoning for reaching the majority conclusion. The lower court was sharply divided on whether the petitioners had "standing", a personalized injury creating a right to claim remedial action from the government through the courts (i.e., rather than to seek favorable action by pressing for supportive legislation). One of the three judges found no standing while a second of the three postponed a factual decision for any later trial. Although it had granted certiorari, the Supreme Court could have revisited the question of standing to dodge a difficult decision and dismiss the case for lack of standing. However, once certiorari has been granted, such a reversal is rare. 
Granting of certiorari On June 26, 2006, the Supreme Court granted a writ of certiorari. Issues Whether the petitioners had standing. Whether carbon dioxide is an "air pollutant" causing "air pollution" as defined by the CAA. If carbon dioxide is not an air pollutant causing air pollution, then the EPA has no authority under the CAA to regulate carbon dioxide emissions. If the CAA governs carbon dioxide, the EPA Administrator could decide not to regulate carbon dioxide, but only consistent with the terms of the CAA. Whether the EPA Administrator may decline to issue emission standards for motor vehicles on the basis of policy considerations not enumerated in section 202(a)(1). Arguments The Petitioners argued that the definition in the CAA is so broad that carbon dioxide must be counted as an air pollutant. They claimed that the question was controlled by the words of the statute, so that factual debate was immaterial. Furthermore, the Petitioners filed substantial scientific evidence that the toxicity of carbon dioxide results from high concentrations and that causation of global warming transforms the gas into a pollutant. If the statutory definition of the CAA includes carbon dioxide, then the Federal courts would have no discretion to reach any other conclusion. The definition contained in the statute, not evidence or opinion, would control the outcome. The law's definition of air pollutant contains "any air pollution agent or combination of such agents, including any physical, chemical, biological, radioactive ... substance or matter which is emitted into or otherwise enters the ambient air, ..." Both sides agreed that CO2 and greenhouse gases are part of the second half. Petitioners argued that the use of 'including' automatically means greenhouse gases are part of the first group, 'any air pollution agent' which was not separately defined. EPA argued that this is wrong because 'Any American automobile, including trucks and minivans, ... ' does not mean that foreign trucks are American automobiles. The Petitioners asserted that the EPA Administrator's decision not to regulate carbon dioxide and other greenhouse gases violated the terms of the CAA. Thus, the Supreme Court also considered whether the reasons given by the EPA were valid reasons within the CAA statute for the EPA Administrator to decide not to regulate carbon dioxide. The EPA argued that the Administrator has the discretion under the CAA to decide not to regulate. The EPA Administrator argued that other actions are already being taken to increase fuel efficiency of automobiles and that (as of 2003) scientific investigation was still under way. Thus, the EPA Administrator decided not to regulate "at this time". This case has become notable because of a widespread perception that the truth or falsehood of theories of global warming will be decided by the courts. While this could eventually occur in later proceedings, the questions before the U.S. Supreme Court here were much more narrow, and legal in nature. One of several reasons that the EPA Administrator declined to regulate carbon dioxide is uncertainty about whether man-made carbon dioxide emissions cause global warming. This has attracted great attention to the case. However, the Supreme Court only decided whether the Administrator's reason is a valid reason within the CAA. 
The Supreme Court did not explicitly decide if it is true or untrue that man-made carbon dioxide emissions cause global warming, although high-profile comments by Justices during oral argument are likely to affect the public debate. The Petitioners argued that scientific uncertainty is not a valid basis for the EPA Administrator to decline to regulate. The question before the Supreme Court "was not whether the causation is true or untrue," but whether it is a valid reason for the Administrator to not regulate a pollutant. Opinion of the Court First, the petitioners were found to have standing. Justice Stevens reasoned that the states had a particularly strong interest in the standing analysis. The majority cited Justice Holmes' opinion in Georgia v. Tennessee Copper Co. (1907): "The case has been argued largely as if it were one between two private parties; but it is not. The very elements that would be relied upon in a suit between fellow-citizens as a ground for equitable relief are wanting here. The State owns very little of the territory alleged to be affected, and the damage to it capable of estimate in money, possibly, at least, is small. This is a suit by a State for an injury to it in its capacity of quasi-sovereign. In that capacity the State has an interest independent of and behind the titles of its citizens, in all the earth and air within its domain. It has the last word as to whether its mountains shall be stripped of their forests and its inhabitants shall breathe pure air." Second, the Court held that the CAA gives the EPA the authority to regulate tailpipe emissions of greenhouse gases. The CAA provides: "The Administrator shall by regulation prescribe (and from time to time revise) in accordance with the provisions of this section, standards applicable to the emission of any air pollutant from any class or classes of new motor vehicles or new motor vehicle engines, which in his judgment cause, or contribute to, air pollution which may reasonably be anticipated to endanger public health or welfare." The CAA defines "air pollutant" as "any air pollution agent or combination of such agents, including any physical, chemical, biological, radioactive ... substance or matter which is emitted into or otherwise enters the ambient air". The majority opinion commented that "greenhouse gases fit well within the CAA's capacious definition of air pollutant."Finally, the Court remanded the case to the EPA, requiring the agency to review its contention that it has discretion in regulating carbon dioxide and other greenhouse gas emissions. The Court found the current rationale for not regulating to be inadequate and required the agency to articulate a reasonable basis in order to avoid regulation. Roberts' dissent Chief Justice Roberts authored a dissenting opinion. First, the dissent condemns the majority's "special solicitude" conferred to Massachusetts as having no basis in Supreme Court cases dealing with standing. The dissent compares the majority opinion to "the previous high-water mark of diluted standing requirements," United States v. SCRAP (1973). Roberts then argues that the alleged injury (i.e., Massachusetts' loss of land because of rising sea levels) is too speculative and without adequate scientific support. The dissent also finds that even if there is a possibility that the state may lose some land because of global warming, the effect of obliging the EPA to enforce automobile emissions is hypothetical at best. 
According to Roberts, there is not a traceable causal connection between the EPA's refusal to enforce emission standards and petitioners' injuries. Finally, the dissent maintains that redressability of the injuries is even more problematic given that countries such as India and China are responsible for the majority of the greenhouse-gas emissions. The Chief Justice concludes by accusing the majority of lending the Court as a convenient forum for policy debate and of transgressing the limited role afforded to the Supreme Court by the U.S. Constitution. Scalia's dissent First, Justice Scalia found that the Court has no jurisdiction to decide the case because petitioners lack standing, which would have ended the inquiry. However, since the majority saw fit to find standing, his dissent continued. The main question is, "Does anything require the Administrator to make a 'judgment' whenever a petition for rulemaking is filed?" Justice Scalia sees the Court's answer to this unequivocally as yes, but with no authority to back it. He backs this assertion by explaining that the "statute says nothing at all about the reasons for which the Administrator may defer making a judgment"—the permissible reasons for deciding not to grapple with the issue at the present time. Scalia saw no basis in law for the Court's imposed limitation. In response to the Court's statement that, "If the scientific uncertainty is so profound that it precludes EPA from making a reasoned judgment as to whether greenhouse gases contribute to global warming, EPA must say so," Scalia responded that EPA has done precisely that, in the form of the National Research Council panel that researched climate-change science. Remand On remand, EPA found that six greenhouse gases "in the atmosphere may reasonably be anticipated both to endanger public health and to endanger public welfare." On February 16, 2010, the states of Alabama, Texas, and Virginia and several other parties sought judicial review of EPA's determination in the U.S. Court of Appeals, District of Columbia Circuit. On June 26, 2012, the court issued an opinion which dismissed the challenges to the EPA's endangerment finding and the related GHG regulations. The three-judge panel unanimously upheld the EPA's central finding that GHG such as carbon dioxide endanger public health and were likely responsible for the global warming experienced over the past half century. A later Supreme Court case in 2022, West Virginia v. EPA, found that the EPA had taken undue authority within the major questions doctrine on regulation of emissions and promoting alternate forms of energy for power plants in the Clean Power Plan. In response, part of the Inflation Reduction Act of 2022 includes language to address the Court's decision in West Virginia, and codified the findings in Massachusetts in that carbon dioxide among several other greenhouse gases were within the EPA's remit to regulate as pollutants under the Clean Air Act. See also Utility Air Regulatory Group v. Environmental Protection Agency (2014) Effects of global warming Global warming Global warming controversy and Climate change denial Intergovernmental Panel on Climate Change - Fourth Assessment Report Lists of United States Supreme Court cases List of United States Supreme Court cases, volume 549 Regulation of greenhouse gases under the Clean Air Act Standing (law) Notes Further reading Suing the Tobacco and Lead Pigment Industries: Government Litigation as Public Health Prescription by Donald G. Gifford. 
Ann Arbor: University of Michigan Press, 2010. ISBN 978-0-472-11714-7
The Rule of Five: Making Climate History at the Supreme Court by Richard J. Lazarus. Harvard University Press, 2020. ISBN 978-0674238121
External links
Works related to Massachusetts v. Environmental Protection Agency at Wikisource
Text of Massachusetts v. Environmental Protection Agency, 549 U.S. 497 (2007) is available from: Cornell, CourtListener, Findlaw, Google Scholar, Justia, Oyez (oral argument audio)
CRS Report (public domain)
New York Times article of April 2, 2007 relating to the decision
Transcript of Oral Arguments
Bloomberg report of 6/26/2006
Nigerian energy supply crisis
The Nigerian energy supply crisis refers to the ongoing failure of the Nigerian power sector to provide adequate electricity supply to domestic households and industrial producers despite a rapidly growing economy, some of the world's largest deposits of coal, oil, and gas, and the country's status as Africa's largest oil producer. Currently, only 45% of Nigeria's population is connected to the energy grid, power supply difficulties are experienced around 85% of the time, and supply is almost nonexistent in certain regions. At best, average daily power supply is estimated at four hours, although several days can go by without any power at all. Neither power cuts nor restorations are announced, leading to calls for a load shedding schedule during the COVID-19 lockdowns to aid fair distribution and predictability.
Power supply difficulties cripple the agricultural, industrial, and mining sectors and impede Nigeria's ongoing economic development. The energy supply crisis is complex, stems from a variety of issues, and has been ongoing for decades. Most Nigerian businesses and households that can afford to do so run one or more diesel-fueled generators to supplement the intermittent supply. Since 2005, Nigerian power reforms have focused on privatizing the generation and distribution assets and encouraging private investment in the power sector. The government continues to control transmission assets whilst making "modest progress" in creating a regulatory environment attractive to foreign investors. Minor increases in average daily power supply have been reported.
Background
Until the power sector reforms of 2005, power supply and transmission were the sole responsibility of the Nigerian federal government. As of 2012, Nigeria generated approximately 4,000–5,000 megawatts of power for a population of 150 million people, as compared with Africa's second-largest economy, South Africa, which generated 40,000 megawatts of power for a population of 62 million. An estimated 14–20 gigawatts of power is provided by private generators to make up for the shortfall. Nigeria has a theoretical generation capacity of more than 10,000 megawatts using existing infrastructure but has never come close to that potential. 96% of industrial energy consumption is produced off-grid using private generators.
Issues affect all areas of the sector, from generation to transmission to distribution. Currently, the only plan the government has in place to help solve the energy crisis is to expand the fossil fuel burning sector. Alternative forms of energy are largely unused, probably because of the ready availability of oil in Nigeria, which has the world's seventh largest oil reserves.
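The scale of the shortfall described above can be made concrete with a rough per-capita comparison. The sketch below is purely illustrative and simply restates the approximate 2012 figures quoted in the Background section (4,000–5,000 MW for roughly 150 million Nigerians versus 40,000 MW for roughly 62 million South Africans).
# Rough per-capita comparison using the approximate figures quoted above.
nigeria_mw, nigeria_pop = 4_500, 150_000_000          # midpoint of 4,000-5,000 MW
south_africa_mw, south_africa_pop = 40_000, 62_000_000
nigeria_w_per_person = nigeria_mw * 1_000_000 / nigeria_pop
south_africa_w_per_person = south_africa_mw * 1_000_000 / south_africa_pop
print(f"Nigeria:      ~{nigeria_w_per_person:.0f} W of grid capacity per person")
print(f"South Africa: ~{south_africa_w_per_person:.0f} W of grid capacity per person")
print(f"Ratio:        ~{south_africa_w_per_person / nigeria_w_per_person:.0f}x")
On these figures, South Africa's grid capacity per person is on the order of twenty times Nigeria's, before accounting for Nigeria's off-grid diesel generation.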
History of the Power Sector
1886 - First two power generators installed in the Colony of Lagos
1951 - Act of Parliament establishes the Electricity Corporation of Nigeria (ECN)
1962 - The Niger Dams Authority (NDA) is established for the development of hydroelectric power
1972 - Merger of ECN and NDA to create the National Electric Power Authority (NEPA)
2005 - Following reforms, NEPA is renamed the Power Holding Company of Nigeria (PHCN). The Electric Power Sector Reform (EPSR) Act is enacted, allowing private investment in electricity generation, transmission, and distribution. In November 2005, the Nigerian Electricity Regulatory Commission (NERC) is inaugurated and charged with the responsibility of tariff regulation and monitoring of the quality of services of the PHCN.
February 1, 2015 - The Transitional Electricity Market (TEM) is announced
Current Challenges
Power Generators
The most efficient location for new power plants is the Niger Delta region, due to the easy access to the sources of energy needed to run the plants.
Transmission Network
Post-reform, the transmission network continues to be government-owned and operated and remains the weakest link in the power sector supply chain. Transmission lines are old and at the point of system collapse on any given day. Even should more power be generated, the transmission network is unable to carry any additional power loading. Designed for a peak capacity of only 3,000 to 3,500 MW per day, the lines break down daily. Lack of maintenance and security challenges in parts of the country only add to the difficulties.
Currently, Nigeria uses four different types of energy: natural gas, oil, hydro, and coal. The energy sector is heavily dependent on petroleum for electricity production, which has slowed the development of alternative forms of energy. Three of the four resources used for energy production in Nigeria are linked with increasing greenhouse gas emissions: coal, oil, and natural gas, with coal emitting the most of the three.
Solution to the energy and environmental problem in Nigeria
To increase energy production in the country, the federal government has started investing in solar power. Nigeria is a tropical country with a large amount of insolation. It has engaged solar companies such as Hansa Energy and Arnergy to help with the mass production of solar plants and the distribution of solar systems for households and businesses. The rural electrification project embarked upon by the federal government is poised to supply solar systems to 5 million households, which would significantly increase the energy supply in the country.
According to the World Commission on Environment and Development (WCED), sustainability in energy matters because of the need to preserve energy use over time, the importance of energy for living standards and economic development, and the significant impacts that energy systems and processes have had and continue to have on the environment (WCED, 1987). Nigeria needs to invest in sustainable resources because of the obvious signs that it will be strongly impacted by environmental change such as desertification, droughts, flooding, and water shortages. The biggest blow to Nigeria would be the flooding of the low-lying areas that contain many of its natural resources, if ocean levels rise as predicted. Further development of hydroelectricity does not seem practical because of the seasonal dependence of the water supply. Wind energy has potential but is unreliable for consistent energy supply. Nuclear energy could be a viable solution to the energy problem because of its lack of emissions and reliability. Nigeria also has easy access to the uranium needed for the plants.
Environmental Solutions
In light of all this, there is a lot of literature surrounding different proposals of what might be done to help Nigeria develop its potential for renewable electricity. The development of renewable sources of energy is important for the future of the world. Nigeria has been in an energy crisis for a decade despite numerous attempts to reform the energy sector.
The only thing that remains is to figure out which energy source is most practical for Nigeria. The development of hydroelectricity does not seem practical because of the seasonal dependence of the water supply as well as the amount of greenhouse gases it emits in the first 10 years after construction (Middleton, 2013). Wind energy has potential but is unreliable for consistent energy supply. Two broad lines of argument appear in the literature.
The most practical solution was suggested by Gujba, Mulugetta, and Azapagic (2011). The authors suggested harmonizing different forms of energy. In their sustainable development scenario, they proposed some reliance on renewable energy sources and a gradual change from fossil fuels to renewables. Since the rural areas are further from the electricity grid and most currently do not have power, each area would become a small hub producing its own power from whatever resource was closest. For example, in the northern areas, the mini-grids would run on wind and solar energy. Hydropower development would have to increase in order for this to be successful.
Winkler, Howells, and Baumert (2002) talk about envisioning where a country wants to end up before developing its energy resources. This perspective is valuable because taking the big picture into account before developing the sector can address goals beyond the energy crisis itself, such as poverty eradication, job creation, and reducing carbon emissions. Fixing the energy supply would also solve problems such as the overpricing of electricity caused by losses within the grid (Winkler, Howells & Baumert, 2002).
Process and Industrial Developments dispute
Process and Industrial Developments Ltd (P&ID) entered into a 20-year contract with the Nigerian government for natural gas supply and processing. Nigeria provided the gas, which P&ID refined so that it could be used to power the Nigerian electrical grid. P&ID could keep valuable byproducts for its own use. In 2012, P&ID demanded arbitration in London, alleging that Nigeria had neither supplied the agreed quantity of gas nor constructed the infrastructure it had agreed to build. The arbitral tribunal awarded damages of more than £4.8 billion. The award was valued at £8.15 billion with interest when the case was heard in the London High Court in December 2022.
See also
Energy crisis
References
Sasol
Sasol Limited is an integrated energy and chemical company based in Sandton, South Africa. The company was formed in 1950 in Sasolburg, South Africa, and built on processes that German chemists and engineers first developed in the early 1900s (see coal liquefaction). Today, Sasol develops and commercializes technologies, including synthetic fuel technologies, and produces different liquid fuels, chemicals, coal tar, and electricity. Sasol is listed on the Johannesburg Stock Exchange (JSE: SOL) and the New York Stock Exchange (NYSE: SSL). Major shareholders include the South African Government Employees Pension Fund, Industrial Development Corporation of South Africa Limited (IDC), Allan Gray Investment Counsel, Coronation Fund Managers, Ninety One, and others. Sasol employs 30,100 people worldwide and has operations in 33 countries. It is the largest corporate taxpayer in South Africa and the seventh-largest coal mining company in the world. History The incorporation of Sasol South Africa has large deposits of coal, which had low commercial value due to its high fly ash content. If this coal could be used to produce synthetic oil, petrol, and diesel fuel, it perhaps would have significant benefit to South Africa. In the 1920s, South African scientists started looking at the possibility of using coal as a source of liquid fuels. This work was pioneered by P. N. Lategan, working for the Transvaal Coal Owners Association. He completed his doctoral thesis from the Imperial College of Science in London on The Low-Temperature Carbonisation of South African Coal. In 1927, a white paper from the government was issued describing various oil-from-coal processes being used overseas and their potential for South Africa. In the 1930s, a young scientist named Etienne Rousseau obtained a Master of Science from the University of Stellenbosch. His thesis was entitled "The Sulfur Content of Coals and Oil Shales". Rousseau became Sasol's first managing director. After World War II, Anglovaal bought the rights to a method of using the Fischer–Tropsch process patented by M. W. Kellogg Limited, and in 1950, Sasol was formally incorporated as the South African Coal, Oil, and Gas Corporation (from the Afrikaans of which the present name is derived: Suid-Afrikaanse Steenkool-, Olie- en Gas Maatskappy), a state-owned company. Commissioning of the Sasol 1 site for the production of synfuels started in 1954. Construction of the Sasol 2 site was completed in 1980, with the Sasol 3 site coming on stream in 1982. The Zevenfontein farm house served as Sasol's first offices and is still in existence today. Coal mining To support the required economies of scale for coal-to-liquids (CTL) process to be economical and competitive with crude oil, all stages of the operations, from coal mining to the Fischer–Tropsch process and product work up must be run with great efficiency. Due to the complexity of the Lurgi gasifers used, the quality of the coal was paramount. The initial annual output from the Sigma underground mine in Sasolburg was two million tons. Annual coal production from this mine peaked in 1991 at 7.4 million tons. Today, most of the gasifiers in Sasolburg have been replaced with autothermal reformers that feed natural gas piped from Mozambique. Natural gas generates about 40–60% less carbon dioxide for the same energy produced as coal, thus is significantly more environmentally friendly. Gas-to-liquids technology converts natural gas, predominantly methane to liquid fuels. 
Today, Sasol mines more than 40 million tons (Mt) of saleable coal a year, mostly gasification feedstock for Sasol Synfuels in Secunda. Sasol Mining also exports some 2.8 Mt of coal a year. This amounts to roughly 22% of all the coal mined in South Africa. Underground mining operations continue in the Secunda area (Bosjesspruit, Brandspruit, Middelbult, Syferfontein, and Twistdraai collieries) and Sigma: Mooikraal colliery near Sasolburg. As some of these mines are nearing the end of their useful lives, a R14bn mine replacement program has been undertaken. The first of the new mines is the R3.4bn Thubelisha shaft, which will eventually be an operation delivering more than 8M tons/annum (mtpa) of coal over 25 years. The Impumelelo mine, which will replace the Brandspruit operation, is set for first production in 2015. It will be ramped up to produce 8.5 mtpa, and can later be upgraded to supplying some 10.5 mtpa. This coal will be used exclusively by the Sasol Synfuels plant. An underground extension of the Middelbult mine is also on the cards, with the main shaft and incline shaft being replaced by the Shondoni shaft. The first coal from the new complex was expected to be delivered in 2015.The Secunda collieries form the world's largest underground coal operations.In conjunction with the continuous improvement in the Fischer–Tropsch process and catalyst, significant developments were also made in mining technology. Coal mining at Sasol from the early days has been characterised by innovation. Sasol Mining mainly uses the room and pillar method of mining with a continuous miner. Sasol successfully used the longwall mining method from 1967 to 1987. Today, Sasol is one of the leaders in coal-mining technology and was the first to develop in-seam drilling from the surface using a directional drilling methodology. This has been developed into an effective exploration tool. Working with Fifth Dimension, Sasol developed a virtual reality technology to help train continuous miner operators in a 3D environment in which various scenarios can be simulated, including sound, dust and other signs of movement. This has recently been expanded to include shuttle car, roof-bolting, and load-haul dumper simulators. Fischer–Tropsch reactor technology The initial reactors from Kellogg and Lurgi gasifiers were tricky and expensive to operate. The original reactor design in 1955 was a circulating fluidised bed reactor (CFBR) with a capacity of about 1,500 barrels per day. Sasol improved these reactors to eventually yield about 6,500 barrels per day. The CFBR design involves moving the whole catalyst bed around the reactor, which is energy intensive and not efficient as most of the catalyst is not in the reaction zone. Sasol then developed fixed fluidized bed (FFB) reactors in which the catalyst particles were held in a fixed reaction zone. This resulted in a significant increase in reactor capacities. For example, the first FFB reactors commercialised in 1990 (5 m diameter) had a capacity of about 3,000 barrels per day, while the design in 2000 (10.7 m diameter) had a capacity of 20,000 barrels per day. Further advancements in reactor engineering have resulted on the development and commercialisation of Sasol Slurry Phase Distillate (SSPD) reactors which are the cornerstone of Sasol's first-of-a-kind GTL plant in Qatar. From fuels to chemicals The fuel price is directly linked to the oil price, so is subject to potentially large fluctuations. 
With Sasol only producing fuels, this meant that its profitability was largely governed by external macroeconomic forces over which it had no control. How could Sasol be less susceptible to the oil price? The answer was right in front of them, in the treasure chest of chemicals co-produced in the Fischer–Tropsch process. Chemicals have a higher value per ton of product than fuels. In the 1960s ammonia, styrene, and butadiene became the first chemical intermediates sold by Sasol. The ammonia was then used to make fertilizers. By 1964, Sasol was a major player in the nitrogenous fertilizer market. This product range was further extended in the 1980s to include both phosphate- and potassium-based fertilizers. Sasol now sells an extensive range of fertilizers and explosives to local and international markets, and is a world leader in its low-density ammonium nitrate technology.With the extraction of chemicals from its Fischer–Tropsch product slate coupled with downstream functionalization and on-purpose chemical production facilities, Sasol moved from being just a South African fuels company to become an international integrated energy and chemicals company with over 200 chemical products being sold worldwide. Some of the main products produced are diesel, petrol (gasoline), naphtha, kerosene (jet fuel), liquid petroleum gas (LPG), olefins, alcohols, polymers, solvents, surfactants (detergent alcohols and oil-field chemicals), co-monomers, ammonia, methanol, various phenolics, sulphur, illuminating paraffin, bitumen, acrylates, and fuel oil. These products are used in the production process of numerous everyday products made worldwide and benefit the lives of millions of people around the world. They include hot-melt adhesives, car products, microchip coatings, printing ink, household and industrial paints, mobile phones, circuit boards, transport fuels, compact discs, medical lasers, sun creams, perfumes and plastic bottles.In South Africa, the chemical businesses are integrated in the Fischer–Tropsch value chain. Outside South Africa, the company operates chemical businesses based on backward integration into feedstock and/or competitive market positions for example in Europe, Asia, and the United States. Operations Sasol has exploration, development, production, marketing and sales operations in 31 countries across the world, including Southern Africa, the rest of Africa, the Americas, Europe, the Middle East (West Asia), Russia, Southeast Asia, East Asia, and Oceania.The Sasol group structure is organised into two upstream business units, three regional operating hubs and four customer-facing strategic business units. Operating business units Operating Business Units comprise the mining division and exploration and production of oil and gas activities, focused on feedstock supply.Sasol Mining operates six coal mines that supply feed-stock for Secunda (Sasol Synfuels) and Sasolburg (Sasolburg Operations) complexes in South Africa. While the coal supplied to Sasol Synfuels is mainly used as gasification feedstock, some is used to generate electricity. The coal supplied to the Sasolburg Operations is used to generate electricity and steam. Coal is also exported from the Twistdraai Export Plant to international power generation customers. Sasol Exploration and Production International (SEPI) develops and manages the group's upstream interests in oil and gas exploration and production in Mozambique, South Africa, Canada, Gabon, and Australia. 
Regional operating hubs
These include operations in Southern Africa, North America and Eurasia. The Southern African Operations business cluster is responsible for Sasol's entire Southern Africa operations portfolio, which comprises all downstream operations and related infrastructure in the region. This combined operational portfolio has simplified and consolidated responsibilities relating to the company's operating facilities in Secunda, which are divided into a synthetic fuels and a chemicals component; Sasolburg; Natref, Sasol's joint-venture inland refinery with TotalEnergies; and Satellite Operations, a consolidation of all Sasol's operating activities outside of Secunda and Sasolburg. The International Operations business cluster is responsible for Sasol's international operations in Eurasia and North America, which include its US mega-projects in Lake Charles, Louisiana.
Strategic business units
Energy business
Southern Africa Energy
International Energy
The energy business manages the marketing and sales of all oil, gas and electricity products in Southern Africa, which have been consolidated under a single umbrella. In addition, this cluster oversees Sasol's international GTL (gas-to-liquids) ventures in Qatar, Nigeria and Uzbekistan.
Chemical business
Base Chemicals
Performance Chemicals
The global chemicals business includes the marketing and sales of all chemical products, both in southern Africa and internationally. The chemicals business is divided into two groupings: Base Chemicals, which covers its fertiliser, polymer and solvent products, and Performance Chemicals, comprising key products which include surfactants, surfactant intermediates, fatty alcohols, linear alkyl benzene (LAB), short-chain linear alpha olefins, ethylene, petrolatum, paraffin waxes, synthetic waxes, cresylic acids, high-quality carbon solutions as well as high-purity and ultra-high-purity alumina and a speciality gases sub-division. In South Africa, the chemical businesses are integrated into the Fischer–Tropsch value chain. Outside South Africa, the chemical businesses are operated based on backward integration into feedstock and/or competitive market positions.
Group functions
Group Technology manages the research and development, technology innovation and management, engineering services and capital project management portfolios. Group Technology includes Research and Technology (R&T), Engineering and Project Services and Capital Projects.
Major projects
United States
Sasol has given final approval for a US$11 billion ethane cracker and derivatives plant near Westlake and the community of Mossville, both across the Calcasieu River from Lake Charles, Louisiana; it is the largest foreign investment in the history of the State of Louisiana. The company stated: "Once commissioned, this world-scale petrochemicals complex will roughly triple the company's chemical production capacity in the United States, enabling Sasol to further strengthen its position in a growing global chemicals market. The U.S. Gulf Coast's robust infrastructure for transporting and storing abundant, low-cost ethane was a key driver in the decision to invest in America". The ethane cracker will also be supported by six chemical manufacturing plants. By January 2015 construction was in full swing. At peak the project will create 5,000 construction jobs and 1,200 permanent jobs, and cost $11 billion to $14 billion.
Qatar
The Oryx GTL plant in Qatar is a joint venture between Sasol and QatarEnergy, launched in 2007.
The plant produces more than 32,000 barrels per day (5,100 m³/d) of GTL diesel, GTL naphtha and liquefied petroleum gas.
Uzbekistan
The proposed Uzbekistan GTL project is a partnership between Sasol, Uzbekneftegaz and Petronas. Sasol reconsidered its involvement in March 2016.
Mozambique
Sasol is developing a 140 MW gas-fired electricity generation plant in partnership with the power utility EDM. This gas project came into operation in 2004, and is a joint venture between Sasol Petroleum International, Empresa Nacional de Hidrocarbonetos (ENH), and the International Finance Corporation.
Technology
Natref Refinery
Sasol is also involved in conventional oil refining. Natref was incorporated on 8 December 1967; construction of the refinery started in 1968 and it was commissioned in Sasolburg in 1971. It was built as a joint venture between the Industrial Development Corporation (IDC), the National Iranian Oil Company and Compagnie Française des Pétroles (Total), with financing by the Rembrandt Group, Volkskas and SA Mutual. By 1979, prior to the Iranian Revolution, the refinery was receiving seventy percent of its oil from Iran, and the National Iranian Oil Company owned 17.5 percent of the facility. Oil was piped 800 km from Durban via Richards Bay to the refinery. The refinery is now a joint venture between Sasol Ltd and Total South Africa (Pty) Ltd: Sasol has a 63.64 percent interest in Natref and Total South Africa a 36.36 percent shareholding. The refining capacity of Natref is 108,500 barrels per day. Natref is one of the few inland refineries in South Africa. It was designed to extract the maximum value from crude oil: the refinery uses a bottoms-upgrading refining process on medium-gravity crude oil and is capable of producing 70% more white product than coastal refineries that have to rely on heavy fuel oil. Some of the products produced at the refinery are diesel, petrol, jet fuel, LPG, illuminating paraffin, bitumen and sulfur. Natref has been certified in terms of the ISO 14001 Environmental Management System.
Controversies
In 2009 Sasol agreed to pay an administrative penalty of R188 million as part of a settlement agreement with the Competition Commission of South Africa over alleged price fixing, in a case in which a competitor alleged that Sasol was abusing its dominance in the fertiliser markets by charging excessive prices for certain products. Sasol won an appeal in the case and will no longer be paying the settlement.
Sasol also had to pay a €318 million fine (about R3.7 billion) to the European Commission (EC) in 2008 for participating in a paraffin wax cartel. Despite its indication that it would appeal the fine amount, the full amount had to be paid to the EC within three months of the fine being issued.
On 30 June 2017 the Tax Court levied a R1.2bn tax liability on Sasol arising from its international crude oil purchases between 2005 and 2012. In its 2017 financial results, announced on 21 August 2017, the chemical conglomerate agreed to pay the R1.2bn tax liability. If the court's interpretation is applied to the following two years – 2013 and 2014 – Sasol Oil's crude purchases could result in a further tax exposure of R11.6bn, bringing the total potential tax figure to R12.8bn.
A $4 billion cost and schedule overrun at Sasol's Lake Charles project resulted in the resignation of the joint CEOs in October 2019.
Adverse weather, poor subsurface conditions, and a "culture of fear" which undermined transparent reporting were cited as contributing reasons for the over-runs.
Greenhouse gas emissions
Due to the stoichiometry of hydrogen production by coal gasification and the Fischer–Tropsch reaction, the production of Sasol Secunda's liquid fuels results in some of the highest specific GHG emissions in the world. Sasol's Secunda CTL plant is, as of 2020, the world's largest point source of greenhouse gas, at 56.5 Mt/a CO2. Sasol is exploring alternatives such as green hydrogen, which is currently being produced by the electrolysers at the Sasolburg plant; these are powered by renewable energy in the form of a 3 MW solar farm. Air Liquide acquired 16 of Sasol's energy-intensive cryogenic air separation trains in 2020, which are capable of producing 42,000 t/d of pure oxygen. Sasol and Air Liquide plan to purchase 900 MW of renewable electricity, as wind and solar power, to reduce CO2 emissions as per their stated emissions reduction programmes: a 30% reduction in CO2 from the FY17 baseline by 2030.
See also
List of petroleum companies
Fischer–Tropsch process
Coal gasification commercialization
References
External links
Sasol Official website
How Nazi Germany and apartheid South Africa perfected one of the world's most exciting new fuel sources. Slate Magazine
Google Maps view of Sasol's Secunda plant
SciFest Africa, part of Sasol's investment in science education in South Africa
Sasol suffers shareholder fury at AGM over €318m EU fine
Activist shareholder turns the screws on Sasol
Fury at Sasol's AGM over price-fixing fine
Greenpeace Sues Sasol for Corporate Espionage – video report by Democracy Now!
Carbon monitoring
Carbon monitoring as part of greenhouse gas monitoring refers to tracking how much carbon dioxide or methane is produced by a particular activity at a particular time. For example, it may refer to tracking methane emissions from agriculture, or carbon dioxide emissions from land use changes, such as deforestation, or from burning fossil fuels, whether in a power plant, automobile, or other device. Because carbon dioxide is the greenhouse gas emitted in the largest quantities, and methane is an even more potent greenhouse gas, monitoring carbon emissions is widely seen as crucial to any effort to reduce emissions and thereby slow climate change. Monitoring carbon emissions is key to the cap-and-trade programs currently operating in Europe and California, and will be necessary for tracking commitments under any future program or agreement, such as the Paris Agreement. The lack of reliable sources of consistent data on carbon emissions is a significant barrier to efforts to reduce emissions.
Data sources
Sources of such emissions data include:
Carbon Monitoring for Action (CARMA) – An online database provided by the Center for Global Development that includes plant-level emissions for more than 50,000 power plants and 4,000 power companies around the world, as well as the total emissions from power generation of countries, provinces (or states), and localities. Carbon emissions from power generation account for about 25 percent of global CO2 emissions.
ETSWAP – An emissions monitoring and reporting system currently in use in the UK and Ireland, which enables relevant organizations to monitor, verify and report carbon emissions, as is required by the EU ETS (European Union Emissions Trading Scheme).
FMS – A system used in Germany to record and calculate annual emission reports for plant operators subject to the EU ETS.
Remaining global carbon budget
Carbon emissions are also monitored on a global scale (with data for countries, sectors, companies, activities, etc.).
In the United States
Almost all climate change regulations in the US have stipulations to reduce carbon dioxide and methane emissions by economic sector, so being able to accurately monitor and assess these emissions is crucial to assessing compliance with these regulations. Emissions estimates at the national level have been shown to be fairly accurate, but at the state level there is still much uncertainty. The US pledged to decrease its GHG emissions by 26–28% relative to 2005 levels by 2025 as part of the Paris Agreement negotiated at COP21. To comply with these commitments, it is necessary to quantify emissions from specific source sectors. A source sector is a sector of the economy that emits a particular greenhouse gas, e.g. methane emissions from the oil and gas industry, which the US has pledged to decrease by 40–45% relative to 2012 levels by 2025 as a more specific action towards achieving its Paris Agreement contribution. Currently, most governments, including the US government, estimate carbon emissions with a "bottom-up" approach, using emission factors which give the rate of carbon emissions per unit of a certain activity, combined with data on how much of that activity has taken place. For example, an emission factor can be determined for the amount of carbon dioxide emitted per gallon of gasoline burned, and this can be combined with data on gasoline sales to get an estimate of carbon emissions from light-duty vehicles.
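The arithmetic behind a bottom-up estimate of this kind is simply activity data multiplied by an emission factor. The short Python sketch below illustrates the gasoline example; the emission factor (roughly 8.9 kg of CO2 per gallon of gasoline burned) and the sales figure are illustrative assumptions, not official inventory values.

```python
# Minimal sketch of a bottom-up emissions estimate: activity data x emission factor.
# The emission factor and the activity figure below are illustrative assumptions,
# not official inventory values.

EMISSION_FACTOR_KG_CO2_PER_GALLON = 8.9  # approx. CO2 released per gallon of gasoline burned

def bottom_up_co2_tonnes(gallons_of_gasoline_sold: float) -> float:
    """Estimate CO2 emissions, in metric tonnes, from gasoline sales data."""
    kg_co2 = gallons_of_gasoline_sold * EMISSION_FACTOR_KG_CO2_PER_GALLON
    return kg_co2 / 1000.0  # convert kilograms to metric tonnes

if __name__ == "__main__":
    gallons_sold = 1_000_000_000  # hypothetical: one billion gallons sold in a state in a year
    print(f"Estimated emissions: {bottom_up_co2_tonnes(gallons_sold):,.0f} tonnes CO2")
```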
Other bottom-up examples include determining the number of cows in various locations, or the mass of coal burned at power plants, and combining these data with the appropriate emission factors to estimate methane or carbon dioxide emissions. Sometimes "top-down" methods are used to monitor carbon emissions instead. These involve measuring the concentration of a greenhouse gas in the atmosphere and using these measurements to determine the distribution of emissions which caused the resulting concentrations.
Accounting by sector can be complicated when there is a chance of double counting. For example, when coal is gasified to produce synthetic natural gas, which is then mixed with natural gas and burned at a gas-fired power plant, the resulting emissions must be counted in only one sector: if they are reported under the natural gas sector, they must be subtracted from the coal sector to be properly accounted for.
NASA Carbon Monitoring System (CMS)
NASA Carbon Monitoring System (CMS) is a climate research program created by a congressional order in 2010 that provides grants of about $500,000 a year for climate research measuring carbon dioxide and methane emissions. Using instruments on satellites and airplanes, CMS-funded research projects provide data to the United States and other countries that help track the progress of individual nations towards their Paris climate emission cut agreements. For example, CMS projects measured carbon emissions from deforestation and forest degradation. CMS "stitch[ed] together observations of sources and sinks into high-resolution models of the planet's flows of carbon." The 2019 federal budget specifically assured funding for CMS, after the Trump administration proposed to end funding.
In the European Union
As part of the European Union Emission Trading Scheme (EU-ETS), carbon monitoring is necessary in order to ensure compliance with the cap-and-trade program. This carbon monitoring program has three main components: atmospheric carbon dioxide measurements, bottom-up carbon dioxide emissions maps, and an operational data-assimilation system to synthesize the information from the first two components.
The top-down, atmospheric measurement approach involves satellite data and in-situ measurements of carbon dioxide concentrations, as well as atmospheric models that simulate the atmospheric transport of carbon dioxide. These have limited ability to determine carbon dioxide emissions at highly resolved spatial scales and typically cannot resolve scales finer than a 1 km grid. The models must also resolve the fluxes of carbon dioxide from anthropogenic sources like fossil fuel burning, and from natural interactions with terrestrial ecosystems and the ocean. Due to the complexities and limitations of the top-down approach, the EU combines this method with a bottom-up approach. The current bottom-up data are based on information that is self-reported by emitters in the trading scheme. However, the EU is trying to improve this information source and has proposed plans for improved bottom-up emissions maps, which will have greatly improved spatial resolution and near real-time updates.
An operational data system to combine the information gathered from the two aforementioned sources is also planned. The EU hopes that by the 2030s this will be operational and will enable a highly sophisticated carbon monitoring program across the European Union.
Satellites
Satellites can be used to monitor carbon dioxide concentrations from orbit.
NASA currently operates a satellite named the Orbiting Carbon Observatory-2 (OCO-2), and Japan operates its own satellite, the Greenhouse Gases Observing Satellite (GOSAT). These satellites can provide valuable information to fill in data gaps in emission inventories. The OCO-2 measured a strong flux of carbon dioxide over the Middle East which had not been represented in emissions inventories, indicating that important sources were being neglected in bottom-up estimates of emissions. These satellites currently have errors of about 0.5% in their measurements, but the American and Japanese teams hope to reduce the errors to 0.25%. China launched its own satellite to monitor greenhouse gas concentrations on Earth, TanSat, in December 2016. It has a three-year mission planned and will take readings of carbon dioxide concentrations every 16 days.
See also
Top contributors to greenhouse gas emissions
List of countries by carbon dioxide emissions
Supply chain management
References
External links
Climatechange.gov.au
Edie.net
North Sea oil
North Sea oil is a mixture of hydrocarbons, comprising liquid petroleum and natural gas, produced from petroleum reservoirs beneath the North Sea. In the petroleum industry, the term "North Sea" often includes areas such as the Norwegian Sea and the area known as "West of Shetland", "the Atlantic Frontier" or "the Atlantic Margin" that is not geographically part of the North Sea. Brent crude is still used today as a standard benchmark for pricing oil, although the contract now refers to a blend of oils from fields in the northern North Sea. By 2014 it was reported that 42 billion barrels of oil equivalent (BOE) had been extracted from the North Sea since production began in the 1960s. As there is an estimated 24 billion BOE potentially remaining (equivalent to about 35 years' worth of production), the North Sea will remain an important petroleum-producing region for years to come. However, this is the upper end of a range of estimates provided by Sir Ian Wood (commissioned by the UK government to carry out a review of the oil industry in the United Kingdom); the lower end was 12 billion barrels. Wood, upset with how his figures were being used, said the most likely amount to be found would be between 15 billion and 16 billion barrels.
History
1851–1963
Commercial extraction of oil on the shores of the North Sea dates back to 1851, when James Young retorted oil from torbanite (boghead coal, or oil shale) mined in the Midland Valley of Scotland. Across the sea in Germany, oil was found in the Wietze field near Hanover in 1859, leading to the discovery of seventy more fields, mostly in Lower Cretaceous and Jurassic reservoirs, producing a combined total of around 1,340 m³ (8,400 barrels) per day.
Gas was found by chance in a water well near Hamburg in 1910, leading to minor gas discoveries in Zechstein dolomites elsewhere in Germany. In England, BP discovered gas in similar reservoirs in the Eskdale anticline in 1938, and in 1939 it found commercial oil in Carboniferous rocks at Eakring in Nottinghamshire. Discoveries elsewhere in the East Midlands lifted production to 400 m³ (2,500 barrels) per day, and a second wave of exploration from 1953 to 1961 found the Gainsborough field and ten smaller fields.
The Netherlands' first oil shows were seen in a drilling demonstration at De Mient during the 1938 World Petroleum Congress at The Hague. Subsequent exploration led to the 1943 discovery by Exploratie Nederland, part of the Royal Dutch/Shell company Bataafsche Petroleum Maatschappij, of oil under the Dutch village of Schoonebeek, near the German border. NAM found the Netherlands' first gas in Zechstein carbonates at Coevorden in 1948. 1952 saw the first exploration well in the province of Groningen, Haren-1, which was the first to penetrate the Lower Permian Rotliegendes sandstone that is the main reservoir for the gas fields of the southern North Sea, although in Haren-1 it contained only water. The Ten Boer well failed to reach target depth for technical reasons, but was completed as a minor gas producer from the Zechstein carbonates. The Slochteren-1 well found gas in the Rotliegendes in 1959, although the full extent of what became known as the Groningen gas field was not appreciated until 1963; it is currently estimated at ≈96×10¹² cu ft (2,700 km³) of recoverable gas reserves. Smaller discoveries to the west of Groningen followed.
1964–present
The UK Continental Shelf Act came into force in May 1964. Seismic exploration and the first well followed later that year.
It and a second well on the Mid North Sea High were dry, as the Rotliegendes was absent, but BP's Sea Gem rig struck gas in the West Sole Field in September 1965. The celebrations were short-lived since the Sea Gem sank, with the loss of 13 lives, after part of the rig collapsed as it was moved away from the discovery well. The Viking Gas Field was discovered in December 1965 with the Conoco/National Coal Board well 49/17-1, finding the gas-bearing Permian Rotliegend Sandstone at a depth of 2,756 m subsea. Helicopters were first used to transport workers to the offshore installations. Larger gas finds followed in 1966 – Leman Bank, Indefatigable and Hewett – but by 1968 companies had lost interest in further exploration of the British sector, a result of a ban on gas exports and low prices offered by the only buyer, British Gas. West Sole came onstream in May 1967. Licensing regulations for Dutch waters were not finalised until 1967. The situation was transformed in December 1969, when Phillips Petroleum discovered oil in Chalk of Danian age at Ekofisk, in Norwegian waters in the central North Sea. The same month, Amoco discovered the Montrose Field about 217 km (135 mi) east of Aberdeen. The original objective of the well had been to drill for gas, to test the idea that the southern North Sea gas province extended to the north; Amoco was astonished when the well discovered oil. BP had been awarded several licences in the area in the second licensing round late in 1965, but had been reluctant to work on them. The discovery of Ekofisk prompted them to drill what turned out to be a dry hole in May 1970, followed by the discovery of the giant Forties Oil Field in October 1970. The following year, Shell Expro discovered the giant Brent oilfield in the northern North Sea east of Shetland in Scotland, and the Petronord Group discovered the Frigg gas field. The Piper oilfield was discovered in 1973, and the Statfjord Field and the Ninian Field in 1974, with the Ninian reservoir consisting of Middle Jurassic sandstones at a depth of 3,000 m subsea in a "westward tilted horst block". Offshore production, like that of the North Sea, became more economical after the 1973 oil crisis caused the world oil price to quadruple, followed by the 1979 oil crisis, which caused another tripling in the oil price. Oil production started from the Argyll & Duncan Oilfields (now the Ardmore) in June 1975, followed by the Forties Oil Field in November of that year. The inner Moray Firth Beatrice Field – a Jurassic sandstone/shale reservoir 1,829 m deep in a "fault-bounded anticlinal trap", and named after T. Boone Pickens' wife Bea, "the only oil field in the North Sea named for a woman" – was discovered in 1976 with well 11/30-1, drilled by the Mesa Petroleum Group in 49 m of water. Volatile weather conditions in Europe's North Sea have made drilling particularly hazardous, claiming many lives (see Oil platform). The conditions also make extraction a costly process; by the 1980s, costs for developing new methods and technologies to make the process both efficient and safe far exceeded NASA's budget to land a man on the moon. The exploration of the North Sea has continually pushed the edges of the technology of exploitation (in terms of what can be produced) and later the technologies of discovery and evaluation (2-D seismic, followed by 3-D and 4-D seismic; sub-salt seismic; immersive display and analysis suites; and supercomputing to handle the flood of computation required). The Gullfaks oil field was discovered in 1978.
The Snorre Field was discovered in 1979, producing from the Triassic Lunde Formation and the Triassic–Jurassic Statfjord Formation, both fluvial sandstones in a mudstone matrix. The Oseberg oil field and Troll gas field were also discovered in 1979. The Miller oilfield was discovered in 1983. The Alba Field, which produces from sandstones in the middle Eocene Alba Formation at 1,860 m subsea, was discovered in 1984 in UKCS Block 16/26. The Smørbukk Field was discovered in 1984 in 250–300 m of water and produces from Lower to Middle Jurassic sandstone formations within a fault block. The Snøhvit Gas Field and the Draugen oil field were also discovered in 1984, and the Heidrun oil field in 1985.
The largest UK field discovered in the past twenty-five years is Buzzard, also located off Scotland, found in June 2001 with producible reserves of almost 64×10⁶ m³ (400 million bbl) and an average output of 28,600 m³ to 30,200 m³ (180,000–220,000 bbl) per day.
The largest field found in the past five years on the Norwegian part of the North Sea is the Johan Sverdrup oil field, which was discovered in 2010. It is one of the largest discoveries made on the Norwegian Continental Shelf. Total reserves of the field are estimated at 1.7 to 3.3 billion barrels of gross recoverable oil, and Johan Sverdrup is expected to produce 120,000 to 200,000 barrels of oil per day. Production started on 5 October 2019.
As of January 2015, the North Sea was the world's most active offshore drilling region, with 173 active rigs drilling. By May 2016, the North Sea oil and gas industry was financially stressed by the reduced oil prices, and called for government support.
The distances, number of workplaces, and fierce weather in the 750,000 square kilometre (290,000 square mile) North Sea area require the world's largest fleet of heavy instrument flight rules (IFR) helicopters, some specifically developed for the North Sea. They carry about two million passengers per year from sixteen onshore bases, of which Aberdeen Airport is the world's busiest, with 500,000 passengers per year.
Licensing
Following the 1958 Convention on the Continental Shelf, and after some disputes over the rights to natural resource exploitation, the national limits of the exclusive economic zones were ratified. Five countries are involved in oil production in the North Sea. All operate a tax and royalty licensing regime. The respective sectors are divided by median lines agreed in the late 1960s:
Norway – Oljedirektoratet (the Norwegian Petroleum Directorate) grants licences. The NCS is also divided into quads of 1 degree by 1 degree. Norwegian licence blocks are larger than British blocks, being 15 minutes of latitude by 20 minutes of longitude (12 blocks in a quad). As in Britain, there are numerous part blocks formed by re-licensing relinquished areas.
United Kingdom – Exploration and production licences are regulated by the Oil and Gas Authority following the 2014 Wood Review on maximising UKCS (United Kingdom Continental Shelf) oil and gas recovery. Licences were formerly granted by the Department of Energy and Climate Change (DECC – formerly the Department of Trade and Industry). The UKCS is divided into quadrants of 1 degree latitude and 1 degree longitude. Each quadrant is divided into 30 blocks measuring 10 minutes of latitude and 12 minutes of longitude. Some blocks are divided further into part blocks where some areas are relinquished by previous licensees.
For example, block 13/24a is located in quad 13 and is the 24th block and the 'a' part block. The UK government has traditionally issued licences via periodic (now annual) licensing rounds. Blocks are awarded on the basis of the work programme bid by the participants. The UK government has actively solicited new entrants to the UKCS via "promote" licensing rounds with less demanding terms and the fallow acreage initiative, under which non-active licences have to be relinquished.
Denmark – Energistyrelsen (the Danish Energy Agency) administers the Danish sector. The Danes also divide their sector of the North Sea into 1 degree by 1 degree quadrants; their blocks, however, are 10 minutes latitude by 15 minutes longitude. Part blocks exist where partial relinquishment has taken place.
Germany – Germany and the Netherlands share a quadrant and block grid – quadrants are given letters rather than numbers. The blocks are 10 minutes latitude by 20 minutes longitude.
Netherlands – The Dutch sector is located in the Southern Gas Basin and shares a grid pattern with Germany.
Reserves and production
The Norwegian and British sectors hold most of the large oil reserves. It is estimated that the Norwegian sector alone contains 54% of the sea's oil reserves and 45% of its gas reserves. More than half of the North Sea oil reserves have been extracted, according to official sources in both Norway and the UK. For Norway, Oljedirektoratet gives a figure of 4,601 million cubic metres of oil (corresponding to 29 billion barrels) for the Norwegian North Sea alone (excluding smaller reserves in the Norwegian Sea and Barents Sea), of which 2,778 million cubic metres (60%) had already been produced prior to January 2007. UK sources give a range of estimates of reserves, but even using the most optimistic "maximum" estimate of ultimate recovery, 76% had been recovered as of the end of 2010. Note that the UK figure includes fields which are not in the North Sea (onshore, West of Shetland). United Kingdom Continental Shelf production was 137 million tonnes of oil and 105 billion m³ of gas in 1999 (1 tonne of crude oil converts to 7.5 barrels). The Danish explorations of Cenozoic stratigraphy, undertaken in the 1990s, showed petroleum-rich reserves in the northern Danish sector, especially the Central Graben area. The Dutch area of the North Sea followed through with onshore and offshore gas exploration and the drilling of wells. Exact figures are debatable, because methods of estimating reserves vary and it is often difficult to forecast future discoveries.
Peaking and decline
Official production data from 1995 to 2020 are published by the UK government; Table 3.10 of that publication lists annual production, imports and exports over the period. When it peaked in 1999, production of North Sea oil was 128 million tonnes per year, approximately 950,000 m³ (6 million barrels) per day, having risen by about 5% from the early 1990s. However, by 2010 this had halved to under 60 million tonnes/year, and it continued to decline; between 2015 and 2020 it hovered between 40 and 50 million tonnes/year, at around 35% of the 1999 peak. From 2005 the UK became a net importer of crude oil, and as production declined the amount imported slowly rose, to about 20 million tonnes per year by 2020. Similar historical data are available for gas. Natural gas production peaked at nearly 10 trillion cubic feet (280×10⁹ m³) in 2001, representing some 1.2 GWh of energy; by 2018 UK production had declined to 1.4 trillion cubic feet (41×10⁹ m³).
Over a similar period, energy from gas imports has risen by a factor of approximately 10, from 60 GWh in 2001 to just over 500 GWh in 2019. UK oil production has seen two peaks, in the mid-1980s and the late 1990s, with a decline to around 300×10³ m³ (1.9 million barrels) per day in the early 1990s. Monthly oil production peaked at 13.5×10⁶ m³ (84.9 million barrels) in January 1985, although the highest annual production was seen in 1999, with offshore oil production in that year of 407×10⁶ m³ (398 million barrels); production had declined to 231×10⁶ m³ (220 million barrels) by 2007. This was the largest decrease of any oil-exporting nation in the world, and has led to Britain becoming a net importer of crude for the first time in decades, as recognized by the energy policy of the United Kingdom. Norwegian crude oil production as of 2013 was 1.4 mbpd, a more than 50% decline from the peak of 3.2 mbpd in 2001.
Geology
(A table summarising the geological disposition of the UK's oil and gas fields appeared here.)
Carbon dioxide sequestration
In the North Sea, Norway's Equinor natural-gas platform Sleipner strips carbon dioxide out of the natural gas with amine solvents and disposes of this carbon dioxide by geological sequestration ("carbon sequestration") while keeping up gas production pressure. Sleipner reduces emissions of carbon dioxide by approximately one million tonnes a year; that is about 1⁄9000th of global emissions. The cost of geological sequestration is minor relative to the overall running costs.
See also
References
Further reading
Kemp, Alex. The Official History of North Sea Oil and Gas. Volume I: The Growing Dominance of the State; Volume 2: Moderating the State's Role (2011).
Kemp, Alexander G., C. Paul Hallwood, and Peter Woods Wood. "The benefits of North Sea oil." Energy Policy 11.2 (1983): 119–130.
Nelsen, Brent F., The State Offshore: Petroleum, Politics, and State Intervention on the British and Norwegian Continental Shelves (1991).
Noreng, Oystein. The oil industry and government strategy in the North Sea (1980).
Page, S. A. B. "The Value and Distribution of the Benefits of North Sea Oil and Gas, 1970–1985." National Institute Economic Review 82.1 (1977): 41–58.
Shepherd, Mike. Oil Strike North Sea: A first-hand history of North Sea oil. Luath Press (2015).
Toye, Richard. "The New Commanding Height: Labour Party Policy on North Sea Oil and Gas, 1964–74." Contemporary British History 16.1 (2002): 89–118.
External links
Energy Voice Norway 2015
North Sea oil at the (US) Energy Information Administration
UK Department for Business, Enterprise & Regulatory Reform
Danish North Sea oil and gas production Archived 2006-09-07 at the Wayback Machine, Danish Energy Authority
OLF Norwegian Operators association
Oil & Gas UK
Petroleum Exploration Society of Great Britain
The OilCity Project, stories and anecdotes from people involved in the North Sea Oil & Gas industry.
Interactive Map over the Norwegian Continental Shelf, live information, facts, pictures and videos.
Oil and the City, Aberdeen's relationship with the oil industry.
Map of oil and gas infrastructure in the Danish sector
Map of oil and gas infrastructure in the British sector
Copenhagen Accord
The Copenhagen Accord is a document which delegates at the 15th session of the Conference of Parties (COP 15) to the United Nations Framework Convention on Climate Change agreed to "take note of" at the final plenary on 18 December 2009. The Accord, drafted by the United States on the one hand and, acting in a united position, the BASIC countries (China, India, South Africa, and Brazil) on the other, is not legally binding and does not commit countries to agree to a binding successor to the Kyoto Protocol, whose first commitment period ended in 2012.
Summary
The Accord:
Endorses the continuation of the Kyoto Protocol.
Underlines that climate change is one of the greatest challenges of our time and emphasises a "strong political will to urgently combat climate change in accordance with the principle of common but differentiated responsibilities and respective capabilities".
To prevent dangerous anthropogenic interference with the climate system, recognizes "the scientific view that the increase in global temperature should be below 2 degrees Celsius", in a context of sustainable development, to combat climate change.
Recognizes "the critical impacts of climate change and the potential impacts of response measures on countries particularly vulnerable to its adverse effects" and stresses "the need to establish a comprehensive adaptation programme including international support".
Recognizes that "deep cuts in global emissions are required according to science" (IPCC AR4) and agrees cooperation in peaking (stopping the rise of) global and national greenhouse gas emissions "as soon as possible", and that "a low-emission development strategy is indispensable to sustainable development".
States that "enhanced action and international cooperation on adaptation is urgently required to... reduce vulnerability and build... resilience in developing countries, especially in those that are particularly vulnerable, especially least developed countries (LDCs), small island developing states (SIDS) and Africa" and agrees that "developed countries shall provide adequate, predictable and sustainable financial resources, technology and capacity-building to support the implementation of adaptation action in developing countries".
For mitigation purposes, agrees that developed countries (Annex I Parties) would "commit to economy-wide emissions targets for 2020", to be submitted by 31 January 2010, and agrees that these Parties to the Kyoto Protocol would strengthen their existing targets. Delivery of reductions and finance by developed countries will be measured, reported and verified (MRV) in accordance with COP guidelines.
Agrees that developing nations (non-Annex I Parties) would "implement mitigation actions" (Nationally Appropriate Mitigation Actions) to slow growth in their carbon emissions, submitting these by 31 January 2010. LDCs and SIDS may undertake actions voluntarily and on the basis of (international) support.
Agrees that developing countries would report those actions once every two years via the U.N. climate change secretariat, subject to their domestic MRV.
NAMAs seeking international support will be subject to international MRV.
Recognizes "the crucial role of reducing emission from deforestation and forest degradation and the need to enhance removals of greenhouse gas emission by forests", and the need to establish a mechanism (including REDD-plus) to enable the mobilization of financial resources from developed countries to help achieve this.
Decides to pursue opportunities to use markets to enhance the cost-effectiveness of, and to promote, mitigation actions. Developing countries, especially those with low-emitting economies, should be provided incentives to continue to develop on a low-emission pathway.
States that "scaled up, new and additional, predictable and adequate funding as well as improved access shall be provided to developing countries... to enable and support enhanced action".
Agrees that developed countries would raise funds of $30 billion from 2010–2012 of new and additional resources.
Agrees a "goal" for the world to raise $100 billion per year by 2020, from "a wide variety of sources", to help developing countries cut carbon emissions (mitigation). New multilateral funding for adaptation will be delivered, with a governance structure.
Establishes a Copenhagen Green Climate Fund, as an operating entity of the financial mechanism, "to support projects, programme, policies and other activities in developing countries related to mitigation". To this end, creates a High Level Panel.
Establishes a Technology Mechanism "to accelerate technology development and transfer...guided by a country-driven approach".
Calls for "an assessment of the implementation of the Accord to be completed by 2015. This would include consideration of strengthening the long-term goal", for example to limit temperature rises to 1.5 degrees.
Emissions pledges
To date, countries representing over 80% of global emissions have engaged with the Copenhagen Accord. 31 January 2010 was an initial deadline set under the Accord for countries to submit emissions reduction targets; however, UNFCCC Executive Secretary Yvo de Boer later clarified that this was a "soft deadline". Countries continued to submit pledges past this deadline. A selection of reduction targets is shown below; all are for the year 2020.
Compared to 1990:
EU: 20% – 30%
Japan: 25%
Russia: 15% – 25%
Ukraine: 20%
Compared to 2000:
Australia: 5% – 25%
Compared to 2005:
Canada: 17%
US: 17%
Compared to business as usual:
Brazil: 36.1% – 38.9%
Indonesia: 26%
Mexico: 30%
South Africa: 34%
South Korea: 30%
Carbon intensity compared to 2005:
China: 40% – 45%
India: 20% – 25%
China also promised to increase the share of non-fossil fuels in primary energy consumption to around 15% by 2020, and to increase forest coverage by 40 million hectares and forest stock volume by 1.3 billion cubic meters by 2020 from the 2005 levels.
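Because the pledges above are expressed against different baselines (a historical year, a business-as-usual projection, or carbon intensity), the same nominal percentage can imply very different absolute emission levels in 2020. The Python sketch below illustrates the conversion; all numbers in it are hypothetical placeholders, not actual national figures.

```python
# Illustrative conversion of differently-framed pledges into an absolute 2020
# emissions level. Every number here is a hypothetical placeholder, not an
# actual national inventory, BAU projection, or GDP figure.

def target_vs_base_year(base_year_emissions_mt: float, cut: float) -> float:
    """Pledge framed against a historical base year (e.g. 1990 or 2005)."""
    return base_year_emissions_mt * (1.0 - cut)

def target_vs_bau(bau_2020_emissions_mt: float, cut: float) -> float:
    """Pledge framed against a projected business-as-usual (BAU) 2020 level."""
    return bau_2020_emissions_mt * (1.0 - cut)

def target_vs_intensity(gdp_2020: float, intensity_2005: float, cut: float) -> float:
    """Pledge framed as a cut in carbon intensity (emissions per unit of GDP)."""
    return gdp_2020 * intensity_2005 * (1.0 - cut)

if __name__ == "__main__":
    # Hypothetical country: 500 Mt emitted in 2005, BAU projection of 700 Mt for
    # 2020, and GDP doubling between 2005 and 2020 (GDP index 1.0 -> 2.0).
    print(target_vs_base_year(500, 0.17))              # 17% below 2005 -> 415 Mt
    print(target_vs_bau(700, 0.30))                    # 30% below BAU  -> 490 Mt
    print(target_vs_intensity(2.0, 500 / 1.0, 0.40))   # 40% intensity cut -> 600 Mt
```

As the toy numbers show, a BAU-relative or intensity-based pledge with a larger headline percentage can still permit absolute emissions to rise above the historical base-year level.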
Responses
The G77 said that the Accord would only secure the economic security of a few nations. Australia was happy overall but "wanted more". India was "pleased", while noting that the Accord "did not constitute a mandate for future commitment". The United States said that the agreement would need to be built on in the future, and that "We've come a long way but we have much further to go." The United Kingdom said "We have made a start" but that the agreement needed to become legally binding quickly; Gordon Brown also accused a small number of nations of holding the Copenhagen talks to ransom. China's delegation said that "The meeting has had a positive result, everyone should be happy." Wen Jiabao, China's premier, said that the weak agreement was due to distrust between nations: "To meet the climate change challenge, the international community must strengthen confidence, build consensus, make vigorous efforts and enhance co-operation." Brazil's climate change ambassador called the agreement "disappointing". Representatives of the Bolivarian Alliance for the Americas (mainly Venezuela, Bolivia, and Cuba), Sudan, and Tuvalu were unhappy with the outcome. Bolivian president Evo Morales said that, "The meeting has failed. It's unfortunate for the planet. The fault is with the lack of political will by a small group of countries led by the US."
Analysis
US Embassy dispatches released by the whistleblowing site WikiLeaks showed how the US 'used spying, threats and promises of aid' to gain support for the Copenhagen Accord. The emergent US emissions pledge was the lowest by any leading nation. The BBC immediately reported that the status and legal implications of the Copenhagen Accord were unclear. Tony Tujan of the IBON Foundation suggested the failure of Copenhagen may prove useful if it allows us to unravel some of the underlying misconceptions and work towards a new, more holistic view of things. This could help gain the support of developing countries. Lumumba Stansilaus Di-Aping, UN Ambassador from Sudan, indicated that, in its current form, the Accord "is not sufficient to move forward on", and that a new architecture is needed which is just and equitable.
Effect on emissions
In February 2010, a panel discussion was held at MIT, where Henry Jacoby presented the results of an analysis of the pledges made in the Accord. According to his analysis, assuming that the pledges submitted in response to the Accord (as of February 2010) are fulfilled, global emissions would peak around 2020. The resultant stock of emissions was projected to exceed the level required to have a roughly 50% chance of meeting the 2 °C target that is specified in the Accord. Jacoby measured the 2 °C target against pre-industrial temperature levels. According to Jacoby, even emission reductions below those needed to reach the 2 °C target would still have the benefit of reducing the risk of large magnitudes of future climate change. In March 2010, Nicholas Stern gave a talk at the London School of Economics on the outcome of the Copenhagen conference. Stern said that he was disappointed with the outcome of the conference, but saw the Accord as a possible improvement on "business-as-usual" greenhouse gas (GHG) emissions. In his assessment, to have a reasonable chance of meeting the 2 °C target, the preferred emissions level in 2020 would be around 44 gigatons. The voluntary pledges made in the Accord (at that date) would, according to his projection, be above this, nearer to 50 gigatons. In this projection, Stern assumed that countries would fulfil the commitments they had made. Stern compared this projection to a "business-as-usual" emissions path (i.e., the emissions that might have occurred without the Accord). His estimate of "business-as-usual" suggested that without the Accord, emissions might have been above 50 gigatons in 2020.
A study published in the journal Environmental Research Letters found that the Accord's voluntary commitments would probably result in a dangerous increase in the global average temperature of 4.2 °C over the next century. The International Energy Agency (IEA) publication World Energy Outlook 2010 contains a scenario based on the voluntary pledges made in the Copenhagen Accord. In the IEA scenario, it is assumed that these pledges are acted on cautiously, reflecting their non-binding nature. In this scenario, GHG emission trends follow a path which is consistent with a stabilization of GHGs at 650 parts per million (ppm) CO2-equivalent in the atmosphere. In the long term, a 650 ppm concentration could lead to global warming of 3.5 °C above the pre-industrial global average temperature level. World Energy Outlook 2010 suggests another scenario consistent with having a reasonable chance of limiting global warming to 2 °C above the pre-industrial level. In the IEA's scenario, GHG emissions are reduced so as to stabilize the concentration of GHGs in the atmosphere at 450 ppm CO2-eq. This scenario sees countries making vigorous efforts to cut their GHG emissions up to the year 2020, with even stronger action taken thereafter.
A preliminary assessment published in November 2010 by the United Nations Environment Programme (UNEP) suggests a possible "emissions gap" between the voluntary pledges made in the Accord and the emissions cuts necessary to have a "likely" (greater than 66% probability) chance of meeting the 2 °C objective. The UNEP assessment takes the 2 °C objective as being measured against the pre-industrial global mean temperature level. To have a likely chance of meeting the 2 °C objective, the assessed studies generally indicated the need for global emissions to peak before 2020, with substantial declines in emissions thereafter.
Criticism
Concerns over the Accord exist; some of the key criticisms include:
The Accord itself is not legally binding.
No decision was taken on whether to agree a legally binding successor or complement to the Kyoto Protocol.
The Accord sets no real targets to achieve in emissions reductions.
The Accord was drafted by only five countries.
The deadline for assessment of the Accord was set six years out, in 2015.
The mobilisation of 100 billion dollars per year to developing countries will not be fully in place until 2020.
There is no guarantee or information on where the climate funds will come from.
There is no agreement on how much individual countries would contribute to or benefit from any funds.
COP delegates only "took note" of the Accord rather than adopting it.
The head of the G77 has said it will only secure the economic security of a few nations.
There is no international approach to technology.
The Accord appears to "forget" fundamental sectoral mitigation, such as transportation.
It contains implicit biases, such as promoting incentives for low-emitting countries.
See also
2009 United Nations Climate Change Conference
2010 United Nations Climate Change Conference
350.org
Anote Tong
Bali Road Map
Carbon footprint
Climate change
Climate debt
Global warming
Kyoto Protocol
Measurement, reporting and verification (MRV)
Net Capacity Factor
Paris Agreement
Post–Kyoto Protocol negotiations on greenhouse gas emissions
References
External links
NGO Copenhagen treaty – narrative (Vol. 1), Narrative
NGO Copenhagen treaty – legal text (Vol. 2), Legal text
United Nations Framework Convention on Climate Change (UNFCCC), September 15, 2009
From Copenhagen Accord to Climate Action: Tracking National Commitments to Curb Global Warming, Natural Resources Defense Council, 2010.
Natural gas in the United States
Natural gas was the United States' largest source of energy production in 2016, representing 33 percent of all energy produced in the country. Natural gas has been the largest source of electrical generation in the United States since July 2015. In 2012, the United States produced 25.3 trillion cubic feet of marketed natural gas, with an average wellhead value of $2.66 per thousand cubic feet, for a total wellhead value of $67.3 billion. In 2013, the country produced 30.0 trillion cubic feet (TCF) of marketed gas. With 7,545 billion cubic feet (BCF), the leading gas-producing area in the United States in 2013 was Texas, followed by Pennsylvania (3,259 BCF) and Louisiana (2,407 BCF). US natural gas production achieved new record highs each year from 2011 through 2015. Marketed natural gas production in 2015 was 28.8 trillion cubic feet, a 5.4 percent increase over 2014 and a 52 percent increase over the production of 18.9 trillion cubic feet in 2005. The natural gas industry includes exploration for, production, processing, transportation, storage, and marketing of natural gas and natural gas liquids. The exploration for and production of natural gas and petroleum form a single industry, and many wells produce both oil and gas. Because of the greater supply, consumer prices for natural gas are significantly lower in the United States than in Europe and Japan. The low price of natural gas, together with its smaller carbon footprint compared to coal, has encouraged rapid growth in electricity generated from natural gas. Between 2005 and 2014, US production of natural gas liquids (NGLs) increased 70 percent, from 1.74 million barrels per day in 2005 to 2.96 million barrels per day in 2014. Although the United States leads the world in natural gas production, it is only fifth in proved reserves of natural gas, behind Russia, Iran, Qatar, and Turkmenistan.
Industry structure
The United States oil and gas industry is often informally divided into "upstream" (exploration and production), "midstream" (transportation and refining), and "downstream" (distribution and marketing). Petroleum and natural gas share a common upstream (exploration and production) sector, but the midstream and downstream sectors are largely separate. All large oil companies in the US produce both oil and gas. However, the relative amounts of oil and gas produced vary greatly. Of the top ten natural gas-producing companies in the US in 2009, only three (BP, ConocoPhillips, and XTO) were also among the top ten oil producers.
Top natural gas producers in the United States, 2009
In 2009, the production owned by the top ten companies was 31% of total US natural gas production.
Natural gas exploration
In 2010, the industry drilled and completed 16,696 wells primarily for gas, slightly more than the number of wells drilled primarily for oil (15,753). Many wells produced both oil and gas, and oil wells produced 18 percent of US gas production in 2013. Of the gas wells, 1,105 were exploratory wells and 15,591 were development wells.
The number of actively drilling gas rigs was once regarded as a reliable leading indicator of near-future gas production. However, the average number of active gas drilling rigs fell each year for four straight years, from 2010 (942 rigs) to 2014 (332 rigs), a drop of 65 percent, even while gas production rose each year over the same period, from 21.3 trillion cubic feet (TCF) in 2010 to 25.7 TCF in 2014, an increase of 21 percent.
Remaining proved reserves increased overall, from 301 TCF at the end of 2012 to 338 TCF at the end of 2013 (the last year for which reserve figures were available), an increase of 11 percent. The rise in gas production despite fewer rigs drilling has been explained by greater efficiency in drilling and the greater productivity of shale gas wells.
Natural gas production
The U.S. Energy Information Administration publishes annual natural gas production data in aggregate by well type: traditional oil wells and gas wells, coalbed methane wells, and shale gas wells.
Oil and gas
Most oil fields produce some gas, and vice versa, but the ratio of oil and gas varies considerably. In fields developed to produce oil, the natural gas is in a raw form called associated gas. Some fields, called "dry gas" fields, produce only gas. Of the top ten gas-producing fields in the US, only one, the Eagle Ford, is also among the top ten oil fields. The number of wells classified as traditional gas wells has been declining in recent years as they are replaced by shale gas wells. The associated gas from oil wells is used in the same way as gas from other sources, or may be re-injected for storage and to enhance oil production. In some cases the well operator may designate the gas as a waste product, and large amounts of gas may be intentionally vented or flared, depending on local regulations.
Coalbed methane
Coalbed methane production in the US peaked at 1.97 TCF in 2008, when it made up 7.8 percent of US gas production. By 2018, coalbed methane production had declined to 0.95 TCF.
Shale gas
Since 2000, shale gas has become a key source of natural gas in the US. Production rose more than tenfold from 2007 to 2018, when shale gas contributed 23.6 TCF, 63 percent of US gas production, and was still increasing.
Gas prices
The most commonly quoted producer price for natural gas is the Louisiana-based Henry Hub price, which is futures-traded on NYMEX. A barrel of oil releases about 5.8 million BTU when burned, so that 5.8 MCF of gas (at the standard one thousand BTU per cubic foot) releases about the same energy as a barrel of oil. Sometimes gas containing 5.8 million BTU is defined as a "barrel of oil equivalent" for energy calculation purposes. When describing reserves or production, however, the oil and gas industry more commonly uses the rounded figure of 6 MCF of gas (or 6 million BTU in the natural gas) as equal to one barrel of oil equivalent. Since the price of natural gas was deregulated in the 1990s, its price has tended to parallel that of oil, with oil usually at a premium on a BTU basis. But starting in the late 2000s, an abundance of natural gas in North America has caused the price of a unit of energy from gas to be much lower than the price of energy from oil.
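A quick sketch of the energy-equivalence arithmetic just described, using the approximate heat contents given above (5.8 million BTU per barrel of oil and 1,000 BTU per cubic foot of pipeline-quality gas); the function names and the example volume are illustrative only.

```python
# Sketch of the barrel-of-oil-equivalent (BOE) arithmetic described above.
# Heat contents are the approximate figures used in the text; everything else
# (names, example volume) is illustrative.

BTU_PER_BARREL_OIL = 5.8e6       # approximate heat released by burning one barrel of crude oil
BTU_PER_CUBIC_FOOT_GAS = 1_000   # standard assumption for pipeline-quality natural gas

def mcf_per_boe_exact() -> float:
    """Thousand cubic feet (MCF) of gas with the same heat content as one barrel of oil."""
    return BTU_PER_BARREL_OIL / (BTU_PER_CUBIC_FOOT_GAS * 1_000)

def barrels_of_oil_equivalent(mcf_of_gas: float, mcf_per_boe: float = 6.0) -> float:
    """Convert a gas volume in MCF to BOE using the industry's rounded 6 MCF = 1 BOE convention."""
    return mcf_of_gas / mcf_per_boe

if __name__ == "__main__":
    print(mcf_per_boe_exact())                           # 5.8 MCF per BOE on a strict BTU basis
    mcf_2012 = 25.3e9                                    # 2012 US marketed production (25.3 TCF) in MCF
    print(f"{barrels_of_oil_equivalent(mcf_2012):.2e}")  # roughly 4.2 billion BOE
```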
Natural gas pipelines
When oil and natural gas are brought to the surface, they are usually separated at the wellhead, after which the oil and the gas are treated separately. Gas flows through a gathering system into a pipeline to a gas processing plant. As of 2014, there were 189,000 miles of interstate natural gas pipelines in the United States.
Gas processing
Natural gas has a variety of chemical constituents that must be removed or diluted with other gas to achieve consistent pipeline quality. Pipeline gas specifications vary from line to line, but generally the gas must contain no appreciable hydrogen sulfide (which is toxic), less than a few percent carbon dioxide (carbon dioxide reacts with water to form carbonic acid, which is corrosive to iron and steel pipe), and a British thermal unit (BTU) content of 900 or more per cubic foot. Natural gas delivered to consumers generally has a BTU content of about 1020 to 1050 per standard cubic foot, slightly higher than that of pure methane (1010 BTU).
Natural gas liquids
Natural gas is composed primarily of methane, but often contains longer-chain hydrocarbons. Hydrocarbon compounds from hexane (each molecule of which is a simple chain containing six carbon atoms, hence called C6) and heavier generally separate out ("condense") from the gas at the wellhead; this mixture is called condensate, is usually reported as oil production, and is sold to refineries the same as oil. The C2 through C5 hydrocarbons (ethane, propane, butane, and pentane) are known as natural gas liquids (NGLs), and remain in gaseous form until they are extracted at a gas processing plant. The division between the two classes is not perfect: some hexane and heptane remain in the gas to be separated out as NGLs, while some butane and pentane may separate out with the condensate. Natural gas which contains NGLs is called "wet gas." Gas which naturally contains no NGLs, or gas from which the NGLs have been removed, is called "dry gas." Natural gas liquids are used either for fuel (sold as propane or liquefied petroleum gas (LPG)) or as feedstock for the petrochemical industry. The United States has been the world's top producer of NGLs since 2010, and is far above second-place Saudi Arabia, which produced 1.82 million barrels per day in 2015. Increased production of NGLs since 2000 has lowered the price of NGLs in the North American market, leading to a surge in construction and expansion of petrochemical plants to convert ethane and propane into ethylene and propylene, which are used to make plastics. The United States has the world's largest ethylene-manufacturing capacity, 28.4 million tons per year in 2015, with projects to add another 7.6 million tons from 2015 through 2017. As of 2015, the reduction in NGL prices had turned North America from one of the high-cost places to manufacture petrochemicals into the lowest-cost area outside the Middle East.
Other byproducts
Some natural gas contains enough helium to be extracted as a byproduct. Sulfur, which must be removed from natural gas for safety, aesthetic, and environmental reasons, is recovered and sold as a byproduct. In 2013, natural gas processing plants recovered 1.02 million metric tons of sulfur, which was 12 percent of the US supply of elemental sulfur (the remaining sulfur production came from oil refineries).
Gas storage
Consumption of natural gas in the US is strongly seasonal, higher in the winter than the summer by between 50% and 90%, depending on the severity of the winter. To make larger volumes of gas available in the winter, companies have established underground gas storage facilities. There are three types of natural gas storage units currently in service in the US: salt domes, depleted gas reservoirs, and deep aquifers. The largest volume held in storage was 8.29 trillion cubic feet in October 2012; this was equivalent to 26 percent of total US production in 2014. Consumption also shows a small mid-summer increase, due to increased gas use for electrical power in the summer.
Unlike residential, commercial, and industrial use, all of which are higher in winter, electrical power generation uses more gas in the summer. Natural gas marketing From the processing plant, natural gas is sold mostly to gas utility companies. In 2014, 46% of the marketed gas was used by commercial and industrial users, 33% by electrical power generators, and 21% by residential consumers. Natural gas electricity generation Since 2009, electricity generation has been the largest use of natural gas in the US. Electricity generated by natural gas has been by far the fastest-growing source of electricity in the US since the 1990s. Natural gas became the second-largest source of US electricity in 2006, when it surpassed nuclear power. In late 2015, natural gas surpassed coal as the largest source of electricity generated in the United States. In the decade 2005 to 2015, electricity generated by natural gas increased by 574 billion kilowatt-hours, more than triple the increase of the second-fastest-growing source, wind energy, which increased 173 billion kilowatt-hours over the same period. Natural gas-generated electricity increased its share of total US electricity from 18.8 percent in 2005, to 32.6 percent in 2015. The increase in gas-generated electricity was mostly at the expense of coal power, which fell from 49.6 percent of US electricity in 2005, to 33.2 percent in 2015. Natural gas surpassed coal as the number one generator of US electricity in late 2015. During the 12-month period through August 2016, natural gas generated 34.5 percent of US electricity, versus 29.8 percent for coal.Unlike the other sectors of natural gas consumption, the electrical power industry uses more natural gas in the summer, when electricity demand is increased by air conditioning, and when natural gas prices are at seasonal lows.The increased use of natural gas for electricity is driven by three factors. First, pressure on utilities to decrease greenhouse gas emissions has favored the substitution of coal generation by natural gas generation, which, according to the National Renewable Energy Laboratory, and the IPCC, has significantly less life-cycle GHG emissions than coal-powered electricity. Second, gas power plants are able to ramp up and down quickly, making them well suited to complement intermittent power sources such as wind and solar. Third, since late 2008, the price of natural gas has been relatively cheap on the North American market, especially compared to oil. Electricity from oil-powered generators in the US declined 81 percent from 2005 to 2014. The states which use the most natural gas for electricity production are, in descending order, Texas, Florida, California, and New York. Liquified petroleum gas Liquefied petroleum gas includes the butane and propane natural gas liquids removed in gas processing. They are sold for home heating, cooking, and increasingly for motor fuel. The industry segment is represented by the National Propane Gas Association. Vehicle fuel Natural gas, in the forms of compressed natural gas, liquified natural gas, and liquified petroleum gas, is being increasingly used for motor vehicle fuel, especially in fleet vehicles. It has the advantages over gasoline and diesel fuel of being cheaper and emitting less air pollution. It has the disadvantage of having few retail outlets. As of 2011, 262,000 vehicles in the US ran on natural gas. 
Although natural gas used for vehicle fuel increased 60 percent in the decade 2004-2014, in 2014 it still made up only 3.7 percent on a BTU-basis of fossil fuel use (gasoline, diesel, and natural gas) as transportation fuel in the US. Transportation fuel made up 0.13 percent of natural gas consumption in 2014. History Pipeline technology The natural gas industry in the United States goes back to 1821, when natural gas was discovered and used in Fredonia, New York. From the start, the market for natural gas was limited by pipeline technology. The gas for Fredonia, New York in 1821 was supplied through wooden pipes, which were incapable of carrying gas for long distances.In the 1800s, residences in most cities were supplied with town gas generated from coal at local "gashouses." The gas was carried in cast iron pipes, introduced in 1843, typically with bell-and-spigot joints sealed with rope and molten lead.In the 1800s and early 1900s, most natural gas discoveries were made while exploring for oil. Natural gas was usually an unwanted byproduct of oil production. In the 1870s steel pipe replaced cast iron. In 1883, Pittsburgh became the first major city supplied with natural gas. Other cities followed, but only if they were close to natural gas wells. Because natural gas was a byproduct, it was priced cheaply, and, where available, undercut the market for town gas. In 1891, one of the longest pipelines of the time was built, a 120-mile long pipeline from the gas fields of Indiana to Chicago, without compression. Long-distance high-pressure gas pipelines became feasible after oxyacetylene welding was introduced in 1911, and especially after electric arc welding became popular in the 1920s This allowed remote gas deposits to be supplied to big cities. Natural gas increasingly became a sought-for commodity. Price regulation The prices charged by utilities delivering natural gas to customers have always been subject to state regulation. With the construction of interstate gas pipelines in the 1920s and 1930s, city utilities became dependent on natural gas supplies beyond the regulatory power of state and local governments. In 1935, the federal trade commission, believing that interstate pipelines had too much power to control the downstream gas market, recommended federal controls. Congress passed the Natural Gas Act of 1938 to regulate the rates charged by interstate pipelines. Federal regulations at first included only the rates interstate pipelines charged to carry gas. When the market price of natural gas at the wellhead increased in the 1950s, gas utilities complained that the gas producers should be regulated as well. In 1954, the US Supreme Court ruled in Phillips Petroleum Co. v. Wisconsin that regulation of the wellhead price was within the intent of the Natural Gas Act of 1938 to control prices to utilities, and therefore the federal government could control wellhead prices of any natural gas going into an interstate pipeline. By the early 1970s, the artificially low price set by the federal government had created a shortage, but only of interstate gas. Gas consumed within the state where it was produced was plentiful, but more expensive. By 1975, about half the natural gas produced went to the intrastate market. In 1975 and 1976, some schools and factories in the Midwest shut down periodically when the local utility could not find any natural gas to buy at the controlled price. 
The Federal Power Commission tried to allocate the scarce gas by identifying "high-priority" and "low-priority" customers, but this caused extensive litigation. The federal government responded to gas shortages with the Natural Gas Policy Act of 1978, which both increased federal regulation by extended price controls to all existing natural gas wells, and promised to end price controls on all new wells by 1985. Under the new rules, natural gas was subject to a complicated set of prices, depending on when the well was drilled, the size of the company that owned the well, the permeability of the formation, and the distance of the well from previous wells. Gas production from some types of gas reservoirs received tax subsidies. In 1976, the federal government established the Eastern Gas Shales Project, a large research effort to find ways to produce gas from shale. Price controls grew even more complex with the Energy Act of 1980, which exempted Devonian gas shales (shales deposited during the Devonian geologic period) from price controls (but not gas shales deposited during other geologic periods), as well as low-permeability formations and coalbed methane. In addition, production from these sources earned tax credits for the producers for qualified wells drilled before January 1, 1992; the tax credits expired at the end of 2002.The Natural Gas Wellhead Decontrol Act of 1989 mandated that all remaining price controls on natural gas were to be eliminated as of January 1, 1993. Shortages and surplus As with petroleum, the future supply of natural gas has long been the subject of concern, and predictions of shortages. In 1952, Dr. Edward Steidle, Dean of the School of Mineral Industries at Pennsylvania State College, predicted that gas production would soon decline significantly from 1952 rates, so that gas would cease to be a significant energy source by 2002, and possibly as early as 1975.In 1956, M. King Hubbert used an estimated ultimate recovery (EUR) of 850 trillion cubic feet (24,000 km3) (an amount postulated by geologist Wallace Pratt) to predict a US production peak of about 14 trillion cubic feet (400 km3) per year to occur "approximately 1970". Pratt, in his EUR estimate (p. 96), explicitly included what he called the "phenomenal discovery rate" that the industry was then experiencing in the offshore Gulf of Mexico.US marketed gas production reached a peak in 1973 at about 22.6 trillion cubic feet (640 km3), and declined to a low of 16.9 trillion cubic feet (480 km3) in 1986. But then instead of declining further, as predicted by the Hubbert curve, natural gas production rose slowly but steadily for the next 15 years, and reached 20.6 TCF in 2001. Then it dropped again for a few years, and in 2005 was down to 18.9 TCF. After 2005, natural gas production rose rapidly, exceeded its old 1973 peak, and set new records for high production in each year 2011, 2012, 2013, 2014, and 2015, when marketed production was 28.8 trillion cubic feet (820 km3). International trade In 2017, the United States became a net exporter of natural gas on an annual basis for the first time since 1957. Net exports averaged 0.4 billion cubic feet per day. The US Energy Information Administration projected that net exports would grow to 4.6 billion cubic feet per day in 2019. Export growth was driven by pipeline exports to Mexico and Canada, although the US continued to import more from Canada than it exports to that country. 
In addition, exports of liquefied natural gas increased.Natural gas depends on pipelines for economical transport. Without pipeline connections, natural gas must be transported as liquefied natural gas (LNG), an expensive process. For this reason, the price of natural gas tends to differ between regions not connected by gas pipelines. The North American market, consisting of Canada, Mexico, and the United States, all connected by a common pipeline network, has had much lower gas prices in recent years than some other major world gas markets, such as Europe (since 2010), Japan (since 2008), and Korea. The United States is connected by pipeline to Canada and Mexico. The US has long imported large quantities of gas from Canada, and exported smaller quantities to some parts of eastern Canada. In 2014, the US imported 2,634 BCF from Canada, and exported 769 BCF, so that net imports from Canada totaled 1,865 BCF. The United States has exported increasing volumes to Mexico over the past decade. In 2014, the US exported 728.5 BCF to Mexico, and imported 1.4 BCF, so that net exports to Mexico totaled 727 BCF. The cost of net imports peaked at US$29.7 billion in 2005; the cost of net imports was $5.9 billion in 2014. Liquified natural gas The United States became a net exporter of liquified natural gas in 2016. Principal markets for US LNG are Mexico, South Korea, China, and Japan. In late 2021, U.S. producer Venture Global LNG signed three long-term supply deals with China's state-owned Sinopec to supply liquefied natural gas. China's imports of U.S. natural gas will more than double. U.S. exports of liquefied natural gas to China and other Asian countries surged in 2021, with Asian buyers willing to pay higher prices than European importers.In past years, when experts were projecting gas shortages in North America, utility companies built liquified natural gas (LNG) import terminals along the coast. Net imports of LNG peaked in 2007, but have since decreased. In 2014, the US imported 59 BCF of LNG gas, and exported 16 BCF, so that net LNG imports amounted to 43 BCF. Most imported LNG was from Trinidad and Tobago. Long-term LNG contracts usually tie the price of LNG to the oil price. In 2010, after the price of US natural gas fell below that of world markets, US companies have proposed establishing a number of LNG export terminals. A number of these proposals involve converting inactive LNG import terminals to handle LNG exports. Any proposals to export natural gas must be approved by the US Federal Energy Regulatory Commission (FERC), which gives its approval only if the project receives a satisfactory environmental review, and if FERC finds that the export terminal would be in the public interest. As of August 2015, 24 new LNG export terminals have been proposed, of which FERC has so far approved 6. Cheniere Energy expects to begin exporting LNG through its Sabine Pass terminal in January 2016.As of 2014, the only active LNG export terminal in the US was in Kenai, Alaska. The plant, with a capacity of 0.2 BCF per day, is owned by ConocoPhillips, and has been exporting LNG since 1969. Most exported LNG went to Japan. The New England States are connected by pipeline to the rest of the US and to Canada, but the existing pipelines are insufficient to supply the winter demand. For this reason, a quarter of New England's demand for gas is supplied by more-expensive LNG. 
Four LNG import terminals serve New England, but most LNG imported to New England arrives through the Everett terminal in Boston, and the Canaport terminal in New Brunswick, Canada. As of 2015, pipelines were under construction to carry cheaper gas from Pennsylvania to New England. See also Energy in the United States Fracking in the United States Petroleum in the United States Shale gas in the United States References External links America's Natural Gas Alliance – Natural gas producers American Gas Association American Public Gas Association – Publicly owned gas utilities Interstate Natural Gas Association of America – Interstate gas pipelines Natural Gas Supply Association "Natural gas leakage in USA". Global Energy Monitor. 15 December 2020.
methane
Methane (US: METH-ayn, UK: MEE-thayn) is a chemical compound with the chemical formula CH4 (one carbon atom bonded to four hydrogen atoms). It is a group-14 hydride, the simplest alkane, and the main constituent of natural gas. The relative abundance of methane on Earth makes it an economically attractive fuel, although capturing and storing it poses technical challenges due to its gaseous state under normal conditions for temperature and pressure. Naturally occurring methane is found both below ground and under the seafloor and is formed by both geological and biological processes. The largest reservoir of methane is under the seafloor in the form of methane clathrates. When methane reaches the surface and the atmosphere, it is known as atmospheric methane.The Earth's atmospheric methane concentration has increased by about 160% since 1750, with the overwhelming percentage caused by human activity. It accounted for 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases, according to the 2021 Intergovernmental Panel on Climate Change report. Strong, rapid and sustained reductions in methane emissions could limit near-term warming and improve air quality by reducing global surface ozone.Methane has also been detected on other planets, including Mars, which has implications for astrobiology research. Properties and bonding Methane is a tetrahedral molecule with four equivalent C–H bonds. Its electronic structure is described by four bonding molecular orbitals (MOs) resulting from the overlap of the valence orbitals on C and H. The lowest-energy MO is the result of the overlap of the 2s orbital on carbon with the in-phase combination of the 1s orbitals on the four hydrogen atoms. Above this energy level is a triply degenerate set of MOs that involve overlap of the 2p orbitals on carbon with various linear combinations of the 1s orbitals on hydrogen. The resulting "three-over-one" bonding scheme is consistent with photoelectron spectroscopic measurements. Methane is an odorless, colourless and transparent gas. It does absorb visible light, especially at the red end of the spectrum, due to overtone bands, but the effect is only noticeable if the light path is very long. This is what gives Uranus and Neptune their blue or bluish-green colors, as light passes through their atmospheres containing methane and is then scattered back out.The familiar smell of natural gas as used in homes is achieved by the addition of an odorant, usually blends containing tert-butylthiol, as a safety measure. Methane has a boiling point of −161.5 °C at a pressure of one atmosphere. As a gas, it is flammable over a range of concentrations (5.4%–17%) in air at standard pressure. Solid methane exists in several modifications. Presently nine are known. Cooling methane at normal pressure results in the formation of methane I. This substance crystallizes in the cubic system (space group Fm3m). The positions of the hydrogen atoms are not fixed in methane I, i.e. methane molecules may rotate freely. Therefore, it is a plastic crystal. Chemical reactions The primary chemical reactions of methane are combustion, steam reforming to syngas, and halogenation. In general, methane reactions are difficult to control. Selective oxidation Partial oxidation of methane to methanol (CH3OH), a more convenient, liquid fuel, is challenging because the reaction typically progresses all the way to carbon dioxide and water even with an insufficient supply of oxygen. 
The enzyme methane monooxygenase produces methanol from methane, but cannot be used for industrial-scale reactions. Some homogeneously catalyzed systems and heterogeneous systems have been developed, but all have significant drawbacks. These generally operate by generating protected products which are shielded from overoxidation. Examples include the Catalytica system, copper zeolites, and iron zeolites stabilizing the alpha-oxygen active site. One group of bacteria catalyze methane oxidation with nitrite as the oxidant in the absence of oxygen, giving rise to the so-called anaerobic oxidation of methane. Acid–base reactions Like other hydrocarbons, methane is an extremely weak acid. Its pKa in DMSO is estimated to be 56. It cannot be deprotonated in solution, but the conjugate base is known in forms such as methyllithium. A variety of positive ions derived from methane have been observed, mostly as unstable species in low-pressure gas mixtures. These include methenium or methyl cation (CH3+), the methane cation (CH4+), and methanium or protonated methane (CH5+). Some of these have been detected in outer space. Methanium can also be produced as dilute solutions from methane with superacids. Cations with higher charge, such as CH62+ and CH73+, have been studied theoretically and conjectured to be stable. Despite the strength of its C–H bonds, there is intense interest in catalysts that facilitate C–H bond activation in methane (and other lower-numbered alkanes). Combustion Methane's heat of combustion is 55.5 MJ/kg. Combustion of methane is a multiple-step reaction summarized as follows: CH4 + 2 O2 → CO2 + 2 H2O (ΔH = −891 kJ/mol, at standard conditions). Peters four-step chemistry is a systematically reduced mechanism that explains the burning of methane. Methane radical reactions Given appropriate conditions, methane reacts with halogen radicals as follows: •X + CH4 → HX + •CH3, followed by •CH3 + X2 → CH3X + •X, where X is a halogen: fluorine (F), chlorine (Cl), bromine (Br), or iodine (I). This mechanism is called free radical halogenation. It is initiated when UV light or some other radical initiator (like peroxides) produces a halogen atom. A two-step chain reaction ensues in which the halogen atom abstracts a hydrogen atom from a methane molecule, resulting in the formation of a hydrogen halide molecule and a methyl radical (•CH3). The methyl radical then reacts with a molecule of the halogen to form a molecule of the halomethane, with a new halogen atom as byproduct. Similar reactions can occur on the halogenated product, leading to replacement of additional hydrogen atoms by halogen atoms, giving dihalomethane, trihalomethane, and ultimately tetrahalomethane structures, depending upon reaction conditions and the halogen-to-methane ratio. This reaction is commonly used with chlorine to produce dichloromethane and chloroform via chloromethane. Carbon tetrachloride can be made with excess chlorine. Uses Methane may be transported as a refrigerated liquid (liquefied natural gas, or LNG). While leaks from a refrigerated liquid container are initially heavier than air due to the increased density of the cold gas, the gas at ambient temperature is lighter than air. Gas pipelines distribute large amounts of natural gas, of which methane is the principal component. Fuel Methane is used as a fuel for ovens, homes, water heaters, kilns, automobiles, turbines, etc.
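As a quick consistency check on the combustion figures quoted above (a heat of combustion of 55.5 MJ/kg and an enthalpy of −891 kJ/mol), the short Python sketch below converts the per-mole value to a per-mass value, using the standard molar mass of methane (16.04 g/mol).

```python
# Consistency check between the per-mole and per-mass heats of combustion
# quoted for methane in this section (values as given in the text).

DELTA_H_COMBUSTION_KJ_PER_MOL = 891.0   # magnitude of ΔH for CH4 + 2 O2 -> CO2 + 2 H2O
MOLAR_MASS_CH4_G_PER_MOL = 16.04        # g/mol

kj_per_gram = DELTA_H_COMBUSTION_KJ_PER_MOL / MOLAR_MASS_CH4_G_PER_MOL
mj_per_kg = kj_per_gram  # kJ/g and MJ/kg are numerically identical

print(f"Heat of combustion is about {mj_per_kg:.1f} MJ/kg")
# ~55.5 MJ/kg, matching the figure quoted at the start of the Combustion subsection
```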
As the major constituent of natural gas, methane is important for electricity generation by burning it as a fuel in a gas turbine or steam generator. Compared to other hydrocarbon fuels, methane produces less carbon dioxide for each unit of heat released. At about 891 kJ/mol, methane's heat of combustion is lower than that of any other hydrocarbon, but the ratio of the heat of combustion (891 kJ/mol) to the molecular mass (16.0 g/mol, of which 12.0 g/mol is carbon) shows that methane, being the simplest hydrocarbon, produces more heat per mass unit (55.7 kJ/g) than other complex hydrocarbons. In many cities, methane is piped into homes for domestic heating and cooking. In this context it is usually known as natural gas, which is considered to have an energy content of 39 megajoules per cubic meter, or 1,000 BTU per standard cubic foot. Liquefied natural gas (LNG) is predominantly methane (CH4) converted into liquid form for ease of storage or transport. Refined liquid methane as well as LNG is used as a rocket fuel, when combined with liquid oxygen, as in the TQ-12, BE-4 and Raptor engines. Due to the similarities between methane and LNG such engines are commonly grouped together under the term methalox. As a liquid rocket propellant, a methane/liquid oxygen combination offers the advantage over kerosene/liquid oxygen combination, or kerolox, of producing small exhaust molecules. This deposits less soot on the internal parts of rocket motors, which is beneficial for reusable rocket designs. Methane is easier to store due to its higher boiling point and density, as well as its lack of hydrogen embrittlement. In addition, it can be produced on other astronomical bodies such as the Moon and Mars, although the only methane fueled rocket that is designed to utilize this is SpaceX’s Starship spacecraft as of September 2023. The lower molecular weight of the exhaust also increases the fraction of the heat energy which is in the form of kinetic energy available for propulsion, increasing the specific impulse of the rocket. Because methalox powered engines can run at higher pressures than a kerolox powered engine, this means that a methalox engine may be roughly 20 percent more fuel efficient than a kerolox engine. Compared to liquid hydrogen/liquid oxygen combination (hydrolox), the specific energy of methane is lower but this disadvantage is offset by methane's greater density and temperature range, allowing for smaller and lighter tankage for a given fuel mass. Liquid methane has a temperature range (91–112 K) nearly compatible with liquid oxygen (54–90 K). Due to the advantages methane fuel provides, especially for reusable rockets, various companies and organizations, especially private space launch providers, aimed to develop methane-based launch systems during the 2010s. The competition between countries were dubbed the Methalox Race to Orbit. In July 2023, the Chinese private company LandSpace launched a Zhuque-2 methalox rocket, which became the first to reach orbit. It delivered a test payload into Sun-synchronous orbit (SSO). Chemical feedstock Natural gas, which is mostly composed of methane, is used to produce hydrogen gas on an industrial scale. Steam methane reforming (SMR), or simply known as steam reforming, is the standard industrial method of producing commercial bulk hydrogen gas. More than 50 million metric tons are produced annually worldwide (2013), principally from the SMR of natural gas. 
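The reforming chemistry behind this hydrogen production is described in the paragraphs that follow. As a rough orientation, the sketch below works out the idealized stoichiometric hydrogen yield of steam reforming followed by the water-gas shift, i.e. the overall reaction CH4 + 2 H2O → CO2 + 4 H2. These are theoretical ceilings, not plant data; real yields are lower because of process losses.

```python
# Idealized mass balance for steam methane reforming followed by the
# water-gas shift, i.e. the overall reaction CH4 + 2 H2O -> CO2 + 4 H2.
# This is only the stoichiometric ceiling; real plants yield less.

M_CH4 = 16.04   # g/mol
M_H2 = 2.016    # g/mol
H2_PER_CH4 = 4  # moles of H2 per mole of CH4 in the overall reaction

def hydrogen_yield_kg(methane_kg):
    """Maximum hydrogen output (kg) from a given mass of methane feed."""
    moles_ch4 = methane_kg * 1000 / M_CH4
    return moles_ch4 * H2_PER_CH4 * M_H2 / 1000

print(f"1 t of CH4 yields at most {hydrogen_yield_kg(1000):.0f} kg of H2")
# ~503 kg of H2 per tonne of methane, before process losses
```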
Much of this hydrogen is used in petroleum refineries, in the production of chemicals and in food processing. Very large quantities of hydrogen are used in the industrial synthesis of ammonia. At high temperatures (700–1100 °C) and in the presence of a metal-based catalyst (nickel), steam reacts with methane to yield a mixture of CO and H2, known as "water gas" or "syngas": CH4 + H2O ⇌ CO + 3 H2. This reaction is strongly endothermic (consumes heat, ΔHr = 206 kJ/mol). Additional hydrogen is obtained by the reaction of CO with water via the water-gas shift reaction: CO + H2O ⇌ CO2 + H2. This reaction is mildly exothermic (produces heat, ΔHr = −41 kJ/mol). Methane is also subjected to free-radical chlorination in the production of chloromethanes, although methanol is a more typical precursor. Hydrogen can also be produced via the direct decomposition of methane, also known as methane pyrolysis, which, unlike steam reforming, produces no greenhouse gases (GHG). The heat needed for the reaction can also be GHG emission free, e.g. from concentrated sunlight, renewable electricity, or burning some of the produced hydrogen. If the methane is from biogas then the process can be a carbon sink. Temperatures in excess of 1200 °C are required to break the bonds of methane to produce hydrogen gas and solid carbon. However, with a suitable catalyst the reaction temperature can be reduced to between 600 °C and 1000 °C, depending on the catalyst chosen. The reaction is moderately endothermic, as shown in the reaction equation below: CH4(g) → C(s) + 2 H2(g) (ΔH° = 74.8 kJ/mol) Generation Methane can be generated through geological, biological or industrial routes. Geological routes The two main routes for geological methane generation are (i) organic (thermally generated, or thermogenic) and (ii) inorganic (abiotic). Thermogenic methane occurs due to the breakup of organic matter at elevated temperatures and pressures in deep sedimentary strata. Most methane in sedimentary basins is thermogenic; therefore, thermogenic methane is the most important source of natural gas. Thermogenic methane components are typically considered to be relic (from an earlier time). Generally, formation of thermogenic methane (at depth) can occur through organic matter breakup or organic synthesis. Both ways can involve microorganisms (methanogenesis), but may also occur inorganically. The processes involved can also consume methane, with and without microorganisms. The more important source of methane at depth (crystalline bedrock) is abiotic. Abiotic means that methane is created from inorganic compounds, without biological activity, either through magmatic processes or via water-rock reactions that occur at low temperatures and pressures, like serpentinization. Biological routes Most of Earth's methane is biogenic and is produced by methanogenesis, a form of anaerobic respiration only known to be conducted by some members of the domain Archaea. Methanogens occur in landfills and soils, ruminants (for example, cattle), the guts of termites, and the anoxic sediments below the seafloor and the bottom of lakes. This multistep process is used by these microorganisms for energy. The net reaction of methanogenesis is: CO2 + 4 H2 → CH4 + 2 H2O. The final step in the process is catalyzed by the enzyme methyl coenzyme M reductase (MCR). Wetlands Wetlands are the largest natural sources of methane to the atmosphere, accounting for approximately 20–30% of atmospheric methane.
Climate change is increasing the amount of methane released from wetlands due to increased temperatures and altered rainfall patterns. This phenomenon is called the wetland methane feedback. Rice cultivation generates as much as 12% of total global methane emissions due to the long-term flooding of rice fields. Ruminants Ruminants, such as cattle, belch methane, accounting for about 22% of the U.S. annual methane emissions to the atmosphere. One study reported that the livestock sector in general (primarily cattle, chickens, and pigs) produces 37% of all human-induced methane. A 2013 study estimated that livestock accounted for 44% of human-induced methane and about 15% of human-induced greenhouse gas emissions. Many efforts are underway to reduce livestock methane production, such as medical treatments and dietary adjustments, and to trap the gas to use its combustion energy. Seafloor sediments Most of the subseafloor is anoxic because oxygen is removed by aerobic microorganisms within the first few centimeters of the sediment. Below the oxygen-replete seafloor, methanogens produce methane that is either used by other organisms or becomes trapped in gas hydrates. These other organisms that utilize methane for energy are known as methanotrophs ('methane-eating'), and are the main reason why little methane generated at depth reaches the sea surface. Consortia of Archaea and Bacteria have been found to oxidize methane via anaerobic oxidation of methane (AOM); the organisms responsible for this are anaerobic methanotrophic Archaea (ANME) and sulfate-reducing bacteria (SRB). Industrial routes Given its cheap abundance in natural gas, there is little incentive to produce methane industrially. Methane can be produced by hydrogenating carbon dioxide through the Sabatier process. Methane is also a side product of the hydrogenation of carbon monoxide in the Fischer–Tropsch process, which is practiced on a large scale to produce longer-chain molecules than methane. An example of large-scale coal-to-methane gasification is the Great Plains Synfuels plant, started in 1984 in Beulah, North Dakota, as a way to develop abundant local resources of low-grade lignite, a resource that is otherwise difficult to transport because of its weight, ash content, low calorific value and propensity to spontaneous combustion during storage and transport. A number of similar plants exist around the world, although mostly these plants are targeted towards the production of long-chain alkanes for use as gasoline, diesel, or feedstock to other processes. Power to methane is a technology that uses electrical power to produce hydrogen from water by electrolysis and uses the Sabatier reaction to combine the hydrogen with carbon dioxide to produce methane. Laboratory synthesis Methane can be produced by protonation of methyllithium or a methyl Grignard reagent such as methylmagnesium chloride. It can also be made from anhydrous sodium acetate and dry sodium hydroxide, mixed and heated above 300 °C (with sodium carbonate as byproduct). In practice, a requirement for pure methane can easily be fulfilled by a steel gas bottle from standard gas suppliers. Occurrence Methane was discovered and isolated by Alessandro Volta between 1776 and 1778 when studying marsh gas from Lake Maggiore. It is the major component of natural gas, about 87% by volume.
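As a rough sketch of the Sabatier/power-to-methane stoichiometry mentioned under Industrial routes above (CO2 + 4 H2 → CH4 + 2 H2O), the Python snippet below works out the idealized carbon dioxide and hydrogen inputs per kilogram of methane produced. Real processes require excess reactants and lose some yield, so these are lower bounds on the inputs.

```python
# Stoichiometric inputs for the Sabatier reaction used in power-to-methane:
# CO2 + 4 H2 -> CH4 + 2 H2O. Idealized figures only.

M_CO2, M_H2, M_CH4 = 44.01, 2.016, 16.04  # g/mol

def sabatier_inputs(methane_kg):
    """Return (kg CO2, kg H2) consumed per given mass of methane, ideally."""
    mol_ch4 = methane_kg * 1000 / M_CH4
    co2_kg = mol_ch4 * M_CO2 / 1000
    h2_kg = mol_ch4 * 4 * M_H2 / 1000
    return co2_kg, h2_kg

co2, h2 = sabatier_inputs(1.0)
print(f"Producing 1 kg CH4 consumes at least {co2:.2f} kg CO2 and {h2:.2f} kg H2")
# about 2.74 kg CO2 and 0.50 kg H2 per kg of methane
```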
The major source of methane is extraction from geological deposits known as natural gas fields, with coal seam gas extraction becoming a major source (see coal bed methane extraction, a method for extracting methane from a coal deposit, while enhanced coal bed methane recovery is a method of recovering methane from non-mineable coal seams). It is associated with other hydrocarbon fuels, and sometimes accompanied by helium and nitrogen. Methane is produced at shallow levels (low pressure) by anaerobic decay of organic matter and reworked methane from deep under the Earth's surface. In general, the sediments that generate natural gas are buried deeper and at higher temperatures than those that contain oil. Methane is generally transported in bulk by pipeline in its natural gas form, or by LNG carriers in its liquefied form; few countries transport it by truck. Atmospheric methane and climate change Methane is an important greenhouse gas, responsible for around 30% of the rise in global temperatures since the industrial revolution.Methane has a global warming potential (GWP) of 29.8 ± 11 compared to CO2 (potential of 1) over a 100-year period, and 82.5 ± 25.8 over a 20-year period. This means that, for example, a leak of one tonne of methane is equivalent to emitting 82.5 tonnes of carbon dioxide. As methane is gradually converted into carbon dioxide (and water) in the atmosphere, these values include the climate forcing from the carbon dioxide produced from methane over these timescales. Annual global methane emissions are currently approximately 580 Mt, 40% of which is from natural sources and the remaining 60% originating from human activity, known as anthropogenic emissions. The largest anthropogenic source is agriculture, responsible for around one quarter of emissions, closely followed by the energy sector, which includes emissions from coal, oil, natural gas and biofuels.Historic methane concentrations in the world's atmosphere have ranged between 300 and 400 nmol/mol during glacial periods commonly known as ice ages, and between 600 and 700 nmol/mol during the warm interglacial periods. A 2012 NASA website said the oceans were a potential important source of Arctic methane, but more recent studies associate increasing methane levels as caused by human activity.Global monitoring of atmospheric methane concentrations began in the 1980s. The Earth's atmospheric methane concentration has increased 160% since preindustrial levels in the mid-18th century. In 2013, atmospheric methane accounted for 20% of the total radiative forcing from all of the long-lived and globally mixed greenhouse gases. Between 2011 and 2019 the annual average increase of methane in the atmosphere was 1866 ppb. From 2015 to 2019 sharp rises in levels of atmospheric methane were recorded.In 2019, the atmospheric methane concentration was higher than at any time in the last 800,000 years. As stated in the AR6 of the IPCC, "Since 1750, increases in CO2 (47%) and CH4 (156%) concentrations far exceed, and increases in N2O (23%) are similar to, the natural multi-millennial changes between glacial and interglacial periods over at least the past 800,000 years (very high confidence)".In February 2020, it was reported that fugitive emissions and gas venting from the fossil fuel industry may have been significantly underestimated. 
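The CO2-equivalence arithmetic implied by the global warming potentials quoted above can be written out in a few lines. The GWP values below are the central estimates given in the text; both carry substantial uncertainty ranges.

```python
# CO2-equivalent arithmetic implied by the global warming potentials quoted
# above (GWP100 ~ 29.8, GWP20 ~ 82.5). Central values only.

GWP_100 = 29.8
GWP_20 = 82.5

def co2_equivalent(methane_tonnes, gwp):
    """Tonnes of CO2 with the same warming effect over the chosen horizon."""
    return methane_tonnes * gwp

leak = 1.0  # tonnes of methane
print(f"{leak} t CH4 is roughly {co2_equivalent(leak, GWP_20):.1f} t CO2e over 20 years")
print(f"{leak} t CH4 is roughly {co2_equivalent(leak, GWP_100):.1f} t CO2e over 100 years")
```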
The largest annual increase occurred in 2021, with the overwhelming percentage caused by human activity. Climate change can increase atmospheric methane levels by increasing methane production in natural ecosystems, forming a climate change feedback. Another explanation for the rise in methane emissions could be a slowdown of the chemical reaction that removes methane from the atmosphere. Over 100 countries have signed the Global Methane Pledge, launched in 2021, promising to cut their methane emissions by 30% by 2030. This could avoid 0.2 °C of warming globally by 2050, although there have been calls for higher commitments in order to reach this target. The International Energy Agency's 2022 report states that "the most cost-effective opportunities for methane abatement are in the energy sector, especially in oil and gas operations". Clathrates Methane clathrates (also known as methane hydrates) are solid cages of water molecules that trap single molecules of methane. Significant reservoirs of methane clathrates have been found in Arctic permafrost and along continental margins beneath the ocean floor within the gas clathrate stability zone, located at high pressures (1 to 100 MPa; the lower end requires lower temperature) and low temperatures (< 15 °C; the upper end requires higher pressure). Methane clathrates can form from biogenic methane, thermogenic methane, or a mix of the two. These deposits are both a potential source of methane fuel and a potential contributor to global warming. The global mass of carbon stored in gas clathrates is still uncertain and has been estimated as high as 12,500 Gt carbon and as low as 500 Gt carbon. The estimate has declined over time, with a most recent estimate of ~1800 Gt carbon. A large part of this uncertainty is due to our knowledge gap in sources and sinks of methane and the distribution of methane clathrates at the global scale. For example, a source of methane was discovered relatively recently in an ultraslow spreading ridge in the Arctic. Some climate models suggest that today's methane emission regime from the ocean floor is potentially similar to that during the period of the Paleocene–Eocene Thermal Maximum (PETM) around 55.5 million years ago, although there are no data indicating that methane from clathrate dissociation currently reaches the atmosphere. Arctic methane release from permafrost and seafloor methane clathrates is a potential consequence and further cause of global warming; this is known as the clathrate gun hypothesis. Data from 2016 indicate that Arctic permafrost thaws faster than predicted. Public safety and the environment Methane "degrades air quality and adversely impacts human health, agricultural yields, and ecosystem productivity". Methane is extremely flammable and may form explosive mixtures with air. Methane gas explosions are responsible for many deadly mining disasters. A methane gas explosion was the cause of the Upper Big Branch coal mine disaster in West Virginia on April 5, 2010, killing 29 miners. The accidental release of natural gas has also been a major focus in the field of safety engineering, due to past accidental releases that resulted in jet fire disasters. The 2015–2016 methane gas leak in Aliso Canyon, California, was considered to be the worst in terms of its environmental effect in American history.
It was also described as more damaging to the environment than the Deepwater Horizon leak in the Gulf of Mexico. In May 2023, The Guardian published a report identifying Turkmenistan as the world's worst country for methane super-emitting events. Data collected by Kayrros researchers indicate that two large Turkmen fossil fuel fields leaked 2.6 million and 1.8 million tonnes of methane in 2022 alone, pumping the CO2 equivalent of 366 million tonnes into the atmosphere and surpassing the annual CO2 emissions of the United Kingdom. Methane is also an asphyxiant if the oxygen concentration is reduced to below about 16% by displacement, as most people can tolerate a reduction from 21% to 16% without ill effects. The concentration of methane at which asphyxiation risk becomes significant is much higher than the 5–15% concentration in a flammable or explosive mixture. Methane off-gas can penetrate the interiors of buildings near landfills and expose occupants to significant levels of methane. Some buildings have specially engineered recovery systems below their basements to actively capture this gas and vent it away from the building. Extraterrestrial methane Interstellar medium Methane is abundant in many parts of the Solar System and potentially could be harvested on the surface of another Solar System body (in particular, using methane production from local materials found on Mars or Titan), providing fuel for a return journey. Mars Methane has been detected on all planets of the Solar System and most of the larger moons. With the possible exception of Mars, it is believed to have come from abiotic processes. The Curiosity rover has documented seasonal fluctuations of atmospheric methane levels on Mars. These fluctuations peaked at the end of the Martian summer at 0.6 parts per billion. Methane has been proposed as a possible rocket propellant on future Mars missions due in part to the possibility of synthesizing it on the planet by in situ resource utilization. An adaptation of the Sabatier methanation reaction may be used with a mixed catalyst bed and a reverse water-gas shift in a single reactor to produce methane from the raw materials available on Mars, utilizing water from the Martian subsoil and carbon dioxide in the Martian atmosphere. Methane could be produced by a non-biological process called serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars. History In November 1776, methane was first scientifically identified by Italian physicist Alessandro Volta in the marshes of Lake Maggiore straddling Italy and Switzerland. Volta was inspired to search for the substance after reading a paper written by Benjamin Franklin about "flammable air". Volta collected the gas rising from the marsh, and by 1778 had isolated pure methane. He also demonstrated that the gas could be ignited with an electric spark. Following the Felling mine disaster of 1812, in which 92 men perished, Sir Humphry Davy established that the feared firedamp was in fact largely methane. The name "methane" was coined in 1866 by the German chemist August Wilhelm von Hofmann. The name was derived from methanol.
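Returning to the thresholds described under Public safety and the environment above, a toy screening calculation is sketched below. The limits are the approximate figures quoted in the text, and the assumption that methane simply displaces air in proportion is a simplification; this is purely illustrative and not guidance for real gas-detection practice.

```python
# Toy safety screen based on the approximate thresholds quoted under
# "Public safety and the environment" above: methane is explosive at roughly
# 5-15% in air, and oxygen displaced below about 16% poses an asphyxiation risk.

EXPLOSIVE_RANGE = (5.0, 15.0)   # % methane by volume, approximate
O2_SAFE_MINIMUM = 16.0          # % oxygen by volume, approximate
O2_NORMAL = 20.9                # % oxygen in fresh air

def screen(methane_pct):
    warnings = []
    if EXPLOSIVE_RANGE[0] <= methane_pct <= EXPLOSIVE_RANGE[1]:
        warnings.append("within the explosive range")
    # Simplifying assumption: methane displaces air, diluting oxygen proportionally.
    o2_pct = O2_NORMAL * (1 - methane_pct / 100)
    if o2_pct < O2_SAFE_MINIMUM:
        warnings.append(f"oxygen reduced to ~{o2_pct:.1f}% (asphyxiation risk)")
    return warnings or ["below both thresholds"]

for level in (1.0, 8.0, 30.0):
    print(f"{level}% methane: " + "; ".join(screen(level)))
```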
Etymology Etymologically, the word methane is coined from the chemical suffix "-ane", which denotes substances belonging to the alkane family; and the word methyl, which is derived from the German Methyl (1840) or directly from the French méthyle, which is a back-formation from the French méthylène (corresponding to English "methylene"), the root of which was coined by Jean-Baptiste Dumas and Eugène Péligot in 1834 from the Greek μέθυ methy (wine) (related to English "mead") and ὕλη hyle (meaning "wood"). The radical is named after this because it was first detected in methanol, an alcohol first isolated by distillation of wood. The chemical suffix -ane is from the coordinating chemical suffix -ine which is from Latin feminine suffix -ina which is applied to represent abstracts. The coordination of "-ane", "-ene", "-one", etc. was proposed in 1866 by German chemist August Wilhelm von Hofmann. Abbreviations The abbreviation CH4-C can mean the mass of carbon contained in a mass of methane, and the mass of methane is always 1.33 times the mass of CH4-C. CH4-C can also mean the methane-carbon ratio, which is 1.33 by mass. Methane at scales of the atmosphere is commonly measured in teragrams (Tg CH4) or millions of metric tons (MMT CH4), which mean the same thing. Other standard units are also used, such as nanomole (nmol, one billionth of a mole), mole (mol), kilogram, and gram. See also Explanatory notes Citations Cited sources Haynes, William M., ed. (2016). CRC Handbook of Chemistry and Physics (97th ed.). CRC Press. ISBN 9781498754293. External links Methane at The Periodic Table of Videos (University of Nottingham) International Chemical Safety Card 0291 Gas (Methane) Hydrates – A New Frontier – United States Geological Survey (archived 6 February 2004) Lunsford, Jack H. (2000). "Catalytic conversion of methane to more useful chemicals and fuels: A challenge for the 21st century". Catalysis Today. 63 (2–4): 165–174. doi:10.1016/S0920-5861(00)00456-9. CDC – Handbook for Methane Control in Mining (PDF)
energy in romania
Energy in Romania describes energy and electricity production, consumption and import in Romania. Romania has significant oil and gas reserves, substantial coal deposits and considerable installed hydroelectric power. However, Romania imports oil and gas from Russia and other countries. To ease this dependency, Romania seeks to use nuclear power as an alternative for electricity generation. Overview Electric power in Romania is dominated by government enterprises, although privately operated coal mines and oil refineries also existed. Accordingly, Romania placed an increasingly heavy emphasis on developing nuclear power generation. Electric power was provided by the Romanian Electric Power Corporation (CONEL). Energy sources used in electric power generation consisted primarily of nuclear, coal, oil, and liquefied natural gas (LNG). The country has two nuclear reactors, located at Cernavodă, generating about 18–20% of the country's electricity production. Statistics Energy strategy A review of the energy strategy took place in 2022. Gas production is currently around 25 million m3 per day, with potential for an increase from the 100 billion m3 of onshore reserves. Offshore reserves are estimated at 40 billion m3. There is potential for LNG production, and Romania could become a major energy supplier to Europe. Renewable energy: 208 hydroelectric plants produce about 30% of Romania's energy. Wind, photovoltaic and biomass produce 16% of Romania's energy, a share expected to rise to 30%. Nuclear energy: Currently producing around 20% of Romanian needs; two new reactors are scheduled to be built with help from the USA. Interconnectivity: Improved connections for the supply of gas and electricity to and from neighbouring countries are planned. Fossil fuels Oil Possessing substantial oil refining capacity, Romania is particularly interested in the Central Asia–Europe pipelines and seeks to strengthen its relations with some Persian Gulf states. Refining capacity was reduced by 2018, with just 5 refineries remaining and an overall refining capacity of approximately 13.7 Mt per annum. Romania's refining capacity far exceeds domestic demand for refined petroleum products, allowing the country to export a wide range of oil products and petrochemicals—such as lubricants, bitumen, and fertilizers—throughout the region. Gas Romania has large natural gas reserves and produces most of the gas for the country's needs. The national natural gas transmission system in Romania is owned by Transgaz, a state-owned company. Interconnector pipelines link Romania with Hungary, Bulgaria and Ukraine, with onward connections to Turkey, Greece, North Macedonia, Azerbaijan and Austria. Electric power In 2022 and 2023, improved connections were made to bring neighbouring Moldova into the European grid, with Romania supplying Moldova with electricity. Nuclear Romania has two nuclear reactors located at Cernavodă with a combined capacity of 1,300 MW; the first became operational in 1996, the second in 2007. In 2020, nuclear produced 11.5 TWh of electricity, 20% of Romania's electricity generation. Nuclear waste is stored on site at reprocessing facilities. Renewable energy Renewable energy includes wind, solar, biomass and geothermal energy sources. Wind energy Electricity transmission system operator Transelectrica is scheduled to have 3 GW of wind energy capacity by 2026. 1.9 GW of offshore wind is planned for 2027–28. Solar power Solar power expanded in 2013 but for the next 10 years remained fairly static, with around 1,300 MW of capacity.
Plans from 2023 include a doubling of capacity, after a change in the law that year made it easier to obtain planning permission. Biomass Biomass provides around 1% of electricity generation capacity. Climate change In the decade between 1989 and 1999, Romania saw a 55% decrease in its greenhouse gas emissions. This can be accounted for by a 45% decrease in energy use due to a languishing economy, and a 15% decrease in the carbon intensity of its energy use. Over this period the carbon intensity of Romania's economy decreased by 40%, while Romania's GDP declined 15%. Romania's GDP has recovered significantly since then. There has been a big push towards renewable energy. See also Energy policy of Romania Geothermal power in Romania Hydroelectricity in Romania Renewable energy by country == References ==
climate change in namibia
Climate change is the consequence of long-term alterations in the Earth's climate caused by the emission of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). These gases trap heat in the atmosphere, resulting in global warming and a heightened temperature on our planet. Human activities, such as the burning of fossil fuels (coal, oil, and natural gas), along with large-scale commercial agriculture and deforestation, are responsible for the release of these greenhouse gases. Escalating temperatures and extreme heat conditions, uncertain and progressively unpredictable precipitation, and extreme weather provoke new challenges and exacerbate existing ones. Namibia is located in the southwestern region of the African continent, lying between latitudes 17°S and 29°S and longitudes 11°E and 26°E. The country encompasses a land area of 825,418 km2 and boasts a coastline stretching 1,500 km along the South Atlantic Ocean. Namibia shares borders with Angola to the north, South Africa to the south, Botswana to the east, and Zambia to the northeast. The country's climate is predominantly arid, with the Namib Desert and the Kalahari Desert occupying significant portions of the western and eastern territories, respectively. Aridity diminishes as one moves toward the central plateau regions and the great escarpment situated between the central plateau and the Namib Desert. Namibia's climate is characterized by persistent droughts, unpredictable and varying rainfall patterns, substantial temperature fluctuations, and limited water resources. Greenhouse gas emissions The African continent is responsible for 2–3% of global greenhouse gas emissions, thereby contributing to climate change. In 2020, Namibia emitted 24.12 million tonnes of CO2 equivalent, representing 0.05% of global emissions, with a climate risk index of 107. Greenhouse gas (GHG) emissions for Namibia were 13,560.38 kt in 2020, a 25.69% increase from 2019; 10,788.73 kt in 2019, a 7.13% decline from 2018; 11,616.69 kt in 2018, a 6.14% decrease from 2017; and 12,376.73 kt in 2017, a 0.47% increase from 2016. Impact on the natural environment Temperature and weather changes The effects of climate change, both current and future, present significant risks to human health, welfare, and the natural environment. Namibia is experiencing clear indications of increasing temperatures. Over the past century, surface temperatures in Namibia have risen by 1.2 degrees Celsius, and the frequency of extreme temperatures has increased by 10% in the last four decades. Southern Africa, including Namibia, has warmed by approximately 0.8 degrees Celsius since 1900, and recent years have witnessed the highest temperatures on record since the 19th century. Projections indicate that summer temperatures may rise between 1°C and 3.5°C and winter temperatures between 1°C and 4°C within the period 2046–2065. There has been a noticeable increase in the number of days exceeding 35°C, contributing to the overall trend of rising maximum temperatures. The evidence of climate change extends beyond surface temperature increases and encompasses changing precipitation patterns. However, attributing these changes to climate change in the context of Namibia's rainfall variability proves challenging. Records suggest that the frequency of both droughts and floods has risen by approximately 18% on average over the last four decades when compared to previous periods.
This multifaceted evidence underscores the urgency of addressing climate change and its impacts on Namibia's climate system.The mean annual temperature for Namibia is 20.6°C, with average monthly temperatures ranging between 24°C (November to March) and 16°C (June, July). Impact on water resources Climate change is contributing to a global increase in temperatures, and this is also true for Namibia. The rising temperatures are resulting in higher rates of evaporation, which in turn decreases the availability of surface water and worsens water scarcity within the country. Namibia heavily relies on rainfall to meet its water needs, especially in rural regions. However, climate change is modifying precipitation patterns, leading to more intense and unpredictable rainfall events. Consequently, these changes can cause flash floods, erosion, and a decrease in groundwater recharge, all of which greatly impact water resources. Additionally, Namibia has been experiencing prolonged droughts as a result of climate change. These droughts can deplete water reservoirs such as aquifers and severely affect the country's water supply and sanitation systems. The coastal areas of Namibia are particularly vulnerable to rising sea levels, which can result in the intrusion of saltwater into freshwater aquifers, further compromising the quality and availability of water resources. Ecosystems Climate change is causing shifts in temperature and precipitation patterns, resulting in decreased rainfall in Namibia. This decrease in rainfall affects the production of staple crops, leading to food insecurity and impacting ecosystems. Furthermore, climate change manifests in droughts and other extreme weather events, which have a significant impact on natural ecosystems. These changing conditions are causing shifts in species and habitats, thereby affecting biodiversity. Particularly, Namibia's endemic species are highly vulnerable to climate change, as they face threats to their survival due to changing environmental conditions. Additionally, the reduced rainfall and increased temperatures brought about by climate change can result in severe water shortages, affecting both human communities and ecosystems in Namibia. Therefore, water policies and practices play a crucial role in maintaining the health of ecosystems. The impacts of climate change on water resources are interconnected with biodiversity and the well-being of ecosystems. As projected, Namibia is expected to experience a more rapid increase in temperatures compared to many other countries, leading to an increasing frequency of drought conditions. This exacerbates the stress on ecosystems. Climate change has adverse effects on fish stocks and coastal livelihoods, further impacting natural ecosystems and the tourism industry in Namibia. Agriculture and livestock Climate change has had a significant impact on agriculture and livestock in Namibia, resulting in consequences on food security and the livelihoods of many Namibians. Due to climate change, Namibia is experiencing more frequent and severe droughts, leading to decreased availability of water for agriculture and livestock. Consequently, this directly affects crop yields and the access to water for livestock. The changing climate has also caused unpredictable rainfall patterns, making it challenging for farmers to predict the optimal times for planting and harvesting. This unpredictability can lead to lower crop yields and decreased agricultural productivity. 
Livestock farming plays a crucial role in Namibia's agriculture. However, climate change-related factors such as rising temperatures and the spread of diseases have a negative impact on livestock health and productivity. Therefore, livestock farmers adapt their practices to cope with these challenges. In order to mitigate the effects of climate change, Namibian farmers are increasingly adopting conservation agriculture practices. This approach involves minimizing soil disturbance, implementing cover crops, and implementing crop rotation to enhance soil health and water retention, ultimately improving resilience to climate variability. Several projects, including those supported by the World Bank and the United Nations Development Programme (UNDP), are focused on promoting climate-resilient livestock systems, as well as enhancing traditional crops and livestock farming practices in Namibia. These initiatives aim to assist farmers in adapting to the changing climate and building resilience in their agricultural and livestock operations. Health Impacts Climate change in Namibia has resulted in an upsurge of water and vector-borne diseases, causing a direct impact on the public's health and overall well-being. The effect of climate change on Namibia's economy and livelihoods is projected to be substantial, subsequently influencing people's health due to economic hardship and research reveals that 3.6 billion people are already living in areas highly susceptible to climate change. Between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year, from undernutrition, malaria, diarrhea , and heat stress alone. Specifically, the north-central regions of Namibia are particularly susceptible to the consequences of climate change, exacerbated by environmental degradation and social vulnerability, which further contribute to health risks. Notably, organizations such as the Namibia Nature Foundation are actively engaged in combatting the effects of climate change on both human health and the environment. Mitigation and adaptations Namibia has implemented climate change mitigation strategies through its National Climate Change Strategy and Action Plan (NCCSAP) from 2013 to 2020. These strategies encompass both adaptation and mitigation efforts, with a focus on addressing the challenges posed by climate change in the country. Namibia's NCCSAP includes policies and actions aimed at adapting to the impacts of climate change. These measures assist communities and ecosystems in coping with the changing climate, such as enhancing water resource management in Namibia's arid regions. The NCCSAP also outlines strategies to reduce greenhouse gas emissions and combat climate change. These strategies may involve transitioning to cleaner and more sustainable energy sources and improving energy efficiency. Namibia's climate change policies align with the National Development Goals and Vision 2030, ensuring that climate action is integrated into the country's broader development agenda. The government is actively working to create a conducive environment for climate change adaptation and mitigation, aiming to strengthen its policies and measures in this regard. See also Climate change in Africa Water supply and sanitation in Namibia Effects of climate change Namibia Economy of Namibia Agriculture in Namibia Geography of Namibia Health in Namibia == References ==
zero-energy building
A zero-energy building (ZEB), also known as a net zero-energy (NZE) building, is a building with net zero energy consumption: the total amount of energy used by the building on an annual basis is equal to the amount of renewable energy created on the site or, in some definitions, by renewable energy sources off-site, using technology such as heat pumps, high-efficiency windows and insulation, and solar panels. The goal is that these buildings contribute less overall greenhouse gas to the atmosphere during operations than similar non-zero-energy buildings. They do at times consume non-renewable energy and produce greenhouse gases, but at other times they reduce energy consumption and greenhouse gas production elsewhere by the same amount. The development of zero-energy buildings is encouraged by the desire to have less impact on the environment, and their expansion is encouraged by tax breaks and savings on energy costs, which make zero-energy buildings financially viable. Terminology varies between countries, agencies, cities, towns and reports, so a general knowledge of this concept and its various uses is essential for a versatile understanding of clean energy and renewables. The International Energy Agency (IEA) and European Union (EU) most commonly use "net zero energy", while the term "zero net" is mainly used in the USA. A similar concept approved and implemented by the European Union and other agreeing countries is the nearly zero-energy building (nZEB), with the goal of having all new buildings in the region meet nZEB standards by 2020.

Overview
Typical code-compliant buildings consume 40% of the total fossil fuel energy in the US and European Union and are significant contributors of greenhouse gases. To combat such high energy usage, more and more buildings are starting to implement the carbon-neutrality principle, which is viewed as a means to reduce carbon emissions and reduce dependence on fossil fuels. Although zero-energy buildings remain uncommon, even in developed countries, they are gaining importance and popularity. Most zero-energy buildings use the electrical grid for energy storage, but some are independent of the grid and some include energy storage on-site. Buildings that produce a surplus of energy over the year are sometimes called "energy-plus buildings", while buildings that consume slightly more energy than they produce are sometimes called "low-energy houses". Zero-energy buildings produce energy on-site using renewable technology such as solar and wind, while reducing the overall use of energy with highly efficient lighting and heating, ventilation and air conditioning (HVAC) technologies. The zero-energy goal is becoming more practical as the costs of alternative energy technologies decrease and the costs of traditional fossil fuels increase. The development of modern zero-energy buildings became possible largely through progress in new energy and construction technologies and techniques. These include highly insulating spray-foam insulation, high-efficiency solar panels, high-efficiency heat pumps and highly insulating, low-emissivity, triple- and quadruple-glazed windows. These innovations have also been significantly improved by academic research, which collects precise energy performance data on traditional and experimental buildings and provides performance parameters for advanced computer models to predict the efficacy of engineering designs. Zero-energy buildings can be part of a smart grid.
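The annual balance implied by this definition can be illustrated with a short calculation. The sketch below is illustrative only: the monthly figures are invented, and a real assessment would use metered data and the accounting rules of the relevant jurisdiction.

```python
# Minimal sketch of the annual net-zero (site energy) balance described above.
# All monthly figures are invented for illustration.

monthly_consumption_kwh = [1400, 1250, 1100, 900, 700, 600,
                           650, 700, 800, 1000, 1200, 1350]   # building loads
monthly_generation_kwh = [400, 550, 800, 1050, 1250, 1350,
                          1400, 1300, 1050, 750, 450, 350]    # on-site PV output

annual_consumption = sum(monthly_consumption_kwh)
annual_generation = sum(monthly_generation_kwh)
net_import = annual_consumption - annual_generation  # > 0 means the grid supplied the balance

print(f"Annual consumption: {annual_consumption} kWh, on-site generation: {annual_generation} kWh")
print("Net zero (site energy) achieved" if net_import <= 0
      else f"Annual shortfall of {net_import} kWh")
```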
Some advantages of smart-grid-connected zero-energy buildings are as follows:
Integration of renewable energy resources
Integration of plug-in electric vehicles – called vehicle-to-grid
Implementation of zero-energy concepts
Although the net zero concept is applicable to a wide range of resources, such as water and waste, energy is usually the first resource to be targeted because:
Energy, particularly electricity and heating fuel like natural gas or heating oil, is expensive, so reducing energy use can save the building owner money. In contrast, water and waste are inexpensive for the individual building owner.
Energy, particularly electricity and heating fuel, has a high carbon footprint, so reducing energy use is a major way to reduce the building's carbon footprint.
There are well-established means to significantly reduce the energy use and carbon footprint of buildings. These include adding insulation, using heat pumps instead of furnaces, using low-emissivity, triple- or quadruple-glazed windows, and adding solar panels to the roof.
In some countries, government-sponsored subsidies and tax breaks for installing heat pumps, solar panels, triple- or quadruple-glazed windows and insulation greatly reduce the cost of getting to a net-zero energy building for the building owner.

Optimizing zero-energy buildings for climate impact
Zero-energy buildings are more energy efficient and emit carbon at a lower rate once in operation; however, a great deal of pollution is still associated with a building's embodied carbon. Embodied carbon is the carbon emitted in the making and transportation of a building's materials and in the construction of the structure itself; it is responsible for 11% of global GHG emissions and 28% of global building-sector emissions. The importance of embodied carbon will grow as it begins to account for a greater portion of a building's carbon emissions: in some newer, energy-efficient buildings, embodied carbon has risen to 47% of the building's lifetime emissions. Focusing on embodied carbon is part of optimizing construction for climate impact, and minimizing carbon emissions requires slightly different considerations from optimizing only for energy efficiency. A 2019 study found that between 2020 and 2030, reducing upfront carbon emissions and switching to clean or renewable energy is more important than increasing building efficiency because "building a highly energy efficient structure can actually produce more greenhouse gas than a basic code compliant one if carbon-intensive materials are used." The study stated that because "Net-zero energy codes will not significantly reduce emissions in time, policy makers and regulators must aim for true net zero carbon buildings, not net zero energy buildings." One way to reduce embodied carbon is to use low-carbon construction materials such as straw, wood, linoleum, or cedar. For materials like concrete and steel, options to reduce embodied emissions do exist; however, these are unlikely to be available at large scale in the short term. The optimal design point for greenhouse gas reduction appears to be four-story multifamily buildings made of low-carbon materials such as those listed above, which could serve as a template for low-carbon structures.

Definitions
Despite sharing the name "zero net energy", there are several definitions of what the term means in practice, with a particular difference in usage between North America and Europe.
Zero net site energy use
In this type of ZNE, the amount of energy provided by on-site renewable energy sources is equal to the amount of energy used by the building. In the United States, "zero net energy building" generally refers to this type of building.

Zero net source energy use
This ZNE generates the same amount of energy as is used, including the energy used to transport the energy to the building. This type accounts for energy losses during electricity generation and transmission. These ZNEs must generate more electricity than zero net site energy buildings.

Net zero energy emissions
Outside the United States and Canada, a ZEB is generally defined as one with zero net energy emissions, also known as a zero carbon building (ZCB) or zero emissions building (ZEB). Under this definition the carbon emissions generated from on-site or off-site fossil fuel use are balanced by the amount of on-site renewable energy production. Other definitions include not only the carbon emissions generated by the building in use, but also those generated in the construction of the building and the embodied energy of the structure. Others debate whether the carbon emissions of commuting to and from the building should also be included in the calculation. Recent work in New Zealand has initiated an approach to include building user transport energy within zero energy building frameworks.

Net zero cost
In this type of building, the cost of purchasing energy is balanced by income from sales to the grid of electricity generated on-site. Such a status depends on how a utility credits net electricity generation and on the utility rate structure the building uses.

Net off-site zero energy use
A building may be considered a ZEB if 100% of the energy it purchases comes from renewable energy sources, even if the energy is generated off the site.

Off-the-grid
Off-the-grid buildings are stand-alone ZEBs that are not connected to an off-site energy utility facility. They require distributed renewable energy generation and energy storage capability (for when the sun is not shining, the wind is not blowing, etc.). An energy autarkic house is a building concept in which the balance of the building's own energy consumption and production can be made on an hourly or even smaller basis. Energy autarkic houses can be taken off-the-grid.

Net Zero Energy Building
Based on scientific analysis within the joint research program "Towards Net Zero Energy Solar Buildings", a methodological framework was set up which allows different definitions, in accordance with a country's political targets, specific (climate) conditions and the requirements formulated for indoor conditions. The overall conceptual understanding of a Net ZEB is an energy-efficient, grid-connected building enabled to generate energy from renewable sources to compensate for its own energy demand (see figure 1). The wording "net" emphasizes the energy exchange between the building and the energy infrastructure. Through this building-grid interaction, the Net ZEB becomes an active part of the renewable energy infrastructure. The grid connection avoids the seasonal energy storage and oversized on-site generation systems that energy-autonomous buildings require. The similarity of the two concepts is a pathway of two actions: 1) reduce energy demand by means of energy efficiency measures and passive energy use; 2) generate energy from renewable sources.
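Whether a given building balances to zero depends on the weighting applied to each energy carrier, which is what separates the site and source definitions above and the weighting-system discussion that follows. A minimal sketch is shown below; the weighting factors are invented for illustration, since real site-to-source conversion factors are published per jurisdiction.

```python
# Sketch of a site vs. source (primary-energy-weighted) zero-energy balance.
# The weighting factors are assumptions for illustration only.

annual_delivered = {"grid_electricity": 8000, "natural_gas": 4000}  # kWh bought by the building
annual_exported = {"pv_electricity": 12000}                         # kWh of on-site generation exported

source_factor = {"grid_electricity": 2.5,   # kWh of primary energy per delivered kWh (assumed)
                 "natural_gas": 1.1,        # assumed
                 "pv_electricity": 2.5}     # exports credited at the grid factor (assumed convention)

site_balance = sum(annual_exported.values()) - sum(annual_delivered.values())
source_balance = (sum(v * source_factor[k] for k, v in annual_exported.items())
                  - sum(v * source_factor[k] for k, v in annual_delivered.items()))

print(f"Site balance:   {site_balance:+d} kWh")      # zero here: a zero net site energy building
print(f"Source balance: {source_balance:+.0f} kWh")  # positive: exports outweigh use in primary terms
```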
The Net ZEBs' grid interaction, and plans to widely increase their numbers, raise considerations of increased flexibility in the shifting of energy loads and of reduced peak demands.

Positive Energy District
Expanding some of the principles of zero-energy buildings to the city-district level, Positive Energy Districts (PED) are districts or other urban areas that produce at least as much energy on an annual basis as they consume. The impetus to develop whole positive energy districts instead of single buildings is based on the possibility of sharing resources, managing energy systems efficiently across many buildings and reaching economies of scale.

Within the Net ZEB balancing procedure, several aspects and explicit choices have to be determined: The building system boundary is split into a physical boundary, which determines which renewable resources are considered (e.g. within the building footprint, on-site or even off-site) and how many buildings are included in the balance (single building, cluster of buildings), and a balance boundary, which determines the energy uses included (e.g. heating, cooling, ventilation, hot water, lighting, appliances, IT, central services, electric vehicles, embodied energy, etc.). It should be noted that renewable energy supply options can be prioritized (e.g. by transportation or conversion effort, availability over the lifetime of the building, or replication potential for the future) and therefore form a hierarchy. It may be argued that resources within the building footprint or on-site should be given priority over off-site supply options. The weighting system converts the physical units of different energy carriers into a uniform metric (site/final energy, source/primary energy with renewable parts included or not, energy cost, equivalent carbon emissions, or even energy or environmental credits) and allows their comparison and compensation against each other in one single balance (e.g. exported PV electricity can compensate for imported biomass). Politically influenced, and therefore possibly asymmetrical or time-dependent, conversion/weighting factors can affect the relative value of energy carriers and can influence the required energy generation capacity. The balancing period is usually assumed to be one year (suitable to cover all operational energy uses). A shorter period (monthly or seasonal) could also be considered, as could a balance over the entire life cycle (including embodied energy, which could also be annualized and counted in addition to operational energy uses). The energy balance can be done in two balance types: 1) a balance of delivered/imported and exported energy (for the monitoring phase, as self-consumption of energy generated on-site can be included); 2) a balance between (weighted) energy demand and (weighted) energy generation (for the design phase, when end users' temporal consumption patterns, e.g. for lighting and appliances, are not yet known). Alternatively, a balance based on monthly net values, in which only the residuals per month are summed up to an annual balance, is conceivable. This can be seen either as a load/generation balance or as a special case of the import/export balance in which a "virtual monthly self-consumption" is assumed (see figure 2). Besides the energy balance, Net ZEBs can be characterized by their ability to match the building's load with its energy generation (load matching) or to work beneficially with respect to the needs of the local grid infrastructure (grid interaction).
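A rough numerical sketch of the two balance types and of load matching is given below; the monthly values are invented, and real assessments use hourly or sub-hourly profiles.

```python
# Rough sketch of load matching and the annual balance, using invented monthly values.

load = [1400, 1200, 1000, 900, 700, 600, 650, 700, 800, 1000, 1200, 1350]  # kWh demand
gen  = [400, 600, 900, 1100, 1300, 1400, 1450, 1350, 1050, 750, 450, 350]  # kWh on-site generation

covered = [min(l, g) for l, g in zip(load, gen)]            # load met directly on-site each month
load_match_fraction = sum(covered) / sum(load)

imported = sum(max(l - g, 0) for l, g in zip(load, gen))    # residual grid imports
exported = sum(max(g - l, 0) for l, g in zip(load, gen))    # surplus fed to the grid

# In this unweighted sketch the two balances coincide; weighting factors or
# different self-consumption assumptions can make them differ.
print(f"Load match fraction: {load_match_fraction:.0%}")
print(f"Load/generation balance: {sum(gen) - sum(load):+d} kWh")
print(f"Import/export balance:   {exported - imported:+d} kWh")
```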
Both can be expressed by suitable indicators, such as the load-match fraction sketched above, which are intended as assessment tools only.

Design and construction
The most cost-effective steps toward a reduction in a building's energy consumption usually occur during the design process. To achieve efficient energy use, zero energy design departs significantly from conventional construction practice. Successful zero-energy building designers typically combine time-tested passive solar, or artificial conditioning, principles that work with the on-site assets. Sunlight and solar heat, prevailing breezes, and the cool of the earth below a building can provide daylighting and stable indoor temperatures with minimal mechanical means. ZEBs are normally optimized to use passive solar heat gain and shading, combined with thermal mass to stabilize diurnal temperature variations, and in most climates are superinsulated. All the technologies needed to create zero-energy buildings are available off the shelf today. Sophisticated 3-D building energy simulation tools are available to model how a building will perform with a range of design variables such as building orientation (relative to the daily and seasonal position of the sun), window and door type and placement, overhang depth, insulation type and values of the building elements, airtightness (weatherization), the efficiency of heating, cooling, lighting and other equipment, as well as local climate. These simulations help the designers predict how the building will perform before it is built, and enable them to model the economic and financial implications through cost-benefit analysis or, more appropriately, life-cycle assessment. Zero-energy buildings are built with significant energy-saving features. The heating and cooling loads are lowered by using high-efficiency equipment (such as heat pumps rather than furnaces; heat pumps are about four times as efficient as furnaces), added insulation (especially in the attic and in the basement of houses), high-efficiency windows (such as low-emissivity, triple-glazed windows), draft-proofing, high-efficiency appliances (particularly modern high-efficiency refrigerators), high-efficiency LED lighting, passive solar gain in winter and passive shading in the summer, natural ventilation, and other techniques. These features vary depending on the climate zone in which the construction occurs. Water heating loads can be lowered by using water conservation fixtures, heat recovery units on waste water, solar water heating, and high-efficiency water heating equipment. In addition, daylighting with skylights or solar tubes can provide 100% of daytime illumination within the home. Nighttime illumination is typically done with fluorescent and LED lighting that uses one third or less of the power of incandescent lights, without adding unwanted heat. Miscellaneous electric loads can be lessened by choosing efficient appliances and minimizing phantom loads or standby power. Other techniques to reach net zero (dependent on climate) are earth-sheltered building principles, superinsulated walls using straw-bale construction, prefabricated building panels and roof elements, plus exterior landscaping for seasonal shading. Once the energy use of the building has been minimized, it can be possible to generate all of that energy on site using roof-mounted solar panels. Zero-energy buildings are often designed to make dual use of energy, including that from white goods.
For example, using refrigerator exhaust to heat domestic water, ventilation air and shower drain heat exchangers, office machines and computer servers, and body heat to heat the building. These buildings make use of heat energy that conventional buildings may exhaust outside. They may use heat recovery ventilation, hot water heat recycling, combined heat and power, and absorption chiller units. Energy harvest ZEBs harvest available energy to meet their electricity and heating or cooling needs. By far the most common way to harvest energy is to use roof-mounted solar photovoltaic panels that turn the sun's light into electricity. Energy can also be harvested with solar thermal collectors (which use the sun's heat to heat water for the building). Heat pumps can also harvest heat and cool from the air (air-sourced) or ground near the building (ground-sourced otherwise known as geothermal). Technically, heat pumps move heat rather than harvest it, but the overall effect in terms of reduced energy use and reduced carbon footprint is similar. In the case of individual houses, various microgeneration technologies may be used to provide heat and electricity to the building, using solar cells or wind turbines for electricity, and biofuels or solar thermal collectors linked to a seasonal thermal energy storage (STES) for space heating. An STES can also be used for summer cooling by storing the cold of winter underground. To cope with fluctuations in demand, zero energy buildings are frequently connected to the electricity grid, export electricity to the grid when there is a surplus, and drawing electricity when not enough electricity is being produced. Other buildings may be fully autonomous. Energy harvesting is most often more effective in regards to cost and resource utilization when done on a local but combined scale, for example a group of houses, cohousing, local district or village rather than an individual house basis. An energy benefit of such localized energy harvesting is the virtual elimination of electrical transmission and electricity distribution losses. On-site energy harvesting such as with roof top mounted solar panels eliminates these transmission losses entirely. Energy harvesting in commercial and industrial applications should benefit from the topography of each location. However, a site that is free of shade can generate large amounts of solar powered electricity from the building's roof and almost any site can use geothermal or air-sourced heat pumps. The production of goods under net zero fossil energy consumption requires locations of geothermal, microhydro, solar, and wind resources to sustain the concept.Zero-energy neighborhoods, such as the BedZED development in the United Kingdom, and those that are spreading rapidly in California and China, may use distributed generation schemes. This may in some cases include district heating, community chilled water, shared wind turbines, etc. There are current plans to use ZEB technologies to build entire off-the-grid or net zero energy use cities. The "energy harvest" versus "energy conservation" debate One of the key areas of debate in zero energy building design is over the balance between energy conservation and the distributed point-of-use harvesting of renewable energy (solar energy, wind energy and thermal energy). 
Most zero-energy homes use a combination of these strategies. As a result of significant government subsidies for photovoltaic solar electric systems, wind turbines, etc., some suggest that a ZEB is simply a conventional house with distributed renewable energy harvesting technologies. Entire subdivisions of such homes have appeared in locations where photovoltaic (PV) subsidies are significant, but many so-called "zero energy homes" still have utility bills. This type of energy harvesting without added energy conservation may not be cost-effective at the current price of electricity generated with photovoltaic equipment, depending on the local price of power-company electricity. The cost, energy and carbon-footprint savings from conservation (e.g., added insulation, triple-glazed windows and heat pumps) compared with those from on-site energy generation (e.g., solar panels) have been published for an upgrade to an existing house. Since the 1980s, passive solar building design and passive house construction have demonstrated heating energy consumption reductions of 70% to 90% in many locations, without active energy harvesting. For new builds, and with expert design, this can be accomplished with little additional construction cost for materials over a conventional building, although very few industry experts have the skills or experience to fully capture the benefits of passive design. Such passive solar designs are much more cost-effective than adding expensive photovoltaic panels to the roof of a conventional, inefficient building. A few kilowatts of photovoltaic panels (costing the equivalent of about US$2-3 per annual kWh of production) may only reduce external energy requirements by 15% to 30%. A 100,000 BTU/h (29 kW) conventional air conditioner with a seasonal energy efficiency ratio (SEER) of 14 requires over 7 kW of photovoltaic electricity while it is operating, and that does not include enough for off-the-grid night-time operation. Passive cooling and superior system engineering techniques can reduce the air-conditioning requirement by 70% to 90%. Photovoltaic-generated electricity becomes more cost-effective when the overall demand for electricity is lower.

Combined approach in rapid retrofits for existing buildings
Companies in Germany and the Netherlands offer rapid climate retrofit packages for existing buildings, which add a custom-designed shell of insulation to the outside of a building along with upgrades for more sustainable energy use, such as heat pumps. Similar pilot projects are underway in the US.

Occupant behavior
The energy used in a building can vary greatly depending on the behavior of its occupants, and the acceptance of what is considered comfortable varies widely. Studies of identical homes have shown dramatic differences in energy use in a variety of climates. A widely cited average ratio of highest to lowest energy consumption in identical homes is about 3, with some identical homes using up to 20 times as much heating energy as others. Occupant behavior can vary in thermostat setting and programming, levels of illumination and hot water use, window and shading system operation, and the number of miscellaneous electric devices or plug loads used.

Utility concerns
Utility companies are typically legally responsible for maintaining the electrical infrastructure that brings power to cities, neighborhoods, and individual buildings.
Utility companies typically own this infrastructure up to the property line of an individual parcel, and in some cases own electrical infrastructure on private land as well. In the US utilities have expressed concern that the use of Net Metering for ZNE projects threatens the utilities base revenue, which in turn impacts their ability to maintain and service the portion of the electrical grid that they are responsible for. Utilities have expressed concern that states that maintain Net Metering laws may saddle non-ZNE homes with higher utility costs, as those homeowners would be responsible for paying for grid maintenance while ZNE home owners would theoretically pay nothing if they do achieve ZNE status. This creates potential equity issues, as currently, the burden would appear to fall on lower-income households. A possible solution to this issue is to create a minimum base charge for all homes connected to the utility grid, which would force ZNE home owners to pay for grid services independently of their electrical use. Additional concerns are that local distribution as well as larger transmission grids have not been designed to convey electricity in two directions, which may be necessary as higher levels of distributed energy generation come on line. Overcoming this barrier could require extensive upgrades to the electrical grid, however, as of 2010, this is not believed to be a major problem until renewable generation reaches much higher levels of penetration. Development efforts Wide acceptance of zero-energy building technology may require more government incentives or building code regulations, the development of recognized standards, or significant increases in the cost of conventional energy.The Google photovoltaic campus and the Microsoft 480-kilowatt photovoltaic campus relied on US Federal, and especially California, subsidies and financial incentives. California is now providing US$3.2 billion in subsidies for residential-and-commercial near-zero-energy buildings. The details of other American states' renewable energy subsidies (up to US$5.00 per watt) can be found in the Database of State Incentives for Renewables and Efficiency. The Florida Solar Energy Center has a slide presentation on recent progress in this area.The World Business Council for Sustainable Development has launched a major initiative to support the development of ZEB. Led by the CEO of United Technologies and the Chairman of Lafarge, the organization has both the support of large global companies and the expertise to mobilize the corporate world and governmental support to make ZEB a reality. Their first report, a survey of key players in real estate and construction, indicates that the costs of building green are overestimated by 300 percent. Survey respondents estimated that greenhouse gas emissions by buildings are 19 percent of the worldwide total, in contrast to the actual value of roughly 40 percent. Influential zero-energy and low-energy buildings Those who commissioned construction of passive houses and zero-energy homes (over the last three decades) were essential to iterative, incremental, cutting-edge, technology innovations. Much has been learned from many significant successes, and a few expensive failures.The zero-energy building concept has been a progressive evolution from other low-energy building designs. Among these, the Canadian R-2000 and the German passive house standards have been internationally influential. 
Collaborative government demonstration projects, such as the superinsulated Saskatchewan House and the International Energy Agency's Task 13, have also played their part.

Net zero energy building definition
The US National Renewable Energy Laboratory (NREL) published a report called Net-Zero Energy Buildings: A Classification System Based on Renewable Energy Supply Options. This is the first report to lay out a full-spectrum classification system for net zero/renewable energy buildings that includes the full spectrum of clean energy sources, both on site and off site. This classification system identifies the following four main categories of net zero energy buildings/sites/campuses:
NZEB:A — a footprint renewables Net Zero Energy Building
NZEB:B — a site renewables Net Zero Energy Building
NZEB:C — an imported renewables Net Zero Energy Building
NZEB:D — an off-site purchased renewables Net Zero Energy Building
Applying this US Government net zero classification system means that every building can become net zero with the right combination of the key net zero technologies: PV (solar), GHP (geothermal heating and cooling, thermal batteries), EE (energy efficiency), sometimes wind, and electric batteries. A graphical summary of the scale of impact of applying these NREL guidelines can be seen in the Net Zero Foundation graphic titled "Net Zero Effect on U.S. Total Energy Use", which shows a possible 39% reduction in US total fossil fuel use by changing US residential and commercial buildings to net zero (37% if natural gas is still used for cooking at the same level).

Net zero carbon conversion example
Many well-known universities have professed a desire to convert their energy systems completely off fossil fuels. Capitalizing on continuing developments in photovoltaics and geothermal heat pump technologies, and in the advancing electric battery field, complete conversion to a carbon-free energy solution is becoming easier. Large-scale hydroelectric power has been available since before 1900. An example of such a project is the Net Zero Foundation's proposal at MIT to take that campus completely off fossil fuel use. This proposal shows the coming application of net zero energy building technologies at the district energy scale.

Advantages and disadvantages
Advantages
isolation for building owners from future energy price increases
increased comfort due to more uniform interior temperatures (this can be demonstrated with comparative isotherm maps)
reduced total cost of ownership due to improved energy efficiency
reduced total net monthly cost of living
reduced risk of loss from grid blackouts
reduced requirement for energy austerity and carbon emission taxes
improved reliability – photovoltaic systems have 25-year warranties and seldom fail during weather problems; the 1982 photovoltaic systems on the Walt Disney World EPCOT (Experimental Prototype Community of Tomorrow) Energy Pavilion remained in use through three hurricanes until they were taken down in 2018 in preparation for a new ride
higher resale value, as potential owners demand more ZEBs than the available supply
the value of a ZEB relative to a similar conventional building should increase every time energy costs increase
contribution to the greater benefit of society, e.g.
by providing sustainable renewable energy to the grid and reducing the need for grid expansion
Optimizing bottom-up urban building energy models (UBEM) can also improve the accuracy of building energy simulation.

Disadvantages
initial costs can be higher, and effort is required to understand, apply for, and qualify for ZEB subsidies, where they exist
very few designers or builders have the necessary skills or experience to build ZEBs
possible declines in future utility-company renewable energy costs may lessen the value of capital invested in energy efficiency
the price of new photovoltaic solar cell equipment has been falling at roughly 17% per year, which will lessen the value of capital invested in a solar electric generating system; current subsidies may also be phased out as photovoltaic mass production lowers future prices
it can be a challenge to recover higher initial costs on resale of the building, although new energy rating systems are being introduced gradually
while an individual house may use an average of net zero energy over a year, it may demand energy at the time when peak demand for the grid occurs; in such a case, the grid must still supply electricity to all loads, so a ZEB may not reduce the risk of loss from grid blackouts
without an optimized thermal envelope, the embodied energy, heating and cooling energy and resource usage are higher than needed; ZEBs by definition do not mandate a minimum heating and cooling performance level, allowing oversized renewable energy systems to fill the energy gap
solar energy capture using the house envelope only works in locations unobstructed from the sun; it cannot be optimized in north-facing shade (south-facing in the southern hemisphere) or wooded surroundings
a ZEB is not free of carbon emissions: glass, for example, has a high embodied energy and its production emits a great deal of carbon
building regulations such as height restrictions or fire codes may prevent implementation of wind or solar power or external additions to an existing thermal envelope

Zero energy building versus green building
The goal of green building and sustainable architecture is to use resources more efficiently and reduce a building's negative impact on the environment. Zero-energy buildings achieve one key goal, exporting as much renewable energy as they use over the course of a year and thereby reducing greenhouse gas emissions. ZEB goals need to be defined and set, as they are critical to the design process. Zero-energy buildings may or may not be considered "green" in all areas, such as reducing waste or using recycled building materials. However, zero-energy or net-zero buildings do tend to have a much lower ecological impact over the life of the building than other "green" buildings that require imported energy and/or fossil fuel to be habitable and to meet the needs of occupants. The two terms, zero energy building and green building, have similarities and differences. "Green" buildings often focus on operational energy and disregard the embodied carbon footprint from construction. According to the IPCC, embodied carbon will make up half of total carbon emissions between now (2020) and 2050. Zero-energy buildings, on the other hand, are specifically designed to produce enough energy from renewable energy sources to meet their own consumption requirements, whereas a green building can be generally defined as one that reduces negative impacts on, or positively impacts, the natural environment.
There are several factors that must be considered before a building is determined to be a green building. A green building must make efficient use of utilities such as water and energy, use renewable energy, apply recycling and reuse practices to reduce waste, provide proper indoor air quality, use ethically sourced and non-toxic materials, use a design that allows the building to adapt to changing environmental conditions, and address the environment and the quality of life of its occupants throughout design, construction and operation. The term green building can also refer to the practice of green building, which includes being resource efficient from design, through construction and operation, and ultimately to deconstruction. The practice of green building differs slightly from zero-energy building because it considers all environmental impacts, such as materials use and water pollution, whereas the scope of zero-energy buildings covers only the building's energy consumption and its ability to produce an equal amount, or more, of energy from renewable sources. There are many unforeseen design challenges and site conditions involved in efficiently meeting the renewable energy needs of a building and its occupants, as much of this technology is new. Designers must apply holistic design principles and take advantage of freely available natural assets, such as passive solar orientation, natural ventilation, daylighting, thermal mass, and night-time cooling. Designers and engineers must also experiment with new materials and technological advances, striving for more affordable and efficient production.

Zero energy building versus zero heating building
With advances in ultra-low U-value glazing, a (nearly) zero-heating building has been proposed to supersede nearly zero-energy buildings in the EU. The zero-heating building relies less on passive solar design and is therefore more open to conventional architectural design, and it removes the need for a seasonal or winter utility power reserve. The annual specific heating demand of a zero-heating house should not exceed 3 kWh/m2a. A zero-heating building is simpler to design and to operate; for example, there is no need for modulated sun shading.

Certification
The two most common certifications for green building are Passive House and LEED. The goal of Passive House is to be energy efficient and to reduce heating and cooling use well below standard levels. LEED certification is more comprehensive with regard to energy use: a building is awarded credits as it demonstrates sustainable practices across a range of categories. Another certification that designates a building as a net zero energy building exists within the requirements of the Living Building Challenge (LBC): the Net Zero Energy Building (NZEB) certification provided by the International Living Future Institute (ILFI). The designation was developed in November 2011 as the NZEB certification and was simplified to the Zero Energy Building Certification in 2017. Also included among green building certifications, the BCA Green Mark rating system allows buildings to be evaluated for their performance and impact on the environment.

Worldwide
International initiatives
As a response to global warming and increasing greenhouse gas emissions, countries around the world have been gradually implementing policies to promote ZEBs.
Between 2008 and 2013, researchers from Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Italy, the Republic of Korea, New Zealand, Norway, Portugal, Singapore, Spain, Sweden, Switzerland, the United Kingdom and the US worked together in the joint research program "Towards Net Zero Energy Solar Buildings". The program was created under the umbrella of the International Energy Agency (IEA) Solar Heating and Cooling Programme (SHC) Task 40 / Energy in Buildings and Communities (EBC, formerly ECBCS) Annex 52, with the intent of harmonizing international definition frameworks for net-zero and very low energy buildings by dividing the work into subtasks. In 2015, the Paris Agreement was created under the United Nations Framework Convention on Climate Change (UNFCCC) with the intent of keeping the global temperature rise this century below 2 degrees Celsius, and pursuing a limit of 1.5 degrees Celsius, by limiting greenhouse gas emissions. While there was no enforced compliance, 197 parties signed the international treaty, under which each party updates its nationally determined contribution every five years and reports to the Conference of the Parties (COP). Because of their advantages in energy efficiency and carbon emission reduction, ZEBs are being implemented in many countries as a solution to energy and environmental problems in the infrastructure sector.

Australia
In Australia, researchers have developed a new approach to the construction of visually clear solar-energy-harvesting windows suitable for industrialization and for applications in net-zero energy buildings. Industrial production of several prototype batches of solar windows started in 2016. As of December 2017, more than 30% of households in the State of Queensland had a rooftop solar photovoltaic (PV) system, and the average size of an Australian rooftop solar PV system exceeded 3.5 kW. In Brisbane, households with a 6 kW rooftop PV system and a reasonable energy rating, for example 5 or 6 stars under Australia's national house energy rating scheme, can achieve a net zero total energy target or even positive energy.

Belgium
In Belgium there is a project with the ambition of making the city of Leuven climate-neutral by 2030.

Brazil
In Brazil, Ordinance No. 42 of February 24, 2021 approved the Inmetro Normative Instruction for the Classification of Energy Efficiency of Commercial, Service and Public Buildings (INI-C), which improves on the Technical Quality Requirements for the Energy Efficiency Level of Commercial, Service and Public Buildings (RTQ-C) by specifying the criteria and methods for classifying commercial, service and public buildings according to their energy efficiency. Annex D presents the procedures for determining the potential for local renewable energy generation and the assessment conditions for Near Zero Energy Buildings (NZEBs) and Positive Energy Buildings (PEBs).

Canada
The Canadian Home Builders' Association - National oversees the Net Zero Homes certification label, a voluntary industry-led labeling initiative. In December 2017, the BC Energy Step Code entered into legal force in British Columbia. Local British Columbia governments may use the standard to incentivize or require a level of energy efficiency in new construction that goes above and beyond the requirements of the base building code.
The regulation is designed as a technical roadmap to help the province reach its target that all new buildings will attain a net zero energy ready level of performance by 2032. In August 2017, the Government of Canada released Build Smart - Canada's Buildings Strategy, as a key driver of the Pan Canadian Framework on Clean Growth and Climate Change, Canada's national climate strategy. The Build Smart strategy seeks to dramatically increase the energy efficiency of Canadian buildings in pursuit of a net zero energy ready level of performance. In Canada the Net-Zero Energy Home Coalition is an industry association promoting net-zero energy home construction and the adoption of a near net-zero energy home (nNZEH), NZEH Ready and NZEH standard. The Canada Mortgage and Housing Corporation is sponsoring the EQuilibrium Sustainable Housing Competition that will see the completion of fifteen zero-energy and near-zero-energy demonstration projects across the country starting in 2008. The EcoTerra House in Eastman, Quebec is Canada's first nearly net-zero energy housing built through the CMHC EQuilibrium Sustainable Housing Competition. The house was designed by Assoc. Prof. Dr. Masa Noguchi of the University of Melbourne for Alouette Homes and engineered by Prof. Dr. Andreas K. Athienitis of Concordia University. In 2014, the public library building in Varennes, QC, became the first ZNE institutional building in Canada. The library is also LEED gold certified. The EcoPlusHome in Bathurst, New Brunswick. The Eco Plus Home is a prefabricated test house built by Maple Leaf Homes and with technology from Bosch Thermotechnology. Mohawk College will be building Hamilton's first net Zero Building China With an estimated population of 1,439,323,776 people, China has become one of the world's leading contributor to greenhouse gas emissions due to its ongoing rapid urbanization. Even with the growing increase in building infrastructure, China has long been considered as a country where the overall energy demand has consistently grown less rapidly than the gross domestic product (GDP) of China. Since the late 1970s, China has been using half as much energy as it did in 1997, but due to its dense population and rapid growth of infrastructure, China has become the world's second largest energy consumer and is in a position to become the leading contributor to greenhouse gas emissions in the next century.Since 2010, Chinese government has been driven by the release of new national policies to increase ZEB design standards and has also laid out a series of incentives to increase ZEB projects in China. In November 2015, China's Ministry of Housing and Urban-Rural Development (MOHURD) released a technical guide regarding passive and low energy green residential buildings. This guide was aimed at improving energy efficiency in China's infrastructure and was also the first of its kind to be formally released as a guide for energy efficiency. Also, with rapid growth in ZEBs in the last three years, there is an estimated influx of ZEBs to be built in China by 2020 along with the existing ZEB projects that are already built.As a response to the Paris Agreement in 2015, China stated that it set a target of reducing peak carbon emissions around 2030 while also aiming to lower carbon dioxide emissions by 60-65 percent from 2005 emissions per unit of GDP. 
In 2020, Chinese Communist Party leader Xi Jinping released a statement in his address to the UN General Assembly declaring that China would be carbon neutral by 2060 pushing forward climate change reforms. With more than 95 percent of China's energy originating from fuel sources that emit carbon dioxide, carbon neutrality in China will require an almost complete transition to fuel sources such as solar power, wind, hydro, or nuclear power. In order to achieve carbon neutrality, China's proposed energy quota policy will have to incorporate new monitoring and mechanisms that ensure accurate measurements of energy performance of buildings. Future research should investigate the different possible challenges that could come up due to implementation of ZEB policies in China. Net-zero energy projects in China One of the new generation net-zero energy office buildings successfully constructed is the 71-story Pearl River Tower located in Guangzhou, China. Designed by Skidmore Owings Merrill LLP, the tower was designed with the idea that the building would generate the same amount of energy used on an annual basis while also following the four steps to net zero energy: reduction, absorption, reclamation, and generation. While initial plans for the Pearl River Tower included natural gas-fired microturbines used for generation electricity, photovoltaic panels integrated into the glazed roof and shading louvers and tactical building design in combination with the VAWT's electricity generation were chosen instead due to local regulations. Denmark Strategic Research Centre on Zero Energy Buildings was in 2009 established at Aalborg University by a grant from the Danish Council for Strategic Research (DSF), the Programme Commission for Sustainable Energy and Environment, and in cooperation with the Technical University of Denmark, Danish Technological Institute, The Danish Construction Association and some private companies. The purpose of the centre is through development of integrated, intelligent technologies for the buildings, which ensure considerable energy conservation and optimal application of renewable energy, to develop zero energy building concepts. In cooperation with the industry, the centre will create the necessary basis for a long-term sustainable development in the building sector. Germany Technische Universität Darmstadt won first place in the international zero energy design 2007 Solar Decathlon competition, with a passivhaus design (Passive house) + renewables, scoring highest in the Architecture, Lighting, and Engineering contests Fraunhofer Institute for Solar Energy Systems ISE, Freiburg im Breisgau Net zero energy, energy-plus or climate-neutral buildings in the next generation of electricity grids India India's first net zero building is Indira Paryavaran Bhawan, located in New Delhi, inaugurated in 2014. Features include passive solar building design and other green technologies. High-efficiency solar panels are proposed. It cools air from toilet exhaust using a thermal wheel in order to reduce load on its chiller system. It has many water conservation features. Iran In 2011, Payesh Energy House (PEH) or Khaneh Payesh Niroo by a collaboration of Fajr-e-Toseah Consultant Engineering Company and Vancouver Green Homes Ltd] under management of Payesh Energy Group (EPG) launched the first Net-Zero passive house in Iran. 
This concept makes the design and construction of PEH a sample model and a standardized process for mass production by MAPSA. Another example of the new generation of zero-energy office buildings is the 24-story OIIC Office Tower, started in 2011 as the OIIC Company headquarters. It combines modest energy-efficiency measures with substantial distributed renewable energy generation from both solar and wind, and it is managed by Rahgostar Naft Company in Tehran, Iran. The tower receives economic support from government subsidies that are now funding many significant fossil-fuel-free efforts.

Ireland
In 2005, a private company launched the world's first standardised passive house in Ireland; this concept makes the design and construction of a passive house a standardised process. Conventional low-energy construction techniques have been refined and modelled with the PHPP (Passive House Planning Package) to create the standardised passive house. Building offsite allows high-precision techniques to be used and reduces the possibility of errors in construction. In 2009 the same company started a project using 23,000 liters of water in a seasonal storage tank, heated by evacuated solar tubes throughout the year, with the aim of providing the house with enough heat throughout the winter months and thus eliminating the need for any electrical heating to keep the house comfortably warm. The system is monitored and documented by a research team from the University of Ulster, and the results will form part of a PhD thesis. In 2012 Cork Institute of Technology started renovation work on its 1974 building stock to develop a net zero energy building retrofit. The exemplar project will become Ireland's first zero-energy testbed, offering post-occupancy evaluation of actual building performance against design benchmarks.

Jamaica
The first zero-energy building in Jamaica and the Caribbean opened at the Mona Campus of the University of the West Indies (UWI) in 2017. The 2,300-square-foot building was designed to inspire more sustainable and energy-efficient buildings in the area.

Japan
After the March 2011 Tōhoku earthquake and the ensuing Fukushima Daiichi nuclear disaster, Japan experienced a severe power crisis that raised awareness of the importance of energy conservation. In 2012 the Ministry of Economy, Trade and Industry, the Ministry of Land, Infrastructure, Transport and Tourism and the Ministry of the Environment (Japan) summarized a road map for a low-carbon society which set the goal of making ZEH and ZEB the standard for new construction in 2020. The Mitsubishi Electric Corporation has been constructing Japan's first zero-energy office building, set to be completed in October 2020 (as of September 2020). The SUSTIE ZEB test facility, located in Kamakura, Japan, was built to develop ZEB technology; with its net zero certification, the facility is designed to reduce energy consumption by 103% relative to the standard baseline, making it net energy positive. Japan has made it a goal that all new houses be net zero energy by 2030. The developer Sekisui House introduced its first net zero home in 2013 and is now planning Japan's first zero-energy condominium in Nagoya City, a three-story building with 12 units. There are solar panels on the roof and fuel cells for each unit to provide backup power.
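Several of these national schemes express performance as an energy self-sufficiency or "energy independence" rate, that is, on-site renewable generation as a share of the building's consumption; the Korean ZEB grades described in the next section use this metric. The sketch below is hypothetical: only the bands quoted in the text (grade 3 at 60-80%, grade 5 at 20-40%, and grade 1 at or above 100%) come from the document, and the intermediate bands are assumptions that should be checked against the official scheme.

```python
# Sketch: computing an "energy independence rate" and mapping it to a ZEB grade.
# Intermediate grade bands marked "assumed" are illustrative assumptions only.

def energy_independence_rate(onsite_generation_kwh: float, consumption_kwh: float) -> float:
    """On-site renewable generation as a percentage of the building's energy use."""
    return 100.0 * onsite_generation_kwh / consumption_kwh

def zeb_grade(rate: float) -> str:
    if rate >= 100:
        return "ZEB grade 1 (net zero, or grade plus if generation exceeds demand)"
    if rate >= 80:
        return "ZEB grade 2"   # assumed band
    if rate >= 60:
        return "ZEB grade 3"
    if rate >= 40:
        return "ZEB grade 4"   # assumed band
    if rate >= 20:
        return "ZEB grade 5"
    return "below certification threshold"

# Hypothetical figures chosen to give roughly the 121.7% rate reported for the
# EnergyX DY-Building described in the next section.
rate = energy_independence_rate(onsite_generation_kwh=146_000, consumption_kwh=120_000)
print(f"{rate:.1f}% -> {zeb_grade(rate)}")
```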
Korea (Republic of) South Korea's Mandatory ZEB requirements, which have been previously applied to buildings with a GFA of 1,000 m2+ in 2021 will expand to buildings with a GFA of 500 m2+ in 2022, until being applicable to all public buildings starting in 2024. For private buildings, ZEB certification will be mandated for building permits with a GFA of over 100,000 m2 from 2023. After 2025, zero-energy construction for private buildings will be expanded to GFAs over 1,000 m2. The goal of the policy is to convert all public sector buildings to ZEB grade 3 (an energy independence rate of 60% ~ 80%), and all private buildings to ZEB grade 5 (an energy independence rate of 20% ~ 40%) by 2030.EnergyX DY-Building (에너지엑스 DY빌딩), the first commercial Net-Zero Energy Building (NZEB, or ZEB grade 1) and the first Plus Energy Building (+ZEB, or ZEB grade plus) in Korea was opened and introduced in 2023. The energy technology and sustainable architectural platform company EnergyX developed, designed, and engineered the building with its proprietary technologies and services. EnergyX DY-Building received the ZEB certification with an energy independence rate (or energy self-sufficiency rate) of 121.7%. Malaysia In October 2007, the Malaysia Energy Centre (PTM) successfully completed the development and construction of the PTM Zero Energy Office (ZEO) Building. The building has been designed to be a super-energy-efficient building using only 286 kWh/day. The renewable energy – photovoltaic combination is expected to result in a net zero energy requirement from the grid. The building is currently undergoing a fine tuning process by the local energy management team. Findings are expected to be published in a year.In 2016, the Sustainable Energy Development Authority Malaysia (SEDA Malaysia) started a voluntary initiative called Low Carbon Building Facilitation Program. The purpose is to support the current low carbon cities program in Malaysia. Under the program, several project demonstration managed to reduce energy and carbon beyond 50% savings and some managed to save more than 75%. Continuous improvement of super energy efficient buildings with significant implementation of on-site renewable energy managed to make a few of them become nearly Zero Energy (nZEB) as well as Net-Zero Energy Building (NZEB). In March 2018, SEDA Malaysia has started the Zero Energy Building Facilitation Program.Malaysia also has its own sustainable building tool special for Low Carbon and zero energy building, called GreenPASS that been developed by the Construction Industry Development Board Malaysia (CIDB) in 2012, and currently being administered and promoted by SEDA Malaysia. GreenPASS official is called the Construction Industry Standard (CIS) 20:2012. Netherlands In September 2006, the Dutch headquarters of the World Wildlife Fund (WWF) in Zeist was opened. This earth-friendly building gives back more energy than it uses. All materials in the building were tested against strict requirements laid down by the WWF and the architect. Norway In February 2009, the Research Council of Norway assigned The Faculty of Architecture and Fine Art at the Norwegian University of Science and Technology to host the Research Centre on Zero Emission Buildings (ZEB), which is one of eight new national Centres for Environment-friendly Energy Research (FME). 
The main objective of the FME-centres is to contribute to the development of good technologies for environmentally friendly energy and to raise the level of Norwegian expertise in this area. In addition, they should help to generate new industrial activity and new jobs. Over the next eight years, the FME-Centre ZEB will develop competitive products and solutions for existing and new buildings that will lead to market penetration of zero emission buildings related to their production, operation and demolition. Singapore Singapore unveiled a prominent development at the National University of Singapore that is a net-zero energy building. The building, called SDE4, is located within a group of three buildings in its School of Design and Environment (SDE). The design of the building achieved a Green Mark Platinum certification as it produces as much energy as it consumes with its solar panel covered rooftop and hybrid cooling system along with many integrated systems to achieve optimum energy efficiency. This development was the first new-build zero-energy building to come to fruition in Singapore, and the first zero-energy building at the NUS. The first retrofitted zero energy building to be developed in Singapore was a building at the Building and Construction Authority (BCA) academy by the Minister for National Development Mah Bow Tan at the inaugural Singapore Green Building Week on October 26, 2009. Singapore's Green Building Week (SGBW) promotes sustainable development and celebrates the achievements of successfully designed sustainable buildings.A net-zero energy building unveiled more recently is the SMU Connexion (SMUC). It is the first net-zero energy building in the city that also utilizes mass engineered timber (MET). It is designed to meet the Building and Construction Authority (BCA) Green Mark Platinum certification and has been in operation since January 2020. Switzerland The Swiss MINERGIE-A-Eco label certifies zero energy buildings. The first building with this label, a single-family home, was completed in Mühleberg in 2011. United Arab Emirates Masdar City in Abu Dhabi Dubai The Sustainable City in Dubai United Kingdom In December 2006, the government announced that by 2016 all new homes in England will be zero energy buildings. To encourage this, an exemption from Stamp Duty Land Tax is planned. In Wales the plan is for the standard to be met earlier in 2011, although it is looking more likely that the actual implementation date will be 2012. However, as a result of a unilateral change of policy published at the time of the March 2011 budget, a more limited policy is now planned which, it is estimated, will only mitigate two thirds of the emissions of a new home. BedZED development Hockerton Housing ProjectIn January 2019 the Ministry of Housing Communities and Local Government simply defined 'Zero Energy' as 'just meets current building standards' neatly solving this problem. United States In the US, ZEB research is currently being supported by the US Department of Energy (DOE) Building America Program, including industry-based consortia and researcher organizations at the National Renewable Energy Laboratory (NREL), the Florida Solar Energy Center (FSEC), Lawrence Berkeley National Laboratory (LBNL), and Oak Ridge National Laboratory (ORNL). 
From fiscal year 2008 to 2012, DOE plans to award $40 million to four Building America teams, the Building Science Corporation; IBACOS; the Consortium of Advanced Residential Buildings; and the Building Industry Research Alliance, as well as a consortium of academic and building industry leaders. The funds will be used to develop net-zero-energy homes that consume 50% to 70% less energy than conventional homes.DOE is also awarding $4.1 million to two regional building technology application centers that will accelerate the adoption of new and developing energy-efficient technologies. The two centers, located at the University of Central Florida and Washington State University, will serve 17 states, providing information and training on commercially available energy-efficient technologies.The U.S. Energy Independence and Security Act of 2007 created 2008 through 2012 funding for a new solar air conditioning research and development program, which should soon demonstrate multiple new technology innovations and mass production economies of scale. The 2008 Solar America Initiative funded research and development into future development of cost-effective Zero Energy Homes in the amount of $148 million in 2008.The Solar Energy Tax Credits have been extended until the end of 2016. By Executive Order 13514, U.S. President Barack Obama mandated that by 2015, 15% of existing Federal buildings conform to new energy efficiency standards and 100% of all new Federal buildings be Zero-Net-Energy by 2030. Energy Free Home Challenge In 2007, the philanthropic Siebel Foundation created the Energy Free Home Foundation. The goal was to offer $20 million in global incentive prizes to design and build a 2,000 square foot (186 square meter) three-bedroom, two bathroom home with (1) net-zero annual utility bills that also has (2) high market appeal, and (3) costs no more than a conventional home to construct.The plan included funding to build the top ten entries at $250,000 each, a $10 million first prize, and then a total of 100 such homes to be built and sold to the public. Beginning in 2009, Thomas Siebel made many presentations about his Energy Free Home Challenge. The Siebel Foundation Report stated that the Energy Free Home Challenge was "Launching in late 2009".The Lawrence Berkeley National Laboratory at the University of California, Berkeley participated in writing the "Feasibility of Achieving Zero-Net-Energy, Zero-Net-Cost Homes" for the $20-million Energy Free Home Challenge. If implemented, the Energy Free Home Challenge would have provided increased incentives for improved technology and consumer education about zero energy buildings coming in at the same cost as conventional housing. US Department of Energy Solar Decathlon The US Department of Energy Solar Decathlon is an international competition that challenges collegiate teams to design, build, and operate the most attractive, effective, and energy-efficient solar-powered house. Achieving zero net energy balance is a major focus of the competition. States Arizona Zero Energy House developed by the NAHB Research Center and John Wesley Miller Companies, Tucson. California The State of California has proposed that all new low- and mid-rise residential buildings, and all new commercial buildings, be designed and constructed to ZNE standards beginning in 2020 and 2030, respectively. 
The requirements, if implemented, will be promulgated via the California Building Code, which is updated on a three-year cycle and which currently mandates some of the highest energy efficiency standards in the United States. California is anticipated to further increase efficiency requirements by 2020, thus avoiding the trends discussed above of building standard housing and achieving ZNE by adding large amounts of renewables. The California Energy Commission is required to perform a cost-benefit analysis to prove that new regulations create a net benefit for residents of the state. West Village, located on the University of California campus in Davis, California, was the largest ZNE-planned community in North America at the time of its opening in 2014. The development contains student housing for approximately 1,980 UC Davis students as well as leasable office space and community amenities including a community center, pool, gym, restaurant and convenience store. Office spaces in the development are currently leased by energy and transportation-related University programs. The project was a public-private partnership between the university and West Village Community Partnership LLC, led by Carmel Partners of San Francisco, a private developer, who entered into a 60-year ground lease with the university and was responsible for the design, construction, and implementation of the $300 million project, which is intended to be market-rate housing for Davis. This is unique as the developer designed the project to achieve ZNE at no added cost to themselves or to the residents. Designed and modeled to achieve ZNE, the project uses a mixture of passive elements (roof overhangs, well-insulated walls, radiant heat barriers, ducts in insulated spaces, etc.) as well as active approaches (occupancy sensors on lights, high-efficiency appliances and lighting, etc.). Designed to out-perform California's 2008 Title 24 energy codes by 50%, the project produced 87% of the energy it consumed during its first year in operation. The shortcoming in ZNE status is attributed to several factors, including improperly functioning heat pump water heaters, which have since been fixed. Occupant behavior is significantly different from that anticipated, with the all-student population using more energy on a per-capita basis than typical inhabitants of single-family homes in the area. One of the primary factors driving increased energy use appears to be the increased miscellaneous electrical loads (MEL, or plug loads) in the form of mini-refrigerators, lights, computers, gaming consoles, televisions, and other electronic equipment. The university continues to work with the developer to identify strategies for achieving ZNE status. These approaches include incentivizing occupant behavior and increasing the site's renewable energy capacity, which is a 4 MW photovoltaic array per the original design. The West Village site is also home to the Honda Smart Home US, a beyond-ZNE single-family home that incorporates cutting-edge technologies in energy management, lighting, construction, and water efficiency. The IDeAs Z2 Design Facility is a net zero energy, zero carbon retrofit project occupied since 2007. It uses less than one fourth the energy of a typical U.S. office by applying strategies such as daylighting, radiant heating/cooling with a ground-source heat pump and high energy performance lighting and computing. The remaining energy demand is met with renewable energy from its building-integrated photovoltaic array. 
In 2009, building owner and occupant Integrated Design Associates (IDeAs) recorded actual measured energy use intensity of 21.17 thousand British thermal units per square foot (66.8 kWh/m2) per year, with 21.72 thousand British thermal units per square foot (68.5 kWh/m2) per year produced, for a net of −0.55 thousand British thermal units per square foot (−1.7 kWh/m2) per year. The building is also carbon neutral, with no gas connection, and with carbon offsets purchased to cover the embodied carbon of the building materials used in the renovation. The Zero Net Energy Center, scheduled to open in 2013 in San Leandro, is to be a 46,000-square-foot electrician training facility created by the International Brotherhood of Electrical Workers Local 595 and the Northern California chapter of the National Electrical Contractors Association. Training will include energy-efficient construction methods. The Green Idea House is a net zero energy, zero-carbon retrofit in Hermosa Beach. George LeyVa Middle School Administrative Offices, occupied since fall 2011, is a net zero energy, net zero carbon emissions building of just over 9,000 square feet. With daylighting, variable refrigerant flow HVAC, and displacement ventilation, it is designed to use half of the energy of a conventional California school building, and, through a building-integrated solar array, provides 108% of the energy needed to offset its annual electricity use. The excess helps power the remainder of the middle school campus. It is the first publicly funded NZE K–12 building in California. The Stevens Library at Sacred Heart Schools in California is the first net-zero library in the United States, receiving Net Zero Energy Building status from the International Living Future Institute, part of the PG&E Zero Net Energy Pilot Project. The Santa Monica City Services Building is among the first net-zero energy, net-zero water public/municipal buildings in California. Completed in 2020, the 50,000-square-foot addition to the historic Santa Monica City Hall building was designed to provide its own energy and water, and to minimize energy use through efficient building systems. At 402,000 square-feet, the California Air Resources Board Southern California Headquarters - Mary D. Nichols Campus, is the largest net-zero energy facility in the United States. A photovoltaic system covers 204,903 square-feet between the facility rooftop and parking pavilions. The +3.5 megawatt system is anticipated to generate roughly 6,235,000 kWh reusable energy per year. The facility was dedicated on November 18, 2021. Colorado The Moore House achieves net-zero energy usage with passive solar design, 'tuned' heat reflective windows, super-insulated and air-tight construction, natural daylighting, solar thermal panels for hot water and space heating, a photovoltaic (PV) system that generates more carbon-free electricity than the house requires, and an energy-recovery ventilator (ERV) for fresh air. The green building strategies used on the Moore House earned it a verified home energy rating system (HERS) score of −3. The NREL Research Support Facility in Golden is a class A office building. Its energy efficiency features include: Thermal storage concrete structure, transpired solar collectors, 70 miles of radiant piping, high-efficiency office equipment, and an energy-efficient data center that reduces the data center's energy use by 50% over traditional approaches. 
Wayne Aspinall Federal Building in Grand Junction, originally constructed in 1918, became the first Net Zero Energy building listed on the National Register of Historic Places. On-site renewable energy generation is intended to produce 100% of the building's energy throughout the year using the following energy efficiency features: Variable refrigerant flow for the HVAC, a geo-exchange system, advanced metering and building controls, high-efficiency lighting systems, a thermally enhanced building envelope, an interior window system (to maintain historic windows), and advanced power strips (APS) with individual occupancy sensors. Tutt Library at Colorado College was renovated to be a net-zero library in 2017, making it the largest ZNE academic library. It received an Innovation Award from the National Association of College and University Business Officers. Florida The 1999 side-by-side Florida Solar Energy Center Lakeland demonstration project was called the "Zero Energy Home". It was a first-generation university effort that significantly influenced the creation of the U.S. Department of Energy, Energy Efficiency and Renewable Energy, Zero Energy Home program. Illinois The Walgreens store at 741 Chicago Ave, Evanston, is the first of the company's stores to be built or converted to a net zero energy building. It is the first net zero energy retail store to be built and will pave the way for renovating and building net zero energy retail stores in the near future. The Walgreens store includes the following energy efficiency features: a geo-exchange system, energy-efficient building materials, LED lighting and daylight harvesting, and carbon dioxide refrigerant. The Electrical and Computer Engineering building at the University of Illinois at Urbana-Champaign, which was built in 2014, is a net zero building. Iowa The MUM Sustainable Living Center was designed to surpass LEED Platinum qualification. The Maharishi University of Management (MUM) in Fairfield, Iowa, founded by Maharishi Mahesh Yogi (best known for having brought Transcendental Meditation to the West), incorporates principles of Bau Biology (a German system that focuses on creating a healthy indoor environment), as well as Maharishi Vedic Architecture (an Indian system of architecture focused on the precise orientation, proportions and placement of rooms). The building is one of the few in the country to qualify as net zero, and one of even fewer that can claim the banner of grid positive via its solar power system. A rainwater catchment system and on-site natural waste-water treatment likewise take the building off (sewer) grid with respect to water and waste treatment. Additional green features include natural daylighting in every room, natural and breathable earth block walls (made by the program's students), purified rainwater for both potable and non-potable functions, and an on-site water purification and recycling system consisting of plants, algae, and bacteria. Kentucky Richardsville Elementary School, part of the Warren County Public School District in south-central Kentucky, is the first Net Zero energy school in the United States. To reach Net Zero, innovative energy reduction strategies were used by CMTA Consulting Engineers and Sherman Carter Barnhart Architects, including dedicated outdoor air systems (DOAS) with dynamic reset, new IT systems, alternative methods to prepare lunches, and the use of solar photovoltaics.
The project has an efficient thermal envelope constructed with insulated concrete form (ICF) walls, geothermal water source heat pumps, low-flow fixtures, and features daylighting extensively throughout. It is also the first truly wireless school in Kentucky. Locust Trace AgriScience Center, an agricultural-based vocational school serving Fayette County Public Schools and surrounding districts, features a Net Zero Academic Building engineered by CMTA Consulting Engineers and designed by Tate Hill Jacobs Architects. The facility, located in Lexington, Kentucky, also has a greenhouse, riding arena with stalls, and a barn. To reach Net Zero in the Academic Building the project utilizes an air-tight envelope, expanded indoor temperature setpoints in specified areas to more closely model real-world conditions, a solar thermal system, and geothermal water source heat pumps. The school has further reduced its site impact by minimizing municipal water use through the use of a dual system consisting of a standard leach field system and a constructed wetlands system and using pervious surfaces to collect, drain, and use rainwater for crop irrigation and animal watering. Massachusetts The government of Cambridge has enacted a plan for "net zero" carbon emissions from all buildings in the city by 2040. John W. Olver Transit Center, designed by Charles Rose Architects Inc, is an intermodal transit hub in Greenfield, Massachusetts. Built with American Recovery and Reinvestment Act funds, the facility was constructed with solar panels, geothermal wells, copper heat screens and other energy efficient technologies. Michigan The Mission Zero House is the 110-year-old Ann Arbor home of Greenovation.TV host and Environment Report contributor Matthew Grocoff. As of 2011, the home is the oldest home in America to achieve net-zero energy. The owners are chronicling their project on Greenovation.TV and The Environment Report on public radio. The Vineyard Project is a Zero Energy Home (ZEH) thanks to the Passive Solar Design, 3.3 Kws of Photovoltaics, Solar Hot Water and Geothermal Heating and Cooling. The home is pre-wired for a future wind turbine and only uses 600 kWh of energy per month while a minimum of 20 kWh of electricity per day with many days net-metering backwards. The project also used ICF insulation throughout the entire house and is certified as Platinum under the LEED for Homes certification. This Project was awarded Green Builder Magazine Home of the Year 2009. The Lenawee Center for a Sustainable Future, a new campus for Lenawee Intermediate School District, serves as a living laboratory for the future of agriculture. It is the first Net Zero education building in Michigan, engineered by CMTA Consulting Engineers and designed by The Collaborative, Inc. The project includes solar arrays on the ground as well as the roof, a geothermal heating and cooling system, solar tubes, permeable pavement and sidewalks, a sedum green roof, and an overhang design to regulate building temperature. Missouri In 2010, architectural firm HOK worked with energy and daylighting consultant The Weidt Group to design a 170,735-square-foot (15,861.8 m2) net zero carbon emissions Class A office building prototype in St. Louis, Missouri. The team chronicled its process and results on Netzerocourt.com. New Jersey The 31 Tannery Project, located in Branchburg, New Jersey, serves as the corporate headquarters for Ferreira Construction, the Ferreira Group, and Noveda Technologies. 
The 42,000-square-foot (3,900 m2) office and shop building was constructed in 2006 and is the first building in the state of New Jersey to meet New Jersey's Executive Order 54. The building is also the first Net Zero Electric Commercial Building in the United States. New York Green Acres, the first true zero-net energy development in America, is located in New Paltz, about 80 miles (130 km) north of New York City. Greenhill Contracting began construction on this development of 25 single family homes in summer 2008, with designs by BOLDER Architecture. After a full year of occupancy, from March 2009 to March 2010, the solar panels of the first occupied home in Green Acres generated 1490 kWh more energy than the home consumed. The second occupied home has also achieved zero-net energy use. As of June 2011, five houses have been completed, purchased and occupied, two are under construction, and several more are being planned. The homes are built of insulated concrete forms with spray foam insulated rafters and triple pane casement windows, heated and cooled by a geothermal system, to create extremely energy-efficient and long-lasting buildings. The heat recovery ventilator provides constant fresh air and, with low or no VOC (volatile organic compound) materials, these homes are very healthy to live in. To the best of our knowledge, Green Acres is the first development of multiple buildings, residential or commercial, that achieves true zero-net energy use in the United States, and the first zero-net energy development of single family homes in the world. Greenhill Contracting has built two luxury zero-net energy homes in Esopus, completed in 2008. One house was the first Energy Star rated zero-net energy home in the Northeast and the first registered zero-net energy home on the US Department of Energy's Builder's Challenge website. These homes were the template for Green Acres and the other zero-net energy homes that Greenhill Contracting has built, in terms of methods and materials. The headquarters of Hudson Solar, a dba of Hudson Valley Clean Energy, Inc., located in Rhinebeck and completed in 2007, was determined by NESEA (the Northeast Sustainable Energy Association) to have become the first proven zero-net energy commercial building in New York State and the ten northeast United States (October 2008). The building consumes less energy than it generates, using a solar electric system to generate power from the sun, geothermal heating and cooling, and solar thermal collectors to heat all its hot water. Oklahoma The first 5,000-square-foot (460 m2) zero-energy design home was built in 1979 with support from President Carter's new United States Department of Energy. It relied heavily on passive solar building design for space heat, water heat and space cooling. It heated and cooled itself effectively in a climate where the summer peak temperature was 110 degrees Fahrenheit, and the winter low temperature was −10 F. It did not use active solar systems. It is a double envelope house that uses a gravity-fed natural convection air flow design to circulate passive solar heat from 1,000 square feet (93 m2) of south-facing glass on its greenhouse through a thermal buffer zone in the winter. A swimming pool in the greenhouse provided thermal mass for winter heat storage. In the summer, air from two 24-inch (610 mm) 100-foot-long (30 m) underground earth tubes is used to cool the thermal buffer zone and exhaust heat through 7200 cfm of outer-envelope roof vents. 
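The passive cooling described above can be roughed out from the stated 7,200 cfm of earth-tube airflow using the standard sensible-heat relation Q = mass flow x cp x temperature drop. The source does not report the temperature drop across the earth tubes, so the short Python sketch below assumes a nominal 8 K drop and standard air properties purely for illustration; it is an order-of-magnitude estimate, not a reported performance figure.

# Back-of-envelope sensible-cooling estimate for the earth-tube system described above.
# Only the 7,200 cfm airflow comes from the text; the temperature drop is an assumption.
CFM_TO_M3_PER_S = 0.000471947      # 1 cubic foot per minute expressed in m^3/s
airflow_cfm = 7200                 # stated earth-tube / roof-vent airflow
delta_t_k = 8.0                    # assumed air temperature drop across the tubes (hypothetical)
rho_air = 1.2                      # approximate air density, kg/m^3
cp_air = 1005.0                    # specific heat of air, J/(kg*K)

volume_flow_m3_s = airflow_cfm * CFM_TO_M3_PER_S   # about 3.4 m^3/s
mass_flow_kg_s = volume_flow_m3_s * rho_air        # about 4.1 kg/s
cooling_w = mass_flow_kg_s * cp_air * delta_t_k    # sensible cooling in watts

print(f"Sensible cooling: {cooling_w / 1000:.1f} kW (~{cooling_w * 3.412:.0f} BTU/h)")
# Roughly 33 kW (about 110,000 BTU/h) under these assumed conditions.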
Oregon Net Zero Energy Building Certification launched in 2011, with an international following. The first project, Painters Hall, is Pringle Creek's Community Center, café, office, art gallery, and event venue. Originally built in the 1930s, Painters Hall was renovated to LEED Platinum Net Zero energy building standards in 2010, demonstrating the potential of converting existing building stock into high‐performance, sustainable building sites. Painters Hall features simple low-cost solutions for energy reduction, such as natural daylighting and passive cooling lighting, that save money and increase comfort. A district ground-source geothermal loop serves the building's GSHP for highly efficient heating and air conditioning. Excess generation from the 20.2 kW rooftop solar array offsets pumping for the neighborhoods geo loop system. Open to the public, Painters Hall is a hub for gatherings of friends, neighbors, and visitors at the heart of a neighborhood designed around nature and community. Pennsylvania The Phipps Center for Sustainable Landscapes in Pittsburgh was designed to be one of the greenest buildings in the world. It achieved Net Zero Energy Building Certification from the Living Building Challenge in February 2014 and is pursuing full certification. The Phipps Center uses energy conservation technologies such as solar hot water collectors, carbon dioxide sensors, and daylighting, as well as renewable energy technologies to allow it to achieve Net Zero Energy status. The Lombardo Welcome Center at Millersville University became the first building in the state to become zero-energy certified. This was the largest step in Millersville University's goal to be carbon neutral by 2040. According to the International Living Future Institute, The Lombardo Welcome Center is one of the highest-performing buildings throughout the country generating 75% more energy than currently being used. Rhode Island In Newport, the Paul W. Crowley East Bay MET School is the first Net Zero project to be constructed in Rhode Island. It is a 17,000 sq ft building, housing eight large classrooms, seven bathrooms and a kitchen. It will have PV panels to supply all necessary electricity for the building and a geothermal well which will be the source of heat. Tennessee civitas, designed by archimania, Memphis, Tennessee. civitas is a case study home on the banks of the Mississippi River, currently under construction. It aims to embrace cultural, climatic, and economic challenges. The home will set a precedent for Southeastern high-performance design. Texas The University of North Texas (UNT) was constructing a Zero Energy Research Laboratory on its 300-acre research campus, Discovery Park, in Denton, Texas. The project is funded at over $1,150,000 and will primarily benefit students in mechanical and energy engineering (UNT became the first university to offer degrees in mechanical and energy engineering in 2006). This 1,200-square-foot structure is now competed and held ribbon-cutting ceremony for the University of North Texas' Zero Energy Laboratory on April 20, 2012. The West Irving Library in Irving, Texas, became the first net zero library in Texas in 2011, running entirely off solar energy. Since then it has produced a surplus. It has LEED gold certification. Vermont The Putney School's net zero Field House was opened on October 10, 2009. 
In use for over a year, as of December 2010, the Field House used 48,374 kWh and produced a total of 51,371 kWh during the first 12 months of operation, thus performing at slightly better than net-zero. Also in December, the building won an AIA-Vermont Honor Award. The Charlotte Vermont House designed by Pill-Maharam Architects is a verified net zero energy house completed in 2007. The project won the Northeast Sustainable Energy Association's Net Zero Energy award in 2009. See also References Further reading Nisson, J. D. Ned; and Gautam Dutt, "The Superinsulated Home Book", John Wiley & Sons, 1985, ISBN 978-0-471-88734-8, ISBN 978-0-471-81343-9. Markvart, Thomas; Editor, "Solar Electricity" John Wiley & Sons; 2nd edition, 2000, ISBN 978-0-471-98853-3. Clarke, Joseph; "Energy Simulation in Building Design", Second Edition Butterworth-Heinemann; 2nd edition, 2001, ISBN 978-0-7506-5082-3. National Renewable Energy Laboratory, 2000 ZEB meeting report Noguchi, Masa, ed., "The Quest for Zero Carbon Housing Solutions", Open House International, Vol.33, No.3, 2008, Open House International Voss, Karsten; Musall, Eike: "Net zero energy buildings – International projects of carbon neutrality in buildings", Munich, 2011, ISBN 978-3-920034-80-5.
held v. montana
Held v. Montana is a constitutional court case in the State of Montana regarding the right to a "clean and healthful environment in Montana for present and future generations" (Art. IX, § 1) as required by the Constitution of Montana. The case was filed in March 2020 by Our Children's Trust on behalf of sixteen youth residents of Montana, then aged 2 through 18. On June 12, 2023, the case became the first climate-related constitutional lawsuit to go to trial in the United States. The plaintiffs argued that the state's support of the fossil fuel industry had worsened the effects of climate change on their lives, thus depriving them of their constitutional rights. More specifically, the plaintiffs challenged a provision in the Montana Environmental Policy Act (MEPA) that prohibited the state from considering greenhouse gas emissions as a factor when deciding whether to issue permits for energy-related projects. In its defense, the state claimed that regulators were simply following state law and argued that any of Montana's contributions to climate change would need to be addressed through the Montana legislature. The defense called the plaintiffs' case an "airing of political grievances" that is not actionable in court. On August 14, 2023, Lewis and Clark County District Court Judge Kathy Seeley ruled in favor of the plaintiffs, finding that the limitations on considering environmental factors when deciding oil and gas permits violated the right to a safe environment recited in Montana's constitution. History Mining interests heavily influenced the content of the original (1889) Constitution of Montana, causing subsequent laws to highly favor extractive industry, with some historians even calling the state a "corporate colony". A 1972 constitutional convention added language guaranteeing citizens "the right to a clean and healthful environment"—language that would become central to the Held case. At the time of trial, Montana was one of only three states with constitutions having environmental protections explicitly recited in their bill of rights, thus avoiding a need for plaintiffs to preliminarily prove they have such rights. Despite the 1972 amendment, according to The Guardian, policy experts say Montana officials have shaped state laws around the deeply entrenched financial interests of the fossil fuel industry. For example, in 2011 the state's energy policy was changed to prohibit the state from considering climate change as a factor when deciding whether to issue new permits. In the same year, Montana withdrew from the Western Climate Initiative, an agreement among some western American states and parts of Canada to reduce greenhouse gas emissions. In May 2023, Montana Republican lawmakers amended a limitation to the Montana Environmental Policy Act—called the "MEPA limitation"—to make it what was considered the nation's most aggressive anti-climate action law. Separately, state laws passed in spring 2023 explicitly forbade local governments from banning fossil fuels in building codes, from banning fuel derived from petroleum, and from requiring new construction to have solar panels. In context, in 2023 Montana was the fifth largest coal-producing U.S. state and the twelfth largest oil-producing state. Moreover, since 2003 the state has received nearly $650 million from resource extraction—the eighth highest total in the country. In 2011, nonprofit law firm Our Children's Trust asked the Montana Supreme Court to rule that the state has a duty to address climate change.
The court declined the request, a decision that required the group to start in a lower court. On March 13, 2020, Our Children's Trust and other law firms filed the Held v. Montana complaint in the First Judicial District Court, Lewis and Clark County, in Helena. The judge denied the state's August 2021 motion to dismiss. In 2022, the Montana attorney general requested that the state Supreme Court take control of the case, asking that discovery be stopped, but the Supreme Court denied both requests. Although youth-led climate change lawsuits had been filed in every state over the preceding decade, only four of the suits filed by Our Children's Trust outside of the Montana case were still pending as of June 2023. On June 12, 2023, Held became the third climate-related lawsuit in the U.S. to go to trial, and the first climate-related constitutional law case in the U.S. to reach trial. On April 14, 2023, State District Judge Michael Moses ruled that the permit for NorthWestern Energy's $250 million Laurel Generation Station on the Yellowstone River in Montana was cancelled, as the Montana Department of Environmental Quality (DEQ) had misinterpreted the state's environmental law and had failed to consider the long-term consequences of carbon dioxide emissions from the plant, estimated at "23 million tons", which "would impact" the town of Billings, downwind of the Laurel Generation Station. In response to the decision by Judge Moses, on April 15, House Bill 971 was introduced, sponsored by Representative Joshua Kassmier, R-Fort Benton, and was quickly passed into law. Bill 971 exempted the DEQ from "adhering to air quality and emissions standards when authorizing or changing permits". A Senate Bill 557 amendment, sponsored by Sen. Mark Noland, R-Bigfork, which is very similar to HB 971 and was also a response to the Moses decision on the Laurel plant, was introduced on April 14. Legal principles The lawsuit is based in part on the state's constitution as amended in 1972, whose Article IX (Environment and Natural Resources) requires that the "state and each person shall maintain and improve a clean and healthful environment in Montana for present and future generations". This passage follows Article II (Declaration of Rights), Section 3 (Inalienable Rights), which states that "All persons are born free and have certain inalienable rights. They include the right to a clean and healthful environment..." The 2020 Held complaint ("Nature of the Action", § 4) challenged the constitutionality of two Montana statutes: (1) the State Energy Policy, which directs statewide energy production and use, and (2) the "climate change exception" in the MEPA, which prohibits the state from considering how its energy economy may contribute to climate change. Lawmakers repealed the State Energy Policy before trial, causing the claim against it to be dismissed. The complaint did not seek financial compensation, but a declaration that the state had violated the plaintiffs' constitutional rights. In addition, the complaint asked the court to prohibit the state from "subjecting Youth plaintiffs" to the unconstitutional laws, to prepare a complete and accurate accounting of the state's greenhouse gas (GHG) emissions, and to require the state to develop a remedial plan or policies to reduce GHG emissions ("Prayer for Relief", pp. 102-104). The plaintiffs and the state were required to file proposed findings, due in early July, before the judge made a ruling. A judge, not a jury, decides constitutional issues.
Factual issues Plaintiffs' opening statement referred to "heat, drought, wildfires, air pollution, violent storms, loss of wildlife, watching glaciers melt," saying medical and psychological impacts disproportionately impact young people. During their five-day presentation of their case, almost all of the plaintiffs themselves testified, as did eight plaintiffs' expert witnesses on topics including climate change, renewable energy, energy policy in Montana, and children's physical and mental health.The youth plaintiffs testified as to how they had been harmed by climate change, including impacts from wildfires and related evacuations; extreme temperatures and drought; changing river flow patterns; damage to infrastructure, farms, and businesses; fear of the future, and a sense of loss of control. Native American youths testified as to how their traditional ceremonies and food had been affected.One of plaintiffs' experts, climate scientist and former IPCC member Steven Running, presented the science underlying climate change, from the greenhouse effect through its impacts specific to Montana. Another expert, Stockholm Environment Institute researcher Peter Erickson, testified that Montana emits more greenhouse gases than some entire countries, comparing the emissions created by the energy consumed within the state’s borders in 2019 to the amount emitted by Ireland in the same year. Clinical psychiatrist Lise Van Susteren, co-founder of the Climate Psychiatry Alliance, described the emotional toll on the plaintiffs when they recognize what lies in their future, characterizing their experience as "pretraumatic stress". Mark Jacobson, director of the atmosphere and energy program at Stanford University, testified that the state could transition from fossil fuels—providing ~75% of Montana’s 2018 energy consumption—to having 92% of its energy from renewable sources, in part because Montana's large land area has 330 times the wind energy potential needed to power the entire state. The case is rare in that it involves questioning climate experts on the witness stand.In the defense's case, Montana Assistant Attorney General Michael Russell asserted that the court "will hear lots of emotion, lots of assumptions, accusations", but that climate change "is a global issue that effectively relegates Montana's role to that of a spectator", with "Montana’s emissions (being) simply too minuscule to make any difference". Some scholars called the state's argument—referring to emissions occurring outside Montana—"whataboutism". Russell also said that the plaintiffs' case tried to turn a "procedural matter" into "a weeklong hearing of political grievances that properly belong to the legislature and not a court of law." Although the trial was originally scheduled to last two weeks, it lasted seven days. The state’s attorneys called only three witnesses in its single-day case, closing their case at the end of the sixth day of trial. The state declined to call its only mental health witness to the stand. Also, a written report by climate change contrarian Judith Curry—the state's sole climate science witness—was not entered into the record, nor did Curry testify as originally planned. Ruling and appeal Judge Seeley's August 14, 2023 judgment ruled in favor of the youth plaintiffs, declaring the 2023 version of the MEPA limitation (HB 971) and a similar 2023 law (SB 557) to be unconstitutional. 
The judge's 103-page ruling concluded that the "MEPA Limitation violates Youth Plaintiffs' right to a clean and healthful environment and is unconstitutional on its face". The ruling stated that the MEPA limitation's prohibition against analysis of greenhouse gas (GHG) emissions, their impacts on the climate, and their contributions to climate change, were inconsistent with the Montana Constitution.Montana Attorney General Austin Knudsen's spokesperson called the case a "taxpayer-funded publicity stunt" and said that Judge Seeley's ruling was "absurd". The Office of the Attorney General said it would appeal the ruling. Context and impact At the time of trial, Department of Environmental Quality Director Chris Dorrington said that permitting practices would not change if the plaintiffs prevailed, as the agency does not have the authority to refuse to issue a permit that complies with the law. Furthermore, DEQ Air, Energy and Mining Division Administrator Sonja Nowakowski distinguished procedural laws such as that being challenged, from regulatory laws that can be used as a basis to refuse a permit—saying plaintiff's only redress is to challenge specific permitting decisions.The Held decision would not be binding on other states because states' courts separately interpret their respective laws. Also, at trial the judge said that even if the plaintiffs prevail, she would not order officials to formulate a new approach to address climate change. However, the judge could issue a declaratory judgment saying officials violated the state constitution. That declaratory judgment would set a legal precedent for courts to consider cases normally governed by the legislative and executive branches.In August 2023, The New York Times and The Washington Post described Held as a "landmark" climate case. Bloomberg described it as a "historic case on harm from climate change". The Post said that it was the "first ruling of its kind nationwide." The Guardian called it a "game changer". Our Children's Trust's founder, Julia Olson, noted that Held v. Montana marked the first time in U.S. history that the merits of a case led a court to rule that a government violated young people’s constitutional rights by promoting fossil fuel usage. Similar cases Juliana v. United States, a case filed in 2015 in which 21 young people sued the federal government for violating their constitutional rights to life, liberty, and property by enacting policies that support fossil fuel interests. In June 2023, the case was permitted to proceed to trial in federal court. See also Climate justice Just transition Fossil fuel divestment List of environmental lawsuits Climate change in the United States Notes References External links "The Constitution of the State of Montana" (PDF). courts.mt.gov. Montana Judicial Branch. March 22, 1972. Archived (PDF) from the original on May 28, 2023. (includes original signatures) "Complaint For Declaratory and Injunctive Relief" (PDF). ClimateCaseChart.com. Columbia Law School Sabin Center for Climate Change Law, in collaboration with Arnold and Porter. March 13, 2020. Archived (PDF) from the original on March 31, 2023. (original complaint that started the lawsuit) "Photos: Attorneys for Montana, youth plaintiffs make final pitches in climate trial". Independent Record. June 20, 2023. Archived from the original on June 21, 2023. Photos by Thom Bridge, Independent Record. Seeley, Kathy (August 14, 2023). "Findings of fact, conclusions of law, and order" (PDF). 
Montana First Judicial District Court, Lewis and Clark County. Archived (PDF) from the original on August 15, 2023. (103-page ruling after trial)
paludiculture
Paludiculture is wet agriculture and forestry on peatlands. Paludiculture combines the reduction of greenhouse gas emissions from drained peatlands through rewetting with continued land use and biomass production under wet conditions. "Paludi" comes from the Latin "palus" meaning "swamp, morass", and "paludiculture" as a concept was developed at Greifswald University. Paludiculture is a sustainable alternative to drainage-based agriculture, intended to maintain carbon storage in peatlands. This differentiates paludiculture from agriculture such as rice paddies, which involves draining, and therefore degrading, wetlands. Characteristics Impact of peatland drainage and rewetting Peatlands store an enormous amount of carbon. Covering only 3% of the land surface, they store more than 450 gigatonnes of carbon - more than is stored by forests (which cover 30% of the land surface). Drained peatlands cause numerous negative environmental impacts such as greenhouse gas emissions, nutrient leaching, subsidence and loss of biodiversity. Although drained peatlands cover only about 0.3% of the land surface, peatland drainage is estimated to be responsible for 6% of all human greenhouse gas emissions. By making soils waterlogged when re-wetting peatlands, decomposition of organic matter (~50% carbon) will almost cease, and hence carbon will no longer escape into the atmosphere as carbon dioxide. Peatland rewetting can significantly reduce environmental impacts caused by drainage by restoring hydrological buffering and reducing the water table's sensitivity to atmospheric evaporative demand. Due to the drainage of soils for agriculture in many areas, the peat soil depth and water quality have declined significantly over the years. These problems are mitigated by re-wetting peatlands, which can also make installations against rising sea levels (levees, pumps) unnecessary. Wet bogs act as nitrogen sinks, whereas mineralisation and fertilisation from agriculture on drained bogs produce nitrogen run-off into nearby waters. Arguments for cultivating crops on restored peatlands Cultivating peatland products sustainably can incentivise the rewetting of drained peatlands, while maintaining similar land use in previously drained agricultural areas. Raw materials can be grown on peatlands without competing with food production for land in other areas. The growing of crops extracts phosphate from the land, which is important in wetlands; it also helps to extract other nutrients from water, making it suitable for post-water treatment purposes. In many tropical countries, cultivating semi-wild native crops in peat swamp forests is a traditional livelihood which can be sustainable. Restored reed beds can obstruct nitrogen and phosphorus run-off from agriculture higher up in the river system and so protect lower waters. Paludiculture areas can act as habitat corridors and ecological buffer zones between traditional agriculture and intact peatlands. Debates around the sustainability of paludiculture The application of the term "paludiculture" is debated as it is contingent on whether different peatland agricultural practices are considered sustainable. In terms of greenhouse gas emissions, how sustainable a paludiculture practice is deemed to be depends on the greenhouse gas measured, the species of plant and the water table level of the peatland. "Paludiculture" has been used to refer to cultivating native and non-native crops on intact or re-wetted peatlands.
In the EU's Common Agricultural Policy, it is defined as the productive land use of wet and rewetted peatlands that preserves the peat soil and thereby minimizes CO2 emissions and subsidence. A recent review of tropical peatland paludiculture from the National University of Singapore evaluated wet and re-wetted management pathways in terms of greenhouse gas emissions and carbon sequestration and concluded that commercial paludiculture is only suited to re-wetted peatlands, where it is carbon negative or neutral, as opposed to intact peatlands, where it increases emissions. Even after decades of re-wetting, such peatlands can still contribute to global warming to a greater extent than intact peatlands. Exceptions where paludiculture on intact peatlands may be sustainable are some traditions of cultivating native crops semi-wild in intact peat swamp forest, or gathering peatland products without active cultivation. The review also suggests that, to be sustainable, paludiculture should only use native vegetation to restore peatlands whilst producing biomass, as opposed to any wetland plants which have the possibility of surviving. This is because using non-native species may create negative peatland conditions for other native plants, and non-native plants tend to have a lower yield and lifespan in undrained or re-wetted peatlands than when grown in their native habitats or drained wetlands. Paludiculture and ecosystem services Assessments of the sustainability of paludiculture should take into account ecosystem services besides carbon sequestration and how paludiculture can be integrated with traditional farming practices. Peatlands can provide a number of other ecosystem services, e.g. biodiversity conservation and water regulation. It is therefore important to protect these areas and restore degraded ones. Conserving, restoring and improving the management of peatlands is a cost-efficient and relatively easy way to maintain ecosystem services. However, these ecosystem services are not priced in a market and do not produce economic profit for the local communities. Drainage and cultivation, grazing, and peat mining, on the other hand, give local communities short-term economic profits. It has therefore been argued that conservation and restoration, which have a significant and common value, need to be subsidized by the state or the world at large. Paludiculture is not focused on nature conservation but on production, but paludiculture and conservation may complement each other in a number of ways. 1) Paludiculture can be the starting point and an intermediate stage in the process of restoring a drained peatland. 2) Paludiculture can lower the cost of a conservation project by, for example, decreasing the costs of biomass removal and establishment. 3) Areas with paludiculture practice can provide buffer zones around conserved peat areas. 4) Areas with paludiculture in between conservation areas can provide corridors facilitating species migration. 5) Paludiculture may increase the acceptance among affected stakeholders of rewetting once-drained peatland. The support of local communities in rewetting projects is often crucial. The effect of paludiculture on greenhouse gas emissions is complex. On the one hand, a higher water table will reduce the aerobic decomposition of peat and therefore the carbon dioxide emissions.
On the other hand, the increased groundwater table may increase anaerobic decomposition of organic matter, or methanogenesis, and therefore increase the emission of methane (CH4), a short-lived but more potent greenhouse gas than CO2. The emissions from rewetted peatland under paludiculture will also be affected by the land use, both in terms of the type of use (agriculture, forestry, grazing, etc.) and in terms of the species used and the intensity of use. Traditional use of peatland often has less environmental impact than industrial use, but it is not necessarily sustainable in the long run or at a larger scale. Management The most obvious way to maintain the ecosystem services that peatland provides is conservation of intact peatlands. This is even more true given the limited success of restoration projects, especially in tropical peatlands. The conserved peatland still holds value for humans and hence provides a number of ecosystem services, e.g. carbon storage, water storage and discharge. Conserving peatlands also avoids costly investments. Conservation is suggested to be a very cost-effective management practice for peatlands. The most obvious ecosystem services that conservation management provides - i.e. carbon storage and water storage - are not easily priced on the market. Therefore, peatland conservation may need to be subsidised. Rewetting peatland, and thereby restoring the water table level, is the first step in restoration. The intention is to recreate the hydrological function and processes of the peatland. This takes a longer time than may be expected. Studies have found that previously drained, rewetted peatland had hydrological functions - e.g. water storage and discharge capacity - somewhere between those of a drained and an intact peatland six years after the restoration. Undrained peatlands are recommended to be left for conservation and not used for paludiculture. Drained peatlands, on the other hand, can be rewetted and used for paludiculture, often using traditional knowledge together with new science. However, local communities, especially in the tropics, maintain their livelihoods by draining and using peatland in various ways, e.g. agriculture, grazing, and peat mining. Paludiculture can be a way to restore degraded and drained peatlands while maintaining an income for the local community. For example, studies of Sphagnum cultivation on re-wetted peat bogs in Germany show a significant decrease in greenhouse gas emissions compared to a control with irrigated ditches. The economic feasibility of Sphagnum cultivation on peat bogs is, however, still unclear. The basis for paludiculture is, however, very different in the global south, among other things because of higher population and economic pressure on peatland. Locations Tropical Peatlands Tropical peatlands occur extensively in Southeast Asia, mainland East Asia, the Caribbean and Central America, South America and southern Africa. Often located in lowlands, tropical peatlands are uniquely identified by rapid rates of peat soil formation under high precipitation and high temperature regimes. At the same time, the high temperature climate accelerates decomposition rates, causing degraded tropical peatlands to contribute more substantially to global greenhouse gas emissions. Although tropical peatlands cover only 587,000 km2, they store 119.2 gigatonnes of carbon at a density per unit area of 203,066 tonnes C km−2.
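The per-area carbon density quoted above is simply the stated stock divided by the stated area; a minimal arithmetic check in Python, added here only as an illustration and using nothing beyond the figures given in the text:

area_km2 = 587_000          # tropical peatland area stated above
carbon_stock_gt = 119.2     # carbon stock in gigatonnes (1 Gt = 1e9 tonnes)

density_t_c_per_km2 = carbon_stock_gt * 1e9 / area_km2
print(f"{density_t_c_per_km2:,.0f} tonnes C per km2")   # prints 203,066, matching the figure above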
For decades, these large carbon stores have succumbed to draining in order to cater for humanity's socio-economic needs. Between 1990 and 2015, cultivation (including industrial and small-holder agriculture) increased from 11% to 50% of forested peatlands in Peninsular Malaysia, Sumatra, and Borneo. In Malaysia and Indonesia in the last twenty years, peat swamp forests have retreated from covering 77% of peatlands to 36%, endangering many mammals and birds in the region. As of 2010, industrial agriculture covered about 3-3.1 million hectares, with oil palm accounting for 2.15 million hectares of this area. The conversion of natural tropical peatlands into other land uses leads to peat fires and the associated health effects, soil subsidence increasing flood risks, substantial greenhouse gas emissions and loss of biodiversity. Today efforts are being made to restore degraded tropical peatlands through paludiculture. Paludiculture is researched as a sustainable solution to reduce and reverse the degradation of peat swamp forests, and includes traditional local agricultural practices which predate the use of the term. Commercial paludiculture has not been trialled to the extent that it has in northern peatlands. Below are examples of paludiculture practices in tropical peatlands. Congo Basin The Bantu people in Cuvette Central use peatlands for fishing, hunting and gathering, as well as small-scale agriculture near terra firme forests. Indonesia In Indonesia there are three areas that exemplify paludiculture practices: the beje system of the Kutai and Banjar tribes in East Kalimantan, nut plantations in Segedong, West Kalimantan, and sago farming in Meranti Islands district, Riau Province. Sago is cultivated semi-wild near rivers in Riau. Jelutong is grown in monocultures and mixed plantings in Central Kalimantan and in South Sumatra and Jambi, and has been traded since the mid-1800s. This trade has been stifled by 2006 tariffs and sanctions, and growing jelutong in monocultures is considered less efficient than crops like smallholder oil palm. Besides commercial production, peatland communities in Indonesia have developed less impactful practices for extracting resources. For example, Dayak communities only cultivate peatlands shallower than three meters for small-scale farming of sago and jelutong in coastal areas where the sea inputs nutrients. In Sumatra, timber harvested in peat swamp forests is transported with wooden sleighs, rails and small canals in a traditional method called ongka, which is less destructive than commercial logging transport. Peat subsidence and CO2 emissions have still been found in agroforestry small-holdings in re-wetted peatlands in Jambi and Central Kalimantan, even in those with native species. Malaysia In Malaysia, sago plantations are mostly semi-wild, situated near rivers such as in Sarawak, although Malaysia also imports sago from Sumatra to make noodles. Peatlands are also used by the Jakun people in South East Pahang for hunting, gathering and fishing. Peru Mestizo communities in Loreto, Peru use peatlands for hunting and gathering, and for sustainably cultivating native palms, which they replant to restore the resource. They are conscious of the limits to the resource and the need to avoid wasteful felling during harvest. Northern Peatlands The greater part of the world's peatlands occurs in the northern hemisphere, encompassing both boreal and temperate regions.
Global estimates indicate that northern peatlands cover 3,794,000 km2, storing about 450 Gt of C at a density of approximately 118,318 t C km−2. Peatlands form in poorly drained areas under conditions of high precipitation and low temperature. 66% of northern peatlands are found in Eurasia and 34% in North America. About 60% of these peatlands (2,718,000 km2) are perennially frozen, with approximately 2,152,000 km2 occurring in Eurasia and 565,000 km2 in North America. In the European Union (EU-25), peatlands cover approximately 291,000 km2, of which nearly 55% are in Finland and Sweden. Peatlands are more common in Belarus and Ukraine, where they occupy approximately 497,000 km2. Both boreal and temperate peatlands are primarily formed from bryophytes and graminoids, displaying slower rates of accumulation and decomposition compared to the tropics. Northern peatlands have been drained for agriculture, forestry, and peat mining for fuel and horticulture. Historical uses of intact northern peatlands include fishing, hunting, grazing and gathering berries. Paludiculture is not widely established commercially in northern peatlands and most research projects identified below are ongoing. Many have not yet published peer-reviewed results. Most are focused on Sphagnum and reed farming. Rather than excavating decomposed Sphagnum as peat, non-decomposed Sphagnum fibres are harvested in cycles, as a renewable source of biomass. Sphagnum fibres can be used as a growing substrate, packaging to protect plants in transport, or to reintroduce moss when restoring other peatlands. Belarus The University of Greifswald and Belarusian State University are researching reed beds in Naroch National Park as filters to reduce nitrogen and phosphorus run-off from agriculture on degraded peatlands into the Baltic. With research scheduled from January 2019 to September 2021, they aim to investigate the potential for harvesting reeds in the area to incentivise reed bed management. Canada Paludiculture practices include cultivating Sphagnum and cattail. One of the largest research projects was carried out between 2006 and 2012 by researchers from Université Laval in Quebec, trialling Sphagnum farming in eastern Canada. Their bog site, on the Acadian Peninsula, was previously used for block-cutting peat for fuel and so consisted of ditches of Sphagnum and raised areas of other vegetation. They found that Sphagnum farming could be practiced at a large scale in the ditches, although they recommend active irrigation management for more consistent harvests. Finland The Finnish Forest Research Institute and Vapo Oy, Finland's largest peat mining company, manage around 10 hectares for experiments in cultivating Sphagnum for restoration and to produce substrates. Germany The Greifswald Mire Center lists six research projects for cultivating Sphagnum as a raw material for substrates and restoring moors in Germany: Hankhausen, Drenth, Provinzialmoor, Ramsloh, Sedelsberg and Südfeld. The Drenth and Provinzialmoor projects, running from 2015 to 2019, included testing varying irrigation and drainage methods. They found that peat moss can be grown on black peat. In Sedelsberg, researchers found cultivating Sphagnum on black peat to be "expensive and time-consuming". Researchers at the Südfeld project in 2002 observed a small increase in peat moss, and increasing reeds, cattails, and willows.
Researchers are also investigating reed and cattail cultivation. In Mecklenburg-West Pomerania, Greifswald University's ongoing Paludi-Pellets-Project aims to create an efficient biofuel source from sedges, reeds and canary grass in the form of dry pellets. Ireland Renewable energy company Bord na Móna began peat moss trials in 2012 to restore Sphagnum in raised bogs for potential horticulture. Lithuania Lithuania's first peat moss cultivation trial was in 2011, in Aukštumala Moor in Nemunas Delta Regional Park. Researchers from the Vilnius Institute of Botany transplanted sections of Sphagnum from a neighbouring degraded raised bog to the exposed peat surface. They found that 94% of the patches survived and expanded onto the exposed peat. The ongoing "DESIRE" project is investigating peatland restoration and paludiculture in the Neman River catchment area to reduce nutrient run-off into the Baltic. The Netherlands In the ongoing "Omhoog met het Veen - AddMire in the Netherlands" research project, Landscape Noord-Holland aims to investigate the restoration of reed beds and wet heathlands on moors previously converted for agriculture, as well as to raise awareness about peatland degradation. The project is intended to promote paludiculture as an alternative source of agricultural income. Researchers have rewetted 8 hectares, including a water storage buffer area for the peat moss experiments. They are measuring the effects of soil erosion and atmospheric nitrogen on the growth of peat moss and the resulting greenhouse gas emissions and soil chemistry. Russia Russia has the largest area of peatlands of all the northern circumpolar countries; the world's largest peatland is the West Siberian mire massif, and the largest in Europe is the Polistovo-Lovatsky mire in northern Russia. An estimate derived from the digital soil database of Russia at a geographical scale of 1:5 million indicates that the area of soils with a peat depth of more than 30 cm is nearly 2,210,000 km2. Approximately 28% occurs in the zone of seasonally frozen soils, nearly 30% in the zone of sporadic and discontinuous permafrost, and 42% in the zone of continuous permafrost. Peat with a depth of more than 50 cm tends to be dominant in the Northern and Middle Taiga zones, but is uncommon in the Tundra zone. Ongoing restoration does not seem to include paludiculture. Wetlands International, together with the Institute of Forest Science of the Russian Academy of Sciences and the Michael Succow Foundation, implemented a major peatland restoration project in response to the extensive peat fires in the summer of 2010 in the Moscow region. The project was initiated within the framework of co-operation between the Russian Federation and the Federal Republic of Germany to spearhead the ecological rewetting of peatlands and represents one of the largest peatland ecosystem restoration projects in the world. To date, over 35,000 ha of drained peatlands have been restored using ecological methods, with another 10,000 ha currently underway. Examples of potential crops for cultivation on wet and rewetted peatlands The Database of Potential Paludiculture Plants (DPPP) lists more than 1,000 wetland plants, but only a minor fraction is suitable for paludiculture. Examples of potential and tested paludicultures are provided in the table below.
alliance for automotive innovation
The Alliance for Automotive Innovation (AAI) is a Washington, D.C.-based trade association and lobby group whose members include international car and light duty truck manufacturers that build and sell products in the United States. History A predecessor organization, the Automobile Importers of America (AIA), was formed in 1965 to provide member companies information on changes to U.S. state and federal automotive industry regulations. The AIA evolved into the primary advocacy resource for many major vehicle importers in the 1970s, opposing trade restrictions and other protectionist laws and regulations that adversely impacted its members. The 1973 oil crisis led to increased market share for imported vehicles, which were often more fuel-efficient. In response, Ford Motor Company and the United Auto Workers union accused importers of dumping and unfair trading, and took their claims to trade authorities. The AIA, representing importers, had the case dismissed in 1975, arguing that other factors led to the market share changes. In the 1980s, international automobile companies that were traditionally importers began opening new manufacturing plants in the US, leading to an expansion in the organization's focus. In 1990, the AIA changed its name to the Association of International Automobile Manufacturers (AIAM). In 2011, the AIAM changed its name to the Association of Global Automakers. In 2012, there were 12 member companies, including Honda, Toyota, Nissan, Hyundai and Kia. In 2011, member companies employed 81,000 people in the US and had made production facility investments totaling $45 billion. The association stated that its members accounted for 42% of all vehicles sold in the US and 34% of vehicles manufactured in the U.S. from January to September 2011. John Bozzella became the association's president and CEO on April 1, 2014. He was preceded by Michael J. Stanton, who had held the role since 2006. Previously, the association was led by Ralph Millet (1965 to 1977), George Nield (1977 to 1992), Philip A. Hutchinson Jr. (1992 to 2000), and Tim MacCarthy (2000 to 2006). In January 2020, the Association of Global Automakers merged with the Alliance of Automobile Manufacturers to become the Alliance for Automotive Innovation. Members of both groups became members of the Alliance, representing nearly every automotive manufacturer selling cars and light duty trucks in the US, including the American "Big Three" (General Motors, Ford, and Chrysler). The AAI also expanded its membership to include suppliers and startups. Bozzella became the CEO and president of the new organization, with the AAM's David Schwietert serving as chief policy officer. The AAI provides information to policymakers on key issues affecting the automotive sector and supports related state and national legislation. The AAI reported that in 2018, the automotive industry invested US$125 billion in R&D and earned more than 5,000 global patents.
Members The new organization has expanded its membership to include suppliers, startups and other automotive-related associations. As of August 2023, members included Aisin, Aptiv, Autoliv, BASF, BMW, Bosch, Cruise, Denso, Envision AESC, Ferrari, Ford, General Motors, Harman, Honda, Hyundai, Infineon, Isuzu, Jaguar Land Rover, Kia, LG, Luminar, Magna Steyr, Mazda, McLaren, Mercedes-Benz, Mitsubishi, Nissan, Nuro, Panasonic, Porsche, Qualcomm, Samsung SDI, Sirius XM, Stellantis, Subaru, Texas Instruments, Toyota, Volkswagen and Volvo. Activities The AAI represents, advises and advocates for manufacturers of cars and light duty trucks in the United States. It focuses on policy development for emissions reduction, expansion of electric vehicle manufacturing, and investment in safety technology. Market and trade The association helps its members formulate and defend positions against legislation and regulations that make participation in the US market more costly or difficult for automakers. In 1994, the association filed an amicus brief in support of a successful appeals decision against the classification of the Nissan Pathfinder as a cargo vehicle. The resulting ruling opened the doors to Japanese expansion in the US light truck market, in particular the growing SUV segment. During the 1990s, the association opposed a move by the Clinton administration to impose a 100% tariff on 13 luxury vehicle models imported from Japan. In December 2020, the Alliance issued a report with eight policy strategies designed to secure US competitiveness in automotive technologies, including incentives for industry R&D and investments in EV charging infrastructure. Fuel economy and emissions On behalf of its members, the association develops and advances positions on fuel efficiency, greenhouse gas emissions and other regulations and standards. The association opposes allowing individual states to adopt standards more stringent than the federal standards for vehicle emissions and fuel economy. It supported the Obama administration's proposed changes to CAFE standards, which would require automakers to improve car mileage by 5 percent annually until 2025, aiming to reduce greenhouse gas pollution. In 2007, the association brought a lawsuit against the state of California, attempting to establish that the state had no authority to regulate greenhouse gas emissions. The association's argument was that the only method to significantly reduce such emissions, primarily carbon dioxide, is by improving fuel economy, and that under federal energy legislation from 1975, only the Department of Transportation has authority to establish a fuel economy standard. As such, California's standards would be preempted by federal law. California is able to set its own standards for tailpipe emissions via a waiver by the Environmental Protection Agency (EPA) from preemption under the Clean Air Act, as it had begun regulating air pollution before the EPA was established. The association also argued that California and other states being granted authority to regulate greenhouse gas emissions would force manufacturers to adhere to too many different standards, effectively raising the cost of cars and eliminating model choices. In December 2007, a district court judge ruled against the association's suit. The association appealed this decision. In February 2008, the association issued a statement supporting the EPA's decision not to issue the waiver that would be required for California to regulate greenhouse gas emissions from motor vehicles.
The association's president, Michael Stanton, stated that its interest was not in resisting such regulation, but in ensuring that uniform standards are set by the federal government. In 2009, the association stated its support for an agreement reached by the Obama administration to adopt a single national standard for fuel economy, which led to outstanding lawsuits being dropped. Electric vehicles In the third quarter of 2021, the AAI reported that electric vehicles comprised 6% of all light duty car sales, with the highest volume of EV sales ever recorded at 187,000 vehicles. This was an 11% increase in sales, compared with a 1.3% increase for gasoline and diesel cars. The report indicated that California was the US leader in EV sales, with nearly 40% of US purchases, followed by Florida – 6%, Texas – 5% and New York – 4.4%. In August 2021, the Biden administration issued an executive order that called for half of new vehicles sold in 2030 in the US—including battery electric vehicles, plug-in hybrid electric vehicles, and hydrogen fuel cell vehicles—to be zero-emission. The organization indicated its support for the order, but suggested the US government had to invest in expanding charging stations around the country. When the order was issued, the US had 40,000+ active charging stations, a number estimated to be insufficient to power the growing number of EVs. Later that year, the Biden administration put forth a strategy for building the network. In response, the AAI developed a list of "recommended attributes for charging stations" that provided data on charging rates, power grid requirements, charging costs, and station layouts. Fuel formulation In late 2010, the association was part of a coalition of engine manufacturers that filed a suit in the United States Court of Appeals to block the Environmental Protection Agency's approval of an increase of the ethanol content of gasoline from 10 percent to 15 percent. The association expressed concerns that alcohol-blended fuel could cause damage or problems to engines that were not originally built to run on such a fuel. The association noted that the Clean Air Act required producers of any new fuel or fuel additive to show that those fuels would not contribute to the failure of vehicles or engines to meet emissions standards. The association and other plaintiffs requested time to conduct studies assessing the impact of an increase in the ethanol content of gasoline on newer model automobiles and small engines. The association and 30 other organizations—including Friends of the Earth, the National Black Chamber of Commerce, and representatives of the small-engine and snack-food industries—signed a letter in 2008 asking the House Committee on Science, Space, and Technology to support a bill requiring more study and scientific evaluation before so-called E15 fuels are approved for consumer use. Safety and consumer protection Alongside the now-defunct Automobile Manufacturers Association, in the late 1990s, the association advocated for U.S. regulators to begin recognizing some of the ECE regulations, which are used instead of U.S. regulations throughout most of the world. The association advocates a ban on the use of handheld devices to text or talk while driving as "an important part of vehicle crash prevention".
New Car Assessment Program (NCAP) In 2021, the alliance recommended that the NHTSA update its New Car Assessment Program (NCAP), which provides individuals buying new vehicles with safety data focused on new technologies and safety features. The organization specifically asked that NCAP be updated on a consistent, regular basis by offering insight into new safety technologies; that NCAP officials engage with stakeholders on these technologies at least once a year; that the program include a three-year update and review cycle, as used by other safety programs such as Euro NCAP; that the NHTSA review the program regularly to determine its efficacy; and that the program's rules be structured to remove regulatory barriers and red tape that might hinder technology. In March 2022, the NHTSA released its proposal to modernize the NCAP. The proposed new program contains three new driver-assistance technologies—driver blind spot detection and intervention, lane-keeping assistance, and automatic emergency braking systems that protect pedestrians—and also recommends new, improved test procedures and performance criteria for existing driver-assistance technology. The NHTSA also recommended a 10-year "road map for future programs" and is seeking input on a rating system for driver-assistance technologies. Right to Repair Act The association opposes the Motor Vehicle Owners' Right to Repair Act, the name of several related proposed bills in both the United States Congress and state legislatures. According to the association, the bill's supporters want not only repair codes, but also design and manufacturing codes, which it argues is an effort by aftermarket companies to access manufacturers' intellectual property, and is unnecessary as the information that independent shops need for repairs is already available online. Advanced driver assistance systems In 2021, the organization proposed a set of driver monitoring safety principles for vehicles equipped with advanced driver assistance systems (ADAS) to make sure the technology is effective and safe. The alliance recommends that consumers understand that driver assists do not make a vehicle self-driving, and states that driver monitoring should be a standard feature on vehicles with such assists. The NHTSA has not issued specific regulations, no existing manufacturer sells a "self-driving" vehicle, and automakers require all drivers to remain alert and mentally and physically involved in the driving experience. The alliance specifically advocates camera-based driver monitoring systems in vehicles equipped with ADAS to ensure that drivers pay sufficient attention to driving. The driver should be ready to assume full control of the vehicle in the event that the ADAS does not perform properly. Adaptive driving beam headlights In October 2021, the Alliance recommended to the NHTSA that the administration settle on a new rule allowing adaptive driving beam (ADB) headlight technology in US vehicles. In February 2022, NHTSA administrator Steven Cliff signed a rule amending Federal Motor Vehicle Safety Standard 108, a regulation covering lighting, reflective devices and signalling in cars, addressing the 2021 recommendation. Rear seat reminder The AAI notified car buyers that its members would make rear seat reminders standard on US vehicles by the model year 2025.
The feature is important because of drivers leaving unattended children in the back seat of a car, where interior temperatures can rise by about 20 °F in 10 minutes, causing hyperthermia. Because of their smaller body size, children can be affected by heat three to five times faster than adults. Between 1990 and 2018, 889 children died of hyperthermia after being left in cars. Some safety proponents found the alarm system insufficient because it can reset when the car is shut down, and state that the system would be inadequate in the 25% of childhood hyperthermia deaths where the child climbed into a car alone. See also American Automotive Policy Council References External links Official website "Alliance for Automotive Innovation Internal Revenue Service filings". ProPublica Nonprofit Explorer.
fossil fuel power station
A fossil fuel power station is a thermal power station which burns a fossil fuel, such as coal or natural gas, to produce electricity. Fossil fuel power stations have machinery to convert the heat energy of combustion into mechanical energy, which then operates an electrical generator. The prime mover may be a steam turbine, a gas turbine or, in small plants, a reciprocating gas engine. All plants use the energy extracted from the expansion of a hot gas, either steam or combustion gases. Although different energy conversion methods exist, all thermal power station conversion methods have their efficiency limited by the Carnot efficiency and therefore produce waste heat. Fossil fuel power stations provide most of the electrical energy used in the world. Some fossil-fired power stations are designed for continuous operation as baseload power plants, while others are used as peaker plants. However, since the 2010s, plants designed for baseload supply in many countries have been operated as dispatchable generation to balance increasing generation from variable renewable energy. By-products of fossil fuel power plant operation must be considered in their design and operation. Flue gas from combustion of the fossil fuels contains carbon dioxide and water vapor, as well as pollutants such as nitrogen oxides (NOx), sulfur oxides (SOx), and, for coal-fired plants, mercury, traces of other metals, and fly ash. Usually all of the carbon dioxide and some of the other pollutants are discharged to the air. Solid waste ash from coal-fired boilers must also be removed. Fossil fueled power stations are major emitters of carbon dioxide (CO2), a greenhouse gas which is a major contributor to global warming. One study found that the greenhouse gas emissions liability arising from natural disasters in the United States attributable to a single coal-fired power plant could significantly reduce the net income available to shareholders of large companies. However, as of 2015, no such case had resulted in an award of damages in the United States. Per unit of electric energy, brown coal emits nearly twice as much CO2 as natural gas, and black coal emits somewhat less than brown. As of 2019, carbon capture and storage of emissions is not economically viable for fossil fuel power stations, and keeping global warming below 1.5 °C is still possible but only if no more fossil fuel power plants are built and some existing fossil fuel power plants are shut down early, together with other measures such as reforestation. Basic concepts: heat into mechanical energy In a fossil fuel power plant, the chemical energy stored in fossil fuels such as coal, fuel oil, natural gas or oil shale, together with the oxygen of the air, is converted successively into thermal energy, mechanical energy and, finally, electrical energy. Each fossil fuel power plant is a complex, custom-designed system. Multiple generating units may be built at a single site for more efficient use of land, natural resources and labor. Most thermal power stations in the world use fossil fuel, outnumbering nuclear, geothermal, biomass, or concentrated solar power plants. The second law of thermodynamics states that any closed-loop cycle can only convert a fraction of the heat produced during combustion into mechanical work. The rest of the heat, called waste heat, must be released into a cooler environment during the return portion of the cycle.
The fraction of heat released into a cooler medium must be equal to or larger than the ratio of absolute temperatures of the cooling system (environment) and the heat source (combustion furnace). Raising the furnace temperature improves the efficiency but complicates the design, primarily by the selection of alloys used for construction, making the furnace more expensive. The waste heat cannot be converted into mechanical energy without an even cooler cooling system. However, it may be used in cogeneration plants to heat buildings, produce hot water, or to heat materials on an industrial scale, such as in some oil refineries and chemical synthesis plants. Typical thermal efficiency for utility-scale electrical generators is around 37% for coal and oil-fired plants, and 56–60% (LHV) for combined-cycle gas-fired plants. Plants designed to achieve peak efficiency while operating at capacity will be less efficient when operating off-design (i.e. at temperatures that are too low). Practical fossil fuel stations operating as heat engines cannot exceed the Carnot cycle limit for conversion of heat energy into useful work. Fuel cells do not have the same thermodynamic limits as they are not heat engines. The efficiency of a fossil fuel plant may be expressed as its heat rate, in BTU per kilowatt-hour or megajoules per kilowatt-hour.
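The relationship between the Carnot limit, thermal efficiency and heat rate can be made concrete with a short calculation. The Python sketch below is illustrative only: the furnace and cooling-water temperatures are assumptions, while the efficiencies echo the typical figures quoted above.

```python
# Minimal sketch relating the Carnot limit, thermal efficiency and heat rate.
# Temperatures are illustrative assumptions; efficiencies follow the typical
# values quoted in the text (about 37% for coal, 56-60% for combined cycle).

MJ_PER_KWH = 3.6
BTU_PER_KWH = 3412.0


def carnot_limit(t_hot_kelvin, t_cold_kelvin):
    """Maximum fraction of heat convertible to work between two temperatures."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin


def heat_rate(thermal_efficiency):
    """Heat rate: fuel heat supplied per kWh of electricity, in MJ and BTU."""
    return MJ_PER_KWH / thermal_efficiency, BTU_PER_KWH / thermal_efficiency


# Example: steam at roughly 840 K rejecting heat to ~300 K cooling water.
print(carnot_limit(840.0, 300.0))   # ~0.64, the theoretical upper bound
print(heat_rate(0.37))              # ~9.7 MJ/kWh (~9,200 BTU/kWh), typical coal plant
print(heat_rate(0.58))              # ~6.2 MJ/kWh (~5,900 BTU/kWh), combined cycle
```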
Plant types Steam In a steam turbine power plant, fuel is burned in a furnace and the hot gases flow through a boiler. Water is converted to steam in the boiler; additional heating stages may be included to superheat the steam. The hot steam is sent through controlling valves to a turbine. As the steam expands and cools, its energy is transferred to the turbine blades which turn a generator. The spent steam has very low pressure and energy content; this water vapor is fed through a condenser, which removes heat from the steam. The condensed water is then pumped into the boiler to repeat the cycle. Emissions from the boiler include carbon dioxide, oxides of sulfur, and, in the case of coal, fly ash from non-combustible substances in the fuel. Waste heat from the condenser is transferred either to the air, or sometimes to a cooling pond, lake or river. Gas turbine and combined gas/steam One type of fossil fuel power plant uses a gas turbine in conjunction with a heat recovery steam generator (HRSG). It is referred to as a combined cycle power plant because it combines the Brayton cycle of the gas turbine with the Rankine cycle of the HRSG. The turbines are fueled either with natural gas or fuel oil. Reciprocating engines Diesel engine generator sets are often used for prime power in communities not connected to a widespread power grid. Emergency (standby) power systems may use reciprocating internal combustion engines operated by fuel oil or natural gas. Standby generators may serve as emergency power for a factory or data center, or may also be operated in parallel with the local utility system to reduce peak power demand charges from the utility. Diesel engines can produce strong torque at relatively low rotational speeds, which is generally desirable when driving an alternator, but diesel fuel in long-term storage can be subject to problems resulting from water accumulation and chemical decomposition. Rarely used generator sets may correspondingly be installed to run on natural gas or LPG to minimize fuel system maintenance requirements. Spark-ignition internal combustion engines operating on gasoline (petrol), propane, or LPG are commonly used as portable temporary power sources for construction work, emergency power, or recreational uses. Reciprocating external combustion engines such as the Stirling engine can be run on a variety of fossil fuels, as well as renewable fuels or industrial waste heat. Installations of Stirling engines for power production are relatively uncommon. Historically, the first central stations used reciprocating steam engines to drive generators. As the size of the electrical load to be served grew, reciprocating units became too large and cumbersome to install economically. The steam turbine rapidly displaced all reciprocating engines in central station service. Fuels Coal Coal is the most abundant fossil fuel on the planet, is widely used as the energy source in thermal power stations, and is a relatively cheap fuel. Coal is an impure fuel and produces more greenhouse gas and pollution than an equivalent amount of petroleum or natural gas. For instance, the operation of a 1000-MWe coal-fired power plant results in a nuclear radiation dose of 490 person-rem/year, compared to 136 person-rem/year for an equivalent nuclear power plant, including uranium mining, reactor operation and waste disposal. Coal is delivered by highway truck, rail, barge, collier ship or coal slurry pipeline. Generating stations adjacent to a mine may receive coal by conveyor belt or massive diesel-electric-drive trucks. Coal is usually prepared for use by crushing the rough coal to pieces less than 2 inches (5 cm) in size. Natural gas Gas is a very common fuel and has mostly replaced coal in countries where gas was found in the late 20th century or early 21st century, such as the US and UK. Sometimes coal-fired steam plants are refitted to use natural gas to reduce net carbon dioxide emissions. Oil-fuelled plants may be converted to natural gas to lower operating costs. Oil Heavy fuel oil was once a significant source of energy for electric power generation. After the oil price increases of the 1970s, oil was displaced by coal and later natural gas. Distillate oil is still important as the fuel source for diesel engine power plants used especially in isolated communities not interconnected to a grid. Liquid fuels may also be used by gas turbine power plants, especially for peaking or emergency service. Of the three fossil fuel sources, oil has the advantages of easier transportation and handling than solid coal, and easier on-site storage than natural gas. Combined heat and power Combined heat and power (CHP), also known as cogeneration, is the use of a thermal power station to provide both electric power and heat (the latter being used, for example, for district heating purposes). This technology is practiced not only for domestic heating (low temperature) but also for industrial process heat, which is often high-temperature heat. Calculations show that combined heat and power district heating (CHPDH) is the cheapest method of reducing (but not eliminating) carbon emissions, if conventional fossil fuels continue to be burned. Environmental impacts Thermal power plants are one of the main man-made sources of toxic gases and particulate matter. Fossil fuel power plants cause the emission of pollutants such as NOx, SOx, CO2, CO, PM, organic gases and polycyclic aromatic hydrocarbons.
World organizations and international agencies, like the IEA, are concerned about the environmental impact of burning fossil fuels, and coal in particular. The combustion of coal contributes the most to acid rain and air pollution, and has been connected with global warming. Due to the chemical composition of coal there are difficulties in removing impurities from the solid fuel prior to its combustion. Modern day coal power plants pollute less than older designs due to new "scrubber" technologies that filter the exhaust air in smoke stacks. However, emission levels of various pollutants are still on average several times greater than those of natural gas power plants, and the scrubbers transfer the captured pollutants to wastewater, which still requires treatment in order to avoid pollution of receiving water bodies. In these modern designs, pollution from coal-fired power plants comes from the emission of gases such as carbon dioxide, nitrogen oxides, and sulfur dioxide into the air, as well as a significant volume of wastewater which may contain lead, mercury, cadmium and chromium, as well as arsenic, selenium and nitrogen compounds (nitrates and nitrites). Acid rain is caused by the emission of nitrogen oxides and sulfur dioxide. These gases may be only mildly acidic themselves, yet when they react with the atmosphere, they create acidic compounds such as sulfurous acid, nitric acid and sulfuric acid which fall as rain, hence the term acid rain. In Europe and the US, stricter emission laws and the decline in heavy industries have reduced the environmental hazards associated with this problem, leading to lower emissions after their peak in the 1960s. In 2008, the European Environment Agency (EEA) documented fuel-dependent emission factors based on actual emissions from power plants in the European Union. Carbon dioxide Electricity generation using carbon-based fuels is responsible for a large fraction of carbon dioxide (CO2) emissions worldwide and for 34% of U.S. man-made carbon dioxide emissions in 2010. In the U.S., 70% of electricity is generated by combustion of fossil fuels. Coal contains more carbon than oil or natural gas, resulting in greater volumes of carbon dioxide emissions per unit of electricity generated. In 2010, coal contributed about 81% of CO2 emissions from generation and about 45% of the electricity generated in the United States. In 2000, the carbon intensity (CO2 emissions) of U.S. coal thermal combustion was 2249 lb/MWh (1,029 kg/MWh), while the carbon intensity of U.S. oil thermal generation was 1672 lb/MWh (758 kg/MWh or 211 kg/GJ) and the carbon intensity of U.S. natural gas thermal production was 1135 lb/MWh (515 kg/MWh or 143 kg/GJ). The Intergovernmental Panel on Climate Change (IPCC) reports that increased quantities of the greenhouse gas carbon dioxide within the atmosphere will "very likely" lead to higher average temperatures on a global scale (global warming). Concerns regarding the potential for such warming to change the global climate prompted IPCC recommendations calling for large cuts to CO2 emissions worldwide. Emissions can be reduced with higher combustion temperatures, yielding more efficient production of electricity within the cycle. As of 2019 the price of emitting CO2 to the atmosphere is much lower than the cost of adding carbon capture and storage (CCS) to fossil fuel power stations, so owners have not done so.
Estimation of carbon dioxide emissions The CO2 emissions from a fossil fuel power station can be estimated with the following formula:

CO2 emissions = capacity × capacity factor × heat rate × emission intensity × time

where "capacity" is the "nameplate capacity", or the maximum allowed output of the plant; "capacity factor" or "load factor" is a measure of the amount of power that a plant produces compared with the amount it would produce if operated at its rated capacity nonstop; heat rate is thermal energy in divided by electrical energy out; and emission intensity (also called emission factor) is the CO2 emitted per unit of heat generated for a particular fuel. As an example, a new 1500 MW supercritical lignite-fueled power station running on average at half its capacity might have annual CO2 emissions estimated as:

= 1500 MW × 0.5 × (100/40) × 101,000 kg/TJ × 1 year
= 1500 MJ/s × 0.5 × 2.5 × 0.101 kg/MJ × (365 × 24 × 60 × 60) s
= 1.5×10³ × 5×10⁻¹ × 2.5 × 1.01×10⁻¹ × 3.1536×10⁷ kg
= 59.7×10⁸ kg
= 5.97 Mt

Thus the example power station is estimated to emit about 6 megatonnes of carbon dioxide each year. The results of similar estimations are mapped by organisations such as Global Energy Monitor, Carbon Tracker and ElectricityMap. Alternatively it may be possible to measure CO2 emissions (perhaps indirectly via another gas) from satellite observations.
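The same estimate can be expressed as a short program. The Python sketch below is illustrative only: it simply restates the formula above with the figures from the lignite example, and the function and its name are not from any published tool.

```python
# Minimal sketch of the annual CO2 estimate described above.
# Names and structure are illustrative; figures match the worked lignite example.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60


def annual_co2_megatonnes(capacity_mw, capacity_factor, efficiency,
                          emission_intensity_kg_per_mj):
    """Estimate annual CO2 emissions in megatonnes.

    Heat rate = thermal energy in / electrical energy out = 1 / efficiency.
    One megawatt of electrical output corresponds to 1 MJ/s.
    """
    heat_rate = 1.0 / efficiency
    thermal_mj_per_s = capacity_mw * capacity_factor * heat_rate
    kg_per_year = thermal_mj_per_s * emission_intensity_kg_per_mj * SECONDS_PER_YEAR
    return kg_per_year / 1e9  # kg -> Mt


# 1500 MW lignite plant, 50% capacity factor, 40% efficiency,
# emission intensity 101,000 kg CO2/TJ = 0.101 kg CO2/MJ.
print(annual_co2_megatonnes(1500, 0.5, 0.40, 0.101))  # ~5.97 Mt per year
```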
Particulate matter Another problem related to coal combustion is the emission of particulates that have a serious impact on public health. Power plants remove particulate from the flue gas with the use of a baghouse or electrostatic precipitator. Several newer plants that burn coal use a different process, Integrated Gasification Combined Cycle, in which synthesis gas is made from a reaction between coal and water. The synthesis gas is processed to remove most pollutants and then used initially to power gas turbines. Then the hot exhaust gases from the gas turbines are used to generate steam to power a steam turbine. The pollution levels of such plants are drastically lower than those of "classic" coal power plants. Particulate matter from coal-fired plants can be harmful and have negative health impacts. Studies have shown that exposure to particulate matter is related to an increase in respiratory and cardiac mortality. Particulate matter can irritate small airways in the lungs, which can lead to increased problems with asthma, chronic bronchitis, airway obstruction, and gas exchange. There are different types of particulate matter, depending on the chemical composition and size. The dominant form of particulate matter from coal-fired plants is coal fly ash, but secondary sulfate and nitrate also comprise a major portion of the particulate matter from coal-fired plants. Coal fly ash is what remains after the coal has been combusted, so it consists of the incombustible materials that are found in the coal. The size and chemical composition of these particles affect the impacts on human health. Currently coarse (diameter greater than 2.5 μm) and fine (diameter between 0.1 μm and 2.5 μm) particles are regulated, but ultrafine particles (diameter less than 0.1 μm) are currently unregulated, yet they pose many dangers. Much is still unknown as to which kinds of particulate matter pose the most harm, which makes it difficult to come up with adequate legislation for regulating particulate matter. There are several methods of helping to reduce the particulate matter emissions from coal-fired plants. Roughly 80% of the ash falls into an ash hopper, but the rest of the ash is carried into the atmosphere to become coal fly ash. Methods of reducing these emissions of particulate matter include a baghouse, an electrostatic precipitator (ESP) or a cyclone collector. The baghouse has a fine filter that collects the ash particles, electrostatic precipitators use an electric field to trap ash particles on high-voltage plates, and cyclone collectors use centrifugal force to trap particles against the walls. A recent study indicates that sulfur emissions from fossil fueled power stations in China may have caused a 10-year lull in global warming (1998–2008). Wastewater Fossil-fuel power stations, particularly coal-fired plants, are a major source of industrial wastewater. Wastewater streams include flue-gas desulfurization, fly ash, bottom ash and flue gas mercury control. Plants with air pollution controls such as wet scrubbers typically transfer the captured pollutants to the wastewater stream. Ash ponds, a type of surface impoundment, are a widely used treatment technology at coal-fired plants. These ponds use gravity to settle out large particulates (measured as total suspended solids) from power plant wastewater. This technology does not treat dissolved pollutants. Power stations use additional technologies to control pollutants, depending on the particular wastestream in the plant. These include dry ash handling, closed-loop ash recycling, chemical precipitation, biological treatment (such as an activated sludge process), membrane systems, and evaporation-crystallization systems. In 2015 the EPA published a regulation pursuant to the Clean Water Act that requires US power plants to use one or more of these technologies. Technological advancements in ion exchange membranes and electrodialysis systems have enabled high-efficiency treatment of flue-gas desulfurization wastewater to meet the updated EPA discharge limits. Radioactive trace elements Coal is a sedimentary rock formed primarily from accumulated plant matter, and it includes many inorganic minerals and elements which were deposited along with organic material during its formation. Like the rest of the Earth's crust, coal contains low levels of uranium, thorium, and other naturally occurring radioactive isotopes whose release into the environment leads to radioactive contamination. While these substances are present as very small trace impurities, enough coal is burned that significant amounts of these substances are released. A 1,000 MW coal-burning power plant could have an uncontrolled release of as much as 5.2 metric tons per year of uranium (containing 74 pounds (34 kg) of uranium-235) and 12.8 metric tons per year of thorium. In comparison, a 1,000 MW nuclear plant will generate about 30 metric tons of high-level radioactive solid packed waste per year. It is estimated that during 1982, US coal burning released 155 times as much uncontrolled radioactivity into the atmosphere as the Three Mile Island incident. The collective radioactivity resulting from all coal burning worldwide between 1937 and 2040 is estimated to be 2,700,000 curies or 0.101 EBq. During normal operation, the effective dose equivalent from coal plants is 100 times that from nuclear plants. Normal operation, however, is a deceiving baseline for comparison: just the Chernobyl nuclear disaster released, in iodine-131 alone, an estimated 1.76 EBq
of radioactivity, a value one order of magnitude above this estimate for the total emissions from all coal burned within a century, although iodine-131, the major radioactive substance released in accident situations, has a half-life of just 8 days. Water and air contamination by coal ash A study released in August 2010 by the organizations Environmental Integrity Project, the Sierra Club and Earthjustice, which examined state pollution data in the United States, found that coal ash produced by coal-fired power plants and dumped at sites across 21 U.S. states has contaminated ground water with toxic elements. The contaminants include the poisons arsenic and lead. The study concluded that the problem of coal ash-caused water contamination is even more extensive in the United States than had been estimated. The study brought to 137 the number of ground water sites across the United States that are contaminated by power plant-produced coal ash. Arsenic has been shown to cause skin cancer, bladder cancer and lung cancer, and lead damages the nervous system. Coal ash contaminants are also linked to respiratory diseases and other health and developmental problems, and have disrupted local aquatic life. Coal ash also releases a variety of toxic contaminants into nearby air, posing a health threat to those who breathe in fugitive coal dust. Mercury contamination U.S. government scientists tested fish in 291 streams around the country for mercury contamination. They found mercury in every fish tested, according to the study by the U.S. Department of the Interior. They found mercury even in fish of isolated rural waterways. Twenty-five percent of the fish tested had mercury levels above the safety levels determined by the U.S. Environmental Protection Agency (EPA) for people who eat the fish regularly. The largest source of mercury contamination in the United States is coal-fueled power plant emissions. Conversion of fossil fuel power plants Several methods exist to reduce pollution and to reduce or eliminate the carbon emissions of fossil fuel power plants. A frequently used and cost-efficient method is to convert a plant to run on a different fuel. This includes conversions of coal power plants to energy crops/biomass or waste, and conversions of natural gas power plants to biogas or hydrogen. Conversions of coal-powered power plants to waste-fired power plants have an extra benefit in that they can reduce landfilling. In addition, waste-fired power plants can be equipped with material recovery, which is also beneficial to the environment. In some instances, torrefaction of biomass may benefit the power plant if energy crops/biomass is the material the converted fossil fuel power plant will be using. Also, when using energy crops as the fuel and implementing biochar production, the thermal power plant can even become carbon negative rather than just carbon neutral. Improving the energy efficiency of a coal-fired power plant can also reduce emissions.
Besides simply converting to run on a different fuel, some companies also offer the possibility of converting existing fossil-fuel power stations to grid energy storage systems which use electric thermal energy storage (ETES). Coal pollution mitigation Coal pollution mitigation is a process whereby coal is chemically washed of minerals and impurities, sometimes gasified, burned and the resulting flue gases treated with steam, with the purpose of removing sulfur dioxide, and reburned so as to make the carbon dioxide in the flue gas economically recoverable, and storable underground (the latter of which is called "carbon capture and storage"). The coal industry uses the term "clean coal" to describe technologies designed to enhance both the efficiency and the environmental acceptability of coal extraction, preparation and use, but has provided no specific quantitative limits on any emissions, particularly carbon dioxide. Whereas contaminants like sulfur or mercury can be removed from coal, carbon cannot be effectively removed while still leaving a usable fuel, and clean coal plants without carbon sequestration and storage do not significantly reduce carbon dioxide emissions. James Hansen, in an open letter to then U.S. President Barack Obama, advocated a "moratorium and phase-out of coal plants that do not capture and store CO2". In his book Storms of My Grandchildren, similarly, Hansen discusses his Declaration of Stewardship, the first principle of which requires "a moratorium on coal-fired power plants that do not capture and sequester carbon dioxide". Running the power station on hydrogen converted from natural gas Gas-fired power plants can also be modified to run on hydrogen. Hydrogen can initially be created from natural gas through steam reforming, as a step towards a hydrogen economy, thus eventually reducing carbon emissions. Since 2013, the conversion process has been improved by scientists at the Karlsruhe Liquid-metal Laboratory (KALLA), using a process called methane pyrolysis. They succeeded in allowing the soot to be easily removed (soot is a byproduct of the process and damaged the working parts in the past, most notably the nickel-iron-cobalt catalyst). The soot, which contains the carbon, can then be stored underground and is not released into the atmosphere. Phase out of fossil fuel power plants As of 2019 there is still a chance of keeping global warming below 1.5 °C if no more fossil fuel power plants are built and some existing fossil fuel power plants are shut down early, together with other measures such as reforestation. Alternatives to fossil fuel power plants include nuclear power, solar power, geothermal power, wind power, hydropower, biomass power plants and other renewable energies (see non-carbon economy). Most of these are proven technologies on an industrial scale, but others are still in prototype form. Some countries only include the cost of producing the electrical energy, and do not take into account the social cost of carbon or the indirect costs associated with the many pollutants created by burning coal (e.g. increased hospital admissions due to respiratory diseases caused by fine smoke particles). Relative cost by generation source When comparing power plant costs, it is customary to start by calculating the cost of power at the generator terminals by considering several main factors.
External costs, such as connection costs and the effect of each plant on the distribution grid, are considered separately as an additional cost to the calculated power cost at the terminals. Initial factors considered are: capital costs, including waste disposal and decommissioning costs for nuclear energy; operating and maintenance costs; fuel costs for fossil fuel and biomass sources, which may be negative for wastes; likely annual hours run, or load factor, which may be as low as 30% for wind energy or as high as 90% for nuclear energy; and offset sales of heat, for example in combined heat and power district heating (CHP/DH). These costs occur over the 30–50 year life of fossil fuel power plants and are assessed using discounted cash flows. See also References Bibliography Steam: Its Generation and Use (2005). 41st edition, Babcock & Wilcox Company, ISBN 0-9634570-0-4. Steam Plant Operation (2011). 9th edition, Everett B. Woodruff, Herbert B. Lammers, Thomas F. Lammers (coauthors), McGraw-Hill Professional, ISBN 978-0-07-166796-8. Power Generation Handbook: Fundamentals of Low-Emission, High-Efficiency Power Plant Operation (2012). 2nd edition, Philip Kiameh, McGraw-Hill Professional, ISBN 978-0-07-177227-3. Standard Handbook of Powerplant Engineering (1997). 2nd edition, Thomas C. Elliott, Kao Chen, Robert Swanekamp (coauthors), McGraw-Hill Professional, ISBN 0-07-019435-1. External links Conventional coal-fired power plant Large industrial cooling towers Coal Power more deadly than Nuclear "Must We Suffer Smoke", May 1949, Popular Science article on early methods of scrubbing emissions from coal-fired power plants Gas Power Plant News from Power Engineering Magazine Archived 10 July 2015 at the Wayback Machine
energy policy of australia
The energy policy of Australia is subject to the regulatory and fiscal influence of all three levels of government in Australia, although only the State and Federal levels determine policy for primary industries such as coal. Federal policies for energy in Australia continue to support the coal mining and natural gas industries through subsidies for fossil fuel use and production. Australia is the 10th most coal-dependent country in the world. Coal and natural gas, along with oil-based products, are currently the primary sources of Australian energy usage, and the coal industry produces over 30% of Australia's total greenhouse gas emissions. In 2018 Australia was the 8th highest emitter of greenhouse gases per capita in the world. Australia's energy policy features a combination of coal power stations and hydroelectricity plants. The Australian government has decided not to build nuclear power plants, although it is one of the world's largest producers of uranium. Australia has one of the fastest deployment rates of renewable energy worldwide. The country deployed 5.2 GW of solar and wind power in 2018 alone and, at this rate, is on track to reach 50% renewable electricity in 2024 and 100% in 2032. While Australia may be one of the leading major economies in terms of renewable deployments, it is one of the least prepared at a network level to make this transition, being ranked 28th out of 32 advanced economies on the World Economic Forum's 2019 Energy Transition Index. Electricity generation History and governance After World War II, New South Wales and Victoria started connecting the formerly small and self-contained local and regional power grids into statewide grids run centrally by public statutory authorities. Similar developments occurred in other states. Both of the industrially large states cooperated with the Commonwealth in the development and interconnection of the Snowy Mountains Scheme. Rapid economic growth led to large and expanding construction programs of coal-fired power stations, fuelled by black coal in New South Wales and brown coal in Victoria. By the 1980s complex policy questions had emerged involving the massive requirements for investment, land and water. Between 1981 and 1983 a cascade of blackouts and disruptions was triggered in both states, resulting from generator design failures in New South Wales, industrial disputes in Victoria, and drought in the storages of the Snowy system (which provided essential peak power to the State systems). Wide political controversy arose from this and from proposals to the New South Wales Government from the Electricity Commission of New South Wales for urgent approval to build large new stations at Mardi and Olney on the Central Coast, and at other sites later. The Commission of Enquiry into Electricity Generation Planning in New South Wales was established, reporting in mid-1985. This was the first independent enquiry directed from outside the industry into the Australian electricity system. It found, among other matters, that existing power stations were very inefficient, that plans for four new stations, worth then about $12 billion, should be abandoned, and that if the sector were restructured there should be sufficient capacity for normal purposes until the early years of the 21st century. This forecast was achieved.
The Commission also recommended enhanced operational coordination of the adjoining State systems and the interconnection in eastern Australia of regional power markets. The New South Wales Enquiry marked the beginning of the end of the centralised power utility monopolies and established the direction of a new trajectory in Australian energy policy, towards decentralisation, interconnection of States and the use of markets for coordination. Similar enquiries were subsequently established in Victoria (by the Parliament) and elsewhere, and during the 1990s the industry was comprehensively restructured in southeastern Australia and subsequently corporatised. Following the report by the Industry Commission on the sector, moves towards a national market developed. The impetus towards system-wide competition was encouraged by the Hilmer recommendations. The establishment of the National Electricity Market in 1997 was the first major accomplishment of the new Federal/State cooperative arrangements under the Council of Australian Governments. The governance provisions included a National Electricity Code, the establishment in 1996 of a central market manager, the National Electricity Market Management Company (NEMMCO), and a regulator, the National Electricity Code Administrator (NECA). Following several years' experience with the new system, and several controversies, an energy market reform process was conducted by the Ministerial Council on Energy. As a result, beginning in 2004, a broader national arrangement, including electricity, gas and other forms of energy, was established. These arrangements are administered by a national regulator, the Australian Energy Regulator (AER), a market rule-making body, the Australian Energy Market Commission (AEMC), and a market operator, the Australian Energy Market Operator (AEMO). Over the 10 years from 1998–99 to 2008–09, Australia's electricity use increased at an average rate of 2.5% a year. In 2008–09, a total of 261 terawatt-hours (940 PJ) of electricity (including off-grid electricity) was generated in Australia. Between 2009 and 2013 NEM energy usage decreased by 4.3%, or almost 8 terawatt-hours (29 PJ). Coal-fired power The main source of Australia's electricity generation is coal. In 2003, coal-fired plants accounted for 58.4% of total generating capacity, followed by hydropower (19.1%, of which 17% is pumped storage), natural gas (13.5%), liquid/gas fossil fuel-switching plants (5.4%), oil products (2.9%), wind power (0.4%), biomass (0.2%) and solar (0.1%). In 2003, coal-fired power plants generated 77.2% of the country's total electricity production, followed by natural gas (13.8%), hydropower (7.0%), oil (1.0%), biomass (0.6%) and solar and wind combined (0.3%). The total generating capacity from all sources in 2008–09 was approximately 51 gigawatts (68,000,000 hp), with average capacity utilisation of 52%. Coal-fired plants constituted a majority of generating capacity, amounting to 29.4 gigawatts (39,400,000 hp) in 2008–09. In 2008–09, a total of 143.2 terawatt-hours (516 PJ) of electricity was produced from black coal and 56.9 terawatt-hours (205 PJ) from brown coal. Depending on the cost of coal at the power station, the long-run marginal cost of coal-based electricity at power stations in eastern Australia is between 7 and 8 cents per kWh, which is around $79 per MWh. Hydroelectric power Hydroelectricity accounts for 6.5–7% of NEM electricity generation.
The massive Snowy Mountains Scheme is the largest producer of hydro-electricity in eastern Victoria and southern New South Wales. Wind power By 2015, there were 4,187 MW of installed wind power capacity, with another 15,284 MW either planned or under construction. In the year to October 2015, wind power accounted for 4.9% of Australia's total electricity demand and 33.7% of total renewable energy supply. As at October 2015, there were 76 wind farms in Australia, most of which had turbines of 1.5 to 3 MW. Solar power Solar energy is used to heat water, in addition to its role in producing electricity through photovoltaics (PV). In 2014/15, PV accounted for 2.4% of Australia's electrical energy production. The installed PV capacity in Australia increased 10-fold between 2009 and 2011, and quadrupled between 2011 and 2016. Wave power The Australian government says new technology harnessing wave energy could be important for supplying electricity to most of the country's major capital cities. The Perth Wave Energy Project near Fremantle in Western Australia operates through several submerged buoys, creating energy as they move with passing waves. The Australian government has provided more than US$600,000 in research funding for the technology developed by Carnegie, a Perth company. Nuclear power Jervis Bay Nuclear Power Plant was a proposed nuclear power reactor in the Jervis Bay Territory on the south coast of New South Wales. It would have been Australia's first nuclear power plant, and was the only proposal to have received serious consideration as of 2005. Some environmental studies and site works were completed, and two rounds of tenders were called and evaluated, but the Australian government decided not to proceed with the project. Queensland introduced legislation to ban nuclear power development on 20 February 2007. Tasmania has also banned nuclear power development. Both laws were enacted in response to a pro-nuclear position taken by John Howard in 2006. John Howard went to the November 2007 election with a pro-nuclear power platform, but his government was soundly defeated by Labor, which is opposed to nuclear power for Australia. Geothermal There are vast deep-seated granite systems, mainly in central Australia, that have high temperatures at depth, and these are being drilled by 19 companies across Australia in 141 areas. They are spending A$654 million on exploration programs. South Australia has been described as "Australia's hot rock haven", and this emissions-free and renewable energy form could provide an estimated 6.8% of Australia's baseload power needs by 2030. According to an estimate by the Centre for International Economics, Australia has enough geothermal energy to contribute electricity for 450 years. The 2008 federal budget allocated $50 million through the Renewable Energy Fund to assist with 'proof-of-concept' projects in known geothermal areas. Biomass Biomass power plants use crops and other vegetative by-products to produce power in much the same way as coal-fired power plants. Another use of biomass is the extraction of ethanol from sugar mill by-products. The GGAP subsidies for biomass include ethanol extraction with funds of $7.4M and petrol/ethanol fuel with funds of $8.8 million. The total $16.2M subsidy is considered a renewable energy source subsidy. Biodiesel Biodiesel is an alternative to fossil fuel diesel that can be used in cars and other internal combustion engine vehicles.
It is produced from vegetable or animal fats and is the only other type of fuel that can run in current unmodified vehicle engines. Subsidies given to ethanol oils totaled $15 million in 2003–2004, $44 million in 2004–2005, $76 million in 2005–2006 and $99 million in 2006–2007. The cost of establishing these subsidies was $1 million in 2005–2006 and $41 million in 2006–2007. However, with the introduction of the Fuel Tax Bill, grants and subsidies for using biodiesel have been cut, leaving the public to continue using diesel instead. The grants were cut by up to 50% over 2010–2014. Previously the grant given to users of ethanol-based biofuels was $0.38 per litre, which was reduced to $0.19 in 2010–2014. Fossil fuels In 2003, Australian total primary energy supply (TPES) was 112.6 million tonnes of oil equivalent (Mtoe) and total final consumption (TFC) of energy was 72.3 Mtoe. Coal Australia had a fixed carbon price of A$23 ($23.78) a tonne on the top 500 polluters from July 2012 to July 2014. Australia is the fourth-largest coal producing country in the world. Newcastle is the largest coal export port in the world. In 2005, Australia mined 301 million tonnes of hard coal (corresponding to at least 692.3 million tonnes of CO2 emitted) and 71 million tonnes of brown coal (corresponding to at least 78.1 million tonnes of CO2). Coal is mined in every state of Australia. It provides about 85% of Australia's electricity production and is Australia's largest export commodity. 75% of the coal mined in Australia is exported, mostly to eastern Asia. In 2005, Australia was the largest coal exporter in the world, with 231 million tonnes of hard coal. Australian black coal exports are expected by some to increase by 2.6% per year to reach 438 million tonnes by 2029–30, but the possible introduction of emissions trading schemes in customer countries, as provided for under the Kyoto Protocol, may affect these expectations in the medium term. Coal mining in Australia has become more controversial because of the strong link between burning coal, including exported coal, and climate change, global warming and sea level rise, and the effects of global warming on Australia. Coal mining in Australia will as a result have direct impacts on agriculture in Australia, health and the natural environment, including the Great Barrier Reef. The IPCC AR4 Working Group III Report "Mitigation of Climate Change" states that under Scenario A (stabilisation at 450 ppm) Annex 1 countries (including Australia) will need to reduce greenhouse gas emissions by 25% to 40% by 2020 and 80% to 95% by 2050. Many environmental groups around the world, including those represented in Australia, are taking direct action to press for a dramatic reduction in the use of coal, as carbon capture and storage is not expected to be ready before 2020, if it ever becomes commercially viable. Natural gas In 2002, the Howard government announced the finalisation of negotiations for a $25 billion contract with China for LNG. The contract was to supply 3 million tonnes of LNG a year from the North West Shelf Venture off Western Australia, and was worth between $700 million and $1 billion a year for 25 years. The members of the consortium which operates the North West Shelf Venture are Woodside Petroleum, BHP, BP, Chevron, Shell and Japan Australia LNG.
The price was guaranteed not to increase until 2031, and by 2015 China was paying "one-third the price for Australian gas that Australian consumers themselves had to pay." In 2007, there was another LNG deal with China worth $35 billion. The agreement was for the potential sale of 2 to 3 million tonnes of LNG a year for 15 to 20 years from the Browse LNG project, off Western Australia, of which Woodside is the operator. The agreement was expected to bring in total revenues of $35 billion to $45 billion. Succeeding governments oversaw other contracts with China, Japan and South Korea, but none have required exporters to set aside supplies to meet Australia's needs. The price of LNG has historically been linked to oil prices, but the true price, costs and supply levels are presently too difficult to determine. Santos GLNG Operations, Shell and Origin Energy are major gas producers in Australia. Australia Pacific LNG (APLNG), led by Origin Energy, is the largest producer of natural gas in eastern Australia and a major exporter of liquefied natural gas to Asia. Santos is Australia's second-largest independent oil and gas producer. According to the Australian Competition & Consumer Commission (ACCC), the demand for gas in the domestic east coast market is about 700 petajoules a year. Australia is expected to become the world's biggest LNG exporter by 2019, hurting supplies in the domestic market and driving up gas and power prices. In 2017 the Australian government received a report from the Australian Energy Market Operator and one from the ACCC showing expected gas shortages in the east coast domestic market over the next two years. The expected gas shortfall was 54 petajoules in 2018 and 48 petajoules in 2019. The federal government considered imposing export controls on gas to ensure adequate domestic supplies. The companies agreed to make sufficient supplies available to the domestic market until the end of 2019. On 7 September 2017, Santos pledged to divert 30 petajoules of gas slated for export from its Queensland-based Gladstone LNG plant into Australia's east coast market in 2018 and 2019. On 26 October 2017, APLNG agreed to increase gas supplied to Origin Energy by 41 petajoules over 14 months, increasing APLNG's total commitment to 186 PJ for 2018, representing almost 30% of the Australian east coast domestic gas market. The price at which these additional supplies are to be made available has not been disclosed. On 24 August 2017, Orica chief executive Alberto Calderon described gas prices in Australia as ridiculous, saying that prices in Australia were more than double what was being paid in China or Japan, and adding that Australian producers could buy gas overseas (at much lower world prices) to free up domestic gas to sell at the same profit margin. Transport subsidies Petrol In the transport sector, fuel subsidies reduce petrol prices by $0.38/L. This is very significant, given current petrol prices in Australia of around $1.30/L. These relatively low petrol prices contribute to Australia's petroleum consumption of 28.9 GL every year. According to Greenpeace, removal of this subsidy would make petrol prices rise to around $1.70/L and thus could make certain alternative fuels competitive with petroleum on cost. The 32% price increase associated with subsidy removal would be expected to correspond to an 18% reduction in petrol demand and a greenhouse gas emission reduction of 12.5 Mt CO2-e.
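As a rough cross-check, the arithmetic behind these figures can be reproduced from the numbers quoted above. The Python sketch below is illustrative only; it also derives the price elasticity of demand implied by Greenpeace's estimate, which is not stated in the source.

```python
# Rough arithmetic cross-check of the quoted petrol-subsidy figures (illustrative only).
# All inputs are the numbers quoted in the text above.

subsidised_price = 1.30   # $/L, current petrol price quoted above
subsidy = 0.38            # $/L fuel subsidy quoted above

unsubsidised_price = subsidised_price + subsidy   # ~$1.68/L, quoted as around $1.70/L
price_increase = subsidy / subsidised_price       # ~0.29; the text quotes roughly 32%,
                                                  # presumably from slightly different base figures

demand_reduction = 0.18                           # 18% fall in petrol demand (quoted)
implied_elasticity = -demand_reduction / price_increase   # about -0.6

annual_consumption_gl = 28.9                      # GL of petrol per year (quoted)
avoided_consumption_gl = annual_consumption_gl * demand_reduction   # ~5.2 GL/year

print(unsubsidised_price, price_increase, implied_elasticity, avoided_consumption_gl)
```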
The Petroleum Resource Rent Tax keeps oil prices low and encourages investment in the 'finite' supplies of oil while alternatives are still being considered. Diesel The subsidies for the oil–diesel fuel rebate program are worth about $2 billion, which is much more than the grants devoted to renewable energy. Whilst renewable energy is out of scope at this stage, an alternative diesel–renewable hybrid system has been recommended. If the subsidies for diesel were combined with the renewable subsidies, remote communities could adopt hybrid electric generation systems. The off-road component of the Energy Grants Credit Scheme (EGCS) is a rebate program for diesel and diesel-like fuels. Federal Government Australia introduced a national energy rating label in 1992. The system allows consumers to compare the energy efficiency of similar appliances. Institutions The responsible governmental agencies for energy policy are the Council of Australian Governments (COAG), the Ministerial Council on Energy (MCE), the Ministerial Council on Mineral and Petroleum Resources (MCMPR), the Commonwealth Department of Resources, Energy and Tourism (DRET), the Department of Environment and Heritage (DEH), the Australian Greenhouse Office (AGO), the Department of Transport and Regional Services, the Australian Competition and Consumer Commission (ACCC), the Australian Energy Market Commission, the Australian Energy Regulator and the Australian Energy Market Operator. Energy strategy In the 2004 White Paper Securing Australia's Energy Future, several initiatives were announced to achieve the Australian Government's energy objectives. These include: a complete overhaul of the fuel excise system to remove A$1.5 billion in excise liability from businesses and households in the period to 2012–13 the establishment of a A$500 million fund to leverage more than A$1 billion in private investment to develop and demonstrate low-emission technologies a strong emphasis on the urgency and importance of continued energy market reform the provision of A$75 million for Solar Cities trials in urban areas to demonstrate a new energy scenario, bringing together the benefits of solar energy, energy efficiency and vibrant energy markets the provision of A$134 million to remove impediments to the commercial development of renewable technologies incentives for petroleum exploration in frontier offshore areas as announced in the 2004–05 budget new requirements for businesses to manage their emissions wisely a requirement that larger energy users undertake, and report publicly on, regular assessments to identify energy efficiency opportunities. Criticisms On a net basis this is a tax on the top 40% of income earners, which will then be used largely to subsidise the coal industry in attempts to develop carbon capture and storage ("clean coal") in Australia. Deforestation is not included in the scheme while reforestation is, despite the significant timing differences, the uncertainty of reforestation and the effect of leaving old-growth forests vulnerable. It is unclear what level of carbon price would be sufficient to reduce demand for coal-fired power and increase demand for low emissions electricity like wind or solar. There is no commitment to maintain the Mandatory Renewable Energy Target. The scheme fails to address climate change caused by the burning of coal exported from Australia. Energy market reform On 11 December 2003, the Ministerial Council on Energy released a document entitled "Reform of Energy Markets".
The overall purpose of this initiative was the creation of national electricity and natural gas markets rather than the state-based provision of both. As a result, two federal-level institutions, the Australian Energy Market Commission (AEMC) and the Australian Energy Regulator (AER), were created. State policies Queensland Queensland's energy policy is based on the year 2000 document called Queensland Energy Policy: A Cleaner Energy Strategy. The Queensland Government assists energy development through the Queensland Department of Energy and is most noted for its contribution to coal mining in Australia. Queensland was referred to by the Morrison Government in 2019 as having "a specific problem," and was provided a $10 million subsidy to assess the feasibility of a range of power minimisation projects. South Australia The South Australian Government has developed an energy policy based on sustainability objectives as well as on South Australia's Strategic Plan. A major priority of South Australia's Strategic Plan is to reduce greenhouse gas emissions in South Australia to achieve the Kyoto target as a first step towards reducing emissions by 60% (to 40% of 1990 levels) by 2050. Measures announced in South Australia include: stabilisation of greenhouse pollution by 2020; legislated cuts of 60% in greenhouse pollution by 2050; a legislated renewable energy target of 15% by 2014; a solar feed-in tariff; and a ban on electric hot water systems. In 2009 Premier Mike Rann announced plans to increase the State's renewable energy production target to 33% by 2020 (letter from Energy Minister Michael O'Brien, 29 April 2011). Victoria In 2006 Victoria became the first state to have a renewable energy target, of 10% by 2016. In 2010 the target was increased to 25% by 2020. New South Wales New South Wales has a renewable energy target of 20% by 2020. New South Wales had the world's most generous feed-in tariff for solar power from 2010 to 2011, at A$0.60/kWh. This 60c/kWh feed-in tariff was revoked for new customers from 27 October 2010. Those in the scheme received that feed-in tariff until 31 December 2016. New customers enter under a net feed-in tariff, in which the power is used by the consumer (and is, therefore, worth to them whatever they would have paid for that power). Excess power is exported at a lower rate (from 0c to 17c per kWh depending on supplier and state). In 2019 Scott Morrison's federal budget allocated $1.4 billion in equity to the Snowy Hydro Project as well as a complementary $56 million towards the building of the Marinus Link. In addition, the Australian Labor Party set a target for Australia to obtain 50% of its power from renewable energy sources by 2030. Western Australia In some remote areas of WA, the use of fossil fuels is expensive, thus making renewable energy supplies commercially competitive. Western Australia offers renewable energy subsidies including: solar water heaters, a photovoltaic rebate program for installations at households, schools and factories, and a Remote Power Generation Program offering rebates of more than $500,000 for large off-grid renewable systems. Australian Capital Territory The ACT Government's Sustainable Energy Policy, an integrated policy framework for managing the social, economic and environmental challenges faced by the Territory in relation to energy production and use, was released on 28 September 2011. The policy is a continued commitment to maintaining an affordable and reliable electricity and gas supply to Canberra.
The policy also establishes the key objective of achieving a more sustainable energy supply as the Territory moves to carbon neutrality by 2060. Tasmania Tasmania's electricity grid is largely powered by hydroelectric generation. While this does not directly result in greenhouse gas emissions, the environmental effects of dams proposed or built for hydroelectric generation, such as the proposed but never constructed Franklin Dam, have been hugely controversial. Tasmania's isolated grid was connected to the mainland via the Basslink undersea electricity transmission cable in 2005. Tasmania is also connected to the eastern Australian gas network via the Tasmanian Gas Pipeline, commissioned in 2005. Wood heating is also heavily used in Tasmania. Other states Tasmania has a concession rebate and a life support discount. The Northern Territory has similar programs. Renewable energy targets In 2001, the federal government introduced a Mandatory Renewable Energy Target (MRET) of 9,500 GWh of new generation, with the scheme running until at least 2020. This represents an increase in new renewable electricity generation of about 4% of Australia's total electricity generation and a doubling of renewable generation from 1997 levels. Australia's renewable energy target does not cover heating or transport energy, unlike Europe's or China's; Australia's target is therefore equivalent to approximately 5% of all energy coming from renewable sources. The Commonwealth and the states agreed in December 2007, at a Council of Australian Governments (COAG) meeting, to work together from 2008 to combine the Commonwealth scheme with the disparate state schemes into a single national scheme. The initial report on progress and an implementation plan was considered at a March 2008 COAG meeting. In May 2008, the Productivity Commission, the government's independent research and advisory body on a range of economic, social and environmental issues, claimed the MRET would drive up energy prices and would do nothing to cut greenhouse gas emissions. The Productivity Commission submission to the climate change review stated that energy generators have warned that big coal-fired power stations are at risk of "crashing out of the system", leaving huge supply gaps and price spikes if the transition is not carefully managed. This forecast has been described as a joke because up to A$20 billion in compensation is proposed to be paid under the Carbon Pollution Reduction Scheme. In addition, in Victoria, where the highest emitting power stations are located, the state government has emergency powers enabling it to take over and run the generating assets. The final design was presented for consideration at the September 2008 COAG meeting. On 20 August 2009, the Expanded Renewable Energy Target increased the 2020 MRET from 9,500 to 45,000 gigawatt-hours and extended the scheme until 2030. This was intended to ensure that renewable energy reaches a 20% share of the electricity supply in Australia by 2020. After 2020, the proposed Emissions Trading Scheme and improved efficiencies from innovation and manufacturing were expected to allow the MRET to be phased out by 2030. The target was criticised as unambitious and ineffective in reducing Australia's fossil fuel dependency, as it only applied to generated electricity, but not to the 77% of energy production exported, nor to energy sources which are not used for electricity generation, such as the oil used in transportation.
Thus 20% renewable energy in electricity generation would represent less than 2% of total energy production in Australia. Computer modelling by the National Generators Forum has signalled that the price on greenhouse emissions will need to rise from $20 a tonne in 2010 to $150 a tonne by 2050 if the federal government is to deliver its promised cuts. Generators of Australia's electricity warned of blackouts and power price spikes if the federal government moved too aggressively to put a price on greenhouse emissions. South Australia achieved its target of 20% renewable supply by 2014 three years ahead of schedule (i.e. in 2011). In 2008 it set a new target of 33% by 2020. New South Wales and Victoria have renewable energy targets of 20% and 25% respectively by 2020. Tasmania's electricity has been essentially 100% renewable for a long time. In 2011 the 'expanded MRET' was split into two schemes: a 41,000 GWh Large-scale Renewable Energy Target (LRET) for utility-scale renewable generators, and an uncapped Small-scale Renewable Energy Scheme for small household and commercial-scale generators. The MRET requires wholesale purchasers of electricity (such as electricity retailers or industrial operations) to purchase renewable energy certificates (RECs), created through the generation of electricity from renewable sources, including wind, hydro, landfill gas and geothermal, as well as solar PV and solar thermal. The objective is to provide a stimulus and additional revenue for these technologies (see the illustrative sketch of this obligation below). Since 1 January 2011, RECs have been split into small-scale technology certificates (STCs) and large-scale generation certificates (LGCs). RECs are still used as a general term covering both STCs and LGCs. In 2014, the Abbott government initiated the Warburton Review and subsequently held negotiations with the Labor Opposition. In June 2015, the 2020 LRET was reduced to 33,000 GWh. This was expected to result in more than 23.5% of Australia's electricity being derived from renewable sources by 2020. The required gigawatt-hours of renewable source electricity from 2017 to 2019 were also adjusted to reflect the new target. Greenhouse gas emissions reduction targets Coal is the most carbon-intensive energy source, releasing the highest levels of carbon dioxide into the atmosphere. In South Australia, legislated cuts of 60% in greenhouse pollution by 2050 and stabilisation by 2020 were announced. Victoria announced legislated cuts in greenhouse pollution of 60% by 2050 based on 2000 levels. New South Wales announced legislated cuts in greenhouse pollution of 60% by 2050 and a stabilisation target by 2025. Low Emissions Technology Demonstration Fund (LETDF) $500 million – competitive grants $1 billion – private sector funds. The fund has currently financed six projects to help reduce GHG emissions, summarised as follows: 82% of subsidies are concentrated in the Australian Government's 'Clean Coal Technology', with the remaining 18% of funds allocated to the renewable energy 'Project Solar Systems Australia' ($75 million). The LETDF, started in 2007, is a subsidy scheme aimed at fossil fuel energy production. Feed-in tariffs Between 2008 and 2012 most states and territories in Australia implemented various feed-in tariff arrangements to promote uptake of renewable electricity, primarily in the form of rooftop solar PV systems. As system costs fell, uptake accelerated rapidly (in conjunction with the assistance provided through the national-level Small-scale Renewable Energy Scheme (SRES)) and these schemes were progressively wound back.
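The certificate obligation mentioned under Renewable energy targets above can be illustrated with a small calculation. The sketch below assumes the obligation is set each year as a percentage of a retailer's wholesale electricity purchases; that mechanism and both figures are illustrative assumptions, not values taken from the text.

```python
# Hypothetical illustration of a retailer's certificate obligation under the
# MRET/LRET (1 certificate = 1 MWh of eligible renewable generation). The
# "renewable power percentage" mechanism and both numbers below are assumptions
# for illustration only, not values from the article above.

renewable_power_percentage = 0.185    # assumed annual percentage set by the regulator
liable_acquisitions_mwh = 2_000_000   # assumed wholesale electricity purchases, MWh

certificates_to_surrender = round(liable_acquisitions_mwh * renewable_power_percentage)
print(f"Certificates to surrender this year: {certificates_to_surrender:,}")  # 370,000
```

Each surrendered certificate represents revenue flowing to a renewable generator, which is the "stimulus and additional revenue" the scheme is designed to provide.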
Public opinion The Australian results from the 1st Annual World Environment Review, published on 5 June 2007, revealed that: 86.4% are concerned about climate change. 88.5% think their Government should do more to tackle global warming. 79.9% think that Australia is too dependent on fossil fuels. 80.2% think that Australia is too reliant on foreign oil. 89.2% think that a minimum of 25% of electricity should be generated from renewable energy sources. 25.3% think that the Government should do more to expand nuclear power. 61.3% are concerned about nuclear power. 80.3% are concerned about carbon dioxide emissions from developing countries. 68.6% think it appropriate for developed countries to demand restrictions on carbon dioxide emissions from developing countries. See also Asia-Pacific Emissions Trading Forum Australian Renewable Energy Agency Carbon capture and storage in Australia Effects of global warming on Australia Energy diplomacy Energy policy Energy in Victoria References Further reading Australian Government (2007). Australian Government Renewable Energy Policies and Programs. 2 pages. New South Wales Government (2006). NSW Renewable Energy Target: Explanatory Paper. 17 pages. The Natural Edge Project, Griffith University, ANU, CSIRO and NFEE (2008). Energy Transformed: Sustainable Energy Solutions for Climate Change Mitigation. 600+ pages. Greenpeace Australia Pacific (2008). Energy [R]evolution Scenario: Australia. 47 pages. Beyond Zero Emissions (2010). Zero Carbon Australia 2020.
climate debt
Climate debt is the debt said to be owed to developing countries by developed countries for the damage caused by their disproportionately large contributions to climate change. Historical global greenhouse gas emissions, largely by developed countries, pose significant threats to developing countries, which are less able to deal with climate change's negative effects. Therefore, some consider developed countries to owe a debt to developing ones for their disproportionate contributions to climate change. The concept of climate debt is part of the broader concept of ecological debt. It has received increased attention since its submission to the 2009 United Nations Climate Change Conference, where developing countries, led by Bolivia, sought the repayment of climate debt. The main components of climate debt are adaptation debt and emissions debt. Adaptation debt is claimed to be owed by developed countries to developing countries to assist them in their adaptation to climate change. Emissions debt is claimed to be owed by developed countries for their disproportionate amount of greenhouse gas emissions. Since the introduction of the concept of climate debt, there has been an ongoing debate about its proper interpretation. Developed countries and developing countries, as well as independent stakeholders, have taken a variety of stands on the issue. History The concept of climate debt was first introduced in the 1990s by non-governmental organizations. Advocates of climate debt claimed that the Global North owes the Global South a debt for its contributions to climate change. Support from nations soon followed. During the Group of 77 South Summit in Havana in 2000, developing countries advocated the recognition of the climate debt owed by the Global North as the basis of solutions to climate issues. However, the concept of climate debt was not explicitly defined at the UNFCCC. At the 2009 United Nations Climate Change Conference, countries including Bolivia, Venezuela, Sudan, and Tuvalu refused to adopt the Copenhagen Accord, stating that industrialized countries did not want to take responsibility for climate change. At the conference, Bolivia, Cuba, Dominica, Honduras, Nicaragua, and Venezuela made a proposal that evaluated developed countries' historical climate debt to developing countries. The proposal analyzed the cause of climate change and explained adaptation debt and emissions debt. In 2010, Bolivia and other developing countries hosted the World People's Conference on Climate Change and the Rights of Mother Earth and reached the People's Agreement, which states: We, the people attending the World People’s Conference on Climate Change and the Rights of Mother Earth in Cochabamba, Bolivia, demand to the countries that have over-consumed the atmospheric space to acknowledge their historic and current responsibilities for the causes and adverse effects of climate change, and to honor their climate debts to developing countries, to vulnerable communities in their own countries, to our children’s children and to all living beings in our shared home – Mother Earth. The People's Agreement states that climate debt is to be repaid not only through financial compensation but also through restorative justice. It explicitly rejected the Copenhagen Accord. Apart from official agreements between nations, climate debt has also been discussed in the public media, with both supporters and opponents.
Adaptation Debt Adaptation debt is the compensation that developing countries claim they are owed due to the damage they suffer from the environmental effects of the developed world. This is based on the idea that poorer nations face the most damaging consequences of climate change, to which they have contributed little. Scientists and researchers note that, as a result of rising sea levels spurred by emissions from the developed world, people in poorer countries suffer an increasing number of natural disasters and economic damages. This environmental destruction harms the economy and livelihood of the people in poorer nations. Disasters from climate change disproportionately affect poorer and tropical regions and have caused the majority of disasters and trillions of dollars worth of economic losses since around the start of the 21st century. Poorer countries also lack the necessary infrastructure, development, and capital to be able to bounce back from a disaster, forcing them to borrow money at higher interest rates to aid their recovery from the destruction. This in turn worsens the opportunities, development, and quality of life of those living in poorer regions. Adaptation debt aims to have rich countries adopt the responsibility of helping developing nations that have suffered the negative environmental effects of their industrialization and carbon emissions. As noted in the UNFCCC, this can be done through providing financial assistance to affected countries and also through spending resources to help poorer countries better cope with natural disasters. Emission Debt Emissions debt is a debt owed by developed countries based upon their majority contribution of greenhouse gases in the atmosphere, despite having relatively lower populations. Because of their contributions, the amount of carbon emissions that the Earth can currently absorb is lower. The environment's capacity to absorb emissions is termed the total carbon space; the emissions debt concept argues that developed countries have overused their fair allocation of this space. As a result, there is not enough carbon space left for poorer countries to release emissions during their industrialization process, placing a burden on their development and survival. Data shows that since around 1750, the United States alone has contributed 25% of all carbon emissions and developed countries in total have contributed 70% of all emissions. It is estimated that the average American may owe up to $12,000 in emissions debt for carbon emissions between 1970 and 2013. To repay the emissions debt, developed countries would need to help developing countries industrialize in ways that reduce the strain on the environment and keep climate change in check. They would also need to lead efforts in reducing global carbon emissions. Emissions debt also calls for a redistribution of the carbon space among developed and developing nations and aims to allocate the carbon space in accordance with the population of each country. In November 2014, the G20 nations vowed support and financial contributions to the Green Climate Fund, which aims to assist developing nations in reducing the emissions of their development and economic processes. It will also help them adapt to the consequences of climate change. The target of this initiative is to contribute $100 billion to the Green Climate Fund every year starting in 2020.
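The "carbon space" argument above boils down to a per-capita accounting exercise: a country's fair share of a global emissions budget is taken as proportional to its population, and its emissions debt (or credit) is the difference between its cumulative emissions and that share. The sketch below illustrates the bookkeeping with made-up numbers; it is not an official methodology and the budget, population shares and emissions figures are purely illustrative.

```python
# Illustrative per-capita "carbon space" bookkeeping (all numbers are made up).
#   fair_share      = global_budget * population_share
#   emissions_debt  = cumulative_emissions - fair_share   (positive => debt owed)

global_budget_gt = 3000.0  # hypothetical total carbon space, Gt CO2

countries = {
    # name: (share of world population, cumulative emissions in Gt CO2) - illustrative
    "Developed A": (0.04, 400.0),
    "Developing B": (0.18, 150.0),
}

for name, (pop_share, emitted) in countries.items():
    fair_share = global_budget_gt * pop_share
    debt = emitted - fair_share
    status = "owes" if debt > 0 else "is owed"
    print(f"{name}: fair share {fair_share:.0f} Gt, emitted {emitted:.0f} Gt "
          f"-> {status} {abs(debt):.0f} Gt of carbon space")
```

Under this kind of accounting, a small-population, high-emitting country shows a large positive debt, while a populous, low-emitting country shows a claim on the remaining carbon space, which is the redistribution the emissions debt concept calls for.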
Political Discourse Support for climate debt generally comes from developing countries and environmentalist NGOs, with criticisms of climate debt usually coming from developed nations. Independent analysts hold a variety of views on the matter, both supporting and criticizing the idea. Support Support for the implementation of a climate debt framework is led by developing countries that have felt, and will continue to feel, severe negative impacts from climate change. Other primary supporters outside of the Global South include various environmental NGOs and climate justice movements in the developed world. In a formal presentation of the idea of climate debt at the Copenhagen conference, Bolivia provided evidence that the country has been negatively affected by climate change in the form of threatened water supplies from glacial retreat, drought, floods, and negative economic impacts. This was complemented with data showing that developed countries have contributed far more to climate change than developing countries, with the latter being most at risk of its negative effects. This evidence was used to support the argument that developed countries owe a climate debt to developing countries that must be repaid in the form of reduced emissions as agreed upon in the Copenhagen Accord. Further support was provided with the assertion that developing countries have a right to their share of environmental space that developed countries have encroached upon with their excessive emissions, and that the repayment of the climate debt is a means to achieving this space. The earliest group of nations to propose the ideas that would become the foundation of the climate debt argument was the Alliance of Small Island States. Most of the Least Developed Countries were early to support these ideas as well. Criticism Criticisms of the idea of climate debt are put forward by developed countries and some independent political analysts. Developed nations are generally negatively predisposed to the concept of climate debt because under such a framework they would need to quickly curtail emissions and provide significant financial support to developing countries. Commonly, criticisms attempt to invalidate the idea that a debt is owed from developed countries to developing countries as compensation for historical emissions and ecological damage. Arguments used to support this claim include the following: although countries are responsible for the emissions they have made, they should not bear the guilt or owe debts; the negative effects of carbon emissions were not understood until recently, and therefore any emissions made before this understanding should not be a source of guilt; countries should not bear the guilt for the actions of their ancestors, over which the current generation had no control. Statements that align with these arguments were made by the United States' chief climate negotiator, Todd Stern, at the 2009 Copenhagen conference. One criticism is that the foundational principles of a political climate debt framework are not based on science. Analyst Olivier Godard claims that the idea of a climate debt requires a priori judgments to be made about debt, responsibility, and their place in international relations.
These preemptive judgments, in his view, invalidate the idea because they over-simplify complex ethical, historical, and political realities. Another criticism is that climate debt is based on the egalitarian view that the atmosphere is a global commons, a view that is applied to few other finite resources. This climate-centric view disregards all the credit that should be owed to developed countries for their positive contributions to the world, such as the inventions of governments, philosophies, and technologies that have benefitted the entire world. Many critics have claimed that, although the concept of climate debt may be ethically sound, it may actually undermine political negotiations regarding climate change due to its "adversarial" basis, and that negotiations should instead use a different framework. In response to some of these criticisms, supporters of climate debt claim that critics are few in number, and that the majority of political analysts are in support of enforcing climate debt. See also Paris Agreement Copenhagen Accord Kyoto Protocol
reid gardner generating station
Reid Gardner Generating Station was a 557 megawatt coal-fired plant on 480 acres (190 ha) located near Moapa, Nevada. It was co-owned by NV Energy (69%) and the California Department of Water Resources (31%). The plant consisted of four units. The first three were 100 MW units and were placed into service in 1965, 1968 and 1976. The fourth unit, placed into service in 1983, produced 257 megawatts (345,000 hp). Three units of Reid Gardner were shut down in 2014; the fourth went in March 2017. The demolition of the plant was completed in 2019. Controversy Due to its location adjacent to the Moapa Band of Paiute Indians reservation and one of their communities, the plant had long raised concerns over health effects on nearby residents. As a result of several agreements to improve the air quality around the plant, the upgraded plant was ranked as one of the 10 cleanest coal plants in the US. Concerns have also been expressed over particulates in the air, as the plant can be upwind of the Grand Canyon and Bryce Canyon. Both of these canyons are Class I areas, which place limits on the amount of haze allowed. Greenhouse Gas Emissions Reid Gardner Station was a major emitter of carbon dioxide, the main greenhouse gas contributing to global warming. California's Department of Water Resources planned to sell its stake in the plant and purchase less carbon-intensive electricity as part of its overall plan to reduce emissions mandated by California law (AB32, the Global Warming Solutions Act of 2006): Electricity from the plant produces disproportionally high amounts of GHGs as compared to other SWP electricity generation sources. Emissions from Reid Gardner for electricity delivered to DWR have typically been over 1.5 million mtCO2e [million metric tonnes CO2 equivalents] per year (30%-50% of total DWR operational emissions). Between 1997 and 2007, the average emissions rate from Reid Gardner for electricity supplied to DWR has been 1.116 mtCO2e/MWh. This is more than twice the emissions rate associated with the general pool electricity from the integrated California market. (CA DWR 2012, page 54) Waste The coal ash from the plant is stored on site in a 91-acre (37 ha) landfill. Future Plans In July 2023, NV Energy applied for a permit to build a 220 megawatt battery storage facility on about five acres of the site. A total of 208 lithium iron phosphate battery cells will be set up in isolated metal structures to reduce the chance of fire spreading.
fit for 55
Fit for 55 is a package by the European Union designed to reduce the European Union's greenhouse gas emissions by 55% by 2030. It is part of the European Green Deal, the union's strategy first presented in December 2019. The package was proposed in July 2021 by the European Commission. Under an accelerated legislative process, the plans may become law in 2022. Measures include additional support for clean transport and renewables, and a tariff called the Carbon Border Adjustment Mechanism on emissions for high-carbon imports from countries lacking sufficient greenhouse gas reduction measures of their own. It proposes to extend the European Union Emissions Trading System to transport and heat. Compared to the net-zero scenario from the International Energy Agency, the plan contains more measures to ensure that energy remains affordable. Legislation The legislative process is complex because of the number of democratic institutions involved in the European Union. The Commission sent proposals for the new law to the Council and the European Parliament. The Council started discussions on the legislative proposals at expert level in working parties including representatives of all 27 member states. Based on that exchange, the Permanent Representatives Committee continues discussions, preparing the ground for Council meetings of ministers. The Fit for 55 package proposals are discussed in multiple Council formations, such as environment, energy, transport, and economy and finance. After the ministers of each formation reach joint positions, trilogues (meetings with representatives of the Council, Parliament and Commission) begin. The larger part of the proposals follow the regular legislative process, including trilogues. Aspects emissions trading social climate fund carbon border adjustment mechanism member state emission reduction land use and forestry transport CO2 standards methane reduction alternative fuels green energy energy efficiency sustainable buildings hydrogen energy taxation Process When the bill for carbon market legislation was designed, the conservative group in the European Parliament initially weakened the bill. The amended bill was defeated as the social democrats voted against it. The compromise finally accepted was stronger on CO2 emission reduction than the European Commission's original proposal. Criticism The environmental organization Greenpeace criticized the package as unsuitable for halting global warming and the associated destruction of important life-support systems, because the envisaged target was too low. The organization also criticized the classification of bioenergy as renewable energy and the continued sale of non-emission-free cars until 2035. See also Climate change in Europe European Green Deal
plug-in electric vehicles in the united states
The adoption of plug-in electric vehicles in the United States is supported by the American federal government and several state and local governments. As of December 2021, cumulative sales in the U.S. totaled 2.32 million highway-legal plug-in electric cars since 2010, led by all-electric cars. The American stock represented 20% of the global plug-in car fleet in use by the end of 2019, and the U.S. had the world's third largest stock of plug-in passenger cars after China (47%) and Europe (25%). The U.S. market share of plug-in electric passenger cars increased from 0.14% in 2011 to 0.66% in 2015 and 1.13% in 2017, reached a record 2.1% in 2018, declined slightly to 1.9% in 2019, and then rose to 4.6% by the start of 2022. It increased to over 7% by March 2023. California is the largest regional plug-in car market in the country, with almost 835,000 plug-in electric vehicles sold by the end of 2020. As of December 2020, the Tesla Model 3 all-electric car is the all-time best-selling plug-in electric car with an estimated 395,600 units delivered, followed by the Tesla Model S electric car with about 172,400, and the Chevrolet Volt plug-in hybrid with 157,125 units of both generations. The Model S was the best-selling plug-in car in the U.S. for three consecutive years, from 2015 to 2017, and the Model 3 also topped sales for three years running, from 2018 to 2020. The Energy Improvement and Extension Act of 2008 granted federal tax credits for new qualified plug-in electric vehicles, which are worth between US$2,500 and US$7,500 depending on battery capacity. As of 2014, Washington, D.C., and 37 states had established incentives and tax or fee exemptions for BEVs and PHEVs, or utility-rate breaks, and other non-monetary incentives such as free parking and high-occupancy vehicle lane access. Overview by state Articles about plug-in electric vehicles in individual states: Government support In his 2011 State of the Union address, President Barack Obama set the goal for the U.S. to become the first country to have one million electric vehicles on the road by 2015. This goal was established based on forecasts made by the U.S. Department of Energy (DoE), using the production capacity of PEV models announced to enter the U.S. market through 2015. The DoE estimated a cumulative production of 1,222,200 PEVs by 2015, a figure based on manufacturer announcements and media reports accounting for the production goals of the Fisker Karma, Fisker Nina, Ford Transit Connect, Ford Focus Electric, Chevrolet Volt, Nissan Leaf, Smith Newton, Tesla Roadster, Tesla Model S and Th!nk City. Considering that actual PEV sales were lower than initially expected, as of early 2013 several industry observers had concluded that this goal was unattainable. Obama's goal was achieved only in September 2018. In 2008, San Francisco Mayor Gavin Newsom, San Jose Mayor Chuck Reed and Oakland Mayor Ron Dellums announced a nine-step policy plan for transforming the Bay Area into the "Electric Vehicle (EV) Capital of the U.S.". Other local and state governments have also expressed interest in electric cars. Governor of California Jerry Brown issued an executive order in March 2012 that established the goal of getting 1.5 million zero-emission vehicles (ZEVs) on California roads by 2025. American Recovery and Reinvestment Act President Barack Obama pledged US$2.4 billion in federal grants to support the development of next-generation electric vehicles and batteries. $1.5 billion in grants to U.S.
based manufacturers to produce highly efficient batteries and their components; up to $500 million in grants to U.S. based manufacturers to produce other components needed for electric vehicles, such as electric motors; and up to $400 million to demonstrate and evaluate plug-in hybrids and other electric infrastructure concepts, such as truck stop charging stations, electric rail, and training for technicians to build and repair electric vehicles (green collar jobs). In March 2009, as part of the American Recovery and Reinvestment Act, the U.S. Department of Energy announced the release of two competitive solicitations for up to $2 billion in federal funding for competitively awarded cost-shared agreements for manufacturing of advanced batteries and related drive components, as well as up to $400 million for transportation electrification demonstration and deployment projects. This initiative aimed to help meet President Barack Obama's goal of putting one million plug-in electric vehicles on the road by 2015. Tax credits New plug-in electric vehicles Federal incentives First the Energy Improvement and Extension Act of 2008, and later the American Clean Energy and Security Act of 2009 (ACES), granted tax credits for new qualified plug-in electric drive motor vehicles. The American Recovery and Reinvestment Act of 2009 (ARRA) also authorized federal tax credits for converted plug-ins, though the credit is lower than for new plug-in electric vehicles (PEVs). As defined by the 2009 ACES Act, a PEV is a vehicle which draws propulsion energy from a traction battery with at least 5 kWh of capacity and uses an offboard source of energy to recharge the battery. The tax credit for new plug-in electric vehicles is worth US$2,500 plus US$417 for each kilowatt-hour of battery capacity over 5 kWh, and the portion of the credit determined by battery capacity cannot exceed US$5,000. Therefore, the total amount of the credit, between US$2,500 and US$7,500, varies depending on the capacity of the battery (4 to 16 kWh) used to power the vehicle (see the worked sketch below). The qualified plug-in electric vehicle credit phases out for a plug-in manufacturer over the one-year period beginning with the second calendar quarter after the calendar quarter in which at least 200,000 qualifying plug-in vehicles from that manufacturer have been sold for use in the U.S. Cumulative sales started counting after December 31, 2009. After the cap is reached, qualifying PEVs still earn the full credit for one more quarter; starting with the second quarter after that, plug-in vehicles are eligible for 50% of the credit for six months, then 25% of the credit for another six months, and finally the credit is phased out. Both the Nissan Leaf electric vehicle and the Chevrolet Volt plug-in hybrid, launched in December 2010, are eligible for the maximum $7,500 tax credit. The Toyota Prius Plug-in Hybrid, released in January 2012, is eligible for a US$2,500 tax credit due to its smaller battery capacity of 5.2 kWh. All Tesla cars, the Chevrolet Bolt and the BMW i3 BEV are eligible for the US$7,500 tax credit. A 2016 study conducted by researchers from the University of California, Davis found that the federal tax credit was the reason behind more than 30% of plug-in electric sales. The impact of the federal tax incentive is higher among owners of the Nissan Leaf, with up to 49% of sales attributable to the federal incentive.
The study, based on a stated preference survey of more than 2,882 plug-in vehicle owners in 11 states, also found that the federal tax credit shifts buyers from internal combustion engine vehicles to plug-in vehicles and advances the purchase timing of new vehicles by a year or more. In July 2018, Tesla Inc. was the first plug-in manufacturer to pass 200,000 sales; the full tax credit remained available until the end of 2018, with the phase-out beginning in January 2019. General Motors' combined sales of plug-in electric vehicles passed 200,000 units in November 2018. The full tax credit remained available until the end of March 2019 and thereafter reduced gradually until it was completely phased out on April 1, 2020. In order of cumulative sales, as of November 2018, Nissan had delivered 126,875 units, Ford 111,715, Toyota 93,011 and the BMW Group 79,679 plug-in electric cars. State incentives As of November 2014, 37 states and Washington, D.C. have established incentives and tax or fee exemptions for BEVs and PHEVs, or utility-rate breaks, and other non-monetary incentives such as free parking and high-occupancy vehicle lane access regardless of the number of occupants. In California, for example, the Clean Vehicle Rebate Project (CVRP) was established to promote the production and use of zero-emission vehicles (ZEVs). Eligible vehicles include only new Air Resources Board-certified or approved zero-emission or plug-in hybrid electric vehicles. Among the eligible vehicles are neighborhood electric vehicles, battery electric, plug-in hybrid electric, and fuel cell vehicles including cars, trucks, medium- and heavy-duty commercial vehicles, and zero-emission motorcycles. Vehicles must be purchased or leased on or after March 15, 2010. Rebates, initially of up to US$5,000 per light-duty vehicle and later lowered to up to US$2,500, are available for individuals and business owners who purchase or lease new eligible vehicles. Certain zero-emission commercial vehicles are also eligible for rebates of up to US$20,000. California's zero-emission vehicle (ZEV) regulations are anticipated to result in 1.5 million electric vehicles on the road by 2025 (i.e., about 15% of the state's vehicle sales in 2025); moreover, California's mix of incentives is meant to help the state reach 40% of electric vehicle sales in the entire U.S. Electric vehicle purchases made in the U.S. are eligible for a federal tax credit of $2,500 to $7,500, depending on the make and model of the vehicle. The following table summarizes some of the state incentives: New proposals Several separate initiatives have been pursued unsuccessfully at the federal level since 2011 to transform the tax credit into an instant cash rebate. The objective of these initiatives is to make new qualifying plug-in electric cars more accessible to buyers by making the incentive more effective. The rebate would be available at the point of sale, allowing consumers to avoid a wait of up to a year to apply the tax credit against income tax returns. In March 2014, the Obama administration included a provision in the FY 2015 Budget to increase the maximum tax credit for plug-in electric vehicles and other advanced vehicles from US$7,500 to US$10,000. The new maximum tax credit would not apply to luxury vehicles with a sales price of over US$45,000, such as the Tesla Model S and the Cadillac ELR, which would be capped at US$7,500. In November 2017, House Republicans proposed scrapping the US$7,500 tax credit as part of a sweeping tax overhaul.
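The battery-capacity formula described under Federal incentives above can be written as a short function. The sketch below follows the statutory form of the calculation, which includes an extra US$417 step once the pack reaches 5 kWh in addition to the per-kWh amount summarized above; that statutory detail is what lets a 16 kWh pack such as the Volt's reach the US$7,500 cap. Treat it as an illustration of the pre-phase-out credit, not tax advice.

```python
# Sketch of the pre-phase-out federal plug-in vehicle tax credit as described above.
# The extra US$417 for packs of at least 5 kWh follows the statutory wording and is
# slightly more detailed than the prose summary; illustration only, not tax advice.

def federal_pev_credit(battery_kwh: float) -> int:
    """Return the credit in US dollars for a qualifying plug-in vehicle."""
    if battery_kwh < 4:                 # below the minimum capacity: no credit
        return 0
    base = 2500
    capacity_portion = 0
    if battery_kwh >= 5:
        capacity_portion = 417 + 417 * int(battery_kwh - 5)   # whole kWh above 5
    capacity_portion = min(capacity_portion, 5000)            # cap on the capacity part
    return base + capacity_portion                            # total between 2,500 and 7,500

for label, kwh in [("4 kWh pack", 4), ("16 kWh pack (e.g. Volt)", 16), ("24 kWh pack (e.g. Leaf)", 24)]:
    print(f"{label}: ${federal_pev_credit(kwh):,}")
```

Running the sketch gives $2,500 for a 4 kWh pack and $7,500 for both the 16 kWh and 24 kWh packs, matching the range and the Leaf/Volt eligibility described above.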
Charging equipment Until 2010 there was a federal tax credit equal to 50% of the cost to buy and install a home-based charging station, with a maximum credit of US$2,000 for each station. Businesses qualified for tax credits of up to US$50,000 for larger installations. These credits expired on December 31, 2010, but were extended through 2013 with a reduced tax credit equal to 30%, with a maximum credit of up to US$1,000 for each station for individuals and up to US$30,000 for commercial buyers. In 2016, the Obama administration and several stakeholders announced $4.5 billion in loan guarantees for public charging stations, along with other initiatives. EV Everywhere Challenge On March 7, 2012, President Barack Obama launched the EV Everywhere Challenge as part of the U.S. Department of Energy's Clean Energy Grand Challenges, which seeks to solve some of the biggest energy challenges in the U.S. and make clean energy technologies affordable and accessible to the vast majority of American households and businesses. The EV Everywhere Challenge has the goal of advancing electric vehicle technologies so that, by 2022, the country can produce a five-passenger electric vehicle with a payback time of less than five years and the ability to be recharged quickly enough to provide sufficient range for the typical American driver. In January 2013 the Department of Energy (DoE) published the "EV Everywhere Grand Challenge Blueprint," which set the technical targets of the PEV program in four areas: battery research and development; electric drive system research and development; vehicle lightweighting; and advanced climate control technologies. The DoE set several specific goals, established in consultation with stakeholders. The key goals to be met over the next five years to make plug-in electric vehicles competitive with conventional fossil fuel vehicles are: Cutting battery costs from their current US$500/kWh to US$125/kWh Eliminating almost 30% of vehicle weight through lightweighting Reducing the cost of electric drive systems from US$30/kW to US$8/kW. The DoE's aim is to bring the purchase plus operating (fuel) cost of an all-electric vehicle with a 280 mi (450 km) range level with the costs of an internal combustion engine (ICE) vehicle of similar size. The DoE expects that even before the latter goals are met, the 5-year cost of ownership of most plug-in hybrid electric vehicles and of all-electric vehicles with shorter ranges, such as 100 mi (160 km), will be comparable to the same cost of ICE vehicles of similar size. In order to achieve these goals, the DoE is providing up to US$120 million over the next five years to fund the new Joint Center for Energy Storage Research (JCESR), a research center led by the Argonne National Laboratory in Chicago. An initial progress report for the initiative was released in January 2014. Four results of the first year of the initiative were reported: DOE research and development reduced the cost of electric drive vehicle batteries to US$325/kWh, 50% lower than 2010 costs. In the first year of the Workplace Charging Challenge, more than 50 U.S. employers joined the Challenge and pledged to provide charging access at more than 150 sites. DOE investments in EV Everywhere technology topped US$225 million in 2013, addressing key barriers to achieving the Grand Challenge.
Consumer acceptance of electric vehicles grew: 97,000 plug-in electric vehicles were sold in 2013, nearly doubling 2012 sales. Workplace Charging Challenge In January 2013, during the Washington Auto Show, Secretary of Energy Steven Chu announced an initiative to expand the EV Everywhere program with the "Workplace Charging Challenge." This initiative is a plan to install more electric vehicle charging stations in workplace parking lots. There are 21 founding partners and ambassadors for the program, including Ford, Chrysler, General Motors, Nissan, Tesla Motors, 3M, Google, Verizon, Duke Energy, General Electric, San Diego Gas & Electric, Siemens, Plug In America, and the Rocky Mountain Institute. The initiative's target is to increase the number of U.S. employers offering workplace charging tenfold in the next five years. Initially, the DoE will not provide funding for this initiative. U.S. military The U.S. Army announced in 2009 that it would lease 4,000 Neighborhood Electric Vehicles (NEVs) within three years. The Army planned to use NEVs at its bases for transporting people around the base, as well as for security patrols and maintenance and delivery services. The Army accepted its first six NEVs at Virginia's Fort Myer in March 2009 and would lease a total of 600 NEVs through the rest of the year, followed by the leasing of 1,600 NEVs for each of the following two years. U.S. Air Force officials announced, in August 2011, a plan to establish Los Angeles Air Force Base, California, as the first federal facility to replace 100% of its general purpose fleet with plug-in electric vehicles. As part of the program, all Air Force-owned and -leased general purpose fleet vehicles on the base will be replaced with PEVs. There are approximately 40 eligible vehicles, ranging from passenger sedans to two-ton trucks and shuttle buses. The replacement PEVs include all-electric, plug-in hybrid, and extended-range electric vehicles. Electrification of Los Angeles AFB's general purpose fleet is the first step in a Department of Defense effort to establish strategies for large-scale integration of PEVs. By May 2013, it was announced that, as part of a test program created in January 2013, 500 plug-in electric vehicles with vehicle-to-grid (V2G) technology would be in use at six military bases, purchased using an investment of $20 million. If the program succeeds, there will be 3,000 V2G vehicles at 30 bases. The National Defense Authorization Act passed in December 2022 requires that new non-combat military vehicles be electric by 2035. Safety laws Due to the low noise typical of electric vehicles at low speeds, the National Highway Traffic Safety Administration ruled that, by September 2019, all hybrids and EVs must emit artificial noise when idling, accelerating up to 19 mph (30 km/h), or reversing. U.S. commitments to the 2015 Paris Agreement As a signatory party to the 2015 Paris Climate Agreement, the United States government committed to reduce its greenhouse gas emissions, including those from the transportation sector. Already in 2015, the Federal government had set targets to reduce its own carbon footprint 30% by 2025, and to acquire 20% of all new passenger vehicles as zero-emission (all-electric or fuel cell) or plug-in hybrid vehicles by 2020, and 50% by 2025. These goals are part of the U.S. nationally determined contributions (NDCs) to achieve the worldwide emissions reduction goal set by the Paris Agreement. On June 1, 2017, President Donald Trump announced that the U.S.
would cease all participation in the 2015 Paris Agreement on climate change mitigation. On November 3, 2020, then President-elect Joe Biden announced that his administration would reverse the United States' withdrawal from the Paris Agreement by re-entering the agreement upon his inauguration on January 20, 2021, reiterating the U.S. commitment and moving forward with the proposed Green New Deal legislation to fight global climate change. Biden also criticized Trump's withdrawal of the U.S. from the Paris Agreement on June 1, 2017, calling it a huge mistake. Biden promised to transition to more energy-efficient buildings, increase the generation of renewable energy by gradually moving away from dependence on fracking and fossil fuels as energy sources in the US, transition the entire government fleet to 100% all-electric vehicles by the 2030s, and introduce more electric vehicles in all 50 US states. As of 5 August 2021, after Biden signed an executive order setting a target for 50% of all US car sales to be electric vehicles by 2030, the administration expects 50% of all vehicles sold in the US to be electric by 2030 and expects new sales of fossil fuel vehicles to be banned in the US in the 2035 timeframe. The administration plans to accomplish these goals by further incentivizing electric vehicles, imposing taxes and restrictions on internal-combustion engine vehicles, increasing fuel prices for refilling fossil-fuel vehicles, implementing congestion-charge pricing zones, and imposing tougher and more stringent Corporate Average Fuel Economy standards and US Environmental Protection Agency regulations on automotive emissions, all of which are climate change and green energy provisions included in the Build Back Better Act. In December 2021, the Biden administration issued Executive Order 14057, a nationwide federal government mandate intended to push the transition to electric vehicles by banning new fossil fuel vehicles in all 50 US states, Washington, D.C., and all US territories by 2035. The order bans new sales of fossil-fuel powered government-owned vehicles by 2027, new fossil-fuel buses by 2030, and new privately and commercially owned vehicles by 2035. Operating costs and fuel economy The following table shows the U.S. Environmental Protection Agency (EPA) official ratings for fuel economy (miles per gallon gasoline equivalent) and EPA's estimated out-of-pocket fuel costs for all plug-in electric passenger vehicles rated by the EPA in the United States from 2010 up to December 2016. Air pollution and greenhouse gas emissions Electric cars, as well as plug-in hybrids operating in all-electric mode, emit no harmful tailpipe pollutants from the onboard source of power, such as particulates (soot), volatile organic compounds, hydrocarbons, carbon monoxide, ozone, lead, and various oxides of nitrogen. The clean air benefit is usually local because, depending on the source of the electricity used to recharge the batteries, air pollutant emissions are shifted to the location of the generation plants.
In a similar manner, plug-in electric vehicles operating in all-electric mode do not emit greenhouse gases from the onboard source of power, but from the point of view of a well-to-wheel assessment, the extent of the benefit also depends on the fuel and technology used for electricity generation. From the perspective of a full life-cycle analysis, the electricity used to recharge the batteries must be generated from renewable or clean sources such as wind, solar, hydroelectric, or nuclear power for PEVs to have near-zero well-to-wheel emissions. EPA estimates The following table compares tailpipe and upstream CO2 emissions estimated by the U.S. Environmental Protection Agency for all series production model year 2014 plug-in electric vehicles available in the U.S. market. Total emissions include the emissions associated with the production and distribution of electricity used to charge the vehicle and, for plug-in hybrid electric vehicles, they also include the tailpipe emissions produced by the internal combustion engine. These figures were published by the EPA in October 2014 in its annual report "Light-Duty Automotive Technology, Carbon Dioxide Emissions, and Fuel Economy Trends." All emissions are estimated considering average real-world city and highway operation based on the EPA 5-cycle label methodology, using a weighting of 55% city and 45% highway driving. For purposes of an accurate estimation of emissions, the analysis took into consideration the differences in operation between plug-in hybrids. Some, like the Chevrolet Volt, can operate in all-electric mode without using gasoline, while others, like the Toyota Prius PHV, operate in a blended mode which uses both energy stored in the battery and energy from the gasoline tank to propel the vehicle, but which can still deliver substantial all-electric driving. In addition, since the all-electric range of plug-in hybrids depends on the size of the battery pack, the analysis introduced a utility factor as a projection of the share of miles that will be driven using electricity by an average driver, for both electric-only and blended EV modes. Since all-electric cars do not produce tailpipe emissions, the utility factor applies only to plug-in hybrids. The following table shows the overall fuel economy expressed in terms of miles per gallon gasoline equivalent (mpg-e) and the utility factor for the ten MY2014 plug-in hybrids available in the U.S. market, and EPA's best estimate of the CO2 tailpipe emissions produced by these PHEVs. In order to account for the upstream CO2 emissions associated with the production and distribution of electricity, and since electricity production in the United States varies significantly from region to region, the EPA considered three scenarios/ranges, with the low end of the range corresponding to the California powerplant emissions factor, the middle of the range represented by the national average powerplant emissions factor, and the upper end of the range corresponding to the powerplant emissions factor for the Rocky Mountains. The EPA estimates that the electricity GHG emission factors for various regions of the country vary from 346 g CO2/kWh in California to 986 g CO2/kWh in the Rockies, with a national average of 648 g CO2/kWh. Union of Concerned Scientists 2012 study The Union of Concerned Scientists (UCS) published a study in 2012 that assessed average greenhouse gas emissions in the U.S.
resulting from charging plug-in car batteries from the perspective of the full life-cycle (well-to-wheel analysis) and according to the fuel and technology used to generate electric power by region. The study used the Nissan Leaf all-electric car to establish the analysis baseline, and electric-utility emissions were based on EPA's 2009 estimates. The UCS study expressed the results in terms of miles per gallon instead of the conventional unit of grams of greenhouse gases or carbon dioxide equivalent emissions per year in order to make the results friendlier for consumers. The study found that in areas where electricity is generated from natural gas, nuclear, hydroelectric or renewable sources, the potential of plug-in electric cars to reduce greenhouse emissions is significant. On the other hand, in regions where a high proportion of power is generated from coal, hybrid electric cars produce lower CO2-equivalent emissions than plug-in electric cars, and the most fuel-efficient gasoline-powered subcompact car produces slightly lower emissions than a PEV. In the worst-case scenario, the study estimated that for a region where all energy is generated from coal, a plug-in electric car would emit greenhouse gas emissions equivalent to a gasoline car rated at a combined city/highway driving fuel economy of 30 mpg-US (7.8 L/100 km; 36 mpg-imp). In contrast, in a region that is completely reliant on natural gas, the PEV would be equivalent to a gasoline-powered car rated at 50 mpg-US (4.7 L/100 km; 60 mpg-imp). The study concluded that for 45% of the U.S. population, a plug-in electric car will generate lower CO2-equivalent emissions than a gasoline-powered car capable of a combined 50 mpg-US (4.7 L/100 km; 60 mpg-imp), such as the Toyota Prius and the Prius c. The study also found that for 37% of the population, the electric car emissions will fall in the range of a gasoline-powered car rated at a combined fuel economy of 41 to 50 mpg-US (5.7 to 4.7 L/100 km; 49 to 60 mpg-imp), such as the Honda Civic Hybrid and the Lexus CT200h. Only 18% of the population lives in areas where the power supply is more dependent on carbon-intensive generation, and there the greenhouse gas emissions will be equivalent to a car rated at a combined fuel economy of 31 to 40 mpg-US (7.6 to 5.9 L/100 km; 37 to 48 mpg-imp), such as the Chevrolet Cruze and Ford Focus. The study found that there are no regions in the U.S. where plug-in electric cars will have higher greenhouse gas emissions than the average new compact gasoline engine automobile, and the area with the dirtiest power supply produces CO2 emissions equivalent to a gasoline-powered car rated at 33 mpg-US (7.1 L/100 km). The following table shows a representative sample of cities within each of the three categories of emissions intensity used in the UCS study, showing the corresponding miles per gallon equivalent for each city as compared to the greenhouse gas emissions of a gasoline-powered car: 2014 update In September 2014 the UCS published an updated analysis of its 2012 report. The 2014 analysis found that 60% of Americans, up from 45% in 2009, live in regions where an all-electric car produces fewer CO2-equivalent emissions per mile than the most efficient hybrid. The UCS study found several reasons for the improvement. First, electric utilities have added cleaner sources of electricity to their mix between the two analyses.
The 2014 study used electric-utility emissions based on EPA's 2010 estimates, but since coal use nationwide fell by about 5% from 2010 to 2014, actual emissions performance in 2014 is expected to be better than estimated in the UCS study. Second, electric vehicles have become more efficient, as the average model year 2013 all-electric vehicle used 0.325 kWh/mile, representing a 5% improvement over 2011 models. The Nissan Leaf, used as the reference model for the baseline of the 2012 study, was upgraded in model year 2013 to achieve a rating of 0.30 kWh/mile, a 12% improvement over the 2011 model's rating of 0.34 kWh/mile. Also, some new models are cleaner than the average, such as the BMW i3, which is rated at 0.27 kWh/mile by the EPA. An i3 charged with power from the Midwest grid would be as clean as a gasoline-powered car with about 50 mpg‑US (4.7 L/100 km), up from 39 mpg‑US (6.0 L/100 km) for the average electric car in the 2012 study. In states with a cleaner generation mix, the gains were larger. The average all-electric car in California went up to 95 mpg‑US (2.5 L/100 km) equivalent from 78 mpg‑US (3.0 L/100 km) in the 2012 study. States with dirtier generation that rely heavily on coal still lag, such as Colorado, where the average BEV only achieves the same emissions as a 34 mpg‑US (6.9 L/100 km; 41 mpg‑imp) gasoline-powered car. The author of the 2014 analysis noted that the benefits are not distributed evenly across the U.S. because electric car adoption is concentrated in the states with cleaner power. 2015 study In November 2015 the Union of Concerned Scientists published a new report comparing two battery electric vehicles (BEVs) with similar gasoline vehicles by examining their global warming emissions over their full life cycle in a cradle-to-grave analysis. The two BEVs modeled, midsize and full-size, are based on the two most popular BEV models sold in the United States in 2015, the Nissan Leaf and the Tesla Model S. The study found that all-electric cars representative of those sold today produce, on average, less than half the global warming emissions of comparable gasoline-powered vehicles, even after taking into account the higher emissions associated with BEV manufacturing. Considering the regions where the two most popular electric cars are being sold, excess manufacturing emissions are offset within 6 to 16 months of average driving. The study also concluded that driving an average EV results in lower global warming emissions than driving a gasoline car that gets 50 mpg‑US (4.7 L/100 km) in regions covering two-thirds of the U.S. population, up from 45% in 2009. Based on where EVs are being sold in the United States in 2015, the average EV produces global warming emissions equal to a gasoline vehicle with a 68 mpg‑US (3.5 L/100 km) fuel economy rating. The authors identified two main reasons why EV-related emissions have become even lower in many parts of the country since the first study was conducted in 2012. Electricity generation has been getting cleaner, as coal-fired generation has declined while lower-carbon alternatives have increased. In addition, electric cars are becoming more efficient. For example, the Nissan Leaf and the Chevrolet Volt have undergone improvements to increase their efficiency compared with the original models launched in 2010, and other, even more efficient BEV models, such as the lightweight and efficient BMW i3, have entered the market. 
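To make the comparison concrete, the following is a minimal, illustrative sketch (not taken from the EPA or UCS reports) of how a regional grid emission factor and an electric car's efficiency combine into the emissions-equivalent miles-per-gallon figure used in the studies above. The grid factors are the EPA regional values quoted earlier; the roughly 8,900 g of CO2 per gallon of gasoline is an approximate tailpipe value, and upstream emissions from producing gasoline, electricity transmission losses, and vehicle manufacturing are ignored here, so the outputs only roughly track the published numbers.

```python
# Illustrative sketch: convert an EV's efficiency (kWh/mile) and a grid emission
# factor (g CO2/kWh) into the gasoline fuel economy that would produce the same
# CO2 per mile. All assumptions are stated in the lead-in above.

GASOLINE_G_CO2_PER_GALLON = 8_900  # approximate tailpipe CO2 from burning one gallon

def equivalent_mpg(kwh_per_mile: float, grid_g_co2_per_kwh: float) -> float:
    """Return the mpg a gasoline car would need to match the EV's per-mile CO2."""
    ev_g_co2_per_mile = kwh_per_mile * grid_g_co2_per_kwh
    return GASOLINE_G_CO2_PER_GALLON / ev_g_co2_per_mile

if __name__ == "__main__":
    leaf_kwh_per_mile = 0.30  # 2013 Nissan Leaf rating cited above
    for region, factor in {"California": 346, "U.S. average": 648, "Rockies": 986}.items():
        print(f"{region}: ~{equivalent_mpg(leaf_kwh_per_mile, factor):.0f} mpg-equivalent")
```

With these assumed inputs the sketch yields roughly 86, 46 and 30 mpg-equivalent for the three regions, which is broadly consistent with the ranges reported in the UCS studies discussed above.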
National Bureau of Economic Research One criticism of the UCS analysis, and of several others that have analyzed the benefits of PEVs, is that these analyses used average emissions rates across regions instead of the marginal generation mix at different times of the day. The former approach does not take into account the generation mix within interconnected electricity markets and shifting load profiles throughout the day. An analysis by three economists affiliated with the National Bureau of Economic Research (NBER), published in November 2014, developed a methodology to estimate marginal emissions of electricity demand that vary by location and time of day across the United States. The study used emissions and consumption data for 2007 through 2009, and used the specifications for the Chevrolet Volt (all-electric range of 35 mi (56 km)). The analysis found that marginal emission rates are more than three times as large in the Upper Midwest compared to the Western U.S., and within regions, rates for some hours of the day are more than twice those for others. Applying the results of the marginal analysis to plug-in electric vehicles, the NBER researchers found that the emissions from charging PEVs vary by region and by hour of the day. In some regions, such as the Western U.S. and Texas, CO2 emissions per mile from driving PEVs are less than those from driving a hybrid car. However, in other regions, such as the Upper Midwest, charging during the recommended hours of midnight to 4 a.m. implies that PEVs generate more emissions per mile than the average car currently on the road. The results show a fundamental tension between electricity load management and environmental goals, as the hours when electricity is the least expensive to produce tend to be the hours with the greatest emissions. This occurs because coal-fired units, which have higher emission rates, are most commonly used to meet base-level and off-peak electricity demand, while natural gas units, which have relatively low emissions rates, are often brought online to meet peak demand. This pattern of fuel shifting explains why emission rates tend to be higher at night and lower during periods of peak demand in the morning and evening. Environmental footprint In February 2014, the Automotive Science Group (ASG) published the results of a study conducted to assess the life-cycle of over 1,300 automobiles across nine categories sold in North America. The study found that among advanced automotive technologies, the Nissan Leaf holds the smallest life-cycle environmental footprint of any model year 2014 automobile available in the North American market with minimum four-person occupancy. The study concluded that the increased environmental impacts of manufacturing the battery electric technology are more than offset by increased environmental performance during the vehicle's operational life. For the assessment, the study used the average electricity mix of the U.S. grid in 2014. In the 2014 mid-size cars category, the Leaf also ranked best for all-around, environmental and social performance. The Ford Focus Electric, within the 2014 compact cars category, ranked best for all-around, environmental and social performance. The Tesla Model S ranked best for environmental performance in the 2014 full-size cars category. Charging infrastructure As of February 2020, the United States had 84,866 charging points across the country, up from 19,472 in December 2013. 
California led with 26,219 stations, followed by New York with 4,541. There were 592 CHAdeMO quick charging stations across the country by April 2014. Among the charging networks are Electrify America, launched in May 2019 as part of VW's settlement for the Dieselgate emissions scandal, and the Electric Highway Coalition, announced in March 2021, a group of six major power utilities in the Southeast and Midwest installing EV charging across 16 states, with the first chargers targeted for opening in 2022. Car2Go made San Diego the only North American city with an all-electric carsharing fleet when it launched service in 2011. As of March 2016, the carsharing service had 40,000 members and 400 all-electric Smart EDs in operation. However, due to a lack of sufficient charging infrastructure, Car2Go decided to replace its all-electric fleet with gasoline-powered cars starting on 1 May 2016. When the carsharing service started, Car2Go expected 1,000 charging stations to be deployed around the city, but only 400 were in place by early 2016. As a result, an average of 20% of the carsharing fleet was unavailable at any given time because the cars were either being charged or did not have enough charge to be driven. Also, many of the company's San Diego members said they often worried their Car2Go would run out of charge before they finished their trip. Car2Go merged with ReachNow into Share Now, which closed its North American operations in February 2020. Charging stations by state Plug-in Electric Vehicle Readiness Index Researchers from the Indiana University School of Public and Environmental Affairs developed an index that identifies and ranks municipal plug-in electric vehicle readiness ("PEV readiness"). The evaluation ranked the 25 largest U.S. cities by population, along with five other large cities that have been included in other major PEV studies. The rankings also included the largest cities in states that joined California's zero-emission vehicle goal. A total of 36 major U.S. cities were included in the study. The evaluation found that Portland, Oregon ranks at the top of the list of major American cities that are the most ready to accommodate plug-in electric vehicles. Readiness is the degree to which adoption of electric vehicles is supported, as reflected in the presence of various types of policy instruments, infrastructure development, municipal investments in PEV technology, and participation in relevant stakeholder coalitions. The study also compares cities within states that participate in the Zero Emission Vehicle program with those that do not, with the objective of understanding whether participation in that program has a meaningful impact on PEV readiness. In order to accelerate the adoption of plug-in electric vehicles (PEV), many municipalities, along with their parent states, offer a variety of benefits to owners and operators of PEVs to make PEV adoption easier and more affordable. All six cities at the top of the ranking offer purchase incentives for PEVs and charging equipment. Four of the six offer time-of-use electricity rates, which make overnight charging more affordable. The top-ranking cities also score well in categories such as public charging station density, special parking privileges, access to high occupancy vehicle (HOV) lanes, and streamlined processes for installing charging equipment. Those services and incentives are largely absent from the bottom six cities. The following is the full ranking of the 36 U.S. 
cities in 25 states included in the evaluation of PEV readiness: Issues Fossil-fuel dependency While EVs and other electric vehicles release fewer greenhouse gases than internal combustion engine vehicles, they still depend on fossil fuels for their electricity. This situation will improve as the share of renewables in the energy mix increases. However, in 2021, 80% of energy in the US was generated using fossil fuels, and of the remaining 20%, only about 12 percentage points came from renewable sources. Therefore, compared with an internal combustion vehicle, an electric vehicle charged from this mix reduces fossil-fuel use by only about 20% - a relatively minor improvement. While electric vehicles charged from such a grid may not greatly reduce global emissions, they do help to keep emissions near power stations and away from cities. However, the overall emissions of an average EV may be worse than those of an average petrol car due to the extreme weight of electric vehicles. Weight of EVs EVs tend to be much heavier than internal combustion vehicles due to the large batteries. This means higher emissions as well as increased road-maintenance costs. The heavy weight of electric vehicles also raises safety issues - heavier cars cause more damage to pedestrians, people on bikes, and lighter vehicles. Solving the wrong problem While EVs may directly help decrease carbon emissions, they do not solve any of the other problems with cars. They still create congestion, use up valuable city space, require large parking lots, promote suburban sprawl, and create many other issues within cities. For these other issues, it is important to have a strong public transport system, which can reduce the need for cars in the first place and can lead to a better quality of life for the majority. Markets and sales National market As of December 2021, cumulative sales of highway-legal plug-in electric passenger cars in the U.S. totalled 2,322,291 units since 2010. As of August 2020, the U.S. stock consisted of 1,008,118 electric cars (62.7%) and 600,143 plug-in hybrids (37.3%). As of December 2019, the American stock represented 20% of the global plug-in car fleet in use, down from about 40% in 2014. Sales in the American market are led by California with 668,827 plug-in electric vehicles sold up until 2019. As of December 2014, the United States had the world's largest stock of light-duty plug-in electric vehicles, and led annual plug-in car sales in calendar year 2014. By May 2016, the European stock of light-duty plug-in vehicles had surpassed that of the U.S. By the end of September 2016, the Chinese stock of plug-in passenger cars reached the level of the American plug-in stock, and by November 2016, China's cumulative plug-in passenger vehicle sales had surpassed those of Europe, allowing China to become the market with the world's largest stock of light-duty plug-in electric vehicles. China has also surpassed both the U.S. and Europe in terms of annual sales of light-duty plug-in electric vehicles since 2015. National sales increased from 17,800 units delivered in 2011 to 53,200 during 2012, and reached 97,100 in 2013, up 83% from the previous year. During 2014 plug-in electric car sales totaled 123,347 units, up 27.0% from 2013, and fell to 114,248 units in 2015, down 7.4% from 2014. A total of 157,181 plug-in cars were sold in 2016, up 37.6% from 2015; sales rose to 199,818 in 2017 and reached a record volume of 361,307 units in 2018. 
Sales declined in 2019 to 329,528 units. The market share of plug-in electric passenger cars increased from 0.14% in 2011 to 0.37% in 2012, 0.62% in 2013, and reached 0.75% of new car sales in 2014. As plug-in car sales slowed down during 2015, the segment's market share fell to 0.66% of new car sales, then increased to 0.90% in 2016. The market share passed the 1% mark for the first time in 2017 (1.13%). Then in 2018 the take-rate rose to 2.1%, but declined to 1.9% in 2019. In July 2016, the Volt became the first plug-in vehicle in the American market to achieve the 100,000 unit sales milestone. Leaf sales achieved the 100,000 unit milestone in October 2016, becoming the first all-electric vehicle in the country to pass that mark. The Model S achieved the mark of 100,000 sales in the U.S. in June 2017; having launched in June 2012, the Model S hit this milestone more quickly than both the Volt and the Leaf. Launched in July 2017, the Tesla Model 3 reached the 100,000 unit milestone in November 2018, hitting this milestone more quickly than any previous model sold in the U.S. As of December 2018, the Chevrolet Volt plug-in hybrid was listed as the all-time best-selling plug-in electric car with 152,144 units across both generations. The Model S was the best-selling plug-in car in the U.S. for three consecutive years, from 2015 to 2017, and the Model 3 topped sales in 2018 and 2019. In 2019 the Model 3 surpassed the discontinued Chevrolet Volt to become the all-time best-selling plug-in car in U.S. history, with an estimated 300,471 units delivered since inception, followed by the Tesla Model S all-electric car with about 157,992, and the Chevrolet Volt with 157,054. Sales by powertrain As of December 2014, cumulative sales of plug-in electric vehicles in the U.S. since December 2010 were led by plug-in hybrids, with 150,946 units sold, representing 52.7% of all plug-in car sales, while 135,444 all-electric cars (47.3%) had been delivered to retail customers. During 2015, the all-electric segment grew much faster, with a total of 72,303 all-electric cars sold, up 6.6% year-on-year, while plug-in hybrids were down 22.4% year-on-year, with 42,959 units sold. These results reversed the trend, and as of December 2015, a total of 206,508 all-electric cars and 193,904 plug-in hybrids had been sold since 2010, with all-electrics then representing 51.6% of cumulative sales. The lead of battery electric cars continued in 2016, with 84,246 all-electrics sold, up 18.4% from 2015, representing 53.6% of the segment's 2016 sales, while sales of plug-in hybrids totaled 72,935 units, up 69.1% from 2015. As of August 2016, the distribution of cumulative sales since 2010 between these two technologies was 52.8% all-electrics and 47.2% plug-in hybrids. Sales growth Sales of series production plug-in cars during their first two years in the U.S. market were lower than initial expectations. Cumulative plug-in electric car sales since 2008 reached 250,000 units in August 2014 and 500,000 in August 2016, and the one million milestone was achieved in September 2018. According to the U.S. Department of Energy, combined sales of plug-in hybrids and battery electric cars climbed more rapidly than, and more than doubled, sales of hybrid-electric vehicles over their respective 24-month introductory periods, as shown in the graph at the right. 
A more detailed analysis by the Office of Energy Efficiency and Renewable Energy over the same two-year introductory periods found that, except for the initial months in the market, monthly sales of the Volt and the Leaf were higher than those of the Prius HEV, and the Prius PHEV outsold the regular Prius during its first 8 months in the market. Over the first 24 months from introduction, the Prius HEV achieved monthly sales of over 1,700 in month 18, the Leaf achieved about 1,700 units in month 7, the Prius PHEV achieved nearly 1,900 sales in month 8, and the Volt achieved more than 2,900 sales in month 23. A 2016 analysis by the Consumer Federation of America (CFA) found that, five years after their introduction, plug-in electric cars in the U.S. continued to outsell conventional hybrids. The analysis considered sales between January 2011 and December 2015. An analysis by Scientific American found a similar trend at the international level when considering the global top-selling PEVs over a 36-month introductory period. Monthly sales of the Volt, Prius PHV and Leaf performed better than those of the conventional Prius during their respective introductory periods; the exception was the Mitsubishi i-MiEV, which was outsold by the Prius HEV most of the time over their 36-month introductory periods. Key market features According to Edmunds.com, leasing of plug-in cars instead of purchasing is dominant in the American market, with leasing accounting for 51% of all new all-electric cars and 73% of plug-in hybrids, compared with just 32% of gasoline-powered cars in 2016. As of 2016, the market for used plug-in electric cars is concentrated in California, the state with the biggest pool of used plug-in vehicles, especially all-electrics, followed by Colorado, Florida, Georgia, New York, Oregon and Texas. With the exception of used Teslas, all models depreciate more rapidly than conventionally powered cars and trucks. For all-electric cars, depreciation varies between 60% and 75% over three years. In contrast, most conventionally powered vehicles depreciate between 45% and 50% over the same period. The Tesla Model S behaves more like conventional cars, with three-year depreciation of about 40%. Plug-in hybrids depreciate less than all-electric cars but still faster than conventionally powered cars. Researchers from the University of California, Davis conducted a study to identify the factors influencing the decision to adopt high-end battery electric vehicles (BEV), such as the Tesla Model S, as these vehicles are remarkably different from mainstream BEVs. Based on a questionnaire completed by 539 high-end adopters and in-depth interviews with 33 adopters, the 2016 study found that "environmental, performance, and technological motivations are reasons for adoption; the new technology brings a new segment of buyers into the market; and financial purchase incentives are not important in the consumer’s decision to adopt a high-end BEV." Car dealers' reluctance to sell With the exception of Tesla Motors, almost all new cars in the United States are sold through dealerships, so they play a crucial role in the sales of electric vehicles, and negative attitudes can hinder early adoption of plug-in electric vehicles. Dealers decide which cars they want to stock, and a salesperson can have a big impact on how someone feels about a prospective purchase. 
Salespeople have ample knowledge of internal combustion cars but often do not have time to learn about a technology that represents only a fraction of overall sales. As with any new technology, and particularly in the case of advanced technology vehicles, retailers are central to ensuring that buyers, especially those switching to a new technology, have the information and support they need to gain its full benefits. A 2016 study indicated that 60% of Americans were not aware of electric cars. There are several reasons for the reluctance of some dealers to sell plug-in electric vehicles. PEVs do not offer car dealers the same profits as gasoline-powered cars. Plug-in electric vehicles take more time to sell because of the explaining required, which hurts overall sales volume and salespeople's commissions. Electric vehicles also may require less maintenance, resulting in a loss of service revenue and thus undermining the biggest source of dealer profits, their service departments. According to the National Automobile Dealers Association (NADA), dealers on average make three times as much profit from service as they do from new car sales. However, a NADA spokesman said there was not sufficient data to prove that electric cars would require less maintenance. According to the New York Times, BMW and Nissan are among the companies whose dealers tend to be more enthusiastic and informed, but only about 10% of dealers are knowledgeable about the new technology. A study conducted at the Institute of Transportation Studies (ITS) at the University of California, Davis (UC Davis), published in 2014, found that many car dealers are less than enthusiastic about plug-in vehicles. ITS conducted 43 interviews with six automakers and 20 new car dealers selling plug-in vehicles in California's major metro markets. The study also analyzed national and state-level J.D. Power 2013 Sales Satisfaction Index (SSI) study data on customer satisfaction with new car dealerships and Tesla retail stores. The researchers found that buyers of plug-in electric vehicles were significantly less satisfied and rated the dealer purchase experience much lower than buyers of non-premium conventional cars, while Tesla Motors earned industry-high scores. According to the findings, plug-in buyers expect more from dealers than conventional buyers, including product knowledge and support that extends beyond traditional offerings. In 2014 Consumer Reports published results from a survey conducted with 19 secret shoppers who visited 85 dealerships in four states, making anonymous visits between December 2013 and March 2014. The secret shoppers asked a number of specific questions to test the salespeople's knowledge of electric cars. The consumer magazine decided to conduct the survey after several consumers who wanted to buy a plug-in car reported to the organization that some dealerships were steering them toward gasoline-powered models. The survey found that not all salespeople seemed enthusiastic about making PEV sales; a few outright discouraged it, and one dealer was reluctant even to show a plug-in model despite having one in stock. Many salespeople also seemed not to have a good understanding of electric-car tax breaks and other incentives, or of charging needs and costs. Consumer Reports also found that, when it came to answering basic questions, salespeople at Chevrolet, Ford, and Nissan dealerships tended to be better informed than those at Honda and Toyota. 
The survey found that most of the Toyota dealerships visited recommended against buying a Prius Plug-in and suggested buying a standard Prius hybrid instead. Overall, the secret shoppers reported that only 13 dealers "discouraged sale of EV," with seven of them in New York. However, at 35 of the 85 dealerships visited, the secret shoppers said salespeople recommended buying a gasoline-powered car instead. The ITS-Davis study also found that a small but influential minority of dealers have introduced new approaches to better meet the needs of plug-in customers. Examples include marketing carpool lane stickers, enrolling buyers in charging networks, and preparing incentive paperwork for customers. Some dealers assign seasoned salespeople as plug-in experts, many of whom drive plug-ins themselves to become familiar with the technology and relate the cars' benefits to potential buyers. The study also concluded that carmakers could do much more to support dealers selling PEVs. Regional markets Concentration relative to population As of July 2016, the U.S. average concentration was 1.51 plug-in cars registered per 1,000 people, while California's concentration was 5.83 registrations per 1,000 people. At the time, only Norway exceeded California's per-capita plug-in concentration, by a factor of 3.69. As of December 2017, the average national ownership per capita rose to 2.21 plug-ins per 1,000 people. In 2017 eight states had more than two plug-in vehicles registered per 1,000 people, of which three are located on the West Coast. California had the highest concentration with 8.64 plug-ins per 1,000 people. Hawaii ranked second (5.12), followed by Washington (4.06), Oregon (3.84), Vermont (3.73), Colorado (2.33), Arizona (2.29), and Maryland (2.03). Mississippi (0.20), Arkansas (0.28), West Virginia (0.30), Louisiana (0.31), Wyoming (0.37), and North Dakota (0.39) had the lowest concentration of plug-in cars in 2017. In terms of growth from 2016 to 2017 in plug-in vehicle registrations per capita, five states had growth rates of 50% or higher: Vermont (56.4%), Maryland (54.2%), Massachusetts (52.5%), New Hampshire (50.2%), and Alaska (50.0%). The U.S. average growth rate from 2016 to 2017 was 30.2%. Market share by city and state The following table summarizes the ten states and metropolitan areas leading all-electric car adoption in terms of their market share of new light-vehicle registrations or sales during 2013 and 2014. A total of 52% of American plug-in electric car registrations from January to May 2013 were concentrated in five metropolitan areas: San Francisco (19.5%), Los Angeles (15.4%), Seattle (8.0%), New York (4.6%) and Atlanta (4.4%). From January to July 2013, the three cities with the highest all-electric car registrations were all located in California: Atherton and Los Altos in Silicon Valley, followed by Santa Monica in Los Angeles County. Sales by model As of December 2018, there were 43 highway-legal plug-in cars available in the American market for retail sales, 15 all-electric cars and 28 plug-in hybrids, plus several models of electric motorcycles, utility vans and neighborhood electric vehicles (NEVs). As of November 2018, sales were concentrated in a few models, with the top 10 best-selling plug-in cars accounting for about 84% of total sales during the first eleven months of 2018. Car manufacturers are offering plug-in electric cars in the U.S. 
for retail customers under 21 brands or marques: Audi, BMW, Cadillac, Chevrolet, Chrysler, Fiat, Ford, Honda, Hyundai, Jaguar, Kia, Mercedes-Benz, MINI, Mitsubishi, Nissan, Porsche, Smart, Tesla, Toyota, Volkswagen, and Volvo. As of September 2016, only the Chevrolet Volt, Nissan Leaf, Tesla's Model S and Model X, BMW i3, Mitsubishi i, Porsche Panamera S E-Hybrid, Cadillac ELR, and Ford's C-Max and Fusion Energi plug-in hybrids were available nationwide. Several models, such as the Toyota RAV4, Fiat 500e, Honda Fit EV, and Chevrolet Spark EV, are compliance cars sold in limited markets, mainly California, available in order to raise an automaker's fleet average fuel economy to satisfy regulator requirements. As of November 2018, the top-selling plug-in car manufacturers in the American market were Tesla with about 269,000 units delivered, GM with 203,941, Nissan with 126,875 units, Ford with 111,715, Toyota with 93,011 and the BMW Group with 79,679 plug-in electric cars. Top selling models The Nissan Leaf was the U.S. top-selling plug-in car in 2011 (9,674), and the Chevrolet Volt topped sales in 2012 (23,461). Again in 2013, sales were led by the Chevrolet Volt with 23,094 units, followed by the Nissan Leaf with 22,610 cars, and the Tesla Model S with about 18,000 units. In 2013 the Model S was the top-selling car in the American full-size luxury sedan category, ahead of the Mercedes-Benz S-Class (13,303), the top-selling car in the category in 2012. In 2014 the Leaf took the sales lead, with 30,200 units sold, with the Volt ranking second with 18,805, followed by the Model S with 16,689 units. The Tesla Model S, with 25,202 units delivered, was the top-selling plug-in car in 2015, followed by the Nissan Leaf with 17,269 units, and the Volt with 15,393. Again in 2016, the Model S was the best-selling plug-in car with about 29,156 units delivered, followed by the Volt with 24,739, and the Model X with about 18,028. For the third consecutive year, the Tesla Model S was the top-selling plug-in car with about 26,500 units sold in 2017, followed by the Chevrolet Bolt (23,297) and the Tesla Model X (~21,700). Sales in 2018 were led by the Tesla Model 3 with an estimated 139,782 units delivered, the first time a plug-in car sold more than 100,000 units in a single year. Ranking next were the Toyota Prius Prime (27,595) and the Tesla Model X (~26,100). Sales in 2019 were topped again by the Tesla Model 3 with an estimated 158,925 units delivered, followed by the Toyota Prius Prime (23,630), the Tesla Model X (19,225), the Chevrolet Bolt EV (16,418) and the Tesla Model S (14,100). The following table presents cumulative sales of the all-time top 10 best-selling highway-capable plug-in electric cars launched in the American market since 2008, with sales through December 2020. Neighborhood electric vehicles Low-Speed Vehicles (LSVs) are defined as "four-wheeled motor vehicles whose top speed is 20 to 25 mph (32 to 40 km/h) to be used in residential areas, planned communities, industrial sites, and other areas with low density traffic, and low-speed zones." LSVs, more commonly known as neighborhood electric vehicles (NEVs), were defined in 1998 by the National Highway Traffic Safety Administration's Federal Motor Vehicle Safety Standard No. 500, which required safety features such as windshields and seat belts, but not doors or side walls. Since 1998, Global Electric Motorcars (GEM), the market leader in North America, has sold more than 50,000 GEM battery-electric vehicles worldwide as of October 2015. 
Modern production timeline This is a list of all highway-capable plug-in electric vehicles available for retail customers in the U.S. for sale or leasing since the early 1990s.
1990-2003: Chrysler TEVan (1993-1995); General Motors EV1 (1996-2003); Toyota RAV4 EV (1997-2003); Honda EV Plus (1997-1999); Nissan Altra (1998-2001); Dodge Caravan EPIC (1999 to 2001); Ford TH!NK City (1999-2003)
2008-2021:
2008: Tesla Roadster (production ended in 2011)
2009: Mini E (demonstration program ended in 2011)
2010: Chevrolet Volt (replaced by second generation in 2015); Nissan Leaf (replaced by second generation in 2017); Navistar eStar utility van
2011: Th!nk City (no longer in production); Smart ED 2nd gen (available for leasing only); Wheego Whip LiFe (no longer in production); Fisker Karma (no longer in production); Azure Transit Connect Electric delivery van (no longer in production); Mitsubishi i (Mitsubishi i MiEV in the rest of the world); Smith Newton delivery truck
2012: BMW ActiveE (demonstration program ended in 2014); Coda (no longer in production); Ford Focus Electric (limited production); Toyota Prius PHV (production ended in 2015); Boulder DV-500 delivery truck; Amp Electric Vehicles (SUV and light truck conversions); Tesla Model S; Honda Fit EV (limited production); Toyota RAV4 EV (2nd gen) (limited production); Ford C-Max Energi (production ended in 2018)
2013: Honda Accord Plug-in Hybrid; Ford Fusion Energi; Scion iQ EV (limited production available only for carsharing fleets, not for retail customers); Smart electric drive 3rd gen (available with battery leasing option); Chevrolet Spark EV (limited production); Fiat 500e (limited production); Porsche Panamera S E-Hybrid; Cadillac ELR (limited production)
2014: BMW i3; Porsche 918 Spyder (limited edition); Mercedes-Benz B-Class Electric Drive; BMW i8 (production ended in 2020); Volkswagen e-Golf; Kia Soul EV; Porsche Cayenne S E-Hybrid
2015: Mercedes-Benz S 500 Plug-in Hybrid; Volvo XC 90 PHEV; Tesla Model X; Bolloré Bluecar (available only for the BlueIndy carsharing fleet - ended in 2020); Chevrolet Volt (second generation) (production ended in 2019); BMW X5 xDrive40e; Hyundai Sonata PHEV; Audi A3 Sportback e-tron; Audi Q7 e-tron Quattro
2016: BMW 330e iPerformance; Mercedes-Benz GLE 550e Plug-in Hybrid; Toyota Prius Prime (second generation Prius PHEV); Chevrolet Bolt EV; BMW 740e iPerformance; Mercedes-Benz C 350e Plug-in Hybrid
2017: Chrysler Pacifica Hybrid; BMW 530e iPerformance; Tesla Model 3; Kia Optima PHEV; Honda Clarity Electric; Honda Clarity Plug-in Hybrid; Volvo XC60 Plug-in Hybrid; Mini Cooper S E ALL4; Hyundai Ioniq Electric; Cadillac CT6 Plug-in Hybrid; Volvo S90 T8 Plug-in Hybrid; Mitsubishi Outlander P-HEV; Nissan Leaf (second generation)
2018: Hyundai Ioniq Plug-in; Mercedes-Benz GLC 350e 4MATIC; Karma Revero (updated version of the Fisker Karma); Jaguar I-Pace
2019: Hyundai Kona; Subaru Crosstrek Plug-in Hybrid; Kia e-Niro; Audi e-tron; Mercedes-Benz EQC; Porsche Taycan
2020: Toyota RAV4 Prime; Tesla Model Y; Kandi K23 and K27; Electra Meccanica SOLO (three-wheeler); Ford Mustang Mach-E; Volvo V60 Polestar Engineered PHEV; Volvo S60 Plug-in Hybrid; Polestar 1; Polestar 2
2021: Volkswagen ID.4; Volvo XC40 Recharge; Chevrolet Bolt EUV; Ford Escape PHEV; Lincoln Corsair Grand Touring PHEV; Lucid Air; Mazda MX-30; Rivian R1T pickup truck
Future cars and trucks (2021–2024) The following is a list of electric cars and plug-in hybrids with market launch in the United States scheduled up to 2024. 
Nissan Ariya
GMC Hummer EV
Ford F-150 Lightning
Hyundai Ioniq 5
Kia EV6
BMW i4
BMW iX
Mercedes-Benz EQB
Mercedes-Benz EQS
Mercedes-Benz EQS SUV
Chevrolet Blazer EV
Chevrolet Equinox EV
Chevrolet Silverado EV
Cadillac Escalade IQ
Cadillac Lyriq
Cadillac Celestiq
Polestar 3
Volvo C40 Recharge
Volvo EX90
BrightDrop Zevo
Tesla Cybertruck
Lordstown Endurance pickup truck
Honda Prologue
Tesla Roadster second generation
Toyota bZ4X
Subaru Solterra
Fisker eMotion
Fisker Ocean
Aptera (solar electric vehicle)
Bentley Continental Plug-In Hybrid
Audi Q4 e-tron
Canoo Lifestyle
U.S. electric vehicle organizations
CalCars (The California Cars Initiative)
Drive Electric Colorado - partnership of the Colorado Energy Office and several non-profit advocacy groups
Drive Oregon
Electric Auto Association of Northern Nevada
Electric Auto Association Silicon Valley
Electric Car Society
Electric Vehicle Association (EVA) (North America)
Humboldt Electric Vehicle Association
NEDRA (National Electric Drive Racing Association)
Oregon Electric Vehicle Association
Plug In America
PHEV Research Center
Project EVIE
RechargeIT (Google.org)
San Francisco BayLEAFs
Seattle Electric Vehicle Association
World Electric Vehicle Association
Zero Emission Transportation Association (ZETA) - industry group advocating for 100% EV sales by 2030; members include various businesses throughout the EV value chain: EV manufacturers (cars, pickups, buses), ride-sharing, EV batteries (raw materials, manufacture, recycling), EV charging, solar EV installation, electric companies.
See also
References
External links
Energy in Israel
Most energy in Israel comes from fossil fuels. The country's total primary energy demand is significantly higher than its total primary energy production, relying heavily on imports to meet its energy needs. Total primary energy consumption was 304 TWh (1.037 quad) in 2016, or 26.2 million tonnes of oil equivalent. Electricity consumption in Israel was 57,149 GWh in 2017, while production was 64,675 GWh, with net exports of 4.94 TWh. The installed generating capacity was about 16.25 GW in 2014, almost all from fossil fuel power stations, mostly coal and gas fueled. Renewable energy accounted for a minor share of electricity production, with a small solar photovoltaic installed capacity. However, there are a total of over 1.3 million solar water heaters installed as a result of mandatory solar water heating regulations. In 2018, 70% of electricity came from natural gas, and 4% from renewables, of which 95% was solar PV. In 2020, the government committed to a target of 30% renewables by 2030. This target was further revised in 2021, when Israel pledged at the United Nations Climate Change Conference (COP26) to phase out coal for energy generation by 2025 and to reach net zero greenhouse gas emissions by 2050. The transportation sector has historically relied almost entirely on petroleum-derived fuels, as both private motorcars and public transit buses used to overwhelmingly rely on gasoline or diesel - and still do, despite efforts to change this. However, Israel is undertaking a mobility transition which includes the electrification of the Israel Railways network (beginning with the Tel Aviv-Jerusalem railway in 2018) and the construction of the Jerusalem light rail (opened 2011), public transit cable cars in Haifa, and the Tel Aviv light rail. In 2018 Israel set 2030 as the target date for the phase-out of fossil fuel vehicles (i.e., an end to future sales of new fossil-fuel-powered vehicles). History Throughout Israel's history, securing the energy supply has been a major concern of Israeli policymakers. The Israel Electric Corporation, which traces its history to 1923 and the First Jordan Hydro-Electric Power House, is the main electricity generator and distributor in Israel. Petroleum exploration began in 1947 on a surface feature in the Heletz area in the southern coastal plain. The first discovery, Heletz-I, was completed in 1955, followed by the discovery and development of a few small wells in Kokhav, Brur, Ashdod and Zuk Tamrur in 1957. The combined Heletz-Brur-Kokhav field produced a total of 17.2 million barrels, a negligible amount compared with national consumption. Since the early 1950s, 480 oil and gas wells, onshore and offshore, have been drilled in Israel, most of which did not result in commercial success. In 1958–1961, several small gas fields were discovered in the southern Judean desert. From the Six-Day War until the Egyptian Separation Treaty in 1975, Israel produced large quantities of petroleum from the Abu Rodes oil field in Sinai. In 1951, the Arab states accused American oil interests in Saudi Arabia of selling oil to Central American governments who circumvented the Arab blockade against Israel by selling the oil back to the refinery in Haifa. Solar power has been the main renewable energy resource used in Israel since the 1950s, at first mostly for solar water heaters. 
Photovoltaics only reached commercial scale in Israel in the 21st century but has since grown rapidly. In 2021, Prime Minister Naftali Bennett committed Israel at the United Nations Climate Change Conference (COP26) to phasing out coal for energy generation by 2025 and reaching net zero greenhouse gas emissions by 2050. Primary energy Natural gas Since its creation in 1948, Israel has been dependent on energy imports from other countries. Specifically, Israel produced 7 billion cubic meters of natural gas in 2013, and imported 720 million cubic meters in 2011. Historically, Israel has imported natural gas through the Arish-Ashkelon pipeline from Egypt. Egypt is the second-largest natural gas producer in North Africa. In 2005 Egypt signed a 2.5 billion-dollar deal to supply Israel with 57 billion cubic feet of gas per year for fifteen years. Under this arrangement, Egypt supplies 40 percent of Israel's natural gas demand. The Israel Electric Corporation (IEC) controls more than 95% of the electricity sector in Israel, including production, distribution, and transmission of electricity. Israel's natural gas distribution law regulates the distribution of natural gas and is intended to encourage market competition. The discoveries of the Tamar gas field in 2009 and the Leviathan gas field in 2010 off the coast of Israel were important. The natural gas reserves in these two fields (Leviathan has around 19 trillion cubic feet) could make Israel more energy-secure. In 2013 Israel began commercial production of natural gas from the Tamar field and in 2019 from Leviathan. As of 2017, even by conservative estimates, Leviathan holds enough gas to meet Israel's domestic needs for 40 years. In addition, the Karish gas field started production in 2022 after Israel reached an agreement with Lebanon that ended a maritime border dispute between the two countries. Electricity Israel's electricity sector relies mainly on fossil fuels. In 2015, electricity consumption in Israel was 52.86 TWh, or 6,562 kWh per capita. The Israel Electric Corporation (IEC), which is owned by the government, produces most electricity in Israel, with a production capacity of 11,900 megawatts in 2016. In 2016, IEC's share of the electricity market was 71%. Hydrocarbon fuels Most electricity in Israel comes from hydrocarbon fuels from the following IEC power plants: The following power plants belong to independent power producers and, although connected to the IEC’s distribution grid, are not operated by the IEC: Renewable energy As of 2019, Israel's renewable energy production capacity stood at 1,500 MW, almost all of it from solar energy, at 1,438 MW. Additional sources included wind power (27 MW), biogas (25 MW), hydroelectric power (7 MW) and other bio energy (3 MW). Of the solar energy, photovoltaics accounted for 1,190 MW, while concentrated solar power contributed another 248 MW from the Ashalim Power Station. In the same year, 4.7% of Israel's total electricity consumption came from solar photovoltaics. Production capacity of some 0.56 GW was installed in 2019. In addition to renewable energy, Israel is building multiple pumped-storage hydroelectricity plants, for a total capacity of 800 MW. In 2022, 11.8% of Israel's energy mix came from renewable energy sources, totaling 4,765 MW in renewable energy production capacity. 
The vast majority of Israel's renewable capacity comes from solar power, including the Tze'elim, Ketura Sun, Ashalim, 330 MW Dimona, and 250 MW Ta'anakh solar parks. Officials from the Israeli government and the Electricity Authority have set the goal of generating 30% of the country's energy from renewable sources by 2030. Despite this goal, a May 2023 OECD report warned Israel was falling behind on its emissions reduction objectives, largely due to natural gas extraction. In June 2023, Israel's largest renewable energy project, Enlight Renewable Energy's Genesis Wind, began operations near the Israeli villages of Keshet and Yonatan in the Golan Heights. The 207 MW wind farm will provide 70,000 households with clean energy, is connected by a 27-kilometer 161 kV high-voltage underground cable, and is expected to save about 180,000 tons of CO2 emissions annually. In 2023, citing a lack of land for ground-mounted solar PV parks, Israel mandated that all newly constructed commercial buildings install rooftop photovoltaic solar panels. In September 2023, Israel added more than 2 GW of capacity to the national grid to connect renewable energy projects, mainly solar. Nuclear energy Israel has no nuclear power generation as of 2013, although it operates a heavy water nuclear reactor at the Negev Nuclear Research Center. In January 2007, Israeli Infrastructure Minister Binyamin Ben-Eliezer said his country should consider producing nuclear power for civilian purposes. However, as a result of the Fukushima nuclear disaster, Prime Minister Benjamin Netanyahu said on 17 March 2011, "I don't think we're going to pursue civil nuclear energy in the coming years." Solar water heating Israel is one of the world leaders in the use of solar thermal energy per capita. Since the early 1990s, all new residential buildings have been required by the government to install solar water-heating systems, and Israel's National Infrastructure Ministry estimates that solar panels for water-heating satisfy 4% of the country's total energy demand. Israel and Cyprus are the per-capita leaders in the use of solar hot water systems, with over 90% of homes using them. The Ministry of National Infrastructures estimates solar water heating saves Israel 2 million barrels (320,000 m3) of oil a year. See also List of power stations in Israel == References ==
Hydroelectricity
Hydroelectricity, or hydroelectric power, is electricity generated from hydropower (water power). Hydropower supplies one sixth of the world's electricity, almost 4500 TWh in 2020, which is more than all other renewable sources combined and also more than nuclear power. Hydropower can provide large amounts of low-carbon electricity on demand, making it a key element for creating secure and clean electricity supply systems. A hydroelectric power station that has a dam and reservoir is a flexible source, since the amount of electricity produced can be increased or decreased in seconds or minutes in response to varying electricity demand. Once a hydroelectric complex is constructed, it produces no direct waste, and almost always emits considerably less greenhouse gas than fossil fuel-powered energy plants. However, when constructed in lowland rainforest areas, where part of the forest is inundated, substantial amounts of greenhouse gases may be emitted. Construction of a hydroelectric complex can have significant environmental impact, principally in loss of arable land and population displacement. Such projects also disrupt the natural ecology of the river involved, affecting habitats and ecosystems as well as siltation and erosion patterns. While dams can ameliorate the risks of flooding, dam failure can be catastrophic. In 2021, global installed hydropower electrical capacity reached almost 1400 GW, the highest among all renewable energy technologies. Hydroelectricity plays a leading role in countries like Brazil, Norway and China, but there are geographical limits and environmental issues. Tidal power can be used in coastal regions. History Hydropower has been used since ancient times to grind flour and perform other tasks. In the late 18th century hydraulic power provided the energy source needed for the start of the Industrial Revolution. In the mid-1770s, French engineer Bernard Forest de Bélidor published Architecture Hydraulique, which described vertical- and horizontal-axis hydraulic machines, and in 1771 Richard Arkwright's combination of water power, the water frame, and continuous production played a significant part in the development of the factory system, with modern employment practices. In the 1840s the hydraulic power network was developed to generate and transmit hydro power to end users. By the late 19th century, the electrical generator was developed and could now be coupled with hydraulics. The growing demand arising from the Industrial Revolution would drive development as well. In 1878, the world's first hydroelectric power scheme was developed at Cragside in Northumberland, England, by William Armstrong. It was used to power a single arc lamp in his art gallery. The old Schoelkopf Power Station No. 1, US, near Niagara Falls, began to produce electricity in 1881. The first Edison hydroelectric power station, the Vulcan Street Plant, began operating September 30, 1882, in Appleton, Wisconsin, with an output of about 12.5 kilowatts. By 1886 there were 45 hydroelectric power stations in the United States and Canada, and by 1889 there were 200 in the United States alone. At the beginning of the 20th century, many small hydroelectric power stations were being constructed by commercial companies in mountains near metropolitan areas. In 1925, Grenoble, France held the International Exhibition of Hydropower and Tourism, which drew over one million visitors. By 1920, when 40% of the power produced in the United States was hydroelectric, the Federal Power Act was enacted into law. 
The Act created the Federal Power Commission to regulate hydroelectric power stations on federal land and water. As the power stations became larger, their associated dams developed additional purposes, including flood control, irrigation and navigation. Federal funding became necessary for large-scale development, and federally owned corporations, such as the Tennessee Valley Authority (1933) and the Bonneville Power Administration (1937), were created. Additionally, the Bureau of Reclamation, which had begun a series of western US irrigation projects in the early 20th century, was now constructing large hydroelectric projects such as the 1928 Hoover Dam. The United States Army Corps of Engineers was also involved in hydroelectric development, completing the Bonneville Dam in 1937 and being recognized by the Flood Control Act of 1936 as the premier federal flood control agency. Hydroelectric power stations continued to become larger throughout the 20th century. Hydropower was referred to as "white coal". Hoover Dam's initial 1,345 MW power station was the world's largest hydroelectric power station in 1936; it was eclipsed by the 6,809 MW Grand Coulee Dam in 1942. The Itaipu Dam opened in 1984 in South America as the largest, producing 14 GW, but was surpassed in 2008 by the Three Gorges Dam in China at 22.5 GW. Hydroelectricity would eventually supply some countries, including Norway, Democratic Republic of the Congo, Paraguay and Brazil, with over 85% of their electricity. Future potential In 2021 the International Energy Agency (IEA) said that more efforts are needed to help limit climate change. Some countries have highly developed their hydropower potential and have very little room for growth: Switzerland produces 88% of its potential and Mexico 80%. In 2022, the IEA released a main-case forecast of 141 GW of new hydropower capacity added over 2022-2027, slightly lower than the deployment achieved from 2017 to 2022. Because environmental permitting and construction times are long, they estimate hydropower potential will remain limited, with only an additional 40 GW deemed possible in the accelerated case. Modernization of existing infrastructure In 2021 the IEA said that major modernization refurbishments are required. Generating methods Conventional (dams) Most hydroelectric power comes from the potential energy of dammed water driving a water turbine and generator. The power extracted from the water depends on the volume and on the difference in height between the source and the water's outflow. This height difference is called the head. A large pipe (the "penstock") delivers water from the reservoir to the turbine. Pumped-storage This method produces electricity to supply high peak demands by moving water between reservoirs at different elevations. At times of low electrical demand, the excess generation capacity is used to pump water into the higher reservoir, thus providing demand side response. When the demand becomes greater, water is released back into the lower reservoir through a turbine. In 2021 pumped-storage schemes provided almost 85% of the world's 190 GW of grid energy storage; such schemes improve the daily capacity factor of the generation system. Pumped storage is not an energy source, and appears as a negative number in listings. Run-of-the-river Run-of-the-river hydroelectric stations are those with small or no reservoir capacity, so that only the water coming from upstream is available for generation at that moment, and any oversupply must pass unused. 
A constant supply of water from a lake or existing reservoir upstream is a significant advantage in choosing sites for run-of-the-river. Tide A tidal power station makes use of the daily rise and fall of ocean water due to tides; such sources are highly predictable, and if conditions permit construction of reservoirs, can also be dispatchable to generate power during high demand periods. Less common types of hydro schemes use water's kinetic energy or undammed sources such as undershot water wheels. Tidal power is viable in a relatively small number of locations around the world. Sizes, types and capacities of hydroelectric facilities Large facilities The largest power producers in the world are hydroelectric power stations, with some hydroelectric facilities capable of generating more than double the installed capacities of the current largest nuclear power stations. Although no official definition exists for the capacity range of large hydroelectric power stations, facilities from over a few hundred megawatts are generally considered large hydroelectric facilities. Currently, only seven facilities over 10 GW (10,000 MW) are in operation worldwide, see table below. Small Small hydro is hydroelectric power on a scale serving a small community or industrial plant. The definition of a small hydro project varies but a generating capacity of up to 10 megawatts (MW) is generally accepted as the upper limit. This may be stretched to 25 MW and 30 MW in Canada and the United States. Small hydro stations may be connected to conventional electrical distribution networks as a source of low-cost renewable energy. Alternatively, small hydro projects may be built in isolated areas that would be uneconomic to serve from a grid, or in areas where there is no national electrical distribution network. Since small hydro projects usually have minimal reservoirs and civil construction work, they are seen as having a relatively low environmental impact compared to large hydro. This decreased environmental impact depends strongly on the balance between stream flow and power production. Micro Micro hydro means hydroelectric power installations that typically produce up to 100 kW of power. These installations can provide power to an isolated home or small community, or are sometimes connected to electric power networks. There are many of these installations around the world, particularly in developing nations as they can provide an economical source of energy without purchase of fuel. Micro hydro systems complement photovoltaic solar energy systems because in many areas water flow, and thus available hydro power, is highest in the winter when solar energy is at a minimum. Pico Pico hydro is hydroelectric power generation of under 5 kW. It is useful in small, remote communities that require only a small amount of electricity. For example, the 1.1 kW Intermediate Technology Development Group Pico Hydro Project in Kenya supplies 57 homes with very small electric loads (e.g., a couple of lights and a phone charger, or a small TV/radio). Even smaller turbines of 200-300 W may power a few homes in a developing country with a drop of only 1 m (3 ft). A Pico-hydro setup is typically run-of-the-river, meaning that dams are not used, but rather pipes divert some of the flow, drop this down a gradient, and through the turbine before returning it to the stream. 
Underground An underground power station is generally used at large facilities and makes use of a large natural height difference between two waterways, such as a waterfall or mountain lake. A tunnel is constructed to take water from the high reservoir to the generating hall, built in a cavern near the lowest point of the water tunnel, and a horizontal tailrace takes water away to the lower outlet waterway. Calculating available power A simple formula for approximating electric power production at a hydroelectric station is:

P = -\eta\,(\dot{m}\, g\, \Delta h) = -\eta\,((\rho \dot{V})\, g\, \Delta h)

where:
P is power (in watts),
\eta (eta) is the coefficient of efficiency (a unitless scalar ranging from 0 for completely inefficient to 1 for completely efficient),
\rho (rho) is the density of water (~1000 kg/m3),
\dot{V} is the volumetric flow rate (in m3/s),
\dot{m} is the mass flow rate (in kg/s),
\Delta h (delta h) is the change in height (in meters), and
g is the acceleration due to gravity (9.8 m/s2).

Efficiency is often higher (that is, closer to 1) with larger and more modern turbines. Annual electric energy production depends on the available water supply. In some installations, the water flow rate can vary by a factor of 10:1 over the course of a year. (A short worked sketch applying this formula is given below.) Properties Advantages Flexibility Hydropower is a flexible source of electricity since stations can be ramped up and down very quickly to adapt to changing energy demands. Hydro turbines have a start-up time of the order of a few minutes. Although battery power responds even more quickly, its capacity is tiny compared to hydro. It takes less than 10 minutes to bring most hydro units from cold start-up to full load; this is quicker than nuclear and almost all fossil fuel power. Power generation can also be decreased quickly when there is surplus generation. Hence the limited capacity of hydropower units is not generally used to produce base power except for vacating the flood pool or meeting downstream needs. Instead, it can serve as backup for non-hydro generators. High value power The major advantage of conventional hydroelectric dams with reservoirs is their ability to store water at low cost for dispatch later as high value clean electricity. In 2021 the IEA estimated that the "reservoirs of all existing conventional hydropower plants combined can store a total of 1 500 terawatt-hours (TWh) of electrical energy in one full cycle" which was "about 170 times more energy than the global fleet of pumped storage hydropower plants". Battery storage capacity is not expected to overtake pumped storage during the 2020s. When used as peak power to meet demand, hydroelectricity has a higher value than baseload power and a much higher value compared to intermittent energy sources such as wind and solar. Hydroelectric stations have long economic lives, with some plants still in service after 50–100 years. Operating labor cost is also usually low, as plants are automated and have few personnel on site during normal operation. Where a dam serves multiple purposes, a hydroelectric station may be added with relatively low construction cost, providing a useful revenue stream to offset the costs of dam operation. It has been calculated that the sale of electricity from the Three Gorges Dam will cover the construction costs after 5 to 8 years of full generation. 
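Referring back to the "Calculating available power" formula above, the following is a minimal illustrative sketch. The head, flow rate and efficiency values are invented for illustration and do not describe any real station; the head is expressed as a positive drop, which absorbs the minus sign used in the signed form of the equation.

```python
# Illustrative sketch of the hydro power approximation P = eta * rho * V_dot * g * head.
# All numbers below are made-up example values, not data for a real power station.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.8             # acceleration due to gravity, m/s^2

def hydro_power_watts(efficiency: float, flow_m3_per_s: float, head_m: float) -> float:
    """Approximate electrical output in watts for a given efficiency, flow and head (drop)."""
    return efficiency * RHO_WATER * flow_m3_per_s * G * head_m

if __name__ == "__main__":
    p = hydro_power_watts(efficiency=0.90, flow_m3_per_s=100.0, head_m=80.0)
    print(f"Approximate output: {p / 1e6:.1f} MW")  # ~70.6 MW for these example values
```

The sketch also illustrates why annual energy production depends so strongly on the available water supply: with efficiency and head roughly fixed, output scales directly with the flow rate, which as noted above can vary by a factor of 10:1 over a year.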
Properties
Advantages
Flexibility
Hydropower is a flexible source of electricity since stations can be ramped up and down very quickly to adapt to changing energy demands. Hydro turbines have a start-up time of the order of a few minutes. Although battery power is quicker, its capacity is tiny compared to hydro. It takes less than 10 minutes to bring most hydro units from cold start-up to full load; this is quicker than nuclear and almost all fossil fuel power. Power generation can also be decreased quickly when there is surplus power generation. Hence the limited capacity of hydropower units is not generally used to produce base power except for vacating the flood pool or meeting downstream needs. Instead, it can serve as backup for non-hydro generators.

High value power
The major advantage of conventional hydroelectric dams with reservoirs is their ability to store water at low cost for dispatch later as high value clean electricity. In 2021 the IEA estimated that the "reservoirs of all existing conventional hydropower plants combined can store a total of 1 500 terawatt-hours (TWh) of electrical energy in one full cycle", which was "about 170 times more energy than the global fleet of pumped storage hydropower plants". Battery storage capacity is not expected to overtake pumped storage during the 2020s. When used as peak power to meet demand, hydroelectricity has a higher value than baseload power and a much higher value compared to intermittent energy sources such as wind and solar. Hydroelectric stations have long economic lives, with some plants still in service after 50–100 years. Operating labor cost is also usually low, as plants are automated and have few personnel on site during normal operation. Where a dam serves multiple purposes, a hydroelectric station may be added with relatively low construction cost, providing a useful revenue stream to offset the costs of dam operation. It has been calculated that the sale of electricity from the Three Gorges Dam will cover the construction costs after 5 to 8 years of full generation. However, some data shows that in most countries large hydropower dams will be too costly and take too long to build to deliver a positive risk-adjusted return, unless appropriate risk management measures are put in place.

Suitability for industrial applications
While many hydroelectric projects supply public electricity networks, some are created to serve specific industrial enterprises. Dedicated hydroelectric projects are often built to provide the substantial amounts of electricity needed for aluminium electrolytic plants, for example. The Grand Coulee Dam was switched to support Alcoa aluminium production in Bellingham, Washington, United States, for American World War II airplanes before it was allowed to provide irrigation and power to citizens (in addition to aluminium power) after the war. In Suriname, the Brokopondo Reservoir was constructed to provide electricity for the Alcoa aluminium industry. New Zealand's Manapouri Power Station was constructed to supply electricity to the aluminium smelter at Tiwai Point.

Reduced CO2 emissions
Since hydroelectric dams do not use fuel, power generation does not produce carbon dioxide. While carbon dioxide is initially produced during construction of the project, and some methane is given off annually by reservoirs, hydro has one of the lowest lifecycle greenhouse gas emissions for electricity generation. The low greenhouse gas impact of hydroelectricity is found especially in temperate climates. Greater greenhouse gas emission impacts are found in tropical regions because the reservoirs of power stations in tropical regions produce a larger amount of methane than those in temperate areas. Like other non-fossil fuel sources, hydropower also has no emissions of sulfur dioxide, nitrogen oxides, or other particulates.

Other uses of the reservoir
Reservoirs created by hydroelectric schemes often provide facilities for water sports, and become tourist attractions themselves. In some countries, aquaculture in reservoirs is common. Multi-use dams installed for irrigation support agriculture with a relatively constant water supply. Large hydro dams can control floods, which would otherwise affect people living downstream of the project. Managing dams which are also used for other purposes, such as irrigation, is complicated.

Disadvantages
In 2021 the IEA called for "robust sustainability standards for all hydropower development with streamlined rules and regulations".

Ecosystem damage and loss of land
Large reservoirs associated with traditional hydroelectric power stations result in submersion of extensive areas upstream of the dams, sometimes destroying biologically rich and productive lowland and riverine valley forests, marshland and grasslands. Damming interrupts the flow of rivers and can harm local ecosystems, and building large dams and reservoirs often involves displacing people and wildlife. The loss of land is often exacerbated by habitat fragmentation of surrounding areas caused by the reservoir. Hydroelectric projects can be disruptive to surrounding aquatic ecosystems both upstream and downstream of the plant site. Generation of hydroelectric power changes the downstream river environment. Water exiting a turbine usually contains very little suspended sediment, which can lead to scouring of river beds and loss of riverbanks. The turbines will also kill a large portion of the fauna passing through; for instance, around 70% of eels passing through a turbine will perish immediately.
Since turbine gates are often opened intermittently, rapid or even daily fluctuations in river flow are observed.

Drought and water loss by evaporation
Drought and seasonal changes in rainfall can severely limit hydropower. Water may also be lost by evaporation.

Siltation and flow shortage
When water flows it has the ability to transport particles heavier than itself downstream. This has a negative effect on dams and subsequently their power stations, particularly those on rivers or within catchment areas with high siltation. Siltation can fill a reservoir and reduce its capacity to control floods, along with causing additional horizontal pressure on the upstream portion of the dam. Eventually, some reservoirs can become full of sediment and useless, or over-top during a flood and fail.

Changes in the amount of river flow will correlate with the amount of energy produced by a dam. Lower river flows reduce the amount of live storage in a reservoir, therefore reducing the amount of water that can be used for hydroelectricity. The result of diminished river flow can be power shortages in areas that depend heavily on hydroelectric power. The risk of flow shortage may increase as a result of climate change. One study of the Colorado River in the United States suggests that modest climate changes, such as an increase in temperature of 2 degrees Celsius resulting in a 10% decline in precipitation, might reduce river run-off by up to 40%. Brazil in particular is vulnerable due to its heavy reliance on hydroelectricity, as increasing temperatures, lower water flow and alterations in the rainfall regime could reduce total energy production by 7% annually by the end of the century.

Methane emissions (from reservoirs)
The climate benefit of hydropower is smaller in tropical regions. In lowland rainforest areas, where inundation of a part of the forest is necessary, it has been noted that the reservoirs of power plants produce substantial amounts of methane. This is due to plant material in flooded areas decaying in an anaerobic environment and forming methane, a greenhouse gas. According to the World Commission on Dams report, where the reservoir is large compared to the generating capacity (less than 100 watts per square metre of surface area) and no clearing of the forests in the area was undertaken prior to impoundment of the reservoir, greenhouse gas emissions from the reservoir may be higher than those of a conventional oil-fired thermal generation plant. In boreal reservoirs of Canada and Northern Europe, however, greenhouse gas emissions are typically only 2% to 8% of those from any kind of conventional fossil-fuel thermal generation. A new class of underwater logging operation that targets drowned forests can mitigate the effect of forest decay.

Relocation
Another disadvantage of hydroelectric dams is the need to relocate the people living where the reservoirs are planned. In 2000, the World Commission on Dams estimated that dams had physically displaced 40–80 million people worldwide.

Failure risks
Because large conventional dammed-hydro facilities hold back large volumes of water, a failure due to poor construction, natural disasters or sabotage can be catastrophic to downriver settlements and infrastructure. During Typhoon Nina in 1975, the Banqiao Dam in southern China failed when more than a year's worth of rain fell within 24 hours (see 1975 Banqiao Dam failure). The resulting flood caused the deaths of 26,000 people, with another 145,000 dying from epidemics. Millions were left homeless.
The creation of a dam in a geologically inappropriate location may cause disasters such as the 1963 disaster at Vajont Dam in Italy, where almost 2,000 people died. The Malpasset Dam in Fréjus on the French Riviera (Côte d'Azur), southern France, collapsed on December 2, 1959, killing 423 people in the resulting flood.

Smaller dams and micro hydro facilities create less risk, but can form continuing hazards even after being decommissioned. For example, the small earthen embankment Kelly Barnes Dam failed in 1977, twenty years after its power station was decommissioned, causing 39 deaths.

Comparison and interactions with other methods of power generation
Hydroelectricity eliminates the flue gas emissions from fossil fuel combustion, including pollutants such as sulfur dioxide, nitric oxide, carbon monoxide, dust, and mercury in the coal. Hydroelectricity also avoids the hazards of coal mining and the indirect health effects of coal emissions. In 2021 the IEA said that government energy policy should "price in the value of the multiple public benefits provided by hydropower plants".

Nuclear power
Nuclear power is relatively inflexible, although it can reduce its output reasonably quickly. Since the cost of nuclear power is dominated by its high infrastructure costs, the cost per unit energy goes up significantly with low production. Because of this, nuclear power is mostly used for baseload. By way of contrast, hydroelectricity can supply peak power at much lower cost and is thus often used to complement nuclear or other sources for load following. Countries where the two are paired in a close to 50/50 share include Switzerland, Sweden and, to a lesser extent, Ukraine and Finland.

Wind power
Wind power goes through predictable variation by season, but is intermittent on a daily basis. Maximum wind generation has little relationship to peak daily electricity consumption: the wind may peak at night when power isn't needed, or be still during the day when electrical demand is highest. Occasionally weather patterns can result in low wind for days or weeks at a time; a hydroelectric reservoir capable of storing weeks of output is useful to balance generation on the grid. Peak wind power can be offset by minimum hydropower, and minimum wind can be offset with maximum hydropower. In this way the easily regulated character of hydroelectricity is used to compensate for the intermittent nature of wind power. Conversely, in some cases wind power can be used to spare water for later use in dry seasons. An example of this is Norway's trading with Sweden, Denmark, the Netherlands, Germany and the UK: Norway is 98% hydropower, while its flatland neighbors have wind power. In areas that do not have hydropower, pumped storage serves a similar role, but at a much higher cost and 20% lower efficiency.

World hydroelectric capacity
The ranking of hydroelectric capacity is either by actual annual energy production or by installed capacity power rating. In 2015 hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity. In 2021, hydropower produced 4,200 TWh, more than half of total renewable generation for the year. Hydropower is produced in 150 countries, with the Asia-Pacific region (excluding China) accounting for 26% of global generation in 2021.
China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. Brazil, Canada, New Zealand, Norway, Paraguay, Austria, Switzerland, Venezuela and several other countries generate the majority of their internal electric energy from hydroelectric power. Paraguay produces 100% of its electricity from hydroelectric dams and exports 90% of its production to Brazil and to Argentina. Norway produces 96% of its electricity from hydroelectric sources. Large plants tend to be built by governments, so most capacity (70%) is publicly owned, even though most plants (nearly 70%) are owned and operated by the private sector, as of 2021.

A hydroelectric station rarely operates at its full power rating over a full year; the ratio between annual average power and installed capacity rating is the capacity factor. The installed capacity is the sum of all generator nameplate power ratings.

Economics
The weighted average cost of capital is a major factor.

External links
Hydroelectricity at Curlie
Hydropower Reform Coalition
Interactive demonstration on the effects of dams on rivers
European Small Hydropower Association
IEC TC 4: Hydraulic turbines (International Electrotechnical Commission - Technical Committee 4)
IEC TC 4 portal with access to scope, documents and TC 4 website
octopus energy
Octopus Energy Group is a British renewable energy group specialising in sustainable energy. It was founded in 2015 with the backing of British fund management company Octopus Group, a British asset management company. Headquartered in London, the company has operations in the United Kingdom, France, Germany, Italy, Spain, Australia, Japan, New Zealand and the United States. On completion of the acquisition of Shell Energy, expected towards the end of 2023, Octopus Energy will be the UK's second largest domestic energy provider. Octopus Energy Group operates a wide range of business divisions, including Octopus Energy Retail, Octopus Energy for Business, Octopus Energy Services, Octopus Electric Vehicles, Octopus Energy Generation, Octopus Hydrogen, Kraken, Kraken Flex and the not-for-profit Centre for Net Zero. The company also supplies software services to other energy suppliers. History Octopus Energy was established in August 2015 as a subsidiary of Octopus Capital Limited. Trading began in December 2015. Greg Jackson is the founder of the company and holds the position of chief executive.By April 2018, the company had 198,000 customers and had made an energy procurement deal with Shell. Later in 2018, Octopus gained the 100,000 customers of Iresa Limited, under Ofgem's "supplier of last resort" process, after Iresa ceased trading. The same year, Octopus replaced SSE as the energy supplier for M&S Energy, a brand of Marks & Spencer, and bought Affect Energy, which had 22,000 customers.In 2018 Hanwha Energy Retail Australia (Nectr) chose the Kraken platform, developed by Octopus to provide billing, CRM and other technology services to support its launch into the Australian retail energy market.In August 2019, an agreement with Midcounties Co-operative saw Octopus gain more than 300,000 customers, taking its total beyond 1 million. Three Co-op brands were affected: Octopus acquired the customers of the GB Energy and Flow Energy brands, and began to operate the accounts of Co-op Energy customers on a white label basis, while Midcounties retained responsibility for acquiring new Co-op Energy customers.In both 2018 and 2019, Octopus was the only energy supplier to earn "Recommended Provider" status from the Which? consumer organisation. In January 2020, Octopus was ranked first in a Which? survey and was one of three recommended providers. In January 2021, Octopus was ranked second and was one of two recommended providers, becoming the only energy provider in the UK to have been named as a recommended provider four years running.In January 2020, ENGIE UK announced that it was selling its residential energy supply business (comprising around 70,000 UK residential customers) to Octopus. 
The same month saw the launch of London Power, a partnership with the Mayor of London.In 2020 Octopus completed two funding rounds totalling $577m, making the company the highest funded UK tech start-up that year.In November 2020 Octopus acquired Manchester-based smart grid energy software company Upside Energy, which in June 2021 rebranded as KrakenFlex.The same month saw Octopus launch the not-for-profit Octopus Centre for Net Zero (OCNZ), a research organisation tasked with creating models and policy recommendations for potential paths to a green energy future.In February 2021, CEO Greg Jackson said on a BBC News interview that Octopus does not operate a human resources department.In March 2021 the Financial Times listed Octopus at number 23 on their list of the fastest growing companies in Europe.In July 2021 Octopus rose 12 places on the UK Customer Service Index to 17th, making it the only energy company in the Top 50.In 2021, Octopus built the UK's first R&D and training centre for the decarbonisation of heat. Located in Slough, the centre will train 1000 heat pump engineers per year and develop new heating systems. In September 2021, Octopus was appointed as the Supplier of Last Resort (SOLR) for Avro Energy, acquiring Avro Energy's domestic customers and increasing their customer base to 3.1m customers.In November 2021, Octopus announced in Manchester that it had signed a deal with the city region as part of a bid to become carbon neutral by 2038.In October 2022, Octopus reached an agreement to acquire Bulb Energy's 1.5 million customers.In February 2023, Marks & Spencer announced it was pulling out of the energy supply business and ending its five-year partnership with Octopus; the 60,000 M&S Energy customers would transfer to Octopus Energy in April.In September 2023, Octopus announced it would be acquiring Shell's household energy business in the UK (trading as Shell Energy) and in Germany, in a deal expected to complete in late 2023. It is anticipated the acquisition will increase the company's domestic and business customer base to 6.5 million, likely making it the second largest supplier of energy in the UK. Investments In September 2019 Octopus acquired German start-up 4hundred for £15m; the acquisition of 4hundred, which had 11,000 customers, was Octopus' first overseas expansion.In May 2020, Australian electricity and gas supplier Origin Energy paid A$507 million for a 20% stake in Octopus. This meant Octopus gained "unicorn" status, as a startup company with a value in excess of £1 billion. In September of that year, Octopus acquired Evolve Energy, a US Silicon Valley-based start-up, in a $5m deal. The acquisition was the first step in Octopus' $100m US expansion; at the time of the acquisition, Octopus announced it was aiming to acquire 25 million US customers, and 100 million global customers in total, by 2027.In December 2020, Tokyo Gas paid about 20 billion yen ($193 million) for a 9.7% stake in Octopus, valuing the company at $2.1bn. Octopus and Tokyo Gas agreed to launch the Octopus brand in Japan via a 30:70 joint venture to provide electricity from renewable sources, amongst other services. Origin invested a further $50m at the same time, to maintain its 20% stake.In August 2021, Octopus entered the Spanish market with the acquisition of green energy start-up Umeme. Upon the acquisition, Octopus announced it was targeting a million Spanish energy accounts under its brand by 2027. 
In November 2021, Octopus acquired Italian energy retailer SATO Luce e Gas, rebranding the business as Octopus Energy Italy, investing an initial £51m and targeting 5% of the Italian market by 2025. As a result of these acquisitions, Octopus now has retail, generation or technology licences in 13 countries across four continents.In September 2021, Generation Investment Management, co-founded and chaired by Al Gore, purchased a 13% stake in Octopus Energy Group in a deal worth $600m. The investment increased the company's valuation to $4.6bn, with the cash injection to be used by Octopus to increase its investment in new technologies for cheaper and faster decarbonisation.In December 2021, Octopus Energy announced a long-term partnership with Canada Pension Plan Investment Board, raising US$300m and taking the valuation of Octopus Energy Group to approximately $5bn.In January 2022 Octopus Energy entered the French market with the acquisition of Plüm énergie, a French energy start-up with 100,000 retail and corporate accounts. Plüm was subsequently rebranded as Octopus Energy France. Operations Gas and electricity supply As of March 2023 the company has nearly 3 million domestic and business customers.Besides industry-standard fixed and variable tariffs, the company is known for innovative tariffs which are made possible by the national rollout of smart meters. These include: Octopus Tracker – gas and electricity prices change every day, and are based on wholesale prices for that day, with disclosure of overheads and the company's profit margin. Octopus Agile – electricity prices change every half hour, according to a schedule published the previous day, determined from wholesale prices. The price occasionally goes negative (i.e. customers are paid to use electricity) at times of high generation and low demand. Octopus Go – a tariff with a reduced rate for an overnight period, intended for owners of electric vehicles.In March 2019, Octopus announced it had partnered with Amazon's Alexa virtual assistant, to optimise home energy use through the Agile Octopus time-of-use tariff.As part of their partnership agreed in August 2019, Midcounties Co-operative and Octopus established a joint venture to develop the UK's community energy market and encourage small-scale electricity generation. Brands Besides the Octopus Energy brand, as of September 2021 customers are supplied under the Ebico Energy, Affect Energy, Co-op Energy, M&S Energy and London Power brands. Electricity generation In its early years the company did not generate gas or electricity, instead making purchases on the wholesale markets. In 2019, Octopus stated that all its electricity came from renewable sources, and began to offer a "green" gas tariff with carbon offsetting. In July 2021, Octopus acquired sister company Octopus Renewables which claims to be the UK's largest investor in solar farms, and also invests in wind power and anaerobic digesters. At the time of the acquisition, the generation assets were reported to be worth over £3.4 billion.In October 2020, Octopus partnered with Tesla Energy to power the Tesla Energy Plan, which is designed to power a home with 100% clean energy either from solar panels or Octopus. 
The plan allows households to become part of the UK Tesla Virtual Power Plant, which connects a network of homes that generate, store and return electricity to the grid at peak times. In January 2021, Octopus acquired two 73 metre (240') wind turbines to power its 'Fan Club' tariff, which offers households living near its turbines cheaper electricity prices when the wind is blowing strongly. Customers on the tariff get a 20% discount on the unit price when the turbines are spinning, and a 50% discount when the wind is above 8 m/s (20 mph). In November 2021, Octopus Energy announced plans to raise £4 billion to fund the global expansion of its Fan Club model, which would be expanded to include solar farms. By 2030, Octopus aims to supply around 2.5 million households with green electricity through Fan Club schemes. Also in November 2021, Octopus Energy Group signed a deal with Elia Group at COP26 to build a "smart" green grid across Belgium and Germany. The company's flexibility platform KrakenFlex will be used together with Elia Group's energy data affiliate, re.alto, to enable electric vehicles, heat pumps and other green technologies to be used for grid balancing. In January 2022, it was announced that Octopus Renewables had bought the Broons/Biterne-Sud wind farm in Côtes d'Armor, northeast Brittany, France from Energiequelle for an undisclosed price. In June of that year, the group's fund management team bought the rights to develop the 35 MW Gaishecke wind farm near Frankfurt, Germany.

Investment trust
Octopus Renewables is contracted as the investment manager for Octopus Renewables Infrastructure Trust, an investment trust established in 2019 which owns wind and solar generation in the UK, Europe and Australia.

Electric vehicle charging
The Electric Universe service, which aims to simplify public charging of electric vehicles, was launched in 2020 as Electric Juice and rebranded in 2022. As of September 2022, more than 450 charging companies were taking part, allowing customers access to over 300,000 chargers in over 50 countries through a single card and/or app. All drivers can use the service, and Octopus Energy customers have the option of paying their charging costs through their domestic energy bills.

Software development
Octopus Energy licenses its proprietary customer management system, called Kraken, which runs on Amazon's cloud computing service. It was first licensed by UK rival Good Energy in late 2019, for an initial three-year term, to manage its 300,000 customers. In March 2020 it was announced that E.ON and its Npower subsidiary had licensed the technology to manage their combined 10 million customers. In May 2021 it was announced that E.ON had completed the migration of all two million former npower customers to its Kraken-powered E.ON Next customer service platform. The migration was hailed as being responsible for E.ON's financial recovery in the UK. The Kraken software was also licensed to Australia's Origin Energy as part of their May 2020 agreement. In November 2021, EDF Energy agreed a deal with Octopus Energy Group to move its five million customers onto the Kraken platform. The customer accounts will be migrated onto Kraken from 2023, increasing the number of energy accounts contracted to be served via Kraken to over 20 million worldwide. Kraken also has strategic partnerships with Hanwha Group and Tokyo Gas.
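To make the half-hourly time-of-use pricing described earlier (the Agile-style tariff) concrete, here is a minimal Python sketch of how a day's cost could be computed from published half-hourly prices and metered consumption. The prices and usage figures are invented for illustration and are not Octopus Energy's actual rates or billing code; note that a negative price simply reduces the bill, i.e. the customer is paid for consumption in that slot.

```python
# Cost of a day's consumption under a half-hourly time-of-use tariff.
# prices_p_per_kwh: pence per kWh for each 30-minute slot (48 slots in a full day);
# usage_kwh: kWh consumed in the matching slot. All figures below are made up.
def daily_cost_pence(prices_p_per_kwh: list[float], usage_kwh: list[float]) -> float:
    if len(prices_p_per_kwh) != len(usage_kwh):
        raise ValueError("need one usage value per half-hourly price")
    # A negative price means the customer is paid to consume in that slot.
    return sum(p * u for p, u in zip(prices_p_per_kwh, usage_kwh))

# Toy example with just four half-hour slots instead of a full day of 48
prices = [18.5, 22.0, -2.1, 9.0]   # pence/kWh, published the previous day
usage = [0.3, 0.4, 1.2, 0.5]       # kWh used in each slot
print(f"{daily_cost_pence(prices, usage):.1f}p")  # 16.3p
```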
Marketing Climate change In 2019 Octopus launched a 'Portraits from the Precipice' campaign, which sought to raise awareness of climate change and encouraged customers to switch to greener energy deals. The campaign artwork was exhibited at over 5,000 sites, making it the largest ever digital out-of-home (DOOH) art exhibition. As a result of the campaign, Octopus registered a 163% increase in sign-ups and gained 37,000 customers. The campaign won the 2020 Marketing Week Masters award for utilities, and the 2020 Energy Institute award for Public Engagement. Solar Energy at COP26 In November 2021 Octopus financed 'Grace of the Sun', a large-scale art piece by Robert Montgomery made using the Little Sun solar lamps designed by Olafur Eliasson and Frederik Ottesen. The project, which coincided with COP26, was realised in Glasgow through collaboration with the local art community, and was designed as a call for global leaders to invest in renewable energies such as solar PV, in order to power a sustainable future. References External links Official website Armitage, Jim (24 December 2020). "Interview with Greg Jackson". Evening Standard.
ecotricity
Ecotricity is a British energy company based in Stroud, Gloucestershire, England, specialising in selling green energy to consumers that it primarily generates from its 87.2 megawatt wind power portfolio. It is built on the principle of heavily reinvesting its profit in building more of its own green energy generation.The company was founded in 1995 by Dale Vince, who remains in control; in 2022 he announced an intention to sell the company, which has around 200,000 domestic and business customers. Besides the supply of gas and electricity, Ecotricity's initiatives include the creation of one of Britain's first electric vehicle charging networks, which was sold to Gridserve in 2021. History Ecotricity was started by Dale Vince in 1995 as Renewable Energy Company Limited, with a single wind turbine he had used to power an old army truck in which he lived on a hill near Stroud. Vince later went on to build commercial wind-monitoring equipment, which the company still does today, using the name Nexgen. Ecotricity started generation with a 40-metre turbine in 1996, which at the time was the largest in the country.In 2007, Vince ran an advertisement on the back page of The Guardian newspaper inviting Richard Branson to his house to discuss solutions to climate change over a carbon-free breakfast. The ad ran the day after Branson appeared on TV with American former vice president Al Gore, who had managed to persuade Branson that climate change was an issue. The ad included Vince's personal mobile phone number.Ecotricity was a winner in the 2007 Ashden Awards for sustainable energy. The awards congratulated Ecotricity for its environmental contribution, saying: "The company's turbines are delivering 46 GW·h/yr of renewable electricity and avoiding around 46,000 tonnes of CO2 emissions a year. The installed capacity is expected to double by the end of 2007."In July 2009, Ecotricity started legal proceedings against French power company EDF Energy for the alleged misuse of the green Union Flag logo, used to promote EDF's Team Green Britain campaign. Ecotricity had previously used a green Union flag in its own advertising and claimed confused customers had contacted it to ask why Ecotricity was co-operating with EDF. In January 2012, it was announced that Ecotricity has invested in the development of Searaser pump-to-shore wave energy machines, and in June said they were to be deployed in the autumn of that year. In October 2014, Ecotricity and marine consultants DNV GL were moving from laboratory trials to sea trials.In 2013, Ecotricity's electricity supply became 100% renewable, rather than a mix.In October 2014, it was announced that Ecotricity had partnered with Skanska to build and finance new turbines, which added a further 100MW to its existing 70 MW capacity, The following month, the company decided not to attempt new planning applications in England because of the political climate, instead concentrating on Scotland. 
It went on to spin its small turbine manufacturer out into a subsidiary called Britwind, which, in collaboration with a local company, offered free electricity to crofters in return for installing a small turbine, keeping any excess power generated. In March 2015, Ecotricity announced it had refinanced its existing wind farms with the aim of using the extra capital to expand production to 100 megawatts by November 2016. In 2016, Ecotricity had an approximately 25% shareholding in competitor Good Energy, which it maintained until 2020. In the 2017/2018 financial year the company had a turnover of £176 million, with a gross profit of £55 million and a loss on ordinary activities before tax of £4.9 million, but after charges and revaluation of investments had a "Total comprehensive (loss) for the year" of £9.5 million. It gave £416,000 to charity. By 2019, the company had 200,000 customers. A corporate restructure in 2020 created Green Britain Group Limited; the company's directors are Dale Vince and Kate Vince, and its subsidiaries include Ecotricity Limited and Forest Green Rovers Football Club Limited. In January 2021 the company agreed to buy 3 megawatt-hours yearly from United Downs Deep Geothermal Power, the UK's first geothermal plant. In summer 2021, Ecotricity made a bid, which was rejected, to take over Good Energy, in which it already owned 27% of the shares. In April 2022, Dale Vince stated an intention to sell the company. It was reported that the company planned to build a further 2,500 MW of renewable energy generation, which would require investment of some £2 billion.

Tariffs
Before August 2013, Ecotricity ran a mix of fuels. Ecotricity's proportion of renewable energy rose from 24.1% in 2007 to 51.1% in 2011 (compared with a national average of 7.9%), with plans for a further increase to 60% by 2012. In the past, a substantial proportion of the electricity (25.9% in 2007) sold by Ecotricity to customers came from nuclear sources. This proportion had decreased to 16% by 2010, and to 2.6% by 2011. Ecotricity also provided a 100% renewable energy tariff called New Energy Plus, in which renewable energy was bought in from other suppliers to top up renewable energy produced by Ecotricity.

Wind
In Conisholme in Lincolnshire on 8 January 2009, two of the blades of one of the company's turbines were damaged. In February 2013 the go-ahead was given for Ecotricity to build its largest wind farm, a 66 megawatt, 22-turbine farm at Heckington Fen in Lincolnshire. In February 2013, Ecotricity revealed a prototype 6 kW vertical axis wind turbine called the "urbine".

Solar
Ecotricity also produces solar energy, with its first "sun park" opening in 2011. In April 2016 it bought SunEdison's UK business supplying domestic solar panels.

Gas
From May 2010 it became the first UK company to supply eco-friendly gas, produced in the Netherlands by anaerobic digestion of sugar beet waste, and in 2015 it was planning to have its own digesters fed by locally sourced grass from marginal land of grade 3 or poorer by 2017. The first of these would have produced 78.8 GWh a year from 75,000 t of grass and forage rye silage. In August 2015, Ecotricity announced plans to build an anaerobic digester at Sparsholt College in Hampshire that would take grass cuttings from local farms and supply the resulting six megawatts of gas to the grid, with the overall aim of training students in the technology.
This joined the first announced in Gloucestershire in April and was followed by a third three megawatt plant announced in August in Somerset.On 25 April 2016, planning permission for the site at Sparsholt College was refused. In July 2016, a new application was made to build the facility at the college site, which was approved in October 2016. The new proposal included "[...] new and revised traffic data and assessment, new traffic plans to keep vehicle movements away from Sparsholt village and a commitment to protect local road infrastructure.". Also, "[Ecotricity] consulted representatives of the nearby parish councils and incorporated their requests, wherever possible into the routing plans and operational controls."By the start of 2019 the company had not built any biogas plants but still intended to do so. Microtricity feed-in tariff Ecotricity offers the Feed-in Tariff as a voluntary licensee under the name "Microtricity", offering payments to people who generate and export electricity from low-carbon sources such as solar panels. As of March 2022, Ecotricity does not offer a Smart Export Guarantee tariff to small low-carbon generators such as domestic solar panel systems. Side projects Greenbird Ecotricity is the sponsor of the Ecotricity Greenbird, a land yacht that set a new world land speed record for wind-powered vehicles on 26 March 2009 on the dry Lake Ivanpah. Nemesis Ecotricity has built an electric sports car called Nemesis that was built as a demonstration of what electric cars are capable of: an endurance trip from Land's End to John o' Groats is planned recharging only from electricity produced by wind power. In September 2012 the car broke the UK electric land speed record reaching an average speed of 151 miles per hour (243 km/h). Vehicle recharging (2011 to 2021) In July 2011, Ecotricity launched a free vehicle charging network sited around the country at 14 of the Welcome Break Motorway service areas, linking London in the south with Exeter in the west and Edinburgh in the north. The charging points were initially equipped with both a UK standard 13amp domestic socket and a high power IEC 62196 32amp 3-phase socket. There were plans for charging points at RoadChef sites also.In October 2012, the company started to add 50 kW CHAdeMO fast charging stations, allowing compatible cars to recharge within 30 minutes. In April 2014 it was announced that it would be adding support for Combined Charging System connectors, and by September Ecotricity had over 120 chargers, branded as the Electric Highway. In May 2014, Ecotricity brought an interim high-court injunction against electric car manufacturer Tesla over its vehicle charging network; this was resolved in an out of court settlement.In 2014, the Ecotricity vehicle charging network had sporadic software issues to do with the addition of a new connector which left some chargers not working or not connecting to specific cars.In December 2014, the network covered 90% of the UK's motorway service stations, with sites also at Land's End and John o' Groats. By December 2015 it had 6,500 members using it once a week or more, and the network, which had hitherto been free to use, began to require payment. From 11 July 2016, a 20-minute fast-charge cost £5, later changed to £6 for 30 minutes, but charges remained free for customers of Ecotricity. 
Following feedback from users, a balance between the needs of EV drivers and PHEV drivers led to a £3 connection fee, waived for Ecotricity customers, and 17p per KWh.In 2018 the Ecotricity EV tariff on its motorway network was 30p/KWh for non-Ecotricity customers and half this for customers. Access was via a mobile phone app. To help with using this, some of the charging points were fitted with short-range, restricted, WiFi to enable connection in poor mobile signal areas. By the start of 2019, Ecotricity had over 300 charging points.In early 2021, Ecotricity and GRIDSERVE announced a partnership which would see the network expanded and contactless payment facilities added. Funding for the programme came from Hitachi Capital (UK), also a shareholder in GRIDSERVE. In mid 2021, it was announced that GRIDSERVE had purchased the remaining stake from Ecotricity, taking full ownership of the charging network. Distributed energy storage Ecotricity has investigated supplying 100 houses with an internet-connected grid energy storage system that will take the homes off the grid at peak times. Mobile phone network Ecotricity launched a mobile virtual network called Ecotalk in 2018; plans had been discussed by Vince in 2013. Money from customer's bills is used to buy land for nature conservation, in part through a partnership with the RSPB. Small turbine manufacture In May 2014, Ecotricity rescued Evance, a manufacturer of small (5 kW) wind turbines, from administration, saving the company's 29 jobs. Branded "Brit Wind" in January 2017 they announced one million pounds worth of sales to Japan as well as sales to France, Norway, Denmark, the US and Belgium. Political donations The company has donated to several political parties that support subsidies for renewable energy. In November 2013 it donated £20,000 to the Green Party. On 10 February 2015 Ecotricity announced that it would be donating £250,000 to the electoral fighting fund of the UK Labour Party. This decision alienated some of its customers, in particular supporters of the Green Party as they felt some Labour policies are at odds with Ecotricity's avowed green ethical stance.Ecotricity had already donated £120,000 to Labour in November 2014, including £20,000 to the local group in Stroud which was trying (unsuccessfully) to unseat Neil Carmichael, an opponent of wind farms in Gloucestershire. In the six months before the 2015 general election Ecotricity donated a total of £380,000 to Labour. The day after the election of 7 May 2015 the company donated £50,000 to the Liberal Democrats, including £20,000 to the group in the Kingston upon Thames constituency which had been lost by Ed Davey, the pro-renewables Secretary of State for Energy and Climate Change. Ecotricity donated £20,000 to Keir Starmer's 2020 Labour Party leadership election campaign. Grid-level storage At the end of 2017 Ecotricity was granted planning permission to build one of the UK's first grid scale battery storage projects on its Alveston site. The 10 megawatt project is intended to share the grid connection with the three new windmills there, providing the company with peak-shaving. Virtual power plant In May 2018 it was announced that Ecotricity would start building a Virtual power plant to more efficiently use and manage the electricity usage. Diamonds In October 2020, Vince announced the company would make lab grown diamonds using carbon dioxide captured from the air, water and power from their own green supply. 
See also Green electricity in the United Kingdom Wind power in the United Kingdom Energy policy of the United Kingdom Energy in the United Kingdom References External links Official website
kyoto protocol and government action
The Kyoto Protocol was an international treaty which extended the 1992 United Nations Framework Convention on Climate Change. A number of governments across the world took a variety of actions. Annex I In total, Annex I Parties managed a cut of 3.3% in greenhouse gas (GHG) emissions between 1990 and 2004 (UNFCCC, 2007, p. 11). In 2007, projections indicated rising emissions of 4.2% between 1990 and 2010. This projection assumed that no further mitigation action would be taken. The reduction in the 1990s was driven significantly by economic restructuring in the economies-in-transition (EITs. See Kyoto Protocol § Intergovernmental Emissions Trading for the list of EITs). Emission reductions in the EITs had little to do with climate change policy (Carbon Trust, 2009, p. 24). Some reductions in Annex I emissions have occurred due to policy measures, such as promoting energy efficiency (UNFCCC, 2007, p. 11). Australia On the change of government following the election in November 2007, Prime Minister Kevin Rudd signed the ratification immediately after assuming office on 3 December 2007, just before the meeting of the UN Framework Convention on Climate Change; it took effect in March 2008. Australia's target is to limit its emissions to 8% above their 1990 level over the 2008–2012 period, i.e., their average emissions over the 2008–2012 period should be kept below 108% of their 1990 level (IEA, 2005, p. 51). According to the Australian government, Australia should meet its Kyoto target (IEA, 2005, p. 56; DCCEE, 2010).When he was in the opposition, Rudd commissioned Ross Garnaut to report on the economic effects of reducing greenhouse gas emissions. The report was submitted to the Australian government on 30 September 2008. The policy of the Rudd government contrasts with that of the former Australian government, which refused to ratify the agreement on the ground that following the protocol would be costly. Policy Australia's position, under Prime Minister John Howard, was that it did not intend to ratify the treaty (IEA, 2005, p. 51). The justification for this was that: the treaty did not cover 70% of global emissions; developing countries are excluded from emission limitations; and the-then largest GHG emitter, the US, had not ratified the treaty.The Howard government did intend to meet its Kyoto target, but without ratification (IEA, 2005, p. 51). As part of the 2004 budget, A$1.8 billion was committed towards its climate change strategy. A$700 million was directed towards low-emission technologies (IEA, 2005, p. 56). The Howard government, along with the United States, agreed to sign the Asia Pacific Partnership on Clean Development and Climate at the ASEAN regional forum on 28 July 2005. Furthermore, the state of New South Wales (NSW) commenced the NSW greenhouse gas abatement scheme. This mandatory scheme of greenhouse gas emissions trading commenced on 1 January 2003 and is currently in trial by the state government in NSW alone. Notably, this scheme allows accredited certificate providers to trade emissions from households in the state. As of 2006, the scheme is still in place despite the outgoing Prime Minister's clear dismissal of emissions trading as a credible solution to climate change. Following the example of NSW, the national emissions trading scheme (NETS) has been established as an initiative of state and territory governments of Australia, all of which have Labor Party governments, except Western Australia. 
The purpose of NETS is to establish an intra-Australian carbon trading scheme to coordinate policy among regions. As the Constitution of Australia does not refer specifically to environmental matters (apart from water), the allocation of responsibility is to be resolved at a political level. In the later years of the Howard administration (1996–2007), the states governed by Labor took steps to establish a NETS (a) to take action in a field where there were few mandatory federal steps and (b) as a means of facilitating ratification of the Kyoto Protocol by the incoming Labor government. In May 2009, Kevin Rudd delayed and changed the carbon pollution reduction scheme: the scheme would begin in 2011/2012, a year later than initially scheduled (it had been scheduled to begin on 1 July 2010); there would be a one-year fixed price of A$10 per permit in 2011/2012 (previously, the price was to be under a price cap of A$40); there would be an unlimited amount of permits available from the government in the first year (previously, an estimated 300 million tons of carbon dioxide (CO2) was to be auctioned off); a higher percentage of permits would be handed out, rather than auctioned off (previously, 60% or 90% of permits were to be handed out); compensation would be cancelled in 2010/2011 and reduced in 2011/2012; households could reduce their carbon footprint by buying and retiring permits into an Australian carbon trust (previously, no such scheme was included); subject to an international agreement, Australia would commit to a reduction of 25% from the 2000 level by 2020 (previously, there was to be a reduction of 15%); and 5% of the 25% reduction could be achieved by government purchase of international offsets (previously, no such scheme was included).

Greenpeace
Greenpeace has called clause 3.7 of the Kyoto Protocol the "Australia clause" on the ground that it unfairly made Australia a major beneficiary. The clause allows Annex I countries with a high rate of land clearing in 1990 to set the level in that year as a base. Greenpeace argues that since Australia had an extremely high level of land clearing in 1990, Australia's "baseline" was unusually high compared to other countries.

Emissions
In 2002, Australia represented about 1.5% of global greenhouse gas (GHG) emissions (IEA, 2005, p. 51). Over the 1990–2002 period, Australia's gross emissions rose by 22%, which was surpassed by only four other International Energy Agency (IEA) members (IEA, 2005, p. 54). This was in large part due to economic growth. Net emissions (including changes in land-use and forestry) increased by 1.3% over this period. In 2005, Australia's GHG emissions made up 1.2% of the global total (MNP, 2007). Per-capita emissions are a country's total emissions divided by its population (Banuri et al., 1996, p. 95). In 2005, per capita emissions in Australia were 26.3 tons.

Canada
On 17 December 2002, Canada ratified the treaty, which came into force in February 2005, requiring it to reduce emissions to 6% below 1990 levels during the 2008–2012 commitment period (IEA, 2004, p. 52). Under Canada's Kyoto Protocol Implementation Act (KPIA), the National Round Table on the Environment and the Economy (NRTEE) is required to respond to the government's climate change plans (Canadian Government, 2010). In the assessment of NRTEE (2008), "Canada is not pursuing a policy objective of meeting the Kyoto Protocol emissions reductions targets. [...]
[The] projected emissions profile described in the 2008 [government plan] would leave Canada in non-compliance with the Kyoto Protocol." On 13 December 2011, a day after the end of the 2011 United Nations Climate Change Conference, Canada's environment minister, Peter Kent, announced that Canada would withdraw from the Kyoto Protocol.

Emissions
In 2001, Canadian emissions had grown by more than 20% above their 1990 level (IEA, 2004, p. 49). High population and economic growth, added to the expansion of CO2 emissions-intensive sectors such as oil sand production, were responsible for this growth in emissions. By 2004, CO2 emissions had risen to 27% above the level in 1990. In 2006 they were down to 21.7% above 1990 levels. In 2005, Canada's GHG emissions made up 2% of the global total (MNP, 2007). Per capita emissions in Canada were 23.2 tons.

Projections
In 2004, Canada's emission projections under a business-as-usual scenario (i.e., predicted emissions should policy not be changed) indicated a rise of 33% on the 1990 level by 2010 (IEA, 2004, p. 52). This is a gap of approximately 240 Mt between its target and projected emissions.

Politics
When the treaty was ratified in 2002, numerous polls showed support for the Kyoto Protocol at around 70%. Despite strong public support, there was still some opposition, particularly by the Canadian Alliance, a precursor to the governing Conservative Party, some business groups, and energy concerns, using arguments similar to those being voiced in the U.S. In particular, there was a fear that since U.S. companies would not be affected by the Kyoto Protocol, Canadian companies would be at a disadvantage. In 2005, a "war of words" was ongoing, primarily between Alberta, Canada's primary oil and gas producer, and the federal government. Between 1998 and 2004, Canada committed $3.7 billion towards investment in climate change activities (IEA, 2004, p. 52). The Climate Change Plan for Canada, released in November 2002, described priority areas for climate change policy. In January 2006, a Conservative minority government was elected under Stephen Harper, who had previously expressed opposition to Kyoto, and in particular to international emissions trading. Rona Ambrose, who replaced Stéphane Dion as the environment minister, has since endorsed and expressed interest in some types of emissions trading. On 25 April 2006, Ambrose announced that Canada would have no chance of meeting its targets under Kyoto, but would look to participate in the Asia-Pacific Partnership on Clean Development and Climate sponsored by the U.S. "We've been looking at the Asia-Pacific Partnership for a number of months now because the key principles around [it] are very much in line with where our government wants to go," Ambrose told reporters. On 2 May 2006, it was reported that the funding to meet the Kyoto standards had been cut while the Harper government developed a new plan to take its place. As the co-chair of the UN Climate Change Conference in Nairobi in November 2006, the Canadian government received criticism from environmental groups and other governments for its position. On 4 January 2007, Rona Ambrose moved from the Ministry of the Environment to become Minister of Intergovernmental Affairs. The environment portfolio went to John Baird, the former President of the Treasury Board.
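The roughly 240 Mt gap quoted in the Projections paragraph above can be reproduced with simple arithmetic. The Python sketch below assumes a 1990 Canadian baseline of about 600 Mt CO2e; that baseline is a round figure chosen purely for illustration and is not stated in the text.

```python
# Gap between projected business-as-usual emissions and a Kyoto target,
# both expressed as changes relative to a base-year level.
def kyoto_gap_mt(base_year_mt: float, target_change: float, projected_change: float) -> float:
    """target_change and projected_change are fractions relative to the base year,
    e.g. -0.06 for "6% below" and +0.33 for "33% above"."""
    target = base_year_mt * (1 + target_change)
    projected = base_year_mt * (1 + projected_change)
    return projected - target

# Assumed ~600 Mt CO2e baseline for 1990; target 6% below, projection 33% above
print(f"{kyoto_gap_mt(600, -0.06, 0.33):.0f} Mt")  # about 234 Mt, consistent with "approximately 240 Mt"
```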
The federal government has introduced legislation to set mandatory emissions targets for industry, but they will not take effect until 2012, with a benchmark date of 2006 as opposed to Kyoto's 1990. The government has since begun working with opposition parties to modify the legislation. A private member's bill was put forth by Pablo Rodriguez, Liberal, to force the government to "ensure that Canada meets its global climate change obligations under the Kyoto Protocol." With the support of the Liberals, the New Democratic Party and the Bloc Québécois, and with the current minority situation, the bill passed the House of Commons on 14 February 2007 with a vote of 161 to 113. The Senate passed the bill, and it received royal assent on 22 June 2007. However, the government, as promised, has largely ignored the bill, which was to force the government 60 days to form a detailed plan, citing economic reasons.In May 2007, the Friends of the Earth sued the federal government for failing to meet the Kyoto Protocol obligations to cut greenhouse gas emissions. The obligations were based on a clause in the Canadian Environmental Protection Act that requires Ottawa to "prevent air pollution that violates an international agreement binding on Canada". Canada's obligation to the treaty began in 2008. Regardless of the federal policy, some provinces are pursuing policies to restrain emissions, including Quebec, Ontario, British Columbia and Manitoba as part of the Western Climate Initiative. Since 2003 Alberta operates a carbon offset program.Environmental groups Environmental groups in Canada are working together to demand that Canadian politicians take the threat of climate change seriously and make the necessary changes to ensure the safety and health of future generations. Participating groups have created a petition called KYOTOplus, on which signatories commit to the following acts: • set a national target to cut greenhouse gas emissions at least 25 per cent from 1990 levels by 2020; • implement an effective national plan to reach this target and help developing countries adapt and build low-carbon economies; and • adopt a strengthened second phase of the Kyoto Protocol at the United Nations climate change conference at Copenhagen, Denmark in December 2009. KYOTOplus is a national, non-partisan, petition-centered campaign for urgent federal government action on climate change. There are over fifty partner organizations, including: Climate Action Network Canada, Sierra Club Canada, Sierra Youth Coalition, Oxfam Canada, the Canadian Youth Climate Coalition, Greenpeace Canada, KAIROS: Canadian Ecumenical Justice Initiatives and the David Suzuki Foundation. Withdrawal of Canada On 13 December 2011, Canada's environment minister, Peter Kent, announced that Canada would withdraw from the Kyoto Protocol. The announcement was a day after the end of the 2011 United Nations Climate Change Conference (the 17th Conference of the Parties, or "COP 17"). At COP 17, the representatives of the Canadian government gave their support to a new international climate change agreement that "includes commitments from all major emitters.": 1  Canadian representatives also stated that "the Kyoto Protocol is not where the solution lies – it is an agreement that covers fewer than 30 per cent of global emissions (...).": 2 The Canadian government invoked Canada's legal right to formally withdraw from the Kyoto Protocol on 12 December 2011. 
Canada was committed to cutting its greenhouse emissions to 6% below 1990 levels by 2012, but in 2009 emissions were 17% higher than in 1990. Environment minister Peter Kent cited Canada's liability to "enormous financial penalties" under the treaty unless it withdrew. He also suggested that the recently signed Durban agreement might provide an alternative way forward.

Commentary
Christiana Figueres, Executive Secretary of the UNFCCC, said that she regretted Canada's decision to withdraw from the Kyoto treaty, and that "[whether] or not Canada is a Party to the Kyoto Protocol, it has a legal obligation under the [UNFCCC] to reduce its emissions, and a moral obligation to itself and future generations to lead in the global effort." Canada's decision received a mostly negative response from representatives of other ratifying countries. A spokesman for France's foreign ministry called the move "bad news for the fight against climate change." Japan's environment minister, Goshi Hosono, urged Canada to stay in the protocol. Some countries, including India, were worried that Canada's decision might jeopardise future conferences. A spokesperson for the island nation of Tuvalu, significantly threatened by rising sea levels, accused Canada of an "act of sabotage" against his country. Australian government minister Greg Combet, however, defended the decision, saying that it did not mean Canada would not continue to "play its part in global efforts to tackle climate change". China called Canada's decision to withdraw from the Kyoto Protocol "regrettable" and said that it went against the efforts of the international community. Canada's move came days after climate-change negotiators met to hammer out a global deal in Durban, South Africa. Foreign Ministry spokesman Liu Weimin expressed China's dismay at the news that Canada had pulled out of the Kyoto Protocol, noting that the timing was particularly bad because negotiators at the just-concluded Durban conference had made what he described as important progress on the issue of the Kyoto Protocol's second commitment period. The UK's Guardian newspaper reported on Canada's decision to withdraw from the Kyoto treaty. According to the Guardian, "Canada's inaction was blamed by some on its desire to protect the lucrative but highly polluting exploitation of tar sands, the second biggest oil reserve in the world."

Europe
European Union
On 31 May 2002, all fifteen then-members of the European Union deposited the relevant ratification paperwork at the UN. Under the Kyoto Protocol, the 15 member countries that were Member States of the EU when the Protocol was agreed (EU-15) are committed to reducing their collective GHG emissions in the period 2008–12 to 8% below 1990 levels (EEA, 2009, p. 9). All but one EU Member State (Austria) anticipate that they will meet their commitments under the Kyoto Protocol (EEA, 2009, pp. 11–12). Denmark has committed itself to reducing its emissions by 21%. On 10 January 2007, the European Commission announced plans for a European Union energy policy that included a unilateral 20% reduction in GHG emissions by 2020. The EU has consistently been one of the major nominal supporters of the Kyoto Protocol, negotiating hard to get wavering countries on board. In December 2002, the EU created an emissions trading system (EU ETS) in an effort to meet these tough targets. Quotas were introduced in six key industries: energy, steel, cement, glass, brick making, and paper/cardboard.
There are also fines for member nations that fail to meet their obligations, starting at €40 per ton of carbon dioxide in 2005 and rising to €100 per ton in 2008. The position of the EU is not without controversy in Protocol negotiations, however. One criticism is that, rather than reducing by 8%, all the EU member countries should cut 15%, as the EU had insisted on a uniform target of 15% for other developed countries during the negotiations while allowing itself to share out a large reduction achieved in the former East Germany to meet the 15% goal for the entire EU. According to Aldy et al. (2003, p. 7), the "hot air" in the German and UK targets allows the EU to meet its Kyoto target at low cost. Both the EU (as the European Community) and its member states are signatories to the Kyoto treaty. Greece, however, was excluded from the Kyoto Protocol on Earth Day (22 April 2008) for failing to fulfil its commitment to create adequate mechanisms for monitoring and reporting emissions, which is the minimum obligation, and for delivering false reports, having no other data to report. A United Nations committee later decided to reinstate Greece in the emissions-trading system of the Kyoto Protocol after a seven-month suspension (on 15 November).

Emissions
In 2005, the EU-27 made up 11% of total global GHG emissions (MNP, 2007). Per capita emissions were 10.6 tons.

Transport
CO2 emissions in the EU grew by 32% between 1990 and 2004. The share of transport in CO2 emissions was 21% in 1990, but by 2004 this had grown to 28%. In 2017, 27% of EU-28 greenhouse gas emissions came from transport, with 5% of these coming from international aviation and maritime emissions; this was a 2.2% overall increase in the sector from the year before.

France
France's Kyoto commitment is to cap its emissions at their 1990 levels (Stern, 2007, p. 456). The country has a national objective to reduce emissions by 25% from their 1990 levels by 2020, and a long-term target to reduce emissions 75–80% by 2050. In 2002, France's total GHG emissions were roughly equivalent to 1990 levels, and 6.4% below 1990 levels when accounting for sink enhancements, as allowed under the Protocol (IEA, 2004, p. 58). In 2001, France's per capita emissions were 6.32 tCO2; only five other IEA countries had lower levels (p. 59). France's CO2 intensity of GDP (energy-related CO2 emissions per unit of gross domestic product (GDP)) was the fifth-lowest among all IEA countries. In 2004, France shut down its last coal mine, and it now gets 80% of its electricity from nuclear power and therefore has relatively low CO2 emissions, except for its transport sector.

Germany
Germany has taken on a target under the Kyoto Protocol to reduce its GHG emissions by 21% compared with the base year 1990 (and in some cases, 1995) (IEA, 2007, pp. 44–45). Through 2004, Germany reduced its total GHG emissions by 17.4% (p. 45). Including the effects of land-use change increases this to 18.5%. The two main approaches Germany has used to meet its Kyoto target are reductions from the EU ETS, and reductions from the transport, household, and small business sectors (p. 51). Germany's progress towards its Kyoto target benefits from its reunification in 1990 (Liverman, 2008, p. 12). This is because of the reduction in emissions in East Germany after the fall of the Berlin Wall. CO2 emissions in Germany fell 12% between 1990 and 1995 (Barrett, 1998, p. 34).
Germany reduced its greenhouse gas emissions by 22.4% between 1990 and 2008. On 28 June 2006, the German government announced that it would exempt its coal industry from requirements under the E.U. internal emission trading system. Claudia Kemfert, an energy professor at the German Institute for Economic Research in Berlin, said, "For all its support for a clean environment and the Kyoto Protocol, the cabinet decision is very disappointing. The energy lobbies have played a big role in this decision." However, Germany's voluntary commitment to reduce CO2 emissions by 21% from the 1990 level has practically been met, because emissions have already been reduced by 19%. Germany is thus contributing 75% of the 8% reduction promised by the E.U. United Kingdom According to the UK government, projections indicate that the UK's GHG emissions will fall about 23% below base year levels by 2010 (DECC, 2009, p. 3). The UK's Kyoto target of a 12.5% reduction in emissions on their 1990 level (Stern, 2007, p. 456) benefits from the country's relatively high emissions in that year (Liverman, 2008, p. 12). Compared with their 1990 level, UK CO2 emissions in 1995 were lower by 7%. This was despite the fact that the UK had not adopted a radical policy to reduce emissions (Barrett, 1998, p. 34). Since 1990, the UK has privatized its energy-consuming industries, which has helped to increase their energy efficiency (US Senate, 2005, p. 218). The UK has also liberalized its electricity and gas systems, resulting in a change from coal to gas (the "dash for gas"), which has lowered emissions. It is estimated that these changes have contributed about half of the total observed reductions in UK CO2 emissions. The energy policy of the United Kingdom fully endorses goals for carbon dioxide emissions reduction and has committed to proportionate reductions in national emissions on a phased basis. The U.K. is a signatory to the Kyoto Protocol. On 13 March 2007, a draft Climate Change Bill was published after cross-party pressure over several years, led by environmental groups. Informed by the Energy White Paper 2003, the bill aimed to achieve a mandatory 60% reduction in carbon emissions from the 1990 level by 2050, with an intermediate target of between 26% and 32% by 2020. On 26 November 2008, the Climate Change Act became law, with a target of an 80% reduction relative to 1990. The U.K. was the first country to pass a law with such a long-range and significant carbon reduction target. The U.K. currently appears on course to meet its Kyoto limitation for the basket of greenhouse gases, assuming the government is able to curb CO₂ emissions between 2007–2008 and 2012. Although overall greenhouse gas emissions in the UK have fallen, annual net carbon dioxide emissions have increased by about 2% since the Labour Party came to power in 1997. As a result, it now seems highly unlikely that the government will be able to honour its pledge to cut carbon dioxide emissions by 20% from the 1990 level by 2010, unless immediate and drastic action is taken following the passage of the Climate Change Bill. Norway Norway's commitment under the Kyoto Protocol is to restrict its increase in GHGs to 1% above the 1990 level by the commitment period 2008–2012 (IEA, 2005, p. 46). In 2003, total emissions were 9% above the 1990 level. 99% of Norway's electricity comes from CO2-free hydropower. Oil and gas extraction activities accounted for 74% of the total increase in CO2 over the period 1990–2003. The Norwegian government (2009, p. 
11) projected a rise in GHG emissions of 15% from 1990 to 2010. Measures and policies adopted after autumn 2008 are not included in the baseline scenario (i.e., the predicted emissions that would occur without additional policy measures) for this projection (p. 55). Between 1990 and 2007, Norway's greenhouse gas emissions increased by 12%. As well as directly reducing their own greenhouse gas emissions, Norway's idea for carbon neutrality is to finance reforestation in China, a legal provision of the Kyoto protocol. Japan Japan ratified the Kyoto Protocol in June 2002, and has committed to reducing its GHG emissions by 6% below their 1990 levels (IEA, 2008, p. 47). Estimates for 2005 showed that Japan's emissions were 7.8% higher than in the base year. To meet its Kyoto target, the government aims for a 0.6% reduction in domestic GHG emissions compared with the base year. It also aims to meet part of its target through a forest sink of 13 million tonnes of carbon, which is equivalent to a 3.8% cut. Another reduction of 1.6% is aimed for using the Kyoto flexible mechanisms. According to IEA (2008, p. 45), Japan is a world leader in the field of sustainable energy policies. The legislation guiding Japan's efforts to reduce emissions is the Kyoto Protocol Target Achievement Plan, passed in 2005 and later amended (p. 47). This Plan includes about 60 policies and measures. Most of these policies and measures are related to improved energy efficiency. When measured using market exchange rates, Japan's energy intensity in terms of total primary energy supply per unit of GDP is the lowest among IEA countries (p. 53). Measured in terms of purchasing power parity, its energy intensity is one of the lowest. Emissions In 2005, Japan's energy-related CO2 per capita emissions were 9.5 metric tons per head of population (World Bank, 2010, p. 362). Japan's total energy-related CO2 emissions made up 4.57% of global emissions in this year. Over the period 1850–2005, Japan's cumulative energy-related CO2 emissions were 46.1 billion metric tons. New Zealand New Zealand signed the Kyoto Protocol to the UNFCCC on 22 May 1998 and ratified it on 19 December 2002. New Zealand's target is to limit net greenhouse gas emissions for the five-year 2008–2012 commitment period to five times the 1990 gross volume of GHG emissions. New Zealand may meet this target by either reducing emissions or by obtaining carbon credits from the international market or from domestic carbon sinks. The credits may be any of the Kyoto units; Assigned amount units (AAU), removal units (RMU), Emission Reduction Units (ERU) and Certified Emission Reduction (CER) units. In April 2012, the projection of New Zealand's net Kyoto position was a surplus of 23.1 million emissions units valued at NZ$189 million, based on an international carbon price of 5.03 Euro per tonne. On 9 November 2012, the New Zealand Government announced it would make climate pledges for the period from 2013 to 2020 under the UNFCCC process instead of adopting a binding limit under a second commitment period of the Kyoto Protocol.At the 2012 United Nations Climate Change Conference New Zealand was awarded two 'Fossil of the Day' awards for "actively hampering international progress". The New Zealand Youth Delegation heavily criticised the New Zealand government, saying New Zealand's decision not to sign up for a second commitment period under the Kyoto Protocol was "embarrassing, short-sighted and irresponsible". 
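The projected New Zealand net position quoted above (a surplus of 23.1 million units valued at NZ$189 million at a carbon price of 5.03 euro per tonne) is a straightforward valuation. The short Python sketch below only reproduces that arithmetic; the euro-to-New Zealand-dollar exchange rate is not stated in the text, so the rate used here is a hypothetical value back-calculated from the two quoted figures rather than taken from any source.

```python
# Reproduces the valuation arithmetic behind New Zealand's projected Kyoto surplus.
# The EUR -> NZD exchange rate below is an assumed, back-calculated figure, not a
# value taken from the text or any official source.

surplus_units = 23.1e6        # projected surplus of emissions units (April 2012)
price_eur_per_unit = 5.03     # international carbon price quoted above, EUR per tonne
assumed_eur_to_nzd = 1.63     # hypothetical exchange rate implied by the NZ$189 million figure

value_eur = surplus_units * price_eur_per_unit
value_nzd = value_eur * assumed_eur_to_nzd

print(f"Surplus value: EUR {value_eur / 1e6:.1f} million")  # about 116 million euro
print(f"Surplus value: NZD {value_nzd / 1e6:.1f} million")  # about NZ$189 million
```

Under that assumed exchange rate the two quoted figures are consistent with each other; any small discrepancy is rounding.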
Russia Under the Kyoto Protocol, the Russian Federation committed itself to keeping its GHG emissions at the base year level during the first Kyoto commitment period of 2008–2012 (UNFCCC, 2009, p. 3). UNFCCC (2009, p. 11) reported that Russian GHG emissions were projected to decline by 28% relative to the base year level by 2010. The process of economic transition in the Russian Federation was accompanied by a sharp decline in its GDP in the 1990s (p. 4). Since 1998, the Russian Federation has experienced strong economic growth. In the period 1990–2006, emissions decreased by 33%. The divergence between GDP and emissions trends was mainly driven by: shifts in the structure of the economy; a reduced share of oil and coal in the primary energy supply and an increase in the share of natural gas and nuclear power; a decline in the transport and agriculture sectors; a decrease in population; and an increase in energy efficiency. Russia accounts for about two-thirds of the expected emission savings from Joint Implementation (JI) projects by 2012 (Carbon Trust, 2009, p. 21). These savings are projected to amount to 190 megatonnes of carbon dioxide equivalent (CO2-eq) over the 2008–2012 period (p. 23). Politics The Russian government's interest in acceding to the Kyoto Protocol was associated with the G-8 meeting in Genoa in 2001, where the heads of state of the eight countries had a very emotional discussion about the need to ratify the Kyoto Protocol. Russian President Vladimir Putin, who was neutral in the discussion, proposed organising a conference where politicians and scientific representatives could discuss all issues related to the ratification of the Kyoto Protocol. This proposal was supported unanimously, and in 2003 Russia hosted the World Conference on Climate Change. Since 2001, Vladimir Putin had received a large number of appeals from the heads of foreign states about the need for Russia to ratify the Kyoto Protocol, so he instructed Andrey Illarionov to find out whether ratification of the Kyoto Protocol was in Russia's national interest. Not fully trusting the experts of the Intergovernmental Panel on Climate Change, Andrey Illarionov decided to approach the President of the Russian Academy of Sciences, Yury Osipov, and the climatologist Yuri Izrael, with a request to involve leading Russian scientists in the discussion of this issue. On May 17, 2004, Yury Osipov outlined his position on the adoption of the Kyoto Protocol to Vladimir Putin. Yury Osipov noted that during the discussion the scientists had formed the opinion that the Kyoto Protocol does not have a scientific basis and is not effective for achieving the final goal of the UN Framework Convention on Climate Change, and that if Russia were to ratify the Kyoto Protocol, it would be impossible for its economy to double the country's GDP. Despite the scientists' negative assessment, Vladimir Putin approved the treaty on 4 November 2004, and Russia officially notified the United Nations of its ratification on 18 November 2004. The issue of Russian ratification was particularly closely watched in the international community, as the accord was brought into force 90 days after Russian ratification (16 February 2005). President Putin had earlier decided in favor of the protocol in September 2004, along with the Russian cabinet, against the opinion of the Russian Academy of Sciences, of the Ministry for Industry and Energy, and of the then-president's economic adviser, Andrey Illarionov, and in exchange for the EU's support for Russia's admission into the WTO. 
As anticipated, after this, ratification by the lower house (22 October 2004) and upper house of parliament did not encounter any obstacles. There is an ongoing scientific debate on whether Russia will actually gain from selling credits for unused AAUs. United States The United States has not ratified the Kyoto Protocol (IEA, 2007, p. 90). Doing so would have committed it to reducing GHG emissions by 7% below 1990 levels by 2012. Emissions of GHGs in the US increased by 16% between 1990 and 2005 (IEA, 2007, p. 83). In this period, the most substantial increase in volume was in emissions from energy use, followed by industrial processes. In 2002, the US government set a goal to reduce the GHG emissions of the US economy per unit of economic output (the emissions intensity of the economy) (IEA, 2007, p. 87). The stated goal is to reduce the GHG intensity of the US economy by 18% by 2012. To achieve this, policy has focused on supporting energy research and development, including support for carbon capture and storage (CCS), renewables, methane capture and use, and nuclear power. America's Climate Security Act of 2007, more commonly referred to in the U.S. as the "Cap and Trade Bill", was proposed for greater U.S. alignment with the Kyoto standards and goals. Emissions Between 2001 and 2007, growth in US CO2 emissions was only 3%, comparable to that of IEA Europe, and lower than that of a number of other countries, some of which are parties to the Kyoto Protocol (IEA, 2007, p. 90). In 2005, the US made up 16% of global GHG emissions, with per capita emissions of 24.1 tons of GHG (MNP, 2007). Politics The United States (US), although a signatory to the Kyoto Protocol, has neither ratified nor withdrawn from the Protocol. The signature alone is merely symbolic, as the Kyoto Protocol is non-binding on the United States unless ratified. Clinton administration On 25 July 1997, before the Kyoto Protocol was finalized (although it had been fully negotiated, and a penultimate draft was finished), the US Senate unanimously passed by a 95–0 vote the Byrd–Hagel Resolution (S. Res. 98), which stated the sense of the Senate was that the United States should not be a signatory to any protocol that did not include binding targets and timetables for developing nations as well as industrialized nations or "would result in serious harm to the economy of the United States". On 12 November 1998, Vice President Al Gore symbolically signed the protocol. Both Gore and Senator Joseph Lieberman indicated that the protocol would not be acted upon in the Senate until there was participation by the developing nations. The Clinton Administration never submitted the protocol to the Senate for ratification. The Clinton Administration released an economic analysis in July 1998, prepared by the Council of Economic Advisors, which concluded that with emissions trading among the Annex B/Annex I countries, and participation of key developing countries in the "Clean Development Mechanism" (which grants the latter business-as-usual emissions rates through 2012), the costs of implementing the Kyoto Protocol could be reduced as much as 60% from many estimates. Estimates of the cost of achieving the Kyoto Protocol carbon reduction targets in the United States, as compared by the Energy Information Administration (EIA), predicted losses to GDP of between 1.0% and 4.2% by 2010, reducing to between 0.5% and 2.0% by 2020. 
Some of these estimates assumed that action had been taken by 1998, and that costs would be increased by delays in starting action. Bush administration Under the Presidency of George W. Bush, the US government recognized climate change as a serious environmental challenge (IEA, 2007, p. 87). The policy of the Bush administration was to reduce the GHG emissions of the US economy per unit of economic output (the emissions intensity of the economy). This policy allowed for absolute increases in emissions. The Bush administration viewed a policy to reduce absolute emissions as incompatible with continued economic growth. A number of states set state-level GHG targets, despite the absence of a federal-level target. President George W. Bush did not submit the treaty for Senate ratification based on the exemption granted to China (now the world's largest gross emitter of carbon dioxide, although its per capita emissions are low). Bush opposed the treaty because of the strain he believed it would put on the economy; he emphasized the uncertainties that he believed were present in the scientific evidence. Furthermore, the U.S. was concerned about the treaty's broader exemptions. For example, the U.S. did not support the split between Annex I countries and others. At the G8 meeting in June 2005, administration officials expressed a desire for "practical commitments industrialized countries can meet without damaging their economies". According to those same officials, the United States was on track to fulfil its pledge to reduce its carbon intensity 18% by 2012. In 2002, the US National Environmental Trust labelled carbon intensity "a bookkeeping trick which allows the administration to do nothing about global warming while unsafe levels of emissions continue to rise." The United States has signed the Asia Pacific Partnership on Clean Development and Climate, a pact that allows those countries to set their goals for reducing greenhouse gas emissions individually, but with no enforcement mechanism. Supporters of the pact see it as complementing the Kyoto Protocol while being more flexible. The Administration's position was not uniformly accepted in the US. For example, economist Paul Krugman noted that the targeted 18% reduction in carbon intensity would still amount to an increase in overall emissions. The White House also came under criticism for downplaying reports that link human activity and greenhouse gas emissions to climate change, and over charges, which the White House denies, that a White House official, the former oil industry advocate and later Exxon Mobil officer Philip Cooney, watered down descriptions of climate research that had already been approved by government scientists. Critics point to the Bush administration's close ties to the oil and gas industries. In June 2005, State Department papers showed the administration thanking Exxon executives for the company's "active involvement" in helping to determine climate change policy, including the US stance on Kyoto. Input from the business lobby group Global Climate Coalition was also a factor. In 2002, Congressional researchers who examined the legal status of the Protocol advised that signature of the UNFCCC imposes an obligation to refrain from undermining the Protocol's object and purpose, and that while the President probably cannot implement the Protocol alone, Congress can create compatible laws on its own initiative. 
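Krugman's observation above, that an 18% cut in carbon intensity can still mean rising absolute emissions, is easy to verify with simple arithmetic. The sketch below is purely illustrative: the 18% intensity target comes from the text, while the assumed GDP growth rate and the ten-year horizon are hypothetical inputs chosen only to show the mechanism, not figures from any official projection.

```python
# Illustration of an intensity target versus absolute emissions. The 18% intensity
# reduction is from the text above; the GDP growth rate and horizon are assumptions.

def absolute_emissions_change(intensity_cut, annual_gdp_growth, years):
    """Fractional change in absolute emissions when emissions per unit of GDP
    fall by `intensity_cut` while GDP compounds at `annual_gdp_growth`."""
    gdp_factor = (1.0 + annual_gdp_growth) ** years
    return (1.0 - intensity_cut) * gdp_factor - 1.0

change = absolute_emissions_change(intensity_cut=0.18, annual_gdp_growth=0.03, years=10)
print(f"Change in absolute emissions: {change:+.1%}")  # roughly +10% under these assumptions
```

Under these assumed numbers, absolute emissions still rise by about 10 percent even though the intensity target is met, which is exactly the distinction critics of intensity-only targets were drawing.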
Obama administration President Barack Obama did not take any action with the senate that would change the position of the United States towards this protocol. When Obama was in Turkey in April 2009, he said that "it doesn't make sense for the United States to sign [the Kyoto Protocol] because [it] is about to end". At this time, two years and eleven months remained from the four-year commitment period. States and local governments The Framework Convention on Climate Change is a treaty negotiated between countries at the UN; thus individual states are not free to participate independently within this Protocol to the treaty. Nonetheless, several separate initiatives have started at the level of state or city. Eight Northeastern US states created the Regional Greenhouse Gas Initiative (RGGI), a state level emissions capping and trading program, using their own independently-developed mechanisms. Their first allowances were auctioned in November 2008. Participating states: Maine, New Hampshire, Vermont, Connecticut, New York, New Jersey, Delaware, Massachusetts, and Maryland (these states represent over 46 million people, 20% of the US population). Observer states and regions: Pennsylvania, District of Columbia, Rhode Island.On 27 September 2006, California Governor Arnold Schwarzenegger signed into law the bill AB 32, also known as the Global Warming Solutions Act, establishing a timetable to reduce the state's greenhouse-gas emissions, which rank at 12th-largest in the world, by 25% by the year 2020. This law effectively puts California in line with the Kyoto limitations, but at a date later than the 2008–2012 Kyoto commitment period. Many of the features of the Californian system are similar to the Kyoto mechanisms, although the scope and targets are different. The parties in the Western Climate Initiative expect to be compatible with some or all of the Californian model. As of 14 June 2009, 944 US cities in 50 states, the District of Columbia and Puerto Rico, representing over 80 million Americans support Kyoto after Mayor Greg Nickels of Seattle started a nationwide effort to get cities to agree to the protocol. On 29 October 2007, it was reported that Seattle met their target reduction in 2005, reducing their greenhouse gas emissions by 8 percent since 1990. Large participating cities: Albany; Albuquerque; Alexandria; Ann Arbor; Arlington; Atlanta; Austin; Baltimore; Berkeley; Boston; Charleston;Chattanooga; Chicago; Cleveland; Dallas; Denver; Des Moines; Erie; Fayetteville; Hartford; Honolulu; Indianapolis; Jersey City; Lansing; Las Vegas; Lexington; Lincoln; Little Rock; Los Angeles; Louisville; Madison; Miami; Milwaukee; Minneapolis; Nashville; New Orleans; New York City; Oakland; Omaha; Orlando; Pasadena; Philadelphia; Phoenix; Pittsburgh; Portland; Providence; Richmond; Sacramento; Salt Lake City; San Antonio; San Francisco; San Jose; Santa Ana; Santa Fe; Seattle; St. Louis; Tacoma; Tallahassee; Tampa; Topeka; Tulsa; Virginia Beach; Washington, D.C.; West Palm Beach; Wilmington; Wilmington. There is a full list of cities and mayors. Non-Annex I UNFCCC (2005) compiled and synthesized information reported to it by non-Annex I Parties. Most reporting non-Annex I Parties belonged in the low-income group, with very few classified as middle-income (p. 4). Most Parties included information on policies relating to sustainable development. Sustainable development priorities mentioned by non-Annex I Parties included poverty alleviation and access to basic education and health care (p. 6). 
Many non-Annex I Parties are making efforts to amend and update their environmental legislation to include global concerns such as climate change (p. 7). A few Parties, e.g., South Africa and Iran, stated their concern over how efforts to reduce emissions could affect their economies. The economies of these countries are highly dependent on income generated from the production, processing, and export of fossil fuels. Emissions GHG emissions, excluding land use change and forestry (LUCF), reported by 122 non-Annex I Parties for the year 1994 or the closest year reported, totalled 11.7 billion tonnes (billion = 1,000,000,000) of CO2-eq. CO2 was the largest proportion of emissions (63%), followed by methane (26%) and nitrous oxide (N2O) (11%). The energy sector was the largest source of emissions for 70 Parties, whereas for 45 Parties the agriculture sector was the largest. Per capita emissions (in tonnes of CO2-eq, excluding LUCF) averaged 2.8 tonnes for the 122 non-Annex I Parties. The Africa region's aggregate emissions were 1.6 billion tonnes, with per capita emissions of 2.4 tonnes. The Asia and Pacific region's aggregate emissions were 7.9 billion tonnes, with per capita emissions of 2.6 tonnes. The Latin America and Caribbean region's aggregate emissions were 2 billion tonnes, with per capita emissions of 4.6 tonnes. The "other" region includes Albania, Armenia, Azerbaijan, Georgia, Malta, Moldova, and Macedonia. Their aggregate emissions were 0.1 billion tonnes, with per capita emissions of 5.1 tonnes. Parties reported a high level of uncertainty in LUCF emissions, but in aggregate there appeared to be only a small difference of 1.7% with and without LUCF. With LUCF, emissions were 11.9 billion tonnes; without LUCF, total aggregate emissions were 11.7 billion tonnes. Brazil Brazil has a national objective to increase the share of alternative renewable energy sources (biomass, wind and small hydropower) to 10% by 2030. It also has programmes to protect public forests from deforestation (Stern, 2007, p. 456). People's Republic of China China has a number of domestic policy measures that affect its GHG emissions (Jones et al., 2008, p. 26). These include a target to reduce the energy intensity of its GDP by 20% during the 2005–10 period. China plans to expand renewable energy generation to 15% of total capacity by 2020 (Wang et al., p. 86). Other policies include (Jones et al., 2008, p. 26): support for research and development; reduced indirect taxation on renewable electricity generation; investment subsidies, energy efficiency standards, and the closure of the most energy-inefficient state-owned enterprises. From 1995 to 2004, China's energy efficiency efforts reduced its energy intensity by 30% (Wang et al., 2010, p. 87). From 2006 to 2009, China achieved a 14.4% reduction in energy intensity. Renewables account for 8% of China's energy and 17% of its electricity. In response to the financial crisis, China implemented one of the world's largest stimulus programmes in efficient and clean energy (p. 85). Emissions In 2005, China made up 17% of global GHG emissions, with per capita emissions of 5.8 tons of GHG per head (MNP, 2007). Another way of measuring GHG emissions is to consider the cumulative emissions that a country has emitted over time (IEA, 2007b, p. 199). Over a long time period, cumulative emissions provide an indication of a country's total contribution to GHG concentrations in the atmosphere. 
Measured over the time period 1900–2005, China's cumulative energy-related CO2 emissions made up 8% of the global total (IEA, 2007b, p. 201). Clean Development Mechanism (CDM) A report by the Carbon Trust (2009) assessed the use of the CDM in China. The CDM has been used to finance projects in China for renewable energy and HFC-23 reductions (HFCs are powerful greenhouse gases). For renewables, the CDM was judged to have helped to stimulate wind and small hydro power projects. Critics have argued that these projects would generally have taken place without the CDM (Carbon Trust, 2009, p. 56). India India signed and ratified the Protocol in August 2002. Since India is exempted from the framework of the treaty, it is expected to gain from the protocol in terms of transfer of technology and related foreign investments. At the G8 meeting in June 2005, Indian Prime Minister Manmohan Singh pointed out that the per-capita emission rates of the developing countries are a tiny fraction of those in the developed world. Following the principle of common but differentiated responsibility, India maintains that the major responsibility for curbing emissions rests with the developed countries, which have accumulated emissions over a long period of time. However, the U.S. and other Western nations assert that India, along with China, will account for most of the emissions in the coming decades, owing to their rapid industrialization and economic growth. Policies in India related to greenhouse gas emissions have included (Stern, 2007, p. 456; Jones et al., 2008, p. 26): the 11th Five Year Plan, which contains mandatory and voluntary measures to increase efficiency in power generation and distribution; increased use of nuclear power and renewable energy; a target to increase energy efficiency by 20% by 2016–17; expanded electricity supply to villages; policies designed to increase tree and forest cover; and building codes designed to reduce energy consumption. Emissions In 2005, India accounted for 5% of global GHG emissions, with per capita emissions of 2.1 tons of GHG per head of population (MNP, 2007). Over the time period 1900–2005, India's contribution to the global total of cumulative energy-related CO2 emissions was 2% (IEA, 2007b, p. 201). Pakistan Although the Minister of State for Environment Malik Min Aslam was at first not very receptive, he subsequently convinced the Shoukat Aziz cabinet to ratify the Protocol. The decision was taken in 2001 but, due to international circumstances, it was announced in Argentina in 2004 and accepted in 2005, opening the way for the creation of a policy framework. On 11 January 2005, Pakistan submitted its instruments of accession to the Kyoto Protocol. The Ministry of Environment was assigned the task of acting as the designated national authority (DNA). According to a news story by Khan (2009), it was expected that the Protocol would help Pakistan lower its dependence on fossil fuels through renewable energy projects. Pakistan had a per capita income of US$492 in 2002–2003, and is a low-income country (Pakistan government, 2003, p. 15). The Pakistan government is concentrating on reducing the vulnerability of the country to current climatic events (p. 17). Though Pakistan is a developing country, the government is taking various steps to lower pollution. CDM In February 2006, the national CDM operational strategy was approved, and on 27 April 2006, the first CDM project was approved by the DNA. 
It was a project to reduce large N2O emissions from nitric acid production (investor: Mitsubishi, Japan), with an estimated annual production of 1 million CERs. Finally, in November 2006, the first CDM project was registered with the United Nations Framework Convention on Climate Change (UNFCCC). Pakistan has specified preferences for CDM projects, including (Pakistan government, 2006, pp. 3–4): alternative and renewable energy; energy efficiency; fossil fuel co-generation (co-generation is the use of waste heat from thermal electricity-generation plants (Verbruggen, 2007)); land use, land-use change, and forestry, e.g., biodiversity protection; and waste management, e.g., reducing GHG emissions from latrines and animal waste (EcoSecurities, 2007, p. 72). So far, 23 CDM projects have been approved by the Pakistan government (n.d.). Emissions Over the period from July 1993 to June 1994, Pakistan's energy sector was by far the highest contributor to CO2 emissions, with a share of 81% of total CO2 emissions (Pakistan government, 2003, p. 16). Pakistan's energy-related CO2 emissions rose by 94.1% between 1990 and 2005 (World Bank, 2010, p. 362). Pakistan's per capita emissions in 2005 were 0.8 tCO2 per head (p. 362). In 2005, Pakistan contributed 0.45% of the global total in energy-related CO2 emissions. Pakistan's cumulative emissions over the period 1850–2005 were 2.4 billion metric tons. Cumulative emissions before 1971 are based on data for East and West Pakistan. Asia Pacific Partnership on Clean Development and Climate The Asia-Pacific Partnership for Clean Development and Climate (APP) is a US-led effort to accelerate the voluntary development and deployment of clean energy technologies (UNEP, 2007, p. 257). The purpose of the Partnership is to address the issues of energy security, air pollution, and climate change (IEA, 2007, p. 51). The partner countries are Australia, Canada, China, India, Japan, Korea, and the United States (APP, n.d., p. 1). According to the APP (n.d.), the APP contributes to Partners' efforts under the UNFCCC, while "complementing" the Kyoto Protocol. External links The UNFCCC website contains national communications submitted by UNFCCC Parties on their current climate change policies, and in-depth reviews of Annex I country submissions. The International Energy Agency website contains reviews of the energy policies in IEA member countries.
switchgear
In an electric power system, a switchgear is composed of electrical disconnect switches, fuses or circuit breakers used to control, protect and isolate electrical equipment. Switchgear is used both to de-energize equipment to allow work to be done and to clear faults downstream. This type of equipment is directly linked to the reliability of the electricity supply. The earliest central power stations used simple open knife switches, mounted on insulating panels of marble or asbestos. Power levels and voltages rapidly escalated, making opening manually operated switches too dangerous for anything other than isolation of a de-energized circuit. Oil-filled switchgear equipment allows arc energy to be contained and safely controlled. By the early 20th century, a switchgear line-up would be a metal-enclosed structure with electrically operated switching elements using oil circuit breakers. Today, oil-filled equipment has largely been replaced by air-blast, vacuum, or SF6 equipment, allowing large currents and power levels to be safely controlled by automatic equipment. High-voltage switchgear was invented at the end of the 19th century for operating motors and other electric machines. The technology has been improved over time and can now be used with voltages up to 1,100 kV.Typically, switchgear in substations is located on both the high- and low-voltage sides of large power transformers. The switchgear on the low-voltage side of the transformers may be located in a building, with medium-voltage circuit breakers for distribution circuits, along with metering, control, and protection equipment. For industrial applications, a transformer and switchgear line-up may be combined in one housing, called a unitized substation (USS). According to the latest research by Visiongain, a market research company, the worldwide switchgear market is expected to achieve $152.5 billion by 2029 at a CAGR of 5.9%. Growing investment in renewable energy and enhanced demand for safe and secure electrical distribution systems are expected to generate the increase. Components A switchgear assembly has two types of components: Power-conducting components, such as switches, circuit breakers, fuses, and lightning arrestors, that conduct or interrupt the flow of electrical power. Control systems such as control panels, current transformers, potential transformers, protective relays, and associated circuitry, that monitor, control, and protect the power-conducting components. Functions One of the basic functions of switchgear is protection, which is interruption of short-circuit and overload fault currents while maintaining service to unaffected circuits. Switchgear also provides isolation of circuits from power supplies. Switchgear is further used to enhance system availability by allowing more than one source to feed a load. History Switchgear is as old as electricity generation. The first models were very primitive: all components were simply fixed to a wall. Later they were mounted on wooden panels. For reasons of fire protection, the wood was replaced by slate or marble. This led to a further improvement, because the switching and measuring devices could be attached to the front, while the wiring was on the back. Housing Switchgear for lower voltages may be entirely enclosed within a building. For higher voltages (over about 66 kV), switchgear is typically mounted outdoors and insulated by air, although this requires a large amount of space. 
Gas-insulated switchgear saves space compared with air-insulated equipment, although the equipment cost is higher. Oil-insulated switchgear presents an oil spill hazard. Switches may be manually operated or have motor drives to allow for remote control. Circuit breaker types A switchgear may be a simple open-air isolator switch or it may be insulated by some other substance. An effective although more costly form of switchgear is the gas-insulated switchgear (GIS), where the conductors and contacts are insulated by pressurized sulfur hexafluoride gas (SF6). Other common types are oil- or vacuum-insulated switchgear. The combination of equipment within the switchgear enclosure allows it to interrupt fault currents of thousands of amps. A circuit breaker (within a switchgear enclosure) is the primary component that interrupts fault currents. The quenching of the arc when the circuit breaker pulls apart the contacts (disconnects the circuit) requires careful design. Circuit breakers fall into these six types: Oil Oil circuit breakers rely upon the vaporization of some of the oil to blast a jet of oil along the arc's path. The vapor released by the arcing consists of hydrogen gas. Mineral oil has better insulating properties than air. Whenever the current-carrying contacts separate in the oil, an arc is initiated at the moment of separation; the arc vaporizes and decomposes the oil, mostly into hydrogen gas, which ultimately forms a hydrogen bubble around the electric arc. This highly compressed gas bubble around the arc prevents re-striking of the arc after the current reaches the zero crossing of the cycle. The oil circuit breaker is one of the oldest types of circuit breakers. Air Air circuit breakers may use compressed air (puff) or the magnetic force of the arc itself to elongate the arc. As the length of the sustainable arc is dependent on the available voltage, the elongated arc will eventually exhaust itself. Alternatively, the contacts are rapidly swung into a small sealed chamber, with the escaping displaced air blowing out the arc. Circuit breakers are usually able to terminate all current flow very quickly: typically between 30 ms and 150 ms depending upon the age and construction of the device. Gas Gas (SF6) circuit breakers sometimes stretch the arc using a magnetic field, and then rely upon the dielectric strength of the SF6 gas to quench the stretched arc. Hybrid Hybrid switchgear combines the components of traditional air-insulated switchgear (AIS) and SF6 gas-insulated switchgear (GIS) technologies. It is characterized by a compact and modular design, which encompasses several different functions in one module. Vacuum Circuit breakers with vacuum interrupters have minimal arcing characteristics (as there is nothing to ionize other than the contact material), so the arc quenches when it is stretched by a small amount (<2–8 mm). Near zero current the arc is not hot enough to maintain a plasma, and current ceases; the gap can then withstand the rise of voltage. Vacuum circuit breakers are frequently used in modern medium-voltage switchgear at voltages up to 40,500 volts. Unlike the other types, they are inherently unsuitable for interrupting DC faults. The reason vacuum circuit breakers are unsuitable for breaking high DC voltages is that with DC there is no "current zero" period. The plasma arc can feed itself by continuing to gasify the contact material. 
Carbon dioxide Breakers that use carbon dioxide as the insulating and arc extinguishing medium work on the same principles as a sulfur hexafluoride (SF6) breaker. Because SF6 is a greenhouse gas more potent than CO2, switching from SF6 to CO2 can reduce greenhouse gas emissions by 10 tons over the product lifecycle. Protective circuitry Circuit breakers and fuses Circuit breakers and fuses disconnect when current exceeds a predetermined safe level. However, they cannot sense other critical faults, such as unbalanced currents (for example, when a transformer winding contacts ground). By themselves, circuit breakers and fuses cannot distinguish between short circuits and high levels of electrical demand. Merz-Price circulating current scheme Differential protection depends upon Kirchhoff's current law, which states that the sum of currents entering or leaving a circuit node must equal zero. Using this principle to implement differential protection, any section of a conductive path may be considered a node. The conductive path could be a transmission line, a winding of a transformer, a winding in a motor, or a winding in the stator of an alternator. This form of protection works best when both ends of the conductive path are physically close to each other. This scheme was invented in Great Britain by Charles Hesterman Merz and Bernard Price. Two identical current transformers are used for each winding of a transformer, stator, or other device. The current transformers are placed around opposite ends of a winding. The current through both ends should be identical. A protective relay detects any imbalance in currents, and trips circuit breakers to isolate the device. In the case of a transformer, the circuit breakers on both the primary and secondary would open. Distance relays A short circuit at the end of a long transmission line appears similar to a normal load, because the impedance of the transmission line limits the fault current. A distance relay detects a fault by comparing the voltage and current on the transmission line. A large current along with a voltage drop indicates a fault. Classification Several different classifications of switchgear can be made. A single line-up may incorporate several different types of devices; for example, air-insulated bus, vacuum circuit breakers, and manually operated switches may all exist in the same row of cubicles. Ratings, design, specifications and details of switchgear are set by a multitude of standards. In North America, mostly IEEE and ANSI standards are used; much of the rest of the world uses IEC standards, sometimes with local national derivatives or variations. Safety To help ensure safe operation sequences of switchgear, trapped-key interlocking provides predefined scenarios of operation. For example, if only one of two sources of supply is permitted to be connected at a given time, the interlock scheme may require that the first switch must be opened to release a key that will allow closing the second switch. Complex schemes are possible. Indoor switchgear can also be type tested for internal arc containment (e.g., IEC 62271-200). This test is important for user safety as modern switchgear is capable of switching large currents. 
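The two relay criteria described in the protective-circuitry paragraphs above, differential (Merz-Price) protection and distance protection, reduce to simple numeric comparisons. The Python sketch below is only an illustration of those comparisons under assumed settings; the pickup threshold, line impedance and zone-1 reach factor are hypothetical values, not settings taken from this article or from any standard.

```python
# Minimal sketch of the two relay criteria described above. All settings are
# illustrative assumptions, not recommended or standardized values.

def differential_trip(current_in_amps: float, current_out_amps: float,
                      pickup_amps: float = 50.0) -> bool:
    """Merz-Price style check: by Kirchhoff's current law, the current entering
    and leaving a healthy winding or line section should match; a large
    imbalance suggests an internal fault."""
    return abs(current_in_amps - current_out_amps) > pickup_amps

def distance_trip(voltage_volts: float, current_amps: float,
                  line_impedance_ohms: float = 10.0,
                  zone1_reach: float = 0.8) -> bool:
    """Distance-relay check: the apparent impedance V/I collapses well below
    the full line impedance when a fault occurs part-way along the line."""
    apparent_impedance = voltage_volts / current_amps
    return apparent_impedance < zone1_reach * line_impedance_ohms

# Healthy conditions: currents balance and the apparent impedance looks like a load.
print(differential_trip(400.0, 398.0))    # False
print(distance_trip(63_500.0, 400.0))     # False (apparent impedance about 159 ohms)

# Fault conditions: current leaks out of the protected zone and the impedance collapses.
print(differential_trip(2_500.0, 300.0))  # True
print(distance_trip(20_000.0, 4_000.0))   # True (apparent impedance 5 ohms)
```

Real relays add refinements such as percentage restraint, current-transformer ratio matching and multiple distance zones, but the trip decisions rest on the same two comparisons sketched here.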
Switchgear is often inspected using thermal imaging to assess the state of the system and predict failures before they occur. Other methods include partial discharge (PD) testing, using either fixed or portable testers, and acoustic emission testing using surface-mounted transducers (for oil equipment) or ultrasonic detectors used in outdoor switchyards. Temperature sensors fitted to cables connected to the switchgear can permanently monitor temperature build-up. SF6 equipment is invariably fitted with alarms and interlocks to warn of loss of pressure, and to prevent operation if the pressure falls too low. The increasing awareness of dangers associated with high fault levels has resulted in network operators specifying closed-door operation for earth switches and racking breakers. Many European power companies have banned operators from switch rooms while switching operations are carried out. Remote racking systems are available which allow an operator to rack switchgear from a remote location without the need to wear a protective arc flash hazard suit. Switchgear requires ongoing maintenance and servicing to remain safe to use and to perform reliably at high voltages. See also Arc flash Circuit breaker Disconnector Electrical safety Electric arc High voltage Remote racking system Short circuit
meat
Meat is animal flesh that is eaten as food. Humans have hunted, farmed, and scavenged other animals for meat since prehistoric times. The establishment of settlements in the Neolithic Revolution allowed the domestication of animals such as chickens, sheep, rabbits, pigs, and cattle. This eventually led to their use in meat production on an industrial scale in slaughterhouses. Meat is mainly composed of water, protein, and fat. It is edible raw but is normally eaten after it has been cooked and seasoned or processed in a variety of ways. Unprocessed meat will spoil or rot within hours or days as a result of infection with, and decomposition by, bacteria and fungi. Meat is important to the food industry, economies, and cultures around the world. There are nonetheless people who choose not to eat meat (vegetarians) or any animal products (vegans), for reasons such as taste preferences, ethics, environmental concerns, health concerns or religious dietary rules. Terminology The word meat comes from the Old English word mete, which referred to food in general. The term is related to mad in Danish, mat in Swedish and Norwegian, and matur in Icelandic and Faroese, which also mean 'food'. The word mete also exists in Old Frisian (and to a lesser extent, modern West Frisian) to denote important food, differentiating it from swiets (sweets) and dierfied (animal feed). Most often, meat refers to skeletal muscle and associated fat and other tissues, but it may also describe other edible tissues such as offal.: 1  Meat is sometimes also used in a more restrictive sense to mean the flesh of mammalian species (pigs, cattle, sheep, goats, etc.) raised and prepared for human consumption, to the exclusion of fish, other seafood, insects, poultry, or other animals.In the context of food, meat can also refer to "the edible part of something as distinguished from its covering (such as a husk or shell)", for example, coconut meat.In English, there are also specialized terms for the meat of particular animals. These terms originated with the Norman conquest of England in 1066: while the animals retained their English names, their meat as brought to the tables of the invaders was referred to them with the Norman French words for the respective animal. In time, these appellations came to be used by the entire population. History Hunting and farming Paleontological evidence suggests that meat constituted a substantial proportion of the diet of the earliest humans.: 2  Early hunter-gatherers depended on the organized hunting of large animals such as bison and deer.: 2 The domestication of animals, of which we have evidence dating back to the end of the last glacial period (c. 10,000 BCE),: 2  allowed the systematic production of meat and the breeding of animals with a view to improving meat production.: 2  Animals that are now principal sources of meat were domesticated in conjunction with the development of early civilizations: Sheep, originating from western Asia, were domesticated with the help of dogs prior to the establishment of settled agriculture, likely as early as the 8th millennium BCE.: 3  Several breeds of sheep were established in ancient Mesopotamia and Egypt by 3500–3000 BCE.: 3  Today, more than 200 sheep-breeds exist. 
Cattle were domesticated in Mesopotamia after settled agriculture was established about 5000 BCE,: 5  and several breeds were established by 2500 BCE.: 6  Modern domesticated cattle fall into the groups Bos taurus (European cattle) and Bos taurus indicus (zebu), both descended from the now-extinct aurochs.: 5  The breeding of beef cattle, cattle optimized for meat production as opposed to animals best suited for work or dairy purposes, began in the middle of the 18th century.: 7 Domestic pigs, which are descended from wild boars, are known to have existed about 2500 BCE in modern-day Hungary and in Troy; earlier pottery from Tell es-Sultan (Jericho) and Egypt depicts wild pigs.: 8  Pork sausages and hams were of great commercial importance in Greco-Roman times.: 8  Pigs continue to be bred intensively as they are being optimized to produce meat best suited for specific meat products.: 9  Goats are among the earliest animals domesticated by humans. The most recent genetic analysis confirms the archaeological evidence that the wild bezoar ibex of the Zagros Mountains is the likely original ancestor of probably all domestic goats today. Neolithic farmers began to herd wild goats primarily for easy access to milk and meat, as well as to their dung, which was used as fuel; and their bones, hair, and sinew were used for clothing, building, and tools. The earliest remnants of domesticated goats dating 10,000 years Before Present are found in Ganj Dareh in Iran. Goat remains have been found at archaeological sites in Jericho, Choga Mami, Djeitun, and Çayönü, dating the domestication of goats in Western Asia at between 8,000 and 9,000 years ago. Studies of DNA evidence suggests 10,000 years ago as the domestication date. Chicken were domesticated around 6000 BCE in Southeast Asia, according to genomic analysis, and spread to China and India 2000–3000 years later. Archaeological evidence supports domestic chickens in Southeast Asia well before 6000 BCE, China by 6000 BCE and India by 2000 BCE.Other animals are or have been raised or hunted for their flesh. The type of meat consumed varies much between different cultures, changes over time, depending on factors such as tradition and the availability of the animals. The amount and kind of meat consumed also varies by income, both between countries and within a given country. Deer are hunted for their meat (venison) in various regions. Horses are commonly eaten in France, Italy, Germany and Japan, among other countries. Horses and other large mammals such as reindeer were hunted during the late Paleolithic in western Europe. Dogs are consumed in China, South Korea and Vietnam. Dogs are also occasionally eaten in the Arctic regions. Historically, dog meat has been consumed in various parts of the world, such as Hawaii, Japan, Switzerland and Mexico. Cats are consumed in Southern China, Peru and sometimes also in Northern Italy. Guinea pigs are raised for their flesh in the Andes. Whales and dolphins are hunted, partly for their flesh, in Japan, Alaska, Siberia, Canada, the Faroe Islands, Greenland, Iceland, Saint Vincent and the Grenadines and by two small communities in Indonesia. 
Modern agriculture employs a number of techniques, such as progeny testing, to speed artificial selection by breeding animals to rapidly acquire the qualities desired by meat producers.: 10  For instance, in the wake of well-publicised health concerns associated with saturated fats in the 1980s, the fat content of United Kingdom beef, pork and lamb fell from 20–26 percent to 4–8 percent within a few decades, due to both selective breeding for leanness and changed methods of butchery.: 10  Methods of genetic engineering aimed at improving the meat production qualities of animals are now also becoming available.: 14 Even though it is a very old industry, meat production continues to be shaped strongly by the evolving demands of customers. The trend towards selling meat in pre-packaged cuts has increased the demand for larger breeds of cattle, which are better suited to producing such cuts.: 11  Even more animals not previously exploited for their meat are now being farmed, especially the more agile and mobile species, whose muscles tend to be developed better than those of cattle, sheep or pigs.: 11  Examples are the various antelope species, the zebra, water buffalo and camel,: 11ff  as well as non-mammals, such as the crocodile, emu and ostrich.: 13  Another important trend in contemporary meat production is organic farming which, while providing no organoleptic benefit to meat so produced, meets an increasing demand for organic meat. Consumption Meat consumption varies worldwide, depending on cultural or religious preferences, as well as economic conditions. Vegetarians and vegans choose not to eat meat because of taste preferences, ethical, economic, environmental, religious, or health concerns that are associated with meat production and consumption. According to the analysis of the FAO, the overall consumption for white meat between 1990 and 2009 has dramatically increased. Poultry meat has increased by 76.6% per kilo per capita and pig meat by 19.7%. Bovine meat has decreased from 10.4 kg (22 lb 15 oz) per capita in 1990 to 9.6 kg (21 lb 3 oz) per capita in 2009.Overall, diets that include meat are the most common worldwide according to the results of a 2018 Ipsos MORI study of 16–64 years olds in 28 countries. Ipsos states "An omnivorous diet is the most common diet globally, with non-meat diets (which can include fish) followed by over a tenth of the global population." Approximately 87% of people include meat in their diet in some frequency. 73% of meat eaters included it in their diet regularly and 14% consumed meat only occasionally or infrequently. Estimates of the non-meat diets were also broken down. About 3% of people followed vegan diets, where consumption of meat, eggs, and dairy are abstained from. About 5% of people followed vegetarian diets, where consumption of meat is abstained from, but egg and/or dairy consumption is not strictly restricted. About 3% of people followed pescetarian diets, where consumption of the meat of land animals is abstained from, fish meat and other seafood is consumed, and egg and/or dairy consumption may or may not be strictly restricted. 
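The Ipsos MORI shares quoted above can be cross-checked with trivial arithmetic. The short sketch below simply sums the published percentages; treating the small remainder as rounding and "don't know" responses is an assumption made here for illustration, not something stated by the survey.

```python
# Cross-check of the survey shares quoted above. Interpreting the leftover
# percentage points as rounding / "don't know" responses is an assumption.

meat_eaters = {"regular": 73, "occasional or infrequent": 14}
non_meat = {"vegetarian": 5, "vegan": 3, "pescetarian": 3}

meat_total = sum(meat_eaters.values())     # 87, matching the ~87% quoted above
non_meat_total = sum(non_meat.values())    # 11, the "over a tenth" of respondents
remainder = 100 - meat_total - non_meat_total

print(meat_total, non_meat_total, remainder)  # 87 11 2
```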
History A bioarchaeological (specifically, isotopic analysis) study of early medieval England found, based on the funerary record, that high-meat protein diets were extremely rare, and that (contrary to previously held assumptions) elites did not consume more meat than non-elites, and men did not consume more meat than women.In the nineteenth century meat consumption in Britain was the highest in Europe, exceeded only by that in British colonies. In the 1830s consumption per head in Britain was about 34 kilograms (75 lb) a year, rising to 59 kilograms (130 lb) in 1912. In 1904 laborers were found to consume 39 kilograms (87 lb) a year while aristocrats ate 140 kilograms (300 lb). There were estimated to be 43,000 meat purveyor establishments in Britain in 1910, with "possibly more money invested in the meat industry than in any other British business" except the finance industry. The US was a meat importing country by 1926.Truncated lifespan as a result of intensive breeding allowed more meat to be produced from fewer animals. The world cattle population was about 600 million in 1929, with 700 million sheep and goats and 300 million pigs. According to a study, the average lifespan of livestock pigs is ~2 years (7% of "maximum expected lifespan"). For dairy cattle the lifespan is ~5 years (27%). Animal growth and development Agricultural science has identified several factors bearing on the growth and development of meat in animals. Genetics Several economically important traits in meat animals are heritable to some degree (see the adjacent table) and can thus be selected for by animal breeding. In cattle, certain growth features are controlled by recessive genes which have not so far been controlled, complicating breeding.: 18  One such trait is dwarfism; another is the doppelender or "double muscling" condition, which causes muscle hypertrophy and thereby increases the animal's commercial value.: 18  Genetic analysis continues to reveal the genetic mechanisms that control numerous aspects of the endocrine system and, through it, meat growth and quality.: 19 Genetic engineering techniques can shorten breeding programs significantly because they allow for the identification and isolation of genes coding for desired traits, and for the reincorporation of these genes into the animal genome.: 21  To enable such manipulation, research is ongoing (as of 2006) to map the entire genome of sheep, cattle and pigs.: 21  Some research has already seen commercial application. For instance, a recombinant bacterium has been developed which improves the digestion of grass in the rumen of cattle, and some specific features of muscle fibres have been genetically altered.: 22 Experimental reproductive cloning of commercially important meat animals such as sheep, pig or cattle has been successful. Multiple asexual reproduction of animals bearing desirable traits is anticipated,: 22  although this is not yet practical on a commercial scale. Environment Heat regulation in livestock is of great economic significance, because mammals attempt to maintain a constant optimal body temperature. Low temperatures tend to prolong animal development and high temperatures tend to retard it.: 22  Depending on their size, body shape and insulation through tissue and fur, some animals have a relatively narrow zone of temperature tolerance and others (e.g. 
cattle) a broad one.: 23  Static magnetic fields, for reasons still unknown, also retard animal development.: 23 Nutrition The quality and quantity of usable meat depends on the animal's plane of nutrition, i.e., whether it is over- or underfed. Scientists disagree about how exactly the plane of nutrition influences carcass composition.: 25 The composition of the diet, especially the amount of protein provided, is also an important factor regulating animal growth.: 26  Ruminants, which may digest cellulose, are better adapted to poor-quality diets, but their ruminal microorganisms degrade high-quality protein if supplied in excess.: 27  Because producing high-quality protein animal feed is expensive (see also Environmental impact below), several techniques are employed or experimented with to ensure maximum utilization of protein. These include the treatment of feed with formalin to protect amino acids during their passage through the rumen, the recycling of manure by feeding it back to cattle mixed with feed concentrates, or the partial conversion of petroleum hydrocarbons to protein through microbial action.: 30 In plant feed, environmental factors influence the availability of crucial nutrients or micronutrients, a lack or excess of which can cause a great many ailments.: 29  In Australia, for instance, where the soil contains limited phosphate, cattle are being fed additional phosphate to increase the efficiency of beef production.: 28  Also in Australia, cattle and sheep in certain areas were often found losing their appetite and dying in the midst of rich pasture; this was at length found to be a result of cobalt deficiency in the soil.: 29  Plant toxins are also a risk to grazing animals; for instance, sodium fluoroacetate, found in some African and Australian plants, kills by disrupting the cellular metabolism.: 29  Certain man-made pollutants such as methylmercury and some pesticide residues present a particular hazard due to their tendency to bioaccumulate in meat, potentially poisoning consumers.: 30 Animal welfare Livestock animals have shown relatively high intelligence which may raise animal ethics rationale for safeguarding their well-being. Pigs in particular are considered by some to be the smartest known domesticated animal in the world (e.g. more intelligent than pet dogs) which not only experience pain but also have notable depths, levels and/or variety/diversity of emotions (including boredom), cognition, intelligence, and/or sentience. Complications include that without or reduced meat production, many livestock animals may never live (see also: natalism), and that their life (relative timespan of existence) is typically short – in the case of pigs ~7% of their "maximum expected lifespan". 
Human intervention
Meat producers may seek to improve the fertility of female animals through the administration of gonadotrophic or ovulation-inducing hormones.: 31  In pig production, sow infertility is a common problem – possibly due to excessive fatness.: 32  No methods currently exist to augment the fertility of male animals.: 32  Artificial insemination is now routinely used to produce animals of the best possible genetic quality, and the efficiency of this method is improved through the administration of hormones that synchronize the ovulation cycles within groups of females.: 33
Growth hormones, particularly anabolic agents such as steroids, are used in some countries to accelerate muscle growth in animals.: 33  This practice has given rise to the beef hormone controversy, an international trade dispute. It may also decrease the tenderness of meat, although research on this is inconclusive,: 35  and have other effects on the composition of the muscle flesh.: 36ff  Where castration is used to improve control over male animals, its side effects are also counteracted by the administration of hormones.: 33  Myostatin-based muscle hypertrophy has also been used.
Sedatives may be administered to animals to counteract stress factors and increase weight gain.: 39  The feeding of antibiotics to certain animals has also been shown to improve growth rates.: 39  This practice is particularly prevalent in the US, but has been banned in the EU, partly because it causes antimicrobial resistance in pathogenic microorganisms.: 39
Biochemical composition
Numerous aspects of the biochemical composition of meat vary in complex ways depending on the species, breed, sex, age, plane of nutrition, training and exercise of the animal, as well as on the anatomical location of the musculature involved.: 94–126  Even between animals of the same litter and sex there are considerable differences in such parameters as the percentage of intramuscular fat.: 126
Main constituents
Adult mammalian muscle flesh consists of roughly 75 percent water, 19 percent protein, 2.5 percent intramuscular fat, 1.2 percent carbohydrates and 2.3 percent other soluble non-protein substances. These include nitrogenous compounds, such as amino acids, and inorganic substances such as minerals.: 76
Muscle proteins are either soluble in water (sarcoplasmic proteins, about 11.5 percent of total muscle mass) or in concentrated salt solutions (myofibrillar proteins, about 5.5 percent of mass).: 75  There are several hundred sarcoplasmic proteins.: 77  Most of them – the glycolytic enzymes – are involved in the glycolytic pathway, i.e., the conversion of stored energy into muscle power.: 78  The two most abundant myofibrillar proteins, myosin and actin,: 79  are responsible for the muscle's overall structure. The remaining protein mass consists of connective tissue (collagen and elastin) as well as organelle tissue.: 79
Fat in meat can be either adipose tissue, used by the animal to store energy and consisting of "true fats" (esters of glycerol with fatty acids),: 82  or intramuscular fat, which contains considerable quantities of phospholipids and of unsaponifiable constituents such as cholesterol.: 82
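These proportions can be made concrete with a small worked example. The following Python sketch is illustrative only: it uses the approximate average figures quoted above, and the 100 g serving size is an arbitrary choice rather than a value from the cited literature.

```python
# Illustrative only: approximate average composition of adult mammalian muscle
# flesh, using the rough percentages quoted in this section. Real values vary
# with species, breed, sex, age, nutrition and anatomical location.

composition = {  # percent of muscle mass
    "water": 75.0,
    "protein": 19.0,
    "intramuscular fat": 2.5,
    "carbohydrates": 1.2,
    "other soluble non-protein substances": 2.3,
}

protein_fractions = {  # percent of total muscle mass, not percent of protein
    "sarcoplasmic (water-soluble)": 11.5,
    "myofibrillar (salt-soluble)": 5.5,
}

def grams_per_serving(serving_g: float = 100.0) -> dict:
    """Convert the percentage figures into grams for a given serving size."""
    return {name: round(serving_g * pct / 100.0, 2) for name, pct in composition.items()}

if __name__ == "__main__":
    print(grams_per_serving(100.0))  # e.g. 75 g water, 19 g protein per 100 g serving
    # Protein not accounted for by the sarcoplasmic and myofibrillar fractions is
    # mostly connective tissue (collagen, elastin) and organelle protein.
    remainder = composition["protein"] - sum(protein_fractions.values())
    print(f"connective-tissue and organelle protein: ~{remainder:.1f}% of muscle mass")
```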
Red and white
Meat can be broadly classified as "red" or "white" depending on the concentration of myoglobin in muscle fibre. When myoglobin is exposed to oxygen, reddish oxymyoglobin develops, making myoglobin-rich meat appear red. The redness of meat depends on species, animal age, and fibre type: red meat contains more of the narrow muscle fibres that tend to operate over long periods without rest,: 93  while white meat contains more of the broad fibres that tend to work in short fast bursts.: 93
Generally, the meat of adult mammals such as cows, sheep, and horses is considered red, while chicken and turkey breast meat is considered white.
Nutritional information
All muscle tissue is very high in protein, containing all of the essential amino acids, and in most cases is a good source of zinc, vitamin B12, selenium, phosphorus, niacin, vitamin B6, choline, riboflavin and iron. Several forms of meat are also high in vitamin K. Muscle tissue is very low in carbohydrates and does not contain dietary fiber. While taste quality may vary between meats, the proteins, vitamins, and minerals available from meats are generally consistent.
The fat content of meat can vary widely depending on the species and breed of animal, the way in which the animal was raised, including what it was fed, the anatomical part of the body, and the methods of butchering and cooking. Wild animals such as deer are typically leaner than farm animals, leading those concerned about fat content to choose game such as venison. Decades of breeding meat animals for fatness are being reversed by consumer demand for meat with less fat. The fatty deposits that lie alongside the muscle fibres soften the meat when it is cooked and improve its flavor through chemical changes, initiated by heat, that allow the protein and fat molecules to interact. The fat, when cooked with meat, also makes the meat seem juicier. However, the nutritional contribution of the fat is mainly calories as opposed to protein, so as fat content rises, the meat's contribution to nutrition declines. In addition, there is cholesterol associated with the fat surrounding the meat; cholesterol is a lipid associated with the kind of saturated fat found in meat. The increase in meat consumption after 1960 is associated with, though not definitively the cause of, significant imbalances of fat and cholesterol in the human diet.
A comparison of the nutritional content of several types of meat shows that while each kind of meat has about the same content of protein and carbohydrates, there is a very wide range of fat content.
Production
Meat is produced by killing an animal and cutting flesh out of it. These procedures are called slaughter and butchery, respectively. There is ongoing research into producing meat in vitro; that is, outside of animals.
Transport
Upon reaching a predetermined age or weight, livestock are usually transported en masse to the slaughterhouse. Depending on its length and circumstances, this may cause stress and injury to the animals, and some may die en route.: 129  Unnecessary stress in transport may adversely affect the quality of the meat.: 129  In particular, the muscles of stressed animals are low in water and glycogen, and their pH fails to attain acidic values, all of which results in poor meat quality.: 130  Consequently, and also due to campaigning by animal welfare groups, laws and industry practices in several countries tend to become more restrictive with respect to the duration and other circumstances of livestock transports.
Slaughter
Animals are usually slaughtered by being first stunned and then exsanguinated (bled out). Death results from one or the other procedure, depending on the methods employed.
Stunning can be effected through asphyxiating the animals with carbon dioxide, shooting them with a gun or a captive bolt pistol, or shocking them with electric current.: 134ff  In most forms of ritual slaughter, stunning is not allowed. Draining as much blood as possible from the carcass is necessary because blood causes the meat to have an unappealing appearance and is a breeding ground for microorganisms.: 134  The exsanguination is accomplished by severing the carotid artery and the jugular vein in cattle and sheep, and the anterior vena cava in pigs.: 137
The act of slaughtering animals for meat, or of raising or transporting animals for slaughter, may engender both psychological stress and physical trauma in the people involved. Additionally, slaughterhouse workers are exposed to noise of between 76 and 100 dB from the screams of animals being killed; 80 dB is the threshold at which the wearing of ear protection is recommended.
Dressing and cutting
After exsanguination, the carcass is dressed; that is, the head, feet, hide (except hogs and some veal), excess fat, viscera and offal are removed, leaving only bones and edible muscle.: 138  Cattle and pig carcasses, but not those of sheep, are then split in half along the mid-ventral axis, and the carcass is cut into wholesale pieces.: 138  The dressing and cutting sequence, long a province of manual labor, is progressively being fully automated.: 138
Conditioning
Under hygienic conditions and without other treatment, meat can be stored above its freezing point (−1.5 °C) for about six weeks without spoilage, during which time it undergoes an aging process that increases its tenderness and flavor.: 141
During the first day after death, glycolysis continues until the accumulation of lactic acid causes the pH to reach about 5.5. The remaining glycogen, about 18 g per kg, is believed to increase the water-holding capacity and tenderness of the flesh when cooked.: 87  Rigor mortis sets in a few hours after death as ATP is used up, causing actin and myosin to combine into rigid actomyosin and lowering the meat's water-holding capacity,: 90  causing it to lose water ("weep").: 146  In muscles that enter rigor in a contracted position, actin and myosin filaments overlap and cross-bond, resulting in meat that is tough on cooking: 144  – hence again the need to prevent pre-slaughter stress in the animal. Over time, the muscle proteins denature to varying degrees, with the exception of the collagen and elastin of connective tissue,: 142  and rigor mortis resolves. Because of these changes, the meat is tender and pliable when cooked just after death or after the resolution of rigor, but tough when cooked during rigor.: 142  As the muscle pigment myoglobin denatures, its iron oxidizes, which may cause a brown discoloration near the surface of the meat.: 146  Ongoing proteolysis also contributes to conditioning. Hypoxanthine, a breakdown product of ATP, contributes to the meat's flavor and odor, as do other products of the decomposition of muscle fat and protein.: 155
Additives
When meat is industrially processed in preparation for consumption, it may be enriched with additives to protect or modify its flavor or color, to improve its tenderness, juiciness or cohesiveness, or to aid with its preservation. Meat additives include the following:
Salt is the most frequently used additive in meat processing. It imparts flavor but also inhibits microbial growth, extends the product's shelf life and helps emulsify finely processed products, such as sausages. Ready-to-eat meat products normally contain about 1.5 to 2.5 percent salt. Salt water or similar substances may also be injected into poultry meat to improve the taste and increase the weight, in a process called plumping.
Nitrite is used in curing meat to stabilize the meat's color and flavor, and to inhibit the growth of spore-forming microorganisms such as C. botulinum. The use of nitrite's precursor nitrate is now limited to a few products such as dry sausage, prosciutto or Parma ham.
Phosphates used in meat processing are normally alkaline polyphosphates such as sodium tripolyphosphate. They are used to increase the water-binding and emulsifying ability of meat proteins, but also limit lipid oxidation and flavor loss, and reduce microbial growth.
Erythorbate or its equivalent ascorbic acid (vitamin C) is used to stabilize the color of cured meat.
Sweeteners such as sugar or corn syrup impart a sweet flavor, bind water and assist surface browning during cooking in the Maillard reaction.
Seasonings impart or modify flavor. They include spices or oleoresins extracted from them, herbs, vegetables and essential oils.
Flavorings such as monosodium glutamate impart or strengthen a particular flavor.
Tenderizers break down collagens to make the meat more palatable for consumption. They include proteolytic enzymes, acids, salt and phosphate.
Dedicated antimicrobials include lactic, citric and acetic acid, sodium diacetate, acidified sodium chloride or calcium sulfate, cetylpyridinium chloride, activated lactoferrin, sodium or potassium lactate, and bacteriocins such as nisin.
Antioxidants include a wide range of chemicals that limit lipid oxidation, which creates an undesirable "off flavor", in precooked meat products.
Acidifiers, most often lactic or citric acid, can impart a tangy or tart flavor note, extend shelf-life, tenderize fresh meat or help with protein denaturation and moisture release in dried meat. They substitute for the process of natural fermentation that acidifies some meat products such as hard salami or prosciutto.
Misidentification
With the rise of complex supply chains, including cold chains, in developed economies, the distance between the farmer or fisherman and the customer has grown, increasing the possibility of intentional and unintentional misidentification of meat at various points in the supply chain.
In 2013, reports emerged across Europe that products labelled as containing beef actually contained horse meat. In February 2013 a study was published showing that about one-third of raw fish sold in the United States is misidentified.
Imitation
Various forms of imitation meat have been created for people who wish not to eat meat but still want to taste its flavor and texture. Imitation meats are typically some form of processed soybean (tofu, tempeh), but they can also be based on wheat gluten, pea protein isolate, or even fungi (Quorn).
Environmental impact
Various environmental effects are associated with meat production. Among these are greenhouse gas emissions, fossil energy use, water use, water quality changes, and effects on grazed ecosystems. The livestock sector may be the largest source of water pollution (due to animal wastes, fertilizers and pesticides), and it contributes to the emergence of antibiotic resistance. It accounts for over 8% of global human water use.
It is a significant driver of biodiversity loss and ecosystem degradation, as it requires large amounts of land for pasture and feed crops and contributes to deforestation, ocean dead zones, land degradation, pollution, overfishing and climate change.
The occurrence, nature and significance of environmental effects vary among livestock production systems. Grazing of livestock can be beneficial for some wildlife species, but not for others. Targeted grazing of livestock is used as a food-producing alternative to herbicide use in some vegetation management.
Land use
Meat production is by far the biggest use of land, accounting for nearly 40% of the global land surface. In the contiguous United States alone, 34% of the land area (265 million hectares or 654 million acres) is used as pasture and rangeland, mostly feeding livestock, not counting 158 million hectares (391 million acres) of cropland (20%), some of which is used for producing feed for livestock. Roughly 75% of deforested land around the globe is used for livestock pasture. Deforestation from practices like slash-and-burn releases CO2 and removes the carbon sink of mature tropical forest ecosystems, which substantially mitigate climate change. This land use also places major pressure on fertile soils, which are important for global food security.
Climate change
The rising global consumption of carbon-intensive meat products has "exploded the global carbon footprint of agriculture," according to some top scientists. Meat production is responsible for 14.5% and possibly up to 51% of the world's anthropogenic greenhouse gas emissions. Some nations show very different impacts from counterparts within the same income group, with Brazil and Australia having emissions over 200% higher than the average of their respective income groups, driven by meat consumption.
According to the Assessing the Environmental Impacts of Consumption and Production report produced by the United Nations Environment Programme's (UNEP) international panel for sustainable resource management, a worldwide transition towards a meat- and dairy-free diet is indispensable if adverse global climate change is to be prevented. A 2019 report in The Lancet recommended that global meat (and sugar) consumption be reduced by 50 percent to mitigate climate change. Meat consumption in Western societies needs to be reduced by up to 90% according to a 2018 study published in Nature. The 2019 special report by the Intergovernmental Panel on Climate Change called for significantly reducing meat consumption, particularly in wealthy countries, in order to mitigate and adapt to climate change.
Biodiversity loss
Meat consumption is considered one of the primary contributors to the sixth mass extinction. A 2017 study by the World Wildlife Fund found that 60% of global biodiversity loss is attributable to meat-based diets, in particular to the vast scale of feed crop cultivation needed to rear tens of billions of farm animals, which puts an enormous strain on natural resources and results in a wide-scale loss of lands and species. Currently, livestock make up 60% of the biomass of all mammals on earth, followed by humans (36%) and wild mammals (4%). In November 2017, 15,364 world scientists signed a Warning to Humanity calling for, among other things, drastically diminishing our per capita consumption of meat and "dietary shifts towards mostly plant-based foods".
The 2019 Global Assessment Report on Biodiversity and Ecosystem Services, released by IPBES, also recommended reductions in meat consumption in order to mitigate biodiversity loss. A 2021 Chatham House report asserted that a significant shift towards plant-based diets would free up land to allow for the restoration of ecosystems and thriving biodiversity.
A July 2018 study in Science says that meat consumption is set to rise as the human population increases along with affluence, which will increase greenhouse gas emissions and further reduce biodiversity.
Reducing environmental impact
The environmental impact of meat production can be reduced by converting human-inedible residues of food crops into animal feed. Manure from meat-producing livestock is used as fertilizer; it may be composted before application to food crops. Substitution of animal manures for synthetic fertilizers in crop production can be environmentally significant, as between 43 and 88 MJ of fossil fuel energy are used per kg of nitrogen in the manufacture of synthetic nitrogenous fertilizers.
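The scale of this substitution effect can be illustrated with a rough calculation based on the 43–88 MJ per kg figure quoted above. The Python sketch below is illustrative only; the quantity of manure nitrogen in the example is hypothetical and not taken from the source.

```python
# Illustrative only: fossil energy avoided when manure nitrogen substitutes for
# synthetic nitrogen fertilizer. The 43-88 MJ per kg N range is the figure
# quoted above; the example quantity of manure nitrogen is hypothetical.

ENERGY_PER_KG_N_MJ = (43.0, 88.0)  # MJ of fossil energy per kg of synthetic N

def energy_saved_mj(manure_nitrogen_kg: float) -> tuple[float, float]:
    """Return (low, high) estimates of fossil energy avoided, in MJ."""
    low, high = ENERGY_PER_KG_N_MJ
    return manure_nitrogen_kg * low, manure_nitrogen_kg * high

if __name__ == "__main__":
    low, high = energy_saved_mj(1_000)  # hypothetical 1 tonne of plant-available manure N
    print(f"Fossil energy avoided: about {low / 1000:.0f}-{high / 1000:.0f} GJ")  # ~43-88 GJ
```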
Reducing meat consumption
The IPCC and many others, including scientific reviews of the literature and data on the topic, have concluded that meat production must be reduced substantially for any sufficient mitigation of climate change, at least initially largely through shifts towards plant-based diets in places (e.g. countries) where meat consumption is high. A review names broad potential measures such as "restrictions or fiscal mechanisms". Personal Carbon Allowances that permit a certain amount of free meat consumption per person would be a form of restriction, while meat taxes would be a type of fiscal mechanism. Meat can be replaced by, for example, high-protein, iron-rich, low-emission legumes and common fungi; other alternatives include dietary supplements (e.g. of vitamin B12 and zinc), fortified foods, cultured meat, microbial foods, mycoprotein and other meat substitutes. Farms can be transitioned to meet new demands, workers can enter relevant job retraining programs, and land previously used for meat production can be rewilded.
The biologists Rodolfo Dirzo, Gerardo Ceballos, and Paul R. Ehrlich emphasize that it is the "massive planetary monopoly of industrial meat production that needs to be curbed" while respecting the cultural traditions of indigenous peoples, for whom meat is an important source of protein.
Spoilage and preservation
The spoilage of meat occurs, if untreated, in a matter of hours or days and results in the meat becoming unappetizing, poisonous or infectious. Spoilage is caused by the practically unavoidable infection and subsequent decomposition of meat by bacteria and fungi, which are borne by the animal itself, by the people handling the meat, and by their implements. Meat can be kept edible for a much longer time – though not indefinitely – if proper hygiene is observed during production and processing, and if appropriate food safety, food preservation and food storage procedures are applied. Without the application of preservatives and stabilizers, the fats in meat may also begin to decompose rapidly after cooking or processing, leading to an objectionable taste known as warmed-over flavor.
Methods of preparation
Fresh meat can be cooked for immediate consumption, or be processed, that is, treated for longer-term preservation and later consumption, possibly after further preparation. Fresh meat cuts or processed cuts may produce iridescence, commonly thought to be due to spoilage but actually caused by structural coloration and the diffraction of light. A common additive to processed meats, for both preservation and the prevention of discoloration, is sodium nitrite. This substance is a source of health concerns because it may form carcinogenic nitrosamines when heated.
Meat is prepared in many ways, as steaks, in stews, fondue, or as dried meat like beef jerky. It may be ground and then formed into patties (as hamburgers or croquettes), loaves, or sausages, or used in loose form (as in "sloppy joe" or Bolognese sauce). Some meat is cured by smoking, which is the process of flavoring, cooking, or preserving food by exposing it to the smoke from burning or smoldering plant materials, most often wood. In Europe, alder is the traditional smoking wood, but oak is more often used now, and beech to a lesser extent. In North America, hickory, mesquite, oak, pecan, alder, maple, and fruit-tree woods are commonly used for smoking. Meat can also be cured by pickling, preserving in salt or brine (see salted meat and other curing methods). Other kinds of meat are marinated and barbecued, or simply boiled, roasted, or fried. Meat is generally eaten cooked, but many recipes call for raw beef, veal or fish (tartare). Steak tartare is a meat dish made from finely chopped or minced raw beef or horse meat. Meat is often spiced or seasoned, particularly with meat products such as sausages. Meat dishes are usually described by their source (animal and part of body) and method of preparation (e.g., a beef rib). Meat is a typical base for making sandwiches. Popular varieties of sandwich meat include ham, pork, salami and other sausages, and beef, such as steak, roast beef, corned beef, pepperoni, and pastrami. Meat can also be molded or pressed (common for products that include offal, such as haggis and scrapple) and canned.
Health
There is concern and debate regarding the potential association of meat, in particular red and processed meat, with a variety of health risks. The 2015–2020 Dietary Guidelines for Americans asked men and teenage boys to increase their consumption of vegetables or other underconsumed foods (fruits, whole grains, and dairy) while reducing intake of protein foods (meats, poultry, and eggs) that they currently overconsume.
Contamination
Various toxic compounds can contaminate meat, including heavy metals, mycotoxins, pesticide residues, dioxins and polychlorinated biphenyls (PCBs). Processed, smoked and cooked meat may contain carcinogens such as polycyclic aromatic hydrocarbons.
Toxins may be introduced to meat as part of animal feed, as veterinary drug residues, or during processing and cooking. Often, these compounds can be metabolized in the body to form harmful by-products. Negative effects depend on the individual genome, diet, and history of the consumer. Any chemical's toxicity is also dependent on the dose and timing of exposure.
Cancer
There are concerns about a relationship between the consumption of meat, in particular processed and red meat, and increased cancer risk. The International Agency for Research on Cancer (IARC), a specialized agency of the World Health Organization (WHO), classified processed meat (e.g., bacon, ham, hot dogs, sausages) as "carcinogenic to humans (Group 1), based on sufficient evidence in humans that the consumption of processed meat causes colorectal cancer." IARC also classified red meat as "probably carcinogenic to humans (Group 2A), based on limited evidence that the consumption of red meat causes cancer in humans and strong mechanistic evidence supporting a carcinogenic effect."
Cancer Research UK, the National Health Service (NHS) and the National Cancer Institute have stated that red and processed meat intake increases the risk of bowel cancer. The American Cancer Society, in its "Diet and Physical Activity Guideline", stated that "evidence that red and processed meats increase cancer risk has existed for decades, and many health organizations recommend limiting or avoiding these foods." The Canadian Cancer Society has stated that "eating red and processed meat increases cancer risk".
A 2021 review found an 11–51% increase in the risk of multiple cancers per 100 g/day increment of red meat, and an 8–72% increase per 50 g/day increment of processed meat.
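To put these per-increment figures in perspective, the short Python sketch below scales them to an arbitrary daily intake. Both the assumption that risk scales linearly with intake and the example intakes are illustrative simplifications, not findings of the review.

```python
# Illustrative only: scale the per-increment risk ranges quoted above
# (11-51% per 100 g/day of red meat; 8-72% per 50 g/day of processed meat)
# to an arbitrary daily intake, assuming - purely for illustration - that the
# increase in risk is proportional to intake. The example intakes below are
# hypothetical and do not come from the review.

RED_MEAT = (0.11, 0.51, 100.0)        # (low, high, reference increment in g/day)
PROCESSED_MEAT = (0.08, 0.72, 50.0)

def risk_increase(intake_g_per_day: float, factors: tuple) -> tuple[float, float]:
    """Return the (low, high) fractional increase in risk for a given intake."""
    low, high, increment = factors
    scale = intake_g_per_day / increment
    return low * scale, high * scale

if __name__ == "__main__":
    lo_r, hi_r = risk_increase(50.0, RED_MEAT)          # hypothetical 50 g/day of red meat
    lo_p, hi_p = risk_increase(25.0, PROCESSED_MEAT)    # hypothetical 25 g/day of processed meat
    print(f"red meat: roughly +{lo_r:.0%} to +{hi_r:.0%} risk")        # about +6% to +26%
    print(f"processed meat: roughly +{lo_p:.0%} to +{hi_p:.0%} risk")  # about +4% to +36%
```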
Bacterial contamination
Bacterial contamination has been observed in meat products. A 2011 study by the Translational Genomics Research Institute showed that nearly half (47%) of the meat and poultry in U.S. grocery stores were contaminated with S. aureus, and more than half (52%) of those bacteria were resistant to antibiotics. A 2018 investigation by the Bureau of Investigative Journalism and The Guardian found that around 15 percent of the US population suffers from foodborne illnesses every year. The investigation also highlighted unsanitary conditions in US-based meat plants, which included meat products covered in excrement and abscesses "filled with pus".
Diabetes
A 2022 review found that consumption of 100 g/day of red meat and 50 g/day of processed meat was associated with an increased risk of diabetes. Diabetes UK advises people to limit their intake of red and processed meat.
Infectious diseases
Meat production and trade substantially increase the risks of infectious diseases, including pandemics – "directly through increased contact with wild and farmed animals [(zoonosis)] or indirectly through its impact on the environment (e.g., biodiversity loss, water use, climate change)". For example, avian influenza from poultry meat production can be a threat to human health. Furthermore, the use of antibiotics in meat production contributes to antimicrobial resistance – which contributes to millions of deaths – and makes it harder to control infectious diseases.
Changes in consumer behavior
In response to changing meat prices as well as health concerns about saturated fat and cholesterol (see lipid hypothesis), consumers have altered their consumption of various meats. A USDA report points out that consumption of beef in the United States dropped by 21% between 1970–1974 and 1990–1994, while consumption of chicken increased by 90%. During the same period, the price of chicken dropped by 14% relative to the price of beef. From 1995 to 1996, beef consumption increased due to higher supplies and lower prices.
Cooking
Meat can transmit certain diseases, but complete cooking and avoiding recontamination reduce this possibility.
Several studies published since 1990 indicate that cooking muscle meat creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute published results of a study which found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer of those who ate beef medium-well or well-done.
While eating muscle meat raw may be the only way to avoid HCAs fully, the National Cancer Institute states that cooking meat below 100 °C (212 °F) creates "negligible amounts" of HCAs. Also, microwaving meat before cooking may reduce HCAs by 90%.
Nitrosamines, present in processed and cooked foods, have been noted as carcinogenic and have been linked to colon cancer. Toxic compounds called PAHs, or polycyclic aromatic hydrocarbons, present in processed, smoked and cooked foods, are likewise known to be carcinogenic.
Heart disease
A 2012 review found that processed red meat increases the risk of coronary heart disease, whilst unprocessed red meat is associated with a smaller or no increase in risk. The review concluded that neither unprocessed red nor processed meat consumption is beneficial for cardiometabolic health. A 2021 review concluded that, except for poultry, at 50 g/day unprocessed red and processed meat appear to be risk factors for ischemic heart disease, increasing the risk by about 9% and 18% respectively. A 2022 review found that high consumption of red meat was associated with a 15% increased risk of cardiovascular disease.
Sociology
Meat is part of the human diet in most cultures, where it often has symbolic meaning and important social functions. Some people choose not to eat meat (vegetarianism) or any food made from animals (veganism). The reasons for not eating all or some meat may include ethical objections to killing animals for food, health concerns, environmental concerns or religious dietary laws.
Ethics
Ethical issues regarding the consumption of meat include objecting to the act of killing animals or to the agricultural practices used in meat production. Reasons for objecting to killing animals for consumption may include animal rights, environmental ethics, or an aversion to inflicting pain or harm on other sentient creatures. Some people, while not vegetarians, refuse to eat the flesh of certain animals (such as cows, pigs, cats, dogs, horses, or rabbits) due to cultural or religious traditions.
Philosophy
The founders of Western philosophy disagreed about the ethics of eating meat. Plato's Republic has Socrates describe the ideal state as vegetarian.: 10  Pythagoras believed that humans and animals were equal and therefore disapproved of meat consumption, as did Plutarch, whereas Zeno and Epicurus were vegetarian but allowed meat-eating in their philosophy.: 10  Conversely, Aristotle's Politics asserts that animals, as inferior beings, exist to serve humans, including as food.: 10  Augustine drew on Aristotle to argue that the universe's natural hierarchy allows humans to eat animals, and animals to eat plants.: 10  Enlightenment philosophers were likewise divided. Descartes wrote that animals are merely animated machines, and Kant considered them inferior beings for lack of discernment, means rather than ends.: 11  But Voltaire and Rousseau disagreed.: 11  The latter argued that meat-eating is a social rather than a natural act, because children are not interested in meat.: 11
Later philosophers examined the changing practices of eating meat in the modern age as part of a process of detachment from animals as living beings.
Norbert Elias, for instance, noted that in medieval times cooked animals were brought to the table whole, but that since the Renaissance only the edible parts are served, which are no longer recognizably part of an animal.: 12  Modern eaters, according to Noëlie Vialles, demand an "ellipsis" between meat and dead animals; for instance, calves' eyes are no longer considered a delicacy as in the Middle Ages, but provoke disgust.: 12  Even in the English language, distinctions emerged between animals and their meat, such as between cattle and beef, pigs and pork.: 12  Fernand Braudel wrote that since the European diet of the 15th and 16th centuries was particularly heavy in meat, European colonialism helped export meat-eating across the globe, as colonized peoples took up the culinary habits of their colonizers, which they associated with wealth and power.: 15
Religious traditions
The religion of Jainism has always opposed eating meat, and there are also schools of Buddhism and Hinduism that condemn the eating of meat. Jewish dietary rules (Kashrut) allow certain (kosher) meats and forbid others (treif). The rules include prohibitions on the consumption of unclean animals (such as pork, shellfish including mollusca and crustacea, and most insects), and on mixtures of meat and milk. Similar rules apply in Islamic dietary laws: the Quran explicitly forbids meat from animals that die naturally, blood, the meat of swine (porcine animals, pigs), and animals dedicated to other than Allah (either undedicated or dedicated to idols), which are haram as opposed to halal. Sikhism forbids the meat of slowly slaughtered animals ("kutha") and prescribes killing animals with a single strike ("jhatka"), but some Sikh groups oppose eating any meat.
Psychology
Research in applied psychology has investigated practices of meat eating in relation to morality, emotions, cognition, and personality characteristics. Psychological research suggests meat eating is correlated with masculinity, support for social hierarchy, and reduced openness to experience. Research into the consumer psychology of meat is relevant both to meat industry marketing and to advocates of reduced meat consumption.
Gender
Unlike most other food, meat is not perceived as gender-neutral, and is particularly associated with men and masculinity. Sociological research, ranging from African tribal societies to contemporary barbecues, indicates that men are much more likely to participate in preparing meat than other food.: 15  This has been attributed to the influence of traditional male gender roles, in view of a "male familiarity with killing" (Goody) or of roasting being more violent than boiling (Lévi-Strauss).: 15  By and large, at least in modern societies, men also tend to consume more meat than women, and men often prefer red meat whereas women tend to prefer chicken and fish.: 16
External links
The dictionary definition of meat at Wiktionary
American Meat Science Association website
Qualitionary – Legal Definitions – Meat
IARC Monographs Q&A (Archived October 5, 2018, at the Wayback Machine)
IARC Monographs Q&A on the carcinogenicity of the consumption of red meat and processed meat (Archived November 23, 2018, at the Wayback Machine)