Green building and wood

Green building is a technique that aims to create structures that are environmentally responsible and resource-efficient throughout their lifecycle – including siting, design, construction, operation, maintenance, renovation, and demolition.
A 2009 report by the U.S. General Services Administration evaluated 12 sustainably designed GSA buildings and found they cost less to operate.
Wood products from responsible sources are a good choice for most green building projects – both new construction and renovations. Wood grows naturally using energy from the sun and is renewable, sustainable, and recyclable. It is an effective insulator and uses far less energy to produce than concrete or steel. Wood can also mitigate climate change because wood products continue to store carbon absorbed by the tree during its growing cycle, and because substituting wood for fossil fuel-intensive materials such as steel and concrete results in ‘avoided’ greenhouse gas emissions.
Life cycle assessment
A life cycle assessment can help avoid a narrow outlook on environmental, social, and economic concerns by assessing each and every impact associated with all the stages of a process from cradle to grave (i.e., from raw materials through materials processing, manufacture, distribution, use, repair and maintenance, and disposal or recycling).
A comprehensive review of scientific literature from Europe, North America, and Australia pertaining to life cycle assessment of wood products concluded, among other things, that:
Fossil fuel consumption, the potential contributions to the greenhouse effect, and the quantities of solid waste tend to be minor for wood products compared to competing products.
Wood products that have been installed and are used in an appropriate way tend to have a favorable environmental profile compared to functionally equivalent products made of other materials.

A study by the Canadian Wood Council compared the life cycle impacts of three 2,400-square-foot (220 m2) homes designed primarily in wood, steel, and concrete over the first 20 years of their lifespans. Relative to the wood design, the steel and concrete designs released more air pollution, produced more solid waste, used more resources, required more energy, emitted more greenhouse gases, and discharged more water pollution.

When the complete life cycle is considered, including use and disposal, the great majority of studies indicate that wood products have lower greenhouse gas emissions. In the few cases where wood products cause greater greenhouse gas emissions than their non-wood counterparts, the cause was inappropriate post-use disposal.

Tools are available that enable architects to judge the relative environmental merits of building materials. They include the ATHENA Impact Estimator for Buildings, which is capable of modeling 95% of the building stock in North America, and the ATHENA® EcoCalculator for Assemblies, which provides instant life cycle assessment results for common assemblies based on detailed assessments previously conducted using the Estimator. The EcoCalculator is available free from the non-profit Athena Sustainable Materials Institute in order to encourage greater use of LCA by design and building professionals.
Wood and climate change
Trees absorb carbon dioxide and store it in biomass (wood, leaves, roots). When trees decompose or burn, much of the stored carbon is released back into the atmosphere, mainly as carbon dioxide, while some of the carbon remains in the forest debris and soils. When harvested wood is used for products such as structural lumber or furniture, the carbon is stored for decades or longer. A typical 2,400-square-foot (220 m2) home in North America contains 29 metric tons of carbon, the equivalent of offsetting the greenhouse gas emissions produced by driving a passenger car over five years (about 12,500 liters of gasoline).
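As a rough check of the driving equivalence quoted above (a back-of-the-envelope sketch assuming a tailpipe factor of about 2.3 kg of CO2 per litre of gasoline, a figure not stated in the source):

```latex
% Assumed factor: ~2.3 kg CO2 released per litre of gasoline burned
12{,}500\ \mathrm{L} \times 2.3\ \mathrm{kg\,CO_2/L} \approx 28{,}750\ \mathrm{kg} \approx 29\ \mathrm{t\,CO_2}
```

The numbers line up if the 29 metric tons are read as stored carbon dioxide rather than elemental carbon.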
When wood replaces a fossil fuel for energy, or a construction material with a greater greenhouse gas footprint, this lowers greenhouse gas emissions. Studies show that wood products are associated with far lower greenhouse gas emissions over their lifetime than other major building materials. Substituting a cubic meter of blocks or brick with wood results in a saving of roughly 0.75 to one tonne of carbon dioxide.

Increasing the use of wood products in construction and for other long-lived uses, plus the use of wood byproducts and wood waste as biomass replacement for fossil fuels, can contribute to atmospheric greenhouse gas stabilization. The sustainable management of forests for the production of wood products is a feasible and beneficial part of an overall strategy to mitigate climate change.

Securing the Future, a United Kingdom government strategy for sustainable development, stated: “Forestry practices can make a significant contribution by reducing greenhouse gas emissions through increasing the amount of carbon removed from the atmosphere by the national forest estate, by burning wood for fuel, and by using wood as a substitute for energy-intensive materials such as concrete and steel.”
The role of wood in carbon balances
FPInnovations, a Canadian non-profit research organization, conducted a literature review of 66 scientific peer-reviewed articles regarding the net impact on atmospheric greenhouse gases due to wood product use within a life cycle perspective. It showed several ways wood product substitution affects greenhouse gas balances, including:
Less fossil fuel consumption in manufacturing;
Avoidance of industrial process carbon emissions from cement manufacturing when wood products replace cement-based products;
Carbon storage in wood products and in the forest; and
Avoided fossil fuel emissions when wood biofuels replace fossil fuels.
Energy efficiency
As high-performance buildings use less operating energy, the embodied energy required to extract, process, transport and install building materials may make up as much as 30% of the overall life cycle energy consumption. Studies such as the U.S. LCI Database Project show that buildings built primarily with wood have a lower embodied energy than those built primarily with brick, concrete or steel.
A recent case study of the Eugene Kruger Building in Quebec, Canada determined that the all-wood solution adopted for this 8,000-square-metre academic building resulted in a 40% reduction in embodied energy compared to steel and concrete alternatives.
A 2002 study compared production energy values for building components (e.g. walls, floors, roofs) made predominantly of wood, steel and concrete, and found that wood construction has a range of energy use from 185 to 280 gigajoules (GJ), concrete from 265 to 521 GJ, and steel from 457 to 649 GJ. Wood construction will generally use less energy than other materials, although the high end of the range of wood construction energy overlaps with the low end of the range of concrete construction.

Passive design uses natural processes – convection, absorption, radiation, and conduction – to minimize energy consumption and improve thermal comfort. Researchers in Europe have identified wood as a suitable material for the development of passive buildings due to its unique combination of properties, including thermal resistance, natural finish, structural integrity, and lightweight and weatherproof qualities. Passive design is beginning to be incorporated in small buildings in North America through the use of structural wood panels.
Due to its cellular structure and many tiny air pockets, wood is a better insulator than steel and concrete in most climates – 400 times better than steel and 10 times better than concrete. More insulation is needed for steel and concrete to achieve the same thermal performance.

A 2002 study prepared by the National Association of Home Builders Research Center Inc. compared long-term energy use in two nearly identical side-by-side homes, one framed with conventional dimensional lumber and the second framed with cold-formed steel. It found the steel-framed house used 3.9% more natural gas in the winter and 10.7% more electricity in the summer.
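The insulation ratios above follow directly from the materials' thermal conductivities. The short sketch below illustrates the relationship using typical handbook conductivity values (assumed figures, not values taken from the studies cited here):

```python
# Rough illustration of why wood insulates better: thermal resistance of a layer (R-value)
# equals thickness divided by thermal conductivity. The conductivity figures are typical
# handbook values (assumptions), not numbers taken from the studies cited above.
conductivity = {"softwood": 0.12, "concrete": 1.2, "steel": 50.0}  # W/(m*K)
thickness = 0.10  # metres (a 10 cm layer of each material)

for material, k in conductivity.items():
    r_value = thickness / k                  # m^2*K/W, higher is better
    ratio = k / conductivity["softwood"]     # conductivity relative to softwood
    print(f"{material:9s}  R = {r_value:.3f} m^2*K/W  ({ratio:.0f}x the conductivity of softwood)")
```

With these assumed values, a 10 cm softwood layer offers roughly ten times the thermal resistance of the same thickness of concrete and several hundred times that of steel, in line with the comparison above.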
Health and well-being
Solid wood products, particularly flooring, are often specified in environments where occupants are known to have allergies to dust or other particulates.
Wood itself is considered to be hypo-allergenic and its smooth surfaces prevent the buildup of particles common in soft finishes like carpet. The use of wood products can also improve air quality by absorbing or releasing moisture in the air to moderate humidity.
A study at the University of British Columbia and FPInnovations found that the visual presence of wood in a room lowers sympathetic nervous system (SNS) activation in occupants, further establishing the positive link between wood and human health. SNS activation is the way human bodies prepare themselves to deal with stress. It increases blood pressure and heart rate while inhibiting digestion, recovery and repair functions in order to deal with immediate threats. While necessary in the short term, prolonged periods in an SNS-activated state have a negative effect on the body's physiological and psychological health.
The study supports wood's value as a tool in evidence-based design (EBD) – a growing field that seeks to promote health and other positive outcomes such as increased productivity and well-being based on scientifically credible evidence. So far, EBD has focused largely on healthcare and, in particular, patient recovery.
Reducing waste
Green building seeks to avoid wasting energy, water and materials during construction. Design and building professionals can reduce construction waste through design optimization, using right-sized framing members, for example, or pre-manufactured and engineered components.
The wood industry reduces waste in a similar way by optimizing sawmill operations and by using wood chips and sawdust to produce paper and composite products, or as fuel for renewable bioenergy. North American wood producers use 98 percent of every tree harvested and brought to a mill.

Rather than being demolished at the end of their useful life, structures can be deconstructed so that useful building materials are reclaimed instead of being sent to landfill. When used properly, wood, concrete and steel can last for decades or centuries. In North America, most structures are demolished because of external forces such as zoning changes and rising land values. Designing for flexibility and adaptability secures the greatest value for the embodied energy in building materials.
Wood is versatile and flexible, making it the easiest construction material for renovations. Wood buildings can be redesigned to suit changing needs, whether this involves adding a new room or moving a window or door. Wood structures are typically easy to adapt to new uses because the material is so light and easy to work with; few homeowners or professional remodelers have the skill and equipment needed to alter steel-frame structures. Structural wood members can typically be reclaimed and reused for the same or similar purpose with only minor modifications or wastage, or remilled and fashioned into alternate products such as window and door frames.
To reduce the amount of wood that goes to landfill, the CO2 Neutral Alliance (a coalition of government, NGOs and the forest industry) created the website dontwastewood.com. The site includes resources for regulators, municipalities, developers, contractors, owner/operators and individuals/homeowners looking for information on wood recycling.
Responsible sourcing
Wood is a responsible environmental choice for construction as long as it comes from forests that are managed sustainably. Illegal logging and the international trade in illegally logged timber are a major problem for many timber-producing countries in the developing world. They cause environmental damage, cost governments billions of dollars in lost revenue, promote corruption, undermine the rule of law and good governance, and fund armed conflict. Consumer countries can use their buying power by ensuring the wood products they buy are from known and legal sources.

Deforestation, which is the permanent removal of forests where the land is converted to other uses such as agriculture or housing, is also a significant problem in developing countries, and globally accounts for 17% of the world's greenhouse gas emissions.
The forests most vulnerable to destruction are in tropical regions of the world, where the rate of deforestation was estimated at 32,000,000 acres (130,000 km2) a year from 1990 to 2005. According to the State of the World's Forests Report, 2007, “the world lost about 3 percent of its forest area from 1990 to 2005; but, in North America, the total forest area remained virtually constant.” When forest land is converted for other uses, a portion of the deforestation can be offset by afforestation – such as the planting of trees on land that has been bare of trees for a long time.

Voluntary third-party forest certification is a credible tool for communicating the environmental and social performance of forest operations. With forest certification, an independent organization develops standards of good forest management, and independent auditors issue certificates to forest operations that comply with those standards. This certification verifies that forests are well-managed – as defined by a particular standard – and ensures that certified wood and paper products come from legal and responsible sources.
Green building rating systems
A 2010 study by the Light House Sustainable Building Centre in British Columbia, Canada examined the ways in which the world's major voluntary green building rating systems incorporate wood. It found that rating systems for single-family homes in North America were the most inclusive of wood products and rating systems for commercial buildings and buildings outside of North America were the least inclusive. Systems studied included BREEAM (United Kingdom), Built Green (United States and Canada), CASBEE (Japan), Green Globes (United States), Green Star (Australia), LEED (launched in United States and used in countries such as Canada, China, India and Mexico), Living Building Challenge (United States and Canada), the NAHB – National Green Building Program (United States), and the SB Tool (Canada and UK).
In most cases, the rating systems offer credits/points for the use of wood in the following areas: certified wood; recycled/reused/salvaged materials; and local sourcing of materials. In some cases, building techniques and skills (such as advanced framing) and waste minimization are recognized, and most demand that all wood adhesives, resins, engineered and composite products contain no added urea formaldehyde and have strict limits on VOC (volatile organic compound) content.
LEED certified wood credit
In December 2010, the U.S. Green Building Council failed to get enough yes votes from members for a proposed rewrite of the certified wood policy in its Leadership in Energy and Environmental Design (LEED) rating system. Since its inception, LEED has only accepted wood certified to Forest Stewardship Council standards.
The two largest third-party forest certification standards in the United States – the Forest Stewardship Council (FSC) and the Sustainable Forestry Initiative (SFI) – opposed the proposed benchmarks. FSC questioned their rigor and SFI claimed the process was overly detailed and complex.
A number of organizations, including the National Association of State Foresters, the Canadian Institute of Forestry, and the Society of American Foresters called for LEED to recognize all credible certification programs to encourage the use of wood as a green building material.
In its 2008-2009 Forest Products Annual Review, the United Nations Economic Commission for Europe/Food and Agriculture Organization stated that green building initiatives (GBI) can be a mixed blessing for wood products. “GBI standards giving exclusive recognition to particular forest-certification brands may help drive demand for these brands at the expense of wider appreciation of the environmental merits of wood.” In its 2009-2010 review, the UNECE/FAO reported growing convergence between certification systems: "Over the years, many of the issues that previously divided the (certification) systems have become much less distinct. The largest certification systems now generally have the same structural programmatic requirements."
See also
Formaldehyde
Pollution
Power tool
Sawdust (wood dust) – health hazards of sawdust
Solvent
TVOC
Wood glue
Wood preservative – health hazards of wood preservative
References
eRating

eRating is a certification, education, and labeling program for passenger vehicles in the United States. It was developed by Certification for Sustainable Transport (CST) at the University of Vermont. CST uses eRating to rate vehicles based on several criteria. These include greenhouse gas emissions per passenger mile, criteria pollutant emissions, alternative fuels, purchase of carbon offsets, and training programs that promote energy efficient driving.
If the vehicle meets enough standards, it receives an eRating certification.
Program description
The eRating program is an independent, third-party certification, education and labeling initiative for owners and operators, manufacturers and passengers of transportation vehicles. The eRating also functions as a sustainability index that weighs factors such as greenhouse gases per passenger mile, environmental impacts, and the use of alternative fuels and technologies in the transportation industry.

Two major works were used in developing the eRating: the Federal Trade Commission's Part 260 – Guides for the Use of Environmental Marketing Claims, and the International Social and Environmental Accreditation and Labeling Alliance (ISEAL) planning framework. In May 2012, CST finalized Step B-3 and transitioned to Step E-1 to launch the eRating program. The CST was designed to help improve economic, environmental, and energy efficiency within the passenger transportation sector. The program uses research-based criteria to evaluate vehicles and includes the driver training programs "Idle Free" and "Eco-Driving 101" to improve efficiency.
Idle Free Training Program
This course teaches drivers about the health, environmental, and financial impacts of idling a vehicle. Health experts, vehicle manufacturers, and vehicle operators have all given their testimony about the advantages of idle-free driving. Upon completion of the course, drivers are then able to take an "idle-free pledge", in which they promise to follow the idling guidelines set forth in the course. Completion of this course earns 20 points towards an eRating certification.
Eco-Driving 101 Training Program
This course teaches drivers about eco-driving. Eco-driving is a set of simple driving habits that result in using less fuel, generating fewer emissions, and increasing safety. The course first explains the science behind eco-driving, as well as the environmental and mechanical benefits of doing so. Drivers are taught techniques they can use in their everyday driving to cut back on fuel consumption, such as avoiding aggressive acceleration, speeding and braking; monitoring speed to maintain an efficient and consistent pace; keeping RPM levels as low as possible for the speed; and keeping the vehicle properly maintained.

The typical eco-driver can increase fuel efficiency by 10–30%. For organizations putting their drivers through this online training program, 20 points will be awarded towards the organization's eRating certification once they reach the 80% driver completion threshold.
Benefits
The program benefits owners, operators, and manufacturers by helping them reduce vehicle operation costs, save energy, and promote their businesses. The program also benefits consumers by helping them identify and choose the highest performing, lowest impact forms of transportation. Whether displayed on a bus, boat, train, car, bicycle, or plane, an eRating certification label signifies that CST has thoroughly evaluated and certified the vehicle.
eRating levels and criteria
Various features of the vehicle being considered for certification are examined. There are four levels of certification in the eRating program: e1, e2, e3, and e4, where e1 is the entry level and e4 the highest. The CST uses a point system, or sustainability index, to determine the certification level of a given vehicle. Points are given for more efficient features and attributes of the vehicle, with 100 points required for entry-level (e1) certification. Only the most energy efficient, lowest-impact forms of transportation are eligible for certification; a certification label on a vehicle, be it e1, e2, e3, or e4, lets consumers know that the vehicle has met a set of rigorous sustainability criteria.
Certification standards overview
The eRating program aims to provide recognition through certification to transportation systems, fleets, operators and individual vehicles that help the passenger transportation sector:
Reduce greenhouse gas and other harmful emissions
Increase energy efficiency
Utilize alternative fuels and new technologies

The eRating program offers four levels of certification on a per vehicle basis to qualifying operators: e1, e2, e3 and e4; e1 represents entry-level certification and e4 indicates the highest level available. The application process determines which level of certification an operator qualifies for. Points are earned based on the following:
Vehicle technology
Operation of the vehicle(s) at certain efficiency levels
Use of particular operating procedures within a company
Use of specific policies and educational programs within a company
Greenhouse gas emissions per passenger mile
Points are earned if the vehicle's greenhouse gas emission levels are at least 50% below the U.S. average for 2000–2009 (0.274 kg per passenger mile). To qualify, a vehicle must average at least 148 passenger miles per gallon. Higher levels of efficiency earn more points.
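The passenger-mile metric can be estimated from a vehicle's fuel economy and its average occupancy. The sketch below is illustrative only; the tailpipe factor of roughly 8.9 kg of CO2 per US gallon of gasoline is an assumed, commonly cited figure and is not taken from the CST program documents:

```python
# Illustrative conversion from fuel economy and occupancy to CO2 per passenger mile.
CO2_PER_GALLON_KG = 8.9                               # assumed gasoline tailpipe factor
US_AVERAGE_KG_PER_PASSENGER_MILE = 0.274              # 2000-2009 U.S. average quoted above
THRESHOLD = US_AVERAGE_KG_PER_PASSENGER_MILE * 0.5    # "at least 50% below the U.S. average"

def kg_co2_per_passenger_mile(miles_per_gallon: float, average_occupancy: float) -> float:
    """Estimate tailpipe CO2 per passenger mile from fuel economy and average occupancy."""
    passenger_miles_per_gallon = miles_per_gallon * average_occupancy
    return CO2_PER_GALLON_KG / passenger_miles_per_gallon

# Example: a motor coach averaging 6 mpg with 40 passengers on board
emissions = kg_co2_per_passenger_mile(miles_per_gallon=6.0, average_occupancy=40.0)
print(f"{emissions:.3f} kg CO2 per passenger mile; "
      f"{'meets' if emissions <= THRESHOLD else 'misses'} the 50%-below-average threshold")
```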
Criteria pollutant emissions
Points are awarded for the use of technologies that reduce emissions such as carbon monoxide, sulfur dioxide, volatile organic compounds and nitrogen oxide. Heavily polluting vehicles, such as those with leaky exhaust systems or that produce excessive amounts of smoke, are automatically disqualified from certification.
Alternative fuels
The use of alternative fuels other than gasoline or diesel earns a vehicle from 5 to 100 points towards certification. Qualifying fuels must be used a minimum of 80% of the time.
Greenhouse gas offsets
Optional points can be earned by purchasing greenhouse gas offsets from endorsed carbon-trading programs. These GHG reduction credits must be purchased through the Climate Action Reserve or Verified Carbon Standard or from another verified organization. These credits must be purchased in the region of intended use and must not be more than two years old. Generally 1 point will be awarded toward certification for every 5% of emissions offset.
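A minimal sketch of how the point tally described above might translate into a certification level is shown below. The 20-point training awards, the roughly 1 point per 5% of emissions offset, and the 100-point e1 entry threshold come from the program description; the e2–e4 thresholds and the 60-point emissions score are placeholders (assumptions), since the source does not spell them out:

```python
def offset_points(percent_emissions_offset: float) -> int:
    """Roughly 1 point per 5% of emissions offset, as described above."""
    return int(percent_emissions_offset // 5)

def certification_level(points: int):
    # Only the 100-point e1 threshold is given in the source; higher thresholds are assumed.
    for level, minimum in (("e4", 400), ("e3", 300), ("e2", 200), ("e1", 100)):
        if points >= minimum:
            return level
    return None  # not certified

total = 0
total += 20                  # "Idle Free" training completed
total += 20                  # "Eco-Driving 101" completed by at least 80% of drivers
total += 60                  # assumed score for emissions well below 0.137 kg/passenger mile
total += offset_points(25)   # 25% of remaining emissions offset -> 5 points

print(total, certification_level(total))  # 105 e1
```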
References
Energy transition

An energy transition (or energy system transformation) is a significant structural change in an energy system regarding supply and consumption. Currently, a transition to sustainable energy (mostly renewable energy) is underway to limit climate change. It is also called the renewable energy transition. The current transition is driven by a recognition that global greenhouse-gas emissions must be drastically reduced. This process involves phasing down fossil fuels and re-developing whole systems to operate on low carbon electricity. A previous energy transition took place during the industrial revolution and involved a shift from wood and other biomass to coal, followed by oil and most recently natural gas.

As of 2019, 85% of the world's energy needs are met by burning fossil fuels. Energy production and consumption are responsible for 76% of annual human-caused greenhouse gas emissions as of 2018. To meet the goals of the 2015 Paris Agreement on climate change, emissions must be reduced as soon as possible and reach net-zero by mid-century. Since the late 2010s, the renewable energy transition has also been driven by the rapidly increasing competitiveness of both solar and wind power. Another motivation for the transition is to limit other environmental impacts of the energy industry.

The renewable energy transition includes a shift from internal combustion engine powered vehicles to more public transport, reduced air travel and electric vehicles. Building heating is being electrified, with heat pumps as the most efficient technology by far. For flexibility at electrical grid scale, energy storage and super grids are vital to allow for variable, weather-dependent technologies.
Definition
An energy transition is a broad shift in technologies and behaviours that are needed to replace one source of energy with another. A prime example is the change from a pre-industrial system relying on traditional biomass, wind, water and muscle power to an industrial system characterized by pervasive mechanization, steam power and the use of coal.
The IPCC does not define energy transition in the glossary of its Sixth Assessment Report but it does define transition as: "The process of changing from one state or condition to another in a given period of time. Transition can occur in individuals, firms, cities, regions and nations, and can be based on incremental or transformative change."
Development of the term
After the 1973 oil crisis, the term energy transition was coined by politicians and media. It was popularised by US President Jimmy Carter in his 1977 Address to the Nation on Energy, calling to "look back into history to understand our energy problem. Twice in the last several hundred years, there has been a transition in the way people use energy ... Because we are now running out of gas and oil, we must prepare quickly for a third change to strict conservation and to the renewed use of coal and to permanent renewable energy sources like solar power." The term was later globalised after the second oil shock of 1979, during the 1981 United Nations Conference on New and Renewable Sources of Energy.

From the 1990s, debates on energy transition have increasingly taken climate change mitigation into account. Since the adoption of the Paris Agreement in 2015, all 196 participating parties have agreed to reach carbon neutrality by mid-century. Parties to the agreement committed to limit global warming to "well below 2 °C, preferably 1.5 °C compared to pre-industrial levels". This requires a rapid energy transition with a downshift of fossil fuel production to stay within the carbon emissions budget.

In this context, the term energy transition encompasses a reorientation of energy policy. This could imply a shift from centralized to distributed generation. It also includes attempts to replace overproduction and avoidable energy consumption with energy-saving measures and increased efficiency.

The historical transitions from locally supplied wood, water and wind energies to globally supplied fossil and nuclear fuels have induced growth in end-use demand through the rapid expansion of engineering research, education and standardisation. The mechanisms for the whole-systems changes include new discipline in Transition Engineering amongst all engineering professions, entrepreneurs, researchers and educators.
Examples of past energy transitions
Historic approaches to past energy transitions are shaped by two main discourses. One argues that humankind experienced several energy transitions in its past, while the other suggests the term "energy additions" as better reflecting the changes in global energy supply in the last three centuries.
The chronologically first discourse was most broadly described by Vaclav Smil. It underlines the change in the energy mix of countries and the global economy. By looking at data in percentages of the primary energy source used in a given context, it paints a picture of the world's energy systems as having changed significantly over time, going from biomass to coal, to oil, and now a mix of mostly coal, oil and natural gas. Until the 1950s, the economic mechanism behind energy systems was local rather than global. Historically, there is a correlation between an increasing demand for energy and the availability of different energy sources.

The second discourse was most broadly described by Jean-Baptiste Fressoz. It emphasises that the term "energy transition" was first used by politicians, not historians, to describe a goal to achieve in the future – not as a concept to analyse past trends. When looking at the sheer amount of energy being used by humankind, the picture is one of ever-increasing consumption of all the main energy sources available to humankind. For instance, the increased use of coal in the 19th century did not replace wood consumption; indeed, more wood was burned. Another example is the deployment of passenger cars in the 20th century. This evolution triggered an increase in both oil consumption (to drive the car) and coal consumption (to make the steel needed for the car). In other words, according to this approach, humankind never performed a single energy transition in its history but performed several energy additions.
Contemporary energy transitions differ in terms of motivation and objectives, drivers and governance. As development progressed, different national systems became more and more integrated, becoming the large, international systems seen today. Historical changes of energy systems have been extensively studied. While historical energy changes were generally protracted affairs, unfolding over many decades, this does not necessarily hold true for the present energy transition, which is unfolding under very different policy and technological conditions.

For current energy systems, many lessons can be learned from history. The need for large amounts of firewood in early industrial processes, in combination with prohibitive costs for overland transportation, led to a scarcity of accessible (e.g. affordable) wood, and eighteenth-century glass-works "operated like a forest clearing enterprise". When Britain had to resort to coal after largely having run out of wood, the resulting fuel crisis triggered a chain of events that two centuries later culminated in the Industrial Revolution. Similarly, increased use of peat and coal were vital elements paving the way for the Dutch Golden Age, roughly spanning the entire 17th century. Another example where resource depletion triggered technological innovation and a shift to new energy sources is 19th-century whaling: whale oil was eventually replaced by kerosene and other petroleum-derived products. To speed up the energy transition, it is also conceivable that there will be government buyouts or bailouts of coal mining regions.
Drivers for current energy transition
A rapid energy transition to very-low or zero-carbon sources is required to mitigate the effects of climate change. Coal, oil and gas combustion account for 89% of CO2 emissions and still provide 78% of primary energy consumption. In spite of the knowledge about the risks of climate change since the 1980s and the vanishing carbon budget for a 1.5 °C path, the global deployment of renewable energy could not catch up with the increasing energy demand for many years. Coal, oil and gas were cheaper. Only in countries with special tariffs and subsidies did wind and solar power gain a considerable share, limited to the power sector.

From 2010 to 2019, the competitiveness of wind and solar power massively increased. Unit costs of solar energy dropped sharply by 85%, wind energy by 55%, and lithium-ion batteries by 85%. This makes wind and solar power the cheapest form for new installations in many regions. Levelized costs for photovoltaics combined with a few hours of storage are already lower than for gas peaking power plants. In 2021, the new electricity generating capacity of renewables exceeded 80% of all installed power.

Another important driver is energy security and independence, which has gained importance in Europe because of the 2022 Russian invasion of Ukraine.

The deployment of renewable energy can also bring co-benefits of climate change mitigation: positive socio-economic effects on employment, industrial development, health and energy access. Depending on the country and the deployment scenario, replacing coal power plants can more than double the number of jobs per average MW capacity. In non-electrified rural areas, the deployment of solar mini-grids can significantly improve electricity access. Employment opportunities from the green transition are associated with the use of renewable energy sources or building activity for infrastructure improvements and renovations. Additionally, the replacement of coal-based energy with renewables can lower the number of premature deaths caused by air pollution and reduce health costs.

Governments' ambition to attract international support for green growth initiatives and public demand for a clean environment have been found to be drivers of the energy transition in developing countries, such as Vietnam.
Key technologies and approaches
The emissions reductions necessary to keep global warming below 2 °C will require a system-wide transformation of the way energy is produced, distributed, stored, and consumed. For a society to replace one form of energy with another, multiple technologies and behaviours in the energy system must change. Many climate change mitigation pathways envision three main aspects of a low-carbon energy system:
The use of low-emission energy sources to produce electricity
Electrification – that is increased use of electricity instead of directly burning fossil fuels
Accelerated adoption of energy efficiency measures
Renewable energy
The most important energy sources in the low carbon energy transition are wind power and solar power. They could reduce net emissions by 4 billion tons of CO2 equivalent per year each, half of it at lower net lifetime costs than the reference. Other renewable energy sources include bioenergy, geothermal energy and tidal energy, but they currently have higher net lifetime costs.

As of 2022, hydroelectricity is the largest source of renewable electricity in the world, having provided 16% of the world's total electricity in 2019. However, because of its heavy dependence on geography and the generally high environmental and social impact of hydroelectric power plants, the growth potential of this technology is limited. Wind and solar power are considered more scalable, but still require vast quantities of land and materials; they have higher potential for growth. These sources have grown nearly exponentially in recent decades thanks to rapidly decreasing costs. In 2019, wind power supplied 5.3% of worldwide electricity while solar power supplied 2.6%.

While production from most types of hydropower plants can be actively controlled, production from wind and solar power depends on the weather. Electrical grids must be extended and adjusted to avoid wastage. Dammed hydropower is a dispatchable source, while solar and wind are variable renewable energy sources. These sources require dispatchable backup generation or energy storage to provide continuous and reliable electricity. For this reason, storage technologies also play a key role in the renewable energy transition. As of 2020, the largest scale storage technology is pumped storage hydroelectricity, accounting for the great majority of energy storage capacity installed worldwide. Other important forms of energy storage are electric batteries and power-to-gas.
Integration of variable renewable energy sources
With the integration of renewable energy, local electricity production is becoming more variable. It has been recommended that "coupling sectors, energy storage, smart grids, demand side management, sustainable biofuels, hydrogen electrolysis and derivatives will ultimately be needed to accommodate large shares of renewables in energy systems". Fluctuations can be smoothed by combining wind and solar power and by extending electricity grids over large areas. This reduces the dependence on local weather conditions.
With highly variable prices, electricity storage and grid extension become more competitive. Researchers have found that "costs for accommodating the integration of variable renewable energy sources in electricity systems are expected to be modest until 2030". Furthermore, "it will be more challenging to supply the entire energy system with renewable energy". Fast fluctuations increase with a high integration of wind and solar energy. They can be addressed by operating reserves. Large-scale batteries can react within seconds and are increasingly used to keep the electricity grid stable.
100% renewable energy
Nuclear power
In the 1970s and 1980s, nuclear power gained a large share in some countries. In France and Slovakia, more than half of the electrical power is still nuclear. It is a low carbon energy source but comes with risks and increasing costs. Since the late 1990s, deployment has slowed down. Decommissioning is increasing as many reactors are close to the end of their lifetime. Germany shut down its last three nuclear power plants in mid-April 2023. On the other hand, the China General Nuclear Power Group is aiming for 200 GW by 2035, produced by 150 additional reactors.
Economic and geopolitical aspects
A shift in energy sources has the potential to redefine relations and dependencies between countries, stakeholders and companies. Countries or land owners with resources – fossil or renewable – face massive losses or gains depending on the development of any energy transition. In 2021, energy costs reached 13% of global gross domestic product.
Global rivalries have contributed to the driving forces of the economics behind the low carbon energy transition. Technological innovations developed within a country have the potential to become an economic force.
Influences
The energy transition discussion is heavily influenced by contributions from the fossil fuel industries.
One way that oil companies are able to continue their work despite growing environmental, social and economic concerns is by lobbying local and national governments.
Historically, the fossil fuel lobby has been highly successful in limiting regulations. From 1988 to 2005, Exxon Mobil, one of the largest oil companies in the world, spent nearly $16 million on anti-climate change lobbying and on providing misleading information about climate change to the general public. The fossil fuel industry acquires significant support through the existing banking and investment structure. The concept that the industry should no longer be financially supported has led to the social movement known as divestment. Divestment is defined as the removal of investment capital from stocks, bonds or funds in oil, coal and gas companies for both moral and financial reasons.

Banks, investing firms, governments, universities, institutions and businesses are all being challenged with this new moral argument against their existing investments in the fossil fuel industry, and many – such as the Rockefeller Brothers Fund, the University of California and New York City – have begun making the shift to more sustainable, eco-friendly investments.
Social and environmental aspects
Impacts
A renewable energy transition can present negative social impacts for some people who rely on the existing energy economy or who are affected by mining for minerals required for the transition. This has led to calls for a just transition, which the IPCC defines as "a set of principles, processes and practices that aim to ensure that no people, workers, places, sectors, countries or regions are left behind in the transition from a high-carbon to a low carbon economy."

Use of local energy sources may stabilise and stimulate some local economies, create opportunities for energy trade between communities, states and regions, and increase energy security.

Coal mining is economically important in some regions, and a transition to renewables would decrease its viability and could have severe impacts on the communities that rely on this business. Not only do these communities face energy poverty already, but they also face economic collapse when the coal mining businesses move elsewhere or disappear altogether. This broken system perpetuates the poverty and vulnerability that decrease the adaptive capacity of coal mining communities. Potential mitigation could include expanding the program base for vulnerable communities to assist with new training programs, opportunities for economic development and subsidies to assist with the transition.

Increasing energy prices resulting from an energy transition may negatively impact developing countries, including Vietnam and Indonesia. Increased mining for lithium, cobalt, nickel, copper, and other critical minerals needed for the expansion of renewable energy infrastructure has created increased environmental conflict and environmental justice issues for some communities.
Labour
A large portion of the global workforce works directly or indirectly for the fossil fuel economy. Moreover, many other industries are currently dependent on unsustainable energy sources (such as the steel industry or cement and concrete industry). Transitioning these workforces during the rapid period of economic change requires considerable forethought and planning. The international labor movement has advocated for a just transition that addresses these concerns.
More recently, an energy crisis has hit the nations of Europe as a result of dependence on Russia's natural gas, which was cut off during the Russia–Ukraine war. This goes to show that humanity is still heavily dependent on fossil fuel energy sources, and care should be taken to ensure a smooth transition, lest energy-shortage shocks cripple the very efforts to effectively energise the transition.
Risks and barriers
Despite the widespread understanding that a transition to low carbon energy is necessary, there are a number of risks and barriers to making it more appealing than conventional energy. Low carbon energy rarely comes up as a solution beyond combating climate change, but it has wider implications for food security and employment. This further underscores the recognized dearth of research into clean energy innovations that could lead to quicker transitions. Overall, the transition to renewable energy requires a shift among governments, business, and the public. Altering public bias may mitigate the risk of subsequent administrations de-transitioning, perhaps through public awareness campaigns or carbon levies.

Amongst the key issues to consider in relation to the pace of the global transition to renewables is how well individual electric companies are able to adapt to the changing reality of the power sector. For example, to date, the uptake of renewables by electric utilities has remained slow, hindered by their continued investment in fossil fuel generation capacity.

Incomplete regulations on clean energy uptake and concerns about electricity shortages have been identified as key barriers to the energy transition in coal-dependent, fast developing economies such as Vietnam.
Examples by country
From 2000 to 2012, coal was the source of energy with the largest total growth. The use of oil and natural gas also grew considerably, followed by hydropower and renewable energy. Renewable energy grew at a rate faster than at any other time in history during this period. The demand for nuclear energy decreased, in part due to fear mongering and inaccurate media portrayal of some nuclear disasters (Three Mile Island in 1979, Chernobyl in 1986, and Fukushima in 2011).
More recently, consumption of coal has declined relative to low carbon energy. Coal dropped from about 29% of the global total primary energy consumption in 2015 to 27% in 2017, and non-hydro renewables were up to about 4% from 2%.
Asia
India
Under the Paris climate accords, India has set a goal of meeting 50% of its total energy consumption from renewable sources. As of 2022, the Central Electricity Authority is well on track to achieve this goal, with 160 GW of generating capacity from clean sources such as solar, wind, hydro and nuclear power plants – about 40% of its total capacity. India is ranked third on Ernst and Young's renewable energy country attractiveness index, behind the USA and China.
Hydroelectric power plants have been a major part of India's energy infrastructure since its independence in 1947. Former Prime Minister Jawaharlal Nehru called them the "temples of modern India" and believed them to be key drivers of modernity and industrialism for the nascent republic. Notable examples of hydro power stations include the 2,400 MW Tehri hydropower complex, the 1,960 MW Koyna hydroelectric project and the 1,670 MW Srisailam Dam. Recently, India has given due importance to emerging renewable technologies such as solar power plants and wind farms. It houses three of the world's top five solar farms, including the world's largest, the 2,255 MW Bhadla Solar Park, the world's second-largest, the 2,000 MW Pavagada Solar Park, and the 1,000 MW Kurnool Ultra Mega Solar Park.
While there has been positive change, India has to cut down its reliance on traditional coal-based power production, which still accounts for around 50% of its energy production. India is also moving towards its goal of electrifying the automotive industry, aiming for at least 30% EV ownership among private vehicles by 2030.
Europe
European Union
The European Green Deal is a set of policy initiatives by the European Commission with the overarching aim of making Europe climate neutral by 2050. An impact-assessed plan will also be presented to increase the EU's greenhouse gas emission reduction target for 2030 to at least 50% and towards 55% compared with 1990 levels. The plan is to review each existing law on its climate merits, and also to introduce new legislation on the circular economy, building renovation, biodiversity, farming and innovation. The president of the European Commission, Ursula von der Leyen, stated that the European Green Deal would be Europe's "man on the Moon moment", as the plan would make Europe the first climate-neutral continent.

A survey found that digitally advanced companies put more money into energy-saving strategies. In the European Union, 59% of companies that have made investments in both basic and advanced technologies have also invested in energy efficiency measures, compared to only 50% of US firms in the same category. Overall, there is a significant disparity between businesses' digital profiles and their investments in energy efficiency.
Germany
Germany has played an outsized role in the transition away from fossil fuels and nuclear power to renewables. The energy transition in Germany is known as die Energiewende (literally, "the energy turn"), indicating a turn away from old fuels and technologies to new ones. The key policy document outlining the Energiewende was published by the German government in September 2010, some six months before the Fukushima nuclear accident; legislative support was passed in September 2010.
The policy has been embraced by the German federal government and has resulted in a huge expansion of renewables, particularly wind power. Germany's share of renewables increased from around 5% in 1999 to 17% in 2010, reaching close to the OECD average of 18%. In 2022, Germany reached a share of 46.2% and surpassed the OECD average.

A large driver of this increase in the share of renewable energy has been the decrease in the cost of capital. Germany has some of the lowest costs of capital for solar and onshore wind energy worldwide. In 2021, the International Renewable Energy Agency reported costs of capital of around 1.1% and 2.4% for solar and onshore wind respectively. This constitutes a significant decrease from the early 2000s, when costs of capital hovered around 5.1% and 4.5% respectively. The decrease was influenced by a variety of economic and political drivers. Following the global financial crisis of 2008–2009, Germany eased refinancing regulations on banks, giving out cheap loans with low interest rates in order to stimulate the economy.

During this period, the industry around renewable energies also started to experience learning effects in manufacturing, project organisation and financing thanks to rising investment and order volumes. This, coupled with various forms of subsidies, contributed to a large reduction in the cost of capital and the levelized cost of electricity (LCOE) for solar and onshore wind power. As the technologies have matured and become integral parts of the existing sociotechnical systems, it is to be expected that experience effects and general interest rates will be key determinants of the cost-competitiveness of these technologies in the future.

Producers have been guaranteed a fixed feed-in tariff for 20 years, ensuring a stable income. Energy co-operatives have been created, and efforts were made to decentralize control and profits. The large energy companies have a disproportionately small share of the renewables market. Nuclear power stations were closed, and the existing nine stations will close earlier than necessary, in 2022.
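The link between the cost of capital and the levelized cost of electricity can be illustrated with a simplified calculation. All input figures in the sketch below are illustrative assumptions, not values from the IRENA report cited above:

```python
# Simplified levelized-cost sketch showing why a lower cost of capital matters so much for
# capital-intensive technologies such as solar and onshore wind.

def lcoe(capex_per_kw, discount_rate, lifetime_years, capacity_factor, opex_per_kw_year=0.0):
    """Levelized cost of electricity (currency per kWh)."""
    # Capital recovery factor: spreads the upfront investment over the plant's lifetime.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years) / \
          ((1 + discount_rate) ** lifetime_years - 1)
    annual_cost = capex_per_kw * crf + opex_per_kw_year   # per kW of capacity, per year
    annual_energy_kwh = capacity_factor * 8760            # kWh generated per kW, per year
    return annual_cost / annual_energy_kwh

# The same hypothetical solar plant financed at ~5.1% (early-2000s conditions)
# versus ~1.1% (the 2021 figure quoted above)
for rate in (0.051, 0.011):
    print(f"cost of capital {rate:.1%}: LCOE = {lcoe(800, rate, 25, 0.11):.3f} per kWh")
```

With these assumed inputs, dropping the cost of capital from about 5% to about 1% cuts the levelized cost by roughly a third, even though the plant itself is unchanged.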
The reduction of reliance on nuclear stations has had the consequence of increased reliance on fossil fuels. One factor that has inhibited the efficient use of new renewable energy has been the lack of accompanying investment in power infrastructure to bring the power to market; it is believed that 8,300 km of power lines must be built or upgraded. The different Länder have varying attitudes to the construction of new power lines. Industry has had its rates frozen, so the increased costs of the Energiewende have been passed on to consumers, who have seen rising electricity bills. Germans in 2013 had some of the highest electricity costs in Europe. Nonetheless, for the first time in more than ten years, electricity prices for household customers fell at the beginning of 2015.
Switzerland
Due to the high share of hydroelectricity (59.6%) and nuclear power (31.7%) in electricity production, Switzerland's per capita energy-related CO2 emissions are 28% lower than the European Union average and roughly equal to those of France. On 21 May 2017, Swiss voters accepted the new Energy Act establishing the 'energy strategy 2050'. The aims of the energy strategy 2050 are: to reduce energy consumption; to increase energy efficiency; and to promote renewable energies (such as water, solar, wind and geothermal power as well as biomass fuels). The Energy Act of 2006 forbids the construction of new nuclear power plants in Switzerland.
United Kingdom
By law, production of greenhouse gas emissions by the United Kingdom will be reduced to net zero by 2050. To help reach this statutory goal, national energy policy is mainly focused on the country's offshore wind power and on delivering new and advanced nuclear power. The increase in national renewable power – particularly from biomass – together with the 20% of electricity generated by nuclear power in the United Kingdom meant that by 2019 low carbon British electricity had overtaken that generated by fossil fuels.

In order to meet the net zero target, energy networks must be strengthened. Electricity is only a part of energy use in the United Kingdom, so natural gas used for industrial and residential heat and petroleum used for transport must also be replaced by either electricity or another form of low-carbon energy, such as sustainable bioenergy crops or green hydrogen.

Although the need for the energy transition is not disputed by any major political party, in 2020 there was debate about how much of the funding to escape the COVID-19 recession should be spent on the transition, and how many jobs could be created, for example in improving energy efficiency in British housing. Some believe that, due to post-COVID government debt, funding for the transition will be insufficient. Brexit may significantly affect the energy transition, but this is unclear as of 2020. The government is urging UK business to sponsor the climate change conference in 2021, possibly including energy companies, but only if they have a credible short-term plan for the energy transition.
See also
References
Sources
Hydrofluorocarbon

Hydrofluorocarbons (HFCs) are man-made organic compounds that contain fluorine and hydrogen atoms, and are the most common type of organofluorine compounds. Most are gases at room temperature and pressure. They are frequently used in air conditioning and as refrigerants; R-134a (1,1,1,2-tetrafluoroethane) is one of the most commonly used HFC refrigerants. In order to aid the recovery of the stratospheric ozone layer, HFCs were adopted to replace the more potent chlorofluorocarbons (CFCs), which were phased out from use by the Montreal Protocol, and hydrochlorofluorocarbons (HCFCs) which are presently being phased out. HFCs replaced older chlorofluorocarbons such as R-12 and hydrochlorofluorocarbons such as R-21. HFCs are also used in insulating foams, aerosol propellants, as solvents and for fire protection.
They may not harm the ozone layer as much as the compounds they replace, but they still contribute to global warming; some, such as trifluoromethane, have 11,700 times the warming potential of carbon dioxide. Their atmospheric concentrations and contribution to anthropogenic greenhouse gas emissions are rapidly increasing, causing international concern about their radiative forcing.
Chemistry
Fluorocarbons with few C–F bonds behave similarly to the parent hydrocarbons, but their reactivity can be altered significantly. For example, both uracil and 5-fluorouracil are colourless, high-melting crystalline solids, but the latter is a potent anti-cancer drug. The use of the C-F bond in pharmaceuticals is predicated on this altered reactivity. Several drugs and agrochemicals contain only one fluorine center or one trifluoromethyl group.
Environmental regulation
Unlike other greenhouse gases covered by the Paris Agreement, hydrofluorocarbons are included in other international negotiations. In September 2016, the New York Declaration on Forests urged a global reduction in the use of HFCs. On 15 October 2016, due to these chemicals' contribution to climate change, negotiators from 197 nations meeting at a summit of the United Nations Environment Programme in Kigali, Rwanda reached a legally binding accord (the Kigali Amendment) to phase down hydrofluorocarbons (HFCs) in an amendment to the Montreal Protocol. As of February 2020, 16 U.S. states ban or are phasing down HFCs.

COVID-19 relief legislation, which included a measure requiring chemical manufacturers to phase down the production and use of HFCs, was passed by the United States House of Representatives and United States Senate on December 21, 2020. The U.S. Environmental Protection Agency signed a final rule phasing down HFCs on September 23, 2021.
See also
Greenhouse gas § Sources - comparative chart
References
Alliance of Small Island States

Alliance of Small Island States (AOSIS) is an intergovernmental organization of low-lying coastal and small island countries. AOSIS was established in 1990, ahead of the Second World Climate Conference. The main purpose of the alliance is to consolidate the voices of Small Island Developing States (SIDS) to address global warming.
These island countries are particularly vulnerable to climate change and its related effects on the ocean, including sea level rise, coastal erosion and saltwater intrusion. The members are among the nations least responsible for climate change, having contributed less than 1% to the world's greenhouse gas emissions. These states advocate for international policy and mechanisms for addressing the inequity of climate impacts.
Organization
AOSIS functions primarily as an ad hoc lobby and negotiating voice for SIDS through the United Nations (UN) system. It has no regular budget, permanent secretariat or formal charter. There is a Bureau, made up of the chairperson and two vice-chairs. AOSIS also uses partnerships, for example with the United Nations Development Programme and the European Commission.
Mission
AOSIS' core focus areas are climate change, sustainable development and ocean conservation. SIDS are among the nations least responsible for climate change, having contributed less than 1% of the world's greenhouse gas emissions. They are particularly vulnerable to its effects, with some islands at risk of becoming uninhabitable due to sea level rise. AOSIS has consistently raised this threat of uninhabitability in climate negotiations.

SIDS, which make up most of AOSIS' membership, account for less than 1% of global GDP, territory and population, meaning that individually they hold little political weight in international climate negotiations. The aim of AOSIS is to amplify the voices of its members by joining together states which face similar issues, increasing their ability to influence climate negotiations and to raise awareness of their concerns.
Actions
AOSIS has been very active from its inception. It has played a leading role in the global arena in raising awareness of climate change and advocating for action to address it. The creation of the alliance marked the beginning of the growth in influence of SIDS in climate politics. Despite their size and their relatively small economic and political weight, AOSIS member states have punched above their weight in climate change negotiations.

AOSIS played an important role in establishing the United Nations Framework Convention on Climate Change (UNFCCC) and was an important actor in the negotiations of the Framework in 1992. Its advocacy was instrumental to the inclusion of references to the greater vulnerability and special needs of SIDS in Article 4.8 of the UNFCCC. However, AOSIS was unsuccessful in its attempts to persuade nations to include commitments to specified greenhouse gas emission reduction targets in the Framework.

AOSIS continued to advocate for the special needs of SIDS during the Earth Summit in Rio de Janeiro in 1992, and the 'special case' of SIDS was recognised in Agenda 21, the political action plan which resulted from the Summit. AOSIS' proposal to create an 'international insurance fund', funded by developed countries to compensate SIDS for damage caused by climate change, was turned down. In Rio, AOSIS broadened its mandate beyond climate change to also include the sustainable development of SIDS. AOSIS negotiated for the inclusion of a small programme area on the sustainable development of small islands in Agenda 21. Agenda 21 was not legally binding, and some academics contend that the programme was too vague to promote meaningful action.

AOSIS did manage to secure the inclusion in Agenda 21 of a call for a global conference on this issue, which led to the first Global Conference on the Sustainable Development of Small Island States, held in Barbados in 1994. AOSIS played a prominent role at the Conference, which was the first UN conference entirely devoted to SIDS. The Conference resulted in the translation of Agenda 21 into a more comprehensive programme, the Barbados Programme of Action on the Sustainable Development of Small Island Developing States. The five-year review of the Barbados Conference, conducted at a special session of the UN General Assembly in 1999, found that SIDS' efforts to make progress towards sustainable development had been limited, while the ten-year review, which took the form of an international meeting in Mauritius in 2005, found that its implementation was largely unsuccessful.

AOSIS put forward the first draft text in the Kyoto Protocol negotiations as early as 1994. AOSIS member states Fiji and Antigua and Barbuda were the first states to ratify the Kyoto Protocol in 1998.

AOSIS has used formal and informal meetings scheduled in advance of UN climate change conferences to raise awareness and political momentum for its mission. It has also used the media to raise awareness of its concerns. For example, in the lead-up to the 2009 UN Climate Change Conference (UNFCCC) in Copenhagen, members of the cabinet of the Maldives, an AOSIS member state, held an underwater cabinet meeting to create awareness of the threat that climate change poses to the very existence of the Maldives. The stunt garnered international attention.

At the UN Climate Change Conference in Berlin in 1995, AOSIS advocated very strongly for a commitment to timetables and target measures for climate change.
It gained the support of developing nations including China, Brazil, and India. AOSIS had advocated since 2008 for the inclusion of a temperature target to restrain global warming to 1.5 °C above pre-industrial levels. Many of the AOSIS member states were present at the Conference in Copenhagen. Democracy Now! reported that members from the island state of Tuvalu interrupted a session of the Conference on 10 December 2009 to demand that global temperature rise be limited to 1.5 °C instead of the proposed 2 °C. This advocacy continued in the lead-up to the 2015 UNFCCC conference in Paris. AOSIS initiated the negotiating agenda item which would lead to the inclusion of the 1.5 °C target and was important in gaining support for its inclusion from vulnerable African and Asian countries and least developed countries. According to writer and activist Mark Lynas, the inclusion of the 1.5 °C target in the Paris Agreement was 'almost entirely' due to the advocacy of SIDS and other developing countries.

At the 2013 Warsaw climate change conference, AOSIS pushed for the establishment of an international mechanism on loss and damage, a demand given urgency by the devastation caused by Super Typhoon Haiyan. As the existence of many AOSIS member states is put at risk by climate change, AOSIS has threatened lawsuits. The results of a recent review of the literature show that potential liability for climate change-related losses for AOSIS is over $570 trillion. AOSIS raised this issue again at the 2015 UNFCCC conference in Paris. AOSIS was instrumental in the inclusion of Article 8 in the Paris Agreement, which 'recognizes the importance of averting, minimizing and addressing loss and damage' caused by climate change, although the article does not 'provide a basis for any liability or compensation'. As in previous climate agreements, AOSIS members were among the first to ratify the Paris Agreement, with Fiji ratifying first, followed days later by the Republic of the Marshall Islands, Palau, the Maldives, and others.

AOSIS member state Fiji co-hosted the UN Oceans Conference in 2017. Ministers from AOSIS member states, including Fiji, Tuvalu, and Palau, used this conference to again raise awareness of the real risk that the impact of climate change poses to the very existence of their nations and to advocate for action to address climate change. Fiji also presided over the 2017 UN Climate Change Conference, making it the first SIDS to preside over a UN conference on climate change, although the event took place in Bonn due to Fiji's remote location, small size and limited infrastructure.
AOSIS membership
AOSIS has a membership of 39 states, of which 37 are members of the UN while two (the Cook Islands and Niue) participate within the UN system; an additional five states are observers. The alliance represents 28% of the developing countries and 20% of the UN's total membership. Most SIDS are AOSIS members.

AOSIS has a heterogeneous membership. Member states are spread across many global regions. AOSIS' focus is SIDS; however, its membership also includes low-lying coastal countries, for example Belize and Guyana, and larger islands, for example Papua New Guinea. As well as geographical differences, the member nations also vary economically, as AOSIS includes both wealthy member nations, for example Singapore, and least developed countries, for example Comoros.
The common factor which unites AOSIS members is their particular vulnerability to climate change. Some academics contend that AOSIS' heterogeneity has weakened its effectiveness, particularly in regard to its lobbying for sustainable development.
Member states
The member states are:
AOSIS also has five observers: American Samoa, Guam, Netherlands Antilles, Puerto Rico, and the United States Virgin Islands.
Chairmanship
There have been 13 chairs of AOSIS since its establishment, with the Permanent Representative of Samoa, Ambassador Pa’olelei Luteru, as the current chair.
Honours
In 2010, AOSIS was awarded the first Frederick R. Anderson Award for Outstanding Achievement in Addressing Climate Change by the Center for International Environmental Law.
See also
Africa, the Caribbean and the Pacific (ACP)
Barbados Programme of Action (BPOA)
Climate change mitigation
Islands First
Least Developed Countries (LDC)
World Ocean Conference
Politics of global warming
References
External links
Official website
expansion of heathrow airport | The expansion of Heathrow Airport is a series of proposals to add runway capacity at London's busiest airport, which currently operates two intensively used long runways serving four terminals and a large cargo operation. The plans comprise those presented by Heathrow Airport Holdings and an independent proposal by Heathrow Hub, both with the main object of increasing capacity.

In early December 2006, the Department for Transport published a progress report on the strategy which confirmed the original vision of expanding the runways. In November 2007 the government started a public consultation on its proposal for a slightly shorter third runway (2,000 metres (6,560 ft)) and a new passenger terminal.

The plan was publicly supported by many businesses, the aviation industry, the British Chambers of Commerce, the Confederation of British Industry, the Trades Union Congress and the then Labour government. It was publicly opposed by the Conservative and Liberal Democrat parties, first in opposition and then as a coalition government, by Boris Johnson (then Mayor of London), and by many environmental and local advocacy groups and prominent individuals. Although the expansion was cancelled on 12 May 2010 by the new coalition government, the Airports Commission published its comparative study of the various options, the "Final Report", on 1 July 2015, which preferred the plan.

On 25 October 2016, a new northwest runway and terminal was adopted as central government policy. In late June 2018, the resultant National Policy Statement: Airports was debated and voted on by the House of Commons; the House voted 415–119 in favour of the third runway, although many local MPs, including a majority of those from London, opposed it or abstained.
On 27 February 2020, in an application for judicial review brought by environmental campaigning groups, London councils, and the Mayor of London, Sadiq Khan, the Court of Appeal ruled that the government's decision to proceed with building the third runway was unlawful, because the government's commitments to combat climate change under the Paris Agreement had not been taken into account. In response, the government announced it would not appeal against the decision, but Heathrow announced its intention to appeal to the Supreme Court.

On 16 December 2020, the UK Supreme Court lifted the ban on the third runway, allowing a planning application via a Development Consent Order to go ahead. However, as of 2023, falling passenger numbers in the wake of the COVID-19 pandemic and concerns about investment costs have stalled the project.
Plans
Third runway and additional terminal
In January 2009, the then Transport Secretary Geoff Hoon announced that the UK government supported the expansion of Heathrow by building a third runway, 2,200 m (7,218 ft) long, serving a new passenger terminal with its own hub for public and private transport, set apart from the central hub between Terminals 2 and 3, the southern hub at Terminal 4 and the western hub at Terminal 5. The government would encourage the airport operator (BAA) to apply for planning permission and to carry out the work. The government anticipated that the new runway would be operational in 2015 or soon after. In 2009 the government stipulated it would limit extra flights to 125,000 per year until 2020, rather than the full capacity of circa 222,000. The drafted third-runway plans involve the compulsory acquisition and demolition of approximately 700 homes, for which families would be compensated at 125% of market value.
In January 2009 the government backed more detailed plans for a third runway, subject to funding, legal and parliamentary approval, together with a terminal which would include a Heathrow Hub railway station to provide the first rail link beyond London using the Great Western Main Line, perhaps at speeds meeting the international definition of "high speed", as part of the national High Speed 2 railway project.

In March 2010 the route for High Speed 2 was announced. It did not include a direct connection with Heathrow, but did include a new station at Old Oak Common before reaching the London terminus of Paddington, which is also served by Crossrail.

On 12 May 2010, expansion was cancelled as part of the coalition agreement agreed by the new Conservative-Liberal Democrat government. BAA formally dropped its plans on 24 May 2010. However, London First, a lobby group representing many of London's businesses and major employers, continued to press the coalition government to rethink its opposition to the expansion of the airport.

On 1 July 2015, the Airports Commission recommended the third runway with a further terminal, with a projected capacity (on completion) of 740,000 flights per year.

On 25 June 2018, the House of Commons voted 415–119 in favour of the third runway. The project has received approval from most of the government. A judicial review of the decision was launched by four London boroughs affected by the expansion – Wandsworth, Richmond, Hillingdon and Hammersmith & Fulham – in partnership with Greenpeace and London mayor Sadiq Khan. Khan had previously said he would take legal action if it were passed by Parliament.
Northwest runway
In July 2013, the airport submitted three new proposals for expansion to the Airports Commission, which was established to review airport capacity in south-east England. The commission was chaired by Howard Davies, who, at the time of his appointment, was employed by GIC Private Limited and was a member of its International Advisory Board. Since 2012, GIC Private Limited has been one of Heathrow's principal owners. Davies resigned these positions upon confirmation of his appointment to lead the Airports Commission, although it has been observed that he failed to identify these interests when invited to complete the commission's register of interests. Each of the three proposals considered by the Commission involved the construction of a third runway, to either the north, the north-west or the south-west of the current airport site.

The Commission released its interim report in December 2013. This shortlisted three options:
the north-west third runway option at Heathrow
extending an existing runway at Heathrow
a second runway at Gatwick Airport

The full report was published on 1 July 2015; this confirmed the north-west runway and a new sixth terminal as the commission's chosen proposal. The Commission estimated the cost at around £18.6 billion, £4 billion higher than Heathrow's own estimate.

The north-west runway and terminal plan was approved by the Government on 25 October 2016. In January 2018, in a public consultation, Heathrow unveiled another option with the new runway 300 m (330 yd) shorter, to reduce costs from £16.8 billion to £14.3 billion. This option would still require the M25 motorway to be diverted into a tunnel under the runway, 150 m (160 yd) west of its current route.

The financing of the expansion has yet to be arranged; Heathrow Airport Holdings' finances are already highly leveraged. In 2017 borrowings were £13.4 billion, with shareholder equity of only £0.7 billion.
Support
Reasons for expansion
The principal argument stated in favour of expanding Heathrow is that it would enhance the economic growth of the UK. As the UK's major hub airport, Heathrow can attract many transfer passengers, and so can support a very wide range of direct flight destinations at high frequencies. It is the world's second busiest airport by number of international passengers. The government claims that Heathrow's connectivity helps London and nearby counties in particular to compete with other European cities for business investment, which in turn produces economic benefits for the rest of the UK. Should Heathrow's connectivity decline compared to London's European competitors, the UK would fall behind.

The government's argument is that Heathrow is on the brink of suffering a decline in connectivity. Heathrow's runways are now operating at around 99% capacity, which increases delays when flights are disrupted, and risks competing European airports gaining destinations at Heathrow's expense. The government estimates that building a third runway would allow Heathrow to increase its connectivity, bringing £5.5bn of economic benefits over the period 2020–2080. However, the British Chambers of Commerce estimated the economic benefits at £30 billion for the UK economy over the same time scale, and has also stated that every year the programme is delayed costs the UK between £900 million and £1.1 billion.

Some of the capacity added to Heathrow by the new third runway could be used to reinstate or improve flight connections to UK cities. Several cities have seen their connections to Heathrow reduced or lost over recent years as airlines have reallocated the airport's limited capacity to more profitable long-haul flights.

It was also suggested that a third runway would increase Heathrow's resilience to disruption, and so reduce emissions from aircraft waiting to land. Construction was estimated to provide up to 60,000 jobs, and operating the expanded Heathrow was expected to create up to 8,000 new jobs at the airport by 2030, with multiplier benefits to West London.
Supporters
The UK's Brown ministry took the lead in driving forward the expansion of Heathrow. The members of that government most closely associated with that drive were the prime minister, Gordon Brown, and past Transport Secretaries Alistair Darling, Ruth Kelly, Geoff Hoon and Andrew Adonis. Peter Mandelson, the then Business Secretary, also voiced his support for the scheme. The majority of the UK Conservative Party leadership, including former Chancellor George Osborne, were also in favour of expansion.

The stance of both Labour and the Conservatives was broadly supported by a number of groups and prominent individuals:
Aviation sector: including BAA Limited and Flying Matters.
Airlines: including All Nippon Airways, British Airways, Delta Air Lines, easyJet, Singapore Airlines and Virgin Atlantic.
Airports: including Glasgow Airport, Liverpool John Lennon Airport, Leeds Bradford Airport, Newcastle International Airport and Aberdeen Airport.
Business organisations: Confederation of British Industry, British Chambers of Commerce and 32 local chambers of commerce, including the London Chamber of Commerce and West London Business.
Local authorities: Slough
Manufacturing & freight sector: including the Freight Transport Association, the British International Freight Association, the EEF, Segro and Black & Decker.
Trade unions: including the GMB Union, Trades Union Congress and Unite the Union.

However, on 21 November 2022, Virgin Atlantic's CEO Shai Weiss indicated a pause in the company's support for the expansion of Heathrow, announcing that its support had moved from 'unequivocal' to 'tentative' and citing Heathrow's increase in passenger charges as a reason.
Advocacy in support of expansion
In May 2007, the British Airports Authority (BAA) and several other companies involved with aviation established Flying Matters to lobby the UK government and generally advocate for the development of the airport, following a suggestion from Sir Richard Branson of Virgin Atlantic that the aviation industry needed to develop a shared solution to climate change. The organisation was created to help demonstrate that the aviation sector was "taking climate change seriously". In 2009 Greenpeace acquired and published a detailed confidential report into the group's activities and plans, which claimed that the Department for Transport was independently approaching Flying Matters for support on key issues relating to the Climate Change Bill.

Prior to the 2007 party conferences, Flying Matters issued a number of press releases aimed at the Conservative Party which challenged their opposition to the third runway: "Voters in key marginals shun Conservative proposals for higher taxes on air travel", "'Green' holiday tax plan puts Conservatives 6 per cent behind Labour in 30 most important marginals in the Country", "Families will be priced out of air travel if Heathrow fails to expand" and "Stopping new runways would cost half a million new UK jobs". The objectives outlined in the leaked 'draft Strategy and programme for 2009–10' later confirmed that the organisation felt it was "Essential to help establish a foundation from which the Conservatives could amend their position post election". The organisation's budget for 2008–2009 was £390,000.
Lobbying
The aviation sector had close links with political decision makers, with many players moving between roles through the controversial 'revolving door'. For example, Joe Irvin was advisor to John Prescott (Secretary of State for the Environment, Transport and the Regions as well as Deputy Prime Minister) between 1996 and 2001, then worked for various elements of the aviation lobby, became head of corporate affairs at BAA in 2006, and in 2007 became 'Special Advisor' to Gordon Brown when Brown became prime minister. He was succeeded at BAA by Tom Kelly, who took the title 'group director of corporate and public affairs' and had been official spokesman for Tony Blair when he was prime minister.

Freedom to Fly was formed by BAA and others during the preparation of the "Future of Aviation" white paper of 2003. It was 'fronted' by Joe Irvin, a former political adviser to John Prescott who subsequently became Director of Public Affairs at BAA Limited. Their director, Dan Hodges, is the son of Glenda Jackson, Labour MP and former Aviation Minister.
Opposition
Greenhouse gas emissions
Environmental objections have included that the increased CO2 emissions caused by the additional flights will add to global warming. Objectors have argued that the claimed economic benefits would be more than wiped out by the cost of the CO2 emissions. The government estimated that a third runway would generate an extra 210.8 Mt (million tonnes) of CO2 over the period 2020–2080, but its cost-benefit analysis priced this at £13.33 per tonne using 2006 prices, giving a 2020–2080 "cost" of £2.8bn. This is a small fraction of the government's own official estimate of the cost of carbon, which rises from £32.90 per tonne in 2020 to £108.20 in 2080 (in 2007 prices). If these figures are used, the carbon cost of the third runway alone rises to £13.3bn (2006 prices), enough to wipe out the economic benefits. However, the British Chambers of Commerce released a report stating the economic benefits as £30 billion over the same time scale, considerably more than the carbon cost of the expansion.

The World Development Movement has claimed that the proposed additional flights from Heathrow's third runway would emit the same amount of CO2 as the whole of Kenya. However, the then Transport Secretary Ruth Kelly stated that carbon emissions would not actually rise overall, since carbon trading would be used to ensure that these increases from Heathrow are offset by reductions elsewhere in the economy (although such schemes do not account for the fact that aviation carbon emissions cause more harm owing to their being emitted at a higher altitude).
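As a rough check of the arithmetic described above, the sketch below reproduces the two headline carbon-cost figures from the quantities quoted; the flat average of about £63 per tonne used for the official carbon price path is an editorial simplification, not a figure from the analysis itself.

```python
# Rough reconstruction of the carbon-cost arithmetic quoted above.
# The ~£63/t average of the official carbon price path is an assumption.

extra_co2_tonnes = 210.8e6        # extra CO2 over 2020-2080, tonnes
price_2006_gbp_per_t = 13.33      # price used in the government's analysis
price_official_avg = 63.0         # assumed average of £32.90 (2020) to £108.20 (2080)

cost_low_bn = extra_co2_tonnes * price_2006_gbp_per_t / 1e9
cost_high_bn = extra_co2_tonnes * price_official_avg / 1e9

print(f"Carbon cost at £13.33/t: £{cost_low_bn:.1f}bn")    # ~£2.8bn
print(f"Carbon cost at ~£63/t:   £{cost_high_bn:.1f}bn")   # ~£13.3bn
```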
Community destruction
Some 700 homes, a church and eight Grade II-listed buildings would have to be demolished or abandoned, the high street in Harmondsworth split, a graveyard "bulldozed" and the "entire village of Sipson could disappear". John McDonnell, MP for Hayes and Harlington, suggested in 2007 that up to 4,000 houses would actually have to be demolished or abandoned, but aviation minister Jim Fitzpatrick defended the plans, saying anyone evicted from their home as a result of expansion would be fully compensated. BAA has committed to preserving the Grade I-listed parish church and Great Barn at Harmondsworth, and has given assurances that the value of properties affected by a possible third runway will be protected.
Noise pollution
Building a third runway at Heathrow would expose hundreds of thousands of residents in London and Berkshire to sustained high levels of aircraft noise for the first time.
Subsidiary arguments
There are alternatives to a third runway that maintain London's connectivity (see below).
Reductions in emissions caused by fewer aircraft delays (a buffer of spare capacity) and slightly fewer flights from some regional airports would be dwarfed by the increased emissions from the additional flights serving Heathrow, as reflected in the promise to open routes to many airports not currently connected to the UK.
Job creation claims are invalid. If the money supporting the new jobs generated by a third runway was not spent at an expanded Heathrow, it would be spent elsewhere in the economy.
Opponents of expansion
Three House of Commons-represented political parties, many advocacy groups, associations and prominent people are publicly opposed to expansion. Notably:
Plaid Cymru (including its five MPs at the time of the June 2018 vote on whether to approve the National Policy Statement)
The Liberal Democrats (including all 11 of its MPs at the June 2018 vote; the party later gained six MPs through defections, five of whom had voted for expansion in 2018).
The Green Party (including its MP).
28 of the 46 London Labour MPs: Rosena Allin-Khan, Diane Abbott, Dawn Butler, Lyn Brown, Karen Buck, Ruth Cadbury, Jeremy Corbyn, Marsha De Cordova, Jon Cruddas, John Cryer, Janet Daby, Emma Dent Coad, Clive Efford, Barry Gardiner, Helen Hayes, Kate Hoey, Rupa Huq, Sarah Jones, Shadow chancellor John McDonnell, Kate Osamor, Teresa Pearce, Matthew Pennycook, Steve Reed, Ellie Reeves, Andy Slaughter, Keir Starmer, Emily Thornberry, Catherine West
6 of the 19 London Conservative MPs: Bob Blackman, Zac Goldsmith, Justine Greening, Greg Hands, Matthew Offord, and Theresa Villiers. Five others of the nineteen were absent or abstained at the 2018 vote.
Sadiq Khan, the Mayor of London, and his predecessors, Boris Johnson and Ken Livingstone.
International campaign groups criticising expansion of fossil-fuel powered passenger aviation (foremost group: Plane Stupid) and local anti-aviation impacts groups (foremost group: Hacan ClearSkies).
24 local authorities (including the London Borough of Hillingdon)
Environmental campaign groups: Greenpeace, RSPB, Friends of the Earth and WWF
The National Trust
Developmental charities: Oxfam, Christian Aid
Advocacy against expansion
Various methods were proposed and adopted in attempt to halt expansion:
The Conservatives and Liberal Democrats opposed construction and cancelled expansion when elected in the 2010 general election.
In August 2007, the Camp for Climate Action took place within a mile of Heathrow. The camp ran for a week, and on its final day some 1,000–1,400 people protested and 200 people blockaded the British Airports Authority HQ. Before the camp, BAA requested the "mother of all injunctions", which could have restricted the movements of 5 million people from 15 different organisations, including the RSPB, Greenpeace, the Campaign for the Protection of Rural England, the Woodland Trust, Friends of the Earth, and the National Trust. The injunction would technically have included the Queen (patron of the RSPB and CPRE), Prince Charles (in his position as President of the National Trust), and even some of BAA's own staff.

In February 2008, five members of Plane Stupid, which has resisted expansion throughout the process, staged a two-hour protest on the roof of the Palace of Westminster (Houses of Parliament) in protest at the close links between BAA and the government. Two large banners were unfurled which read "BAA HQ" and "No 3rd runway at Heathrow".

In April 2008, Plane Stupid claimed that their group was infiltrated by Toby Kendall, 24, an employee of C2i International. The Times reported that he had gone undercover in the group using the name "Ken Tobias". Airport operator BAA, which has often been a target of Plane Stupid's campaign, confirmed to The Times that it had been in contact with C2i International but denied ever hiring the company. C2i offered its clients "The ability to operate effectively and securely in a variety of hostile environments", and at the time listed 'aerospace' at the top of a list of industries for which it worked.
In January 2009, Greenpeace and partners (including actress Emma Thompson and impressionist Alistair McGowan) bought a piece of land on the site of the proposed third runway, called Airplot. Their aim was to maximise the opportunities to put legal obstacles in the way of expansion. Although this action was similar to the tactics first employed in the early 1980s by FoE with the 'Alice's Meadow' campaign, it differed in that it relied on the concept of multiple beneficial ownership rather than the division of the field into microplots. The field was bought for an undisclosed sum from a local land owner. Also in January, Climate Rush staged a "picnic protest" at Heathrow airport against the construction of the third runway. Hundreds of people attended the protest, dressed in Edwardian period dress. In the same month the glass doors of the Department for Transport were broken by members of the organisation.

In March 2009, senior MPs demanded a Commons investigation into evidence of a "revolving door" policy between Downing Street, Whitehall and BAA Limited (BAA is a major UK airport operator). Also in March 2009, Plane Stupid protester Leila Deen threw green custard over Business Secretary Lord Mandelson at a low carbon summit hosted by Gordon Brown, in protest at the frequent meetings between Roland Rudd, who represents airport operator BAA, and Mandelson and other ministers in the run-up to Labour's decision to go ahead with plans for a third runway at Heathrow.

Hounslow Council examined the possibility of legal action to prevent expansion, with the support of other London councils and the mayor (Boris Johnson). In February 2010, The Daily Telegraph reported that the Department for Transport was being investigated by the Information Commissioner's Office and could face a criminal investigation over allegations that it may have deleted or concealed emails to prevent them from being disclosed under the Freedom of Information Act 2000. The investigation followed a complaint by Justine Greening MP.

In March 2010 campaigners "won a High Court battle" when Lord Justice Carnwath ruled that the government's policy support for a third runway would need to be looked at again, and called for a review "of all the relevant policy issues, including the impact of climate change policy". The Department for Transport vowed to "robustly defend" the third runway plan. Following the announcement, Gordon Brown, the prime minister, said it was the right decision, that it was "vital not just to our national economy, but enables millions of citizens to keep in touch with their friends and families", and that the judgement would not change the government's plans. Shadow transport secretary Theresa Villiers said that the ruling meant "Labour's flagship transport policies were in complete disarray".

On 6 August 2018, lawyers for Friends of the Earth filed papers at the High Court asking for the Airports National Policy Statement (NPS) to be quashed. Friends of the Earth argues that the Airports NPS constitutes a breach of the UK's climate change policy and its sustainable development duties.
Alternatives to expansion
The main suggested alternatives to Heathrow expansion included:
greater use of regional airports in the UK
a new airport elsewhere
greater use of rail travel (including on the proposed High Speed 2 line) to reduce domestic flights
Greater use of regional airports
The United Kingdom has a number of regional airports, which it has been argued could be utilised further to reduce the airport capacity strain on South East England and benefit the whole of the United Kingdom. The 2003 Aviation White Paper mainly argued that increased use of regional airports would increase airport capacity in South East England, and the 2010 coalition government concurred with this view. Politicians proposing this plan included Theresa Villiers and John Leech. Business leaders backing the plan included bosses at Birmingham and Cardiff airports. The CEO of Manchester Airports Group, the largest British-owned operator of airports and a member of the influential Aviation Foundation along with Virgin Atlantic, British Airways and BAA Limited, has also proposed greater use of regional airports.

A number of airline bosses expressed their dissatisfaction at the over-emphasis on the South East in aviation policy. Laurie Berryman of Emirates Airlines said in 2013 that "The business community doesn't want to come to Heathrow or the South East. They would rather fly long-haul from their local airport." A number of airlines have filled the gap where British Airways has left regional airports over the past decade.

Another major issue at regional airports was "leakage", or passengers who need to get connecting flights from a regional airport to an international airport. Manchester Airport is by far the busiest and largest airport outside South East England, with two runways. Four million passengers – about 20% of all passengers – need to fly from Manchester to London to get connecting long-haul flights abroad. Likewise, many more millions fly from other regional airports to connecting flights in London. Advocates argue that flying to international destinations directly from regional airports would immediately create more airport capacity in the South East at a fraction of the cost and time of having to build a new runway or airport. Furthermore, numerous regional airports are underused and need no immediate expense to take on more passengers. Manchester is the only airport in the United Kingdom other than Heathrow to have two runways and is operating well below capacity: it carries 20 million passengers, but has the capacity to carry at least 50 million.

Proponents of this idea also suggest the new High Speed 2 network will be vital to the success of regional airports in the future. HS2 will link the three airports of Birmingham, Manchester and East Midlands with London. Furthermore, journey times will be competitive: a journey from London Euston to Birmingham Airport will take less than 50 minutes, and to Manchester about 65 minutes – in comparison, the Heathrow Express service to London Paddington takes 25 minutes. Currently a rail link exists from London Euston to Birmingham International which takes about 70 minutes, while journeys to Manchester take over two hours with a change required at Manchester Piccadilly. It was hoped airlines would create a "north-south hub" with more flights from Manchester, with passengers who live or work in London being only an hour away from the airport – thus spreading demand to regional airports and creating more international hub capacity in the South East.
Thames Estuary Airport
Since the 1970s, there have been various proposals to complement or replace Heathrow with a new airport in the Thames Estuary. This would have the advantage of avoiding flights taking off and landing over London, with all the accompanying noise and pollution, and would avoid destroying homes, nature and amenity land on the western edge of London. In November 2008, the Mayor of London, Boris Johnson, announced a feasibility study into building an airport on an artificial island off the Isle of Sheppey.

Critics pointed variously to the construction costs and the threat to jobs at Heathrow, while environmental opponents, as with any expansion of capacity, cited the increased CO2 emissions that would result if more flights were scheduled than at present.

Following an election pledge not to build a third runway, Prime Minister David Cameron was keen to implement the Thames Estuary hub. However, airlines spoke out against plans to partially fund the airport with around £8 billion in landing charges from Heathrow. An aviation review was set for the end of 2012 and Cameron had advised: "I do understand it is vitally important that we maintain the sort of hub status that Britain has. There are lots of different options that can be looked at."
High-speed rail
The three main parties represented within the UK support a high-speed railway to the north.
In September 2008, the Conservative opposition proposed such a northern railway and suggested that it would reduce the need for short-haul flights, by encouraging passengers to complete their journey by train instead of flying. By pruning short-haul flights from Heathrow, international flights could increase and so connectivity would be enhanced. The reduction could be 66,430 domestic flights per year, or 30% of the capacity of the third runway.
In March 2010, in its final months, the Labour government published detailed plans for High Speed 2, which would link London with Birmingham and then Scotland, incorporating a new Old Oak Common railway station in West London which would 'improve surface access by rail to Heathrow Airport'. Some "modal shift" to rail from road and air was expected, but not for passengers who arrived at Heathrow by air, who were likely to continue to go by air to their UK destination.
See also
2016 Richmond Park by-election
Aviation and the environment
Air transport and the environment (United Kingdom)
Heathrow Airport transport proposals
Mitigation of aviation's environmental impact
Notes
References
Documents referenced from 'Notes' section
Other references for article
External links
Heathrow Expansion, Heathrow Airport official website on the Heathrow expansion plan
BBC NEWS Q&A: A third runway at Heathrow
Heathrow expansion – London Borough of Hillingdon Archived 18 October 2019 at the Wayback Machine
Heathrow expansion – London Borough of Richmond upon Thames
Stop Heathrow Expansion (campaign group)
Heathrow Association for the Control of Aircraft Noise (HACAN)
Airports Commission: interim report, 17 December 2013
Airports Commission: final report, 1 July 2015
electricity sector in the united kingdom | The United Kingdom has a National Grid that covers most of mainland Great Britain and several of the surrounding islands, as well as some connectivity to other countries. The electrical sector supplies power at 50 Hz AC, with a nominal 230 volts (historically 240 volts) supplied to consumers. In 2020 the electricity sector's grid supply came from 55% low-carbon power (including 24.8% from wind, 17.2% nuclear power, 4.4% solar, 1.6% hydroelectricity and 6.5% biomass), 36.1% fossil-fuelled power (almost all from natural gas), and 8.4% imports. Renewable power is showing strong growth, while fossil fuel generation in general and coal use in particular are shrinking; the historically dominant coal generators are now mainly run in winter due to pollution and costs, and contributed just 1.6% of the supply in 2020.

The use of electricity declined 9% from 2010 to 2017, attributed largely to a decline in industrial activity and a switch to more energy-efficient lighting and appliances. By 2018 per capita electrical generation had fallen to the same level as in 1984.

In 2008 nuclear electricity production was 53.2 TW·h, equivalent to 860 kWh per person. In 2014, 28.1 TW·h of energy was generated by wind power, which contributed 9.3% of the UK's electricity requirement. In 2015, 40.4 TW·h of energy was generated by wind power, and the quarterly generation record was set in the three-month period from October to December 2015, with 13% of the nation's electricity demand met by wind. Wind power contributed 15% of UK electricity generation in 2017 and 18.5% in the final quarter of 2017. In 2019, National Grid announced that low-carbon generation technologies had produced more electricity than fossil generators for the first time in Britain.
History
National grid
The first to use Nikola Tesla's three-phase high-voltage electric power distribution in the United Kingdom was Charles Merz, of the Merz & McLellan consulting partnership, at his Neptune Bank Power Station near Newcastle upon Tyne. This opened in 1901, and by 1912 had developed into the largest integrated power system in Europe. The rest of the country, however, continued to use a patchwork of small supply networks.
In 1925, the British government asked Lord Weir, a Glaswegian industrialist, to solve the problem of Britain's inefficient and fragmented electricity supply industry. Weir consulted Merz, and the result was the Electricity (Supply) Act 1926, which recommended that a "national gridiron" supply system be created.
The 1926 Act created the Central Electricity Board, which set up the UK's first synchronised, nationwide AC grid, running at 132 kV, 50 Hz.
The grid was created with 4,000 miles of cables: mostly overhead cables, linking the 122 most efficient power stations. The first "grid tower" was erected near Edinburgh on 14 July 1928, and work was completed in September 1933, ahead of schedule and on budget. It began operating in 1933 as a series of regional grids with auxiliary interconnections for emergency use. Following the unauthorised but successful short term parallelling of all regional grids by the night-time engineers on 29 October 1937, by 1938 the grid was operating as a national system. By then, the growth in the number of electricity users was the fastest in the world, rising from three quarters of a million in 1920 to nine million in 1938.
It proved its worth during the Blitz when South Wales provided power to replace lost output from Battersea and Fulham power stations.
The grid was nationalised by the Electricity Act 1947, which also created the British Electricity Authority. In 1949, the British Electricity Authority decided to upgrade the grid by adding 275 kV links.
At its inception in 1950, the 275 kV Transmission System was designed to form part of a national supply system, with an anticipated total demand of 30,000 MW by 1970. This predicted demand was already exceeded by 1960. The rapid load growth led the Central Electricity Generating Board (CEGB) to carry out a study of future transmission needs, completed in September 1960. The study is described in a paper presented to the Institution of Electrical Engineers by Booth, Clark, Egginton and Forrest in 1962.
Considered in the study, together with the increased demand, was the effect on the transmission system of the rapid advances in generator design, resulting in projected power stations of 2,000–3,000 MW installed capacity. These new stations were mostly to be sited where advantage could be taken of a surplus of cheap low-grade fuel and adequate supplies of cooling water, but these locations did not coincide with the load centres. West Burton with 4 × 500 MW machines, sited at the Nottinghamshire coalfield near the River Trent, is a typical example. These developments shifted the emphasis on the transmission system, from interconnection to the primary function of bulk power transfers from the generation areas to the load centres, such as the anticipated transfer in 1970 of some 6,000 MW from The Midlands to the Home counties.
Continued reinforcement and extension of the existing 275 kV systems were examined as possible solutions. However, in addition to the technical problem of very high fault levels, many more lines would have been required to obtain the estimated transfers at 275 kV. As this was not consistent with the CEGB's policy of preservation of amenities, a further solution was sought. Consideration was given to both a 400 kV and a 500 kV scheme as the alternatives, either of which gave a sufficient margin for future expansion. A 400 kV system was chosen, for two main reasons. First, the majority of the 275 kV lines could be uprated to 400 kV, and secondly it was envisaged that operation at 400 kV could commence in 1965, compared with 1968 for a 500 kV scheme. Design work was started, and to meet the 1965 timescale, the contract engineering for the first projects had to run concurrently with the design. This included the West Burton 400 kV Indoor Substation, the first section of which was commissioned in June 1965. From 1965, the grid was partly upgraded to 400 kV, beginning with a 150-mile (241 km) line from Sundon to West Burton, to become the Supergrid.
With the development of the national grid and the switch to using electricity, United Kingdom electricity consumption increased by around 150% between the post war nationalisation of the industry in 1948 and the mid-1960s. During the 1960s growth slowed as the market became saturated.
On the breakup of the CEGB in 1990, the ownership and operation of the National Grid in England and Wales passed to National Grid Company plc, later to become National Grid Transco, and now National Grid plc. In Scotland the grid was already split into two separate entities, one for southern and central Scotland and the other for northern Scotland, connected by interconnectors. The first is owned and maintained by SP Energy Networks, a subsidiary of Scottish Power, and the other by SSE. However, National Grid plc remains the System Operator for the whole British Grid.
Generation
The mode of generation has changed over the years.
During the 1940s some 90% of the generating capacity was fired by coal, with oil providing most of the remainder.
The United Kingdom started to develop a nuclear generating capacity in the 1950s, with Calder Hall being connected to the grid on 27 August 1956. Though the production of weapons-grade plutonium was the main reason behind this power station, other civil stations followed, and 26% of the nation's electricity was generated from nuclear power at its peak in 1997.
During the 1960s and 70s, coal plants were built to meet consumption despite economic challenges. During the 1970s and 80s some nuclear sites were built. From the 1990s gas power plants benefited from the Dash for Gas, supplied by North Sea gas. After the 2000s, renewables like solar and wind added significant capacity. In Q3 2016, nuclear and renewables each supplied a quarter of British electricity, with coal supplying 3.6%. Despite the flow of North Sea oil from the mid-1970s, oil-fuelled generation remained relatively small and continued to decline.
Starting in 1993, and continuing through the 1990s, a combination of factors led to a so-called Dash for Gas, during which the use of coal was scaled back in favour of gas-fuelled generation. This was sparked by political concerns; the privatisation of the National Coal Board, British Gas and the Central Electricity Generating Board; the introduction of laws facilitating competition within the energy markets; the availability of cheap gas from the North Sea and elsewhere; and the high efficiency and reduced pollution of combined cycle gas turbine (CCGT) generation. In 1990 just 1.09% of all gas consumed in the country was used in electricity generation; by 2004 the figure was 30.25%.

By 2004, coal use in power stations had fallen to 50.5 million tonnes, representing 82.4% of all coal used in 2004 (a fall of 43.6% compared to 1980 levels), though up slightly from its low in 1999. On several occasions in May 2016, Britain burned no coal for electricity for the first time since 1882. On 21 April 2017, Britain went a full day without using coal power for the first time since the Industrial Revolution, according to the National Grid.

From the mid-1990s new renewable energy sources began to contribute to the electricity generated, adding to a small hydroelectricity generating capacity.
UK 'energy gap'
In the early years of the 2000s, concerns grew over the prospect of an 'energy gap' in United Kingdom generating capacity. This was forecast to arise because a number of coal-fired power stations were expected to close, being unable to meet the clean air requirements of the European Large Combustion Plant Directive (directive 2001/80/EC). In addition, the United Kingdom's remaining Magnox nuclear stations were to have closed by 2015. The oldest AGR nuclear power station had its life extended by ten years, and it was likely many of the others could be life-extended, reducing the potential gap suggested by the then accounting closure dates of between 2014 and 2023 for the AGR power stations.

A report from the industry in 2005 forecast that, without action to fill the gap, there would be a 20% shortfall in electricity generation capacity by 2015. Similar concerns were raised by a report published in 2000 by the Royal Commission on Environmental Pollution (Energy – The Changing Climate). The 2006 Energy Review attracted considerable press coverage, in particular in relation to the prospect of constructing a new generation of nuclear power stations in order to prevent the rise in carbon dioxide emissions that would arise if other conventional power stations were to be built.
Among the public, according to a November 2005 poll conducted by YouGov for Deloitte, 35% of the population expected that by 2020 the majority of electricity generation would come from renewable energy (more than double the government's target, and far larger than the 5.5% generated as of 2008), 23% expected that the majority will come from nuclear power, and only 18% that the majority will come from fossil fuels. 92% thought the Government should do more to explore alternative power generation technologies to reduce carbon emissions.
Plugging the energy gap
The first move to plug the United Kingdom's projected energy gap was the construction of the conventionally gas-fired Langage Power Station and Marchwood Power Station which became operational in 2010.
In 2007, proposals for the construction of two new coal-fired power stations were announced, in Tilbury, Essex and in Kingsnorth, Kent. If built, they would have been the first coal-fired stations to be built in the United Kingdom in 20 years.

Beyond these new plants, there were a number of options that might be used to provide the new generating capacity while minimising carbon emissions and producing fewer residues and less contamination. Fossil fuel power plants might provide a solution if there were a satisfactory and economical way of reducing their carbon emissions. Carbon capture might provide a way of doing this; however, the technology is relatively untried and costs are relatively high. As of 2006 there were no power plants in operation with a full carbon capture and storage system, and as of 2018 there were still no commercially viable CCS systems worldwide.
Energy gap disappears
However, with reduced demand in the late-2000s recession removing any medium-term gap, and with high gas prices, over 2 GW of older, less efficient gas generation plant was mothballed in 2011 and 2012. In 2011 electricity demand dropped 4%, and about 6.5 GW of additional gas-fired capacity was added over 2011 and 2012. Early in 2012 the reserve margin stood at the high level of 32%.

Another important factor in reduced electrical demand in recent years has been the phasing out of incandescent light bulbs and a switch to compact fluorescent and LED lighting. Research by the University of Oxford has shown that the average annual electrical consumption for lighting in a UK home fell from 720 kWh in 1997 to 508 kWh in 2012. Between 2007 and 2015, the UK's peak electrical demand fell from 61.5 GW to 52.7 GW.

In June 2013, the industry regulator Ofgem warned that the UK's energy sector faced "unprecedented challenges" and that "spare electricity power production capacity could fall to 2% by 2015, increasing the risk of blackouts". Proposed solutions "could include negotiating with major power users for them to reduce demand during peak times in return for payment".

The use of electricity declined 9% from 2010 to 2017, attributed largely to a decline in industrial activity and a switch to more energy-efficient lighting and appliances. By 2018 per capita electrical generation had fallen to the same level as in 1984. In January 2019 Nick Butler, in the Financial Times, wrote: "costs of all forms of energy (apart from nuclear) have fallen dramatically and there is no shortage of supply", partly based on the reserve capacity auction for 2021–2022 achieving extremely low prices.
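As a quick illustration of the scale of these reductions, the snippet below computes the percentage falls implied by the figures quoted above (the arithmetic is illustrative only).

```python
# Percentage reductions implied by the figures quoted above.

def pct_fall(before: float, after: float) -> float:
    return 100.0 * (before - after) / before

lighting_fall = pct_fall(720, 508)   # household lighting, kWh/year, 1997 -> 2012
peak_fall = pct_fall(61.5, 52.7)     # GB peak demand, GW, 2007 -> 2015

print(f"Household lighting consumption fell by about {lighting_fall:.0f}%")  # ~29%
print(f"Peak electrical demand fell by about {peak_fall:.0f}%")              # ~14%
```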
Production
Modes of production
In 2020, total electricity production stood at 312 TWh (down from a peak of 385 TWh in 2005), generated from the following sources:
Gas: 35.7% (0.05% in 1990)
Nuclear: 16.1% (19% in 1990)
Wind: 24.2% (0% in 1990), of which:
Onshore Wind: 11.1%
Offshore Wind: 13%
Coal: 1.8% (67% in 1990)
Bio-Energy: 12.6% (0% in 1990)
Solar: 4.2% (0% in 1990)
Hydroelectric: 2.2% (2.6% in 1990)
Oil and other: 3.3% (12% in 1990)

The UK Government energy policy had targeted a total contribution from renewables to achieve 10% by 2010, but it was not until 2012 that this figure was exceeded; renewable energy sources supplied 11.3% (41.3 TWh) of the electricity generated in the United Kingdom in 2012.
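To make the balance between fossil and low-carbon generation easier to see, the sketch below groups the 2020 shares listed above. Treating bio-energy as low-carbon and counting all of "oil and other" as fossil are editorial simplifications, not official classifications.

```python
# 2020 generation shares from the list above, grouped for comparison.
# The grouping into categories is an editorial assumption.

shares_2020 = {
    "Gas": 35.7, "Nuclear": 16.1, "Onshore wind": 11.1, "Offshore wind": 13.0,
    "Coal": 1.8, "Bio-energy": 12.6, "Solar": 4.2, "Hydroelectric": 2.2,
    "Oil and other": 3.3,
}

fossil = shares_2020["Gas"] + shares_2020["Coal"] + shares_2020["Oil and other"]
low_carbon = sum(shares_2020.values()) - fossil

print(f"Fossil-fuelled share: {fossil:.1f}%")      # ~40.8%
print(f"Low-carbon share:     {low_carbon:.1f}%")  # ~59.2%
```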
The Scottish Government has a target of generating 17% to 18% of Scotland's electricity from renewables by 2010, rising to 40% by 2020.
Gross electricity production was 393 TWh in 2004, placing the UK ninth among the world's top producers that year. The six major companies which dominate the British electricity market ("The Big Six") are: EDF, Centrica (British Gas), E.ON, RWE npower, Scottish Power and Scottish and Southern Energy (SSE).
The UK is planning to reform its electricity market. It has introduced a capacity mechanism and Contracts for Difference (CfD), a subsidised purchase arrangement, to encourage the building of new, more environmentally friendly generation.
Gas and coal
Electricity produced with gas was 160 TWh in 2004 and 177 TWh in 2008; in both years the United Kingdom was the fourth highest producer of electricity from gas. In 2005 the UK produced 3.2% of the world's total natural gas, ranking fifth after Russia (21.8%), the United States (18%), Canada (6.5%) and Algeria (3.2%). By 2009 the UK’s own gas production had declined and natural gas was also being imported.

Due to reducing demand in the late-2000s recession and high gas prices, in 2011 and 2012 over 2 GW of older, less efficient gas generation plant was mothballed.

On several occasions in May 2016, Britain burned no coal for electricity for the first time since 1882. Due to lower gas prices, the economics of coal plants became strained, and three coal plants closed in 2016. On 21 April 2017, the mainland grid burnt no coal to make electricity for the first complete 24-hour period, and in spring and summer 2020, from 10 April, the UK grid ran for 68 days without burning any coal. In August and September 2021, the UK had to restart coal plants amidst a lack of wind, as power imports from Europe were insufficient to satisfy demand.
Nuclear power
Nuclear power in the United Kingdom generates around a quarter of the country's electricity as of 2016, projected to rise to a third by 2035. The UK has 15 operational nuclear reactors at seven plants (14 advanced gas-cooled reactors (AGR) and one pressurised water reactor (PWR)), as well as nuclear reprocessing plants at Sellafield and the Tails Management Facility (TMF) operated by Urenco in Capenhurst.
Renewable energy
From the mid-1990s renewable energy began to contribute to the electricity generated in the United Kingdom, adding to a small hydroelectricity generating capacity. Renewable energy sources provided 11.3% of the electricity generated in the United Kingdom in 2012, reaching 41.3 TWh. As of the second quarter of 2017, renewables generated 29.8% of the UK's electricity.

Currently, the biggest renewable source of energy in the UK is wind power, and the UK has some of the best wind resources in Europe. The UK has relatively small hydroelectricity deployment and resources, although some pumped storage exists. Solar power is growing rapidly and provides significant power during daylight hours, but the total energy provided is still small. Biofuels are also used as a significant source of power. Geothermal is not highly accessible and is not a significant source. Tidal resources are present and experimental projects are being tested, but are likely to be expensive.
Wind power delivers a growing percentage of the energy of the United Kingdom; by the beginning of February 2018, the fleet consisted of 8,655 wind turbines with a total installed nameplate capacity of over 18.4 gigawatts: 12,083 megawatts of onshore capacity and 6,361 megawatts of offshore capacity. This made the United Kingdom at that time the world's sixth largest producer of wind power. Polling of public opinion consistently shows strong support for wind power in the UK, with nearly three quarters of the population agreeing with its use, even among people living near onshore wind turbines. Wind power is expected to continue growing in the UK for the foreseeable future; RenewableUK estimates that more than 2 GW of capacity will be deployed per year for the next five years. Within the UK, wind power was the second largest source of renewable energy after biomass in 2013.

In 2014, Imperial College predicted that, with solar panels on 10 million homes by 2020 (compared with about half a million at the start of 2014), Britain could get 40% of its electricity from solar power on sunny days. If a third of households generated solar energy, it could equal 6% of total British electricity consumption.
Diesel
Britain has a number of diesel farms for supplying high-demand hours of the day, normally in winter, when other generators such as wind or solar farms may have low output. Many of these diesel generators run for fewer than 200 hours a year.
Power stations
Storage
The UK has some large pumped-storage systems, notably Dinorwig Power Station, which can provide 1.7 GW for over 5 hours and has a storage capacity of about 9 GWh. The UK also has significant grid battery storage, which can supply several gigawatts for a few hours. As of May 2021, 1.3 GW of battery storage was operating in the United Kingdom, with 16 GW of projects in the pipeline potentially deployable over the next few years. In 2022, UK capacity grew by 800 MWh, ending the year at 2.4 GW / 2.6 GWh. In December 2019, construction started on the Minety Battery Energy Storage Project near Minety, Wiltshire, developed by Penso Power, financed by Chinese investment and with the China Huaneng Group responsible for construction and operation. The designed capacity was 100 MWh using LiFePO4 battery technology, and the project started operation in July 2021. In 2020 Penso Power decided to expand the project by 50 MWh, expected to enter operation later in 2021. It is the biggest storage battery facility in Europe.
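The power and energy figures quoted above are related by a simple identity, duration at full output = stored energy / rated power. The short sketch below checks the numbers given in this section.

```python
# Discharge duration at full power = stored energy / rated power.
# Figures are the ones quoted in this section (rounded).
systems = {
    "Dinorwig pumped storage": {"power_gw": 1.7, "energy_gwh": 9.0},
    "UK grid batteries (end of 2022)": {"power_gw": 2.4, "energy_gwh": 2.6},
}

for name, s in systems.items():
    hours = s["energy_gwh"] / s["power_gw"]
    print(f"{name}: ~{hours:.1f} h at full output")
# Dinorwig comes out at roughly 5.3 h, matching the "over 5 hours" above;
# the battery fleet comes out at roughly 1.1 h.
```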
Consumption
Lighting
The European Commission banned low-efficiency general-purpose, non-directional incandescent light bulbs from 2012, and similarly shaped higher-efficiency halogen bulbs were banned in 2018. A few specialised bulb types, such as those for use in ovens, are exempt from the ban.
Export/import
There are undersea interconnections between the GB grid and its neighbours: 2 GW with northern France (HVDC Cross-Channel), a second 1 GW connection with France (IFA2), links with Northern Ireland (HVDC Moyle) and the Isle of Man (Isle of Man to England Interconnector), 1 GW with the Netherlands (BritNed), 1 GW with Belgium (NEMO Link), 1.4 GW with Norway (North Sea Link) and a link with the Republic of Ireland (EWIC).
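As a quick tally, the sketch below sums only the interconnector ratings stated in the paragraph above; links whose capacity is not given in the text (Moyle, the Isle of Man link and EWIC) are deliberately left out rather than guessed.

```python
# Sum of the interconnector ratings quoted above, in GW. Links whose
# capacity is not stated in the text are omitted rather than estimated.
links_gw = {
    "HVDC Cross-Channel (France)": 2.0,
    "IFA2 (France)": 1.0,
    "BritNed (Netherlands)": 1.0,
    "NEMO Link (Belgium)": 1.0,
    "North Sea Link (Norway)": 1.4,
}

print(f"quoted interconnector capacity: {sum(links_gw.values()):.1f} GW")  # 6.4 GW
```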
The export of electricity was 1–3% of consumption between 2004 and 2009. According to the IEA, the UK was the sixth-highest electricity importer, importing 11 TWh, after Brazil (42 TWh), Italy (40 TWh), the United States (33 TWh), the Netherlands (16 TWh) and Finland (14 TWh). There are also future plans to lay cables linking the GB grid with Iceland (Icelink), Norway (Scotland–Norway interconnector) and Denmark (Viking Link).
The longest cable, North Sea Link, is 720 kilometres long and connects Blyth, Northumberland, in north-eastern England, to Kvilldal in south-western Norway.
Pricing
The electricity market in the UK is deregulated, and much of the generated electricity is paid the marginal price set on the wholesale market, which since 2019 has occasionally been negative during periods of low consumption and high wind output. Electricity is traded on a spot market (APX Power UK, owned by the APX Group).
Electricity billing
In the UK, an electricity supplier is a retailer of electricity. For each supply point the supplier has to pay the various costs of transmission, distribution, meter operation, data collection, tax and so on. The supplier then adds the cost of the energy itself and its own charge.
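A minimal sketch of how a bill for one supply point is built up from the cost components listed above follows; every monetary figure and the consumption value are hypothetical, chosen only to show the structure of pass-through costs plus energy plus the supplier's own charge.

```python
# Illustrative bill for one supply point over a billing period.
# All values are hypothetical; only the structure reflects the text above.
consumption_kwh = 300

pass_through = {            # costs the supplier pays on behalf of the customer
    "transmission": 6.00,
    "distribution": 14.00,
    "meter_operation": 3.00,
    "data_collection": 1.50,
}
wholesale_energy = consumption_kwh * 0.12   # assumed 12p/kWh energy cost
supplier_charge = 8.00                      # supplier's own margin and overheads

subtotal = sum(pass_through.values()) + wholesale_energy + supplier_charge
vat = subtotal * 0.05                       # UK domestic electricity VAT rate

print(f"bill for the period: £{subtotal + vat:.2f}")
```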
Pollution
The UK historically had a coal-driven grid that generated large amounts of CO2 and other pollutants, including SO2 and nitrogen oxides, contributing to acid rain in Norway and Sweden. Coal plants had to be fitted with scrubbers, which added to costs. In 2019 the UK electricity sector emitted 0.256 kg of CO2 per kWh of electricity.
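Using the 2019 grid intensity quoted above, the emissions attributable to a given amount of consumption follow directly from emissions = consumption × intensity; the household consumption figure below is an assumption used purely for illustration.

```python
# Emissions attributable to electricity use = consumption * grid carbon intensity.
GRID_INTENSITY_KG_PER_KWH = 0.256      # 2019 UK figure quoted above
household_consumption_kwh = 2_900      # assumed annual consumption, illustrative

emissions_kg = household_consumption_kwh * GRID_INTENSITY_KG_PER_KWH
print(f"~{emissions_kg:.0f} kg CO2 per year ({emissions_kg / 1000:.2f} tonnes)")
```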
See also
Energy in the United Kingdom
Energy policy of the United Kingdom
National Grid (Great Britain)
Lists of power stations in the United Kingdom
High-voltage substations in the United Kingdom
List of high-voltage transmission links in the United Kingdom
References
green vehicle | A green vehicle, clean vehicle, eco-friendly vehicle or environmentally friendly vehicle is a road motor vehicle that produces less harmful impacts to the environment than comparable conventional internal combustion engine vehicles running on gasoline or diesel, or one that uses certain alternative fuels. Presently, in some countries the term is used for any vehicle complying with or surpassing the more stringent European emission standards (such as Euro6), California's zero-emissions vehicle standards (such as ZEV, ULEV, SULEV, PZEV), or the low-carbon fuel standards enacted in several countries. Green vehicles can be powered by alternative fuels and advanced vehicle technologies and include hybrid electric vehicles, plug-in hybrid electric vehicles, battery electric vehicles, compressed-air vehicles, hydrogen and fuel-cell vehicles, neat ethanol vehicles, flexible-fuel vehicles, natural gas vehicles, clean diesel vehicles, and, according to some sources, vehicles using blends of biodiesel and ethanol fuel or gasohol. In 2021, with an EPA-rated fuel economy of 142 miles per gallon gasoline equivalent (mpg-e) (1.7 L/100 km), the 2021 Tesla Model 3 Standard Range Plus RWD became the most efficient EPA-certified vehicle across all fuels and all years, surpassing the 2020 Tesla Model 3 Standard Range Plus and the 2019 Hyundai Ioniq Electric. Several authors also include conventional motor vehicles with high fuel economy, as they consider that increasing fuel economy is the most cost-effective way to improve energy efficiency and reduce carbon emissions in the transport sector in the short run. As part of their contribution to sustainable transport, these vehicles reduce air pollution and greenhouse gas emissions, and contribute to energy independence by reducing oil imports. An environmental analysis extends beyond just operating efficiency and emissions: a life-cycle assessment involves production and post-use considerations, and a cradle-to-cradle design is more important than a focus on a single factor such as energy efficiency.
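The mpg-e and L/100 km figures quoted above for the Model 3 are two expressions of the same efficiency; the conversion below is ordinary unit arithmetic and uses only standard conversion constants.

```python
# Convert miles per US gallon equivalent to litres per 100 km equivalent.
LITRES_PER_US_GALLON = 3.78541
KM_PER_MILE = 1.60934

def mpge_to_l_per_100km(mpge: float) -> float:
    km_per_litre = mpge * KM_PER_MILE / LITRES_PER_US_GALLON
    return 100.0 / km_per_litre

print(f"{mpge_to_l_per_100km(142):.2f} L/100 km")  # ~1.66, matching the ~1.7 quoted above
```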
Energy efficiency
Cars whose production has similar energy costs can achieve, during the operational phase of their lives, large reductions in energy costs through several measures:
The most significant is by using alternative propulsion:
An efficient engine that reduces the vehicle's consumption of petroleum (e.g. a petroleum-electric hybrid vehicle), or one that uses renewable energy sources throughout its working life.
Using biofuels instead of petroleum fuels.
Proper maintenance of a vehicle such as engine tune-ups, oil changes, and maintaining proper tire pressure can also help.
Removing unnecessary items from a vehicle reduces weight and improves fuel economy as well.
Types
Green vehicles include vehicle types that run fully or partly on alternative energy sources other than fossil fuel, or on fuels that are less carbon-intensive than gasoline or diesel.
Another option is the use of alternative fuel composition in conventional fossil fuel-based vehicles, making them function partially on renewable energy sources. Other approaches include personal rapid transit, a public transportation concept that offers automated, on-demand, non-stop transportation on a network of specially built guideways.
Electric and fuel cell-powered
Examples of vehicles with reduced petroleum consumption include electric cars, plug-in hybrids and fuel cell-powered hydrogen cars.
Electric cars are typically more efficient than fuel cell-powered vehicles on a tank-to-wheel basis. They have better fuel economy than conventional internal combustion engine vehicles but are limited by range, the maximum distance attainable before the battery is discharged. The battery is the main cost of an electric car. Electric cars provide a 0% to 99.9% reduction in CO2 emissions compared to an ICE (gasoline or diesel) vehicle, depending on the source of the electricity.
Hybrid electric vehicles
Hybrid cars may be partly fossil fuel (or biofuel) powered and partly electric or hydrogen-powered. Most combine an internal combustion engine with an electric motor, though other variations also exist. The internal combustion engine is usually either a gasoline or diesel engine (in rare cases a Stirling engine may even be used). Hybrids are more expensive to purchase, but the additional cost is typically recovered in about five years through better fuel economy.
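The roughly five-year payback claim can be checked with a simple calculation, payback = price premium / annual fuel-cost saving; all inputs below are hypothetical, and the result depends heavily on fuel prices and annual distance driven.

```python
# Payback period for a hybrid's purchase-price premium.
# All inputs are hypothetical, for illustration only.
price_premium = 3_000              # extra purchase cost of the hybrid (currency units)
annual_distance_km = 15_000
fuel_price_per_litre = 1.60
consumption_conventional = 7.0     # L/100 km, assumed
consumption_hybrid = 4.5           # L/100 km, assumed

annual_saving = ((consumption_conventional - consumption_hybrid) / 100
                 * annual_distance_km * fuel_price_per_litre)
print(f"payback: ~{price_premium / annual_saving:.1f} years")  # ~5 years on these assumptions
```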
Compressed air cars, Stirling vehicles, and others
Compressed-air cars, Stirling-powered vehicles, and liquid nitrogen vehicles are less polluting than electric vehicles, since the vehicle and its components are environmentally friendly.
Solar car races are held on a regular basis in order to promote green vehicles and other "green technology". These sleek driver-only vehicles can travel long distances at highway speeds using only the electricity generated instantaneously from the sun.
Improving conventional cars
A conventional vehicle can become a greener vehicle by mixing in renewable fuels or using less carbon-intensive fossil fuel. Typical gasoline-powered cars can tolerate up to 10% ethanol. Brazil manufactured cars that run on neat ethanol, though these were discontinued. Another available option is a flexible-fuel vehicle, which allows any blend of gasoline and ethanol, up to 85% in North America and Europe and up to 100% in Brazil. Another existing option is to convert a conventional gasoline-powered vehicle to allow the alternative use of CNG. Pakistan, Argentina, Brazil, Iran, India, Italy, and China have the largest fleets of natural gas vehicles in the world. Diesel-powered vehicles can often transition completely to biodiesel, though the fuel is a very strong solvent and can occasionally damage rubber seals in vehicles built before 1994. More commonly, however, biodiesel causes problems simply because it removes the built-up residue in an engine, clogging filters, unless care is taken when switching from dirty fossil-derived diesel to biodiesel. It is very effective at 'de-coking' a diesel engine's combustion chambers and keeping them clean. Biodiesel is the lowest-emission fuel available for diesel engines, and diesel engines are the most efficient internal combustion engines used in cars. Biodiesel is the only fuel allowed in some North American national parks because spillages will completely biodegrade within 21 days. Biodiesel- and vegetable oil-fuelled diesel vehicles have been declared among the greenest in the US Tour de Sol competition.
This presents problems, as biofuels can use food resources to provide mechanical energy for vehicles. Many experts point to this as a reason for rising food prices, particularly US bioethanol production, which has affected maize prices. In order to have a low environmental impact, biofuels should be made only from waste products or from new sources such as algae.
Electric motor and pedal powered vehicles
Multiple companies are offering and developing two-, three-, and four-wheel vehicles that combine the characteristics of a bicycle with electric motors. US federal, state and local laws do not clearly or consistently classify these vehicles as bicycles, electric bicycles, motorcycles, electric motorcycles, mopeds, neighborhood electric vehicles, motorised quadricycles or cars. Some laws limit top speed, motor power, range and other characteristics, while others do not.
Other
Public transportation vehicles are not usually included in the green vehicle category, but personal rapid transit (PRT) vehicles probably should be. All vehicles that are powered from the track have the advantage of potentially being able to use any source of electric energy, including sustainable ones, rather than requiring liquid fuels. They can also exchange regenerative braking energy between vehicles and the electric grid rather than requiring energy storage on the vehicles, and they can potentially use the entire track area for solar collectors, not just the vehicle surface. The potential energy efficiency of PRT is much higher than that which traditional automobiles can attain.
Solar vehicles are electric vehicles powered by solar energy obtained from solar panels on the surface (generally, the roof) of the vehicle. Photovoltaic (PV) cells convert the Sun's energy directly into electrical energy. Solar vehicles are not practical day-to-day transportation devices at present, but are primarily demonstration vehicles and engineering exercises, often sponsored by government agencies. However, some cities have begun offering solar-powered buses, including the Tindo in Adelaide, Australia.
Wind-powered electric vehicles primarily use wind turbines installed at a strategic point on the vehicle; the turbines convert wind energy into electricity that propels the vehicle.
Animal powered vehicles
The horse and carriage is just one type of animal-propelled vehicle. Once a common form of transportation, such vehicles became far less common as cities grew and automobiles took their place. In dense cities, the waste produced by large numbers of transportation animals was a significant health problem. Often the animals' feed is produced using diesel-powered tractors, so their use still has some environmental impact.
Human powered vehicles
Human-powered transport includes walking, bicycles, velomobiles, rowing boats, and other environmentally friendly ways of getting around. In addition to the health benefits of the exercise provided, it is far more environmentally friendly than most other options. The main downsides are limited speed and how far one can travel before becoming exhausted.
Benefits of green vehicle use
Environmental
Vehicle emissions contribute to the increasing concentration of gases linked to climate change. In order of significance, the principal greenhouse gases associated with road transport are carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). Road transport is the third-largest source of greenhouse gases emitted in the UK, accounting for about 27% of total emissions there and 33% in the United States. Of the total greenhouse gas emissions from transport, over 85% are due to CO2 emissions from road vehicles. The transport sector is the fastest-growing source of greenhouse gases.
Health
Vehicle pollutants have been linked to human ill health including the incidence of respiratory and cardiopulmonary disease and lung cancer. A 1998 report estimated that up to 24,000 people die prematurely each year in the UK as a direct result of air pollution. According to the World Health Organization, up to 13,000 deaths per year among children (aged 0–4 years) across Europe are directly attributable to outdoor pollution. The organization estimates that if pollution levels were returned to within EU limits, more than 5,000 of these lives could be saved each year.
Monetary
Hybrid taxi fleet operators in New York have reported that reduced fuel consumption saves them thousands of dollars per year.
Criticism
A study by CNW Marketing Research suggested that the extra energy cost of manufacture, shipping and disposal, and the short lives of some of these types of vehicle (particularly gas-electric hybrid vehicles), outweighs any energy savings made by their using less petroleum during their useful lifespan; this type of argument is the long smokestack argument. Critics of the report note that the study prorated all of Toyota's hybrid research-and-development costs across the relatively small number of Priuses on the road rather than using the incremental cost of building a vehicle; used 109,000 miles (175,000 km) for the length of life of a Prius (Toyota offers a 150,000-mile (240,000 km) warranty on the Prius' hybrid components, including the battery); and calculated that a majority of a car's cradle-to-grave energy is expended during the vehicle's production, not while it is driven. Norwegian Consumer Ombudsman official Bente Øverli stated that "Cars cannot do anything good for the environment except less damage than others." Based on this opinion, Norwegian law severely restricts the use of "greenwashing" to market automobiles, strongly prohibiting advertising a vehicle as being environmentally friendly, with large fines issued to violators. Some studies try to compare the environmental impact of electric and petrol vehicles over the complete life cycle, including production, operation, and dismantling.
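The proration criticism above is essentially arithmetic: dividing a fixed research-and-development outlay over a small fleet dominates the per-vehicle figure, while the incremental cost does not. The sketch below illustrates this with hypothetical numbers that are not taken from the CNW report or Toyota.

```python
# Why the proration choice matters. All numbers are hypothetical and chosen
# only to show how fleet size changes a prorated per-vehicle figure.
total_rnd = 1_000_000_000           # one-off R&D outlay
incremental_cost_per_car = 3_000    # extra cost to build each additional hybrid

for fleet_size in (100_000, 1_000_000, 10_000_000):
    prorated = total_rnd / fleet_size + incremental_cost_per_car
    print(f"fleet {fleet_size:>10,}: prorated per car ~{prorated:>7,.0f}, "
          f"incremental = {incremental_cost_per_car:,}")
```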
In general, results differ vastly depending on the region considered, because of differences in the energy sources used to produce the electricity that fuels electric vehicles. When considering only CO2 emissions, the production of electric cars generates roughly twice the emissions of producing internal combustion cars. However, emissions during operation are, on average, much larger than emissions during production. For electric cars, emissions during operation depend on the energy sources used to produce electricity and thus vary widely by geography. Studies suggest that, taking both production and operation into account, electric cars cause more emissions in economies where electricity production is not clean, for example where it is mostly coal-based. For this reason, some studies found that driving electric cars is less environmentally damaging in western US states than in eastern ones, where a smaller share of electricity is produced from cleaner sources. Similarly, in countries like India, Australia or China, where a large portion of electricity is produced from coal, driving electric vehicles would cause more environmental damage than driving petrol vehicles. On their own, such studies do not provide sufficiently clear results to justify the use of electric cars over petrol cars. Environmental impact is usually calculated from the fuel mix currently used to produce the electricity that powers electric cars; however, when a petrol vehicle is replaced by an equivalent electric vehicle, additional generating capacity must be installed in the electrical grid, and this additional capacity would not necessarily use the same ratio of "clean" to fossil-fuel sources as the existing capacity. Only if the additional generating capacity installed to support the switch from petrol to electric vehicles consists predominantly of clean sources can the switch reduce environmental damage. Another common methodological problem in comparative studies is a focus on only specific kinds of environmental impact. While some studies consider only life-cycle emissions of gaseous pollutants, or only greenhouse gas emissions such as CO2, comparisons should also account for other environmental impacts, such as pollutants released during production and operation and materials that cannot be effectively recycled. Examples include the lightweight high-performance metals, lithium batteries and rarer metals used in electric cars, all of which have a high environmental impact.
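The dependence on grid carbon intensity described above can be made concrete with a lifetime-emissions comparison: lifetime CO2 = production emissions + per-km operating emissions × distance. The sketch below uses illustrative assumptions only (production emissions, lifetime distance, per-km figures); none of the values are taken from a specific study.

```python
# Illustrative EV vs petrol lifetime CO2 comparison. All inputs are assumptions.
lifetime_km = 200_000

ev_production_kg = 12_000     # assumed, roughly double the petrol car's figure
ice_production_kg = 6_000     # assumed
ice_per_km_kg = 0.18          # assumed petrol-car emissions per km
ev_kwh_per_km = 0.18          # assumed electricity consumption per km

def ev_lifetime(grid_kg_per_kwh: float) -> float:
    return ev_production_kg + lifetime_km * ev_kwh_per_km * grid_kg_per_kwh

ice_lifetime = ice_production_kg + lifetime_km * ice_per_km_kg

for grid in (0.1, 0.256, 0.5, 0.9):   # grid intensity in kg CO2 per kWh
    print(f"grid {grid:.3f} kg/kWh: EV {ev_lifetime(grid):>8,.0f} kg vs petrol {ice_lifetime:,.0f} kg")

breakeven = (ice_lifetime - ev_production_kg) / (lifetime_km * ev_kwh_per_km)
print(f"break-even grid intensity ~{breakeven:.2f} kg CO2/kWh")
```

On these assumptions the EV is well ahead on a low-carbon grid and falls behind only on a very coal-heavy one, which is consistent with the regional pattern described in the paragraph above; different input assumptions shift the break-even point.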
A study that also looked at factors other than energy consumption and carbon emissions has suggested that there is no such thing as an environmentally friendly car. The use of vehicles with increased fuel efficiency is usually considered positive in the short term, but criticism of any hydrocarbon-based personal transport remains. The Jevons paradox suggests that energy efficiency programs are often counter-productive, even increasing energy consumption in the long run. Many environmental researchers believe that sustainable transport may require a move away from hydrocarbon fuels and from our present automobile and highway paradigm.
National and international promotion
European Union
The European Union is promoting the marketing of greener cars via a combination of binding and non-binding measures. As of April 2010, 15 of the 27 member states of the European Union provided tax incentives for electrically chargeable vehicles and some alternative fuel vehicles, covering all Western European countries except Italy and Luxembourg, plus the Czech Republic and Romania. The incentives consist of tax reductions and exemptions, as well as bonus payments for buyers of electric cars, plug-in hybrids, hybrid electric vehicles and natural gas vehicles.
United States
The United States Environmental Protection Agency (EPA) is promoting the marketing of greener cars via the SmartWay program. The SmartWay and SmartWay Elite designation mean that a vehicle is a better environmental performer relative to other vehicles. This US EPA designation is arrived at by taking into account a vehicle's Air Pollution Score and Greenhouse Gas Score. Higher Air Pollution Scores indicate vehicles that emit lower amounts of pollutants that cause smog relative to other vehicles. Higher Greenhouse Gas Scores indicate vehicles that emit lower amounts of carbon dioxide and have improved fuel economy relative to other vehicles.
To earn the SmartWay designation, a vehicle must score at least 6 on the Air Pollution Score and at least 6 on the Greenhouse Gas Score, and have a combined score of at least 13. SmartWay Elite is given to those vehicles that score 9 or better on both the Greenhouse Gas and Air Pollution Scores.
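The designation rules just described translate directly into a small decision function; the sketch below simply restates those thresholds in code, with the function name and example scores being illustrative.

```python
# Classify a vehicle under the SmartWay rules described above:
# SmartWay Elite: 9 or better on both scores.
# SmartWay: at least 6 on each score and a combined score of at least 13.
def smartway_designation(air_pollution_score: int, greenhouse_gas_score: int) -> str:
    if air_pollution_score >= 9 and greenhouse_gas_score >= 9:
        return "SmartWay Elite"
    if (air_pollution_score >= 6 and greenhouse_gas_score >= 6
            and air_pollution_score + greenhouse_gas_score >= 13):
        return "SmartWay"
    return "no designation"

print(smartway_designation(9, 10))  # SmartWay Elite
print(smartway_designation(6, 7))   # SmartWay
print(smartway_designation(6, 6))   # no designation (combined score only 12)
```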
A Green Vehicle Marketing Alliance, in conjunction with the Oak Ridge National Laboratory (ORNL), periodically meets, and coordinates marketing efforts.
Green car rankings
Several automobile magazines, motor vehicle specialized publications and environmental groups publish annual rankings or listings of the best green cars of a given year. The following table presents a selection of these annual top picks.
Electric vehicle motor shows
Dedicated electric and green vehicle motor shows:
Alternative Vehicle and Fuel Show (AVFS), Fair of Valladolid, Spain, in November.
Green Fleet Expo, Royal Botanical Gardens (Ontario), in May.
Green-Car-Guide Live!, Arena and Convention Centre in Liverpool, in June
Electric & Hybrid Vehicle Technology Expo (Sindelfingen, Germany, April / Novi, Detroit, Michigan, September)
European Electric Motor Show, Helsinki Exhibition & Convention Centre, in November
See also
Notes and references
Further reading
Leitman, Seth; Brant, Bob (October 2008). Build Your Own Electric Vehicle, 2nd Edition. McGraw-Hill, Inc. ISBN 978-0-07-154373-6.
Tobin Smith; Jim Woods; Liz Claman (2008). "Waving the Green Flag, Clean Transportation". Billion Dollar Green. John Wiley and Sons. pp. 35–46. ISBN 978-0-470-34377-7.
DFE2008 Automobile Engines, Wikiversity
External links
2013 VehicleTechnologies Market Report, Oak Ridge National Laboratory
Alternative Fuels and Advanced Vehicle Data Center
AU Green Vehicle Guide
Clean Car Calculator (Institute for Energy Efficiency)
Clean Cities - 2014 Vehicle Buyer's Guide, National Renewable Energy Laboratory (NREL), U.S. Department of Energy, Clean Cities program. December 2013.
Cradle-to-Grave Lifecycle Analysis of U.S. Light-Duty Vehicle-Fuel Pathways: A Greenhouse Gas Emissions and Economic Assessment of Current (2015) and Future (2025-2030) Technologies Archived 2020-08-12 at the Wayback Machine (includes estimated cost of avoided GHG emissions from different AFV technologies), Argonne National Laboratory, June 2016.
Earth cars
EPA Green Vehicle Guide
Green Cars (Autocar)
Green Car Center (Yahoo)
Green Car Guide Archived 2014-02-07 at the Wayback Machine.
Infographic: Green Cars 101 (2011)
Green cars and eco driving Archived 2013-01-21 at archive.today
Green Progress
Model Year 2014 Fuel Economy Guide , U.S. Environmental Protection Agency and U.S. Department of Energy, April 2014.
Progressive Insurance Automotive X PRIZE homepage
Small Efficient Vehicles Wiki: People's Car Project
State of Charge: Electric Vehicles’ Global Warming Emissions and Fuel-Cost Savings across the United States Archived 2012-10-21 at the Wayback Machine (UCS)
Top Ten EPA-Rated Fuel Sippers (2016) - including BEVs and PHEVs
UCS Hybrid Scorecard Archived 2012-05-18 at the Wayback Machine (Union of Concerned Scientists) |
joint implementation | Joint Implementation (JI) is one of three flexibility mechanisms set out in the Kyoto Protocol to help countries with binding greenhouse gas emissions targets (the Annex I countries) meet their treaty obligations. Under Article 6, any Annex I country can invest in a project to reduce greenhouse gas emissions in any other Annex I country (referred to as a "Joint Implementation Project") as an alternative to reducing emissions domestically. In this way countries can lower the costs of complying with their Kyoto targets by investing in projects where reducing emissions may be cheaper and applying the resulting Emission Reduction Units (ERUs) towards their commitment goal.
A JI project might involve, for example, replacing a coal-fired power plant with a more efficient combined heat and power plant. Most JI projects are expected to take place in the economies in transition (the EIT Parties) noted in Annex B of the Kyoto Protocol. Currently, Russia and Ukraine are slated to host the greatest number of JI projects. Unlike the Clean Development Mechanism, JI has caused less concern about spurious emission reductions, because a JI project, in contrast to a CDM project, takes place in a country which has a commitment to reduce emissions under the Kyoto Protocol.
The process of receiving credit for JI projects is somewhat complex. Emission reduction projects are awarded credits called Emission Reduction Units (ERUs), each of which represents an emission reduction equivalent to one tonne of CO2 equivalent. The ERUs come from the host country's pool of assigned emissions credits, known as Assigned Amount Units (AAUs). Each Annex I party has a predetermined amount of AAUs, calculated on the basis of its 1990 greenhouse gas emission levels. By requiring JI credits to come from a host country's pool of AAUs, the Kyoto Protocol ensures that the total amount of emissions credits among Annex I parties does not change for the duration of the Kyoto Protocol's first commitment period.
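The accounting described above, where ERUs are carved out of the host country's AAU pool so that the overall Annex I total is unchanged, can be sketched as a simple ledger. The country names and quantities below are hypothetical; only the conservation of the total reflects the mechanism described in the text.

```python
# Sketch of JI accounting: issuing ERUs converts host-country AAUs, so the
# total number of Kyoto units across Annex I parties stays constant.
# Country names and quantities are hypothetical.
accounts = {
    "host":     {"AAU": 1_000_000, "ERU": 0},
    "investor": {"AAU": 2_000_000, "ERU": 0},
}

def issue_erus(host: str, investor: str, tonnes_co2e: int) -> None:
    """Issue ERUs for verified reductions and transfer them to the investing party."""
    accounts[host]["AAU"] -= tonnes_co2e      # converted out of the host's AAU pool
    accounts[investor]["ERU"] += tonnes_co2e  # credited to the investing party

total_before = sum(sum(a.values()) for a in accounts.values())
issue_erus("host", "investor", 50_000)
total_after = sum(sum(a.values()) for a in accounts.values())

assert total_before == total_after            # the overall cap is unchanged
print(accounts)
```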
Projects
The formal crediting period for JI was aligned with the first commitment period of the Kyoto Protocol and did not start until January 2008 (Carbon Trust, 2009, p. 20). In November 2008, only 22 JI projects had been officially approved and registered. The total number of ERUs generated by JI by 2012 was expected to be around 300 million. This estimate is based on values taken from project plans and makes no adjustment to account for delivery in practice.
Russia accounts for about two-thirds of these projected savings, with the remainder divided up roughly equally between Ukraine and the EU's New Member States. Emission savings include cuts in methane, HFC, and N2O emissions.
In December 2012, ERU prices crashed to a low of 15c before recovering to 23c after news that the EU's Climate Change Committee was to vote on a ban of ERUs from countries that had not signed up to a second commitment period under the Kyoto Protocol. In January 2013, Bloomberg reported that Emission Reduction Unit prices had declined 89 percent over 2012.
See also
Assigned amount units
Emission Reduction Unit
Removal Units
Certified Emission Reduction
Clean Development Mechanism
Flexible Mechanisms
Obtaining ownership of land by productive use
The third period (Phase Three) of the EU ETS is expected to start by the end of 2012.
The future of JI is expected to be decided by the committee of the UNFCCC.
External links
UNFCCC Joint Implementation on the UNFCCC pages.
Foundation Joint Implementation Network Host of the Joint Implementation Quarterly (JIQ) newsletter.
The original Kyoto Protocol
Notes
pastoral greenhouse gas research consortium | The Pastoral Greenhouse Gas Research Consortium (PGGRC) carries out research to find methods of reducing greenhouse gas emissions from livestock. The consortium, established in 2004, has a Memorandum of Understanding with the New Zealand Government. Almost half of the greenhouse gas emissions in New Zealand are due to agriculture, and since the New Zealand government has signed and ratified the Kyoto Protocol, methods are being sought to reduce these emissions.
In 2003 the Government attempted to impose an Agricultural emissions research levy on farmers to fund research into agricultural emissions reduction but it proved to be unpopular and the proposal was abandoned. The PGGRC is an alternative method of addressing agricultural emissions.
An independent review in 2006 found that the PGGRC was producing world-leading research and was excellent value for money.
Partners
The partners in the consortium are:
AgResearch
Fonterra
Fert Research
PGG Wrightson
DairyNZ
Deer Research
Meat & Wool New Zealand
Associate members are the Ministry of Agriculture and Forestry, NIWA and the Foundation for Research, Science and Technology. Research is carried out by AgResearch, DairyNZ, LIC and Lincoln University.
See also
Agriculture in New Zealand
Climate change in New Zealand
Environment of New Zealand
References
External links
Pastoral Greenhouse Gas Research Consortium |
ibm | The International Business Machines Corporation (doing business as IBM), nicknamed Big Blue, is an American multinational technology corporation headquartered in Armonk, New York, and present in over 175 countries. It specializes in computer hardware, middleware, and software, and provides hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is the largest industrial research organization in the world, with 19 research facilities across a dozen countries, and held the record for most annual U.S. patents generated by a business for 29 consecutive years from 1993 to 2021.
IBM was founded in 1911 as the Computing-Tabulating-Recording Company (CTR), a holding company of manufacturers of record-keeping and measuring systems. It was renamed "International Business Machines" in 1924 and soon became the leading manufacturer of punch-card tabulating systems. For the next several decades, IBM would become an industry leader in several emerging technologies, including electric typewriters, electromechanical calculators, and personal computers. During the 1960s and 1970s, the IBM mainframe, exemplified by the System/360, was the dominant computing platform, and the company produced 80 percent of computers in the U.S. and 70 percent of computers worldwide.
After entering the multipurpose microcomputer market in the 1980s with the IBM Personal Computer, which became the most popular standard for personal computers, IBM began losing its market dominance to emerging competitors. Beginning in the 1990s, the company began downsizing its operations and divesting from commodity production, most notably selling its personal computer division to the Lenovo Group in 2005. IBM has since concentrated on computer services, software, supercomputers, and scientific research. Since 2000, its supercomputers have consistently ranked among the most powerful in the world, and in 2001 it became the first company to generate more than 3,000 patents in one year, beating this record in 2008 with over 4,000 patents. As of 2022, the company held 150,000 patents.
As one of the world's oldest and largest technology companies, IBM has been responsible for several technological innovations, including the automated teller machine (ATM), dynamic random-access memory (DRAM), the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the SQL programming language, and the UPC barcode. The company has made inroads in advanced computer chips, quantum computing, artificial intelligence, and data infrastructure. IBM employees and alumni have won various recognitions for their scientific research and inventions, including six Nobel Prizes and six Turing Awards.
IBM is a publicly traded company and one of 30 companies in the Dow Jones Industrial Average. It is among the world's largest employers, with over 297,900 employees worldwide in 2022. Despite its relative decline within the technology sector, IBM remains the seventh-largest technology company by revenue, and 49th-largest overall, according to the 2022 Fortune 500. It is also consistently ranked among the world's most recognizable, valuable, and admired brands.
History
IBM was founded in 1911 in Endicott, New York, as the Computing-Tabulating-Recording Company (CTR) and was renamed "International Business Machines" in 1924. IBM is incorporated in New York and has operations in over 170 countries.
In the 1880s, technologies emerged that would ultimately form the core of International Business Machines (IBM). Julius E. Pitrap patented the computing scale in 1885; Alexander Dey invented the dial recorder (1888); Herman Hollerith (1860–1929) patented the Electric Tabulating Machine; and Willard Bundy invented a time clock to record workers' arrival and departure times on a paper tape in 1889. On June 16, 1911, their four companies were amalgamated in New York State by Charles Ranlett Flint, forming a fifth company, the Computing-Tabulating-Recording Company (CTR), based in Endicott, New York. The combined companies had 1,300 employees and offices and plants in Endicott and Binghamton, New York; Dayton, Ohio; Detroit, Michigan; Washington, D.C.; and Toronto.
They manufactured machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company by John Henry Patterson, called on Flint and, in 1914, was offered a position at CTR. Watson joined CTR as general manager and then, 11 months later, was made President when antitrust cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies. He implemented sales conventions, "generous sales incentives, a focus on customer service, an insistence on well-groomed, dark-suited salesmen and had an evangelical fervor for instilling company pride and loyalty in every worker". His favorite slogan, "THINK", became a mantra for each company's employees. During Watson's first four years, revenues reached $9 million ($152 million today) and the company's operations expanded to Europe, South America, Asia and Australia. Watson never liked the clumsy hyphenated name "Computing-Tabulating-Recording Company" and on February 14, 1924, chose to replace it with the more expansive title "International Business Machines", which had previously been used as the name of CTR's Canadian division. By 1933, most of the subsidiaries had been merged into one company, IBM.
The Nazis reportedly made extensive use of Hollerith punch card and alphabetical accounting equipment, and IBM's majority-owned German subsidiary, Deutsche Hollerith Maschinen GmbH (Dehomag), supplied this equipment from the early 1930s. This equipment was critical to Nazi efforts to categorize citizens of both Germany and other nations that fell under Nazi control through ongoing censuses. This census data was used to facilitate the round-up of Jews and other targeted groups, and to catalog their movements through the machinery of the Holocaust, including internment in the concentration camps. Nazi concentration camps operated a Hollerith department called the Hollerith Abteilung, which had IBM machines, including calculating and sorting machines. There is much debate among historians about whether IBM was complicit in the use of these machines, whether the machines used were IBM-branded, and even whether tabulating machines were used for this purpose at all.
IBM has several leadership development and recognition programs to acknowledge and foster employee potential and achievements. For early-career high-potential employees, IBM sponsors leadership development programs by discipline (e.g., general management (GMLDP), human resources (HRLDP), finance (FLDP)). Each year, the company also selects 500 IBM employees for the IBM Corporate Service Corps (CSC), which gives top employees a month to do humanitarian work abroad. For certain interns, IBM also has a program called Extreme Blue that partners top business and technical students to develop high-value technology and compete to present their business case to the company's CEO at internship's end.
The company also has various designations for exceptional individual contributors such as Senior Technical Staff Member (STSM), Research Staff Member (RSM), Distinguished Engineer (DE), and Distinguished Designer (DD). Prolific inventors can also achieve patent plateaus and earn the designation of Master Inventor. The company's most prestigious designation is that of IBM Fellow; since 1963, the company has named a handful of Fellows each year based on technical achievement. Other programs recognize years of service, such as the Quarter Century Club established in 1924, and sellers are eligible to join the Hundred Percent Club, composed of IBM salesmen who meet their quotas, convened in Atlantic City, New Jersey. The company also selects 1,000 IBM employees annually to receive the Best of IBM Award, which includes an all-expenses-paid trip to the awards ceremony in an exotic location.
IBM built the Automatic Sequence Controlled Calculator, an electromechanical computer, during World War II. It offered its first commercial stored-program computer, the vacuum tube based IBM 701, in 1952. The IBM 305 RAMAC introduced the hard disk drive in 1956. The company switched to transistorized designs with the 7000 and 1400 series, beginning in 1958.
In 1956, the company demonstrated the first practical example of artificial intelligence when Arthur L. Samuel of IBM's Poughkeepsie, New York, laboratory programmed an IBM 704 not merely to play checkers but "learn" from its own experience. In 1957, the FORTRAN scientific programming language was developed. In 1961, IBM developed the SABRE reservation system for American Airlines and introduced the highly successful Selectric typewriter.
In 1963, IBM employees and computers helped NASA track the orbital flights of the Mercury astronauts. A year later, it moved its corporate headquarters from New York City to Armonk, New York. The latter half of the 1960s saw IBM continue its support of space exploration, participating in the 1965 Gemini flights, 1966 Saturn flights, and 1969 lunar mission. IBM also developed and manufactured the Saturn V's Instrument Unit and Apollo spacecraft guidance computers.
On April 7, 1964, IBM launched the first computer system family, the IBM System/360. It spanned the complete range of commercial and scientific applications from large to small, allowing companies for the first time to upgrade to models with greater computing capability without having to rewrite their applications. It was followed by the IBM System/370 in 1970. Together the 360 and 370 made the IBM mainframe the dominant mainframe computer and the dominant computing platform in the industry throughout this period and into the early 1980s. They and the operating systems that ran on them, such as OS/VS1 and MVS, and the middleware built on top of those, such as the CICS transaction processing monitor, had a near-monopoly-level market share and became the thing IBM was most known for during this period.
In 1969, the United States of America alleged that IBM violated the Sherman Antitrust Act by monopolizing or attempting to monopolize the general-purpose electronic digital computer system market, specifically computers designed primarily for business, and subsequently alleged that IBM violated the antitrust laws in IBM's actions directed against leasing companies and plug-compatible peripheral manufacturers. Shortly after, IBM unbundled its software and services in what many observers believed was a direct result of the lawsuit, creating a competitive market for software. In 1982, the Department of Justice dropped the case as "without merit".
Also in 1969, IBM engineer Forrest Parry invented the magnetic stripe card that would become ubiquitous for credit/debit/ATM cards, driver's licenses, rapid transit cards, and a multitude of other identity and access control applications. IBM pioneered the manufacture of these cards, and for most of the 1970s, the data processing systems and software for such applications ran exclusively on IBM computers. In 1974, IBM engineer George J. Laurer developed the Universal Product Code. IBM and the World Bank first introduced financial swaps to the public in 1981, when they entered into a swap agreement. The IBM PC, originally designated IBM 5150, was introduced in 1981, and it soon became an industry standard.
In 1991 IBM began spinning off its many divisions into autonomous subsidiaries (so-called "Baby Blues") in an attempt to make the company more manageable and to streamline IBM by having other investors finance those companies. These included AdStar, dedicated to disk drives and other data storage products; IBM Application Business Systems, dedicated to mid-range computers; IBM Enterprise Systems, dedicated to mainframes; Pennant Systems, dedicated to mid-range and large printers; Lexmark, dedicated to small printers; and more. Lexmark was acquired by Clayton & Dubilier in a leveraged buyout shortly after its formation.
In September 1992, IBM completed the spin-off of its various non-mainframe and non-midrange personal computer manufacturing divisions, combining them into an autonomous wholly owned subsidiary known as the IBM Personal Computer Company (IBM PC Co.). This corporate restructuring came after IBM reported a sharp drop in profit margins during the second quarter of fiscal year 1992; market analysts attributed the drop to a fierce price war in the personal computer market over the summer of 1992. The corporate restructuring was one of the largest and most expensive in history up to that point. By the summer of 1993, the IBM PC Co. had divided into multiple business units itself, including Ambra Computer Corporation and the IBM Power Personal Systems Group, the former an attempt to design and market "clone" computers of IBM's own architecture and the latter responsible for IBM's PowerPC-based workstations.
In 1993, IBM posted an $8 billion loss – at the time the biggest in American corporate history. Lou Gerstner was hired as CEO from RJR Nabisco to turn the company around. In 2002 IBM acquired PwC Consulting, the consulting arm of PwC, which was merged into IBM Global Services.
In 1998, IBM merged the enterprise-oriented Personal Systems Group of the IBM PC Co. into IBM's own Global Services personal computer consulting and customer service division. The resulting merged business units then became known simply as the IBM Personal Systems Group. In 1999, IBM stopped selling its computers at retail outlets after its market share in this sector had fallen considerably behind competitors Compaq and Dell. Immediately afterwards, the IBM PC Co. was dissolved and merged into the IBM Personal Systems Group.
On September 14, 2004, LG and IBM announced that their business alliance in the South Korean market would end at the end of that year. Both companies stated that it was unrelated to the charges of bribery earlier that year. Xnote was originally part of the joint venture and was sold by LG in 2012.
In 2005, the company sold all of its personal computer business to Chinese technology company Lenovo and, in 2009, it acquired software company SPSS Inc. Later in 2009, IBM's Blue Gene supercomputing program was awarded the National Medal of Technology and Innovation by U.S. President Barack Obama. In 2011, IBM gained worldwide attention for its artificial intelligence program Watson, which was exhibited on Jeopardy!, where it won against game-show champions Ken Jennings and Brad Rutter. The company also celebrated its 100th anniversary in the same year on June 16. In 2012, IBM announced it had agreed to buy Kenexa and Texas Memory Systems, and a year later it also acquired SoftLayer Technologies, a web hosting service, in a deal worth around $2 billion. Also that year, the company designed a video surveillance system for Davao City.
In 2014, IBM announced it would sell its x86 server division to Lenovo for $2.1 billion, while continuing to offer Power ISA-based servers. Also that year, IBM began announcing several major partnerships with other companies, including Apple Inc., Twitter, Facebook, Tencent, Cisco, UnderArmour, Box, Microsoft, VMware, CSC, Macy's, Sesame Workshop (the parent company of Sesame Street), and Salesforce.com.
In 2015, IBM announced three major acquisitions: Merge Healthcare for $1 billion, data storage vendor Cleversafe, and all digital assets from The Weather Company, including Weather.com and the Weather Channel mobile app. Also that year, IBM employees created the film A Boy and His Atom, which was the first molecule movie to tell a story. In 2016, IBM acquired video conferencing service Ustream and formed a new cloud video unit, and acquired Truven Health Analytics for $2.6 billion. In April 2016, it posted a 14-year low in quarterly sales. The following month, Groupon sued IBM accusing it of patent infringement, two months after IBM accused Groupon of patent infringement in a separate lawsuit. In October 2018, IBM announced its intention to acquire Red Hat for $34 billion, a deal completed on July 9, 2019.
IBM announced in October 2020 that it would divest the Managed Infrastructure Services unit of its Global Technology Services division into a new public company. The new company, Kyndryl, would have 90,000 employees, 4,600 clients in 115 countries, and a backlog of $60 billion. IBM's spin-off was greater than any of its previous divestitures and was welcomed by investors.
IBM appointed Martin Schroeter, who had been IBM's CFO from 2014 through the end of 2017, as CEO of Kyndryl.
On March 7, 2022, a few days after the start of the Russian invasion of Ukraine, IBM CEO Arvind Krishna posted a Ukrainian flag and announced that "we have suspended all business in Russia". All Russian articles were also removed from the IBM website. On June 7, Krishna announced that IBM would carry out an "orderly wind-down" of its operations in Russia.
In 2023, IBM acquired Manta Software Inc. for an undisclosed amount to complement its data and AI governance capabilities. On November 16, 2023, IBM suspended advertising on Twitter after antisemitic comments by Elon Musk.
Headquarters and offices
IBM is headquartered in Armonk, New York, a community 37 miles (60 km) north of Midtown Manhattan. A nickname for the company is the "Colossus of Armonk". Its principal building, referred to as CHQ, is a 283,000-square-foot (26,300 m2) glass and stone edifice on a 25-acre (10 ha) parcel amid a 432-acre former apple orchard the company purchased in the mid-1950s. There are two other IBM buildings within walking distance of CHQ: the North Castle office, which previously served as IBM's headquarters; and the Louis V. Gerstner, Jr., Center for Learning (formerly known as the IBM Learning Center (ILC)), a resort hotel and training center, which has 182 guest rooms, 31 meeting rooms, and various amenities.
IBM operates in 174 countries as of 2016, with mobility centers in smaller market areas and major campuses in the larger ones. In New York City, IBM has several offices besides CHQ, including the IBM Watson headquarters at Astor Place in Manhattan. Outside of New York, major campuses in the United States include Austin, Texas; Research Triangle Park (Raleigh-Durham), North Carolina; Rochester, Minnesota; and Silicon Valley, California.
IBM's real estate holdings are varied and globally diverse. Towers occupied by IBM include 1250 René-Lévesque (Montreal, Canada) and One Atlantic Center (Atlanta, Georgia, US). In Beijing, China, IBM occupies Pangu Plaza, the city's seventh tallest building and overlooking Beijing National Stadium ("Bird's Nest"), home to the 2008 Summer Olympics.
IBM India Private Limited is the Indian subsidiary of IBM, which is headquartered at Bangalore, Karnataka. It has facilities in Coimbatore, Chennai, Kochi, Ahmedabad, Delhi, Kolkata, Mumbai, Pune, Gurugram, Noida, Bhubaneshwar, Surat, Visakhapatnam, Hyderabad, Bangalore and Jamshedpur.
Other notable buildings include:
the IBM Rome Software Lab (Rome, Italy),
Hursley House (Winchester, UK),
330 North Wabash (Chicago, Illinois, United States),
the Cambridge Scientific Center (Cambridge, Massachusetts, United States),
the IBM Toronto Software Lab (Toronto, Canada),
the IBM Building, Johannesburg (Johannesburg, South Africa),
the IBM Building (Seattle) (Seattle, Washington, United States),
the IBM Hakozaki Facility (Tokyo, Japan),
the IBM Yamato Facility (Yamato, Japan),
the IBM Canada Head Office Building (Ontario, Canada)
the Watson IoT Headquarters (Munich, Germany).
Defunct IBM campuses include the IBM Somers Office Complex (Somers, New York), Spango Valley (Greenock, Scotland), and Tour Descartes (Paris, France). The company's contributions to industrial architecture and design include works by Marcel Breuer, Eero Saarinen, Ludwig Mies van der Rohe, I.M. Pei and Ricardo Legorreta. Van der Rohe's building in Chicago was recognized with the 1990 Honor Award from the National Building Museum.
IBM was recognized as one of the Top 20 Best Workplaces for Commuters by the United States Environmental Protection Agency (EPA) in 2005, which recognized Fortune 500 companies that provided employees with excellent commuter benefits to help reduce traffic and air pollution. In 2004, concerns were raised related to IBM's contribution in its early days to pollution in its original location in Endicott, New York.
Finance
For the fiscal year 2020, IBM reported earnings of $5.6 billion, with an annual revenue of $73.6 billion. IBM's revenue has fallen for 8 of the last 9 years. IBM's market capitalization was valued at over $127 billion as of April 2021. IBM ranked No. 38 on the 2020 Fortune 500 rankings of the largest United States corporations by total revenue. In 2014, IBM was accused of using "financial engineering" to hit its quarterly earnings targets rather than investing for the longer term.
Products and services
IBM has a large and diverse portfolio of products and services. As of 2016, these offerings fall into the categories of cloud computing, artificial intelligence, commerce, data and analytics, Internet of things (IoT), IT infrastructure, mobile, digital workplace and cybersecurity.
IBM Cloud includes infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS) offered through public, private and hybrid cloud delivery models. For instance, the IBM Bluemix PaaS enables developers to quickly create complex websites on a pay-as-you-go model. IBM SoftLayer is a dedicated server, managed hosting and cloud computing provider, which in 2011 reported hosting more than 81,000 servers for more than 26,000 customers. IBM also provides Cloud Data Encryption Services (ICDES), using cryptographic splitting to secure customer data.
IBM also hosts the industry-wide cloud computing and mobile technologies conference InterConnect each year.
Hardware designed by IBM for these categories includes IBM's Power microprocessors, which are employed inside many console gaming systems, including the Xbox 360, PlayStation 3, and Nintendo's Wii U. IBM Secure Blue is encryption hardware that can be built into microprocessors, and in 2014, the company revealed TrueNorth, a neuromorphic CMOS integrated circuit, and announced a $3 billion investment over the following five years to design a neural chip that mimics the human brain, with 10 billion neurons and 100 trillion synapses, but that uses just 1 kilowatt of power. In 2016, the company launched all-flash arrays designed for small and midsized companies, which include software for data compression, provisioning, and snapshots across various systems.
IT outsourcing also represents a major service provided by IBM, with more than 60 data centers worldwide. IBM Developer is IBM's source for emerging software technologies, and SPSS is a software package used for statistical analysis. IBM's Kenexa suite provides employment and retention solutions, and includes BrassRing, an applicant tracking system used by thousands of companies for recruiting. IBM also owns The Weather Company, which provides weather forecasting and includes weather.com and Weather Underground.
Smarter Planet is an initiative that seeks to achieve economic growth, near-term efficiency, sustainable development, and societal progress, targeting opportunities such as smart grids, water management systems, solutions to traffic congestion, and greener buildings.
Service provisions include Redbooks, which are publicly available online books about best practices with IBM products, and developerWorks, a website for software developers and IT professionals with how-to articles and tutorials, as well as software downloads, code samples, discussion forums, podcasts, blogs, wikis, and other resources for developers and technical professionals.
IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. Watson debuted in 2011 on the American game show Jeopardy!, where it competed against champions Ken Jennings and Brad Rutter in a three-game tournament and won. Watson has since been applied to business, healthcare, developers, and universities. For example, IBM has partnered with Memorial Sloan Kettering Cancer Center to assist with considering treatment options for oncology patients and for doing melanoma screenings.
Several companies use Watson for call centers, either replacing or assisting customer service agents.
In January 2019, IBM introduced its first commercial quantum computer: IBM Q System One.
IBM also provides infrastructure for the New York City Police Department through its IBM Cognos Analytics to perform data visualizations of CompStat crime data.
In March 2020, it was announced that IBM would build the first quantum computer in Germany. The computer should allow researchers to harness the technology without falling foul of the EU's increasingly assertive stance on data sovereignty.
In June 2020, IBM announced that it was exiting the facial recognition business. In a letter to Congress, IBM's Chief Executive Officer Arvind Krishna told lawmakers, "now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
In May 2022, IBM announced the company had signed a multi-year Strategic Collaboration Agreement with Amazon Web Services to make a wide variety of IBM software available as a service on AWS Marketplace. Additionally, the deal includes both companies making joint investments that make it easier for companies to consume IBM's offerings and integrate them with AWS, including developer training and software development for select markets.
In November 2022, the company unveiled a chip called the 433-qubit Osprey. Time called it "the world's most powerful quantum processor" and noted that the number of classical bits needed to represent its quantum state would be larger than the total number of atoms in the known universe.
In an effort to streamline its products and services, beginning in the 1990s, IBM has regularly sold off low-margin assets while shifting its focus to higher-value, more profitable markets. In 1991, the company spun off its printer and keyboard manufacturing division to Lexmark; in 2005 it sold its personal computer (ThinkPad/ThinkCentre) business to Lenovo; in 2015 it adopted a "fabless" model for semiconductor design and offloaded manufacturing to GlobalFoundries; and in 2021 it spun off its managed infrastructure services unit into a new public company named Kyndryl. IBM also announced the acquisition of the enterprise software company Turbonomic for $1.5 billion. In 2022, IBM announced it would sell Watson Health to private equity firm Francisco Partners. IBM also started a collaboration with new Japanese manufacturer Rapidus in late 2022, which led GlobalFoundries to file a lawsuit against IBM the following year.
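The comparison in the Osprey paragraph above is simple arithmetic: a 433-qubit register has 2^433 computational basis states, which exceeds the commonly cited rough estimate of about 10^80 atoms in the observable universe. The sketch below just checks that comparison; the atom count is the standard order-of-magnitude estimate, not a measured figure.

```python
# A 433-qubit register has 2**433 computational basis states; ~10**80 is the
# commonly cited rough estimate of atoms in the observable universe.
states = 2 ** 433
atoms_estimate = 10 ** 80

print(f"2**433 is roughly 10**{len(str(states)) - 1}")  # about 10**130
print(states > atoms_estimate)                           # True
```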
Research
Research has been part of IBM since its founding, and its organized efforts trace their roots back to 1945, when the Watson Scientific Computing Laboratory was founded at Columbia University in New York City, converting a renovated fraternity house on Manhattan's West Side into IBM's first laboratory. Now, IBM Research constitutes the largest industrial research organization in the world, with 12 labs on 6 continents. IBM Research is headquartered at the Thomas J. Watson Research Center in New York, and facilities include the Almaden lab in California, the Austin lab in Texas, the Australia lab in Melbourne, the Brazil lab in São Paulo and Rio de Janeiro, the China lab in Beijing and Shanghai, the Ireland lab in Dublin, the Haifa lab in Israel, the India lab in Delhi and Bangalore, the Tokyo lab, the Zurich lab and the Africa lab in Nairobi.
In terms of investment, IBM's R&D expenditure totals several billion dollars each year. In 2012, that expenditure was approximately $6.9 billion. Recent allocations have included $1 billion to create a business unit for Watson in 2014, and $3 billion to create a next-gen semiconductor along with $4 billion towards growing the company's "strategic imperatives" (cloud, analytics, mobile, security, social) in 2015.
IBM has been a leading proponent of the Open Source Initiative, and began supporting Linux in 1998. The company invests billions of dollars in services and software based on Linux through the IBM Linux Technology Center, which includes over 300 Linux kernel developers. IBM has also released code under different open-source licenses, such as the platform-independent software framework Eclipse (worth approximately $40 million at the time of the donation), the three-sentence International Components for Unicode (ICU) license, and the Java-based relational database management system (RDBMS) Apache Derby. IBM's open source involvement has not been trouble-free, however (see SCO v. IBM).
Famous inventions and developments by IBM include: the automated teller machine (ATM), dynamic random access memory (DRAM), the electronic keypunch, the financial swap, the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, RISC, the SABRE airline reservation system, SQL, the Universal Product Code (UPC) bar code, and the virtual machine. Additionally, in 1990 company scientists used a scanning tunneling microscope to arrange 35 individual xenon atoms to spell out the company acronym, marking the first structure assembled one atom at a time. A major part of IBM research is the generation of patents. Since its first patent for a traffic signaling device, IBM has been one of the world's most prolific patent sources. In 2021, the company held the record for the most patents generated by a business for the 29th consecutive year.
Five IBM employees have received the Nobel Prize: Leo Esaki, of the Thomas J. Watson Research Center in Yorktown Heights, N.Y., in 1973, for work in semiconductors; Gerd Binnig and Heinrich Rohrer, of the Zurich Research Center, in 1986, for the scanning tunneling microscope; and Georg Bednorz and Alex Müller, also of Zurich, in 1987, for research in superconductivity. Six IBM employees have won the Turing Award, including the first female recipient, Frances E. Allen. Ten National Medals of Technology (USA) and five National Medals of Science (USA) have been awarded to IBM employees.
Brand and reputation
IBM is nicknamed Big Blue partly due to its blue logo and color scheme, and also in reference to its former de facto dress code of white shirts with blue suits. The company logo has undergone several changes over the years, with its current "8-bar" logo designed in 1972 by graphic designer Paul Rand. It was a general replacement for a 13-bar logo, since period photocopiers did not render narrow (as opposed to tall) stripes well. Aside from the logo, IBM used Helvetica as a corporate typeface for 50 years, until it was replaced in 2017 by the custom-designed IBM Plex.
IBM has a valuable brand as a result of over 100 years of operations and marketing campaigns. Since 1996, IBM has been the exclusive technology partner for the Masters Tournament, one of the four major championships in professional golf, with IBM creating the first Masters.org (1996), the first course cam (1998), the first iPhone app with live streaming (2009), and the first-ever live 4K Ultra High Definition feed in the United States for a major sporting event (2016). As a result, IBM CEO Ginni Rometty became the third female member of the tournament's governing body, the Augusta National Golf Club. IBM is also a major sponsor in professional tennis, with engagements at the U.S. Open, Wimbledon, the Australian Open, and the French Open. The company also sponsored the Olympic Games from 1960 to 2000, and the National Football League from 2003 to 2012.
In 2012, IBM's brand was valued at $75.5 billion and ranked by Interbrand as the third-best brand worldwide. That same year, it was also ranked the top company for leaders (Fortune), the number two green company in the U.S. (Newsweek), the second-most respected company (Barron's), the fifth-most admired company (Fortune), the 18th-most innovative company (Fast Company), and the number one in technology consulting and number two in outsourcing (Vault). In 2015, Forbes ranked IBM as the fifth-most valuable brand, and for 2020, the Drucker Institute named IBM the No. 3 best-managed company. During the 2022 Russian invasion of Ukraine, IBM donated $250,000 to Polish Humanitarian Action and the same amount to People in Need, Czech Republic.
In terms of ESG, IBM reported its total CO2e emissions (direct and indirect) for the twelve months ending December 31, 2020 at 621 kilotons, a reduction of 324 kilotons (34.3%) year-on-year. In February 2021, IBM committed to achieving net zero greenhouse gas emissions by the year 2030.
People and culture
Employees
IBM has one of the largest workforces in the world, and employees at Big Blue are referred to as "IBMers". The company pioneered several employment practices that were unheard of at the time. IBM was among the first corporations to provide group life insurance (1934), survivor benefits (1935), training for women (1935), paid vacations (1937), and training for disabled people (1942). IBM hired its first black salesperson in 1946, and in 1953 Thomas J. Watson Jr. published the company's first written equal opportunity policy letter, one year before the U.S. Supreme Court decision in Brown v. Board of Education and 11 years before the Civil Rights Act of 1964. The Human Rights Campaign has rated IBM 100% on its index of gay-friendliness every year since 2003, with IBM providing same-sex partners of its employees with health benefits and an anti-discrimination clause. Additionally, in 2005, IBM became the first major company in the world to formally commit to not using genetic information in employment decisions. In 2017, IBM was named to Working Mother's 100 Best Companies List for the 32nd consecutive year.
IBM has several leadership development and recognition programs to recognize employee potential and achievements. For early-career high-potential employees, IBM sponsors leadership development programs by discipline (e.g., general management (GMLDP), human resources (HRLDP), finance (FLDP)). Each year, the company also selects 500 IBM employees for the IBM Corporate Service Corps (CSC), which gives top employees a month to do humanitarian work abroad. For certain interns, IBM also has a program called Extreme Blue that partners with top business and technical students to develop high-value technology and compete to present their business case to the company's CEO at internship's end.
The company also has various designations for exceptional individual contributors such as Senior Technical Staff Member (STSM), Research Staff Member (RSM), Distinguished Engineer (DE), and Distinguished Designer (DD). Prolific inventors can also achieve patent plateaus and earn the designation of Master Inventor. The company's most prestigious designation is that of IBM Fellow; since 1963, the company has named a handful of Fellows each year based on technical achievement. Other programs recognize years of service, such as the Quarter Century Club established in 1924, and sellers are eligible to join the Hundred Percent Club, composed of IBM salesmen who meet their quotas, convened in Atlantic City, New Jersey. Each year, the company also selects 1,000 IBM employees for the Best of IBM Award, which includes an all-expenses-paid trip to the awards ceremony in an exotic location.
IBM's culture has evolved significantly over its century of operations. In its early days, a dark (or gray) suit, white shirt, and a "sincere" tie constituted the public uniform for IBM employees. During IBM's management transformation in the 1990s, CEO Louis V. Gerstner Jr. relaxed these codes, normalizing the dress and behavior of IBM employees. The company's culture has also given rise to different plays on the company acronym (IBM), with some saying it stands for "I've Been Moved" due to relocations and layoffs, others saying it stands for "I'm By Myself" pursuant to a prevalent work-from-anywhere norm, and others saying it stands for "I'm Being Mentored" due to the company's open door policy and encouragement for mentoring at all levels. In terms of labor relations, the company has traditionally resisted labor union organizing, although unions represent some IBM workers outside the United States. In Japan, IBM employees also have an American football team complete with pro stadium, cheerleaders and televised games, competing in the Japanese X-League as the "Big Blue".
In 2015, IBM started giving employees the option of choosing Mac as their primary work device, next to the option of a PC or a Linux distribution. In 2016, IBM eliminated forced rankings and changed its annual performance review system to focus more on frequent feedback, coaching, and skills development.
IBM alumni
Many IBM employees have achieved notability outside of work and after leaving IBM. In business, former IBM employees include:
Apple Inc. CEO Tim Cook
former EDS CEO and politician Ross Perot
Former Microsoft chairman John W. Thompson
SAP co-founder Hasso Plattner
Gartner founder Gideon Gartner
Advanced Micro Devices (AMD) CEO Lisa Su
Cadence Design Systems CEO Anirudh Devgan
former Citizens Financial Group CEO Ellen Alemany
former Yahoo! chairman Alfred Amoroso
former AT&T CEO C. Michael Armstrong
former Xerox Corporation CEOs David T. Kearns and G. Richard Thoman
former Fair Isaac Corporation CEO Mark N. Greene
Citrix Systems co-founder Ed Iacobucci
former Lenovo CEO Steve Ward
former Teradata CEO Kenneth Simonds
In government, former IBM employees include:
Patricia Roberts Harris (United States Secretary of Housing and Urban Development)
Samuel K. Skinner (U.S. Secretary of Transportation and as the White House Chief of Staff)
Mack Mattingly (Diplomat)
Thom Tillis (American politician)
Scott Walker (Former Governor of Wisconsin)
Arthur K. Watson (Former diplomat)
Todd Akin (US politician)
Glenn Andrews (Former US representative from Alabama)
Robert Garcia (Former US representative)
Katherine Harris (Former US politician)
Amo Houghton (US politician)
Jim Ross Lightfoot (Former member of the US House of Representatives)
Thomas J. Manton (US politician)
Donald W. Riegle Jr. (Former US senator)
Ed Zschau (US politician)
Other former IBM employees include:
NASA astronaut Michael J. Massimino
Canadian astronaut and former Governor General Julie Payette
musician Dave Matthews
Harvey Mudd College president Maria Klawe
Western Governors University president emeritus Robert Mendenhall
former University of Kentucky president Lee T. Todd Jr.
NFL referee Bill Carollo
former Rangers F.C. chairman John McClelland
recipient of the Nobel Prize in Literature J. M. Coetzee
Board and shareholders
The company's 15-member board of directors is responsible for overall corporate management and includes the current or former CEOs of Anthem, Dow Chemical, Johnson and Johnson, Royal Dutch Shell, UPS, and Vanguard, as well as the president of Cornell University and a retired U.S. Navy admiral.
In 2011, IBM became the first technology company Warren Buffett's holding company Berkshire Hathaway invested in. Initially he bought 64 million shares costing $10.5 billion. Over the years, Buffett increased his IBM holdings, but by the end of 2017 had reduced them by 94.5% to 2.05 million shares; by May 2018, he was completely out of IBM.
See also
IBM SkillsBuild
List of electronics brands
List of largest Internet companies
List of largest manufacturing companies by revenue
Tech companies in the New York City metropolitan region
Top 100 US Federal Contractors
References
Further reading
External links
Official website
IBM companies grouped at OpenCorporates
Business data for IBM
budget
A budget is a calculation plan, usually but not always financial, for a defined period, often one year or a month. A budget may include anticipated sales volumes and revenues, resource quantities including time, costs and expenses, environmental impacts such as greenhouse gas emissions, other impacts, assets, liabilities and cash flows. Companies, governments, families, and other organizations use budgets to express strategic plans of activities in measurable terms.
A budget expresses intended expenditures along with proposals for how to meet them with resources. A budget may express a surplus, providing resources for use at a future time, or a deficit in which expenditures exceed income or other resources.
Government
The budget of a government is a summary or plan of the anticipated resources (often but not always from taxes) and expenditures of that government. There are three types of government budgets: the operating or current budget, the capital or investment budget, and the cash or cash flow budget.
By country
United States
The federal budget is prepared by the Office of Management and Budget, and submitted to Congress for consideration. Invariably, Congress makes many and substantial changes. Nearly all American states are required to have balanced budgets, but the federal government is allowed to run deficits.
India
The budget is prepared annually by the Budget Division of the Department of Economic Affairs in the Ministry of Finance. The Finance Minister heads the budget-making committee; the present Indian Finance Minister is Nirmala Sitharaman. The Budget also covers supplementary and excess grants and, when a proclamation by the President as to the failure of constitutional machinery is in operation in relation to a State or a Union Territory, the preparation of the Budget of that State.
The first budget of India was submitted on 18 February 1860 by James Wilson.
P. C. Mahalanobis is known as the father of the Indian budget.
Iran
The 2022–23 Iranian national budget is the latest one. Documents related to the budget program are not released.
Philippines
The Philippine budget is considered the most complicated in the world, incorporating multiple approaches in one single budget system: line-item (budget execution), performance (budget accountability), and zero-based budgeting. The Department of Budget and Management (DBM) prepares the National Expenditure Program and forwards it to the Committee on Appropriations of the House of Representatives to come up with a General Appropriations Bill (GAB). The GAB will go through budget deliberations and voting; the same process occurs when the GAB is transmitted to the Philippine Senate.
After both houses of Congress approve the GAB, the President signs the bill into a General Appropriations Act (GAA); alternatively, the President may opt to veto the GAB and have it returned to the legislative branch, or leave the bill unsigned for 30 days so that it lapses into law. There are two types of budget bill veto: the line-item veto and the veto of the whole budget.
Personal
A personal budget or home budget is a finance plan that allocates future personal income towards expenses, savings and debt repayment. Past spending and personal debt are considered when creating a personal budget. There are several methods and tools available for creating, using, and adjusting a personal budget. For example, jobs are an income source, while bills and rent payments are expenses. A third category (other than income and expenses) may be assets (such as property, investments, or other savings or value) representing a potential reserve for funds in case of budget shortfalls.
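As a rough illustration of the allocation described above, here is a minimal Python sketch of a monthly personal budget; the income figure and spending categories are invented for the example and are not a recommended plan.
```python
# Hypothetical personal budget sketch: allocate expected monthly income across
# expenses, savings and debt repayment, then report the surplus or deficit.

monthly_income = 3000.00  # assumed figure, for illustration only

planned = {
    "rent": 950.00,
    "utilities": 120.00,
    "groceries": 350.00,
    "transport": 110.00,
    "savings": 400.00,
    "debt_repayment": 250.00,
}

total_planned = sum(planned.values())
balance = monthly_income - total_planned  # negative means planned spending exceeds income

for category, amount in planned.items():
    print(f"{category:15s} {amount:8.2f}")
print(f"{'total planned':15s} {total_planned:8.2f}")
print(f"{'surplus/deficit':15s} {balance:8.2f}")
```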
Corporate budget
The budget of a business, division, or corporation is a financial forecast for the near-term future, aggregating the expected revenues and expenses of the various departments – operations, human resources, IT, etc. It is thus a key element in integrated business planning, with measurable targets correspondingly devolved to departmental managers (and becoming KPIs); budgets can then also refer to non-cash resources, such as staff or time.
The budgeting process typically requires considerable effort, often involving dozens of staff; final sign-off resides with both the financial director and operations director. The budget is typically compiled on an annual basis – although, e.g. in mining, this may be quarterly – while the monitoring is ongoing; see Financial risk management § Corporate finance. Here, if the actual figures delivered come close to those budgeted, this suggests that managers understand their business and have been successful in delivering. On the other hand, if the figures diverge, this sends an "out of control" signal; additionally, the share price could suffer where these figures have been communicated to analysts.
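A minimal sketch of the budget-versus-actual comparison described above, in Python; the departments, the figures and the 10% tolerance are invented for illustration rather than drawn from any real monitoring process.
```python
# Hypothetical budget-vs-actual variance check for a few departments.
# A variance beyond the tolerance is flagged as an "out of control" signal.

TOLERANCE = 0.10  # assumed 10% threshold, not a standard value

budgeted = {"operations": 1_200_000, "human_resources": 300_000, "it": 450_000}
actual   = {"operations": 1_310_000, "human_resources": 295_000, "it": 610_000}

for dept, planned in budgeted.items():
    spent = actual[dept]
    variance = (spent - planned) / planned
    status = "out of control" if abs(variance) > TOLERANCE else "on track"
    print(f"{dept:16s} budget={planned:>10,} actual={spent:>10,} "
          f"variance={variance:+.1%} -> {status}")
```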
Criticism is sometimes directed at the nature of budgeting and its impact on the organization. In addition to the cost in time and resources, two phenomena are identified as problematic. First, it is suggested that managers will often "game the system" by specifying targets that are easily attainable, and/or by asking for more resources than required, such that the required resources will be budgeted as a compromise. Second, managers' thinking may emphasize short-term, operational thinking at the expense of a long-term, strategic perspective; for the relationship with strategy, see Strategic planning § Strategic planning vs. financial planning.
Professionals employed in this area are often designated "Budget Analyst", a specialized financial analyst role. This usually sits within the company's financial management area in general and sometimes, specifically, in "FP&A" (financial planning and analysis).
Types of budgets
Sales budget – an estimate of future sales, often broken down into both units and currency. It is used to create company and sales goals.
Production budget – an estimate of the number of units that must be manufactured to meet the sales goals. The production budget also estimates the various costs involved with manufacturing those units, including labor and material. It is created by product-oriented companies.
Capital budget – used to determine whether an organization's long-term investments, such as new machinery, replacement machinery, new plants, new products, and research and development projects, are worth pursuing.
Cash flow/cash budget – a prediction of future cash receipts and expenditures for a particular time period. It usually covers a period in the short-term future. The cash flow budget helps the business to determine when income will be sufficient to cover expenses and when the company will need to seek outside financing (a minimal worked sketch of this calculation follows the list).
Conditional budgeting is a budgeting approach designed for companies with fluctuating income, high fixed costs, or income depending on sunk costs, as well as NPOs and NGOs.
Marketing budget – an estimate of the funds needed for promotion, advertising, and public relations in order to market the product or service.
Project budget – a prediction of the costs associated with a particular company project. These costs include labour, materials, and other related expenses. The project budget is often broken down into specific tasks, with task budgets assigned to each. A cost estimate is used to establish a project budget.
Revenue budget – consists of revenue receipts of government and the expenditure met from these revenues. Revenues are made up of taxes and other duties that the government levies. Various countries and unions have created four types of tax jurisdictions: interstate, state, local and tax jurisdictions with a special status (Free-trade zones). Each of them provides a money flow to the corresponding revenue budget levels.
Expenditure budget – includes spending data items.
Flexibility budget – established for fixed costs, with a variable rate determined per activity measure for variable costs.
Appropriation budget – a maximum amount is established for certain expenditure based on management judgement.
Performance budget – mostly used by organizations and ministries involved in development activities. This budgeting process takes the end results into account.
Zero-based budget – a budget type where every item added to the budget needs approval and no items are carried forward from the prior year's budget. This type of budget has a clear advantage when limited resources are to be allocated carefully and objectively. Zero-based budgeting takes more time to create, as all pieces of the budget need to be reviewed by management.
Personal budget – a budget type focusing on expenses for self or for home, usually balancing personal income against planned expenses.
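As promised under the cash flow/cash budget item above, a minimal Python sketch of the underlying calculation: projecting monthly receipts and payments shows when the cumulative cash position turns negative, i.e. when outside financing would be needed. The opening cash, receipts and payments are invented figures.
```python
# Hypothetical monthly cash budget: receipts minus payments, tracked cumulatively.
# A negative running balance marks the months in which outside financing is needed.

opening_cash = 5_000
receipts = [12_000, 9_000, 7_500, 14_000, 11_000, 16_000]   # projected cash in
payments = [10_000, 11_500, 13_000, 12_500, 10_000, 11_000]  # projected cash out

balance = opening_cash
for month, (cash_in, cash_out) in enumerate(zip(receipts, payments), start=1):
    balance += cash_in - cash_out
    note = "needs financing" if balance < 0 else "covered"
    print(f"month {month}: net {cash_in - cash_out:+7,d}  balance {balance:8,d}  {note}")
```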
References
External links
The dictionary definition of budget at Wiktionary
Media related to Budget at Wikimedia Commons
Quotations related to Budget at Wikiquote
Origin of the word
food
Food is any substance consumed by an organism for nutritional support. Food is usually of plant, animal, or fungal origin and contains essential nutrients such as carbohydrates, fats, proteins, vitamins, or minerals. The substance is ingested by an organism and assimilated by the organism's cells to provide energy, maintain life, or stimulate growth. Different species of animals have different feeding behaviours that satisfy the needs of their metabolisms and have evolved to fill a specific ecological niche within specific geographical contexts.
Omnivorous humans are highly adaptable and have adapted to obtain food in many different ecosystems. Humans generally use cooking to prepare food for consumption. The majority of the food energy required is supplied by the industrial food industry, which produces food through intensive agriculture and distributes it through complex food processing and food distribution systems. This system of conventional agriculture relies heavily on fossil fuels, which means that the food and agricultural systems are one of the major contributors to climate change, accounting for as much as 37% of total greenhouse gas emissions.
The food system has significant impacts on a wide range of other social and political issues, including sustainability, biological diversity, economics, population growth, water supply, and food security. Food safety and security are monitored by international agencies like the International Association for Food Protection, the World Resources Institute, the World Food Programme, the Food and Agriculture Organization, and the International Food Information Council.
Definition and classification
Food is any substance consumed to provide nutritional support and energy to an organism. It can be raw, processed, or formulated and is consumed orally by animals for growth, health, or pleasure. Food is mainly composed of water, lipids, proteins, and carbohydrates. Minerals (e.g., salts) and organic substances (e.g., vitamins) can also be found in food. Plants, algae, and some microorganisms use photosynthesis to make some of their own nutrients. Water is found in many foods and has been defined as a food by itself. Water and fiber have low energy densities, or calories, while fat is the most energy-dense component. Some inorganic (non-food) elements are also essential for plant and animal functioning.
Human food can be classified in various ways, either by related content or by how it is processed. The number and composition of food groups can vary. Most systems include four basic groups that describe their origin and relative nutritional function: Vegetables and Fruit, Cereals and Bread, Dairy, and Meat. Studies that look into diet quality group food into whole grains/cereals, refined grains/cereals, vegetables, fruits, nuts, legumes, eggs, dairy products, fish, red meat, processed meat, and sugar-sweetened beverages. The Food and Agriculture Organization and World Health Organization use a system with nineteen food classifications: cereals, roots, pulses and nuts, milk, eggs, fish and shellfish, meat, insects, vegetables, fruits, fats and oils, sweets and sugars, spices and condiments, beverages, foods for nutritional uses, food additives, composite dishes and savoury snacks.
Food sources
In a given ecosystem, food forms a web of interlocking chains with primary producers at the bottom and apex predators at the top. Other aspects of the web include detritivores (that eat detritus) and decomposers (that break down dead organisms). Primary producers include algae, plants, bacteria and protists that acquire their energy from sunlight. Primary consumers are the herbivores that consume the plants, and secondary consumers are the carnivores that consume those herbivores. Some organisms, including most mammals and birds, have diets consisting of both animals and plants, and are considered omnivores. The chain ends with the apex predators, the animals that have no known predators in their ecosystem. Humans are considered apex predators.
Humans are omnivores, finding sustenance in vegetables, fruits, cooked meat, milk, eggs, mushrooms and seaweed. Cereal grain is a staple food that provides more food energy worldwide than any other type of crop. Corn (maize), wheat, and rice account for 87% of all grain production worldwide. Just over half of the world's crops are used to feed humans (55 percent), with 36 percent grown as animal feed and 9 percent for biofuels. Fungi and bacteria are also used in the preparation of fermented foods like bread, wine, cheese and yogurt.
Bacteria
Without bacteria, life would scarcely exist, because bacteria convert atmospheric nitrogen into nutritious ammonia. Ammonia is the precursor to proteins, nucleic acids, and most vitamins. Since the advent of the industrial process for nitrogen fixation, the Haber–Bosch process, the majority of ammonia in the world is human-made.
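For reference, the overall chemistry behind both routes can be written compactly; the biological equation below is a simplified form of the nitrogenase reaction (omitting the ATP consumed), so treat it as a sketch rather than a full mechanism.
```latex
% Biological nitrogen fixation (nitrogenase), simplified overall reaction,
% omitting the ATP consumed:
\[ \mathrm{N_2 + 8\,H^+ + 8\,e^- \;\longrightarrow\; 2\,NH_3 + H_2} \]
% Industrial fixation via the Haber--Bosch process (iron catalyst, high
% temperature and pressure):
\[ \mathrm{N_2 + 3\,H_2 \;\longrightarrow\; 2\,NH_3} \]
```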
Plants
Photosynthesis is the source of most energy and food for nearly all life on earth. Photosynthesis is one main source of biomass, the food for plants, algae and certain bacteria and, indirectly, organisms higher in the food chain. Energy from the sun is absorbed and used to transform water and carbon dioxide in the air or soil into oxygen and glucose. The oxygen is then released, and the glucose stored as an energy reserve.
Plants also absorb important nutrients and minerals from the air, natural waters, and soil. Carbon, oxygen and hydrogen are absorbed from the air or water and are the basic nutrients needed for plant survival. The three main nutrients absorbed from the soil for plant growth are nitrogen, phosphorus and potassium, with other important nutrients including calcium, sulfur, magnesium, iron, boron, chlorine, manganese, zinc, copper, molybdenum and nickel.
Plants as a food source are divided into seeds, fruits, vegetables, legumes, grains and nuts. Where plants fall within these categories can vary, with botanically described fruits such as the tomato, squash, pepper and eggplant or seeds like peas commonly considered vegetables. Food is a fruit if the part eaten is derived from the reproductive tissue, so seeds, nuts and grains are technically fruit. From a culinary perspective, fruits are generally considered the remains of botanically described fruits after grains, nuts, seeds and fruits used as vegetables are removed. Grains can be defined as seeds that humans eat or harvest, with cereal grains (oats, wheat, rice, corn, barley, rye, sorghum and millet) belonging to the Poaceae (grass) family and pulses coming from the Fabaceae (legume) family. Whole grains are foods that contain all the elements of the original seed (bran, germ, and endosperm). Nuts are dry fruits, distinguishable by their woody shell.
Fleshy fruits (distinguishable from dry fruits like grain, seeds and nuts) can be further classified as stone fruits (cherries and peaches), pome fruits (apples, pears), berries (blackberry, strawberry), citrus (oranges, lemon), melons (watermelon, cantaloupe), Mediterranean fruits (grapes, fig), and tropical fruits (banana, pineapple). Vegetables refer to any other part of the plant that can be eaten, including roots, stems, leaves, flowers, bark or the entire plant itself. These include root vegetables (potatoes and carrots), bulbs (onion family), flowers (cauliflower and broccoli), leaf vegetables (spinach and lettuce) and stem vegetables (celery and asparagus).
The carbohydrate, protein and lipid content of plants is highly variable. Carbohydrates are mainly in the form of starch, fructose, glucose and other sugars. Most vitamins are found from plant sources, with the exceptions of vitamin D and vitamin B12. Minerals can also be plentiful or not. Fruit can consist of up to 90% water, contain high levels of simple sugars that contribute to their sweet taste, and have a high vitamin C content. Compared to fleshy fruit (excepting bananas), vegetables are high in starch, potassium, dietary fiber, folate and vitamins and low in fat and calories. Grains are more starch-based, and nuts have a high protein, fibre, vitamin E and B content. Seeds are a good source of food for animals because they are abundant and contain fibre and healthful fats, such as omega-3 fats. Complicated chemical interactions can enhance or depress bioavailability of certain nutrients.
Phytates can prevent the release of some sugars and vitamins.
Animals that only eat plants are called herbivores; those that mostly eat fruits are known as frugivores, leaf and shoot eaters are folivores (pandas), and wood eaters are termed xylophages (termites). Frugivores include a diverse range of species, from annelids to elephants, chimpanzees and many birds. About 182 fish species consume seeds or fruit. Domesticated and wild animals use many types of grasses, adapted to different locations, as their main source of nutrients.
Humans eat thousands of plant species; there may be as many as 75,000 edible species of angiosperms, of which perhaps 7,000 are often eaten. Plants can be processed into breads, pasta, cereals, juices and jams, or raw ingredients such as sugar, herbs, spices and oils can be extracted. Oilseeds are pressed to produce rich oils – sunflower, flaxseed, rapeseed (including canola oil) and sesame.
Many plants and animals have coevolved in such a way that the fruit is a good source of nutrition to the animal, which then excretes the seeds some distance away, allowing greater dispersal. Even seed predation can be mutually beneficial, as some seeds can survive the digestion process. Insects are major eaters of seeds, with ants being the only real seed dispersers. Birds, although major dispersers, only rarely eat seeds as a source of food and can be identified by their thick beaks, used to crack open the seed coat. Mammals eat a more diverse range of seeds, as they are able to crush harder and larger seeds with their teeth.
Animals
Animals are used as food either directly or indirectly. This includes meat, eggs, shellfish and dairy products like milk and cheese. They are an important source of protein and are considered complete proteins for human consumption as they contain all the essential amino acids that the human body needs. One 4-ounce (110 g) steak, chicken breast or pork chop contains about 30 grams of protein; one large egg has 7 grams; a 4-ounce (110 g) serving of cheese has about 15 grams; and 1 cup of milk has about 8 grams. Other nutrients found in animal products include calories, fat, essential vitamins (including B12) and minerals (including zinc, iron, calcium, magnesium).
Food products produced by animals include milk produced by mammary glands, which in many cultures is drunk or processed into dairy products (cheese, butter, etc.). Eggs laid by birds and other animals are eaten, and bees produce honey, a reduced nectar from flowers that is used as a popular sweetener in many cultures. Some cultures consume blood, such as in blood sausage, as a thickener for sauces, or in a cured, salted form for times of food scarcity, and others use blood in stews such as jugged hare.
Taste
Animals, humans in particular, typically have five different types of tastes: sweet, sour, salty, bitter, and umami. The differing tastes are important for distinguishing between foods that are nutritionally beneficial and those which may contain harmful toxins. As animals have evolved, the tastes that provide the most energy have become the most pleasant to eat, while others are not enjoyable, although humans in particular can acquire a preference for some substances which are initially unenjoyable. Water, while important for survival, has no taste.
Sweetness is almost always caused by a type of simple sugar such as glucose or fructose, or disaccharides such as sucrose, a molecule combining glucose and fructose. Sourness is caused by acids, such as the acetic acid in vinegar. Sour foods include citrus, specifically lemons and limes. Sourness is evolutionarily significant as it can signal a food that may have gone rancid due to bacteria. Saltiness is the taste of alkali metal ions such as sodium and potassium. It is found in almost every food in low to moderate proportions to enhance flavor. Bitterness is a sensation often considered unpleasant, characterised by a sharp, pungent taste. Unsweetened dark chocolate, caffeine, lemon rind, and some types of fruit are known to be bitter. Umami, commonly described as savory, is a marker of proteins and is characteristic of broths and cooked meats. Foods that have a strong umami flavor include cheese, meat and mushrooms.
While most animals' taste buds are located in their mouths, some insects' taste receptors are located on their legs, and some fish have taste buds along their entire body. Dogs, cats and birds have relatively few taste buds (chickens have about 30), adult humans have between 2,000 and 4,000, while catfish can have more than a million. Herbivores generally have more than carnivores, as they need to tell which plants may be poisonous. Not all mammals share the same tastes: some rodents can taste starch, cats cannot taste sweetness, and several carnivores (including hyenas, dolphins, and sea lions) have lost the ability to sense up to four of the five taste modalities found in humans.
Digestion
Food is broken into nutrient components through the digestive process. Proper digestion consists of mechanical processes (chewing, peristalsis) and chemical processes (digestive enzymes and microorganisms). The digestive systems of herbivores and carnivores are very different, as plant matter is harder to digest. Carnivores' mouths are designed for tearing and biting, compared to the grinding action found in herbivores. Herbivores, however, have comparatively longer digestive tracts and larger stomachs to aid in digesting the cellulose in plants.
See also
Food pairing
References
Further reading
Collingham, E.M. (2011). The Taste of War: World War Two and the Battle for Food
Katz, Solomon (2003). The Encyclopedia of Food and Culture, Scribner
Mobbs, Michael (2012). Sustainable Food Sydney: NewSouth Publishing, ISBN 978-1-920705-54-1
Nestle, Marion (2007). Food Politics: How the Food Industry Influences Nutrition and Health, University Presses of California, revised and expanded edition, ISBN 0-520-25403-1
The Future of Food (2015). A panel discussion at the 2015 Digital Life Design (DLD) Annual Conference. "How can we grow and enjoy food, closer to home, further into the future? MIT Media Lab's Kevin Slavin hosts a conversation with food artist, educator, and entrepreneur Emilie Baltz, professor Caleb Harper from MIT Media Lab's CityFarm project, the Barbarian Group's Benjamin Palmer, and Andras Forgacs, the co-founder and CEO of Modern Meadow, who is growing 'victimless' meat in a lab. The discussion addresses issues of sustainable urban farming, ecosystems, technology, food supply chains and their broad environmental and humanitarian implications, and how these changes in food production may change what people may find delicious ... and the other way around." Posted on the official YouTube Channel of DLD
External links
Cookbook – Wikibooks
Food, BBC Radio 4 discussion with Rebecca Spang, Ivan Day and Felipe Fernandez-Armesto (In Our Time, 27 December 2001)
Media related to food at Wikimedia Commons
Official website of Food Timeline
The dictionary definition of food at Wiktionary
british gas
British Gas (trading as Scottish Gas in Scotland) is an energy and home services provider in the United Kingdom. It is the trading name of British Gas Services Limited and British Gas New Heating Limited, both subsidiaries of Centrica. Serving around twelve million homes in the United Kingdom, British Gas is the biggest energy supplier in the country, and is considered one of the Big Six dominating the gas and electricity market in the United Kingdom.
In 2023, British Gas came under criticism after a series of media stories highlighting their poor customer service and practice of breaking into homes to force people onto pre-payment meters. After an internal enquiry, they announced they would no longer use the third-party contractor responsible.
In a 2023 review, the UK government regulator for the energy sector, Ofgem, found that British Gas had 'moderate weaknesses' in its customer service processes, alongside the majority of suppliers. They are described as the "worst firm to be moved to" by MoneySavingExpert.com after a 2022 survey.
History
1812–1948
The Gas Light and Coke Company was the first public utility company in the world. It was founded by Frederick Albert Winsor and incorporated by royal charter on 30 April 1812 under the seal of King George III.
It continued to thrive for the next 136 years, expanding into domestic services whilst absorbing many smaller companies, including the Aldgate Gas Light and Coke Company (1819), the City of London Gas Light and Coke Company (1870), the Equitable Gas Light Company (1871), the Great Central Gas Consumer's Company (1870), Victoria Docks Gas Company (1871), Western Gas Light Company (1873), Imperial Gas Light and Coke Company (1876), Independent Gas Light and Coke Company (1876), the London Gas Light Company (1883), Richmond Gas Company (1925), Brentford Gas Company (1926), Pinner Gas Company (1930) and Southend-on-Sea and District Gas Company (1932).
On 1 May 1949, the GLCC became the major part of the new North Thames Gas Board, one of Britain's twelve regional gas boards after the passing of the Gas Act 1948 by Clement Attlee's post-war Labour government.
1948–1973
In the beginning of the 1900s, the gas market in the United Kingdom was mainly run by county councils and small private firms. At this time the use of a flammable gas (often known as "town gas") piped to houses as a fuel was still being marketed to consumers, by such means as the National Gas Congress and Exhibition in 1913. The gas used in the 19th and early 20th centuries was coal gas, but in the period of 1967–77, British domestic coal gas supplies were replaced by natural gas.
In 1948, Clement Attlee's Labour government reshaped the gas industry, bringing in the Gas Act 1948. The act (on the vesting date of 1 April 1949) nationalised the gas industry in the United Kingdom and 1,062 privately owned and municipal gas companies were merged into twelve area gas boards, each a separate body with its own management structure.
The twelve gas boards were: Eastern, East Midlands, Northern, North Eastern, North Thames, North West, Scottish, Southern, South Eastern, South West, Wales, and West Midlands. Each area board was divided into geographical groups or divisions which were often further divided into smaller districts. These boards simply became known as the "gas board", a term still sometimes used when referring to British Gas.
In addition, the Gas Act established the Gas Council; its constitution was such that control lay effectively with the area boards. The council consisted of a chairman and deputy chairman, both appointed by the minister, and the chairmen of each of the twelve area boards. The council served as a channel of communication with the minister; undertook labour negotiations; undertook research; and acted as spokesperson for the gas industry generally.
The Gas Act 1965 shifted the balance of power to the centre: it put the Gas Council on the same footing as the area boards, with the powers to borrow up to £900 million, to manufacture or acquire gas and to supply gas in bulk to any area board. In May 1968, the Gas Council moved to large new offices at 59 Bryanston Street, Marble Arch, London.
1973–1986
In the beginning of the 1970s, the gas industry was again restructured after the Gas Act 1972 was passed. The act merged all the area boards, created the British Gas Corporation and abolished the Gas Council.
From its inception, the corporation was responsible for development and maintenance of the supply of gas to Great Britain, in addition to satisfying reasonable demand for gas throughout the country. Its leadership, like that of the area boards, was appointed and supervised by the Secretary of State for Trade and Industry until 1974, when those powers were vested in the newly created position of Secretary of State for Energy.
1986–1997
The Conservative Government, led by Prime Minister Margaret Thatcher, introduced the Gas Act 1986, which led to the privatisation of the company, and on 8 December 1986, its shares floated on the London stock market as British Gas plc. In the hope of encouraging individuals to become shareholders, the offer was advertised with the "If you see Sid... Tell him!" campaign.
The initial public offering of 135p per share valued the company at £9 billion.
1997–2020
In February 1997, eleven years after it had been privatised, British Gas plc demerged to become the entirely separate BG Group and the Gas Sales and Gas Trading, Services and Retail businesses.
The Gas Sales and Gas Trading and Services and Retail businesses, together with the gas production business of the North and South Morecambe gas fields, were transferred to Centrica, which continues to own and operate the British Gas retail brand.
British Gas acquired Dyno-Rod in October 2004. In April 2016, it was announced that 224,000 residential customers had left the company, citing customers coming to the end of their fixed deals and then moving on to other suppliers as the main reason for this loss.
In the same month (April 2016), British Gas also announced it would be closing a call centre and office in Oldbury (West Midlands), with a loss of approximately 680 jobs. In May 2018, Centrica announced that British Gas had lost 100,000 customers since the start of the year. However, the parent company was still likely to hit its targets for 2018, and pay dividends of 12p per share.
British Gas is now led by chief executive Sarwjit Sambhi, who oversees a business that provides energy and services to around ten million homes, and employs over 28,000 staff based across the United Kingdom. A further seven hundred job cuts in the United Kingdom were announced by Centrica in July 2019, amid growing marketplace challenges, which include the loss of 742,000 customers in 2018, and the government's price cap.
Vehicle fleet
In May 2007, British Gas signed a deal which saw 1,000 Volkswagen Caddy vans being supplied to the firm, which were fitted with a bespoke racking system and a speed limiter, designed by Siemens. The deal was renewed in September 2015.
In 2020, British Gas announced it would be introducing an all-electric fleet of vans, with all diesel vehicles to be replaced by 2025. The company is currently replacing diesel vehicles with the Vauxhall Vivaro-e.
2021
In April 2021, British Gas changed the contractual terms and conditions for thousands of its workers. Those who did not accept the changes by midday on 14 April 2021, were told to leave the firm. This resulted in a public outcry over the treatment of long-time workers, in particular over social media and with support from workers' unions and the opposition Labour Party.
Advertising, sponsorship and marketing
British Gas has actively been involved in sports sponsorship, including a six-year deal with the British swimming team which commenced in March 2009 and was expected to net the team £15 million. From 2006 to 2009, it sponsored the Southern Football League of England.
The company's extensive television advertising has featured many high-profile individuals, and in the beginning of the 1990s, one advertisement included Cheryl Tweedy as a small child, more than ten years before the beginning of her pop music career. In November 2012, the Information Commissioner's Office publicly listed British Gas as one of a number of companies that it had concerns about due to unsolicited telephone calls for marketing.
The concerns were based on complaints. In response, British Gas said that "We uphold the highest standards when contacting people in their homes, and only use contact information if we have express permission to do so."
In July 2014, regulator Ofgem reached an agreement with British Gas for the company to pay £1 million in compensation to hundreds of people who had been advised to switch from other suppliers to British Gas by British Gas advisers using exaggerated claims. On 20 September 2015, British Gas launched an advert including their new mascot, Wilbur the Penguin.
Distribution network operators
British Gas is a supplier of both gas and electricity for homes across the country. However, the infrastructure (pipes) which delivers the gas to consumers is owned and maintained by other companies. Similarly, the network of towers and cables that distributes their electricity is maintained by distribution network operators (DNOs), which vary from region to region, and not by British Gas. So, as with other electricity suppliers, if there is an electrical power outage, it is necessary to contact the appropriate DNO rather than British Gas or the customer's own electricity supplier.
See also
Centrica
Gas meter
References
Further reading
Brady, Robert A. (1950). Crisis in Britain: Plans and Achievements of the Labour Government. University of California Press. On nationalization 1945–1950: pp. 132–182
External links
Official website
Catalogue of the British Gas operational research archives, held at the Modern Records Centre, University of Warwick
Catalogue of the British Gas North West operational research reports, held at the Modern Records Centre, University of Warwick
the climate group
The Climate Group is a non-profit organisation that works with businesses and government leaders aiming to address climate change. The Group has programmes focusing on renewable energy and reducing greenhouse gas emissions.
Launched in 2004, the organisation operates globally with offices in the UK (headquarters), the United States and India. It acts as the secretariat for the Under2 Coalition, an alliance of state and regional governments around the world that are committed to reducing their greenhouse gas emissions to net-zero levels by 2050. As of 2022, the Under2 Coalition brings together over 270 governments representing 1.75 billion people and 50% of the world economy.
The organisation's business initiative, part of the We Mean Business Coalition, attempts to grow corporate demand for renewable energy, energy productivity and electric transport, aiming to accelerate the transition to a zero-emissions economy while helping businesses reduce their carbon footprints.
History
The Climate Group was initiated in 2003 and launched in 2004 by ex-CEO and co-founder Steve Howard, together with ex-Chief Operating Officer Jim Walker and former Communications Director Alison Lucas. It evolved from research led by the Rockefeller Brothers Fund and was established to encourage more major companies and sub-national governments to take action on climate change. To join, a company or government had to sign the organisation's leadership principles. Former UK prime minister Tony Blair has supported the group since its launch and has appeared at a number of the organisation's events.
The Climate Group's international network of States and Regions included a number of prominent leaders of sub-national governments that have been or are involved in its policy work in developing renewable energy and reducing greenhouse gas emissions. These include, or have included, Scottish First Minister Alex Salmond; Welsh First Minister Carwyn Jones; Prince Albert of Monaco; former Governor of California, Arnold Schwarzenegger; former Premier of Manitoba, Gary Doer; former Premier of Quebec, Jean Charest; former Premier of South Australia, Mike Rann; and President of Poitou-Charentes, Ségolène Royal. In successive years, Schwarzenegger, Charest and Salmond each received The Climate Group's international climate leadership award from co-chair Mike Rann.
In 2011, Mark Kenber, previously deputy CEO, took over from Steve Howard as CEO. He resigned from the post in 2016.
In 2017, Helen Clarkson became CEO.
Funding
The Climate Group states that it functions independently of any corporate and government entities. It funds its work from a variety of revenue streams. The organisation's 2004 launch was supported primarily by philanthropic organisations, including the Rockefeller Brothers Fund, the DOEN Foundation, the John D. and Catherine T. MacArthur Foundation, and the Esmée Fairbairn Foundation. The organisation's 2007–2008 annual report indicated that over 75% of its funding at the time was from philanthropic donations, foundations and other non-governmental organisations, as well as from the now-discontinued philanthropic HSBC Climate Partnership.
HSBC Climate Partnership
In 2007, HSBC announced that The Climate Group, along with WWF, Earthwatch, and the Smithsonian Tropical Research Institute, would be a partner in the HSBC Climate Partnership, and donated US$100 million to fund joint work – the largest-ever single corporate philanthropic donation to the environment. The results of this programme can be seen in HSBC's 2010 Partnership Review, and HSBC's Clean Cities film of December 2010. The Clean Cities film specifically outlines some of The Climate Group's achievements enabled by this programme, including LED pilots in New York, clean technology finance in Mumbai, consumer campaigns in London, and cutting employee carbon footprints in Hong Kong.
References
External links
Official website
regulation and monitoring of pollution
To protect the environment from the adverse effects of pollution, many nations worldwide have enacted legislation to regulate various types of pollution as well as to mitigate the adverse effects of pollution. At the local level, regulation usually is supervised by environmental agencies or the broader public health system. Different jurisdictions often have different levels of regulation and policy choices about pollution. Historically, polluters have lobbied governments in less economically developed areas or countries to maintain lax regulation in order to protect industrialisation at the cost of human and environmental health.
The modern environmental regulatory environment has its origins in the United States, with the beginning of industrial regulations around air and water pollution connected to industry and mining during the 1960s and 1970s.
Because many pollutants have trans-boundary impacts, the UN and other treaty bodies have been used to regulate pollutants that circulate as air pollution, water pollution or trade in wastes. Early international agreements were successful at addressing global environmental issues, such as the Montreal Protocol, which banned ozone-depleting chemicals in 1987, with more recent agreements focusing on broader, more widely dispersed chemicals, such as the persistent organic pollutants (for example PCBs) covered by the Stockholm Convention on Persistent Organic Pollutants created in 2001, and the Kyoto Protocol of 1997, which initiated collaboration on addressing greenhouse gases to mitigate climate change.
Regulation and monitoring by region
International
Since pollution crosses political boundaries, international treaties have been made through the United Nations and its agencies to address international pollution issues.
Greenhouse gas emissions
The Kyoto Protocol is an amendment to the United Nations Framework Convention on Climate Change (UNFCCC), an international treaty on global warming. It also reaffirms sections of the UNFCCC. Countries which ratify this protocol commit to reduce their emissions of carbon dioxide and five other greenhouse gases, or engage in emissions trading if they maintain or increase emissions of these gases. A total of 141 countries have ratified the agreement. Notable exceptions include the United States and Australia, who have signed but not ratified the agreement. The stated reason for the United States not ratifying is the exemption of large emitters of greenhouse gases who are also developing countries, like China and India.
A UN environmental conference held in Bali on 3–14 December 2007, with participation from 180 countries, aimed to agree a successor to the Kyoto Protocol, which was due to end in 2012. During the first day of the conference, the United States, Saudi Arabia and Canada were presented with the "Fossil-of-the-day award", a symbolic bag of coal for their negative impact on the global climate. The bags included the flags of the respective countries.
Toxic wastes
Canada
In Canada the regulation of pollution and its effects are monitored by a number of organizations depending on the nature of the pollution and its location. The three levels of government (Federal – Canada Wide; Provincial; and Municipal) equally share in the responsibilities, and in the monitoring and correction of pollution.
China
China's rapid industrialization has substantially increased pollution. China has some relevant regulations: the 1979 Environmental Protection Law, which was largely modeled on U.S. legislation, but the environment continues to deteriorate. Twelve years after the law, only one Chinese city was making an effort to clean up its water discharges. This indicates that China is about 30 years behind the U.S. schedule of environmental regulation and 10 to 20 years behind Europe. In July 2007, it was reported that the World Bank reluctantly censored a report revealing that 750,000 people in China die every year as a result of pollution-related diseases. China's State Environment Protection Agency and the Health Ministry asked the World Bank to cut the calculations of premature deaths from the report fearing the revelation would provoke "social unrest".
Europe
The basic European rules are included in the Directive 96/61/EC of 24 September 1996 concerning integrated pollution prevention and control (IPPC) and the National Emission Ceilings Directive.
United Kingdom
In the 1840s, the United Kingdom brought onto the statute books legislation to control water pollution; this was strengthened in 1876 in the Rivers Pollution Prevention Act and was subsequently extended to all freshwaters in the Rivers (Prevention of Pollution) Act 1951 and applied to coastal waters by the Rivers (Prevention of Pollution) Act 1961.
The Environmental Protection Act of 1990 established the system of Integrated Pollution Control (IPC). Currently the clean-up of historic contamination is controlled under a specific statutory scheme found in Part IIA of the Environmental Protection Act 1990 (Part IIA), as inserted by the Environment Act 1995, and other 'rules' found in regulations and statutory guidance. The regime came into force in England in April 2000.
Within the current regulatory framework, Pollution Prevention and Control (PPC) is a regime for controlling pollution from certain designated industrial activities. The regime introduces the concept of Best Available Techniques (BAT) to environmental regulations. Operators must use the BAT to control pollution from their industrial activities, to prevent and, where that is not practicable, to reduce to acceptable levels pollution to air, land and water. The Best Available Techniques also aim to balance the cost to the operator against the benefits to the environment.
The system of Pollution Prevention and Control is replacing that of IPC and has been taking effect between 2000 and 2007. The Pollution Prevention and Control regime implements the European Directive (EC/96/61) on integrated pollution prevention and control.
United States
Air pollution
The United States Congress passed the Clean Air Act in 1963 to legislate the reduction of smog and atmospheric pollution in general. That legislation has subsequently been amended and extended in 1966, 1970, 1977 and 1990. AP 42, the Compilation of Air Pollutant Emission Factors, was first published in 1968. Numerous state and local governments have enacted similar legislation, either implementing or filling in locally important gaps in the national program. The national Clean Air Act and similar state legislative acts have led to the widespread use of atmospheric dispersion modeling in order to analyze the air quality impacts of proposed major actions. With the 1990 Clean Air Act, the United States Environmental Protection Agency (EPA) began a controversial emissions trading system in which tradable rights to emit specified levels of a pollutant are granted to polluters.
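A common textbook form of the dispersion modeling mentioned above is the Gaussian plume equation; the sketch below evaluates it at a single receptor, with the emission rate, wind speed, stack height and dispersion coefficients all chosen arbitrarily for illustration (real assessments take the sigma values from stability-class tables or use regulatory models rather than this hand-rolled function).
```python
import math

def gaussian_plume(Q, u, sigma_y, sigma_z, y, z, H):
    """Concentration (g/m^3) downwind of a continuous point source.

    Q: emission rate (g/s); u: wind speed (m/s); H: effective stack height (m);
    sigma_y, sigma_z: horizontal and vertical dispersion coefficients (m) at the
    downwind distance of interest; y: crosswind offset (m); z: receptor height (m).
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground-reflection term
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers only: a 100 g/s source, 5 m/s wind, 50 m effective stack
# height, and sigma_y = 80 m, sigma_z = 40 m at the chosen downwind distance.
c = gaussian_plume(Q=100, u=5, sigma_y=80, sigma_z=40, y=0, z=1.5, H=50)
print(f"concentration at receptor: {c * 1e6:.0f} micrograms per cubic metre")
```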
Water pollution
Enactment of the 1972 Clean Water Act (CWA) required thousands of facilities to obtain permits for discharges to navigable waters, through the National Pollutant Discharge Elimination System (NPDES). It also required EPA to establish national technology-based discharge standards for municipal sewage treatment plants, and for many industrial categories (the latter are called "effluent guidelines"). Municipal and industrial permittees are required to regularly collect and analyze wastewater samples, and submit Discharge Monitoring Reports to a state agency or EPA. Amendments in 1977 required stricter regulation of toxic pollutants. In 1987 Congress expanded NPDES permit coverage to include municipal and industrial stormwater discharges. The Act also requires use of best management practices for a wide range of other water discharges including nonpoint source pollution. Thermal pollution discharges are regulated under section 316(a) of the CWA. NPDES permits include effluent limitations on water temperature to protect the biotic life supported by a water body. A permittee may request a variance to the typical thermal limitations. Alternate limitations may be issued in limited circumstances, if the permittee has provided sufficient proof through data submission that aquatic life in the water body will be protected.
In addition to wastewater discharge monitoring, EPA works with federal, state and local environmental agencies to conduct ambient water monitoring programs in water bodies nationwide. The CWA requires EPA and the states to prepare reports to Congress on the condition of the nation's waters. Ambient water quality data collected by EPA, the US Geological Survey and other organizations are available to the public in several online databases.
Land pollution
Congress passed the Resource Conservation and Recovery Act (RCRA) in 1976, which created a regulatory framework for both municipal solid waste and hazardous waste disposed on land. RCRA requires that all hazardous wastes be managed and tracked from generation of the waste, through transport and processing, to final disposal, by means of a nationwide permit system. The Hazardous and Solid Waste Amendments of 1984 mandated regulation of underground storage tanks containing petroleum and hazardous chemicals, and the phasing out of land disposal of hazardous waste. The Federal Facilities Compliance Act, passed in 1992, clarified RCRA coverage of federally owned properties such as military bases. Illegal disposal of waste is punishable by fines of up to $25,000 per occurrence. Alongside municipal and hazardous waste, the EPA is in charge of soil conservation. The EPA, often with the help of state partners, manages soil contamination at contaminated sites and facilities. An annual Report on the Environment and a Toxics Release Inventory are produced as a result of these efforts.
To specifically mitigate soil pollution from fertilizers, the USDA's Natural Resources Conservation Service (NRCS), National Institute of Food and Agriculture (NIFA), and Agricultural Research Service (ARS) monitor soil resources and provide guidelines to prevent nutrient loss.
Noise pollution
Passage of the Noise Control Act in 1972 established mechanisms of setting emission standards for virtually every source of noise including motor vehicles, aircraft, certain types of HVAC equipment and major appliances. It also put local government on notice as to their responsibilities in land use planning to address noise mitigation. This noise regulation framework comprised a broad data base detailing the extent of noise health effects. Congress ended funding of the federal noise control program in 1981, which curtailed development of further national regulations.
Light pollution
Light pollution in the United States is not federally regulated. The Environmental Protection Agency (EPA), in charge of most environmental regulations, does not manage light pollution. Eighteen states and one territory have implemented laws that regulate light pollution to some extent. State legislation includes restrictions on hardware, protective equipment, and net light pollution ratings; such legislation has been coined "Dark Skies" legislation. States have implemented light pollution regulation for many reasons, including public safety, energy conservation, improved astronomy research, and reduced environmental effects.
State programs
The state of California Office of Environmental Health Hazard Assessment (OEHHA) has maintained an independent list of substances with product labeling requirements as part of Proposition 65 since 1986.
Research
The Toxicology and Environmental Health Information Program (TEHIP) at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to resources produced by TEHIP and by other government agencies and organizations. This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. TEHIP also is responsible for the Toxicology Data Network (TOXNET) an integrated system of toxicology and environmental health databases that are available free of charge on the web.
TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs.
See also
Greenhouse gas monitoring
List of environmental issues
Dutch standards, environmental pollutant reference values
Timeline of major US environmental and occupational health regulation
References
External links
Environment Agency (England and Wales)
Environmental Assessment Agency - Canada
Environmental Protection Agency - USA
Extoxnet newsletters - environmental pollution news. Last update 1998.
climate-friendly gardening | Climate-friendly gardening is a form of gardening that can reduce emissions of greenhouse gases from gardens and encourage the absorption of carbon dioxide by soils and plants in order to aid the reduction of global warming.
To be a climate-friendly gardener means considering both what happens in a garden and the materials brought into it and the impact they have on land use and climate.
It can also include garden features or activities in the garden that help to reduce greenhouse gas emissions elsewhere.
Land use and greenhouse gases
Most of the excess greenhouse gases causing climate change have come from burning fossil fuels. But a special report from the Intergovernmental Panel on Climate Change (IPCC) estimated that, over the last 150 years, fossil fuels and cement production have been responsible for only about two-thirds of climate change: the other third has been caused by human land use. The three main greenhouse gases produced by unsustainable land use are carbon dioxide, methane, and nitrous oxide. Black carbon, or soot, can also be produced by unsustainable land use and, although not a gas, behaves like a greenhouse gas and contributes to climate change.
Carbon dioxide
Carbon dioxide, CO2, is a natural part of the carbon cycle, but human land uses often add more, especially from habitat destruction and the cultivation of soil. When woodlands, wetlands, and other natural habitats are turned into pasture, arable fields, buildings and roads, the carbon held in the soil and vegetation is released as extra carbon dioxide and methane, which trap more heat in the atmosphere. Gardeners may cause extra carbon dioxide to be added to the atmosphere in several ways:
Using peat or potting compost containing peat;
Buying garden furniture or other wooden products made from woodland which has been destroyed rather than taken as a renewable crop from sustainably managed woodland;
Digging the soil and leaving it bare so that the carbon in soil organic matter is oxidised;
Using power tools which burn fossil fuel or electricity generated by burning fossil fuel;
Using patio heaters;
Heating greenhouses by burning fossil fuel or electricity generated by burning fossil fuel;
Burning garden prunings and weeds on a bonfire, though pyrolysis of wood turns 35% of its carbon (which would otherwise decompose to CO2) into biochar, which remains stable in the soil for thousands of years;
Buying tools, pesticides, synthetic nitrogen fertilizers (over 2 kilograms of carbon dioxide equivalent is produced in the manufacture of each kilogram of ammonium nitrate), and other materials which have been manufactured using fossil fuel;
Heating and treating swimming pools by burning fossil fuel or electricity generated by burning fossil fuel;
Watering their gardens with tapwater, which has been treated and pumped using energy from fossil fuels, with a greenhouse gas impact of about 1 kg CO2e per cubic metre of water (a rough worked example follows below).
Gardeners will also be responsible for extra carbon dioxide when they buy garden products which have been transported by vehicles powered by fossil fuel.
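As a rough illustration of the tapwater figure above, the sketch below estimates the emissions from irrigating a garden through a dry summer. The garden area, watering depth and number of weeks are made-up example values; only the approximate 1 kg CO2e per cubic metre factor comes from the text, and real figures vary by water supplier.

```python
# Rough estimate of greenhouse gas emissions from watering a garden with tapwater.
# Illustrative assumed values; only the ~1 kg CO2e/m3 factor is taken from the text above.

CO2E_PER_M3 = 1.0           # kg CO2e per cubic metre of treated, pumped tapwater (approximate)

lawn_area_m2 = 100          # example garden area
watering_depth_mm = 10      # water applied per week (10 mm over 1 m2 = 10 litres)
weeks_watered = 12          # length of the dry season

litres_per_week = lawn_area_m2 * watering_depth_mm   # 1 mm over 1 m2 equals 1 litre
total_m3 = litres_per_week * weeks_watered / 1000     # convert litres to cubic metres

emissions_kg = total_m3 * CO2E_PER_M3
print(f"Water used: {total_m3:.1f} m3, emissions: {emissions_kg:.1f} kg CO2e")
# About 12 m3 of tapwater and roughly 12 kg CO2e for this example garden.
```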
Methane
Methane, CH4, is a natural part of the carbon cycle, but human land uses often add more, especially from anaerobic soil, artificial wetlands such as rice fields, and from the guts of farm animals, especially ruminants such as cattle and sheep. Gardeners may cause extra methane to be added to the atmosphere in several ways:
Compacting soil so that it becomes anaerobic, for example by treading on soil when it is wet;
Allowing compost heaps to become compacted and anaerobic;
Creating homemade liquid feed by putting the leaves of plants such as comfrey under water, with the unintended consequence that the plants may release methane as they decay;
Killing pernicious weeds by covering them with water, with the unintended consequence that the plants may release methane as they decay;
Allowing ponds to become anaerobic, for example by adding unsuitable fish species which stir up sediment that then blocks light from and kills submerged oxygenating plants.
Nitrous oxide
Nitrous oxide, N2O, is a natural part of the nitrogen cycle, but human land uses often add more. Gardeners may cause extra nitrous oxide to be added to the atmosphere by:
Using synthetic nitrogen fertilizer, for example "weed and feed" on lawns, especially if it is applied when plants are not actively growing, the soil is compacted, or when other factors are limiting so that the plants cannot make use of the nitrogen;
Compacting the soil (for example by working in the garden when the soil is wet) which will increase the conversion of nitrates to nitrous oxide by soil bacteria;
Burning garden waste on bonfires.
Black carbon
Black carbon is not a gas, but it acts like a greenhouse gas because it can be suspended in the atmosphere and absorb heat. Gardeners may cause extra black carbon to be added to the atmosphere by burning garden prunings and weeds on bonfires, especially if the waste is wet and becomes black carbon in the form of soot. Gardeners will also be responsible for extra black carbon produced when they buy garden products which have been transported by vehicles powered by fossil fuel, especially the diesel used in most lorries.
Gardening to reduce greenhouse gas emissions and absorb carbon dioxide
There are many ways in which climate-friendly gardeners may reduce their contribution to climate change and help their gardens absorb carbon dioxide from the atmosphere. Climate-friendly gardeners can find good ideas in many other sustainable approaches:
Agroforestry;
Forest gardening;
Orchards;
Organic gardening;
Permaculture;
Rain garden;
Vegan organic gardening;
Water-wise gardening;
Wildlife garden;
Biochar.
Protecting and enhancing carbon stores
Protecting carbon stores in land beyond gardens
Climate-friendly gardening includes actions which protect carbon stores beyond gardens. The biggest carbon stores in land are in soil; the two habitat types with the biggest carbon stores per hectare are woods and wetlands; and woods absorb more carbon dioxide per hectare per year than most other habitats. Climate-friendly gardeners therefore aim to ensure that nothing they do will harm these habitats.
According to Morison and Morecroft (eds.)'s Plant Growth and Climate Change, the net primary productivity (the net amount of carbon absorbed each year) of various habitats is:
Tropical forests: 12.5 tonnes of carbon per hectare per year;
Temperate forests: 7.7 tonnes of carbon per hectare per year;
Temperate grasslands: 3.7 tonnes of carbon per hectare per year;
Croplands: 3.1 tonnes of carbon per hectare per year.
The Intergovernmental Panel on Climate Change's Special Report Land Use, Land-Use Change and Forestry lists the carbon contained in different global habitats as:
Wetlands: 643 tonnes carbon per hectare in soil + 43 tonnes carbon per hectare in vegetation = total 686 tonnes carbon per hectare;
Tropical forests: 123 tonnes carbon per hectare in soil + 120 tonnes carbon per hectare in vegetation = total 243 tonnes carbon per hectare;
Temperate forests: 96 tonnes carbon per hectare in soil + 57 tonnes carbon per hectare in vegetation = total 153 tonnes carbon per hectare;
Temperate grasslands: 164 tonnes carbon per hectare in soil + 7 tonnes carbon per hectare in vegetation = total 171 tonnes carbon per hectare;
Croplands: 80 tonnes carbon per hectare in soil + 2 tonnes carbon per hectare in vegetation = total 82 tonnes carbon per hectare.
The figures quoted above are global averages. More recent research in 2009 has found that the habitat with the world's highest known total carbon density - 1,867 tonnes of carbon per hectare - is temperate moist forest of Eucalyptus regnans in the Central Highlands of south-east Australia; and, in general, that temperate forests contain more carbon than either boreal forests or tropical forests.
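As a rough worked comparison using the average totals above, the sketch below estimates how much carbon dioxide is implied when one hectare of temperate forest is converted to cropland. It simply subtracts the cropland total from the forest total and converts carbon to CO2 using the ratio of molar masses (44/12); the result is indicative only, since real losses depend on soil depth, climate and management.

```python
# Rough carbon-loss estimate for converting temperate forest to cropland,
# using the global average totals quoted above (tonnes of carbon per hectare).

temperate_forest_t_c_per_ha = 153   # soil + vegetation total from the list above
cropland_t_c_per_ha = 82            # soil + vegetation total from the list above

carbon_lost = temperate_forest_t_c_per_ha - cropland_t_c_per_ha   # tonnes C per hectare
co2_equivalent = carbon_lost * 44.0 / 12.0                        # tonnes CO2 per hectare

print(f"Carbon lost: {carbon_lost} t C/ha, ~{co2_equivalent:.0f} t CO2/ha")
# Roughly 71 t C/ha, equivalent to about 260 t CO2 per hectare if fully oxidised.
```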
Carbon stores in Britain
According to Milne and Brown's 1997 paper "Carbon in the vegetation and soils of Great Britain", Britain's vegetation and soil are estimated to contain 9952 million tonnes of carbon, of which almost all is in the soil, and most in Scottish peatland soil:
Soils in Scotland: 6948 million tonnes carbon;
Soils in England and Wales: 2890 million tonnes carbon;
Vegetation in British woods and plantations (which cover only 11% of Britain's land area): 91 million tonnes carbon;
Other vegetation: 23 million tonnes carbon.
A 2005 report suggested that British woodland soil may contain as much as 250 tonnes of carbon per hectare.
Many studies of soil carbon only study the carbon in the top 30 centimetres, but soil is often much deeper than that, especially below woodland. One 2009 study of the United Kingdom's carbon stores by Keith Dyson and others gives figures for soil carbon down to 100 cm below the habitats, including "Forestland", "Cropland" and "Grassland", covered by the Kyoto Protocol reporting requirements.
Forestland soils: average figures in tonnes carbon per hectare are 160 (England), 428 (Scotland), 203 (Wales), and 366 (Northern Ireland).
Grassland soils: average figures in tonnes carbon per hectare are 148 (England), 386 (Scotland), 171 (Wales), and 304 (Northern Ireland).
Cropland soils: average figures in tonnes carbon per hectare are 110 (England), 159 (Scotland), 108 (Wales), and 222 (Northern Ireland).
Protecting carbon stores in wetland
Climate-friendly gardeners choose peat-free composts because some of the planet's biggest carbon stores are in soil, and especially in the peatland soil of wetlands.
The Intergovernmental Panel on Climate Change's Special Report Land Use, Land-Use Change and Forestry gives a figure of 2011 gigatonnes of carbon for global carbon stocks in the top 1 metre of soils, much more than the carbon stored in vegetation or the atmosphere. Climate-friendly gardeners also avoid using tapwater, not only because of the greenhouse gases emitted when fossil fuels are burnt to treat and pump water, but because if water is taken from wetlands then carbon stores are more likely to be oxidised to carbon dioxide. A climate-friendly garden therefore does not contain large irrigated lawns, but instead includes water butts to collect rainwater; water-thrifty plants which survive on rainwater and do not need watering after they are established; trees, shrubs and hedges to shelter gardens from the drying effects of sun and wind; and groundcover plants and organic mulch to protect the soil and keep it moist. Climate-friendly gardeners will ensure that any paved surfaces in their gardens (which are kept to a minimum to increase carbon stores) are permeable, and may also make rain gardens, sunken areas into which rainwater from buildings and paving is directed, so that the rain can then be fed back into groundwater rather than going into storm drains. The plants in rain gardens must be able to grow in both dry and wet soils.
Protecting carbon stores in woodland
Wetlands may store the most carbon in their soils, but woods store more carbon in their living biomass than any other type of vegetation, and their soils store the most carbon after wetlands. Climate-friendly gardeners therefore ensure that any wooden products they buy, such as garden furniture, have been made of wood from sustainably managed woodland.
Protecting and increasing carbon stores in gardens
After rocks containing carbonate compounds, soil is known to be the biggest store of carbon on land.
Carbon is found in soil organic matter, including living organisms (plant roots, fungi, animals, protists, bacteria), dead organisms, and humus.
One study of the environmental benefits of gardens estimates that 86% of the carbon stored in gardens is in the soil.
The first priorities for climate-friendly gardeners are therefore to:
Protect the soil's existing carbon stores;
Increase the soil's carbon stores;
Choose low-emission garden products and practices;
Prevent erosion and keep weeds down;
Plant trees and shrubs;
Reduce heat-trapping nitrous oxide emissions related to fertilizer use and generous watering.
To protect the soil, climate-friendly gardens:
Are based on plants rather than buildings and paving;
Have soil that is kept at a relatively stable temperature by shelter from trees, shrubs and/or hedges;
Have soil that is always kept covered, and therefore moist and at a relatively stable temperature, by groundcover plants, fast-growing green manures (which can be used as an intercrop in kitchen gardens of annual vegetables) and/or organic mulches.
Climate-friendly gardeners avoid things which may harm the soil. They do not tread on the soil when it is wet, because it is then most vulnerable to compaction. They dig and till the soil as little as possible, and only when the soil is moist rather than wet, because cultivation increases the oxidation of soil organic matter and produces carbon dioxide. To increase soil carbon stores, climate-friendly gardeners ensure that their gardens create optimal conditions for vigorous, healthy growth of plants and other garden organisms above and below the ground, and reduce the impact of any limiting factors.
In general, the more biomass the plants can create each year, the more carbon will be added to the soil.
However, only some of each year's biomass becomes long-term soil carbon or humus. In Soil Carbon and Organic Farming, a 2009 report from the Soil Association, Gundula Azeez discusses several factors which increase how much biomass is turned into humus. These include good soil structure; soil organisms such as fine root hairs, microorganisms, mycorrhizas and earthworms, which increase soil aggregation; residues from plants (such as trees and shrubs) which have a high content of resistant chemicals such as lignin; and plant residues with a carbon to nitrogen ratio lower than about 32:1.
Climate-friendly gardens therefore include:
Hedges for shelter from wind;
A light canopy of late-leafing deciduous trees to let in enough sunlight for growth but not so much that the garden becomes too hot and dry (this is one of the principles behind many agroforestry systems, such as Paulownia's use in China partly because it is late-leafing and its canopy is sparse so that crops below it get shelter but also enough light);
Groundcover plants and organic mulches (such as woodchips over compost made from kitchen and garden "waste") to keep soil moist and at relatively stable temperatures;
Reducing the use of gas-powered lawn and garden equipment in favor of electric-powered devices. Instead of a leaf blower, using a rake or broom will cut down on gas emissions that contribute to climate change.
Nitrogen-fixing plants, because soil nitrogen may be a limiting factor (but climate-friendly gardeners avoid synthetic nitrogen fertilizers, because these may cause mycorrhizal associations to break down);
Many layers of plants, including woody plants such as trees and shrubs, other perennials, groundcover plants, deep-rooted plants, all chosen according to 'right plant, right place', so that they are suited to their growing conditions and will grow well;
A wide diversity of disease-resistant, vigorous plants for resilience and to make the most of all available ecological niches;
Plants to feed and shelter wildlife, to increase total biomass, and to ensure biological control of pests and diseases.
Soil amendments from waste products such as compost made from garden and kitchen "waste" and biochar from pyrolyzed dried, dead wood.
Maximise the ventilation and shading around the home as much as possible during the summer.
Lawns, like other grasslands, can build up good levels of soil carbon, but they will grow much more vigorously and store more carbon if, besides grasses, they also contain nitrogen-fixing plants such as clover, and if they are cut using a mulching mower which returns finely chopped mowings to the lawn. More carbon, however, may be stored by other perennial plants such as trees and shrubs, which also do not need to be maintained using power tools.
Climate-friendly gardeners will also aim to increase biodiversity not only for the sake of the wildlife itself, but so that the garden ecosystem is resilient and more likely to store as much carbon as possible as long as possible. They will therefore avoid pesticides, and increase the diversity of the habitats within their gardens.
Reducing greenhouse gas emissions
Climate-friendly gardeners can directly reduce the greenhouse gas emissions from their own gardens, but can also use their gardens to indirectly reduce greenhouse gas emissions elsewhere.
Using gardens to reduce greenhouse gas emissions
Climate-friendly gardeners can use their gardens in ways which reduce greenhouse gases elsewhere, for example by using the sun and wind to dry washing on washing lines in the garden instead of using electricity generated by fossil fuel to dry washing in tumble dryers.
From farmland
Food is a major contributor to climate change. In the United Kingdom, according to Tara Garnett of the Food Climate Research Network, food contributes 19% of the country's greenhouse gas emissions. Soil is the biggest store of carbon on land, so it is important to protect the soil organic matter in farmland. Farm animals, however, especially free-range pigs, may cause erosion, and cultivation of the soil increases the oxidation of soil organic matter into carbon dioxide. Other sources of greenhouse gases from farmland include methane from soil made anaerobic by compaction under farm machinery or overgrazing, methane from farm animals and from the decay of manure and other organic wastes, and nitrous oxide from nitrogen fertilizers.
Most farmland consists of fields growing annual arable crops which are eaten directly by people or fed to farm animals, and grassland used as pasture, hay or silage to feed farm animals. Some perennial food plants are also grown, such as fruits and nuts in orchards, and watercress grown in water.
Although all cultivation of the soil in arable fields produces carbon dioxide, some arable crops cause more damage to soil than others. Root crops such as potatoes and sugar beet, and crops which are harvested not just once a year but over a long period, such as green vegetables and salads, are considered "high risk" in catchment-sensitive farming. Climate-friendly gardeners therefore grow at least some of their own food, and can help to keep carbon in farmland soils by growing such high-risk crops in small vegetable plots in their gardens, where it is easier to protect the soil than in large fields under commercial pressures. Climate-friendly gardeners may also grow and eat plants such as sweet cicely which sweeten food, and so reduce the land area needed for sugar beet.
They may also choose to grow perennial food plants, not only to reduce their indirect greenhouse gas emissions from farmland but also to increase carbon stores in their own gardens. Grassland contains more carbon per hectare than arable fields, but farm animals, especially ruminants such as cattle or sheep, produce large amounts of methane, directly and from manure heaps and slurry. Slurry and manure may also produce nitrous oxide.
Gardeners who want to reduce their greenhouse gas emissions can help themselves to eat less meat and dairy produce by growing nut trees, which are a good source of protein-rich food; walnuts, for example, are an excellent source of the omega-3 fatty acid alpha-linolenic acid. Researchers and farmers are investigating and improving ways of farming which are more sustainable, such as agroforestry, forest farming, wildlife-friendly farming, soil management and catchment-sensitive (or water-friendly) farming. For example, the organisation Farming Futures assists farmers in the United Kingdom to reduce their farms' greenhouse gas emissions. Farmers are aware that consumers are increasingly asking for "green credentials", and gardeners who understand climate-friendly practices can advocate their use by farmers.
From industry
Climate-friendly gardeners aim to reduce their consumption in general. In particular, they try to avoid or reduce their consumption of tapwater because of the greenhouse gases emitted when fossil fuels are burnt to supply the energy needed to treat and pump it; instead, gardeners can garden using only rainwater. Greenhouse gases are also produced in the manufacture of many materials and products used by gardeners. For example, it takes a lot of energy to produce synthetic fertilizers, especially nitrogen fertilizers: ammonium nitrate has an embodied energy of about 67,000 kilojoules per kilogramme, so climate-friendly gardeners will choose alternative ways of ensuring the soil in their gardens has optimal levels of nitrogen, such as growing nitrogen-fixing plants.
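As a rough worked example using the embodied-energy figure above, the sketch below estimates the energy implied by a season's use of ammonium nitrate on a small garden. The application rate and garden size are made-up illustrative values, and the conversion to litres of petrol-equivalent assumes roughly 34 MJ per litre; only the 67,000 kJ/kg embodied energy comes from the text.

```python
# Rough embodied-energy estimate for synthetic nitrogen fertilizer use in a garden.
# Only the 67,000 kJ/kg figure comes from the text; the rest are illustrative assumptions.

EMBODIED_ENERGY_KJ_PER_KG = 67_000   # ammonium nitrate, approximate figure quoted above
PETROL_MJ_PER_LITRE = 34.0           # approximate energy content of petrol (assumption)

garden_area_m2 = 200                 # example garden size
application_rate_g_per_m2 = 35       # example annual application of ammonium nitrate

fertilizer_kg = garden_area_m2 * application_rate_g_per_m2 / 1000
embodied_mj = fertilizer_kg * EMBODIED_ENERGY_KJ_PER_KG / 1000

print(f"{fertilizer_kg:.1f} kg fertilizer ~ {embodied_mj:.0f} MJ embodied energy "
      f"(~{embodied_mj / PETROL_MJ_PER_LITRE:.1f} litres of petrol-equivalent)")
# About 7 kg of fertilizer, ~469 MJ, or roughly 14 litres of petrol-equivalent.
```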
Climate-friendly gardeners will also aim to follow "cradle-to-cradle design" and "circular economy" principles: when they choose to buy or make something, it should be possible to take it apart again and recycle or compost every part, so that there is no waste, only raw materials to be made into something else.
This will reduce the greenhouse gases otherwise produced when extracting raw materials.
From transport
Gardeners can reduce not only their food miles by growing some of their own food, but also their "gardening miles" by reducing the amount of plants and other materials they import, obtaining them as locally as possible and with as little packaging as possible. This might include ordering plants by mail order from a specialist nursery if the plants are sent out bare-root, reducing transport demand and the use of peat-based composts; or growing plants from seed, which will also increase genetic diversity and therefore resilience; or growing plants vegetatively from cuttings or offsets from other local gardeners; or buying reclaimed materials from salvage firms.
From houses
Climate-friendly gardeners can use their gardens in ways which reduce greenhouse gas emissions from homes by:
Using sunlight and wind to dry washing on washing lines instead of fossil fuel-generated electricity to run tumble dryers;
Planting deciduous climbers on houses and planting deciduous trees at suitable distances from the house to provide shade during the summer, reducing the consumption of electricity for air conditioning, but also such that at cooler times of year, sunlight can reach and warm a house, reducing heating costs and consumption;
Planting hedges, trees, shrubs and climbers to shelter houses from wind, reducing heating costs and consumption during the winter (as long as any planting does not create a wind-tunnel effect).
Climate-friendly gardeners may also choose to reduce their own personal greenhouse gas emissions by growing and eating carminative plants such as fennel and garlic, which reduce intestinal gases such as methane.
Reducing greenhouse gas emissions from gardens and homes
There are some obvious sources of greenhouse gas emissions in gardens, and some less obvious ones.
Power tools which are powered by diesel or petrol, or by electricity generated by burning fossil fuels, emit carbon dioxide. Climate-friendly gardeners may therefore choose to use hand tools rather than power tools, or power tools powered by renewable electricity, or design their gardens to reduce or remove the need for power tools. For example, they may choose dense, slow-growing species for hedges so that the hedges only need to be cut once a year. Turning a thermostat down by 3 degrees Fahrenheit in the winter and up by 3 degrees Fahrenheit in the summer can reduce carbon dioxide emissions by about 1,050 pounds per year.
In place of a water-thirsty lawn that requires a lot of fertilizers and herbicides to be kept green and weed-free, native vegetation may be planted. This can be maintained with a drip irrigation system run by a "smart" sprinkler controller. These "smart" controllers can determine whether it has rained recently and will not water the plants if it has. They can also be programmed for particular types of plants, as opposed to zones, so if certain plants need more water than others, they get it without drowning out less water-loving plants.
Lawns are often cut by lawn mowers and, in drier parts of the world, are often irrigated by tapwater. Climate-friendly gardeners will therefore do what they can to reduce this consumption by:
Replacing part of or all lawns with other perennial planting, such as trees and shrubs, which has less demanding maintenance requirements;
Cutting some or all lawns only once or twice a year, i.e. converting them into meadows;
Making lawn shapes simple so that they may be cut quickly;
Increasing the cutting height of mower blades;
Using a mulching mower to return organic matter to the soil;
Sowing clover to increase vigour (without the need for synthetic fertilisers) and resilience in dry periods;
Cutting lawns with electric mowers using electricity from renewable energy;
Cutting lawns with hand tools such as push mowers or scythes.
Greenhouses can be used to grow crops which might otherwise be imported from warmer climates, but if they are heated by fossil fuel, then they may cause more greenhouse gas emissions than they save. Climate-friendly gardeners will therefore use their greenhouses carefully by:
Choosing only annual plants which will only be in the greenhouse during warmer months, or perennial plants which do not need any extra heat during winter;
Using water tanks as heat stores and compost heaps as heat sources inside greenhouses so that they stay frost-free in winter.
Climate-friendly gardeners will not put woody prunings on bonfires, which emit carbon dioxide and black carbon. Instead they may burn them indoors in a wood-burning stove and so cut emissions from fossil fuel, cut them up to use as mulch and increase soil carbon stores, make biochar by pyrolysis, or add the smaller prunings to compost heaps to keep them aerated, reducing methane emissions. To reduce the risk of fire, they will also choose fire-resistant plants from habitats which are not prone to wildfires and which do not catch fire easily, rather than fire-adapted plants from fire-prone habitats, which are flammable and adapted to encourage fires and so gain a competitive advantage over less resistant species.
Climate-friendly gardeners may use deep-rooted plants such as comfrey to bring nutrients closer to the surface topsoil, but will do so without making the leaves into a liquid feed, because the rotting leaves in the anaerobic conditions under water may emit methane.
Nitrogen fertilizers may be oxidised to nitrous oxide, especially if fertilizer is applied in excess, or when plants are not actively growing. Climate-friendly gardeners may choose instead to use nitrogen-fixing plants which will add nitrogen to the soil without increasing nitrous oxide emissions.
See also
Agroforestry
Energy-efficient landscaping
Foodscaping
Forest gardening
Green building
List of organic gardening and farming topics
Orchard
Organic gardening
Permaculture
Rain garden
Sustainable design / gardening / landscaping and landscape architecture / living
Vegan organic gardening
Water-wise gardening
Wildlife gardening
References
Further reading
Cameron, Blanuša; et al. (2012). "The domestic garden – its contribution to urban green infrastructure" (PDF). Urban Forestry & Urban Greening. 11 (2): 129–137. doi:10.1016/j.ufug.2012.01.002.
Steven B. Carroll and Steven B. Salt (2004), Ecology for Gardeners, Portland, USA and Cambridge, UK: Timber Press (ISBN 0881926116).
Charlotte Green (1999), Gardening Without Water: Creating beautiful gardens using only rainwater, Tunbridge Wells: Search Press (ISBN 0855328851).
David S. Ingram, Daphne Vince-Prue and Peter J. Gregory (2008), Science and the Garden: The scientific basis for horticultural practice, Chichester, Sussex: Blackwell Publishing (ISBN 9781405160636).
John Walker (2011), How to Create an Eco Garden: The Practical Guide to Greener, Planet-Friendly Gardening, Wigston, Leicestershire: Aquamarine (ISBN 978-1903141892).
Ken Fern (1997), Plants for a Future: Edible and useful plants for a healthier world, Clanfield, Hampshire: Permanent Publications (ISBN 9781856230117).
Martin Crawford (2010), Creating a Forest Garden: Working with nature to grow edible crops, Hartland, Devon: Green Books (ISBN 9781900322621).
Michael Lavelle (2011), Sustainable Gardening, Marlborough: The Crowood Press (ISBN 9781847972323).
Matthew Wilson (2007), New Gardening: How to garden in a changing climate, London: Mitchell Beazley and the Royal Horticultural Society (ISBN 9781845333058).
Nex, Sally (2021). How to garden the low carbon way: the steps you can take to help combat climate change (First American ed.). New York. ISBN 978-0-7440-2928-4. OCLC 1241100709.
Rob Cross and Roger Spencer (2009), Sustainable Gardens, Collingwood, Australia: CSIRO (ISBN 9780643094222).
Sally Cunningham (2009), Ecological Gardening, Marlborough: The Crowood Press (ISBN 9781847971258).
Sara J. Scherr and Sajal Sthapit (2009), Mitigating Climate Change through Food and Land Use, Worldwatch Institute, Washington, United States of America (ISBN 9781878071910).
Richard Bisgrove and Paul Hadley (2002), Gardening in the Global Greenhouse: The impacts of climate change on gardens in the UK, Oxford: UK Climate Impacts Programme.
Tara Garnett (2008), Cooking up a Storm: Food, greenhouse gas emissions and our changing climate, Guildford: Food Climate Research Network, Centre for Environmental Strategy, University of Surrey.
Union of Concerned Scientists (2010), The Climate-Friendly Gardener: A guide to combating global warming from the ground up.
Wall, Bardgett et al (2013), Soil Ecology and Ecosystem Services, Oxford University Press (ISBN 9780199688166).
Watson, Noble et al (2000), Land Use, Land-Use Change and Forestry (Intergovernmental Panel on Climate Change Special Report), Cambridge, UK: Cambridge University Press (ISBN 9780521800839).
External links
Learning from Nature
Gardening in a Changing Climate, Royal Horticultural Society.
Watson, Noble et al (2000), Intergovernmental Panel on Climate Change Special Report: Land Use, Land-Use Change and Forestry, Cambridge, UK: Cambridge University Press (ISBN 9780521800839).
Richard Bisgrove and Paul Hadley (2002), Gardening in the Global Greenhouse: The impacts of climate change on gardens in the UK, Oxford: UK Climate Impacts Programme.
Sara J. Scherr and Sajal Sthapit (2009), Mitigating Climate Change through Food and Land Use, Worldwatch Institute, Washington, United States of America (ISBN 9781878071910).
Plants for a Future |
lithium-ion battery | A lithium-ion or Li-ion battery is a type of rechargeable battery which uses the reversible intercalation of Li+ ions into electronically conducting solids to store energy. In comparison with other rechargeable batteries, Li-ion batteries are characterized by higher specific energy, higher energy density, higher energy efficiency, a longer cycle life and a longer calendar life. Also noteworthy is the dramatic improvement in lithium-ion battery properties after their market introduction in 1991: over the following 30 years their volumetric energy density increased threefold, while their cost dropped tenfold. The invention and commercialization of Li-ion batteries is considered to have had one of the largest societal impacts of any technology in human history, as recognized by the 2019 Nobel Prize in Chemistry.
More specifically, Li-ion batteries enabled portable consumer electronics, laptop computers, cellular phones and electric cars, or what has been called e-mobility revolution. It also sees significant use for grid-scale energy storage, as well as military and aerospace applications.
Although many thousands of different materials have been investigated for use in lithium-ion batteries, the chemistry space that has actually made it into commercial applications is extremely small. All commercial Li-ion cells use intercalation compounds as active materials:
1) The anode (or negative electrode) is usually graphite, although silicon has often been mixed with graphite in commercial cells since ca. 2015.
2) The solvents in commercial Li-ion batteries comprise organic carbonates, such as ethylene carbonate and dimethyl carbonate, that form a solid electrolyte interphase on the negative electrode, which allows Li+ ion transport but not electron transfer.
3) In addition to the carbonate solvent(s), the battery electrolyte comprises a lithium salt. Lithium hexafluorophosphate is most commonly used, because it passivates the positive aluminium current collector.
4) There is more diversity among positive electroactive materials (cathodes). They are selected from a group comprising layered LiCoO2 and LiNiO2, spinel LiMn2O4, olivine LiFePO4, and their combinations and derivatives. Many other positive electrode materials have been studied, but they all suffer either from high cost, poor durability (Li+ for M ion place exchange), or a voltage too high to be compatible with known electrolytes.
5) The negative current collector is usually made of copper, with a spot-welded nickel tab.
6) The positive current collector is usually made of aluminium, with an ultrasonically welded titanium tab.
Lithium-ion cells can be manufactured to optimize energy or power density. Handheld electronics mostly use lithium polymer batteries (with a polymer gel as electrolyte), a lithium cobalt oxide (LiCoO2) cathode material, and a graphite anode, which together offer high energy density. Lithium iron phosphate (LiFePO4), lithium manganese oxide (LiMn2O4 spinel, or Li2MnO3-based lithium-rich layered materials, LMR-NMC), and lithium nickel manganese cobalt oxide (LiNiMnCoO2 or NMC) may offer longer life and a higher discharge rate. NMC and its derivatives are widely used in the electrification of transport, one of the main technologies (combined with renewable energy) for reducing greenhouse gas emissions from vehicles. M. Stanley Whittingham conceived intercalation electrodes in the 1970s and created the first rechargeable lithium-ion battery, based on a titanium disulfide cathode and a lithium-aluminum anode, although it suffered from safety problems and was never commercialized. John Goodenough expanded on this work in 1980 by using lithium cobalt oxide as a cathode. The first prototype of the modern Li-ion battery, which uses a carbonaceous anode rather than lithium metal, was developed by Akira Yoshino in 1985 and commercialized by a Sony and Asahi Kasei team led by Yoshio Nishi in 1991. M. Stanley Whittingham, John Goodenough and Akira Yoshino were awarded the 2019 Nobel Prize in Chemistry for their contributions to the development of lithium-ion batteries.
Lithium-ion batteries can be a safety hazard if not properly engineered and manufactured, because cells have flammable electrolytes and if damaged or incorrectly charged, can lead to explosions and fires. Much progress has been made in the development and manufacturing of safe lithium-ion batteries. Lithium ion all solid state batteries are being developed to eliminate the flammable electrolyte. Improperly recycled batteries can create toxic waste, especially from toxic metals and are at risk of fire. Moreover, both lithium and other key strategic minerals used in batteries have significant issues at extraction, with lithium being water intensive in often arid regions and other minerals often being conflict minerals such as cobalt. Both environmental issues have encouraged some researchers to improve mineral efficiency and alternatives such as iron-air batteries.
Research areas for lithium-ion batteries include extending lifetime, increasing energy density, improving safety, reducing cost, and increasing charging speed, among others. Research has been under way in the area of non-flammable electrolytes as a pathway to increased safety based on the flammability and volatility of the organic solvents used in the typical electrolyte. Strategies include aqueous lithium-ion batteries, ceramic solid electrolytes, polymer electrolytes, ionic liquids, and heavily fluorinated systems.
History
Research on rechargeable Li-ion batteries dates to the 1960s; one of the earliest examples is a CuF2/Li battery developed by NASA in 1965. The breakthrough that produced the earliest form of the modern Li-ion battery was made by British chemist M. Stanley Whittingham in 1974, who first used titanium disulfide (TiS2) as a cathode material, which has a layered structure that can take in lithium ions without significant changes to its crystal structure. Exxon tried to commercialize this battery in the late 1970s, but found the synthesis expensive and complex, as TiS2 is sensitive to moisture and releases toxic H2S gas on contact with water. More prohibitively, the batteries were also prone to spontaneously catch fire due to the presence of metallic lithium in the cells. For this and other reasons, Exxon discontinued the development of Whittingham's lithium-titanium disulfide battery. In 1980, working in separate groups, Ned A. Godshall et al. and, shortly thereafter, Koichi Mizushima and John B. Goodenough, after testing a range of alternative materials, replaced TiS2 with lithium cobalt oxide (LiCoO2, or LCO), which has a similar layered structure but offers a higher voltage and is much more stable in air. This material would later be used in the first commercial Li-ion battery, although it did not, on its own, resolve the persistent issue of flammability. These early attempts to develop rechargeable Li-ion batteries used lithium metal anodes, which were ultimately abandoned due to safety concerns, as lithium metal is unstable and prone to dendrite formation, which can cause short-circuiting. The eventual solution was to use an intercalation anode, similar to that used for the cathode, which prevents the formation of lithium metal during battery charging. A variety of anode materials were studied.
In 1980, Rachid Yazami demonstrated reversible electrochemical intercalation of lithium in graphite and invented the lithium-graphite electrode (anode). Yazami's work was limited to a solid electrolyte (polyethylene oxide), because the liquid solvents tested by him and by earlier researchers co-intercalated with Li+ ions into graphite, resulting in crumbling of the electrode and a short cycle life.
In 1985, Akira Yoshino at Asahi Kasei Corporation discovered that petroleum coke, a less graphitized form of carbon, can reversibly intercalate Li-ions at a low potential of ~0.5 V relative to Li+ /Li without structural degradation. Its structural stability originates from the amorphous carbon regions in petroleum coke serving as covalent joints to pin the layers together. Although the amorphous nature of petroleum coke limits capacity compared to graphite (~Li0.5C6, 0.186 Ah g–1), it became the first commercial intercalation anode for Li-ion batteries owing to its cycling stability.
In 1987, Akira Yoshino patented what would become the first commercial lithium-ion battery, using an anode of "soft carbon" (a charcoal-like material) along with Goodenough's previously reported LiCoO2 cathode and a carbonate ester-based electrolyte. This battery is assembled in a discharged state, which makes its manufacturing safer and cheaper. In 1991, using Yoshino's design, Sony began producing and selling the world's first rechargeable lithium-ion batteries. The following year, a joint venture between Toshiba and Asahi Kasei Co. also released their lithium-ion battery. Significant improvements in energy density were achieved in the 1990s by replacing the soft carbon anode first with hard carbon and later with graphite, a concept originally proposed by Jürgen Otto Besenhard in 1974 but considered unfeasible due to unresolved incompatibilities with the electrolytes then in use. In 1990, Jeff Dahn and two colleagues at Dalhousie University (Canada) reported reversible intercalation of lithium ions into graphite in the presence of ethylene carbonate solvent (which is solid at room temperature and is mixed with other solvents to make a liquid), thus finding the final piece of the puzzle leading to the modern lithium-ion battery. In 2010, global lithium-ion battery production capacity was 20 gigawatt-hours. By 2016, it was 28 GWh, with 16.4 GWh in China. Global production capacity was 767 GWh in 2020, with China accounting for 75%. Production in 2021 was estimated by various sources to be between 200 and 600 GWh, and predictions for 2023 ranged from 400 to 1,100 GWh. In 2012, John B. Goodenough, Rachid Yazami and Akira Yoshino received the IEEE Medal for Environmental and Safety Technologies for developing the lithium-ion battery; Goodenough, Whittingham, and Yoshino were awarded the 2019 Nobel Prize in Chemistry "for the development of lithium-ion batteries". Jeff Dahn received the ECS Battery Division Technology Award (2011) and the Yeager Award from the International Battery Materials Association (2016).
In April 2023, CATL announced that it would begin scaled-up production of its semi-solid condensed-matter battery, which achieves a then-record 500 Wh/kg. These cells use electrodes made from a gelled material, requiring fewer binding agents, which in turn shortens the manufacturing cycle. One potential application is in battery-powered airplanes. Another new development in lithium-ion batteries is flow batteries with redox-targeted solids, which use no binders or electron-conducting additives and allow for completely independent scaling of energy and power.
Design
Generally, the negative electrode of a conventional lithium-ion cell is graphite made from carbon. The positive electrode is typically a metal oxide. The electrolyte is a lithium salt in an organic solvent. The anode (negative electrode) and cathode (positive electrode) are prevented from shorting by a separator. Each electrode is connected to external circuitry through a piece of metal called a current collector. The electrochemical roles of the electrodes reverse between anode and cathode, depending on the direction of current flow through the cell.
The most common commercially used anode is graphite, which in its fully lithiated state of LiC6 corresponds to a maximal capacity of 1339 C/g (372 mAh/g). The cathode is generally one of three materials: a layered oxide (such as lithium cobalt oxide), a polyanion (such as lithium iron phosphate) or a spinel (such as lithium manganese oxide). More experimental materials include graphene-containing electrodes, although these remain far from commercially viable due to their high cost. Lithium reacts vigorously with water to form lithium hydroxide (LiOH) and hydrogen gas. Thus, a non-aqueous electrolyte is typically used, and a sealed container rigidly excludes moisture from the battery pack. The non-aqueous electrolyte is typically a mixture of organic carbonates such as ethylene carbonate and propylene carbonate containing complexes of lithium ions. Ethylene carbonate is essential for making the solid electrolyte interphase on the carbon anode, but since it is solid at room temperature, a liquid solvent (such as propylene carbonate or diethyl carbonate) is added.
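As a check on the 372 mAh/g figure above, the sketch below computes the theoretical specific capacity of graphite from the LiC6 stoichiometry: one electron (one Faraday per mole) per six carbon atoms, divided by the mass of those six carbons. This is a standard textbook calculation, not code from any particular battery library.

```python
# Theoretical specific capacity of graphite (LiC6): one Li+/e- per six carbon atoms.

FARADAY_C_PER_MOL = 96485.3   # charge of one mole of electrons, in coulombs
M_CARBON_G_PER_MOL = 12.011   # molar mass of carbon

mass_per_mol_e = 6 * M_CARBON_G_PER_MOL          # grams of host carbon per mole of electrons
capacity_c_per_g = FARADAY_C_PER_MOL / mass_per_mol_e
capacity_mah_per_g = capacity_c_per_g / 3.6      # 1 mAh = 3.6 C

print(f"{capacity_c_per_g:.0f} C/g = {capacity_mah_per_g:.0f} mAh/g")
# Prints about 1339 C/g = 372 mAh/g, matching the figure quoted above.
```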
The electrolyte salt is almost always lithium hexafluorophosphate (LiPF6), which combines good ionic conductivity with chemical and electrochemical stability. Hexafluorophosphate is essential for passivating the aluminum current collector used for the cathode. A titanium tab is ultrasonically welded to the aluminum current collector.
Other salts like lithium perchlorate (LiClO4), lithium tetrafluoroborate (LiBF4), and lithium bis(trifluoromethanesulfonyl)imide (LiC2F6NO4S2) are frequently used in research in tab-less coin cells, but are not usable in larger format cells, often because they are not compatible with the aluminum current collector. Copper (with a spot-welded nickel tab) is used as the anode current collector.
Current collector design and surface treatments may take various forms: foil, mesh, foam (dealloyed), etched (wholly or selectively), and coated (with various materials) to improve electrical characteristics. Depending on materials choices, the voltage, energy density, life, and safety of a lithium-ion cell can change dramatically. Current effort has been exploring the use of novel architectures using nanotechnology to improve performance. Areas of interest include nano-scale electrode materials and alternative electrode structures.
Electrochemistry
The reactants in the electrochemical reactions in a lithium-ion cell are materials of anode and cathode, both of which are compounds containing lithium atoms. During discharge, an oxidation half-reaction at the anode produces positively charged lithium ions and negatively charged electrons. The oxidation half-reaction may also produce uncharged material that remains at the anode. Lithium ions move through the electrolyte, electrons move through the external circuit, and then they recombine at the cathode (together with the cathode material) in a reduction half-reaction. The electrolyte and external circuit provide conductive media for lithium ions and electrons, respectively, but do not partake in the electrochemical reaction. During discharge, electrons flow from the negative electrode (anode) towards the positive electrode (cathode) through the external circuit. The reactions during discharge lower the chemical potential of the cell, so discharging transfers energy from the cell to wherever the electric current dissipates its energy, mostly in the external circuit. During charging these reactions and transports go in the opposite direction: electrons move from the positive electrode to the negative electrode through the external circuit. To charge the cell the external circuit has to provide electric energy. This energy is then stored as chemical energy in the cell (with some loss, e. g. due to coulombic efficiency lower than 1).
Both electrodes allow lithium ions to move in and out of their structures with a process called insertion (intercalation) or extraction (deintercalation), respectively.
As the lithium ions "rock" back and forth between the two electrodes, these batteries are also known as "rocking-chair batteries" or "swing batteries" (a term given by some European industries). The following equations exemplify the chemistry.
The positive electrode (cathode) half-reaction in the lithium-doped cobalt oxide substrate is
CoO2 + Li+ + e− ⇌ LiCoO2
The negative electrode (anode) half-reaction for the graphite is
LiC6 ⇌ C6 + Li+ + e−
The full reaction (left to right: discharging, right to left: charging) being
LiC6 + CoO2 ⇌ C6 + LiCoO2
The overall reaction has its limits. Overdischarging supersaturates lithium cobalt oxide, leading to the production of lithium oxide, possibly by the following irreversible reaction:
Li+ + e− + LiCoO2 → Li2O + CoO
Overcharging up to 5.2 volts leads to the synthesis of cobalt (IV) oxide, as evidenced by x-ray diffraction:
LiCoO2 → Li+ + CoO2 + e−
In a lithium-ion cell, the lithium ions are transported to and from the positive or negative electrodes by oxidizing the transition metal, cobalt (Co), in Li1-xCoO2 from Co3+ to Co4+ during charge, and reducing it from Co4+ to Co3+ during discharge. The cobalt electrode reaction is only reversible for x < 0.5 (x in mole units), limiting the depth of discharge allowable. This chemistry was used in the Li-ion cells developed by Sony in 1990. The cell's energy is equal to the voltage times the charge. Each gram of lithium represents Faraday's constant/6.941, or 13,901 coulombs. At 3 V, this gives 41.7 kJ per gram of lithium, or 11.6 kWh per kilogram of lithium. This is a bit more than the heat of combustion of gasoline but does not consider the other materials that go into a lithium battery and that make lithium batteries many times heavier per unit of energy.
Note that the cell voltages involved in these reactions are larger than the potential at which aqueous solutions would electrolyze.
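The per-gram figures above follow directly from Faraday's constant and the molar mass of lithium; the short sketch below reproduces the arithmetic. It is a plain back-of-the-envelope calculation, not code from any battery-modelling library, and the 3 V cell voltage is the nominal value used in the text.

```python
# Back-of-the-envelope energy content per unit mass of lithium, as in the text above.

FARADAY_C_PER_MOL = 96485.3   # coulombs per mole of electrons
M_LITHIUM_G_PER_MOL = 6.941   # molar mass of lithium
CELL_VOLTAGE_V = 3.0          # nominal cell voltage used in the text

charge_c_per_g = FARADAY_C_PER_MOL / M_LITHIUM_G_PER_MOL      # ~13,901 C per gram of Li
energy_kj_per_g = charge_c_per_g * CELL_VOLTAGE_V / 1000      # ~41.7 kJ per gram of Li
energy_kwh_per_kg = energy_kj_per_g * 1000 / 3600             # ~11.6 kWh per kilogram of Li

print(f"{charge_c_per_g:.0f} C/g, {energy_kj_per_g:.1f} kJ/g, {energy_kwh_per_kg:.1f} kWh/kg")
```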
Charging and discharging
During discharge, lithium ions (Li+) carry the current within the battery cell from the negative to the positive electrode, through the non-aqueous electrolyte and separator diaphragm. During charging, an external electrical power source (the charging circuit) applies an over-voltage (a higher voltage than the battery produces, of the same polarity), forcing a charging current to flow within each cell from the positive to the negative electrode, i.e., in the reverse direction of a discharge current under normal conditions. The lithium ions then migrate from the positive to the negative electrode, where they become embedded in the porous electrode material in a process known as intercalation.
Energy losses arising from electrical contact resistance at interfaces between electrode layers and at contacts with current collectors can be as high as 20% of the entire energy flow of batteries under typical operating conditions. The charging procedures for single Li-ion cells, and complete Li-ion batteries, are slightly different:
A single Li-ion cell is charged in two stages:
Constant current (CC)
Constant voltage (CV)
A Li-ion battery (a set of Li-ion cells in series) is charged in three stages:
Constant current
Balance (only required when cell groups become unbalanced during use)
Constant voltage
During the constant current phase, the charger applies a constant current to the battery at a steadily increasing voltage, until the top-of-charge voltage limit per cell is reached.
During the balance phase, the charger/battery reduces the charging current (or cycles the charging on and off to reduce the average current) while the state of charge of individual cells is brought to the same level by a balancing circuit until the battery is balanced. Balancing typically occurs whenever one or more cells reach their top-of-charge voltage before the other(s), as it is generally inaccurate to do so at other stages of the charge cycle. This is most commonly done by passive balancing, which dissipates excess charge via resistors connected momentarily across the cell(s) to be balanced. Active balancing is less common, more expensive, but more efficient, returning excess energy to other cells (or the entire pack) through the means of a DC-DC converter or other circuitry. Some fast chargers skip this stage. Some chargers accomplish the balance by charging each cell independently. This is often performed by the battery protection circuit/battery management system (BPC or BMS) and not the charger (which typically provides only the bulk charge current, and does not interact with the pack at the cell-group level), e.g., e-bike and hoverboard chargers. In this method, the BPC/BMS will request a lower charge current (such as EV batteries), or will shut-off the charging input (typical in portable electronics) through the use of transistor circuitry while balancing is in effect (to prevent over-charging cells). Balancing most often occurs during the constant voltage stage of charging, switching between charge modes until complete. The pack is usually fully charged only when balancing is complete, as even a single cell group lower in charge than the rest will limit the entire battery's usable capacity to that of its own. Balancing can last hours or even days, depending on the magnitude of the imbalance in the battery.
During the constant voltage phase, the charger applies a voltage equal to the maximum cell voltage times the number of cells in series to the battery, as the current gradually declines towards 0, until the current is below a set threshold of about 3% of initial constant charge current.
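As an illustration of this CC-CV termination logic, the following sketch simulates a single cell with an assumed linear open-circuit voltage and fixed internal resistance; all numbers (capacity, resistance, current setpoint) are illustrative assumptions rather than data for any particular cell, and the optional balance stage is omitted.

```python
# Minimal sketch of CC-CV charging: constant current until the terminal voltage
# hits the per-cell limit, then constant voltage with the current tapering until
# it falls below ~3% of the constant-current setpoint.

CAPACITY_AH = 3.0          # assumed cell capacity
R_INT = 0.05               # assumed internal resistance, ohm
V_MAX = 4.2                # top-of-charge voltage per cell
I_CC = 1.5                 # constant-current setpoint (0.5C), A
I_CUTOFF = 0.03 * I_CC     # terminate when current falls below ~3% of the CC current
DT_H = 1 / 3600.0          # time step, hours

def ocv(soc):
    """Assumed linear open-circuit voltage between 3.0 V (empty) and 4.2 V (full)."""
    return 3.0 + 1.2 * soc

soc, t, phase = 0.0, 0.0, "CC"
while True:
    if phase == "CC":
        current = I_CC
        if ocv(soc) + current * R_INT >= V_MAX:   # terminal voltage reached the limit
            phase = "CV"
    else:  # CV: hold the terminal voltage at V_MAX, current tapers as the OCV rises
        current = (V_MAX - ocv(soc)) / R_INT
        if current < I_CUTOFF:
            break
    soc = min(1.0, soc + current * DT_H / CAPACITY_AH)
    t += DT_H

print(f"charge finished after ~{t:.2f} h at SoC ~{soc:.3f}")
```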
A periodic topping charge about once per 500 hours is also recommended; it should be initiated when the voltage drops below 4.05 V/cell.

Failure to follow current and voltage limitations can result in an explosion.

Charging temperature limits for Li-ion are stricter than the operating limits. Lithium-ion chemistry performs well at elevated temperatures, but prolonged exposure to heat reduces battery life. Li‑ion batteries offer good charging performance at cooler temperatures and may even allow "fast-charging" within a temperature range of 5 to 45 °C (41 to 113 °F). Charging should be performed within this temperature range. At temperatures from 0 to 5 °C charging is possible, but the charge current should be reduced. During a low-temperature (under 0 °C) charge, the slight temperature rise above ambient due to the internal cell resistance is beneficial. High temperatures during charging may lead to battery degradation, and charging at temperatures above 45 °C will degrade battery performance, whereas at lower temperatures the internal resistance of the battery may increase, resulting in slower charging and thus longer charging times.
Batteries gradually self-discharge even if not connected and delivering current. Li-ion rechargeable batteries have a self-discharge rate typically stated by manufacturers to be 1.5–2% per month. The rate increases with temperature and state of charge. A 2004 study found that for most cycling conditions self-discharge was primarily time-dependent; however, after several months of stand on open circuit or float charge, state-of-charge dependent losses became significant. The self-discharge rate did not increase monotonically with state of charge, but dropped somewhat at intermediate states of charge. Self-discharge rates may increase as batteries age. In 1999, self-discharge per month was measured at 8% at 21 °C, 15% at 40 °C, and 31% at 60 °C. By 2007, the monthly self-discharge rate was estimated at 2% to 3%, and at 2–3% by 2016.

By comparison, the self-discharge rate for NiMH batteries dropped, as of 2017, from up to 30% per month for previously common cells to about 0.08–0.33% per month for low self-discharge NiMH batteries, and is about 10% per month in NiCd batteries.
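As a rough illustration, the sketch below compounds a fixed monthly self-discharge rate over a year of storage; it deliberately ignores the temperature and state-of-charge dependence described above, and the rates used are simply two of the figures quoted in the text.

```python
# Sketch: remaining state of charge after storage, assuming a constant monthly
# self-discharge rate compounded month over month (a simplification of the
# time- and state-of-charge-dependent behaviour described above).

def remaining_fraction(monthly_rate, months):
    """Fraction of charge left after `months` of storage at a fixed monthly loss rate."""
    return (1.0 - monthly_rate) ** months

for label, rate in [("modern Li-ion, ~2%/month", 0.02),
                    ("1999-era cell at 60 C, ~31%/month", 0.31)]:
    print(label, f"-> {remaining_fraction(rate, 12):.0%} left after a year")
```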
Cathode
There are three classes of commercial cathode materials in lithium-ion batteries: (1) layered oxides, (2) spinel oxides and (3) oxoanion complexes. All of them were discovered by John Goodenough and his collaborators.
(a) Layered Oxides
LiCoO2 is the gold standard among lithium-ion battery cathodes and the namesake prototype of these materials. It was used in the first commercial lithium-ion battery, made by Sony in 1991. The layered oxides have a pseudo-tetrahedral structure comprising layers made of MO6 octahedra separated by interlayer spaces that allow for two-dimensional lithium-ion diffusion. Notably, the band structure of LixCoO2 allows for true electronic (rather than polaronic) conductivity. However, due to an overlap between the Co(4+) t2g d-band and the O2- 2p-band, x must remain >0.5, otherwise O2 evolution occurs. This limits the charge capacity of this material to ~140 mA h g–1.
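The ~140 mA h g–1 figure can be rationalized with Faraday's law: the sketch below computes the theoretical specific capacity of LiCoO2 from its molar mass and halves it to reflect the x < 0.5 limit. Molar masses are rounded, so this is only a back-of-the-envelope check.

```python
# Sketch: theoretical specific capacity of a cathode from Faraday's law,
# capacity (mAh/g) = n * F / (3.6 * M), assuming one-electron cycling (n = 1).

FARADAY = 96485.0  # C/mol

def specific_capacity_mah_per_g(molar_mass_g_mol, n_electrons=1):
    return n_electrons * FARADAY / (3.6 * molar_mass_g_mol)

m_licoo2 = 6.94 + 58.93 + 2 * 16.00            # ~97.9 g/mol
full = specific_capacity_mah_per_g(m_licoo2)    # ~274 mAh/g if all Li could be cycled
practical = 0.5 * full                          # ~137 mAh/g with x limited to 0.5
print(f"LiCoO2: {full:.0f} mAh/g theoretical, ~{practical:.0f} mAh/g with x limited to 0.5")
```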
Several other first-row (3d) transition metals also form layered LiMO2 salts. Some of them can be directly prepared from Li2O and M2O3 (e.g. for M = Ti, V, Cr, Co, Ni), while others (M = Mn or Fe) can be prepared by ion exchange from NaMO2. LiVO2, LiMnO2 and LiFeO2 suffer from structural instabilities (including mixing between M and Li sites) due to a low energy difference between octahedral and tetrahedral environments for the metal ion M. For this reason they have not found use in lithium-ion batteries. Notably, Na+ and Fe3+ have sufficiently different sizes that NaFeO2 can be used in sodium-ion batteries. Similarly, LiCrO2 shows reversible lithium (de)intercalation around 3.2 V with 170–270 mAh/g. However, its cycle life is short, because of disproportionation of Cr(+4) followed by translocation of Cr(+6) into tetrahedral sites. NaCrO2, on the other hand, shows much better cycling stability. LiTiO2 shows Li+ (de)intercalation at a voltage of ~1.5 V, which is too low for a cathode material.
These problems leave LiCoO2 and LiNiO2 as the only practical layered oxide materials for lithium-ion battery cathodes. The cobalt-based cathodes show high theoretical specific (per-mass) charge capacity, high volumetric capacity, low self-discharge, high discharge voltage, and good cycling performance. Unfortunately, they suffer from the high cost of the material. For this reason, the current trend among lithium-ion battery manufacturers is to switch to cathodes with higher Ni content and lower Co content.

In addition to a lower (than cobalt) cost, nickel-oxide based materials benefit from the two-electron redox chemistry of Ni: in layered oxides comprising nickel (such as nickel-cobalt-manganese, NCM, and nickel-cobalt-aluminium oxides, NCA), Ni cycles between the oxidation states +2 and +4 (in one step between +3.5 and +4.3 V), cobalt between +2 and +3, while Mn (usually >20%) and Al (typically, only 5% is needed) remain in +4 and +3, respectively. Thus increasing the Ni content increases the cyclable charge. For example, NCM111 shows 160 mAh/g, while LiNi0.8Co0.1Mn0.1O2 (NCM811) and LiNi0.8Co0.15Al0.05O2 (NCA) deliver a higher capacity of ~200 mAh/g.

It is worth mentioning the so-called "lithium-rich" cathodes, which can be produced from traditional NCM (LiMO2, where M = Ni, Co, Mn) layered cathode materials by cycling them to voltages/charges corresponding to Li:M < 0.5. Under such conditions a new semi-reversible redox transition at a higher voltage, with ca. 0.4–0.8 electrons per metal site of charge, appears. This transition involves non-bonding electron orbitals centered mostly on O atoms. Despite significant initial interest, this phenomenon has not resulted in marketable products because of the fast structural degradation (O2 evolution and lattice rearrangements) of such "lithium-rich" phases.
(b) Cubic oxides (spinels)
LiMn2O4 adopts a cubic lattice, which allows for three-dimensional lithium-ion diffusion. Manganese cathodes are attractive because manganese is less expensive than cobalt or nickel. The operating voltage of a Li-LiMn2O4 battery is 4 V, and ca. one lithium per two Mn ions can be reversibly extracted from the tetrahedral sites, resulting in a practical capacity of <130 mA h g–1. However, Mn3+ is not a stable oxidation state, as it tends to disproportionate into insoluble Mn4+ and soluble Mn2+. LiMn2O4 can also intercalate more than 0.5 Li per Mn at a lower voltage around +3.0 V. However, this results in an irreversible phase transition due to Jahn-Teller distortion in Mn3+ (t2g3eg1), as well as disproportionation and dissolution of Mn3+.
An important improvement on the Mn spinel is the related cubic structure of the LiMn1.5Ni0.5O4 type, where Mn exists as Mn4+ and Ni cycles reversibly between the oxidation states +2 and +4. These materials show a reversible Li-ion capacity of ca. 135 mAh/g around 4.7 V. Although such a high voltage is beneficial for increasing the specific energy of batteries, the adoption of such materials is currently hindered by the lack of suitable high-voltage electrolytes. In general, materials with a high nickel content are favored in 2023, because of the possibility of two-electron cycling of Ni between the oxidation states +2 and +4.
LiV2O4 operates at a lower (ca. +3.0 V) voltage than LiMn2O4, suffers from similar durability issues, is more expensive, and thus is not considered of practical interest.
(c) Oxoanionic compounds (olivines)
Around 1980, Manthiram discovered that oxoanions (molybdates and tungstates in that particular case) cause a substantial positive shift in the redox potential of the metal ion compared to oxides. In addition, these oxoanionic cathode materials offer better stability/safety than the corresponding oxides. On the other hand, unlike the aforementioned oxides, oxoanionic cathodes suffer from poor electronic conductivity, which stems primarily from the long distance between redox-active metal centers, which slows down electron transport. This necessitates the use of small (<200 nm) cathode particles and coating each particle with a layer of electronically conducting carbon to overcome its low electrical conductivity. This further reduces the packing density of these materials.
Although numerous oxoanion (sulfate, phosphate, silicate) / metal (Mn, Fe, Co, Ni) cation combinations have been studied since, LiFePO4 is the only one that has reached the market. As of 2023, LiFePO4 is the primary candidate for large-scale use of lithium-ion batteries in stationary energy storage (rather than electric vehicles) due to its low cost, excellent safety, and high cycle durability. For example, Sony Fortelion batteries have retained 74% of their capacity after 8,000 cycles with 100% discharge.
Anode
Negative electrode materials are traditionally constructed from graphite and other carbon materials, although newer silicon-based materials are being increasingly used (see Nanowire battery). In 2016, 89% of lithium-ion batteries contained graphite (43% artificial and 46% natural), 7% contained amorphous carbon (either soft carbon or hard carbon), 2% contained lithium titanate (LTO) and 2% contained silicon- or tin-based materials.

These materials are used because they are abundant, electrically conducting, and can intercalate lithium ions to store electrical charge with modest volume expansion (~10%). Graphite is the dominant material because of its low intercalation voltage and excellent performance. Various alternative materials with higher capacities have been proposed, but they usually have higher voltages, which reduces energy density. Low voltage is the key requirement for anodes; otherwise, the excess capacity is useless in terms of energy density.
As graphite is limited to a maximum capacity of 372 mAh/g, much research has been dedicated to the development of materials that exhibit higher theoretical capacities, and to overcoming the technical challenges that presently encumber their implementation. The extensive 2007 review article by Kasavajjula et al. summarizes early research on silicon-based anodes for lithium-ion secondary cells. In particular, Hong Li et al. showed in 2000 that the electrochemical insertion of lithium ions in silicon nanoparticles and silicon nanowires leads to the formation of an amorphous Li-Si alloy. The same year, Bo Gao and his doctoral advisor, Professor Otto Zhou, described the cycling of electrochemical cells with anodes comprising silicon nanowires, with a reversible capacity ranging from at least approximately 900 to 1500 mAh/g.

Diamond-like carbon coatings can increase retention capacity by 40% and cycle life by 400% for lithium-based batteries.

To improve the stability of the lithium anode, several approaches to installing a protective layer have been suggested. Silicon is beginning to be looked at as an anode material because it can accommodate significantly more lithium ions, storing up to 10 times the electric charge; however, this alloying between lithium and silicon results in significant volume expansion (ca. 400%), which causes catastrophic failure of the cell. Silicon has been used as an anode material, but the insertion and extraction of Li+ can create cracks in the material. These cracks expose the Si surface to the electrolyte, causing decomposition and the formation of a solid electrolyte interphase (SEI) on the new Si surface (e.g. in crumpled-graphene-encapsulated Si nanoparticles). This SEI will continue to grow thicker, deplete the available Li+, and degrade the capacity and cycling stability of the anode.
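The 372 mAh/g limit for graphite quoted above corresponds to one lithium per six carbon atoms (LiC6) and follows from the same Faraday's-law arithmetic used for cathodes; in the sketch below the silicon figure is shown only as the "roughly ten times" multiple mentioned in the text, not as an exact theoretical value.

```python
# Sketch: graphite's 372 mAh/g follows from Faraday's law applied to LiC6
# (one Li stored per six carbon atoms).

FARADAY = 96485.0   # C/mol
M_C6 = 6 * 12.011   # g/mol of the C6 host that stores one lithium

graphite = FARADAY / (3.6 * M_C6)   # ~372 mAh/g
print(f"graphite ~{graphite:.0f} mAh/g, silicon roughly ~{10 * graphite:.0f} mAh/g (order of magnitude)")
```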
In addition to carbon- and silicon- based anode materials for lithium-ion batteries, high-entropy metal oxide materials are being developed. These conversion (rather than intercalation) materials comprise an alloy (or subnanometer mixed phases) of several metal oxides performing different functions. For example, Zn and Co can act as electroactive charge-storing species, Cu can provide an electronically conducting support phase and MgO can prevent pulverization.
Electrolyte
Liquid electrolytes in lithium-ion batteries consist of lithium salts, such as LiPF6, LiBF4 or LiClO4, in an organic solvent, such as ethylene carbonate, dimethyl carbonate, and diethyl carbonate. A liquid electrolyte acts as a conductive pathway for the movement of cations passing from the negative to the positive electrodes during discharge. Typical conductivities of liquid electrolyte at room temperature (20 °C (68 °F)) are in the range of 10 mS/cm, increasing by approximately 30–40% at 40 °C (104 °F) and decreasing slightly at 0 °C (32 °F). The combination of linear and cyclic carbonates (e.g., ethylene carbonate (EC) and dimethyl carbonate (DMC)) offers high conductivity and solid electrolyte interphase (SEI)-forming ability. Organic solvents easily decompose on the negative electrodes during charge. When appropriate organic solvents are used as the electrolyte, the solvent decomposes on initial charging and forms a solid layer called the solid electrolyte interphase, which is electrically insulating yet provides significant ionic conductivity. The interphase prevents further decomposition of the electrolyte after the second charge. For example, ethylene carbonate is decomposed at a relatively high voltage, 0.7 V vs. lithium, and forms a dense and stable interface. Composite electrolytes based on POE (poly(oxyethylene)) provide a relatively stable interface. They can be either solid (high molecular weight), as applied in dry Li-polymer cells, or liquid (low molecular weight), as applied in regular Li-ion cells. Room-temperature ionic liquids (RTILs) are another approach to limiting the flammability and volatility of organic electrolytes.

Recent advances in battery technology involve using a solid as the electrolyte material. The most promising of these are ceramics. Solid ceramic electrolytes are mostly lithium metal oxides, which allow lithium-ion transport through the solid more readily due to the intrinsic lithium. The main benefit of solid electrolytes is that there is no risk of leaks, which is a serious safety issue for batteries with liquid electrolytes. Solid ceramic electrolytes can be further broken down into two main categories: ceramic and glassy. Ceramic solid electrolytes are highly ordered compounds with crystal structures that usually have ion transport channels. Common ceramic electrolytes are lithium super ion conductors (LISICON) and perovskites. Glassy solid electrolytes are amorphous atomic structures made up of similar elements to ceramic solid electrolytes, but have higher conductivities overall due to higher conductivity at grain boundaries. Both glassy and ceramic electrolytes can be made more ionically conductive by substituting sulfur for oxygen. The larger radius of sulfur and its higher polarizability allow higher conductivity of lithium. As a result, conductivities of solid electrolytes are nearing parity with their liquid counterparts, with most on the order of 0.1 mS/cm and the best at 10 mS/cm.

An efficient and economic way to tune the properties of an electrolyte is to add a third component in small concentrations, known as an additive. By adding the additive in small amounts, the bulk properties of the electrolyte system are not affected, while the targeted property can be significantly improved.
The numerous additives that have been tested can be divided into three distinct categories: (1) those used for SEI chemistry modifications; (2) those used for enhancing the ion conduction properties; (3) those used for improving the safety of the cell (e.g. preventing overcharging).

Electrolyte alternatives have also played a significant role, for example the lithium polymer battery. Polymer electrolytes are promising for minimizing the dendrite formation of lithium. Polymers are supposed to prevent short circuits and maintain conductivity.

The ions in the electrolyte diffuse because of small changes in the electrolyte concentration. Only linear diffusion is considered here. The change in concentration c, as a function of time t and distance x, is
\frac{\partial c}{\partial t} = \frac{D}{\varepsilon} \frac{\partial^2 c}{\partial x^2}.
In this equation, D is the diffusion coefficient for the lithium ion. It has a value of 7.5×10−10 m2/s in the LiPF6 electrolyte. The value for ε, the porosity of the electrolyte, is 0.724.
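A minimal numerical sketch of this diffusion equation is shown below, using an explicit finite-difference scheme with the quoted diffusion coefficient and porosity; the domain length, initial concentration profile, and boundary conditions are illustrative assumptions.

```python
# Minimal finite-difference sketch of the 1D diffusion equation above,
# dc/dt = (D/eps) * d2c/dx2, using the quoted D and porosity values.

D = 7.5e-10        # m^2/s, Li+ diffusion coefficient in LiPF6 electrolyte (quoted above)
EPS = 0.724        # electrolyte porosity (quoted above)
L = 100e-6         # assumed electrolyte-filled thickness, 100 micrometres
N = 50             # grid points
DX = L / (N - 1)
DT = 0.4 * EPS * DX**2 / (2 * D)   # time step kept below the explicit stability limit

c = [1000.0] * N                   # assumed uniform initial concentration, mol/m^3
c[0] = 1200.0                      # assumed fixed, higher concentration at one boundary

for _ in range(20000):             # march forward in time
    new = c[:]
    for i in range(1, N - 1):
        new[i] = c[i] + (D / EPS) * DT * (c[i + 1] - 2 * c[i] + c[i - 1]) / DX**2
    new[-1] = new[-2]              # zero-flux boundary at the far side
    c = new

print(f"concentration midway through the domain: {c[N // 2]:.1f} mol/m^3")
```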
Formats
Lithium-ion batteries are organized into multiple sub-units. The largest unit is the battery itself, also called the battery pack. Depending on the application, multiple battery packs are sometimes wired together in series to increase the voltage. Each pack consists of several battery modules connected both in series and in parallel. Each module is in turn made of multiple cells connected in parallel.
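The series/parallel arithmetic implied by this hierarchy is straightforward; the sketch below computes the nominal voltage, capacity, and energy of a hypothetical pack (the 96S3P arrangement and cell ratings are illustrative, not a specific commercial design).

```python
# Sketch: nominal voltage and capacity of a pack built from cells in series (S)
# and in parallel (P); series strings add voltage, parallel groups add capacity.

def pack_specs(cells_series, cells_parallel, cell_voltage=3.7, cell_capacity_ah=5.0):
    voltage = cells_series * cell_voltage           # series cells add voltage
    capacity = cells_parallel * cell_capacity_ah    # parallel groups add capacity
    energy_kwh = voltage * capacity / 1000.0
    return voltage, capacity, energy_kwh

v, ah, kwh = pack_specs(96, 3)   # hypothetical 96S3P pack of 3.7 V, 5 Ah cells
print(f"96S3P pack: {v:.0f} V nominal, {ah:.0f} Ah, ~{kwh:.1f} kWh")
```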
Cells
Li-ion cells are available in various shapes, which can generally be divided into four groups:
Small cylindrical (solid body without terminals, such as those used in most e-bikes, most electric vehicle batteries, and older laptop batteries); there are several standard lithium-ion cylinder sizes.
Large cylindrical (solid body with large threaded terminals)
Flat or pouch (soft, flat body, such as those used in cell phones and newer laptops; these are lithium-ion polymer batteries).
Rigid plastic case with large threaded terminals (such as electric vehicle traction packs)

Cells with a cylindrical shape are made in a characteristic "swiss roll" manner (known as a "jelly roll" in the US), which means the cell is a single long "sandwich" of the positive electrode, separator, negative electrode, and separator rolled into a single spool. One advantage of cylindrical cells compared to cells with stacked electrodes is faster production speed. One disadvantage of cylindrical cells can be a large radial temperature gradient that develops inside the cells at high discharge currents.
The absence of a case gives pouch cells the highest gravimetric energy density; however, for many practical applications they still require an external means of containment to prevent expansion when their state of charge (SOC) level is high, and for general structural stability of the battery pack of which they are part. Both rigid plastic and pouch-style cells are sometimes referred to as prismatic cells due to their rectangular shapes. Battery technology analyst Mark Ellis of Munro & Associates sees three basic Li-ion battery types used in modern (~2020) electric vehicle batteries at scale: cylindrical cells (e.g., Tesla), prismatic pouch (e.g., from LG), and prismatic can cells (e.g., from LG, Samsung, Panasonic, and others). Each form factor has characteristic advantages and disadvantages for EV use.

Since 2011, several research groups have announced demonstrations of lithium-ion flow batteries that suspend the cathode or anode material in an aqueous or organic solution. In 2014, Panasonic created the smallest Li-ion cell; it is pin-shaped, with a diameter of 3.5 mm and a weight of 0.6 g. A coin cell form factor resembling that of ordinary lithium batteries has been available since as early as 2006 for LiCoO2 cells, usually designated with a "LiR" prefix.
Batteries
A battery pack consists of multiple connected lithium-ion cells. Battery packs for large consumer electronics like laptop computers also contain temperature sensors, voltage regulator circuits, voltage taps, and charge-state monitors. These components minimize safety risks like overheating and short circuiting. To power larger devices, such as electric cars, connecting many small batteries in a series-parallel circuit is more effective.
Uses
Lithium-ion batteries are used in a multitude of applications, from consumer electronics, toys and power tools to electric vehicles. More niche uses include backup power in telecommunications applications. Lithium-ion batteries are also frequently discussed as a potential option for grid energy storage, although as of 2020 they were not yet cost-competitive at scale.
Performance
Because lithium-ion batteries can have a variety of positive and negative electrode materials, the energy density and voltage vary accordingly.
The open-circuit voltage is higher than in aqueous batteries (such as lead–acid, nickel–metal hydride and nickel–cadmium). Internal resistance increases with both cycling and age, although this depends strongly on the voltage and temperature the batteries are stored at. Rising internal resistance causes the voltage at the terminals to drop under load, which reduces the maximum current draw. Eventually, increasing resistance will leave the battery in a state such that it can no longer support the normal discharge currents requested of it without unacceptable voltage drop or overheating.
Batteries with a lithium iron phosphate positive and graphite negative electrode have a nominal open-circuit voltage of 3.2 V and a typical charging voltage of 3.6 V. Lithium nickel manganese cobalt (NMC) oxide positives with graphite negatives have a 3.7 V nominal voltage with a 4.2 V maximum while charging. The charging procedure is performed at constant voltage with current-limiting circuitry (i.e., charging with constant current until a voltage of 4.2 V is reached in the cell and continuing with a constant voltage applied until the current drops close to zero). Typically, the charge is terminated at 3% of the initial charge current. In the past, lithium-ion batteries could not be fast-charged and needed at least two hours to fully charge. Current-generation cells can be fully charged in 45 minutes or less. In 2015, researchers demonstrated a small 600 mAh capacity battery charged to 68 percent capacity in two minutes and a 3,000 mAh battery charged to 48 percent capacity in five minutes. The latter battery has an energy density of 620 W·h/L. The device employed heteroatoms bonded to graphite molecules in the anode.

Performance of manufactured batteries has improved over time. For example, from 1991 to 2005 the energy capacity per price of lithium-ion batteries improved more than ten-fold, from 0.3 W·h per dollar to over 3 W·h per dollar. In the period from 2011 to 2017, progress averaged 7.5% annually.
Overall, between 1991 and 2018, prices for all types of lithium-ion cells (in dollars per kWh) fell approximately 97%. Over the same time period, energy density more than tripled.
Efforts to increase energy density contributed significantly to cost reduction. Differently sized cells with similar chemistry can also have different energy densities. The 21700 cell has 50% more energy than the 18650 cell, and its bigger size reduces heat transfer to its surroundings.
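The 50% figure is roughly what cylinder geometry alone would predict if the two formats had a similar volumetric energy density, as the sketch below shows (the cell designation encodes the diameter in millimetres and the length in tenths of a millimetre).

```python
# Sketch: compare the volumes of a 21700 (21 mm x 70 mm) and an 18650 (18 mm x 65 mm)
# cylindrical cell; at similar volumetric energy density this approximates the energy ratio.

import math

def cylinder_volume_cm3(diameter_mm, length_mm):
    r_cm, l_cm = diameter_mm / 20.0, length_mm / 10.0
    return math.pi * r_cm**2 * l_cm

v_18650 = cylinder_volume_cm3(18, 65)
v_21700 = cylinder_volume_cm3(21, 70)
print(f"volume ratio 21700/18650 ~ {v_21700 / v_18650:.2f}")   # ~1.47
```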
Round-trip efficiency
The table below shows the result of a 2021 experimental evaluation of the round-trip efficiency of a "high-energy" type 3.0 Ah 18650 NMC cell, which compared the energy going into the cell with the energy extracted from it, from 100% SoC (4.2 V) to 0% SoC (2.0 V cut-off). Round-trip efficiency is the percentage of energy that can be used relative to the energy that went into charging the battery.
Characterization of a cell in a different experiment in 2017 reported a round-trip efficiency of 85.5% at 2C and 97.6% at 0.1C.
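Round-trip efficiency itself is a simple ratio; the sketch below computes it from charge and discharge energies, with placeholder Wh values rather than numbers from the cited experiments.

```python
# Sketch: round-trip efficiency = energy recovered on discharge / energy supplied on charge.

def round_trip_efficiency(charge_energy_wh, discharge_energy_wh):
    return discharge_energy_wh / charge_energy_wh

print(f"{round_trip_efficiency(11.0, 10.2):.1%}")   # illustrative values -> ~92.7%
```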
Lifespan
The lifespan of a lithium-ion battery is typically defined as the number of full charge-discharge cycles to reach a failure threshold in terms of capacity loss or impedance rise. Manufacturers' datasheets typically use the term "cycle life" to specify lifespan as the number of cycles to reach 80% of the rated battery capacity. Simply storing lithium-ion batteries in the charged state also reduces their capacity (the amount of cyclable Li+) and increases the cell resistance (primarily due to the continuous growth of the solid electrolyte interface on the anode). Calendar life is used to represent the whole life cycle of the battery, involving both cycling and inactive storage operations. Battery cycle life is affected by many different stress factors including temperature, discharge current, charge current, and state of charge ranges (depth of discharge). Batteries are not fully charged and discharged in real applications such as smartphones, laptops and electric cars, and hence defining battery life via full discharge cycles can be misleading. To avoid this confusion, researchers sometimes use cumulative discharge, defined as the total amount of charge (Ah) delivered by the battery during its entire life, or equivalent full cycles, which represents the summation of the partial cycles as fractions of a full charge-discharge cycle. Battery degradation during storage is affected by temperature and battery state of charge (SOC), and a combination of full charge (100% SOC) and high temperature (usually > 50 °C) can result in a sharp capacity drop and gas generation. Multiplying the battery's cumulative discharge by the rated nominal voltage gives the total energy delivered over the life of the battery. From this one can calculate the cost per kWh of the energy (including the cost of charging).
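A worked sketch of this cost-per-kWh reasoning follows; every input (pack price, capacity, nominal voltage, equivalent full cycles, electricity price, round-trip efficiency) is an illustrative assumption.

```python
# Sketch: lifetime energy delivered and cost per kWh, following the reasoning above.
# delivered energy = capacity (Ah) * nominal voltage (V) * equivalent full cycles.

def lifetime_cost_per_kwh(battery_price, capacity_ah, nominal_v,
                          equivalent_full_cycles, electricity_price_per_kwh=0.15,
                          round_trip_eff=0.9):
    delivered_kwh = capacity_ah * nominal_v * equivalent_full_cycles / 1000.0
    charging_cost = (delivered_kwh / round_trip_eff) * electricity_price_per_kwh
    return (battery_price + charging_cost) / delivered_kwh

cost = lifetime_cost_per_kwh(battery_price=100.0, capacity_ah=50.0,
                             nominal_v=3.7, equivalent_full_cycles=1500)
print(f"~${cost:.3f} per kWh delivered")
```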
Over their lifespan, batteries degrade gradually, leading to reduced capacity (and, in some cases, lower operating cell voltage) due to a variety of chemical and mechanical changes to the electrodes. Several degradation processes occur in lithium-ion batteries, some during cycling, some during storage, and some all the time. Degradation is strongly temperature-dependent: degradation at room temperature is minimal but increases for batteries stored or used in high-temperature or low-temperature environments. High charge levels also hasten capacity loss.

In a study, scientists provided 3D imaging and model analysis to reveal the main causes, mechanics, and potential mitigations of the problematic degradation of batteries over charge cycles. They found that "[p]article cracking increases and contact loss between particles and carbon-binder domain are observed to correlate with the cell degradation" and indicated that "the reaction heterogeneity within the thick cathode caused by the unbalanced electron conduction is the main cause of the battery degradation over cycling".

The most common degradation mechanisms in lithium-ion batteries include:
Reduction of the organic carbonate electrolyte at the anode, which results in the growth of the solid electrolyte interphase (SEI), where Li+ ions get irreversibly trapped, i.e. loss of lithium inventory. This shows up as increased ohmic impedance and reduced Ah charge. At constant temperature, the SEI film thickness (and therefore the SEI resistance and the loss of cyclable Li+) increases as the square root of the time spent in the charged state (a simple numerical sketch of this square-root growth is given below). The number of cycles is not a useful metric for characterizing this main degradation pathway. Under high temperatures, or in the presence of mechanical damage, the electrolyte reduction can proceed explosively.
Lithium metal plating also results in the loss of lithium inventory (cyclable Ah charge), as well as internal short-circuiting and ignition of a battery. Once Li plating commences during cycling, it results in larger slopes of capacity loss per cycle and resistance increase per cycle. This degradation mechanism becomes more prominent with fast charging and at low temperatures.
Loss of the (negative or positive) electroactive materials due to dissolution (e.g. of Mn(3+) species), cracking, exfoliation, detachment or even simple regular volume change during cycling. It shows up as both charge and power fade (increased resistance). Both positive and negative electrode materials are subject to fracturing due to the volumetric strain of repeated (de)lithiation cycles.
Structural degradation of cathode materials, such as Li+/Ni2+ cation mixing in nickel-rich materials. This manifests as "electrode saturation", loss of cyclable Ah charge, and "voltage fade".
Other material degradation. The negative copper current collector is particularly prone to corrosion/dissolution at low cell voltages. The PVDF binder also degrades, causing the detachment of the electroactive materials and the loss of cyclable Ah charge.

These are shown in the figure on the right. A change from one main degradation mechanism to another appears as a knee (slope change) in the capacity vs. cycle number plot.

Most studies of lithium-ion battery aging have been done at elevated (50–60 °C) temperatures in order to complete the experiments sooner. Under these storage conditions, fully charged nickel-cobalt-aluminum and lithium-iron phosphate cells lose ca. 20% of their cyclable charge in 1–2 years. It is believed that the aforementioned anode aging is the most important degradation pathway in these cases. On the other hand, manganese-based cathodes show a (ca. 20–50%) faster degradation under these conditions, probably due to the additional mechanism of Mn ion dissolution. At 25 °C the degradation of lithium-ion batteries seems to follow the same pathway(s) as the degradation at 50 °C, but at half the speed. In other words, based on the limited extrapolated experimental data, lithium-ion batteries are expected to lose irreversibly ca. 20% of their cyclable charge in 3–5 years, or 1000–2000 cycles, at 25 °C. Lithium-ion batteries with titanate anodes do not suffer from SEI growth, and last longer (>5000 cycles) than those with graphite anodes. However, in complete cells other degradation mechanisms (i.e. the dissolution of Mn3+, the Ni3+/Li+ place exchange, decomposition of the PVDF binder and particle detachment) show up after 1000–2000 days, and the use of a titanate anode does not improve full-cell durability in practice.
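The following sketch illustrates the square-root-of-time SEI growth mentioned in the list above, with the rate constant tuned only for illustration to a ~20% loss over roughly four years, in line with the extrapolated figures quoted in the text.

```python
# Sketch: capacity loss in storage modelled as k * sqrt(t); the rate constant k is
# an illustrative assumption, not a fitted value for any particular cell.

import math

K = 0.20 / math.sqrt(4 * 365)   # fractional capacity loss per sqrt(day), assumed

def capacity_fraction(days_stored):
    return max(0.0, 1.0 - K * math.sqrt(days_stored))

for years in (1, 2, 3, 5):
    print(f"after {years} y in the charged state: ~{capacity_fraction(365 * years):.1%} capacity left")
```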
Detailed degradation description
A more detailed description of some of these mechanisms is provided below:
(1) The negative (anode) SEI layer, a passivation coating formed by electrolyte (such as ethylene carbonate or dimethyl carbonate, but not propylene carbonate) reduction products, is essential for providing Li+ ion conduction while preventing electron transfer (and, thus, further solvent reduction). Under typical operating conditions, the negative SEI layer reaches a fixed thickness after the first few charges (formation cycles), allowing the device to operate for years. However, at elevated temperatures, or due to mechanical detachment of the negative SEI, this exothermic electrolyte reduction can proceed violently and lead to an explosion via several reactions. Lithium-ion batteries are prone to capacity fading over hundreds to thousands of cycles. Formation of the SEI consumes lithium ions, reducing the overall charge and discharge efficiency of the electrode material. Various SEI-forming additives can be added to the electrolyte to promote the formation of a more stable SEI that remains selective for lithium ions to pass through while blocking electrons. Cycling cells at high temperature or at fast rates can promote the degradation of Li-ion batteries, due in part to the degradation of the SEI or lithium plating. Charging Li-ion batteries beyond 80% can drastically accelerate battery degradation.

Depending on the electrolyte and additives, common components of the SEI layer that forms on the anode include a mixture of lithium oxide, lithium fluoride and semicarbonates (e.g., lithium alkyl carbonates). At elevated temperatures, alkyl carbonates in the electrolyte decompose into insoluble species such as Li2CO3, which increase the film thickness. This increases cell impedance and reduces cycling capacity. Gases formed by electrolyte decomposition can increase the cell's internal pressure and are a potential safety issue in demanding environments such as mobile devices. Below 25 °C, plating of metallic lithium on the anodes and subsequent reaction with the electrolyte leads to loss of cyclable lithium. Extended storage can trigger an incremental increase in film thickness and capacity loss. Charging at greater than 4.2 V can initiate Li+ plating on the anode, producing irreversible capacity loss.
Electrolyte degradation mechanisms include hydrolysis and thermal decomposition. At concentrations as low as 10 ppm, water begins catalyzing a host of degradation products that can affect the electrolyte, anode and cathode. LiPF6 participates in an equilibrium reaction with LiF and PF5. Under typical conditions, the equilibrium lies far to the left. However, the presence of water generates substantial LiF, an insoluble, electrically insulating product. LiF binds to the anode surface, increasing film thickness. LiPF6 hydrolysis yields PF5, a strong Lewis acid that reacts with electron-rich species such as water. PF5 reacts with water to form hydrofluoric acid (HF) and phosphorus oxyfluoride. Phosphorus oxyfluoride in turn reacts to form additional HF and difluorohydroxy phosphoric acid. HF converts the rigid SEI film into a fragile one. On the cathode, the carbonate solvent can diffuse onto the cathode oxide over time, releasing heat and potentially causing thermal runaway. Decomposition of electrolyte salts and interactions between the salts and solvent start at as low as 70 °C. Significant decomposition occurs at higher temperatures. At 85 °C, transesterification products such as dimethyl-2,5-dioxahexane carboxylate (DMDOHC) are formed from EC reacting with DMC.

Batteries generate heat when being charged or discharged, especially at high currents. Large battery packs, such as those used in electric vehicles, are generally equipped with thermal management systems that maintain a temperature between 15 °C (59 °F) and 35 °C (95 °F). Pouch and cylindrical cell temperatures depend linearly on the discharge current. Poor internal ventilation may increase temperatures. For large batteries consisting of multiple cells, non-uniform temperatures can lead to non-uniform and accelerated degradation. In contrast, the calendar life of LiFePO4 cells is not affected by high charge states.

The positive SEI layer in lithium-ion batteries is much less understood than the negative SEI. It is believed to have low ionic conductivity and shows up as an increased interfacial resistance of the cathode during cycling and calendar aging.

(2) Lithium plating is a phenomenon in which certain conditions lead to metallic lithium forming and depositing onto the surface of the battery's anode rather than intercalating within the anode material's structure. Low temperatures, overcharging and high charging rates can exacerbate this occurrence. Under these conditions, lithium ions may not intercalate uniformly into the anode material and instead form layers of lithium on the surface as dendrites. Dendrites are tiny needle-like structures that can accumulate and pierce the separator, causing a short circuit that can initiate thermal runaway. This cascade of rapid and uncontrolled energy release can lead to battery swelling, increased heat, fires or explosions. Additionally, dendritic growth can lead to side reactions with the electrolyte and convert freshly plated lithium into electrochemically inert dead lithium. Moreover, the dendritic growth brought on by lithium plating can degrade the lithium-ion battery and lead to poor cycling efficiency and safety hazards. Ways to mitigate lithium plating and dendritic growth include controlling the temperature, optimizing the charging conditions, and improving the materials used. In terms of temperature, charging anywhere between 0 °C and 45 °C is acceptable, with room temperature (20 °C to 25 °C) being ideal.
Advancements in materials require much research and development in electrolyte selection and in improving the anode's resistance to plating. One such materials innovation is to add other compounds to the electrolyte, like fluoroethylene carbonate (FEC), to form a LiF-rich SEI. Another novel method is to coat the separator in a protective shield that essentially "kills" the lithium ions before they can form dendrites.

(3) Certain manganese-containing cathodes can degrade by the Hunter degradation mechanism, resulting in manganese dissolution and reduction on the anode. By the Hunter mechanism for LiMn2O4, hydrofluoric acid catalyzes the loss of manganese through disproportionation of a surface trivalent manganese to form a tetravalent manganese and a soluble divalent manganese:
2Mn3+ → Mn2+ + Mn4+

Material loss of the spinel results in capacity fade. Temperatures as low as 50 °C initiate Mn2+ deposition on the anode as metallic manganese, with the same effects as lithium and copper plating. Cycling over the theoretical max and min voltage plateaus destroys the crystal lattice via Jahn-Teller distortion, which occurs when Mn4+ is reduced to Mn3+ during discharge. Storage of a battery charged to greater than 3.6 V initiates electrolyte oxidation by the cathode and induces SEI layer formation on the cathode. As with the anode, excessive SEI formation forms an insulator, resulting in capacity fade and uneven current distribution. Storage at less than 2 V results in the slow degradation of LiCoO2 and LiMn2O4 cathodes, the release of oxygen and irreversible capacity loss.

(4) Cation mixing is the main reason for the capacity decline of Ni-rich cathode materials. As the Ni content in the NCM layered material increases, the capacity increases as a result of the two-electron Ni2+/Ni4+ redox reaction (note that Mn remains electrochemically inactive in the 4+ state); but increasing the Ni content also results in a significant degree of mixing of Ni2+ and Li+ cations, due to the closeness of their ionic radii (Li+ = 0.076 nm and Ni2+ = 0.069 nm). During charge/discharge cycling, the Li+ in the cathode cannot easily be extracted, and the presence of Ni2+ in the Li layer blocks the diffusion of Li+, resulting in both capacity loss and increased ohmic resistance.

(5) Discharging below 2 V can also result in the dissolution of the copper anode current collector and, thus, in catastrophic internal short-circuiting on recharge.
Recommendations
The IEEE standard 1188–1996 recommends replacing lithium-ion batteries in an electric vehicle when their charge capacity drops to 80% of the nominal value. In what follows, a 20% capacity loss is used as a comparison point between different studies. Note, nevertheless, that the linear model of degradation (a constant percentage of charge loss per cycle or per unit of calendar time) is not always applicable, and that a "knee point", observed as a change of the slope and related to a change of the main degradation mechanism, is often observed.
Safety
Fire hazard
Lithium-ion batteries can be a safety hazard since they contain a flammable electrolyte and may become pressurized if they become damaged. A battery cell charged too quickly could cause a short circuit, leading to overheating, explosions, and fires. A Li-ion battery fire can be started by (1) thermal abuse, e.g. poor cooling or external fire; (2) electrical abuse, e.g. overcharge or external short circuit; (3) mechanical abuse, e.g. penetration or crash; or (4) internal short circuit, e.g. due to manufacturing flaws or aging. Because of these risks, testing standards are more stringent than those for acid-electrolyte batteries, requiring both a broader range of test conditions and additional battery-specific tests, and there are shipping limitations imposed by safety regulators. There have been battery-related recalls by some companies, including the 2016 Samsung Galaxy Note 7 recall for battery fires.

Lithium-ion batteries have a flammable liquid electrolyte. A faulty battery can cause a serious fire. Faulty chargers can affect the safety of the battery because they can destroy the battery's protection circuit. While charging at temperatures below 0 °C, the negative electrode of the cells gets plated with pure lithium, which can compromise the safety of the whole pack.
Short-circuiting a battery will cause the cell to overheat and possibly catch fire. Smoke from thermal runaway in a Li-ion battery is both flammable and toxic. The fire energy content (electrical + chemical) of cobalt-oxide cells is about 100 to 150 kJ/(A·h), most of it chemical.

Around 2010, large lithium-ion batteries were introduced in place of other chemistries to power systems on some aircraft; as of January 2014, there had been at least four serious lithium-ion battery fires, or smoke, on the Boeing 787 passenger aircraft, introduced in 2011, which did not cause crashes but had the potential to do so. UPS Airlines Flight 6 crashed in Dubai after its payload of batteries spontaneously ignited.
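Scaling the fire energy content quoted above to a whole pack is simple arithmetic; in the sketch below the 50 Ah pack size is an arbitrary illustrative value.

```python
# Sketch: rough fire energy of a cobalt-oxide pack using the quoted 100-150 kJ per A.h.

def fire_energy_kj(capacity_ah, kj_per_ah=(100, 150)):
    return tuple(capacity_ah * k for k in kj_per_ah)

low, high = fire_energy_kj(50)   # hypothetical 50 Ah pack
print(f"a 50 Ah cobalt-oxide pack: roughly {low:.0f}-{high:.0f} kJ of fire energy")
```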
To reduce fire hazards, research projects aim to develop non-flammable electrolytes.
Damaging and overloading
If a lithium-ion battery is damaged, crushed, or subjected to a higher electrical load without having overcharge protection, then problems may arise. An external short circuit can trigger a battery explosion. If overheated or overcharged, Li-ion batteries may suffer thermal runaway and cell rupture. During thermal runaway, internal degradation and oxidization processes can keep cell temperatures above 500 °C, with the possibility of igniting secondary combustibles, as well as leading to leakage, explosion or fire in extreme cases. To reduce these risks, many lithium-ion cells (and battery packs) contain fail-safe circuitry that disconnects the battery when its voltage is outside the safe range of 3–4.2 V per cell, or when overcharged or discharged. Lithium battery packs, whether constructed by a vendor or the end-user, without effective battery management circuits are susceptible to these issues. Poorly designed or implemented battery management circuits also may cause problems; it is difficult to be certain that any particular battery management circuitry is properly implemented.
Voltage limits
Lithium-ion cells are susceptible to stress when operated outside safe voltage ranges, typically between 2.5 and 3.65/4.1/4.2 or 4.35 V (depending on the components of the cell). Exceeding this voltage range results in premature aging and in safety risks due to the reactive components in the cells. When stored for long periods, the small current draw of the protection circuitry may drain the battery below its shutoff voltage; normal chargers may then be useless, since the battery management system (BMS) may retain a record of this battery (or charger) "failure". Many types of lithium-ion cells cannot be charged safely below 0 °C, as this can result in plating of lithium on the anode of the cell, which may cause complications such as internal short-circuit paths.

Other safety features are required in each cell:
Shut-down separator (for overheating)
Tear-away tab (for internal pressure relief)
Vent (pressure relief in case of severe outgassing)
Thermal interrupt (overcurrent/overcharging/environmental exposure)

These features are required because the negative electrode produces heat during use, while the positive electrode may produce oxygen. However, these additional devices occupy space inside the cells, add points of failure, and may irreversibly disable the cell when activated. Further, these features increase costs compared to nickel metal hydride batteries, which require only a hydrogen/oxygen recombination device and a back-up pressure valve. Contaminants inside the cells can defeat these safety devices. Also, these features cannot be applied to all kinds of cells; e.g., prismatic high-current cells cannot be equipped with a vent or thermal interrupt. High-current cells must not produce excessive heat or oxygen, lest there be a failure, possibly violent. Instead, they must be equipped with internal thermal fuses which act before the anode and cathode reach their thermal limits.

Replacing the lithium cobalt oxide positive electrode material in lithium-ion batteries with a lithium metal phosphate such as lithium iron phosphate (LFP) improves cycle counts, shelf life and safety, but lowers capacity. As of 2006, these safer lithium-ion batteries were mainly used in electric cars and other large-capacity battery applications, where safety is critical.
Recalls
In 2006, approximately 10 million Sony batteries used in Dell, Sony, Apple, Lenovo, Panasonic, Toshiba, Hitachi, Fujitsu and Sharp laptops were recalled. The batteries were found to be susceptible to internal contamination by metal particles during manufacture. Under some circumstances, these particles could pierce the separator, causing a dangerous short circuit.

IATA estimates that over a billion lithium metal and lithium-ion cells are flown each year. Some kinds of lithium batteries may be prohibited aboard aircraft because of the fire hazard. Some postal administrations restrict air shipping (including EMS) of lithium and lithium-ion batteries, either separately or installed in equipment.
Non-flammable electrolyte
In 2023, most commercial Li-ion batteries employed alkyl carbonate solvent(s) to ensure the formation of a solid electrolyte interphase on the negative electrode. Since such solvents are readily flammable, there has been active research to replace them with non-flammable solvents or to add fire suppressants. Another source of hazard is the hexafluorophosphate anion, which is needed to passivate the positive current collector made of aluminium. Hexafluorophosphate reacts with water and releases volatile and toxic hydrogen fluoride. Efforts to replace hexafluorophosphate have been less successful.
Supply chain
Environmental impact
Extraction of lithium, nickel, and cobalt, manufacture of solvents, and mining byproducts present significant environmental and health hazards.
Lithium extraction can be fatal to aquatic life due to water pollution. It is known to cause surface water contamination, drinking water contamination, respiratory problems, ecosystem degradation and landscape damage. It also leads to unsustainable water consumption in arid regions (1.9 million liters per ton of lithium). Massive byproduct generation from lithium extraction also presents unsolved problems, such as large amounts of magnesium and lime waste.

Lithium mining takes place in North and South America, Asia, South Africa, Australia, and China. Cobalt for Li-ion batteries is largely mined in the Congo (see also Mining industry of the Democratic Republic of the Congo).
Manufacturing a kilogram of Li-ion battery takes about 67 megajoules (MJ) of energy. The global warming potential of lithium-ion battery manufacturing strongly depends on the energy source used in mining and manufacturing operations and is difficult to estimate, but one 2019 study estimated 73 kg CO2e/kWh. Effective recycling can reduce the carbon footprint of production significantly.
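For a sense of scale, the sketch below applies the quoted per-kilogram energy and per-kWh emission estimates to a hypothetical pack; the pack mass and capacity are illustrative assumptions, not figures for any specific vehicle.

```python
# Sketch: rough manufacturing footprint from the estimates above
# (~67 MJ of energy per kg of cell and ~73 kg CO2e per kWh of capacity).

PACK_KWH = 60.0      # assumed pack size
PACK_KG = 400.0      # assumed pack mass

embodied_energy_gj = 67e6 * PACK_KG / 1e9        # MJ/kg -> GJ for the whole pack
manufacturing_co2_t = 73.0 * PACK_KWH / 1000.0   # kg CO2e/kWh -> tonnes CO2e

print(f"~{embodied_energy_gj:.1f} GJ embodied energy, ~{manufacturing_co2_t:.1f} t CO2e")
```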
Solid waste and recycling
Li-ion battery elements including iron, copper, nickel and cobalt are considered safe for incinerators and landfills. These metals can be recycled, usually by burning away the other materials, but mining generally remains cheaper than recycling; recycling may cost $3/kg, and in 2019 less than 5% of lithium-ion batteries were being recycled. Since 2018, the recycling yield has increased significantly, and recovering lithium, manganese, aluminum, the organic solvents of the electrolyte, and graphite is possible at industrial scales. The most expensive metal involved in the construction of the cell is cobalt. Lithium is less expensive than other metals used and is rarely recycled, but recycling could prevent a future shortage.

Accumulation of battery waste presents technical challenges and health hazards. Since the environmental impact of electric cars is heavily affected by the production of lithium-ion batteries, the development of efficient ways to repurpose waste is crucial. Recycling is a multi-step process, starting with the storage of batteries before disposal, followed by manual testing, disassembling, and finally the chemical separation of battery components. Re-use of the battery is preferred over complete recycling, as there is less embodied energy in the process. As these batteries are a lot more reactive than classical vehicle waste like tire rubber, there are significant risks to stockpiling used batteries.
Pyrometallurgical recovery
The pyrometallurgical method uses a high-temperature furnace to reduce the components of the metal oxides in the battery to an alloy of Co, Cu, Fe, and Ni. This is the most common and commercially established method of recycling and can be combined with other similar batteries to increase smelting efficiency and improve thermodynamics. The metal current collectors aid the smelting process, allowing whole cells or modules to be melted at once. The product of this method is a collection of metallic alloy, slag, and gas. At high temperatures, the polymers used to hold the battery cells together burn off and the metal alloy can be separated through a hydrometallurgical process into its separate components. The slag can be further refined or used in the cement industry. The process is relatively risk-free and the exothermic reaction from polymer combustion reduces the required input energy. However, in the process, the plastics, electrolytes, and lithium salts will be lost.
Hydrometallurgical metals reclamation
This method involves the use of aqueous solutions to remove the desired metals from the cathode. The most common reagent is sulfuric acid. Factors that affect the leaching rate include the concentration of the acid, time, temperature, solid-to-liquid ratio, and reducing agent. It has been experimentally proven that H2O2 acts as a reducing agent to speed up the rate of leaching through the reaction:

2LiCoO2(s) + 3H2SO4 + H2O2 → 2CoSO4(aq) + Li2SO4 + 4H2O + O2

Once leached, the metals can be extracted through precipitation reactions controlled by changing the pH level of the solution. Cobalt, the most expensive metal, can then be recovered in the form of sulfate, oxalate, hydroxide, or carbonate. [75] More recent recycling methods experiment with the direct reproduction of the cathode from the leached metals. In these procedures, concentrations of the various leached metals are premeasured to match the target cathode, and then the cathodes are directly synthesized.

The main issues with this method, however, are that a large volume of solvent is required and that neutralization is costly. Although it is easy to shred up the battery, mixing the cathode and anode at the beginning complicates the process, so they also need to be separated. Unfortunately, the current design of batteries makes the process extremely complex, and it is difficult to separate the metals in a closed-loop battery system. Shredding and dissolving may occur at different locations.
Direct recycling
Direct recycling is the removal of the cathode or anode material from the electrode, which is then reconditioned and reused in a new battery. Mixed metal oxides can be added to the new electrode with very little change to the crystal morphology. The process generally involves the addition of new lithium to replenish the loss of lithium in the cathode due to degradation from cycling. Cathode strips are obtained from the dismantled batteries, then soaked in NMP and sonicated to remove excess deposits, and treated hydrothermally with a solution containing LiOH/Li2SO4 before annealing.

This method is extremely cost-effective for non-cobalt-based batteries, as the raw materials do not make up the bulk of the cost. Direct recycling avoids the time-consuming and expensive purification steps, which is especially beneficial for low-cost cathodes such as LiMn2O4 and LiFePO4. For these cheaper cathodes, most of the cost, embedded energy, and carbon footprint is associated with the manufacturing rather than the raw material. It has been experimentally shown that direct recycling can reproduce properties similar to those of pristine graphite.
The drawback of the method lies in the condition of the retired battery. In the case where the battery is relatively healthy, direct recycling can cheaply restore its properties. However, for batteries where the state of charge is low, direct recycling may not be worth the investment. The process must also be tailored to the specific cathode composition, and therefore the process must be configured to one type of battery at a time. Lastly, in a time with rapidly developing battery technology, the design of a battery today may no longer be desirable a decade from now, rendering direct recycling ineffective.
Human rights impact
Extraction of raw materials for lithium-ion batteries may present dangers to local people, especially land-based indigenous populations.

Cobalt sourced from the Democratic Republic of the Congo is often mined by workers using hand tools with few safety precautions, resulting in frequent injuries and deaths. Pollution from these mines has exposed people to toxic chemicals that health officials believe to cause birth defects and breathing difficulties. Human rights activists have alleged, and investigative journalism has reported confirmation, that child labor is used in these mines.

A study of relationships between lithium extraction companies and indigenous peoples in Argentina indicated that the state may not have protected indigenous peoples' right to free prior and informed consent, and that extraction companies generally controlled community access to information and set the terms for discussion of the projects and benefit sharing.

Development of the Thacker Pass lithium mine in Nevada, USA has met with protests and lawsuits from several indigenous tribes who have said they were not provided free prior and informed consent and that the project threatens cultural and sacred sites. Links between resource extraction and missing and murdered indigenous women have also prompted local communities to express concerns that the project will create risks to indigenous women. Protestors have been occupying the site of the proposed mine since January 2021.
Research
Researchers are actively working to improve the power density, safety, cycle durability (battery life), recharge time, cost, flexibility, and other characteristics, as well as research methods and uses, of these batteries.
See also
Borate oxalate
Comparison of commercial battery types
European Battery Alliance
Nanowire battery
Solid-state battery
Thin-film lithium-ion battery
Blade battery
Flow battery
VRLA battery
Ultium
References
Sources
External links
Lithium-ion Battery at the Encyclopædia Britannica.
List of World's Largest Lithium-ion Battery Factories (2020).
Energy Storage Safety at National Renewable Energy Laboratory (NREL).
New More Efficient Lithium-ion Batteries The New York Times. September 2021.
NREL Innovation Improves Safety of Electric Vehicle Batteries, NREL, October 2015.
Degradation Mechanisms and Lifetime Prediction for Lithium-Ion Batteries, NREL, July 2015.
Impact of Temperature Extremes on Large Format Li-ion Batteries for Vehicle Applications, NREL, March 2013. |
enteric fermentation | Enteric fermentation is a digestive process by which carbohydrates are broken down by microorganisms into simple molecules for absorption into the bloodstream of an animal. Because of human agricultural reliance in many parts of the world on animals which digest by enteric fermentation, it is the second largest anthropogenic factor for the increase in methane emissions directly after fossil fuel use.
Ruminants
Ruminant animals are those that have a rumen. A rumen is a multichambered stomach found almost exclusively among some artiodactyl mammals, such as cattle, sheep, and deer, enabling them to eat cellulose-enhanced tough plants and grains that monogastric (i.e., "single-chambered stomached") animals, such as humans, dogs, and cats, cannot digest. Although camels are often thought of as ruminants, they are not true ruminants.

Enteric fermentation occurs when methane (CH4) is produced in the rumen as microbial fermentation takes place. Over 200 species of microorganisms are present in the rumen, although only about 10% of these play an important role in digestion. Most of the CH4 byproduct is belched by the animal. However, a small percentage of CH4 is also produced in the large intestine and passed out as flatulence.
Methane emissions are an important contribution to global greenhouse gas emissions. The IPCC reports that methane is more than twenty times as effective as CO2 at trapping heat in the atmosphere, though it is produced in substantially smaller amounts. Methane also represents a significant energy loss to the animal, ranging from 2 to 12% of gross energy intake. Decreasing the production of enteric CH4 from ruminants without altering animal production is therefore desirable both as a strategy to reduce global greenhouse gas emissions and as a means of improving feed conversion efficiency. In Australia, ruminant animals account for over half of the country's greenhouse gas contribution from methane.

However, in Australia there are kangaroo species that produce 80% less methane than cows. This is because the gut microbiota of macropodids, in the foregut and other parts of their digestive system, is dominated by bacteria of the family Succinivibrionaceae. These bacteria produce succinate as a final product of lignocellulose degradation, generating only small amounts of methane as an end product. This special metabolic route allows them to utilize other proton acceptors, avoiding the formation of methane.
Experimental management
Enteric fermentation was the second largest anthropogenic source of methane emissions in the United States from 2000 through 2009. In 2007, methane emissions from enteric fermentation made up 2.3% of net greenhouse gases produced in the United States, at 139 teragrams of carbon dioxide equivalent (Tg CO2 eq) out of a total net emission of 6,087.5 Tg CO2 eq. For this reason, scientists believe that, with the aid of microbial engineering (the use of microbiomes to modify natural or anthropogenic processes), the microbiota composition of the rumen of strong methane producers could be changed to emulate the Macropodidae microbiota.
Recent studies suggest that this technique is feasible. In one study, scientists analyzed how the human microbiota changes in response to different diets. In another, researchers introduced a human microbiota into gnotobiotic mice in order to compare the resulting changes and to develop new ways of manipulating the properties of the microbiota so as to prevent or treat various diseases.
Another approach to managing methane emissions from enteric fermentation involves using diet additives and supplements in cattle feed. For example, Asparagopsis taxiformis (also known as red seaweed) is a species of algae that, when fed to cattle, has been shown to substantially reduce their methane emissions. A second example involves the compound 3-nitrooxypropanol (3-NOP), which inhibits the final step of methane synthesis by microorganisms in the rumen and has been shown to reduce methane emissions from cattle significantly. Some of these methods have already been approved for farmer usage, while others continue to be evaluated for safety, efficacy, and other concerns.
See also
Environmental impact of meat production#Greenhouse gas emissions
Atmospheric methane
concrete
Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.
When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.
In the past, lime-based cement binders, such as lime putty, were often used, sometimes with other hydraulic (water-resistant) cements such as calcium aluminate cement, or with Portland cement to form Portland cement concrete (named for its visual resemblance to Portland stone). Many other non-cementitious types of concrete exist with other methods of binding aggregate together, including asphalt concrete with a bitumen binder, which is frequently used for road surfaces, and polymer concretes that use polymers as a binder. Concrete is distinct from mortar. Whereas concrete is itself a building material, mortar is a bonding agent that typically holds bricks, tiles and other masonry units together. Grout is another material associated with concrete and cement. It does not contain coarse aggregates and is usually either pourable or thixotropic, and is used to fill gaps between masonry components or coarse aggregate which has already been put in place. Some methods of concrete manufacture and repair involve pumping grout into the gaps to make up a solid mass in situ.
Etymology
The word concrete comes from the Latin word "concretus" (meaning compact or condensed), the perfect passive participle of "concrescere", from "con-" (together) and "crescere" (to grow).
History
Ancient times
Mayan concrete at the ruins of Uxmal (850-925 A.D.) is referenced in Incidents of Travel in the Yucatán by John L. Stephens. "The roof is flat and had been covered with cement". "The floors were cement, in some places hard, but, by long exposure, broken, and now crumbling under the feet." "But throughout the wall was solid, and consisting of large stones imbedded in mortar, almost as hard as rock."
Small-scale production of concrete-like materials was pioneered by the Nabatean traders who occupied and controlled a series of oases and developed a small empire in the regions of southern Syria and northern Jordan from the 4th century BC. They discovered the advantages of hydraulic lime, with some self-cementing properties, by 700 BC. They built kilns to supply mortar for the construction of rubble masonry houses, concrete floors, and underground waterproof cisterns. They kept the cisterns secret as these enabled the Nabataeans to thrive in the desert. Some of these structures survive to this day.
Classical era
In the Ancient Egyptian and later Roman eras, builders discovered that adding volcanic ash to lime allowed the mix to set underwater. They discovered the pozzolanic reaction.
Concrete floors were found in the royal palace of Tiryns, Greece, which dates roughly to 1400-1200 BC. Lime mortars were used in Greece, such as in Crete and Cyprus, in 800 BC. The Assyrian Jerwan Aqueduct (688 BC) made use of waterproof concrete. Concrete was used for construction in many ancient structures.
The Romans used concrete extensively from 300 BC to 476 AD. During the Roman Empire, Roman concrete (or opus caementicium) was made from quicklime, pozzolana and an aggregate of pumice. Its widespread use in many Roman structures, a key event in the history of architecture termed the Roman architectural revolution, freed Roman construction from the restrictions of stone and brick materials. It enabled revolutionary new designs in terms of both structural complexity and dimension. The Colosseum in Rome was built largely of concrete, and the Pantheon has the world's largest unreinforced concrete dome.
Concrete, as the Romans knew it, was a new and revolutionary material. Laid in the shape of arches, vaults and domes, it quickly hardened into a rigid mass, free from many of the internal thrusts and strains that troubled the builders of similar structures in stone or brick.
Modern tests show that opus caementicium had as much compressive strength as modern Portland-cement concrete (ca. 200 kg/cm2 [20 MPa; 2,800 psi]). However, due to the absence of reinforcement, its tensile strength was far lower than modern reinforced concrete, and its mode of application also differed:
Modern structural concrete differs from Roman concrete in two important details. First, its mix consistency is fluid and homogeneous, allowing it to be poured into forms rather than requiring hand-layering together with the placement of aggregate, which, in Roman practice, often consisted of rubble. Second, integral reinforcing steel gives modern concrete assemblies great strength in tension, whereas Roman concrete could depend only upon the strength of the concrete bonding to resist tension.
The long-term durability of Roman concrete structures has been found to be due to its use of pyroclastic (volcanic) rock and ash, whereby the crystallization of strätlingite (a specific and complex calcium aluminosilicate hydrate) and the coalescence of this and similar calcium–aluminium-silicate–hydrate cementing binders helped give the concrete a greater degree of fracture resistance even in seismically active environments. Roman concrete is significantly more resistant to erosion by seawater than modern concrete; it used pyroclastic materials which react with seawater to form Al-tobermorite crystals over time.
The widespread use of concrete in many Roman structures ensured that many survive to the present day. The Baths of Caracalla in Rome are just one example. Many Roman aqueducts and bridges, such as the magnificent Pont du Gard in southern France, have masonry cladding on a concrete core, as does the dome of the Pantheon.
Middle Ages
After the Roman Empire, the use of burned lime and pozzolana was greatly reduced. Low kiln temperatures in the burning of lime, lack of pozzolana, and poor mixing all contributed to a decline in the quality of concrete and mortar. From the 11th century, the increased use of stone in church and castle construction led to an increased demand for mortar. Quality began to improve in the 12th century through better grinding and sieving. Medieval lime mortars and concretes were non-hydraulic and were used for binding masonry, "hearting" (binding rubble masonry cores) and foundations. Bartholomaeus Anglicus in his De proprietatibus rerum (1240) describes the making of mortar. In an English translation from 1397, it reads "lyme ... is a stone brent; by medlynge thereof with sonde and water sement is made". From the 14th century, the quality of mortar was again excellent, but only from the 17th century was pozzolana commonly added.
The Canal du Midi was built using concrete in 1670.
Industrial era
Perhaps the greatest step forward in the modern use of concrete was Smeaton's Tower, built by British engineer John Smeaton in Devon, England, between 1756 and 1759. This third Eddystone Lighthouse pioneered the use of hydraulic lime in concrete, using pebbles and powdered brick as aggregate.
A method for producing Portland cement was developed in England and patented by Joseph Aspdin in 1824. Aspdin chose the name for its similarity to Portland stone, which was quarried on the Isle of Portland in Dorset, England. His son William continued developments into the 1840s, earning him recognition for the development of "modern" Portland cement.
Reinforced concrete was invented in 1849 by Joseph Monier, and the first reinforced concrete house was built by François Coignet in 1853.
The first reinforced concrete bridge was designed and built by Joseph Monier in 1875.
Prestressed concrete and post-tensioned concrete were pioneered by Eugène Freyssinet, a French structural and civil engineer. Concrete components or structures are compressed by tendon cables during, or after, their fabrication in order to strengthen them against tensile forces developing when put in service. Freyssinet patented the technique on 2 October 1928.
Composition
Concrete is an artificial composite material, comprising a matrix of cementitious binder (typically Portland cement paste or asphalt) and a dispersed phase or "filler" of aggregate (typically a rocky material, loose stones, and sand). The binder "glues" the filler together to form a synthetic conglomerate. Many types of concrete are available, determined by the formulations of binders and the types of aggregate used to suit the application of the engineered material. These variables determine strength and density, as well as chemical and thermal resistance of the finished product.
Construction aggregates consist of large chunks of material in a concrete mix, generally a coarse gravel or crushed rocks such as limestone, or granite, along with finer materials such as sand.
Cement paste, most commonly made of Portland cement, is the most prevalent kind of concrete binder. For cementitious binders, water is mixed with the dry cement powder and aggregate, which produces a semi-liquid slurry (paste) that can be shaped, typically by pouring it into a form. The concrete solidifies and hardens through a chemical process called hydration. The water reacts with the cement, which bonds the other components together, creating a robust, stone-like material. Other cementitious materials, such as fly ash and slag cement, are sometimes added—either pre-blended with the cement or directly as a concrete component—and become a part of the binder for the aggregate. Fly ash and slag can enhance some properties of concrete such as fresh properties and durability. Alternatively, other materials can also be used as a concrete binder: the most prevalent substitute is asphalt, which is used as the binder in asphalt concrete.
Admixtures are added to modify the cure rate or properties of the material. Mineral admixtures use recycled materials as concrete ingredients. Conspicuous materials include fly ash, a by-product of coal-fired power plants; ground granulated blast furnace slag, a by-product of steelmaking; and silica fume, a by-product of industrial electric arc furnaces.
Structures employing Portland cement concrete usually include steel reinforcement because this type of concrete can be formulated with high compressive strength, but always has lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension, typically steel rebar.
The mix design depends on the type of structure being built, how the concrete is mixed and delivered, and how it is placed to form the structure.
Cement
Portland cement is the most common type of cement in general usage. It is a basic ingredient of concrete, mortar, and many plasters. British masonry worker Joseph Aspdin patented Portland cement in 1824. It was named because of the similarity of its color to Portland limestone, quarried from the English Isle of Portland and used extensively in London architecture. It consists of a mixture of calcium silicates (alite, belite), aluminates and ferrites—compounds which combine calcium, silicon, aluminium and iron in forms which will react with water. Portland cement and similar materials are made by heating limestone (a source of calcium) with clay or shale (a source of silicon, aluminium and iron) and grinding this product (called clinker) with a source of sulfate (most commonly gypsum).
In modern cement kilns, many advanced features are used to lower the fuel consumption per ton of clinker produced. Cement kilns are extremely large, complex, and inherently dusty industrial installations, and have emissions which must be controlled. Of the various ingredients used to produce a given quantity of concrete, the cement is the most energetically expensive. Even complex and efficient kilns require 3.3 to 3.6 gigajoules of energy to produce a ton of clinker and then grind it into cement. Many kilns can be fueled with difficult-to-dispose-of wastes, the most common being used tires. The extremely high temperatures and long periods of time at those temperatures allow cement kilns to efficiently and completely burn even difficult-to-use fuels.
Water
Combining water with a cementitious material forms a cement paste by the process of hydration. The cement paste glues the aggregate together, fills voids within it, and makes it flow more freely.
As stated by Abrams' law, a lower water-to-cement ratio yields a stronger, more durable concrete, whereas more water gives a freer-flowing concrete with a higher slump. Impure water used to make concrete can cause problems when setting or in causing premature failure of the structure.
Portland cement consists of five major compounds of calcium silicates and aluminates ranging from 5 to 50% in weight, which all undergo hydration to contribute to the final material's strength. Thus, the hydration of cement involves many reactions, often occurring at the same time. As the reactions proceed, the products of the cement hydration process gradually bond together the individual sand and gravel particles and other components of the concrete to form a solid mass.
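Abrams' law, mentioned above, is often quoted in the empirical form below, where S is the compressive strength, w/c is the water-to-cement ratio by mass, and A and B are constants fitted to the particular materials and test age (a sketch of the general form only; no specific values for the constants are implied here):

S = \frac{A}{B^{\,w/c}}

A lower w/c therefore predicts a higher strength, consistent with the statement above, at the cost of a stiffer, less workable mix.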
Hydration of tricalcium silicate
Cement chemist notation: C3S + H → C-S-H + CH + heat
Standard notation: Ca3SiO5 + H2O → CaO・SiO2・H2O (gel) + Ca(OH)2 + heat
Balanced: 2 Ca3SiO5 + 7 H2O → 3 CaO・2 SiO2・4 H2O (gel) + 3 Ca(OH)2 + heat
(approximate, as the exact ratios of CaO, SiO2 and H2O in C-S-H can vary)
Due to the nature of the chemical bonds created in these reactions and the final characteristics of the hardened cement paste formed, the process of cement hydration is considered irreversible.
Aggregates
Fine and coarse aggregates make up the bulk of a concrete mixture. Sand, natural gravel, and crushed stone are used mainly for this purpose. Recycled aggregates (from construction, demolition, and excavation waste) are increasingly used as partial replacements for natural aggregates, while a number of manufactured aggregates, including air-cooled blast furnace slag and bottom ash are also permitted.
The size distribution of the aggregate determines how much binder is required. Aggregate of a single, uniform size leaves the largest gaps, whereas adding aggregate with smaller particles tends to fill these gaps. The binder must fill the gaps between the aggregate as well as paste the surfaces of the aggregate together, and is typically the most expensive component. Thus, variation in sizes of the aggregate reduces the cost of concrete. The aggregate is nearly always stronger than the binder, so its use does not negatively affect the strength of the concrete.
Redistribution of aggregates after compaction often creates non-homogeneity due to the influence of vibration. This can lead to strength gradients.
Decorative stones such as quartzite, small river stones or crushed glass are sometimes added to the surface of concrete for a decorative "exposed aggregate" finish, popular among landscape designers.
Admixtures
Admixtures are materials in the form of powder or fluids that are added to the concrete to give it certain characteristics not obtainable with plain concrete mixes. Admixtures are defined as additions "made as the concrete mix is being prepared". The most common admixtures are retarders and accelerators. In normal use, admixture dosages are less than 5% by mass of cement and are added to the concrete at the time of batching/mixing. (See § Production below.) The common types of admixtures are as follows:
Accelerators speed up the hydration (hardening) of the concrete. Typical materials used are calcium chloride, calcium nitrate and sodium nitrate. However, use of chlorides may cause corrosion in steel reinforcing and is prohibited in some countries, so that nitrates may be favored, even though they are less effective than the chloride salt. Accelerating admixtures are especially useful for modifying the properties of concrete in cold weather.
Air entraining agents add and entrain tiny air bubbles in the concrete, which reduces damage during freeze-thaw cycles, increasing durability. However, entrained air entails a tradeoff with strength, as each 1% of air may decrease compressive strength by 5% (see the sketch after this list). If too much air becomes trapped in the concrete as a result of the mixing process, defoamers can be used to encourage the air bubbles to agglomerate, rise to the surface of the wet concrete and then disperse.
Bonding agents are used to create a bond between old and new concrete (typically a type of polymer) with wide temperature tolerance and corrosion resistance.
Corrosion inhibitors are used to minimize the corrosion of steel and steel bars in concrete.
Crystalline admixtures are typically added during batching of the concrete to lower permeability. The admixture reacts with water and un-hydrated cement particles to form insoluble needle-shaped crystals, which fill capillary pores and micro-cracks in the concrete to block pathways for water and waterborne contaminants. Concrete with a crystalline admixture can be expected to self-seal, as constant exposure to water will continuously initiate crystallization to ensure permanent waterproof protection.
Pigments can be used to change the color of concrete, for aesthetics.
Plasticizers increase the workability of plastic, or "fresh", concrete, allowing it to be placed more easily, with less consolidating effort. A typical plasticizer is lignosulfonate. Plasticizers can be used to reduce the water content of a concrete while maintaining workability and are sometimes called water-reducers due to this use. Such treatment improves its strength and durability characteristics.
Superplasticizers (also called high-range water-reducers) are a class of plasticizers that have fewer deleterious effects and can be used to increase workability more than is practical with traditional plasticizers. Superplasticizers are used to increase compressive strength. They increase the workability of the concrete and lower the water content needed by 15–30%.
Pumping aids improve pumpability, thicken the paste and reduce separation and bleeding.
Retarders slow the hydration of concrete and are used in large or difficult pours where partial setting is undesirable before completion of the pour. Typical polyol retarders are sugar, sucrose, sodium gluconate, glucose, citric acid, and tartaric acid.
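As a rough illustration of the air-entrainment strength tradeoff noted in the list above, the short sketch below applies the quoted rule of thumb (about 5% strength loss per 1% of entrained air) to a hypothetical 32 MPa base mix; both the base strength and the linear form of the rule are illustrative assumptions, not design guidance:

# Illustrative only: rule of thumb quoted above (about 5% compressive strength
# loss per 1% of entrained air), applied to a hypothetical 32 MPa base mix.
def strength_with_entrained_air(base_strength_mpa, air_percent, loss_per_percent=0.05):
    return base_strength_mpa * (1.0 - loss_per_percent * air_percent)

print(strength_with_entrained_air(32.0, 5.0))  # -> 24.0 MPa for 5% entrained air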
Mineral admixtures and blended cements
Mineral admixtures are inorganic materials with pozzolanic or latent hydraulic properties. These very fine-grained materials are added to the concrete mix to improve the properties of concrete (mineral admixtures), or as a replacement for Portland cement (blended cements). Products which incorporate limestone, fly ash, blast furnace slag, and other useful materials with pozzolanic properties into the mix are being tested and used. These developments are increasingly relevant for minimizing the impacts caused by cement use, which is notorious for being one of the largest producers (at about 5 to 10%) of global greenhouse gas emissions. The use of alternative materials can also lower costs, improve concrete properties, and recycle wastes, the latter being relevant for circular economy aspects of the construction industry, whose ever-growing demand has greater impacts on raw material extraction, waste generation and landfill practices.
Fly ash: A by-product of coal-fired electric generating plants, it is used to partially replace Portland cement (by up to 60% by mass). The properties of fly ash depend on the type of coal burnt. In general, siliceous fly ash is pozzolanic, while calcareous fly ash has latent hydraulic properties.
Ground granulated blast furnace slag (GGBFS or GGBS): A by-product of steel production is used to partially replace Portland cement (by up to 80% by mass). It has latent hydraulic properties.
Silica fume: A by-product of the production of silicon and ferrosilicon alloys. Silica fume is similar to fly ash, but has a particle size 100 times smaller. This results in a higher surface-to-volume ratio and a much faster pozzolanic reaction. Silica fume is used to increase strength and durability of concrete, but generally requires the use of superplasticizers for workability.
High reactivity metakaolin (HRM): Metakaolin produces concrete with strength and durability similar to concrete made with silica fume. While silica fume is usually dark gray or black in color, high-reactivity metakaolin is usually bright white in color, making it the preferred choice for architectural concrete where appearance is important.
Carbon nanofibers can be added to concrete to enhance compressive strength and gain a higher Young's modulus, and also to improve the electrical properties required for strain monitoring, damage evaluation and self-health monitoring of concrete. Carbon fiber has many advantages in terms of mechanical and electrical properties (e.g., higher strength) and self-monitoring behavior due to the high tensile strength and high electrical conductivity.
Carbon products have been added to make concrete electrically conductive, for deicing purposes.
New research from Japan's University of Kitakyushu shows that a washed and dried recycled mix of used diapers can be an environmental solution to producing less landfill and using less sand in concrete production. A model home was built in Indonesia to test the strength and durability of the new diaper-cement composite.
Production
Concrete production is the process of mixing together the various ingredients—water, aggregate, cement, and any additives—to produce concrete. Concrete production is time-sensitive. Once the ingredients are mixed, workers must put the concrete in place before it hardens. In modern usage, most concrete production takes place in a large type of industrial facility called a concrete plant, or often a batch plant. The usual method of placement is casting in formwork, which holds the mix in shape until it has set enough to hold its shape unaided.
In general usage, concrete plants come in two main types, ready mix plants and central mix plants. A ready-mix plant mixes all the ingredients except water, while a central mix plant mixes all the ingredients including water. A central-mix plant offers more accurate control of the concrete quality through better measurements of the amount of water added, but must be placed closer to the work site where the concrete will be used, since hydration begins at the plant.
A concrete plant consists of large storage hoppers for various reactive ingredients like cement, storage for bulk ingredients like aggregate and water, mechanisms for the addition of various additives and amendments, machinery to accurately weigh, move, and mix some or all of those ingredients, and facilities to dispense the mixed concrete, often to a concrete mixer truck.
Modern concrete is usually prepared as a viscous fluid, so that it may be poured into forms, which are containers erected in the field to give the concrete its desired shape. Concrete formwork can be prepared in several ways, such as slip forming and steel plate construction. Alternatively, concrete can be mixed into dryer, non-fluid forms and used in factory settings to manufacture precast concrete products.
A wide variety of equipment is used for processing concrete, from hand tools to heavy industrial machinery. Whichever equipment builders use, however, the objective is to produce the desired building material; ingredients must be properly mixed, placed, shaped, and retained within time constraints. Any interruption in pouring the concrete can cause the initially placed material to begin to set before the next batch is added on top. This creates a horizontal plane of weakness called a cold joint between the two batches. Once the mix is where it should be, the curing process must be controlled to ensure that the concrete attains the desired attributes. During concrete preparation, various technical details may affect the quality and nature of the product.
Design mix
Design mix ratios are decided by an engineer after analyzing the properties of the specific ingredients being used. Instead of using a 'nominal mix' of 1 part cement, 2 parts sand, and 4 parts aggregate (the second example from above), a civil engineer will custom-design a concrete mix to exactly meet the requirements of the site and conditions, setting material ratios and often designing an admixture package to fine-tune the properties or increase the performance envelope of the mix. Design-mix concrete can have very broad specifications that cannot be met with more basic nominal mixes, but the involvement of the engineer often increases the cost of the concrete mix.
Concrete Mixes are primarily divided into nominal mix, standard mix and design mix.
Nominal mix ratios are given by volume as Cement : Sand : Aggregate. Nominal mixes are a simple, fast way of getting a basic idea of the properties of the finished concrete without having to perform testing in advance.
Various governing bodies (such as British Standards) define nominal mix ratios into a number of grades, usually ranging from lower compressive strength to higher compressive strength. The grades usually indicate the 28-day cube strength.
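As an illustration of how a nominal volumetric mix is turned into batch quantities, the sketch below converts a 1:2:4 cement : sand : aggregate mix into approximate masses for one cubic metre of combined dry material; the bulk densities and water-cement ratio are assumed, illustrative values, and the calculation ignores voids and yield:

# Illustrative batching of a nominal 1:2:4 (cement : sand : aggregate) mix by volume.
# Bulk densities and water-cement ratio are assumed values, for illustration only.
parts = {"cement": 1, "sand": 2, "aggregate": 4}
bulk_density = {"cement": 1440, "sand": 1600, "aggregate": 1500}  # kg/m3 (assumed)
total_parts = sum(parts.values())
dry_volume = 1.0  # m3 of combined dry materials (ignores voids and yield)

for material, p in parts.items():
    volume = dry_volume * p / total_parts
    print(f"{material}: {volume:.2f} m3, about {volume * bulk_density[material]:.0f} kg")

cement_mass = dry_volume * parts["cement"] / total_parts * bulk_density["cement"]
print(f"water at an assumed w/c of 0.5: about {0.5 * cement_mass:.0f} kg")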
Mixing
Thorough mixing is essential to produce uniform, high-quality concrete.
Separate paste mixing has shown that the mixing of cement and water into a paste before combining these materials with aggregates can increase the compressive strength of the resulting concrete. The paste is generally mixed in a high-speed, shear-type mixer at a w/c (water to cement ratio) of 0.30 to 0.45 by mass. The cement paste premix may include admixtures such as accelerators or retarders, superplasticizers, pigments, or silica fume. The premixed paste is then blended with aggregates and any remaining batch water and final mixing is completed in conventional concrete mixing equipment.
Sample analysis – Workability
Workability is the ability of a fresh (plastic) concrete mix to fill the form/mold properly with the desired work (pouring, pumping, spreading, tamping, vibration) and without reducing the concrete's quality. Workability depends on water content, aggregate (shape and size distribution), cementitious content and age (level of hydration) and can be modified by adding chemical admixtures, like superplasticizer. Raising the water content or adding chemical admixtures increases concrete workability. Excessive water leads to increased bleeding or segregation of aggregates (when the cement and aggregates start to separate), with the resulting concrete having reduced quality. Changes in gradation can also affect workability of the concrete, although a wide range of gradation can be used for various applications. An undesirable gradation can mean using a large aggregate that is too large for the size of the formwork, or which has too few smaller aggregate grades to serve to fill the gaps between the larger grades, or using too little or too much sand for the same reason, or using too little water, or too much cement, or even using jagged crushed stone instead of smoother round aggregate such as pebbles. Any combination of these factors and others may result in a mix which is too harsh, i.e., which does not flow or spread out smoothly, is difficult to get into the formwork, and which is difficult to surface finish.
Workability can be measured by the concrete slump test, a simple measure of the plasticity of a fresh batch of concrete following the ASTM C 143 or EN 12350-2 test standards. Slump is normally measured by filling an "Abrams cone" with a sample from a fresh batch of concrete. The cone is placed with the wide end down onto a level, non-absorptive surface. It is then filled in three layers of equal volume, with each layer being tamped with a steel rod to consolidate the layer. When the cone is carefully lifted off, the enclosed material slumps a certain amount, owing to gravity. A relatively dry sample slumps very little, having a slump value of one or two inches (25 or 50 mm) out of one foot (300 mm). A relatively wet concrete sample may slump as much as eight inches. Workability can also be measured by the flow table test.
Slump can be increased by addition of chemical admixtures such as plasticizer or superplasticizer without changing the water-cement ratio. Some other admixtures, especially air-entraining admixture, can increase the slump of a mix.
High-flow concrete, like self-consolidating concrete, is tested by other flow-measuring methods. One of these methods includes placing the cone on the narrow end and observing how the mix flows through the cone while it is gradually lifted.
After mixing, concrete is a fluid and can be pumped to the location where needed.
Curing
Maintaining optimal conditions for cement hydration
Concrete must be kept moist during curing in order to achieve optimal strength and durability. During curing hydration occurs, allowing calcium-silicate hydrate (C-S-H) to form. Over 90% of a mix's final strength is typically reached within four weeks, with the remaining 10% achieved over years or even decades. The conversion of calcium hydroxide in the concrete into calcium carbonate from absorption of CO2 over several decades further strengthens the concrete and makes it more resistant to damage. This carbonation reaction, however, lowers the pH of the cement pore solution and can corrode the reinforcement bars.
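The gradual strength gain described above is often approximated with empirical age relations; one widely cited example, the ACI 209 expression for moist-cured concrete made with ordinary Portland cement, is quoted here only as an illustrative sketch:

f_c(t) \approx f_{c,28} \cdot \frac{t}{4 + 0.85\,t}

with t in days, so that at 7 days the predicted strength is roughly 7/(4 + 0.85 × 7) ≈ 0.70 of the 28-day value, and the curve flattens over the following weeks, consistent with most of the final strength being reached within about four weeks.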
Hydration and hardening of concrete during the first three days is critical. Abnormally fast drying and shrinkage due to factors such as evaporation from wind during placement may lead to increased tensile stresses at a time when the concrete has not yet gained sufficient strength, resulting in greater shrinkage cracking. The early strength of the concrete can be increased if it is kept damp during the curing process. Minimizing stress prior to curing minimizes cracking. High-early-strength concrete is designed to hydrate faster, often through increased use of cement, which increases shrinkage and cracking. The strength of concrete continues to increase for up to three years, depending on the cross-section dimensions of the elements and the service conditions of the structure. The addition of short-cut polymer fibers can reduce shrinkage-induced stresses during curing and increase early and ultimate compressive strength.
Properly curing concrete leads to increased strength and lower permeability and avoids cracking where the surface dries out prematurely. Care must also be taken to avoid freezing or overheating due to the exothermic setting of cement. Improper curing can cause scaling, reduced strength, poor abrasion resistance and cracking.
Curing techniques avoiding water loss by evaporation
During the curing period, concrete is ideally maintained at controlled temperature and humidity. To ensure full hydration during curing, concrete slabs are often sprayed with "curing compounds" that create a water-retaining film over the concrete. Typical films are made of wax or related hydrophobic compounds. After the concrete is sufficiently cured, the film is allowed to abrade from the concrete through normal use.
Traditional conditions for curing involve spraying or ponding the concrete surface with water. One of many ways to achieve this is ponding: submerging the setting concrete in water and wrapping it in plastic to prevent dehydration. Additional common curing methods include wet burlap and plastic sheeting covering the fresh concrete.
For higher-strength applications, accelerated curing techniques may be applied to the concrete. A common technique involves heating the poured concrete with steam, which serves to both keep it damp and raise the temperature so that the hydration process proceeds more quickly and more thoroughly.
Alternative types
Asphalt
Asphalt concrete (commonly called asphalt, blacktop, or pavement in North America, and tarmac, bitumen macadam, or rolled asphalt in the United Kingdom and the Republic of Ireland) is a composite material commonly used to surface roads, parking lots, and airports, as well as the core of embankment dams. Asphalt mixtures have been used in pavement construction since the beginning of the twentieth century. It consists of mineral aggregate bound together with asphalt, laid in layers, and compacted. The process was refined and enhanced by Belgian inventor and U.S. immigrant Edward De Smedt.
The terms asphalt (or asphaltic) concrete, bituminous asphalt concrete, and bituminous mixture are typically used only in engineering and construction documents, which define concrete as any composite material composed of mineral aggregate adhered with a binder. The abbreviation, AC, is sometimes used for asphalt concrete but can also denote asphalt content or asphalt cement, referring to the liquid asphalt portion of the composite material.
Graphene enhanced concrete
Graphene enhanced concretes are standard designs of concrete mixes, except that during the cement-mixing or production process, a small amount of chemically engineered graphene (typically < 0.5% by weight) is added. These enhanced graphene concretes are designed around the concrete application.
Microbial
Bacteria such as Bacillus pasteurii, Bacillus pseudofirmus, Bacillus cohnii, Sporosarcina pasteuri, and Arthrobacter crystallopoietes increase the compression strength of concrete through their biomass. However, some forms of bacteria can also be concrete-destroying. Bacillus sp. CT-5 can reduce corrosion of reinforcement in reinforced concrete by up to four times. Sporosarcina pasteurii reduces water and chloride permeability. B. pasteurii increases resistance to acid. Bacillus pasteurii and B. sphaericus can induce calcium carbonate precipitation in the surface of cracks, adding compression strength.
Nanoconcrete
Nanoconcrete (also spelled "nano concrete"' or "nano-concrete") is a class of materials that contains Portland cement particles that are no greater than 100 μm and particles of silica no greater than 500 μm, which fill voids that would otherwise occur in normal concrete, thereby substantially increasing the material's strength. It is widely used in foot and highway bridges where high flexural and compressive strength are indicated.
Pervious
Pervious concrete is a mix of specially graded coarse aggregate, cement, water, and little-to-no fine aggregates. This concrete is also known as "no-fines" or porous concrete. Mixing the ingredients in a carefully controlled process creates a paste that coats and bonds the aggregate particles. The hardened concrete contains interconnected air voids totaling approximately 15 to 25 percent. Water runs through the voids in the pavement to the soil underneath. Air entrainment admixtures are often used in freeze-thaw climates to minimize the possibility of frost damage. Pervious concrete also permits rainwater to filter through roads and parking lots, to recharge aquifers, instead of contributing to runoff and flooding.
Polymer
Polymer concretes are mixtures of aggregate and any of various polymers, and may be reinforced. The polymer binder is costlier than lime-based cements, but polymer concretes nevertheless have advantages: they have significant tensile strength even without reinforcement, and they are largely impervious to water. Polymer concretes are frequently used for repair work and for specialized construction applications, such as drains.
Volcanic
Volcanic concrete substitutes volcanic rock for the limestone that is burned to form clinker. It consumes a similar amount of energy, but does not directly emit carbon as a byproduct. Volcanic rock and ash are used as supplementary cementitious materials in concrete to improve resistance to sulfate, chloride and alkali-silica reaction through pore refinement. They are also generally cost-effective in comparison to other aggregates, suitable for semi-lightweight and lightweight concretes, and useful for thermal and acoustic insulation.
Pyroclastic materials, such as pumice, scoria, and ashes, are formed from cooling magma during explosive volcanic eruptions. They are used as supplementary cementitious materials (SCM) or as aggregates for cements and concretes. They have been extensively used since ancient times to produce materials for building applications. For example, pumice and other volcanic glasses were added as a natural pozzolanic material for mortars and plasters during the construction of the Villa San Marco in the Roman period (89 BC – 79 AD), which remains one of the best-preserved otium villae of the Bay of Naples in Italy.
Waste light
Waste light concrete is a form of polymer-modified concrete. The specific polymer admixture allows the replacement of all the traditional aggregates (gravel, sand, stone) by any mixture of solid waste materials with a grain size of 3–10 mm to form a low-compressive-strength (3–20 N/mm2) product for road and building construction. One cubic meter of waste light concrete contains 1.1–1.3 m3 of shredded waste and no other aggregates.
Sulfur concrete
Sulfur concrete is a special concrete that uses sulfur as a binder and does not require cement or water.
Properties
Concrete has relatively high compressive strength, but much lower tensile strength. Therefore, it is usually reinforced with materials that are strong in tension (often steel). The elasticity of concrete is relatively constant at low stress levels but starts decreasing at higher stress levels as matrix cracking develops. Concrete has a very low coefficient of thermal expansion and shrinks as it matures. All concrete structures crack to some extent, due to shrinkage and tension. Concrete that is subjected to long-duration forces is prone to creep.
Tests can be performed to ensure that the properties of concrete correspond to specifications for the application.
The ingredients affect the strengths of the material. Concrete strength values are usually specified as the lower-bound compressive strength of either a cylindrical or cubic specimen as determined by standard test procedures.
The strength of concrete is dictated by its function. Very low-strength—14 MPa (2,000 psi) or less—concrete may be used when the concrete must be lightweight. Lightweight concrete is often achieved by adding air, foams, or lightweight aggregates, with the side effect that the strength is reduced. For most routine uses, 20 to 32 MPa (2,900 to 4,600 psi) concrete is often used. 40 MPa (5,800 psi) concrete is readily commercially available as a more durable, although more expensive, option. Higher-strength concrete is often used for larger civil projects. Strengths above 40 MPa (5,800 psi) are often used for specific building elements. For example, the lower floor columns of high-rise concrete buildings may use concrete of 80 MPa (11,600 psi) or more, to keep the size of the columns small. Bridges may use long beams of high-strength concrete to lower the number of spans required. Occasionally, other structural needs may require high-strength concrete. If a structure must be very rigid, concrete of very high strength may be specified, even much stronger than is required to bear the service loads. Strengths as high as 130 MPa (18,900 psi) have been used commercially for these reasons.
Energy efficiency
The cement produced for making concrete accounts for about 8% of worldwide CO2 emissions per year (compared to, e.g., global aviation at 1.9%). The two largest sources of CO2 in cement manufacturing are (1) the decarbonation reaction of limestone in the cement kiln (T ≈ 950 °C), and (2) the combustion of fossil fuel to reach the sintering temperature (T ≈ 1450 °C) of cement clinker in the kiln. The energy required for extracting, crushing, and mixing the raw materials (construction aggregates used in the concrete production, and also limestone and clay feeding the cement kiln) is lower. The energy requirement for transportation of ready-mix concrete is also lower, because it is produced near the construction site from local resources, typically manufactured within 100 kilometers of the job site. The overall embodied energy of concrete, at roughly 1 to 1.5 megajoules per kilogram, is therefore lower than for many structural and construction materials.
Once in place, concrete offers great energy efficiency over the lifetime of a building. Concrete walls leak air far less than those made of wood frames. Air leakage accounts for a large percentage of energy loss from a home. The thermal mass properties of concrete increase the efficiency of both residential and commercial buildings. By storing and releasing the energy needed for heating or cooling, concrete's thermal mass delivers year-round benefits by reducing temperature swings inside and minimizing heating and cooling costs. While insulation reduces energy loss through the building envelope, thermal mass uses walls to store and release energy. Modern concrete wall systems use both external insulation and thermal mass to create an energy-efficient building. Insulating concrete forms (ICFs) are hollow blocks or panels made of either insulating foam or rastra that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
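A back-of-the-envelope check of the embodied-energy figure quoted above is sketched below; the 2,400 kg/m3 density is a typical value for normal-weight concrete, assumed here only for illustration:

# Rough embodied energy per cubic metre, using the 1-1.5 MJ/kg range quoted above
# and an assumed density of 2,400 kg/m3 for normal-weight concrete.
density_kg_per_m3 = 2400  # assumed typical value
for mj_per_kg in (1.0, 1.5):
    gj_per_m3 = mj_per_kg * density_kg_per_m3 / 1000
    print(f"{mj_per_kg} MJ/kg -> about {gj_per_m3:.1f} GJ per cubic metre")
# -> roughly 2.4 to 3.6 GJ per cubic metre of concrete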
Fire safety
Concrete buildings are more resistant to fire than those constructed using steel frames, since concrete has lower heat conductivity than steel and can thus last longer under the same fire conditions. Concrete is sometimes used as a fire protection for steel frames, for the same effect as above. Concrete as a fire shield, for example Fondu fyre, can also be used in extreme environments like a missile launch pad.
Options for non-combustible construction include floors, ceilings and roofs made of cast-in-place and hollow-core precast concrete. For walls, concrete masonry technology and Insulating Concrete Forms (ICFs) are additional options. ICFs are hollow blocks or panels made of fireproof insulating foam that are stacked to form the shape of the walls of a building and then filled with reinforced concrete to create the structure.
Concrete also provides good resistance against externally applied forces such as high winds, hurricanes, and tornadoes owing to its lateral stiffness, which results in minimal horizontal movement. However, this stiffness can work against certain types of concrete structures, particularly where a relatively higher flexing structure is required to resist more extreme forces.
Earthquake safety
As discussed above, concrete is very strong in compression, but weak in tension. Larger earthquakes can generate very large shear loads on structures. These shear loads subject the structure to both tensile and compressional loads. Concrete structures without reinforcement, like other unreinforced masonry structures, can fail during severe earthquake shaking. Unreinforced masonry structures constitute one of the largest earthquake risks globally. These risks can be reduced through seismic retrofitting of at-risk buildings, (e.g. school buildings in Istanbul, Turkey).
Construction with concrete
Concrete is one of the most durable building materials. It provides superior fire resistance compared with wooden construction and gains strength over time. Structures made of concrete can have a long service life. Concrete is used more than any other artificial material in the world. As of 2006, about 7.5 billion cubic meters of concrete are made each year, more than one cubic meter for every person on Earth.
Reinforced concrete
The use of reinforcement, in the form of iron, was introduced in the 1850s by French industrialist François Coignet, and it was not until the 1880s that German civil engineer G. A. Wayss used steel as reinforcement. Concrete is a relatively brittle material that is strong under compression but much less so in tension. Plain, unreinforced concrete is unsuitable for many structures as it is relatively poor at withstanding stresses induced by vibrations, wind loading, and so on. Hence, to increase its overall strength, steel rods, wires, mesh or cables can be embedded in concrete before it is set. This reinforcement, often known as rebar, resists tensile forces.
Reinforced concrete (RC) is a versatile composite and one of the most widely used materials in modern construction. It is made up of different constituent materials with very different properties that complement each other. In the case of reinforced concrete, the component materials are almost always concrete and steel. These two materials form a strong bond together and are able to resist a variety of applied forces, effectively acting as a single structural element.
Reinforced concrete can be precast or cast-in-place (in situ) concrete, and is used in a wide range of applications such as slab, wall, beam, column, foundation, and frame construction. Reinforcement is generally placed in areas of the concrete that are likely to be subject to tension, such as the lower portion of beams. Usually, there is a minimum of 50 mm cover, both above and below the steel reinforcement, to resist spalling and corrosion which can lead to structural instability. Other types of non-steel reinforcement, such as fibre-reinforced concretes, are used for specialized applications, predominantly as a means of controlling cracking.
Precast concrete
Precast concrete is concrete which is cast in one place for use elsewhere and is a mobile material. The largest part of precast production is carried out in the works of specialist suppliers, although in some instances, due to economic and geographical factors, scale of product or difficulty of access, the elements are cast on or adjacent to the construction site. Precasting offers considerable advantages because it is carried out in a controlled environment, protected from the elements, but the downside of this is the contribution to greenhouse gas emissions from transportation to the construction site.
Advantages to be achieved by employing precast concrete:
Preferred dimension schemes exist, with elements of tried and tested designs available from a catalogue.
Major savings in time result from manufacture of structural elements apart from the series of events which determine overall duration of the construction, known by planning engineers as the 'critical path'.
Availability of Laboratory facilities capable of the required control tests, many being certified for specific testing in accordance with National Standards.
Equipment with capability suited to specific types of production such as stressing beds with appropriate capacity, moulds and machinery dedicated to particular products.
High-quality finishes achieved direct from the mould eliminate the need for interior decoration and ensure low maintenance costs.
Mass structures
Due to cement's exothermic chemical reaction while setting up, large concrete structures such as dams, navigation locks, large mat foundations, and large breakwaters generate excessive heat during hydration and associated expansion. To mitigate these effects, post-cooling is commonly applied during construction. An early example at Hoover Dam used a network of pipes between vertical concrete placements to circulate cooling water during the curing process to avoid damaging overheating. Similar systems are still used; depending on volume of the pour, the concrete mix used, and ambient air temperature, the cooling process may last for many months after the concrete is placed. Various methods also are used to pre-cool the concrete mix in mass concrete structures.
Another approach to mass concrete structures that minimizes cement's thermal by-product is the use of roller-compacted concrete, which uses a dry mix which has a much lower cooling requirement than conventional wet placement. It is deposited in thick layers as a semi-dry material then roller compacted into a dense, strong mass.
Surface finishes
Raw concrete surfaces tend to be porous and have a relatively uninteresting appearance. Many finishes can be applied to improve the appearance and preserve the surface against staining, water penetration, and freezing.
Examples of improved appearance include stamped concrete where the wet concrete has a pattern impressed on the surface, to give a paved, cobbled or brick-like effect, and may be accompanied with coloration. Another popular effect for flooring and table tops is polished concrete where the concrete is polished optically flat with diamond abrasives and sealed with polymers or other sealants.
Other finishes can be achieved with chiseling, or more conventional techniques such as painting or covering it with other materials.
The proper treatment of the surface of concrete, and therefore its characteristics, is an important stage in the construction and renovation of architectural structures.
Prestressed structures
Prestressed concrete is a form of reinforced concrete that builds in compressive stresses during construction to oppose tensile stresses experienced in use. This can greatly reduce the weight of beams or slabs, by better distributing the stresses in the structure to make optimal use of the reinforcement. For example, a horizontal beam tends to sag. Prestressed reinforcement along the bottom of the beam counteracts this.
In pre-tensioned concrete, the prestressing is achieved by using steel or polymer tendons or bars that are subjected to a tensile force prior to casting, or for post-tensioned concrete, after casting.
There are two different systems being used:
Pretensioned concrete is almost always precast, and contains steel wires (tendons) that are held in tension while the concrete is placed and sets around them.
Post-tensioned concrete has ducts through it. After the concrete has gained strength, tendons are pulled through the ducts and stressed. The ducts are then filled with grout. Bridges built in this way have experienced considerable corrosion of the tendons, so external post-tensioning may now be used, in which the tendons run along the outer surface of the concrete.
More than 55,000 miles (89,000 km) of highways in the United States are paved with this material. Reinforced concrete, prestressed concrete and precast concrete are the most widely used functional extensions of concrete in modern construction. For more information see Brutalist architecture.
Placement
Once mixed, concrete is typically transported to the place where it is intended to become a structural item. Various methods of transportation and placement are used depending on the distances involved, the quantity needed, and other details of the application. Large amounts are often transported by truck, poured free under gravity or through a tremie, or pumped through a pipe. Smaller amounts may be carried in a skip (a metal container which can be tilted or opened to release the contents, usually transported by crane or hoist), or wheelbarrow, or carried in toggle bags for manual placement underwater.
Cold weather placement
Extreme weather conditions (extreme heat or cold; windy conditions, and humidity variations) can significantly alter the quality of concrete. Many precautions are observed in cold weather placement. Low temperatures significantly slow the chemical reactions involved in hydration of cement, thus affecting the strength development. Preventing freezing is the most important precaution, as formation of ice crystals can cause damage to the crystalline structure of the hydrated cement paste. If the surface of the concrete pour is insulated from the outside temperatures, the heat of hydration will prevent freezing.
The American Concrete Institute (ACI) definition of cold weather placement, ACI 306, is:
A period when for more than three successive days the average daily air temperature drops below 40 °F (~ 4.5 °C), and
Temperature stays below 50 °F (10 °C) for more than one-half of any 24-hour period.
In Canada, where temperatures tend to be much lower during the cold season, the following criteria are used by CSA A23.1:
When the air temperature is ≤ 5 °C, and
When there is a probability that the temperature may fall below 5 °C within 24 hours of placing the concrete.
The minimum strength before exposing concrete to extreme cold is 500 psi (3.4 MPa). CSA A23.1 specified a compressive strength of 7.0 MPa to be considered safe for exposure to freezing.
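The ACI 306 definition quoted above amounts to a simple pair of conditions; the sketch below encodes them directly, for illustration only and not as a substitute for the standard itself:

# Illustrative encoding of the ACI 306 cold-weather definition quoted above.
def is_cold_weather_aci306(avg_daily_temps_f, hours_below_50f_in_24h):
    # (1) average daily air temperature below 40 F for more than three successive days
    three_days_below_40 = len(avg_daily_temps_f) > 3 and all(t < 40 for t in avg_daily_temps_f[-4:])
    # (2) temperature below 50 F for more than half of any 24-hour period
    half_day_below_50 = hours_below_50f_in_24h > 12
    return three_days_below_40 and half_day_below_50

print(is_cold_weather_aci306([38, 35, 39, 37], hours_below_50f_in_24h=14))  # True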
Underwater placement
Concrete may be placed and cured underwater. Care must be taken in the placement method to prevent washing out the cement. Underwater placement methods include the tremie, pumping, skip placement, manual placement using toggle bags, and bagwork.
Grouted aggregate is an alternative method of forming a concrete mass underwater, where the forms are filled with coarse aggregate and the voids then completely filled with pumped grout.
Roads
Concrete roads are more fuel efficient to drive on, more reflective and last significantly longer than other paving surfaces, yet have a much smaller market share than other paving solutions. Modern paving methods and design practices have changed the economics of concrete paving, so that a well-designed and placed concrete pavement will be less expensive on initial costs and significantly less expensive over the life cycle. Another major benefit is that pervious concrete can be used, which eliminates the need to place storm drains near the road and reduces the need for slightly sloped roadways to help rainwater run off. Eliminating the need to carry rainwater away through drains also means that less electricity is needed (since less pumping is required in the water-distribution system), and that rainwater is not polluted by mixing with contaminated drain water; instead, it is immediately absorbed by the ground.
Environment, health and safety
The manufacture and use of concrete produce a wide range of environmental, economic and social impacts.
Concrete, cement and the environment
A major component of concrete is cement, a fine powder used mainly to bind sand and coarser aggregates together in concrete. Although a variety of cement types exist, the most common is "Portland cement", which is produced by mixing clinker with smaller quantities of other additives such as gypsum and ground limestone. The production of clinker, the main constituent of cement, is responsible for the bulk of the sector's greenhouse gas emissions, including both energy intensity and process emissions.
The cement industry is one of the three primary producers of carbon dioxide, a major greenhouse gas – the other two being the energy production and transportation industries. On average, every tonne of cement produced releases one tonne of CO2 into the atmosphere. Pioneer cement manufacturers have claimed to reach lower carbon intensities, with 590 kg of CO2eq per tonne of cement produced. The emissions are due to combustion and calcination processes, which roughly account for 40% and 60% of the greenhouse gases, respectively. Considering that cement is only a fraction of the constituents of concrete, it is estimated that a tonne of concrete is responsible for emitting about 100–200 kg of CO2. Every year more than 10 billion tonnes of concrete are used worldwide. In the coming years, large quantities of concrete will continue to be used, and the mitigation of CO2 emissions from the sector will be even more critical.
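Taken together, the per-tonne figure and the annual usage quoted above imply the following order-of-magnitude estimate (a sketch only, using the 100–200 kg CO2 per tonne of concrete and the 10 billion tonne annual figure stated in this section):

# Order-of-magnitude check using the figures quoted above.
annual_concrete_tonnes = 10e9  # "more than 10 billion tonnes" of concrete per year
for kg_co2_per_tonne in (100, 200):
    annual_co2_gt = annual_concrete_tonnes * kg_co2_per_tonne / 1e12  # kg -> gigatonnes
    print(f"{kg_co2_per_tonne} kg CO2/t -> about {annual_co2_gt:.0f} Gt CO2 per year")
# -> roughly 1 to 2 Gt of CO2 per year attributable to concrete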
Concrete is used to create hard surfaces that contribute to surface runoff, which can cause heavy soil erosion, water pollution, and flooding, but conversely can be used to divert, dam, and control flooding. Concrete dust released by building demolition and natural disasters can be a major source of dangerous air pollution. Concrete is a contributor to the urban heat island effect, though less so than asphalt.
Concrete and climate change mitigation
Reducing the cement clinker content might have positive effects on the environmental life-cycle assessment of concrete. Some research on reducing the cement clinker content in concrete has already been carried out, though different research strategies exist. Often, replacement of some clinker with large amounts of slag or fly ash has been investigated based on conventional concrete technology; this could lead to a waste of scarce raw materials such as slag and fly ash. The aim of other research activities is the efficient use of cement and reactive materials like slag and fly ash in concrete based on a modified mix design approach.
An environmental investigation found that the embodied carbon of a precast concrete facade can be reduced by 50% when using the presented fiber-reinforced high-performance concrete in place of typical reinforced concrete cladding.
Studies have also been conducted on the commercialization of low-carbon concrete. The life cycle assessment (LCA) of low-carbon concrete was investigated according to the ground granulated blast-furnace slag (GGBS) and fly ash (FA) replacement ratios. Global warming potential (GWP) decreased by 1.1 kg CO2 eq/m3 for GGBS and by 17.3 kg CO2 eq/m3 for FA when the mineral admixture replacement ratio was increased by 10%. The study also compared the compressive strength of binary blended low-carbon concrete according to the replacement ratios, and the applicable range of mixing proportions was derived.
Researchers at the University of Auckland are working on utilizing biochar in concrete applications to reduce carbon emissions during concrete production and to improve strength.
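Read as per-10%-replacement figures, the cited reductions can be extrapolated to other mix proportions. A minimal sketch; the linear scaling and the direct reuse of the study's two coefficients are assumptions for illustration only:

```python
# GWP reduction per 10% of mineral admixture replacement, as cited above.
REDUCTION_PER_10PCT = {"GGBS": 1.1, "FA": 17.3}  # kg CO2 eq per m3 of concrete

def gwp_reduction(admixture, replacement_ratio):
    """Estimated GWP reduction (kg CO2 eq/m3) for a replacement ratio in [0, 1],
    assuming the reduction scales linearly with the replacement ratio."""
    return REDUCTION_PER_10PCT[admixture] * (replacement_ratio / 0.10)

print(f"{gwp_reduction('FA', 0.30):.1f} kg CO2 eq/m3 at 30% fly ash")   # ~51.9
print(f"{gwp_reduction('GGBS', 0.50):.1f} kg CO2 eq/m3 at 50% slag")    # ~5.5
```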
Concrete and climate change adaptation
High-performance building materials will be particularly important for enhancing resilience, including for flood defenses and critical-infrastructure protection. Risks to infrastructure and cities posed by extreme weather events are especially serious for those places exposed to flood and hurricane damage, but also where residents need protection from extreme summer temperatures. Traditional concrete can come under strain when exposed to humidity and higher concentrations of atmospheric CO2. While concrete is likely to remain important in applications where the environment is challenging, novel, smarter and more adaptable materials are also needed.
Concrete – health and safety
Grinding of concrete can produce hazardous dust. Exposure to cement dust can lead to issues such as silicosis, kidney disease, skin irritation and similar effects. The U.S. National Institute for Occupational Safety and Health recommends attaching local exhaust ventilation shrouds to electric concrete grinders to control the spread of this dust. In addition, the Occupational Safety and Health Administration (OSHA) has placed more stringent regulations on companies whose workers regularly come into contact with silica dust. An updated silica rule, which OSHA put into effect on 23 September 2017 for construction companies, restricted the amount of respirable crystalline silica workers could legally come into contact with to 50 micrograms per cubic meter of air, averaged over an 8-hour workday. The same rule went into effect on 23 June 2018 for general industry, hydraulic fracturing and maritime; the deadline was extended to 23 June 2021 for engineering controls in the hydraulic fracturing industry. Companies that fail to meet the tightened safety regulations can face financial charges and extensive penalties. The presence of some substances in concrete, including useful and unwanted additives, can cause health concerns due to toxicity and radioactivity. Fresh concrete (before curing is complete) is highly alkaline and must be handled with proper protective equipment.
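Because the 50 µg/m3 limit is an 8-hour time-weighted average, short high-exposure tasks are averaged against the rest of the shift. A minimal sketch of that standard averaging calculation, using entirely hypothetical sampling values:

```python
# 8-hour time-weighted average (TWA) exposure, compared against the
# 50 µg/m3 respirable crystalline silica limit cited above.
SILICA_LIMIT_UG_M3 = 50.0

def twa_8h(samples):
    """samples: iterable of (concentration in µg/m3, duration in hours)."""
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical shift: 2 h of grinding, 4 h of other tasks, 2 h away from dust.
workday = [(120.0, 2.0), (30.0, 4.0), (0.0, 2.0)]
exposure = twa_8h(workday)
status = "over" if exposure > SILICA_LIMIT_UG_M3 else "under"
print(f"8-hour TWA: {exposure:.0f} µg/m3 ({status} the limit)")  # 45 µg/m3, under
```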
Circular economy
Concrete is an excellent material with which to make long-lasting and energy-efficient buildings. However, even with good design, human needs change and potential waste will be generated.
End-of-life: concrete degradation and waste
Concrete can be damaged by many processes, such as the expansion of corrosion products of the steel reinforcement bars, freezing of trapped water, fire or radiant heat, aggregate expansion, sea water effects, bacterial corrosion, leaching, erosion by fast-flowing water, physical damage and chemical damage (from carbonatation, chlorides, sulfates and distilled water). The microfungi Aspergillus, Alternaria and Cladosporium were able to grow on samples of concrete used as a radioactive waste barrier in the Chernobyl reactor, leaching aluminium, iron, calcium, and silicon.
Concrete may be considered waste according to the European Commission decision 2014/955/EU for the List of Waste, under chapter 17 (construction and demolition wastes, including excavated soil from contaminated sites), sub-chapter 17 01 (concrete, bricks, tiles and ceramics), and the entries 17 01 01 (concrete), 17 01 06* (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics containing hazardous substances) and 17 01 07 (mixtures of, or separate fractions of, concrete, bricks, tiles and ceramics other than those mentioned in 17 01 06). It is estimated that in 2018 the European Union generated 371,910 thousand tons of mineral waste from construction and demolition, and close to 4% of this quantity is considered hazardous. Germany, France and the United Kingdom were the top three generators, with 86,412, 68,976 and 68,732 thousand tons of construction waste, respectively.
Currently, there are no end-of-waste criteria for concrete materials in the EU. However, different sectors have been proposing alternatives for concrete waste, repurposing it as a secondary raw material in various applications, including concrete manufacturing itself.
Reuse of concrete
Reuse of blocks in their original form, or by cutting them into smaller blocks, has even less environmental impact; however, only a limited market currently exists. Improved building designs that allow for slab reuse and building transformation without demolition could increase this use. Hollow-core concrete slabs are easy to dismantle and their spans are normally constant, making them good candidates for reuse.
Other cases of reuse are possible with precast concrete pieces: through selective demolition, such pieces can be disassembled and collected for further use on other building sites. Studies show that back-building and remounting plans for building units (i.e., reuse of prefabricated concrete) are an alternative form of construction that protects resources and saves energy. Long-lived, durable, energy-intensive building materials such as concrete can in particular be kept in the life cycle longer through recycling. Prefabrication is a prerequisite for structures designed to be taken apart. With optimal application in the building carcass, cost savings are estimated at 26%, a lucrative complement to new building methods. However, this depends on several conditions being met, and the viability of this alternative has to be studied, as the logistics associated with transporting heavy pieces of concrete can impact the operation financially and also increase the carbon footprint of the project. Also, ever-changing regulations on new buildings worldwide may require higher quality standards for construction elements and inhibit the use of old elements that may be classified as obsolete.
Recycling of concrete
Concrete recycling is an increasingly common method for disposing of concrete structures. Concrete debris was once routinely shipped to landfills for disposal, but recycling is increasing due to improved environmental awareness, governmental laws and economic benefits.
Contrary to general belief, concrete recovery is achievable – concrete can be crushed and reused as aggregate in new projects.
Recycling or recovering concrete reduces natural resource exploitation and associated transportation costs, and reduces landfilled waste. However, it has little impact on reducing greenhouse gas emissions, as most emissions occur when cement is made, and cement alone cannot be recycled. At present, most recovered concrete is used for road sub-base and civil engineering projects. From a sustainability viewpoint, these relatively low-grade uses currently provide the optimal outcome.
The recycling process can be done in situ, with mobile plants, or in specific recycling units. The input material can be returned concrete which is fresh (wet) from ready-mix trucks, production waste at a pre-cast production facility, or waste from construction and demolition. The most significant source is demolition waste, preferably pre-sorted from selective demolition processes.
By far the most common method for recycling dry and hardened concrete involves crushing. Mobile sorters and crushers are often installed on construction sites to allow on-site processing. In other situations, specific processing sites are established, which are usually able to produce higher-quality aggregate. Screens are used to achieve the desired particle size and to remove dirt, foreign particles and fine material from the coarse aggregate. Chlorides and sulfates are undesired contaminants originating from soil and weathering and can provoke corrosion problems in aluminium and steel structures.
The final product, recycled concrete aggregate (RCA), presents properties such as angular shape, rougher surface, lower specific gravity (about 20% lower), higher water absorption, and pH greater than 11 – this elevated pH increases the risk of alkali reactions. The lower density of RCA typically increases project efficiency and improves job cost – recycled concrete aggregates yield more volume by weight (up to 15%). The physical properties of coarse aggregates made from crushed demolition concrete make it the preferred material for applications such as road base and sub-base, because recycled aggregates often have better compaction properties and require less cement for sub-base uses. Furthermore, it is generally cheaper to obtain than virgin material.
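The volume advantage follows directly from the density difference. A minimal sketch, with assumed bulk densities chosen only for illustration:

```python
# Extra bulk volume per delivered tonne of RCA relative to virgin aggregate.
# The density values below are assumptions; actual values depend on the source concrete.

def extra_volume_fraction(virgin_density_kg_m3, rca_density_kg_m3):
    """Fractional extra volume obtained per unit mass of RCA."""
    return virgin_density_kg_m3 / rca_density_kg_m3 - 1.0

# e.g. an assumed ~1600 kg/m3 virgin bulk density vs ~1400 kg/m3 for RCA
print(f"{extra_volume_fraction(1600.0, 1400.0):.1%} more volume per tonne")  # ~14.3%
```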
Applications of recycled concrete aggregate
The main commercial applications of the final recycled concrete aggregate are:
Aggregate base course (road base), or the untreated aggregate used as the foundation for roadway pavement, is the underlying layer (under the pavement surfacing) which forms a structural foundation for paving. To date, this has been the most popular application for RCA for technical and economic reasons.
Aggregate for ready-mix concrete, by replacing from 10 to 45% of the natural aggregates in the concrete mix with a blend of cement, sand and water. Some concept buildings are showing the progress of this field. Because the RCA itself contains cement, the ratios of the mix have to be adjusted to achieve desired structural requirements such as workability, strength and water absorption.
Soil stabilization, with the incorporation of recycled aggregate, lime, or fly ash into marginal-quality subgrade material used to enhance the load-bearing capacity of that subgrade.
Pipe bedding: serving as a stable bed or firm foundation in which to lay underground utilities. Some countries' regulations prohibit the use of RCA and other construction and demolition wastes in filtration and drainage beds due to potential contamination with chromium and pH-value impacts.
Landscape materials: to promote green architecture. To date, recycled concrete aggregate has been used for boulder/stacked rock walls, underpass abutment structures, erosion structures, water features, retaining walls, and more.
Cradle-to-cradle challenges
The applications developed for RCA so far are not exhaustive, and many more uses are likely to be developed as regulations, institutions and norms find ways to accommodate construction and demolition waste as a secondary raw material in a safe and economical way. However, considering the purpose of achieving circularity of resources in the concrete life cycle, the only application of RCA that can be considered recycling of concrete is the replacement of natural aggregates in concrete mixes; all other applications would fall under the category of downcycling. It is estimated that even near-complete recovery of concrete from construction and demolition waste will only supply about 20% of total aggregate needs in the developed world.
The path towards circularity goes beyond concrete technology itself, depending on multilateral advances in the cement industry, research and development of alternative materials, building design and management, and demolition, as well as conscious use of spaces in urban areas to reduce consumption.
World records
The world record for the largest concrete pour in a single project is held by the Three Gorges Dam in Hubei Province, China, built by the Three Gorges Corporation. The amount of concrete used in the construction of the dam is estimated at 16 million cubic meters over 17 years. The previous record was 12.3 million cubic meters, held by the Itaipu hydropower station in Brazil.
The world record for concrete pumping was set on 7 August 2009 during the construction of the Parbati Hydroelectric Project, near the village of Suind, Himachal Pradesh, India, when the concrete mix was pumped through a vertical height of 715 m (2,346 ft).
The Polavaram dam works in Andhra Pradesh entered the Guinness World Records on 6 January 2019 by pouring 32,100 cubic metres of concrete in 24 hours.
The world record for the largest continuously poured concrete raft was achieved in August 2007 in Abu Dhabi by the contracting firm Al Habtoor-CCC Joint Venture, with concrete supplied by Unibeton Ready Mix. The pour (part of the foundation for Abu Dhabi's Landmark Tower) was 16,000 cubic meters of concrete poured within a two-day period. The previous record, 13,200 cubic meters poured in 54 hours despite a severe tropical storm requiring the site to be covered with tarpaulins to allow work to continue, was achieved in 1992 by the joint Japanese and South Korean consortium of Hazama Corporation and Samsung C&T Corporation for the construction of the Petronas Towers in Kuala Lumpur, Malaysia.
The world record for the largest continuously poured concrete floor was completed on 8 November 1997 in Louisville, Kentucky by design-build firm EXXCEL Project Management. The monolithic placement consisted of 225,000 square feet (20,900 m2) of concrete placed in 30 hours, finished to a flatness tolerance of FF 54.60 and a levelness tolerance of FL 43.83. This surpassed the previous record by 50% in total volume and 7.5% in total area.
The record for the largest continuously placed underwater concrete pour was completed on 18 October 2010 in New Orleans, Louisiana by contractor C. J. Mahan Construction Company, LLC of Grove City, Ohio. The placement consisted of 10,251 cubic yards of concrete placed in 58.5 hours using two concrete pumps and two dedicated concrete batch plants. Upon curing, this placement allows the 50,180-square-foot (4,662 m2) cofferdam to be dewatered approximately 26 feet (7.9 m) below sea level to allow the construction of the Inner Harbor Navigation Canal Sill & Monolith Project to be completed in the dry.
See also
Concrete leveling – Process to level concrete by levelling its underlying foundation
Concrete mixer – Device that combines cement, aggregate, and water to form concrete
Concrete masonry unit – Standard-sized block used in construction
Concrete plant – Equipment that combines various ingredients to form concrete
Syncrete – Synthetic form of concrete
Further reading
"The world's growing problem with concrete, the world's most destructive material" (Video). BBC Reel. 6 March 2023.
References
External links
Media related to Concrete at Wikimedia Commons
Advantage and Disadvantage of Concrete
Dunning, Brian (4 January 2022). "Skeptoid #813: Why You Need to Care About Concrete". Skeptoid. Retrieved 14 May 2022.
Getting Buried In Concrete To Explain How It Works on YouTube
Release of ultrafine particles from three simulated building processes
Concrete: The Quest for Greener Alternatives |
electricity sector in hong kong | The electricity sector in Hong Kong encompasses the generation, transmission, distribution and sale of electricity in Hong Kong. The combustion of coal, natural gas and oil is the main source of electricity in Hong Kong. The electricity sector contributes 60.4% of Hong Kong's total greenhouse gas emissions.
There are two main providers of electricity in Hong Kong.
Companies
Power generation in Hong Kong is managed by two major companies under a Scheme of Control arrangement with the Hong Kong Government. These companies effectively operate in a regulated market.
Hongkong Electric Company
The Hongkong Electric Company (HEC; Chinese: 香港電燈有限公司) supplies Hong Kong Island and Lamma Island. HEC owns and operates:
Lamma Power Station
Lamma Winds Power Station
CLP Power Hong Kong Limited
The CLP Power Hong Kong Limited (CLP; Chinese: 中華電力有限公司) under the CLP Group was founded on 25 January 1901 as China Light & Power Company Syndicate in British Hong Kong. CLP's supply area includes Kowloon, New Territories and outlying islands except Lamma Island.
CLP owns the following power stations in Hong Kong through Castle Peak Power Company Limited (CAPCO), a joint-venture company with China Southern Power Grid International (HK) Co., Limited.
Black Point Power Station
Castle Peak Power Station
Penny's Bay Power Station
CLP also owns a 25% share of the Guangdong Daya Bay Nuclear Power Station and wholly owns the Guangzhou Pumped Storage Power Station in Conghua.
Generation
Fuel
In 2012, Hong Kong relied on coal (53%), nuclear power (23%), natural gas (22%) and a very small amount (2%) of renewable energy for its electricity generation. As coal-fired generation units started to retire in 2017, the Government planned to raise the share of natural gas to 50% in 2020 while maintaining the share of nuclear power at present levels.
The Government announced that in 2020 around half of electricity generation had been met by natural gas, and generation from coal had been successfully reduced to about a quarter, with the remaining generation coming from imported nuclear energy from the mainland and from renewables (utility and off-grid).
Power stations in Hong Kong
Hong Kong currently has five power stations, which supplied 77% of its electricity needs in 2012.
Black Point Power Station
Commissioned in 1996, the Black Point Power Station is a gas-fired power station located in Yuen Long, Tuen Mun in the New Territories. It is the largest gas-fired power station in Hong Kong, with an installed generation capacity of 2,500 MW.
Castle Peak Power Station
Commissioned in 1982, the Castle Peak Power Station is a coal-fired power station located at Tap Shek Kok, Tuen Mun in the New Territories. It is the largest power station in Hong Kong, with an installed generation capacity of 4,108 MW.
Lamma Power Station
Commissioned in 1982, the Lamma Power Station is a coal-fired power station located at Po Lo Tsui on Lamma Island, part of the Islands District. It is the second-largest power station in Hong Kong, with an installed generation capacity of 3,237 MW.
Lamma Winds Power Station
Commissioned in 2006, the Lamma Winds Power Station is a wind turbine located on Lamma Island in the Islands District. It is the only industrial-sized wind turbine in Hong Kong, with an installed generation capacity of 800 kW.
Penny's Bay Power Station
Commissioned in 1992, the Penny's Bay Power Station is a diesel-fired gas turbine power station located at Penny's Bay on Lantau Island. It is a peaking power station with an installed generation capacity of 300 MW.
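Summing the installed capacities listed above gives a sense of the territory's total local generating capacity. A minimal sketch, using only the figures quoted above:

```python
# Installed generation capacity of Hong Kong's five local power stations, in MW.
stations_mw = {
    "Black Point": 2500,
    "Castle Peak": 4108,
    "Lamma": 3237,
    "Lamma Winds": 0.8,   # the 800 kW wind turbine
    "Penny's Bay": 300,
}
total_mw = sum(stations_mw.values())
print(f"Total installed capacity: {total_mw:,.1f} MW (~{total_mw / 1000:.1f} GW)")
# ~10,145.8 MW
```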
Interconnection with China
CLP's electrical grid is interconnected with the China Southern Power Grid of Mainland China. Hong Kong imports 23% of its total electricity needs from generating facilities in the mainland in which CLP holds equity. These include a contractual agreement for 70% of the electricity output from the 2 x 944 MW Daya Bay Nuclear Power Plant and some peaking power/pumping load from the Guangzhou Pumped Storage Power Station in Conghua. Recently, CLP said that it would buy 10% more nuclear power from the Daya Bay plant, increasing its share to 80% of the plant's output capacity. But with increasing electricity demand in southern China, it will be difficult for CLP to acquire 100% of the plant's output capacity.
Transmission
HEC transmits electricity on Hong Kong Island at the 275 kV and 132 kV voltage levels to various load centres; the network consists mainly of underground and submarine cables and is owned and operated by HEC. There are only a few remaining 132 kV overhead power lines in the system. Underground cable was chosen because it is well suited to a densely populated area like Hong Kong and ensures supply reliability even in bad weather, such as during typhoons.
There are also six dedicated cable tunnels to accommodate some of the 275 kV fluid-filled cable circuits on Hong Kong Island and Lamma Island. In most of the load centres, the voltage is stepped down to 22 kV or 11 kV for distribution purposes.
CLP transmits electricity in Kowloon and the New Territories at the 400 kV and 132 kV voltage levels. These transmission networks consist mainly of overhead lines and are owned and operated by CLP Power. In most of the load centres, the voltage is stepped down to 11 kV for distribution.
The transmission networks of CLP Power and HEC are interconnected by three 132 kV submarine circuits from Hung Hom to North Point for emergency support, but no economy power interchange is normally scheduled.
CLP's 400 kV transmission network is also interconnected with the 500 kV China Southern Power Grid in Guangdong Province.
Distribution
Electricity is distributed at the 22 kV and 11 kV voltage levels to over 3,800 distribution substations on the HEC side; CLP distributes power mainly at the 11 kV level. Voltage is further stepped down to 380 V three-phase or 220 V single-phase and supplied through low-voltage cables to customers.
Control centres
The system control centre located at Ap Lei Chau monitors and controls all of the switching in HEC's distribution substations remotely. CLP has its system control centre in Tai Po district.
Consumption
In 2021, 178,301 TJ (49.528 TWh) of electricity was consumed, accounting for 51.8% of total energy consumption in Hong Kong. Electricity usage by sector in Hong Kong was 66% commercial, 26% residential, 6% industrial and 2% transportation. Peak electricity demand was 9.942 GW.
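The two consumption figures quoted above are the same quantity in different units; the conversion is a simple one (1 TWh = 3,600 TJ). A minimal check:

```python
# Convert the quoted annual consumption from terajoules to terawatt-hours.
TJ_PER_TWH = 3600  # 1 TWh = 3.6 PJ = 3,600 TJ

consumption_tj = 178_301
consumption_twh = consumption_tj / TJ_PER_TWH
print(f"{consumption_twh:.3f} TWh")  # ~49.528 TWh, matching the figure above
```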
See also
Energy in Hong Kong
List of electricity sectors
Electricity sector in China
References
Further reading
Leung, C.T. (December 1979). "Alternative Energy Sources in Hong Kong: Policy Considerations and Constraints" (PDF). Hong Kong Journal of Public Administration. Vol. 1, no. 2. Hong Kong. pp. 40–49. Retrieved Sep 15, 2014. |
energy policy | Energy policy is the manner in which a given entity (often governmental) has decided to address issues of energy development including energy conversion, distribution and use as well as reduction of greenhouse gas emissions in order to contribute to climate change mitigation. The attributes of energy policy may include legislation, international treaties, incentives to investment, guidelines for energy conservation, taxation and other public policy techniques. Energy is a core component of modern economies. A functioning economy requires not only labor and capital but also energy, for manufacturing processes, transportation, communication, agriculture, and more. Energy planning is more detailed than energy policy.
Energy policy is closely related to climate change policy because totalled worldwide the energy sector emits more greenhouse gas than other sectors.
Purposes
Access to energy is critical for basic social needs, such as lighting, heating, cooking, and healthcare. Given the importance of energy, the price of energy has a direct effect on jobs, economic productivity, business competitiveness, and the cost of goods and services.
Frequently the dominant issue of energy policy is the risk of supply-demand mismatch (see: energy crisis). Current energy policies also address environmental issues (see: climate change), which are particularly challenging because of the need to reconcile global objectives and international rules with domestic needs and laws.
The "human dimensions" of energy use are of increasing interest to business, utilities, and policymakers. Using the social sciences to gain insights into energy consumer behavior can help policymakers to make better decisions about broad-based climate and energy options. This could facilitate more efficient energy use, renewable-energy commercialization, and carbon-emission reductions.
Approaches
The attributes of energy policy may include legislation, international treaties, incentives to investment, guidelines for energy conservation, taxation and other public policy techniques. Economic and energy modelling can be used by governmental or inter-governmental bodies as an advisory and analysis tool.
National energy policy
Some governments state an explicit energy policy. Others do not but in any case, each government practices some type of energy policy. A national energy policy comprises a set of measures involving that country's laws, treaties and agency directives. The energy policy of a sovereign nation may include one or more of the following measures:
statement of national policy regarding energy planning, energy generation, transmission and usage
legislation on commercial energy activities (trading, transport, storage, etc.)
legislation affecting energy use, such as efficiency standards, emission standards
instructions for state-owned energy sector assets and organizations
active participation in, co-ordination of, and incentives for mineral fuel exploration (see geological survey) and other energy-related research and development policy
fiscal policies related to energy products and services (taxes, exemptions, subsidies, etc.)
energy security and international policy measures such as:
international energy sector treaties and alliances,
general international trade agreements,
special relations with energy-rich countries, including military presence and/or domination.
There are a number of elements that are naturally contained in a national energy policy, regardless of which of the above measures was used to arrive at the resultant policy. The chief elements intrinsic to an energy policy are:
What is the extent of energy self-sufficiency for this nation?
Where will future energy sources derive from?
How will future energy be consumed (e.g. among sectors)?
What fraction of the population will it be acceptable to leave in energy poverty?
What are the goals for future energy intensity (the ratio of energy consumed to GDP)?
What is the standard for distribution reliability?
What environmental externalities are acceptable and forecast?
What form of "portable energy" is forecast (e.g. sources of fuel for motor vehicles)?
How will energy-efficient hardware (e.g. hybrid vehicles, household appliances) be encouraged?
How can the national policy drive provincial, state and municipal functions?
What specific mechanisms (e.g. taxes, incentives, manufacturing standards) are in place to implement the total policy?
Will a plan be developed and promoted for how to get the world to zero CO2 emissions?
What future consequences will there be for national security and foreign policy?
Relationship to other government policies
Energy policy sometimes dominates, and sometimes is dominated by, other government policies. For example, energy policy may dominate by supplying free coal to poor families and schools, thereby supporting social policy but causing air pollution and so impeding health policy and environmental policy. On the other hand, energy policy may be dominated by defense policy; for example, some countries started building expensive nuclear power plants to supply material for bombs. Or defense policy may dominate for a while, eventually resulting in stranded assets, such as Nord Stream 2.
Energy policy is closely related to climate change policy because totalled worldwide the energy sector emits more greenhouse gas than other sectors.
Energy policy decisions are sometimes not taken democratically.
Corporate energy policy
In 2019, some companies “have committed to set climate targets across their operations and value chains aligned with limiting global temperature rise to 1.5°C above pre-industrial levels and reaching net-zero emissions by no later than 2050”. Corporate power purchase agreements can kickstart renewable energy projects, but the energy policies of some countries do not allow or discourage them.
By type of energy
Nuclear energy
Renewable energy
By country
Energy policies vary by country, see tables below.
Examples
China
India
Ecuador
European Union
Russia
United Kingdom
United States
See also
Energy balance
Energy industry
Energy law
Energy security
Environmental policy
Oil Shockwave
Sustainable energy
World Forum on Energy Regulation (WFER)
All pages with titles containing Energy policy of
References
External links
"Energy Policies of (Country x)" series, International Energy Agency
UN-Energy - Global energy policy co-ordination
Renewable Energy Policy Network (REN21)
Information on energy institutions, policies and local energy companies by country, Enerdata Publications |
ethanol fuel in the united states | The United States became the world's largest producer of ethanol fuel in 2005. The U.S. produced 15.8 billion U.S. liquid gallons of ethanol fuel in 2019, and 13.9 billion U.S. liquid gallons (52.6 billion liters) in 2011, an increase from 13.2 billion U.S. liquid gallons (49.2 billion liters) in 2010, and up from 1.63 billion gallons in 2000. Brazil and U.S. production accounted for 87.1% of global production in 2011. In the U.S., ethanol fuel is mainly used as an oxygenate in gasoline in the form of low-level blends up to 10 percent and, increasingly, as E85 fuel for flex-fuel vehicles. The U.S. government subsidizes ethanol production.
The ethanol market share in the U.S. gasoline supply grew by volume from just over 1 percent in 2000 to more than 3 percent in 2006 to 10 percent in 2011. Domestic production capacity increased fifteen-fold after 1990, from 900 million US gallons to 1.63 billion US gal in 2000, to 13.5 billion US gallons in 2010. The Renewable Fuels Association reported 209 ethanol distilleries in operation located in 29 states in 2011.
By 2011 most cars on U.S. roads could run on blends of up to 10% ethanol (E10), and manufacturers had begun producing vehicles designed for much higher percentages. However, the fuel systems of cars, trucks, and motorcycles sold before the ethanol mandate may suffer substantial damage from the use of 10% ethanol blends. Flexible-fuel cars, trucks, and minivans use gasoline/ethanol blends ranging from pure gasoline up to 85% ethanol (E85). By early 2013 there were around 11 million E85-capable vehicles on U.S. roads. Regular use of E85 is low due to lack of fueling infrastructure, but is common in the Midwest. In January 2011 the U.S. Environmental Protection Agency (EPA) granted a waiver to allow up to 15% of ethanol blended with gasoline (E15) to be sold only for cars and light pickup trucks with a model year of 2001 or later. The EPA waiver authorizes, but does not require, stations to offer E15. Like E85, commercialization of E15 is constrained by the lack of infrastructure, as most fuel stations do not have enough pumps to offer the new E15 blend, few existing pumps are certified to dispense E15, and no dedicated tanks are readily available to store E15.
Historically most U.S. ethanol has come from corn, and the electricity required by many distilleries came mainly from coal. There is a debate about ethanol's sustainability and environmental impact. The primary issues relate to the large amount of arable land required for crops and ethanol production's impact on grain supply, indirect land use change (ILUC) effects, as well as questions about its energy balance and carbon intensity considered over its full life cycle.
History
In 1826 Samuel Morey experimented with an internal combustion chemical mixture that used ethanol (combined with turpentine and ambient air, then vaporized) as fuel. At the time, his discovery was overlooked, mostly due to the success of steam power. Ethanol fuel received little attention until 1860, when Nicholas Otto began experimenting with internal combustion engines. In 1859, oil was found in Pennsylvania, which decades later provided a new kind of fuel. A popular fuel in the U.S. before petroleum was a blend of alcohol and turpentine called "camphene", also known as "burning fluid". The discovery of a ready supply of oil and unfavorable taxation on burning fluid made kerosene a more popular fuel.
In 1896, Henry Ford designed his first car, the "Quadricycle", to run on pure ethanol. In 1908, the revolutionary Ford Model T was capable of running on gasoline, ethanol or a combination. Ford continued to advocate for ethanol fuel even during Prohibition, but lower prices caused gasoline to prevail.
Gasoline containing up to 10% ethanol began a decades-long growth in the United States in the late 1970s. The demand for ethanol produced from field corn was spurred by the discovery that methyl tertiary butyl ether (MTBE) was contaminating groundwater. MTBE's use as an oxygenate additive was widespread due to mandates in the Clean Air Act amendments of 1992 to reduce carbon monoxide emissions. MTBE in gasoline had been banned in almost 20 states by 2006. Suppliers were concerned about potential litigation and a 2005 court decision denying legal protection for MTBE. MTBE's fall from grace opened a new market for ethanol, its primary substitute. Corn prices at the time were around US$2 a bushel. Farmers saw a new market and increased production. This demand shift took place at a time when oil prices were rising.
The steep growth in twenty-first-century ethanol consumption was driven by federal legislation aimed at reducing oil consumption and enhancing energy security. The Energy Policy Act of 2005 required use of 7,500,000,000 US gal (2.8×10^10 L) of renewable fuel by 2012, and the Energy Independence and Security Act of 2007 raised the standard to 36,000,000,000 US gal (1.4×10^11 L) of annual renewable fuel use by 2022. Of this requirement, 21,000,000,000 US gal (7.9×10^10 L) had to be advanced biofuels, defined as renewable fuels that reduce greenhouse gas emissions by at least 50%.
Recent trends
The world's top ethanol fuel producer in 2010 was the United States with 13.2 billion U.S. gallons (49.95 billion liters) representing 57.5% of global production, followed by Brazil with 6.92 billion U.S. gallons (26.19 billion liters), and together both countries accounted for 88% of the world production of 22.95 billion U.S. gallons (86.85 billion liters). By December 2010 the U.S. ethanol production industry consisted of 204 plants operating in 29 states, and 9 plants under construction or expansion, adding 560 million gallons of new capacity and bringing total U.S. installed capacity to 14.6 billion U.S. gallons (55.25 billion liters). At the end of 2010 over 90 percent of all gasoline sold in the U.S. was blended with ethanol.
Production
Beginning in late 2008 and early 2009, the industry came under financial stress due to that year's economic crisis. Motorists drove less and gasoline prices dropped sharply, while bank financing shrank. As a result, some plants operated below capacity, several firms closed plants, others laid off staff, some firms went bankrupt, plant projects were suspended and market prices declined. The Energy Information Administration raised concerns that the industry would not meet the legislated targets.
As of 2011, most of the U.S. car fleet was able to run on blends of up to 10% ethanol, and motor vehicle manufacturers produced vehicles designed to run on more concentrated blends. As of 2015, seven states – Missouri, Minnesota, Louisiana, Montana, Oregon, Pennsylvania, and Washington – required ethanol to be blended with gasoline in motor fuels. These states, particularly Minnesota, had higher ethanol usage, and according to a source at Washington University, accumulated substantial environmental and economic benefits as a result. Florida required ethanol blends as of the end of 2010, but has since repealed the requirement. Many cities had separate ethanol requirements due to non-attainment of federal air quality standards. In 2007, Portland, Oregon, became the first U.S. city to require all gasoline sold within city limits to contain at least 10% ethanol. Chicago has proposed mandating E15 within the city limits, and some area gas stations have already begun offering it.
The expanding ethanol (and biodiesel) industries provided jobs in plant construction, operations, and maintenance, mostly in rural communities. According to the RFA, the ethanol industry created almost 154,000 U.S. jobs in 2005, boosting household income by $5.7 billion. It also contributed about $3.5 billion in federal, state and local tax revenues.
The return on investment (ROI) to upgrade a service station to sell E15 is quick in today's markets. Given ethanol's discount to gasoline and the current value of RINs, retailers offering mid-level ethanol blends like E15 can quickly recoup their investments in infrastructure. Federal, state and local incentives and grant programs are available in most areas and would further help reduce the cost of equipment and installation. E15 is a higher-octane fuel and is currently available at retail fueling stations in 29 states. E15 was approved by the U.S. Environmental Protection Agency (EPA) in 2012 for use in model year 2001 and newer cars, light-duty trucks, medium-duty passenger vehicles (SUVs), and all flex-fuel vehicles (FFVs).
E85 vehicles
Ford, Chrysler, and GM are among many automobile companies that sell flexible-fuel vehicles that can run on blends ranging from pure gasoline to 85% ethanol (E85), and beginning in 2008 almost any type of automobile and light-duty vehicle was available with the flex-fuel option, including sedans, vans, SUVs and pickup trucks. By early 2013, about 11 million E85 flex-fuel cars and light trucks were in operation, though actual use of E85 fuel was limited because the ethanol fueling infrastructure was limited.
As of 2005, 68% of American flex-fuel car owners were not aware they owned an E85 flex-fuel vehicle. Flex and non-flex vehicles looked the same, there was no price difference, and American automakers did not label these vehicles. In contrast, all Brazilian automakers clearly labeled FFVs with text that was some variant of the word Flex. Beginning in 2007 many new FFV models in the US featured a yellow gas cap to remind drivers of the E85 capability. As of 2008, GM badged its vehicles with the text "Flexfuel/E85 Ethanol". Nevertheless, the U.S. Department of Energy (DOE) estimated that in 2009 only 504,297 flex-fuel vehicles were regularly fueled with E85, and these were primarily fleet-operated vehicles. As a result, only 712 million gallons were used as E85, representing just 1% of that year's ethanol consumption.
During the decade following 2000, E85 vehicles became increasingly common in the Midwest, where corn was a major crop.
Fueling infrastructure has been a major restriction hampering E85 sales. As of March 2013, there were 3,028 fueling stations selling E85 in the U.S., most of them in the Corn Belt states. As of 2008 the leading state was Minnesota with 353 stations, followed by Illinois with 181 and Wisconsin with 114. About another 200 stations that dispensed ethanol were restricted to city, state and federal government vehicles.
E15 blend
In March 2009 Growth Energy, a lobbying group for the ethanol industry, formally requested that the U.S. Environmental Protection Agency (EPA) allow the ethanol content in gasoline to be increased from 10% to 15%. In October 2010, the EPA granted a waiver to allow up to 15% blends to be sold for cars and trucks with a model year of 2007 or later, representing about 15% of vehicles on the roads. In January 2011 the waiver was expanded to authorize use of E15 in model year 2001 through 2006 passenger vehicles. The EPA decided not to grant any waiver for E15 use in motorcycles, heavy-duty vehicles, or non-road engines because current testing data did not support such a waiver. According to the Renewable Fuels Association, the E15 waivers now cover 62% of vehicles on the road in the country. In December 2010 several groups, including the Alliance of Automobile Manufacturers, the American Petroleum Institute, the Association of International Automobile Manufacturers, the National Marine Manufacturers Association, the Outdoor Power Equipment Institute, and the Grocery Manufacturers Association, filed suit against the EPA. In August 2012 the federal appeals court rejected the suit, ruling that the groups did not have legal standing to challenge the EPA's decision to issue the waiver for E15. In June 2013 the U.S. Supreme Court declined to hear an appeal from industry groups opposed to the EPA ruling on E15 and let the 2012 federal appeals court ruling stand.
According to a survey conducted by the American Automobile Association (AAA) in 2012, only about 12 million of the more than 240 million light-duty vehicles on U.S. roads in 2012 were approved by their manufacturers as fully compliant with E15 gasoline. According to the Association, BMW, Chrysler, Nissan, Toyota, and Volkswagen warned that their warranties would not cover E15-related damage. Despite the controversy, in order to adjust to EPA regulations, 2012 and 2013 model year vehicles manufactured by General Motors can use fuel containing up to 15 percent ethanol, as indicated in the vehicle owners' manuals. However, the carmaker warned that for model year 2011 or earlier vehicles, it "strongly recommend[s] that GM customers refer to their owners manuals for the proper fuel designation for their vehicles." Ford Motor Company also made all of its 2013 vehicles E15 compatible, including hybrid electrics and vehicles with EcoBoost engines. Porsches built since 2001 are also approved by their manufacturer to use E15. Volkswagen announced that, for the 2014 model year, its entire lineup would be E15 capable. Fiat Chrysler Automobiles announced in August 2015 that all 2016 model year Chrysler/Fiat, Jeep, Dodge and Ram vehicles would be E15 compatible.
Despite the EPA's waiver, there is a practical barrier to the commercialization of the higher blend due to the lack of infrastructure, similar to the limitations suffered by sales of E85, as most fuel stations do not have enough pumps to offer the new blend, few existing pumps are certified to dispense E15, and there are no dedicated tanks readily available to store E15. In July 2012 a fueling station in Lawrence, Kansas became the first in the U.S. to sell the E15 blend. The fuel is sold through a blender pump that allows customers to choose between E10, E15, E30 or E85, with the latter blends sold only to flexible-fuel vehicles. This station was followed by a Marathon fueling station in East Lansing, Michigan.
As of June 2013, about 24 fueling stations out of the 180,000 operating across the U.S. sold E15.
As of November 2012, sales of E15 were not authorized in California, and according to the California Air Resources Board (CARB), the blend was still awaiting approval; in a public statement the agency said that "it would take several years to complete the vehicle testing and rule development necessary to introduce a new transportation fuel into California's market."
Legislation and regulations
The Energy Independence and Security Act of 2007 directed DOE to assess the feasibility of using intermediate ethanol blends in the existing vehicle fleet. The National Renewable Energy Laboratory (NREL) evaluated the potential impacts on legacy vehicles and other engines. In a preliminary report released in October 2008, NREL described the effects of E10, E15 and E20 on tailpipe and evaporative emissions, catalyst and engine durability, vehicle driveability, engine operability, and vehicle and engine materials. This preliminary report found that none of the vehicles displayed a malfunction indicator light; no fuel filter plugging symptoms were observed; no cold start problems were observed at 24 °C (75 °F) and 10 °C (50 °F) under laboratory conditions; and all test vehicles exhibited a loss in fuel economy proportional to ethanol's lower energy density. For example, E20 reduced average fuel economy by 7.7% when compared to gasoline-only (E0) test vehicles.
The Obama Administration set the goal of installing 10,000 blender pumps nationwide by 2015. These pumps can dispense multiple blends, including E85, E50, E30 and E20, that can be used by E85 vehicles. The US Department of Agriculture (USDA) issued a rule in May 2011 to include flexible-fuel pumps in the Rural Energy for America Program (REAP). This ruling provided financial assistance, via grants and loan guarantees, to fuel station owners to install E85 and blender pumps.
In May 2011 the Open Fuel Standard Act (OFS) was introduced to Congress with bipartisan support. The bill required that 50 percent of automobiles made in 2014, 80 percent in 2016, and 95 percent in 2017 be manufactured and warrantied to operate on non-petroleum-based fuels, which included existing technologies such as flex-fuel, natural gas, hydrogen, biodiesel, plug-in electric and fuel cell. Considering the rapid adoption of flexible-fuel vehicles in Brazil and the fact that the cost of making flex-fuel vehicles was approximately $100 per car, the bill's primary objective was to promote a massive adoption of flex-fuel vehicles capable of running on ethanol or methanol fuel.
In November 2013, the Environmental Protection Agency opened for public comment its proposal to reduce the amount of ethanol required in the US gasoline supply as mandated by the Energy Independence and Security Act of 2007. The agency cited problems with increasing the blend of ethanol above 10%. This limit, known as the "blend wall", refers to the practical difficulty in incorporating increasing amounts of ethanol into the transportation fuel supply at volumes exceeding those achieved by the sale of nearly all gasoline as E10.
Contractual restrictions
Gasoline distribution contracts in the United States generally have provisions that make offering E15 and E85 difficult, expensive, or even impossible. Such provisions include requirements that no E85 be sold under the gas station canopy, labeling requirements, minimum sales volumes, and exclusivity provisions. Penalties for breach are severe and often allow immediate termination of the agreement, cutting off supplies to retailers. Repayment of franchise royalties and other incentives is often required.
Energy security
One rationale for ethanol production in the U.S. is increased energy security from shifting supply from oil imports to domestic sources. Ethanol production requires significant energy, but current U.S. production derives most of that energy from domestic coal, natural gas and other non-oil sources. Because 66% of U.S. oil consumption was imported in 2006, compared to a net surplus of coal and imports of just 16% of natural gas (2006 figures), the displacement of oil-based fuels by ethanol produced a net shift from foreign to domestic U.S. energy sources.
Effect on gasoline prices
The effect of ethanol use on gasoline prices is the subject of conflicting economic studies, further complicated by the non-market forces of tax credits, met and unmet government quotas, and the dramatic recent increase in domestic oil production. According to a 2012 Massachusetts Institute of Technology analysis, ethanol, and biofuel in general, does not materially influence the price of gasoline, while a run-up in the price of government-mandated Renewable Identification Number credits has driven up the price of gasoline. This is in contrast to a May 2012 Center for Agricultural and Rural Development study which showed a $0.29 to $1.09 reduction in per-gallon gasoline price from ethanol use.
The U.S. consumed 138.2×10^9 US gal (523×10^6 m3) of gasoline in 2008, blended with about 9.6×10^9 US gal (36×10^6 m3) of ethanol, representing a market share of almost 7% of supply by volume. Given its lower energy content, ethanol fuel displaced about 6.4×10^9 US gal (24×10^6 m3) of gasoline, representing 4.6 percent in equivalent energy units.
The EPA announced in November 2013 a reduction in mandated U.S. 2014 ethanol production, due to "market conditions".
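The energy-equivalent figures above follow from ethanol's lower energy density. A minimal sketch, assuming ethanol carries roughly two-thirds of gasoline's energy per gallon (the exact ratio varies slightly by source):

```python
# Reproduce the gasoline-equivalent displacement figures quoted above.
GASOLINE_GAL = 138.2e9   # 2008 U.S. gasoline consumption, gallons
ETHANOL_GAL = 9.6e9      # ethanol blended into that gasoline, gallons
ENERGY_RATIO = 2 / 3     # assumed ethanol:gasoline energy content per gallon

gasoline_equivalent = ETHANOL_GAL * ENERGY_RATIO
print(f"{gasoline_equivalent / 1e9:.1f} billion gal gasoline-equivalent")   # ~6.4
print(f"{gasoline_equivalent / GASOLINE_GAL:.1%} of the energy supplied")   # ~4.6%
```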
Tariffs and tax credits
From the 1980s until 2011, domestic ethanol producers were protected by a 54-cent-per-gallon import tariff, mainly intended to curb Brazilian sugarcane ethanol imports. Beginning in 2004, blenders of transportation fuel received a tax credit for each gallon of ethanol they mixed with gasoline. Historically, the tariff was intended to offset the federal tax credit that applied to ethanol regardless of country of origin. Several countries in the Caribbean Basin imported and reprocessed Brazilian ethanol, usually converting hydrated ethanol into anhydrous ethanol, for re-export to the United States. They avoided the 2.5% duty and the tariff thanks to the Caribbean Basin Initiative (CBI) and free trade agreements. This process was limited to 7% of U.S. ethanol consumption.
As of 2011, blenders received a US$0.45-per-gallon tax credit, regardless of feedstock; small producers received an additional US$0.10 on the first 15 million US gallons; and producers of cellulosic ethanol received credits up to US$1.01. Tax credits to promote the production and consumption of biofuels date to the 1970s. For 2011, credits were based on the Energy Policy Act of 2005, the Food, Conservation, and Energy Act of 2008, and the Energy Improvement and Extension Act of 2008.
A 2010 study by the Congressional Budget Office (CBO) found that in fiscal year 2009, biofuel tax credits reduced federal revenues by around US$6 billion, of which corn and cellulosic ethanol accounted for US$5.16 billion and US$50 million, respectively. In 2010, CBO estimated that taxpayer costs to reduce gasoline consumption by one gallon were $1.78 for corn ethanol and $3.00 for cellulosic ethanol. In a similar way, and without considering potential indirect land use effects, the costs to taxpayers of reducing greenhouse gas emissions through tax credits were about $750 per metric ton of CO2-equivalent for corn ethanol and around $275 per metric ton for cellulosic ethanol.
On June 16, 2011, the U.S. Congress approved an amendment to an economic development bill to repeal both the tax credit and the tariff, but this bill did not move forward. Nevertheless, the U.S. Congress did not extend the tariff and the tax credit, allowing both to expire on December 31, 2011. Since 1980 the ethanol industry had been awarded an estimated US$45 billion in subsidies.
Feedstocks
Corn
Corn is the main feedstock used for producing ethanol fuel in the United States. Most of the controversies surrounding U.S. ethanol fuel production and use are related to corn ethanol's energy balance and its social and environmental impacts.
Cellulose
Cellulosic sources have the potential to produce a renewable, cleaner-burning, and carbon-neutral alternative to gasoline. In his State of the Union Address on January 31, 2006, President George W. Bush stated, "We'll also fund additional research in cutting-edge methods of producing ethanol, not just from corn, but from wood chips and stalks or switchgrass. Our goal is to make this new kind of ethanol practical and competitive within six years."
On July 7, 2006, DOE announced a new research agenda for cellulosic ethanol. The 200-page scientific roadmap cited recent advances in biotechnology that could aid the use of cellulosic sources. The report outlined a detailed research plan for additional technologies to improve production efficiency. The roadmap acknowledged the need for substantial federal loan guarantees for biorefineries.
The 2007 federal budget earmarked $150 million for the research effort – more than doubling the 2006 budget. DOE invested in enzymatic, thermochemical, acid hydrolysis, hybrid hydrolysis/enzymatic, and other research approaches targeting more efficient and lower-cost conversion of cellulose to ethanol.
The first materials considered for cellulosic biofuel included plant matter from agricultural waste, yard waste, sawdust and paper. Professors R. Malcolm Brown Jr. and David Nobles Jr. of the University of Texas at Austin developed cyanobacteria with the potential to produce cellulose, glucose and sucrose, the latter two easily converted into ethanol. This offers the potential to create ethanol without plant matter.
Sugar
Producing ethanol from sugar is simpler than converting corn into ethanol. Converting sugar requires only a yeast fermentation process, whereas converting corn requires additional cooking and the application of enzymes; the energy requirement for sugar conversion is about half that for corn. Sugarcane produces more than enough energy to do the conversion, with energy left over. A 2006 U.S. Department of Agriculture report found that at market prices for ethanol, converting sugarcane, sugar beets and molasses to ethanol would be profitable. As of 2008, researchers were attempting to breed new varieties adapted to U.S. soil and weather conditions, as well as to take advantage of cellulosic ethanol technologies to also convert sugarcane bagasse.
U.S. sugarcane production occurs in Florida, Louisiana, Hawaii, and Texas. The first three plants to produce sugarcane-based ethanol went online in Louisiana in mid-2009. Sugar mills in Lacassine, St. James and Bunkie were converted to sugarcane ethanol production using Colombian technology intended to make production profitable. These three plants planned to produce 100×10^6 US gal (380×10^3 m3) of ethanol per year within five years.
By 2009 two other sugarcane ethanol production projects were being developed in Kauai, Hawaii and the Imperial Valley, California. The Hawaiian plant was projected to have a capacity of between 12–15 million US gallons (45×10^3–57×10^3 m3) a year and to supply local markets only, as shipping costs made competing in the continental US impractical. This plant went online in 2010. The California plant was expected to produce 60×10^6 US gal (230×10^3 m3) a year in 2011.
In March 2007, "ethanol diplomacy" was the focus of President George W. Bush's Latin American tour, in which he and Brazil's president, Luiz Inacio Lula da Silva, promoted the production and use of sugarcane ethanol throughout the Caribbean Basin. The two countries agreed to share technology and set international biofuel standards. Brazilian sugarcane technology transfer was intended to permit various Central American, such as Honduras, El Salvador, Nicaragua, Costa Rica and Panama, several Caribbean countries, and various Andean Countries tariff-free trade with the U.S., thanks to existing trade agreements. The expectation was that such countries would export to the United States in the short-term using Brazilian technology.In 2007, combined exports from Jamaica, El Salvador, Trinidad and Tobago and Costa Rica to the U.S. reached a total of 230.5×10^6 US gal (873×10^3 m3) of sugarcane ethanol, representing 54.1% of imports. Brazil began exporting ethanol to the U.S. in 2004 and exported 188.8×10^6 US gal (715×10^3 m3) representing 44.3% of U.S. ethanol imports in 2007. The remaining imports that year came from Canada and China.
Other feedstocks
Cheese whey, barley, potato waste, beverage waste, and brewery and beer waste have been used as feedstocks for ethanol fuel, but at a far smaller scale than corn and sugarcane ethanol, as plants using these feedstocks have the capacity to produce only 3 to 5 million US gallons (11×10^3 to 19×10^3 m3) per year.
Comparison with Brazilian ethanol
Sugarcane ethanol has an energy balance seven times greater than corn ethanol. As of 2007, Brazilian distillers' production costs were 22 cents per liter, compared with 30 cents per liter for corn-based ethanol. Corn-derived ethanol costs about 30% more because the corn starch must first be converted to sugar before distillation into alcohol. However, corn-derived ethanol returns about one-third of the feedstock to the market as distillers dried grains, a replacement for the corn used. Sugarcane ethanol production is seasonal: unlike corn, sugarcane must be processed into ethanol almost immediately after harvest.
Environmental and social impact
Environmental effects
Energy balance and carbon intensity
Until 2008, several full life cycle ("well to wheels") studies had found that corn ethanol reduces greenhouse gas emissions as compared to gasoline. In 2007 a team led by Farrell from the University of California, Berkeley evaluated six previous studies and concluded that corn ethanol reduces greenhouse gas emissions by only 13 percent. Another estimate put the reduction at 20 to 30 percent for corn ethanol, and at 85 percent for cellulosic ethanol. Both figures were estimated by Wang from Argonne National Laboratory, based on a comprehensive review of 22 studies conducted between 1979 and 2005, and on simulations with Argonne's GREET model. All of these studies included direct land use changes. However, further research examining the actual effects of the Renewable Fuel Standard from 2008 to 2016 has concluded that corn ethanol produces more carbon emissions per unit of energy – likely more than 24% more – than gasoline, when factoring in fertilizer use and land use change.
The carbon-intensity reduction estimates for a given biofuel depend on assumptions regarding several variables, including crop productivity, agricultural practices, and distillery power source and energy efficiency. None of these earlier studies considered the effects of indirect land-use changes, and though their impact was recognized, its estimation was considered too complex and more difficult to model than direct land use changes.
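The percentage figures above all follow from comparing a fuel's life cycle carbon intensity against a gasoline baseline. The short sketch below illustrates only that arithmetic; the carbon-intensity values in it are hypothetical placeholders, not numbers taken from the Farrell, Wang, or later studies.

```python
# Minimal sketch: percent GHG reduction of a fuel versus a gasoline baseline,
# based on life cycle ("well to wheels") carbon intensities in g CO2e per MJ.
# The numbers below are illustrative placeholders, not values from any study.

def ghg_reduction_percent(fuel_ci: float, gasoline_ci: float) -> float:
    """Percent reduction in life cycle GHG emissions per MJ of fuel energy."""
    return (gasoline_ci - fuel_ci) / gasoline_ci * 100.0

GASOLINE_CI = 95.0            # hypothetical gasoline baseline, g CO2e/MJ
corn_ethanol_ci = 82.0        # hypothetical: farming, distilling, transport
cellulosic_ethanol_ci = 14.0  # hypothetical

print(f"Corn ethanol:       {ghg_reduction_percent(corn_ethanol_ci, GASOLINE_CI):.0f}% reduction")
print(f"Cellulosic ethanol: {ghg_reduction_percent(cellulosic_ethanol_ci, GASOLINE_CI):.0f}% reduction")
```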
Effects of land use change
Two 2008 studies, both published in the same issue of Science Express, questioned the previous assessments. A team led by Searchinger from Princeton University concluded that once the direct and indirect effects of land use changes (ILUC) are considered, corn and cellulosic ethanol increase carbon emissions as compared to gasoline by 93 and 50 percent respectively. The study limited the analysis to a 30-year time horizon, assuming that land conversion emitted 25 percent of the carbon stored in soils and all of the carbon in plants cleared for cultivation. Brazil, China and India were considered among the overseas locations where land use change would occur as a result of diverting U.S. corn cropland, and it was assumed that new cropland in each of these regions corresponds to different types of forest, savanna or grassland based on the historical proportion of each natural land type converted to cultivation in these countries during the 1990s.
A team led by Fargione from The Nature Conservancy found that clearing natural lands for use as agricultural land to produce biofuel feedstock creates a carbon debt; this carbon debt applies to both direct and indirect land use changes. The study examined six scenarios of wilderness conversion: Brazilian Amazon to soybean biodiesel, Brazilian Cerrado to soybean biodiesel, Brazilian Cerrado to sugarcane ethanol, Indonesian or Malaysian lowland tropical rainforest to palm biodiesel, Indonesian or Malaysian peatland tropical rainforest to palm biodiesel, and U.S. Central grassland to corn ethanol.
Low-carbon fuel standards
On April 23, 2009, the California Air Resources Board approved specific rules and carbon intensity reference values for the California Low-Carbon Fuel Standard (LCFS), which was to go into effect on January 1, 2011. The consultation process produced controversy regarding the inclusion and modeling of indirect land use change effects. After CARB's ruling, among other criticisms, representatives of the ethanol industry complained that the standard overstated the negative environmental effects of corn ethanol, and also criticized the inclusion of indirect effects of land-use changes as an unfair penalty on domestically produced corn ethanol because deforestation in the developing world had been tied to US ethanol production. The LCFS emissions standard for 2011 meant that Midwest corn ethanol would not meet the California standard unless its carbon intensity was reduced.
A similar controversy arose after the U.S. Environmental Protection Agency (EPA) published, on May 5, 2009, its notice of proposed rulemaking for the new Renewable Fuel Standard (RFS). EPA's proposal included the carbon footprint from indirect land-use changes. On the same day, President Barack Obama signed a Presidential Directive with the aim of advancing biofuel research and commercialization. The Directive asked a new Biofuels Interagency Working Group, comprising the Department of Agriculture, EPA, and DOE, to develop a plan to increase flexible fuel vehicle use, assist in retail marketing, and coordinate infrastructure policies.
The group was also tasked to develop policy ideas for increasing investment in next-generation fuels and for reducing biofuels' environmental footprint.
In December 2009 two lobbying groups, the Renewable Fuels Association (RFA) and Growth Energy, filed a lawsuit challenging the LCFS's constitutionality. The two organizations argued that the LCFS violates both the Supremacy Clause and the Commerce Clause of the US Constitution, and "jeopardizes the nationwide market for ethanol." In a press release the associations announced that "If the United States is going to have a low carbon fuel standard, it must be based on sound science and it must be consistent with the U.S. Constitution".
On February 3, 2010, EPA finalized the Renewable Fuel Standard Program (RFS2) for 2010 and beyond. EPA incorporated direct emissions and significant indirect emissions, such as emissions from land use changes, along with comments and data from new studies. Adopting a 30-year time horizon and a 0% discount rate, EPA declared that ethanol produced from corn starch at a new (or expanded capacity from an existing) natural gas-fired facility using approved technologies would be considered to comply with the 20% GHG emission reduction threshold. Given the average production conditions it expected for 2022, EPA estimated that corn ethanol would reduce GHGs by an average of 21% compared to the 2005 gasoline baseline, with a 95% confidence interval spanning a 7–32% range that reflects uncertainty in the land use change assumptions. EPA's modelling also estimated mean GHG emissions for ethanol from other feedstocks, with ranges of variation driven mainly by the uncertainty in GHG emissions related to international land use change.
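EPA's choice of a 30-year time horizon and a 0% discount rate matters because any one-time land use change emissions are spread evenly over that horizon and added to a fuel's annual production emissions before the comparison with the 2005 gasoline baseline. The sketch below shows that bookkeeping with invented numbers; it is not EPA's model, which covers many fuel pathways and uncertainty ranges, but it illustrates how the horizon and the 20% threshold interact.

```python
# Illustrative bookkeeping only: spread a one-time land use change (LUC) emission
# pulse evenly over a fixed horizon (0% discount rate), add it to annual production
# emissions, and test the result against a reduction threshold. All inputs are
# invented; EPA's actual RFS2 modelling covers many pathways and uncertainty ranges.

GASOLINE_BASELINE = 98.0   # hypothetical 2005 gasoline baseline, g CO2e/MJ
THRESHOLD = 0.20           # RFS2 requires a 20% GHG reduction for conventional biofuel

def amortized_ci(production_ci, luc_pulse, horizon_years):
    """Carbon intensity with the LUC pulse spread evenly over the horizon."""
    return production_ci + luc_pulse / horizon_years

production_ci = 62.0  # hypothetical annual production emissions, g CO2e/MJ
luc_pulse = 450.0     # hypothetical one-time LUC emissions per MJ of annual output

ci_30yr = amortized_ci(production_ci, luc_pulse, 30)
reduction = 1 - ci_30yr / GASOLINE_BASELINE
print(f"30-year carbon intensity: {ci_30yr:.1f} g CO2e/MJ "
      f"({reduction:.0%} below baseline; meets 20% threshold: {reduction >= THRESHOLD})")
```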
Water footprint
Water-related concerns relate to water supply and quality, and include availability and potential overuse, pollution, and possible contamination by fertilizers and pesticides. Several studies concluded that increased ethanol production was likely to result in a substantial increase in water pollution by fertilizers and pesticides, with the potential to exacerbate eutrophication and hypoxia, particularly in the Chesapeake Bay and the Gulf of Mexico.
Growing feedstocks consumes most of the water associated with ethanol production. Corn consumes from 500–2,000 litres (110–440 imp gal; 130–530 US gal) of water per liter of ethanol, mostly for evapotranspiration. In general terms, both corn and switchgrass require less irrigation than other fuel crops. Corn is grown mainly in regions with adequate rainfall. However, corn usually needs to be irrigated in the drier climates of Nebraska and eastern Colorado. Further, corn production for ethanol is increasingly taking place in areas requiring irrigation. A 2008 study by the National Research Council concluded that "in the longer term, the likely expansion of cellulosic biofuel production has the potential to further increase the demand for water resources in many parts of the United States. Biofuels expansion beyond current irrigated agriculture, especially in dry western areas, has the potential to greatly increase pressure on water resources in some areas."
A 2009 study estimated that irrigated corn ethanol implied water consumption of between 50 US gal/mi (120 L/km) and 100 US gal/mi (240 L/km) for U.S. vehicles. This figure increased to 90 US gal/mi (210 L/km) for sorghum ethanol from Nebraska, and 115 US gal/mi (270 L/km) for Texas sorghum. By contrast, an average U.S. car effectively consumes between 0.2 US gal/mi (0.47 L/km) and 0.5 US gal/mi (1.2 L/km) running on gasoline, including extraction and refining.
In 2010 RFA argued that more efficient water technologies and pre-treated water could reduce consumption. It further claimed that non-conventional oil "sources, such as tar sands and oil shale, require far more water than conventional petroleum extraction and refining."
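The per-mile water figures combine the water used to grow the feedstock with how far a vehicle travels on a liter of ethanol. A rough conversion is sketched below; the irrigation intensity and fuel economy values in it are assumptions chosen for illustration, not figures from the 2009 study.

```python
# Rough unit conversion from irrigation water per liter of ethanol to water per
# vehicle mile. Both input values are assumptions chosen for illustration, not
# figures from the 2009 study cited above.

L_PER_US_GAL = 3.785
KM_PER_MILE = 1.609

water_per_liter_ethanol = 1000.0  # liters of irrigation water per liter of ethanol (assumed)
km_per_liter_ethanol = 7.0        # assumed vehicle fuel economy running on ethanol

water_l_per_km = water_per_liter_ethanol / km_per_liter_ethanol
water_gal_per_mile = water_l_per_km * KM_PER_MILE / L_PER_US_GAL

print(f"{water_l_per_km:.0f} L of water per km, "
      f"or about {water_gal_per_mile:.0f} US gal per mile")
```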
U.S. standard agricultural practices for most crops employ fertilizers that provide nitrogen and phosphorus along with herbicides, fungicides, insecticides, and other pesticides. Some part of these chemicals leaves the field. Nitrogen in forms such as nitrate (NO3) is highly soluble, and along with some pesticides infiltrates downwards toward the water table, where it can migrate to water wells, rivers and streams. A 2008 National Research Council study found that regionally the highest stream concentrations occur where the rates of application were highest, and that these rates were highest in the Corn Belt. These flows mainly stem from corn, which as of 2010 was the major source of total nitrogen loading to the Mississippi River.
Several studies found that corn ethanol production contributed to the worsening of the Gulf of Mexico dead zone. The nitrogen leached into the Mississippi River and out into the Gulf, where it fed giant algae blooms. As the algae died, it settled to the ocean floor and decayed, consuming oxygen and suffocating marine life, causing hypoxia. This oxygen depletion killed shrimp, crabs, worms and anything else that could not escape, and affected important shrimp fishing grounds.
Social implications
Effect on food prices
Some environmentalists, such as George Monbiot, expressed fears that the marketplace would convert crops to fuel for the rich while the poor starved, and that biofuels would cause environmental problems. The food vs fuel debate grew in 2008 as a result of the international community's concerns regarding the steep increase in food prices. In April 2008, Jean Ziegler, then the United Nations Special Rapporteur on the Right to Food, repeated his claim that biofuels were a "crime against humanity", echoing his October 2007 call for a five-year ban on the conversion of land for the production of biofuels. Also in April 2008, World Bank President Robert Zoellick stated that "While many worry about filling their gas tanks, many others around the world are struggling to fill their stomachs. And it's getting more and more difficult every day."
A July 2008 World Bank report found that from June 2002 to June 2008 "biofuels and the related consequences of low grain stocks, large land use shifts, speculative activity and export bans" accounted for 70–75% of total price rises. The study found that higher oil prices and a weak dollar explain 25–30% of the total price rise. The study said that "large increases in biofuels production in the United States and Europe are the main reason behind the steep rise in global food prices." The report argued that increased production of biofuels in these developed regions was supported by subsidies and tariffs, and claimed that without such policies, food price increases worldwide would have been smaller. It also concluded that Brazil's sugarcane ethanol had not raised sugar prices significantly, and recommended that both the U.S. and E.U. remove tariffs on ethanol imports, including those from many African countries.
An RFA rebuttal said that the World Bank analysis was highly subjective and that the author considered only "the impact of global food prices from the weak dollar and the direct and indirect effect of high petroleum prices and attribute[d] everything else to biofuels."
A 2010 World Bank study concluded that its previous study may have overestimated the impact, as "the effect of biofuels on food prices has not been as large as originally thought, but that the use of commodities by financial investors (the so-called 'financialization of commodities') may have been partly responsible for the 2007/08 spike."
A July 2008 OECD economic assessment agreed about the negative effects of subsidies and trade restrictions, but found that the impact of biofuels on food prices was much smaller. The OECD study found that existing biofuel support policies would reduce greenhouse gas emissions by no more than 0.8 percent by 2015. It called for more open markets in biofuels and feedstocks to improve efficiency and lower costs. The OECD study concluded that "current biofuel support measures alone are estimated to increase average wheat prices by about 5 percent, maize by around 7 percent and vegetable oil by about 19 percent over the next 10 years."
During the 2008 financial crisis, corn prices fell 50% from their July 2008 high by October 2008, in tandem with other commodities, including oil, while corn ethanol production continued unabated. "Analysts, including some in the ethanol sector, say ethanol demand adds about 75 cents to $1.00 per bushel to the price of corn, as a rule of thumb. Other analysts say it adds around 20 percent, or just under 80 cents per bushel at current prices. Those estimates hint that $4 per bushel corn might be priced at only $3 without demand for ethanol fuel."
Reviewing eight years of actual implementation of the Renewable Fuel Standard, researchers from the University of Wisconsin found the standard increased corn prices by 30% and prices of other crops by 20%.
See also
Further reading
Duffield, James A., Irene M. Xiarchos, and Steve A. Halbrook, "Ethanol Policy: Past, Present, and Future", South Dakota Law Review. 53 (no. 3, 2008), 425–53.
References
External links
U.S. government
Effects of Increased Biofuels on the U.S. Economy in 2022, U.S. Department of Agriculture, October 2010.
U.S. Ethanol Fuel: Data, Analysis and Trends, U.S. Department of Energy
Using Biofuel Tax Credits to Achieve Energy and Environmental Policy Goals, Congressional Budget Office, July 2010.
International organizations
Towards Sustainable Production and Use of Resources: Assessing Biofuels by the United Nations Environment Programme, October 2009.
World Bank, Biofuels: The Promise and the Risks. World Development Report 2008: Agriculture for Development
Other sources
Renewable Fuels Association web site
Thermodynamics of the Corn-Ethanol Biofuel Cycle, T. Patzek
coal power in the united states | Coal generated about 19.5% of the electricity at utility-scale facilities in the United States in 2022, down from 38.6% in 2014 and 51% in 2001. In 2021, coal supplied 9.5 quadrillion British thermal units (2,800 TWh) of primary energy to electric power plants, which made up 90% of coal's contribution to U.S. energy supply. Utilities buy more than 90% of the coal consumed in the United States.
There were over 200 coal-powered units across the United States in 2022. Coal plants have been closing since the 2010s due to cheaper and cleaner natural gas and renewables, but environmentalists say that political action is needed to close them faster in order to reduce U.S. greenhouse gas emissions and better limit climate change.
Coal has been used to generate electricity in the United States since an Edison plant was built in New York City in 1882. The first AC power station was opened by General Electric in Ehrenfeld, Pennsylvania in 1902, servicing the Webster Coal and Coke Company. By the mid-20th century, coal had become the leading fuel for generating electricity in the US. The long, steady rise of coal-fired generation of electricity shifted to a decline after 2007. The decline has been linked to the increased availability of natural gas, decreased consumption, renewable power, and more stringent environmental regulations. The Environmental Protection Agency has advanced restrictions on coal plants to counteract mercury pollution, smog, and global warming.
Trends, comparisons, and forecasts
The average share of electricity generated from coal in the US has dropped from 52.8% in 1997 to 27.4% in 2018. In 2017, there were 359 coal-powered units at the electrical utilities across the US, with a total nominal capacity of 256 GW (compared to 1,024 units at a nominal 278 GW in 2000).
The actual average generated power from coal in 2006 was 227.1 GW (1991 TWh per year), the highest in the world and still slightly ahead of China (1950 TWh per year) at that time. In 2000, the US average production of electricity from coal was 224.3 GW (1966 TWh for the year). In 2006, US electrical generation consumed 1,027 million short tons (932 million metric tons) or 92.3% of the coal mined in the US.
Due to the emergence of shale gas, coal consumption declined from 2009 onward. In the first quarter of 2012, the use of coal for electricity generation declined substantially more, falling 21% from 2011 levels. According to the U.S. Energy Information Administration, 27 gigawatts of capacity from coal-fired generators was to be retired from 175 coal-fired power plants between 2012 and 2016. Natural gas showed a corresponding increase, rising by a third over 2011, and coal's share of electricity generation dropped to just over 36%. Coal use continued to decline rapidly through November 2015, with its share around 33.6%.
Coal plants are mostly base-load plants with typical utilisation rates of 50% to 60% (corresponding to their full load hours).
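The relationship between the average-power figures (GW), annual generation (TWh), and utilisation rates quoted in this section is simple arithmetic, sketched below using the 2006 average-power figure given above; a utilisation rate translates into full load hours by multiplying by the hours in a year.

```python
# Average generated power, annual energy, and full load hours are related by the
# number of hours in a year. The 227.1 GW figure is quoted above; the utilisation
# rates are the 50-60% range given for base-load coal plants.

HOURS_PER_YEAR = 8760

avg_power_gw = 227.1
annual_twh = avg_power_gw * HOURS_PER_YEAR / 1000   # 1 TWh = 1000 GWh
print(f"{avg_power_gw} GW average ≈ {annual_twh:.0f} TWh per year")  # close to the ~1991 TWh quoted

for utilisation in (0.50, 0.60):
    print(f"{utilisation:.0%} utilisation ≈ {utilisation * HOURS_PER_YEAR:.0f} full load hours per year")
```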
Utility companies have shut down and retired aging coal-fired power plants following the Environmental Protection Agency's (EPA) implementation of the Cross-State Air Pollution Rule (CSAPR). The extent of shutdowns and reductions in utilization depends on factors such as the future price of natural gas and the cost of installing pollution control equipment; however, as of 2013, the future of coal-fired power plants in the United States did not appear promising. In 2014, estimates gauged that an additional 40 gigawatts (GW) of coal-fired capacity would retire by 2020, in addition to the nearly 20 GW that had already retired as of 2014. This was driven most strongly by inexpensive natural gas competing with coal, and by EPA's Mercury and Air Toxics Standards (MATS), which require significant reductions in emissions of mercury, acid gases, and toxic metals and were scheduled to take effect in April 2015. Over 13 GW of coal power plants built between 1950 and 1970 were retired in 2015, averaging 133 MW per plant. In Texas, the drop in natural gas prices has reduced the capacity factor of 7 of the state's coal plants (maximum output 8 GW), which contribute about a quarter of the state's electricity.
The cost of transporting coal may be around $20/ton by train, or $5–6/ton by barge and truck. A 2015 study by a consortium of environmental organizations concluded that US Government subsidies for coal production are around $8/ton for the Powder River Basin.
In 2018, 16 of the 50 US states had either no coal power in their public power supply (California, Idaho, Massachusetts, Rhode Island and Vermont), less than 5% coal in power production (Connecticut, Maine, New Hampshire, New Jersey, New York, Delaware), or between 5 and 10% (Alaska, Nevada, Mississippi, Oregon and Washington State).
Environmental impacts
In the United States, three coal-fired power plants reported the largest toxic air releases in 2001:
Duke Energy's Roxboro Steam Electric Plant in Semora, North Carolina. The four-unit, 2,462 megawatt facility is one of the largest power plants in the United States.
Reliant Energy's Keystone Power Plant in Shelocta, Pennsylvania.
Georgia Power's Bowen Steam Electric Generating Plant in Cartersville, Georgia.
The Environmental Protection Agency has classified 44 coal ash waste sites as potential hazards to communities, meaning the waste sites could cause death and significant property damage if an event such as a storm or a structural failure caused a spill. The agency estimates that about 300 dry landfills and wet storage ponds are used around the country to store ash from coal-fired power plants. The storage facilities hold the noncombustible ingredients of coal and the ash trapped by equipment designed to reduce air pollution.
Acid rain
Byproducts of coal plants have been linked to acid rain.
Sulfur dioxide emissions
In 2006, 86 coal-powered plants, with a combined capacity of 107.1 GW or 9.9% of total U.S. electric capacity, emitted 5,389,592 tons of SO2, representing 28.6% of U.S. SO2 emissions from all sources.
Carbon footprint: CO2 emissions
Emissions from electricity generation account for the largest share of U.S. greenhouse gases, 38.9% of U.S. production of carbon dioxide in 2006 (with transportation emissions close behind, at 31%). Although coal power only accounted for 49% of U.S. electricity production in 2006, it was responsible for 83% of CO2 emissions caused by electricity generation that year, or 1,970 million metric tons of CO2 emissions. A further 130 million metric tons of CO2 were released by other industrial coal-burning applications.
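Taken together, the shares above imply that coal generation was several times more carbon-intensive than the rest of the generating mix in 2006; a back-of-the-envelope version of that calculation, using only the figures quoted in this paragraph, is shown below.

```python
# Back-of-the-envelope comparison of coal's carbon intensity with the rest of the
# 2006 U.S. generating mix, using only the shares and totals quoted above.

coal_gen_share = 0.49   # coal's share of U.S. electricity generation, 2006
coal_co2_share = 0.83   # coal's share of power-sector CO2, 2006
coal_co2_mt = 1970.0    # million metric tons of CO2 from coal power, 2006

total_power_co2 = coal_co2_mt / coal_co2_share
noncoal_co2 = total_power_co2 - coal_co2_mt

# Emissions per unit of generation, in relative units (share of CO2 / share of generation)
coal_intensity = coal_co2_share / coal_gen_share
noncoal_intensity = (1 - coal_co2_share) / (1 - coal_gen_share)

print(f"Total power-sector CO2 ≈ {total_power_co2:.0f} Mt, non-coal ≈ {noncoal_co2:.0f} Mt")
print(f"Coal was ≈ {coal_intensity / noncoal_intensity:.1f}x as carbon-intensive as the rest of the mix")
```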
Mercury pollution
U.S. coal-fired electricity-generating power plants owned by utilities emitted an estimated 48 tons of mercury in 1999, the largest source of man-made mercury pollution in the U.S. In 1995–96, this accounted for 32.6% of all mercury emitted into the air by human activity in the U.S. In addition, 13.1% was emitted by coal-fired industrial and mixed-use commercial boilers, and 0.3% by coal-fired residential boilers, bringing the total U.S. mercury pollution due to coal combustion to 46% of the U.S. man-made mercury sources. In contrast, China's coal-fired power plants emitted an estimated 200 ± 90 tons of mercury in 1999, which was about 38% of Chinese human-generated mercury emissions (45% being emitted from non-ferrous metals smelting). Mercury in emissions from power plants can be reduced by the use of activated carbon.
Public debate
Advocates
In 2007 an advertising campaign titled America's Power was launched to improve public opinion of coal power. It was run by the American Coalition for Clean Coal Electricity (then known as Americans for Balanced Energy Choices), a pro-coal organization started in 2000.
Opposition
In the face of increasing electricity demand through the 2000s, the US has seen a "Growing Trend Against Coal-Fired Power Plants". In 2005 the 790 MW Mohave Power Station closed rather than implement court-ordered pollution controls. In 2006 and 2007 there was at first a bullish market attitude towards coal, with the expectation of a new wave of plants, but political barriers and pollution concerns escalated sharply, which was likely to damage plans for new generation and put pressure on older plants. In 2007, 59 proposed coal plants were canceled, abandoned, or placed on hold by sponsors as a result of financing obstacles, regulatory decisions, judicial rulings, and new global warming legislation.
The Stop Coal campaign has called for a moratorium on the construction of any new coal plants and for the phase-out of all existing plants, citing concern for global warming. Others have called for a carbon tax and a requirement of carbon sequestration for all coal power plants.
The creation in January 2009 of a Presidential task force (to look at ways to alter the energy direction of United States energy providers) favors the trend away from coal-fired power plants.
Statistics
See also
Coal mining in the United States
Mountaintop removal mining
Superfund
References
External links
Carbon-emissions culprit? Coal.
Clean Air Watch.
State Coal Profile Index Map
Coal production in the United States – an historical overview
Is America Ready to Quit Coal? |
environmental issues in canada | Environmental issues in Canada include impacts of climate change, air and water pollution, mining, logging, and the degradation of natural habitats. As one of the world's significant emitters of greenhouse gases, Canada has the potential to make contributions to curbing climate change with its environmental policies and conservation efforts.
Climate change
Arctic melting
Scientists across the world have already started to notice massive reductions in Canada's Arctic sea ice cover, particularly during the summertime. The shrinking of this ice results in the disruption of ocean circulation, and changes in climate and weather around the world. The 2019 Canada's Changing Climate Report, written by scientists from institutions around the globe, states that the impacts of climate change on Atlantic Canada will be very diverse. One impact is that the sea ice will become thinner and will also form for much shorter periods of the year, and with less sea ice than the region usually gets now, wave seasons will become more intense. Atlantic Canada will see a relative rise in sea levels everywhere, a rise estimated to be 75 to 100 cm by the year 2100. Scientists also predict that even if emissions decrease, a 20 cm rise is expected over the course of the next 20 to 30 years. NASA studies have also found that a major ocean current in the Arctic has become faster and more turbulent due to the rapid ice melt, disrupting the delicate balance of the Arctic environment with an influx of freshwater. As the ocean warms and subtropical waters move north, the ocean will become warmer and saltier, and since warmer water holds less oxygen than cooler water, marine ecosystems can suffer and become less sustainable because of this lower oxygen level.
A study published in the journal Science in March 2019 explains that warmer waters could actually increase fish stocks in certain regions, such as the halibut found off the coast of Newfoundland and Greenland, but that other species, such as Atlantic cod and albacore tuna, might not be able to cope with the conditions as well.
Wildfires
Wildfires are a major concern in Canada, with an average of over 7,000 occurring each year. Since 1990, these fires have consumed an average of approximately 2.5 million hectares a year. Wildfires are a recurrent natural disaster in Canada, escalating due to climate change and other human-induced factors. The situation has worsened over the years, with 2023 marking a particularly devastating wildfire season. The wildfires led to massive evacuations, with tens of thousands of people displaced from their homes. In British Columbia, about 35,000 individuals were under evacuation orders, and over 30,000 were on evacuation alert due to the intensifying fires. The wildfires also caused substantial property and infrastructure damage, destroying nearly 200 homes and structures in Kelowna, BC. The Canadian federal government, along with provincial authorities, initiated several measures to combat the fires and mitigate their impacts, including deploying the military to affected regions, imposing travel restrictions, and soliciting international assistance.
Pipelines
The environmental issue of pipelines in Canada is a complex and multi-faceted concern, encompassing potential impacts on both natural ecosystems and human communities. Public opinion in Canada reflects significant opposition towards government financial involvement in oil pipelines. Many Canadians opposed a multibillion-dollar writedown on the Trans Mountain oil pipeline by the federal government. Additionally, Canadian governments have provided over CAD 23 billion to oil and gas pipelines in the last few years. This financial support was aimed at boosting the economy, but critics often argue it undermines Canada's green recovery efforts by potentially increasing carbon emissions. Economically, however, the new pipeline still has a strong case, helping open up new markets for Canadian producers.
The Canadian Energy Regulator regulates about 10% (73,000 km) of the pipelines in Canada; the Pipeline Safety Act, as a regulatory response, aims to mitigate several risks by enhancing pipeline operating safety and environmental protection measures. From a technical perspective, corrosion, construction defects, and cracking are the most commonly identified causes of pipeline leaks in Canada, emphasizing the need for robust maintenance and safety protocols. Additionally, there are measures in place for preventing and responding to marine oil spills, including the use of satellite technology for detection and surveillance and advancing science to improve cleanup technologies.
Species conservation
Endangered species and biodiversity
Species biodiversity and wildlife population numbers have been declining in Canada for decades. According to the most recent Living Planet Report Canada, species that are deemed at risk of extinction have experienced an average population decline of 59% compared to 1970. Today, there are more than 600 plant and animal species throughout Canada that are listed under the federal Species at Risk Act. This act uses a variety of measures to protect wildlife species that have been deemed endangered or are at risk of becoming endangered. These measures are designed to encourage engagement and cooperation between individual citizens, local governments, and Aboriginal peoples.
Shipping vs. orcas
There are hundreds of different species that are at risk throughout Canada, but a few of them are particularly noteworthy. The Southern Resident Killer Whale, commonly referred to as the orca, is an apex predator in coastal regions off the West Coast of Canada and the United States. These whales play a vital role in maintaining the resiliency and health of the ecosystems that they are a part of. Despite their importance, this species continues to face an increasing number of threats. Some of the most pressing threats are the result of habitat disturbance from human activity. The underwater noise that marine vessels produce interferes with the orcas' echolocation abilities, impacting their ability to locate food. Shipping activities also impact orcas in other ways, including oil spills, ship strikes, and pollution. As a result of these threats, the current population of this species is estimated to be 71 in total. Their preservation is very important to marine health in the regions that they inhabit.
Polar bear
Another important endangered species to highlight is the polar bear. Two-thirds of the world's polar bears live in Canadian portions of the Arctic. Polar bears are another apex predator that serves as an important indicator of the health of the ecosystems they are part of. The greatest threat to this species is the loss of their primary habitat: sea ice. Sea ice is where polar bears raise their cubs, and it is also the habitat of their primary food source, ringed seals. As climate change causes sea ice to melt, polar bear population numbers have fallen dramatically, making this species a direct indicator of the effects of climate change on the region. Right now, there are estimated to be around 16,000 polar bears throughout Canada.
Resource conservation
The Rainforest Action Network and indigenous groups have campaigned to protect the boreal forest of Canada from logging and mining. In July 2008 the Ontario government announced plans to protect some of the area from all industrial activity.
Logging
In 2021, the logging industry accounted for around 20 megatonnes more greenhouse gas emissions than electricity production and emissions from Canada's tar sands. The Canadian government has repeatedly failed to report accurate greenhouse gas (GHG) emissions for the logging industry, absolving it from accountability for its contribution to GHG emissions. Every year, the logging industry cuts down around half a million hectares of forest in Canada, and it is responsible for over 10% of the country's total greenhouse gas emissions. The Canadian logging industry employs denial as a tactic, claiming its practices are sustainable and better than those of other countries. However, the current clearcutting and replanting methods contribute to significant forest loss and biodiversity decline, along with releasing carbon into the atmosphere.
The industry minimizes its role in environmental issues by deflecting responsibility onto "naturally occurring phenomena" when it comes to fires and forest loss. This misinformation has influenced logging policies in Canada, prioritizing short-term profits over environmental protection. It has led to the neglect of necessary safeguards, such as boreal caribou habitat protections and environmental erosion regulations. Ignoring the industry's adverse effects on forests, communities, and the climate has allowed the Canadian government and logging companies to continue to fall short of creating a climate-safe logging sector.
Mining
Abandoned fossil fuel wells
According to the Alberta Energy Regulator (AER), approximately 170,000 wells have been abandoned in Alberta, Canada. These "orphaned" wells pose threats to the surrounding communities and environment, and the risk grows the longer they remain unplugged. When the hazardous materials from these wells are not properly managed, countless chemical toxins can leak. Methane is a colorless, odorless gas with far greater heat-trapping ability than CO2, and because of its makeup, methane leaks can go undetected for years. The Wilderness Society also states that this leads to numerous negative health impacts including, but not limited to, cancer, premature birth, and asthma. In addition, other undetectable toxins released from orphaned wells gradually poison wildlife habitats, along with air and water.
Typically, companies must follow Alberta's Environmental Protection and Enhancement Act (EPEA) requirements for these abandoned sites. Reclaiming a site follows specific criteria, starting with a comprehensive site assessment determining the extent of contamination and the potential environmental risks. From there, operators must create a reclamation plan outlining strategies for returning the abandoned site to a useful and safe condition. Depending on the scale of the project, approvals are required to ensure the reclamation work complies with relevant laws and standards.
The EPEA has a specific set of requirements for the cleanup of contaminated sites, which often requires ongoing monitoring and reporting to ensure that the reclamation plans are being met. These projects are tedious and can take years or decades to complete. As of 2020, Canada's Office of the Parliamentary Budget Officer (PBO) estimated the cost of these orphan well clean-up projects at around $361 million. By 2025, the clean-up cost is estimated to reach around $1.1 billion.
Pollution
Air pollution
Chemical pollution
The Aamjiwnaang First Nation community has expressed concern regarding its proximity to chemical plants, as birth rates of their people have been documented by the American journal Environmental Health Perspectives as deviating from the normal ratio of close to 50% boys, 50% girls. The ratio as found between 1999 and 2003 by the journal was roughly 33% boys, and 67% girls. The First Nation is concerned that this abnormal trend is due to adverse effects of maternal and fetal exposure to the effluent and emissions of the nearby chemical plants. This is the first community in the world to have a birth rate of two girls to every boy.
Mining pollution
Canada, like most other countries with a significant mining industry, faces environmental problems associated with mining. Notable cases include:
Environmental impact of the Athabasca oil sands
Cleanup of the Colomac Mine
Acid mine drainage from the Northland Pyrite Mine
Mary River Mine environmental concerns
Plastic pollution
In 2022 Canada announced a ban on producing and importing single-use plastics from December 2022. The sale of those items will be banned from December 2023 and their export from 2025. The prime minister of Canada, Justin Trudeau, pledged to ban single-use plastics in 2019. Currently in Canada, "Up to 15 billion plastic checkout bags are used each year and approximately 16 million straws are used every day".
Tar sands
Tar sands are areas of land containing an unconventional mixture of sand, clay, water, and a petroleum-based residue called bitumen, which can be used to produce crude oil. According to the NRDC, Canada currently holds some of the largest crude oil deposits in the world. The development of tar sands requires extensive infrastructure, such as roads and pipelines. Canada's oil and gas sector, primarily driven by tar sands, contributes 26% of the country's greenhouse gas emissions. Tar sands production surged by 456% between 1990 and 2018, resulting in a carbon footprint larger than that of New Zealand and Kenya combined. This expansion has led to the clearance or degradation of millions of acres of the boreal forest, endangering vital habitats for wildlife. The boreal forest serves as a massive carbon sink; with these areas rapidly being destroyed, there are even more concerns around air pollution and water contamination. Additionally, these extraction sites violate Indigenous rights as tar sands encroach on traditional lands, causing environmental contamination and health issues.
Refining of tar sands produces air pollutants, including sulfur dioxide and nitrogen oxides, which can have adverse effects on air quality and human health. This process produces three times more carbon emissions than the production of conventional crude. According to another NRDC article, mining operations require flattening the forests in order to access the tar sands. Extraction requires a substantial amount of water, which also contaminates local water sources and disrupts aquatic ecosystems. Leftover waste from tar sands processing, known as tailings, is stored in large ponds. These ponds pose a risk of leakage, which can contaminate nearby water sources and harm aquatic life. While around 150 Nations have signed a Treaty against Tar Sands Expansion, Canadian governments continue to support these projects, posing a threat to Indigenous lands and the environment.
Population
Economy of Alberta
Alberta's economy, notably its substantial fossil fuel industry, poses significant environmental challenges, making it a crucial topic for discussing environmental issues in Canada. The extraction of non-conventional oil from the oil sands is particularly impactful, contributing to greenhouse gas emissions, water and air pollution, and land disturbance. Despite this, Alberta is striving to mitigate environmental impacts by diversifying its economic sectors. The province's efforts to transition towards a more sustainable economic model, balancing economic growth with environmental stewardship, encapsulate broader environmental endeavors within Canada, portraying a microcosm of the challenges and actions toward sustainability.
Indigenous rights and land use
According to the 2021 Canadian Census, over 1.8 million people self-identified as Indigenous. Despite this demographic accounting for only 5.0% of the total population, there has been a pattern where most of the toxic, polluting industries and corporations are located directly adjacent to indigenous communities. This has placed a disproportionately high environmental burden on these communities, exposing indigenous peoples to the health risks that are associated with these polluting facilities more so than other Canadian citizens. For example, there is a region colloquially referred to as ‘Chemical Valley’ which has the largest concentration of chemical plants and refineries in the entire country – and this region is directly bordering the Aamjiwnaang First Nation, an indigenous community in Sarnia, Ontario. Members of this community believe that the air, water, and soil pollution from these chemical facilities has contributed to higher rates of asthma and cancer amongst its residents. In 2019, the United Nations Special Rapporteur on human rights and hazardous substances visited this region, and concluded that the Aamjiwnaang community, as well as other indigenous communities throughout Canada, are in fact disproportionately affected by toxic waste compared to other demographic groups. In response to this situation, some grassroots groups and movements have been formed in order to fight to change this imbalance. For example, ‘Land Back’ is an Indigenous-led movement that leads protests and demonstrations. Their goal is to help influence policy changes that would reclaim land for indigenous groups, allowing them to have control over how that land is used, extracted, and polluted.
See also
Environment of Canada
Environmental policy of the Harper government
Environmental racism
Water pollution in Canada
Hard Choices: Climate Change in Canada
List of environmental issues
Pollution in Canada
RAVEN (Respecting Aboriginal Values & Environmental Needs)
Carbon pricing in Canada
2023 Canadian wildfires
Pipelines in Canada
References
External links
Environment and Climate Change Canada |
climate target | A climate target, climate goal or climate pledge is a measurable long-term commitment for climate policy and energy policy with the aim of limiting climate change. Researchers, including those on the UN climate panel, have identified probable consequences of global warming for people and nature at different levels of warming. Based on this, politicians in a large number of countries have agreed on temperature targets for warming, which form the basis for scientifically calculated carbon budgets and pathways to achieve these targets. This in turn forms the basis for politically decided global and national emission targets for greenhouse gases, targets for fossil-free energy production and efficient energy use, and for the extent of planned measures for climate change mitigation and adaptation.
At least 164 countries have implemented climate targets in their national climate legislation.
Global climate targets
Global climate targets are goals that a large number of countries have agreed upon, including at United Nations Climate Change conferences (COP). Targets often referred to are:
The Climate Convention – an international environmental treaty adopted at the Rio Conference in Brazil in 1992.
Targets for 2008 to 2012: In the Kyoto Protocol of 1997, 160 countries committed to reducing their greenhouse gas emissions by an average of 5.2 percent over the period 2008 to 2012 compared to 1990 levels.
Targets for 2013 to 2020: In the Doha Amendment to the Kyoto Protocol, a somewhat smaller group of industrialized countries committed to reducing their emissions by at least 18 percent in the period 2013 to 2020 compared to 1990.
Targets for 2030:
At COP26 in 2021, 105 countries pledged to end deforestation by 2030.
In connection with COP26 and COP27, 105 countries signed a pledge to reduce methane emissions by 30 percent by 2030 compared to 2020.
Targets for 2100:
The 2009 United Nations Climate Change Conference proposed a 2-degree climate target for global warming up to the year 2100.
The Paris Agreement (United Nations Climate Change Agreement) of 2015, under which countries make non-binding climate pledges, formally known as NDCs (and, before the agreement's ratification, as INDCs, Intended Nationally Determined Contributions), aims to keep global warming well below the 2-degree target by 2100, with further efforts to be made towards a 1.5-degree target.
Goal number 13 in the global goals for sustainable development within the Agenda 2030 deals with climate action, and was decided by the UN General Assembly in 2015. Among other things, it includes the UN Green Climate Fund.
Calculation of Emissions Targets
An emissions target or greenhouse gas emissions reduction target is a central policy instrument of international greenhouse gas emissions reduction politics and a key pillar of climate policy. Targets typically rest heavily on emissions budgets, which are calculated from the rate of warming per unit of carbon dioxide emitted, a historic baseline temperature, a desired level of confidence, and a target global average temperature to stay below.
An "emissions target" may be distinguished from an emissions budget, as an emissions target may be internationally or nationally set in accordance with objectives other than a specific global temperature. This includes targets created for their political palatability, rather than budgets scientifically determined to meet a specific temperature target.
A country's determination of emissions targets is based on careful consideration of pledged NDCs (nationally determined contributions), economic and social feasibility, and political palatability. Carbon budgets can tell political entities how much carbon can be emitted before a certain temperature threshold is likely to be reached, but specific emissions targets take more into account. The exact way these targets are determined varies widely from country to country. Variation in emissions targets, and in the time allowed to meet them, depends on factors such as the accounting of land-use emissions, a country's afforestation capacity, and a country's transport emissions. Importantly, emissions targets also depend on their hypothesized reception.
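One common way to turn a temperature target into an emissions budget uses the roughly linear relationship between cumulative CO2 emissions and warming, often expressed as the transient climate response to cumulative emissions (TCRE). The sketch below shows that arithmetic with assumed values for TCRE and for warming to date; real assessments also adjust for non-CO2 forcing and report probabilistic ranges tied to the desired level of confidence rather than a single number.

```python
# Minimal carbon-budget sketch: remaining budget = (target warming - warming so far) / TCRE.
# The TCRE and warming-to-date values are assumptions for illustration; assessed values
# carry wide uncertainty ranges and require adjustments for non-CO2 forcing.

def remaining_budget_gtco2(target_c, warmed_c, tcre_c_per_1000_gt):
    """Remaining CO2 budget (Gt CO2) before the target warming level is reached."""
    return (target_c - warmed_c) / tcre_c_per_1000_gt * 1000

TCRE = 0.45           # assumed °C of warming per 1000 Gt of CO2 emitted
WARMED_SO_FAR = 1.1   # assumed warming above the pre-industrial baseline, °C

for target in (1.5, 2.0):
    budget = remaining_budget_gtco2(target, WARMED_SO_FAR, TCRE)
    print(f"{target} °C target: roughly {budget:.0f} Gt CO2 remaining")
```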
Many emissions pathways, budgets and targets also rely on the implementation of negative emissions technology. These currently undeveloped technologies are predicted to pull net emissions down even as source emissions are not reduced.
Effectiveness of Targets
Many countries' emissions targets are above the scientifically calculated allowable emissions to remain below a certain temperature threshold. In 2015, many countries pledged NDCs to limit the increase in the global average temperature to well below 2 °C above pre-industrial levels. Many of the largest emitters of GHGs, however, are on track to push global average temperature to as much as 4 °C. Some of these projections contradict agreements made in the 2015 Paris Agreement, meaning countries are not keeping to their pledged NDCs.
In addition, it is uncertain how effective many emissions targets and accompanying policies really are. For countries with high consumption-based carbon emissions, for example, the effectiveness of targets depends on strictly enforced, aligned and coordinated international policy measures. Furthermore, many ambitious policies are proposed and passed but are not practically enforced or regulated, or have unintended consequences. China's ETS (emissions trading scheme), while seeming to reduce production-based emissions, also promoted the outsourcing of emissions, contributing to a further imbalance of carbon transfer among China's provinces. The ETS evaluation also did not account for exported consumption-based emissions.
Many countries aim to reach net zero emissions in the next few decades. To reach this goal, however, there must be a radical shift in energy infrastructure. For example, in the United States, political entities are attempting to switch away from coal- and oil-based energy by replacing plants with natural gas combined cycle (NGCC) power plants. The Netherlands, by contrast, was ordered by the District Court of The Hague to reduce its greenhouse gas emissions by 25% by 2020, and the same court later ruled (Milieudefensie v. Royal Dutch Shell) that Shell must reduce its carbon dioxide emissions by 45% by 2030.
However, many find these transitions not significant enough to reach net-zero emissions. More significant changes, for example using biomass energy with carbon capture and storage (BECCS), are suggested as a viable option for countries to transition to net-zero emissions.
See also
Climate change in Europe#Climate targets
Global warming
Intergovernmental Panel on Climate Change
United Nations Climate Change conference
Nationally Determined Contributions
Bioenergy with Carbon Capture and Storage
Paris Agreement
List of countries by greenhouse gas emissions per capita
References
low-carbon fuel standard | A low-carbon fuel standard (LCFS) is an emissions trading rule designed to reduce the average carbon intensity of transportation fuels in a given jurisdiction, as compared to conventional petroleum fuels, such as gasoline and diesel. The most common methods for reducing transportation carbon emissions are supplying electricity to electric vehicles, supplying hydrogen fuel to fuel cell vehicles and blending biofuels, such as ethanol, biodiesel, renewable diesel, and renewable natural gas into fossil fuels. The main purpose of a low-carbon fuel standard is to decrease carbon dioxide emissions associated with vehicles powered by various types of internal combustion engines while also considering the entire life cycle ("well to wheels"), in order to reduce the carbon footprint of transportation.
The first low-carbon fuel standard mandate in the world was enacted by California in 2007, with specific eligibility criteria defined by the California Air Resources Board (CARB) in April 2009 and taking effect in January 2011. Similar legislation was approved in British Columbia in April 2008, and by the European Union, which proposed its legislation in January 2007 and adopted it in December 2008. The United Kingdom is implementing its Renewable Transport Fuel Obligation Program, which also applies the concept of low-carbon fuels.
Several bills have been proposed in the United States for similar low-carbon fuel regulation at a national level, but with less stringent standards than California's. As of early 2010 none had been approved. The U.S. Environmental Protection Agency (EPA) issued its final rule regarding the expanded Renewable Fuel Standard (RFS2) for 2010 and beyond on February 3, 2010. This ruling, as mandated by the Energy Independence and Security Act of 2007 (EISA), included direct emissions and significant indirect emissions from land use changes.
California Low-Carbon Fuel Standard
California Governor Arnold Schwarzenegger issued Executive Order S-1-07 on January 19, 2007, to enact a low-carbon fuel standard (LCFS). The LCFS requires oil refineries and distributors to ensure that the mix of fuel they sell in the Californian market meets the established declining targets for greenhouse gas (GHG) emissions, measured in CO2-equivalent grams per unit of fuel energy sold for transport purposes. The LCFS directive calls for a reduction of at least 10 percent in the carbon intensity of California's transportation fuels by 2020. These reductions include not only tailpipe emissions but also all other associated emissions from the production, distribution and use of transport fuels within the state. Therefore, the California LCFS considers the fuel's full life cycle, also known as the "well to wheels" or "seed to wheels" efficiency of transport fuels. The standard also aims to reduce the state's dependence on petroleum, create a market for clean transportation technology, and stimulate the production and use of alternative, low-carbon fuels in California.
The LCFS is a mix of command-and-control regulation and emissions trading, as it uses market-based mechanisms that allow providers to choose how they will reduce emissions while responding to consumer demand. Some believe that oil companies could opt for several actions to comply. For example, refiners and producers could improve the efficiency of refineries and upstream production, purchase and blend more low-carbon ethanol into gasoline products, purchase credits from electric utilities supplying low-carbon electricity to electric passenger vehicles, diversify into selling low-carbon hydrogen as a vehicle fuel, or adopt new strategies as the standard evolves. The California Global Warming Solutions Act of 2006 authorized the establishment of emissions trading in California, with rules to be adopted by 2010 and taking effect no later than January 2012.
Regulatory proceedings
In accordance with the California Global Warming Solutions Act of 2006 and the Governor's Directive, the California Air Resources Board is the agency responsible for developing the "Low-Carbon Fuel Standard Program", and it was directed to initiate the regulatory proceedings to establish and implement the LCFS. CARB identified the LCFS as an early action item, with a regulation to be adopted and implemented by 2010. Executive Order S-1-07 also ordered the California Environmental Protection Agency to coordinate activities between the University of California, the California Energy Commission and other state agencies to develop and propose a draft compliance schedule to meet the 2020 target.
As mandated by the Executive Order, a University of California team, led by Daniel Sperling of UC Davis and the late Alexander E. Farrell (UC Berkeley), developed two reports that established the technical feasibility of an LCFS, proposed the methodology to calculate the full life cycle GHG emissions from all fuels sold in the state, identified technical and policy issues, and provided a number of specific recommendations, thus providing an initial framework for the development of CARB's LCFS. These reports were presented by Governor Schwarzenegger in May 2007 and became the backbone of CARB's initial efforts to develop the LCFS, even though not all of the specific recommendations were incorporated in the staff's final proposed regulation.
Public consultation process
During 2008 and until the April 2009 LCFS ruling, CARB published on its website all technical reports prepared by its staff and collaborators regarding the definition and calculations related to the proposed LCFS regulation, conducted 16 public workshops, and submitted its studies for external peer review. Before the April 23, 2009 ruling, the Board held a 45-day public comment period that received 229 comments, 21 of which were presented during the Board hearing.
Controversy about indirect land use impacts
Among relevant and controversial comments submitted to CARB as public letters, on June 24, 2008, a group of 27 scientists and researchers from a number of universities and national laboratories, expressed their concerns arguing that there "is not enough hard empirical data to base any sound policy regulation in regards to the indirect impacts of renewable biofuels production. The field is relative new, especially when compared to the vast knowledge base present in fossil fuel production, and the limited analyses are driven by assumptions that sometimes lack robust empirical validation." With a similar opposing position, on October 23, 2008, a letter submitted to CARB by the New Fuels Alliance, representing more than two-dozen advanced biofuel companies, researchers and investors, questioned the Board intention to include indirect land use change (ILUC). In another public letter just before the ruling meeting, more than 170 scientists and economists sent a letter to CARB, urging it to account for GHG emissions from indirect land use change for biofuels and all other transportation fuels. They argued that "...there are uncertainties inherent in estimating the magnitude of indirect land use emissions from biofuels, but assigning a value of zero is clearly not supported by the science."
2009 Ruling
On April 23, 2009, CARB approved the specific rules and carbon intensity reference values for the LCFS that would go into effect on January 1, 2011. The technical proposal was approved without modifications by a 9–1 vote, setting the 2020 maximum carbon intensity reference value for gasoline at 86 grams of carbon dioxide equivalent released per megajoule of energy produced. One standard was established for gasoline and the alternative fuels that can replace it, and a second, similar standard was set for diesel fuel and its replacements. The regulation is based on an average declining standard of carbon intensity that is expected to achieve 16 million metric tons of greenhouse gas emission reductions by 2020. CARB expects the new generation of fuels to come from the development of technology that produces cellulosic ethanol from algae, wood, and agricultural waste such as straw and switchgrass, and also natural gas from municipal solid waste. It also expects the standard to drive the availability of plug-in hybrid, battery electric and fuel-cell powered cars while promoting investment in infrastructure for electric charging stations and hydrogen fueling stations.
The ruling was controversial. Representatives of the US ethanol industry complained that the rule overstates the environmental effects of corn ethanol, and also criticized the inclusion of indirect effects of land-use changes as an unfair penalty on domestically produced corn ethanol because deforestation in the developing world was being tied to US ethanol production. The initial reference value set for 2011 meant that Midwest corn ethanol would not meet the California standard unless its carbon intensity was reduced. Oil industry representatives complained that there is a cost associated with the new standard, as the LCFS will limit the use of corn ethanol blended into gasoline, leaving oil refiners with few available and viable options, such as sugarcane ethanol from Brazil, an option that means paying costly U.S. import tariffs. CARB officials and environmentalists rejected this scenario because they expect there will be plenty of time and economic incentive to develop inexpensive biofuels, hydrogen-based fuels, ethanol from cellulosic materials, or new ways to make ethanol out of corn with a smaller carbon footprint.
Brazilian ethanol producers (UNICA) welcomed the ruling, as they considered that their sugarcane ethanol had passed a critical test and expected their biofuel to enter the California market in the future; however, UNICA also urged CARB to update the data and assumptions used, which, according to them, excessively penalized their ethanol and did not reflect the technology and agricultural practices currently in use in Brazil. UNICA disagreed with the assertion that indirect land-use changes can be accurately calculated with the current methodologies. Canadian officials also complained that the standard could become an entry barrier to Alberta oil sands, as producers would have to significantly reduce their emissions or purchase expensive credits from alternative energy producers in order for their non-conventional oil to be sold in California.
They complained that the measure could be discriminating against Canadian oil sands crude as a high carbon intensity crude oil, while other heavy crude oils from other sources were not evaluated by CARB's studies.The only Board member who voted against the ruling explained that he had "hard time accepting the fact that we’re going to ignore the comments of 125 scientists", referring to the letter submitted by a group of scientists questioning the indirect land use change penalty. "They said the model was not good enough... to use at this time as a component part of such an historic new standard." CARB adopted only one main amendment to the staff proposal to bolster the standard review process, moving up the expected date of an expert working group to report on indirect land use change from January 2012 to January 2011. This change is expected to provide for a thoroughly review of the specific penalty for indirect land use change and correct it if possible. The CARB staff is also expected to report back to the Board on indirect impacts of other fuel pathways before the commencement of the standard in 2011.Fuels were rated based on their carbon intensity, estimated in terms of the quantity of grams of carbon dioxide equivalent released for every megajoule of energy produced for their full life cycle, also referred to as the fuel pathway. Carbon intensity was estimated considering the direct carbon footprint for each fuel, and for biofuels the indirect land-use effects were also included. The resulting intensities for the main biofuels readily available are the following:
The LCFS standards established in CARB's rulemaking will be periodically reviewed. The first formal review will occur by January 1, 2011. Additional reviews are expected to be conducted approximately every three years thereafter, or as necessary. The 2011 review will consider the status of efforts to develop low carbon fuels, the compliance schedule, updated technical information, and provide recommendations on metrics to address the sustainable production of low carbon fuels.According to CARB's ruling, providers of transportation fuels must demonstrate that the mix of fuels they supply meet the LCFS intensity standards for each annual compliance period. They must report all fuels provided and track the fuels’ carbon intensity through a system of "credits" and "deficits." Credits are generated from fuels with lower carbon intensity than the standard. Deficits result from the use of fuels with higher carbon intensity than the standard. A fuel provider meets its compliance obligation by ensuring that amount of credits it earns (or otherwise acquires from another party) is equal to, or greater than, the deficits it has incurred. Credits and deficits are generally determined based on the amount of fuel sold, the carbon intensity of the fuel, and the efficiency by which a vehicle converts the fuel into usable energy. Credits may be banked and traded within the LCFS market to meet obligations.Two "lookup tables" (similar to the one above) and its carbon intensity values are part of the regulation, one for gasoline and another for diesel. The carbon intensity values can only be amended or expanded by regulatory amendments, and the Board delegated to the Executive Officer the responsibility to conduct the necessary rulemaking hearings and take final action on any amendments, other than amending indirect land-use change values included in the lookup tables.
Latest Developments
On July 20, 2009, CARB published a Notice of Public Availability of modified text and availability of additional documents regarding the April 2009 rule making (Resolution 09–31), open for public comment until August 19. The supporting documents and information added to the rule making record include new pathways for Liquefied Natural Gas (LNG) from several sources, Compressed Natural Gas (CNG) from dairy digester biogas, biodiesel produced in California from used cooking oil, renewable diesel produced in California from tallow (U.S. sourced), and two additional new pathways for Brazilian sugarcane ethanol which reflect best practices already implemented in some regions of the country.The two additional scenarios for sugarcane ethanol were requested by the Board in order to account for improved harvesting practices and the export of electricity from sugarcane ethanol plants in Brazil using energy from bagasse. These two scenarios are not to be considered average for all of Brazilian ethanol but specific cases when such practices are adopted in Brazil. Scenario 1 considers mechanized harvesting of cane which is gradually replacing the traditional practice of burning straw before harvesting cane, and the sale of electricity (co-generated) from power plants that are capable of exporting additional energy beyond that required for processing in the plant (co-product credit). Scenario 2 only considers the export of electricity (co-product) from power plants capable of producing the additional electricity for export. The assumptions or values for the baseline pathway published in February 2009 are the same, including the estimates of indirect land use change for all Brazilian sugarcane scenarios.In December 2009 the Renewable Fuels Association (RFA) and Growth Energy, two U.S. ethanol lobbying groups, filed a lawsuit in the Federal District Court in Fresno, California, challenging the constitutionality of the California Low Carbon Fuel Standard (LCFS). The two organizations argued that the LCFS violates both the Supremacy Clause and the Commerce Clause of the US Constitution, and "jeopardizes the nationwide market for ethanol". In a press release both associations announced that "If the United States is going to have a low carbon fuel standard, it must be based on sound science and it must be consistent with the U.S. Constitution..." and that "One state cannot dictate policy for all the others, yet that is precisely what California has aimed to do through a poorly conceived and, frankly, unconstitutional LCFS." Additional lawsuits against the California regulation were filed by refiners and truckers including Rocky Mountain Farmers Union; Redwood County Minnesota Corn and Soybean Growers; Penny Newman Grain, Inc.; Red Nederend; Fresno County Farm Bureau; Nisei Farmers League; California Dairy Campaign; National Petrochemical and Refiners Association; American Trucking Associations; Center for North American Energy Security; and the Consumer Energy Alliance.In December 2011 a federal judge granted a preliminary injunction against the implementation of California's LCFS. In three separate rulings the judge rejected CARB's defense as he concluded that the state acted unconstitutionally and the regulation "impermissibly treads into the province and powers of our federal government, reaches beyond its boundaries to regulate activity wholly outside of its borders." CARB announced it intends to appeal the decision. 
The Ninth Circuit Court of Appeals issued a stay of the injunction on 23 April 2012 during the tendency of the litigation. In other words, the challenge to the constitutionality of the LCFS continues, but until it is resolved there is no bar on the CARB continuing to enforce the LCFS. (While the stay did not specifically authorize a return to the LCFS, CARB argued in its briefs before the Court that a stay would "permit the LCFS to go back into effect as though the injunction had never been issued". That is the approach currently taken by CARB and it continues to refine carbon intensity standards and applicability).In 2011, a provision was added to the LCFS that allows refiners to receive credits for the deployment of innovative crude production technologies, such as carbon capture and sequestration or solar steam generation. Solar thermal enhanced oil recovery is a form of enhanced oil recovery (EOR), which is key to harvesting California's heavy crude. Currently, California uses EOR to help produce about 60% of its crude output. By using solar power instead of natural gas to create steam for EOR, solar steam generation reduces the amount of emissions produced during oil extraction, thus lowering the overall carbon intensity of crude. California currently has two solar EOR projects in operation, one in McKittrick, operated by LINN Energy (formerly Berry Petroleum) using enclosed trough technology from GlassPoint Solar, and another in Coalinga operated by Chevron Corporation using BrightSource Energy power tower technology.
CARB is currently considering an amendment to allow upstream operators to receive credits for deploying innovative crude production technologies.In 2015, California's LCFS was re-adopted in order to address some of the issues in the original proposed standard. A number of the changes were made, including updated crude provisions, a new model used to be used to calculate carbon intensity, the establishment of a "Credit Clearance" process that would take effect at the end of the year should the LCFS credit market become too competitive, and other provisions.In May 2016, the Seneca Solar Project became the first facility to start earning LCFS credits. Located in the North Midway Sunset oil field in Taft, Kern County, California, this facility met the threshold of 0.10gCO2/ MJ carbon intensity (CI) reduction.Shortly after that, in August 2016, the SB 32 was passed, which changed the target for green house gas (GHG) reduction to 40% below the levels in 1990, to be achieved by 2030. It is anticipated that this will lead to the tightening of LCFS standards from 2020 all the way through 2030.In November 2017, GlassPoint announced a partnership with Aera Energy to bring its enclosed trough technology to the South Belridge Oil Field, near Bakersfield, California. When done, the facility will be California's largest solar EOR field. It is projected to produce approximately 12 million barrels of steam per year through a 850MW thermal solar steam generator. It will also cut carbon emissions from the facility by 376,000 metric tons per year.
US National Low-Carbon Fuel Standard
Using California's LCFS as a model, several bills have been presented to establish a national low-carbon fuel standards at the federal level.
2007
Senators Barbara Boxer, Dianne Feinstein, and future President Barack Obama introduced in 2007 competing bills with varying versions of California's LCFS.
In March 2007, Senator Dianne Feinstein sponsored the "Clean Fuels and Vehicles Bill", which would have reduced emissions from motor vehicle fuels by 10 percent below projected levels by 2030, and would have required fuel suppliers to increase the percentage of low-carbon fuels – biodiesel, E-85 (made with cellulosic ethanol), hydrogen, electricity, and others – in the motor vehicle fuel supply.
California Senator Barbara Boxer presented on May 3, 2007, the "Advanced Clean Fuels Act of 2007". This bill was an amendment to the Clean Air Act to promote the use of advanced clean fuels that help reduce air and water pollution and protect the environment.
Then Senator Obama introduced his bill on May 7, 2007. The "National Low Carbon Fuel Standard Act of 2007" would have required fuel refineries to reduce the lifecycle greenhouse gas emissions of the transportation fuels sold in the U.S. by 5 percent in 2015 and 10 percent in 2020.
2009
In March 2009, the Waxman-Markey Climate Bill was introduced in the U.S. House Committee on Energy and Commerce, and it has been praised by top Obama Administration officials. The bill requires a slightly higher targets for reductions in emissions of carbon dioxide, methane, and other greenhouse gases than those proposed by President Barack Obama. The bill proposed a 20-percent emissions reduction from 2005 levels by 2020 (Obama had proposed a 14 percent reduction by 2020). Both plans aim to reduce emissions by about 80 percent by 2050. The Climate Change Bill was approved by the U.S. House of Representatives on June 26, 2009. As approved, emissions would be cut 17 percent from 2005 levels by 2020, and 83 percent by 2050.
EPA Renewable Fuel Standard
The Energy Independence and Security Act of 2007 (EISA) established new renewable fuel categories and eligibility requirements, setting mandatory life cycle greenhouse gas emissions thresholds for renewable fuel categories, as compared to those of average petroleum fuels used in 2005. EISA definition of life cycle GHG emissions explicitly mandated the U.S. Environmental Protection Agency (EPA) to include "direct emissions and significant indirect emissions such as significant emissions from land use changes."On May 5, 2009, the U.S. Environmental Protection Agency (EPA) released its notice of proposed rulemaking for implementation of the 2007 modification of the Renewable Fuel Standard (RFS). The draft of the regulations was released for public comment during a 60-day period. EPA's proposed regulations also included the carbon footprint from indirect land-use changes, which, as CARB's ruling, caused controversy among ethanol producers. On the same day, President Barack Obama signed a Presidential Directive with the aim to advance biofuels research and improve their commercialization. The Directive established a Biofuels Interagency Working Group which has the mandate to come up with policy ideas for increasing investment in next-generation fuels, such as cellulosic ethanol, and for reducing the environmental footprint of growing biofuels crops, particularly corn-based ethanol.An amendment was introduced in the House Appropriations Committee during the discussion of the fiscal 2010 Interior and Environment spending bill, aimed to prohibit EPA to consider indirect land-use changes in the RFS2 ruling for five years. This amendment was rejected on June 18, 2009, by a 30 to 29 vote. A similar amendment to the Waxman-Markey Climate Bill was introduced in the U.S. House Committee on Energy and Commerce. The Climate Bill was approved by the U.S. House of Representatives with a vote of 219 to 212, and included a mandate for EPA to exclude any estimation of international indirect land use changes due to biofuels for a five-year period for the purposes of the RFS2. During this period, more research is to be conducted to develop more reliable models and methodologies for estimating ILUC. By 2010 the bill is awaiting approval by the U.S. Senate.
On February 3, 2010, EPA issued its final rule regarding the expanded Renewable Fuel Standard (RFS2) for 2010 and beyond. The final rule revises the annual renewable fuel standards, and the required renewable fuel volume continues to increase reaching 36 billion gallons (136.3 billion liters) by 2022. For 2010, EISA set a total renewable fuel standard of 12.95 billion gallons (49.0 billion liters). This total volume, presented as a fraction of a refiner's or importer's gasoline and diesel volume, must be renewable fuel. The final 2010 standards set by EPA are shown in the table in the right side.As mandated by law, and in order to establish the fuel category for each biofuel, EPA included in its modeling direct emissions and significant indirect emissions such as emissions from land use changes related to the full lifecycle. EPA's modeling of specific fuel pathways incorporated comments received through the third-party peer review process, and data and information from new studies and public comments. EPA's analysis determined that both ethanol produced from corn starch and biobutanol from corn starch comply with the 20% GHG emission reduction threshold required to classify as a renewable fuel. EISA grandfathered existing U.S. corn ethanol plants, and only requires the 20% reduction in life cycle GHG emissions for any renewable fuel produced at new facilities that commenced construction after December 19, 2007.EPA also determined that ethanol produced from sugarcane, both in Brazil and Caribbean Basin Initiative countries, complies with the applicable 50% GHG reduction threshold for the advanced fuel category. Both diesel produced from algal oils and biodiesel from soy oil and renewable diesel from waste oils, fats, and greases complies with the 50% GHG threshold for the biomass-based diesel category. Cellulosic ethanol and cellulosic diesel (based on currently modeled pathways) comply with the 60% GHG reduction threshold applicable to cellulosic biofuels.The following table summarizes the mean GHG emissions estimated and the range of variations considering that the main source of uncertainty in the life cycle analysis is the emissions related to international land use change GHG emissions.
UNICA, a Brazilian ethanol producers association, welcomed the ruling and commented that they hope the classification of Brazilian sugarcane ethanol as an advanced biofuel will contribute to influence those who seek to lift the trade barriers imposed against clean energy, both in the U.S. and the rest of the world. EPA's final ruling is expected to benefit Brazilian producers, as the blending mandate requires an increasing quota of advanced biofuels, which is not likely to be fulfill with cellulosic ethanol, and then it would force blenders to import more Brazilian sugarcane-based ethanol, despite the existing 54¢ per gallon tariff on ethanol imported directly from Brazil.In the case of corn-based ethanol, EPA said that manufacturers would need to use “advanced efficient technologies” during production to meet RSF2 limits. The U.S. Renewable Fuels Association also welcomed the ruling, as ethanol producers "require stable federal policy that provides them the market assurances they need to commercialize new technologies." However, they complained that "EPA continues to rely on oft-challenged and unproven theories such as international indirect land use change to penalize U.S. biofuels to the advantage of imported ethanol and petroleum."
Other U.S. regional proposals and programs
Eleven U.S. Northeast and Mid-Atlantic states have committed to analyzing a single low-carbon fuel standard for the entire region, driving commercialization and creating a larger market for fuels with low carbon intensity. The standard is aimed to reduce greenhouse gas emissions from fuels for vehicles and other uses, including fuel used for heating buildings, industrial processes, and electricity generation. Ten of these states are members of the Regional Greenhouse Gas Initiative (RGGI). California Air Resources Board (CARB) staff has been coordinating with representatives of these States. The states developing a regional LCFS are Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.A Memorandum of Understanding concerning the development of the regional low carbon fuel standard program was signed by the Governors of each State on December 30, 2009, committing the states to an economic analysis of the program, consultation with stakeholders before ruling, and a draft model rule by early 2011.
The Oregon Clean Fuels Program
In 2009, Oregon's legislature authorized the state's Department of Environmental Quality to create a standard with the same essential structure as California's LCFS. DEQ began full implementation of the program starting in 2016. The Oregon Clean Fuels Standard (CFS) explicitly draws on life-cycle greenhouse gas intensity calculations created or approved by the California Air Resources Board for the LCFS.
By early 2019, the path of CFS credit prices seemed to suggest some de facto linkage between the California and Oregon programs. Although Oregon's credit prices have been generally lower, Oregon has experienced a similar upward movement in prices in the 2016-2018 period, in parallel with California credit price increases.
Washington Clean Fuel Standard
The state of Washington is implementing a clean fuel standard which will require carbon intensity of its transportation fuels to 20% below 2017 levels by 2034.
British Columbia Low-Carbon Fuel Requirements
The Legislative Assembly of British Columbia, Canada, approved in April 2008 the Renewable and Low Carbon Fuel Requirements Act, which mandates fuel suppliers in B.C. to sell gasoline and diesel containing 5% and 4% percent renewable fuels, respectively, by 2010, and allows the provincial government to set thresholds for the carbon intensity of fuels, taking into account their entire carbon footprint. The RLCFR Act also provides flexibility for regulated fuel suppliers to meet their obligations as they may receive notional transfers of renewable fuels and of attributable greenhouse gas emissions.
Canada Clean fuel regulations
At a national level, Canada is implementing regulations to require liquid fossil fuel providers to reduce the carbon intensity of their fuels to 15% below 2016 levels by the year 2030.
Europe
Existing regulations
The EU has mainly acted to mitigate road transport greenhouse emissions mainly through its voluntary agreement on CO2 emissions from cars and subsequently through Regulation 443/2009 which sets mandatory CO2 emission limits for new cars. The EU promoted the use of biofuels through the directive on the promotion of the use of biofuels and other renewable fuels for transport (2003/30/EC), also known as Biofuel Directive, which calls for countries across the EU aiming at replacing 5,75% of all transport fossil fuels (petrol and diesel) with biofuels by 2010. None of these regulations, however, were based on carbon intensity of fuel. Fuel quality standards in the European Union are regulated by Directive 98/70/EC.
Other European countries have their own mandates limiting consumption of conventional fossil fuels by substituting to cleaner fuels in order to reduce greenhouse gas emissions, such as the United Kingdom Renewable Transport Fuel Obligation Program (RTFO), requiring transport fuel suppliers to ensure that 5% of all road vehicle fuel comes from sustainable renewable sources by 2010.
UK Renewable Transport Fuel Obligation
The Renewable Transport Fuel Obligation is similar to California's LCFS in some aspects. Biofuel suppliers are required to report on the level of carbon savings and sustainability of the biofuels they supplied in order to receive Renewable Transport Fuel Certificates (RTFCs). Suppliers have to report on both the net GHG savings and the sustainability of the biofuels they supply according to the appropriate sustainability standards of the feedstocks from which they are produced and any potential indirect impacts of biofuel production, such as indirect land-use change or changes to food and other commodity prices that are beyond the control of individual suppliers. Suppliers that do not submit a report will not be eligible for RTFO certificates.Certificates can be claimed when renewable fuels are supplied and fuel duty is paid on them. At the end of the obligation period, these certificates may be redeemed to the RTFO Administrator to demonstrate compliance. Certificates can be traded, therefore, if obligated suppliers don't have enough certificates at the end of an obligation period they have to 'buy-out' the balance of their obligation by paying a buy-out price. The buy out price will be 15 pence per litre in the first two years.
EU low-carbon fuel standard
On January 31, 2007, the European Commission (EC) proposed new standards for transport fuels to reduce full life cycle emissions by up to 10 percent between 2011 and 2020 This was three weeks after the California LCFS Directive was announced. The EU proposal aimed to encourage the development of low-carbon fuels and biofuels, considering reductions in greenhouse gas emissions caused by the production, transport and use of the suppliers fuels.In December 2008 the European Parliament, among other measures to address climate change in the European Union, approved amendments to the fuel quality directive (98/70) as well as replacing the Biofuels Directive with a Directive on the promotion of Renewable Energy Sources as proposed by the European Commission. The revision of Directive 98/07/EC introduced a mechanism to monitor and reduce greenhouse gas emissions from the use of road transport fuels, requiring fuel suppliers to reduce GHG emissions by up to 10 percent by 2020 on a life cycle basis. Regarding land use changes, the EC was ordered to "develop a concrete methodology to minimise greenhouse gas emissions caused by indirect land use changes." The fuel directive includes provisions to promote sustainable biofuels which minimized the impacts of ILUC. The approved goal of 10 percent reduction in greenhouse gas emissions can be achieved in several ways, and not exclusively from low-carbon fuels:
At least 6% by 31 December 2020, compared to the EU-average level of life cycle greenhouse gas emissions per unit of energy from fossil fuels in 2010, obtained through the use of biofuels, alternative fuels and reductions in flaring and venting at production sites.
A further 2% reduction (subject to a review) obtained through the use of environmentally friendly carbon capture and storage technologies and electric vehicles.
An additional further 2% reduction obtained through the purchase of credits under the Clean Development Mechanism of the Kyoto Protocol.The commission is continuing development of the EU LCFS, in particular on the methodology for fossil fuel emissions, has recently consulted on various aspects of the implementation and responses have been published. Further work is underway to address Indirect Land Use Change emissions. Two modelling exercises and a model comparison exercise are being carried out to better understand the scale and nature of indirect land use change due to biofuels before the Commission makes proposals to address it.
On June 10, 2010, the EC adopted guidelines explaining how the Renewable Energy Directive (RED) should be implemented, as the Directive came into effect in December 2010. Three measures focus on the criteria for sustainability of biofuels and how to control that only sustainable biofuels are used in the EU. First, the commission is encouraging E.U. nations, industry and NGOs to set up voluntary schemes to certify biofuel sustainability. Second, the EC laid down the rules to protect untouched nature, such as forests, wetlands and protected areas, and third, a set of rules to guarantee that biofuels deliver substantial reduction in well-to-wheel greenhouse gas emissions.
Sustainable biofuel certificates
The EC decided to request governments, industry and NGO's to set up voluntary schemes to certify biofuel sustainability for all types of biofuels, including those imported into the EU. According to the EC, the overall majority of biofuels are produced in the EU, and for 2007, only 26% of biodiesel and 31% of bioethanol consumed in the EU was imported, mainly from Brazil and the United States. The Commission set standards that must be met for these schemes to gain EU recognition. One of the main criteria is that the certification scheme must be interdependently audited and fraud-resistant. Auditors must check the whole production chain, from the farmer and mill to the filling station (well-to-wheel life cycle). Auditors must check all the paper and inspect a sample of the farmers, mills and traders, and also whether the land where the feedstock for the ethanol is produced has been indeed farm land before and not a tropical forest or protected area. The certificates are to guarantee that all the biofuels sold under the label are sustainable and produced under the criteria set by the Renewable Energy Directive. Several private certification systems originally designed for sustainability more generally have adapted their standards to qualify for recognition under the Renewable Energy Directive, including the Roundtable on Sustainable Biomaterials and Bonsucro.Environmental groups complained the measures "are too weak to halt a dramatic increase in deforestation." According to Greenpeace "Indirect land use change impacts of biofuels (ILUC) production still are not properly addressed" because if not properly regulated, "ILUC impacts will continue causing major biodiversity loss and more greenhosuse gas emissions." On the other hand, industry representatives welcomed the introduction of a certification system and some dismissed the concerns regarding the lack of criteria about ILUC.UNICA, the Brazilian ethanol producers association welcome the rules more cautiously, as they consider "that gaps in the rules needed to be filled in so the "industry has a clear framework within which to operate". Some other industry organizations also said that further clarification is needed in order to implement the Renewable Energy Directive.The EC clarified that it would publish a report on the impacts of indirect land use by the end of 2010, as requested in the Renewable Energy Directive and on the basis of recently released reports that suggest that biofuels are saving greenhouse gas emissions.
Protecting untouched nature
The rules set by the Commission establish that biofuels should not be made from feedstocks from tropical forests or recently deforested areas, drained peatland, wetland or highly biodiverse areas. The corresponding Communication explains how this should be assessed and as an example, it makes it clear that the conversion of a forest to a palm oil plantation would not meet the sustainability requirements.
Promote only biofuels with high greenhouse gas savings
The Commission reiterated that Member States have to meet binding, national targets for renewable energy and that only those biofuels with high greenhouse gas emission savings count for the national targets. The corresponding Communication explains how to make the calculation, which not only includes carbon dioxide (CO2), but also methane (CH4) and nitrous oxide (N2O), both stronger greenhouse gases than CO2. Biofuels must deliver greenhouse gas savings of at least 35% compared to fossil fuels, rising to 50% in 2017 and to 60%, for biofuels from new plants, in 2018.
See also
References
External links
CARB's website for the Low-Carbon Fuel Standard Program
CARB: Proposed Regulation to Implement the Low Carbon Fuel Standard (approved April 23, 2009)
European Parliament "Monitoring and reduction of greenhouse gas emissions from fuels (road transport and inland waterway vessels"(approved December 17, 2008)
Draft bill of the ‘‘American Clean Energy and Security Act of 2009’’
EPA's Renewable Fuel Standard Program (RFS2): Regulatory Impact Analysis for RFS2 (February 2010)
EPA's RFS2 Final rule: Greenhouse Gas Reduction Thresholds
Text of the National Low-Carbon Fuel Standard Act of 2007, presented by Barack Obama (This bill never became law).
Text of the Advanced Clean Fuels Act of 2007 (This bill never became law)
DfT home site for the Renewable Transport Fuel Obligation (RTFO)
Indirect land use impacts of biofuels (Bioenergy Wiki)
Roundtable on Sustainable Biomaterials profile on database of market governance mechanisms
Status Review of California's Low Carbon Fuel Standard - July 2014, Institute of Transportation Studies (ITS), University of California, Davis.
Bonsucro profile on database of market governance mechanisms
LCFS Credit Price Tracking (The Jacobsen Pricing Benchmark) |
carbon leakage | Carbon leakage a concept to quantify an increase in greenhouse gas emissions in one country as a result of an emissions reduction by a second country with stricter climate change mitigation policies. Carbon leakage is one type of spill-over effect. Spill-over effects can be positive or negative; for example, emission reductions policy might lead to technological developments that aid reductions outside of the policy area. Carbon leakage is defined as "the increase in CO2 emissions outside the countries taking domestic mitigation action divided by the reduction in the emissions of these countries." It is expressed as a percentage, and can be greater or less than 100%. There is no consensus over the magnitude of long-term leakage effects.Carbon leakage may occur for a number of reasons: If the emissions policy of a country raises local costs, then another country with a more relaxed policy may have a trading advantage. If demand for these goods remains the same, production may move offshore to the cheaper country with lower standards, and global emissions will not be reduced.
If environmental policies in one country add a premium to certain fuels or commodities, then the demand may decline and their price may fall. Countries that do not place a premium on those items may then take up the demand and use the same supply, negating any benefit.
Coal, oil and alternative technologies
The issue of carbon leakage can be interpreted from the perspective of the reliance of society on coal, oil, and alternative (less polluting) technologies, e.g., biomass. This is based on the theory of nonrenewable resources. The potential emissions from coal, oil and gas is limited by the supply of these nonrenewable resources. To a first approximation, the total emissions from oil and gas is fixed, and the total load of carbon in the atmosphere is primarily determined by coal usage.
A policy that sets a carbon tax only in developed countries might lead to leakage of emissions to developing countries. However, a negative leakage (i.e., leakage having the effect of reducing emissions) could also occur due to a lowering in demand and price for oil and gas. One of the negative effects of carbon leakage is the undermining of global emissions reduction efforts. When industries relocate to countries with lower emission standards, it can lead to increased greenhouse gas emissions in those countries.
This might lead coal-rich countries to use less coal and more oil and gas, thus lowering their emissions. While this is of short-term benefit, it reduces the insurance provided by limiting the consumption of oil and gas. The insurance is against the possibility of delayed arrival of backstop technologies. If the arrival of alternative technologies is delayed, the replacement of coal by oil and gas might have no long-term benefit. If the alternative technology arrives earlier, then the issue of substitution becomes unimportant. In terms of climate policy, the issue of substitution means that long-term leakage needs to be considered, and not just short-term leakage.By taking into account the potential delays in alternative technologies and wider substitution effects, policymakers can develop strategies that minimize leakage and promote sustainable emissions reduction.
Current schemes
Estimates of leakage rates for action under the Kyoto Protocol ranged from 5 to 20% as a result of a loss in price competitiveness, but these leakage rates were viewed as being very uncertain. For energy-intensive industries, the beneficial effects of Annex I actions through technological development were viewed as possibly being substantial. This beneficial effect, however, had not been reliably quantified. On the empirical evidence they assessed, Barker et al. (2007) concluded that the competitive losses of then-current mitigation actions, e.g., the EU ETS, were not significant.
Recent North American emissions schemes such as the Regional Greenhouse Gas Initiative and the Western Climate Initiative are looking at ways of measuring and equalising the price of energy 'imports' that enter their trading region
See also
Carbon footprint
Carbon shifting
Carbon tax
Embedded emissions
Emissions trading
Indirect land use change impacts of biofuels
Leakage (economics)
References
Further reading
Markandya, A.; et al. (2001). "7.4.3 Valuation of Spillover Costs and Benefits. In (book chapter): 7. Costing Methodologies. In: Climate Change 2001: Mitigation. Contribution of Working Group III to the Third Assessment Report of the Intergovernmental Panel on Climate Change (B. Metz et al. Eds.)". Print version: Cambridge University Press, Cambridge, U.K., and New York, N.Y., U.S.A.. This version: GRID-Arendal website. Archived from the original on 2009-08-05. Retrieved 2010-05-12.
Reinaud, J. (October 2008). "Issues behind Competitiveness and Carbon Leakage- Focus on Heavy Industry. IEA information paper". International Energy Agency (IEA), Head of Communication and Information Office, 9 rue de la Fédération, 75739 Paris Cedex 15, France. p. 122. Archived from the original on 2010-06-15. Retrieved 2010-05-12.
Reinaud, J. (October 2008). "Climate Policy and Carbon Leakage- Impacts of the European Emissions Trading Scheme on Aluminium. IEA information paper". International Energy Agency (IEA), Head of Communication and Information Office, 9 rue de la Fédération, 75739 Paris Cedex 15, France. p. 45. Archived from the original on 2010-06-15. Retrieved 2010-05-12. |
stop esso campaign | The Stop Esso campaign was a campaign by Greenpeace, Friends of the Earth and People and Planet aimed at boycotting the oil company ExxonMobil (which owns and operates the brand Esso), on the grounds that it is damaging the environment.
The campaign alleges that Esso / ExxonMobil is:
not investing in renewable energy sources
denying the existence of global warming
funding the creation of junk science which denies climate change, delaying urgently needed climate change action
undermining the Kyoto Protocol.
Stop Esso (France) website injunction
Greenpeace was sued in France by Esso, who alleged that the company's reputation was damaged by the campaign's use of a parody Esso logo featuring dollar signs in place of the letters "ss".
Esso claimed that the $$ resemble the SS of the Nazi paramilitary organization. A French court ruled in favour of Esso, granting them an injunction against the French website. The campaign then moved their French web site to the United States. Another French judge has subsequently overturned the original ruling, so the site has moved back to France. The Stop Esso campaign continues to use the dollar sign logo.
Esso's greenhouse gas production
Stop Esso's consumer boycott has focused on the greenhouse gas production and climate change policies of Esso. Esso's critics claim the company produces twice the CO2 pollution of a country such as Norway. Company data revealed a 2% increase in greenhouse gas production in 2004 to 135.6m tonnes. Supporters of Stop Esso argue that BP has a similar level of production as Esso with nearly 50% less greenhouse gas emissions. One environmental consultancy believed Esso underestimated its greenhouse gas production because it excluded petrol stations and tankers. It estimates Exxon's production at over 300m tonnes.
Esso's reaction
In response to Stop Esso, Esso gave financial support to climate change research. However, it continued to encourage President Bush and other world leaders not to sign the Kyoto Protocol which mandates decreased production of greenhouse gases.
A proportion of Esso's greenhouse gas production arose because of the flaring of gas in Nigeria. When natural gas is brought out with oil, Esso in Nigeria burned the gas rather than processing it. Esso pledged to cease this practice by 2006.At the same time, Exxon-Mobil funded provided the non-profit, Public Interest Watch, with $120,000 of the group's $124,094 budget covering August 2003 to July 2004, when the group called for the Internal Revenue Service to audit Greenpeace USA According to Phil Radford, Greenpeace USA Executive Director, "We might not have thought more about it, but in 2006, the Wall Street Journal reported Public Interest Watch wasn't as obscure a group as we'd thought. Instead, Public Interest Watch received $120,000 of its $124,000 budget from ExxonMobil, the multinational entity Greenpeace has clashed with for years over its drilling, spilling, and denial of climate change." Greenpeace USA received a clean audit from the IRS.
Stop Esso days
UK
December 1, 2001 - about 306 Esso stations boycotted
May 18, 2002 - about 400 Esso stations boycotted
Luxembourg
October 25, 2002 - all 28 Esso stations boycotted
See also
Esso
ExxonMobil
Global warming controversy
External links
International Stop Esso Site
Greenpeace's French Stop Esso Site
Greenpeace UK Blog - The Case Against Esso
== References == |
nitrogen trifluoride | Nitrogen trifluoride (NF3) is an inorganic, colorless, non-flammable, toxic gas with a slightly musty odor. It finds increasing use within the manufacturing of flat-panel displays, photovoltaics, LEDs and other microelectronics. Nitrogen trifluoride is also an extremely strong and long-lived greenhouse gas. Its atmospheric burden exceeded 2 parts per trillion during 2019 and has doubled every five years since the late 20th century.
Synthesis and reactivity
Nitrogen trifluoride did not exist in significant quantities on Earth prior to its synthesis by humans. It is a rare example of a binary fluoride that can be prepared directly from the elements only at very uncommon conditions, such as an electric discharge. After first attempting the synthesis in 1903, Otto Ruff prepared nitrogen trifluoride by the electrolysis of a molten mixture of ammonium fluoride and hydrogen fluoride. It proved to be far less reactive than the other nitrogen trihalides nitrogen trichloride, nitrogen tribromide and nitrogen triiodide, all of which are explosive. Alone among the nitrogen trihalides it has a negative enthalpy of formation. It is prepared in modern times both by direct reaction of ammonia and fluorine and by a variation of Ruff's method. It is supplied in pressurized cylinders.
NF3 is slightly soluble in water without undergoing chemical reaction. It is nonbasic with a low dipole moment of 0.2340 D. By contrast, ammonia is basic and highly polar (1.47 D). This difference arises from the fluorine atoms acting as electron-withdrawing groups, attracting essentially all of the lone pair electrons on the nitrogen atom. NF3 is a potent yet sluggish oxidizer.
It oxidizes hydrogen chloride to chlorine:
2 NF3 + 6 HCl → 6 HF + N2 + 3 Cl2It is compatible with steel and Monel, as well as several plastics.
It converts to tetrafluorohydrazine upon contact with metals, but only at high temperatures:
2 NF3 + Cu → N2F4 + CuF2NF3 reacts with fluorine and antimony pentafluoride to give the tetrafluoroammonium salt:
NF3 + F2 + SbF5 → NF+4SbF−6Mixtures of NF3 and B2H6 are explosive even at cryogenic temperatures, reacting to produce nitrogen gas, boron trifluoride, and hydrofluoric acid.
Applications
Etching
Nitrogen trifluoride is primarily used to remove silicon and silicon-compounds during the manufacturing of semiconductor devices such as LCD displays, some thin-film solar cells, and other microelectronics. In these applications NF3 is initially broken down within a plasma. The resulting fluorine radicals are the active agents that attack polysilicon, silicon nitride and silicon oxide. They can be used as well to remove tungsten silicide, tungsten, and certain other metals. In addition to serving as an etchant in device fabrication, NF3 is also widely used to clean PECVD chambers.
NF3 dissociates more readily within a low-pressure discharge in comparison to perfluorinated compounds (PFCs) and sulfur hexafluoride (SF6). The greater abundance of negatively-charged free radicals thus generated can yield higher silicon removal rates, and provide other process benefits such as less residual contamination and a lower net charge stress on the device being fabricated. As a somewhat more thoroughly consumed etching and cleaning agent, NF3 has also been promoted as an environmentally preferable substitute for SF6 or PFCs such as hexafluoroethane.The utilization efficiency of the chemicals applied in plasma processes varies widely between equipment and applications. A sizeable fraction of the reactants are wasted into the exhaust stream and can ultimately be emitted into Earth's atmosphere. Modern abatement systems can substantially decrease atmospheric emissions. NF3 has not been subject to significant use restrictions. The annual reporting of NF3 production, consumption, and waste emissions by large manufacturers has been required in many industrialized countries as a response to the observed atmospheric growth and the international Kyoto Protocol.Highly toxic fluorine gas (F2, diatomic fluorine) is a climate neutral replacement for nitrogen trifluoride in some manufacturing applications. It requires more stringent handling and safety precautions, especially to protect manufacturing personnel.Nitrogen trifluoride is also used in hydrogen fluoride and deuterium fluoride lasers, which are types of chemical lasers. There it is also preferred to fluorine gas due to its more convenient handling properties
Greenhouse gas
NF3 is a greenhouse gas, with a global warming potential (GWP) 17,200 times greater than that of CO2 when compared over a 100-year period. Its GWP place it second only to SF6 in the group of Kyoto-recognised greenhouse gases, and NF3 was included in that grouping with effect from 2013 and the commencement of the second commitment period of the Kyoto Protocol. It has an estimated atmospheric lifetime of 740 years, although other work suggests a slightly shorter lifetime of 550 years (and a corresponding GWP of 16,800).Although NF3 has a high GWP, for a long time its radiative forcing in the Earth's atmosphere has been assumed to be small, spuriously presuming that only small quantities are released into the atmosphere. Industrial applications of NF3 routinely break it down, while in the past previously used regulated compounds such as SF6 and PFCs were often released. Research has questioned the previous assumptions. High-volume applications such as DRAM computer memory production, the manufacturing of flat panel displays and the large-scale production of thin-film solar cells use NF3.
Since 1992, when less than 100 tons were produced, production has grown to an estimated 4000 tons in 2007 and is projected to increase significantly. World production of NF3 is expected to reach 8000 tons a year by 2010. By far the world's largest producer of NF3 is the US industrial gas and chemical company Air Products & Chemicals. An estimated 2% of produced NF3 is released into the atmosphere. Robson projected that the maximum atmospheric concentration is less than 0.16 parts per trillion (ppt) by volume, which will provide less than 0.001 Wm−2 of IR forcing.
The mean global tropospheric concentration of NF3 has risen from about 0.02 ppt (parts per trillion, dry air mole fraction) in 1980, to 0.86 ppt in 2011, with a rate of increase of 0.095 ppt yr−1, or about 11% per year, and an interhemispheric gradient that is consistent with emissions occurring overwhelmingly in the Northern Hemisphere, as expected. This rise rate in 2011 corresponds to about 1200 metric tons/y NF3 emissions globally, or about 10% of the NF3 global production estimates. This is a significantly higher percentage than has been estimated by industry, and thus strengthens the case for inventorying NF3 production and for regulating its emissions.
One study co-authored by industry representatives suggests that the contribution of the NF3 emissions to the overall greenhouse gas budget of thin-film Si-solar cell manufacturing is clear.The UNFCCC, within the context of the Kyoto Protocol, decided to include nitrogen trifluoride in the second Kyoto Protocol compliance period, which begins in 2012 and ends in either 2017 or 2020. Following suit, the WBCSD/WRI GHG Protocol is amending all of its standards (corporate, product and Scope 3) to also cover NF3.
Safety
Skin contact with NF3 is not hazardous, and it is a relatively minor irritant to mucous membranes and eyes. It is a pulmonary irritant with a toxicity considerably lower than nitrogen oxides, and overexposure via inhalation causes the conversion of hemoglobin in blood to methemoglobin, which can lead to the condition methemoglobinemia. The National Institute for Occupational Safety and Health (NIOSH) specifies that the concentration that is immediately dangerous to life or health (IDLH value) is 1,000 ppm.
See also
IPCC list of greenhouse gases
Nitrogen pentafluoride
Tetrafluorohydrazine
Notes
References
External links
National Pollutant Inventory – Fluoride and compounds fact sheet at the Wayback Machine (archived December 22, 2003)
NF3 Code of Practice (European Industrial Gas Association)]
WebBook page for NF3
CDC - NIOSH Pocket Guide to Chemical Hazards |
merchants of doubt (film) | Merchants of Doubt is a 2014 American documentary film directed by Robert Kenner and inspired by the 2010 book of the same name by Naomi Oreskes and Erik M. Conway. The film traces the use of public relations tactics that were originally developed by the tobacco industry to protect their business from research indicating health risks from smoking. The most prominent of these tactics is the cultivation of scientists and others who successfully cast doubt on scientific results. Using a professional magician, the film explores the analogy between these tactics and the methods used by magicians to distract their audiences from observing how illusions are performed. For the tobacco industry, the tactics successfully delayed government regulation until long after the establishment of scientific consensus about the health risks from smoking. As its second example, the film describes how manufacturers of flame retardants worked to protect their sales after toxic effects of the retardants were reported in the scientific literature. The central concern of the film is the ongoing use of these tactics to forestall governmental action to regulate greenhouse gas emissions in response to the risks of global climate change.
Production
Interview subjects
The filmmakers interviewed more than a dozen individuals who have been involved in a series of conflicts ranging from the regulation of tobacco products to global climate change. In the order in which they appear in the film, the interview subjects are:
Stanton Glantz, a professor of medicine and activist for regulation of tobacco smoking. In 1994, Glantz received a carton of documents copied from the records of the Brown and Williamson tobacco company that revealed their awareness of health risks of smoking tobacco as early as the 1950s.
Sam Roe and Patricia Callahan, reporters at the Chicago Tribune newspaper who, in 2012, exposed "manufacturers that imperil public health by continuing to use toxic fire retardants in household furniture and crib mattresses, triggering reform efforts at the state and national level." Callahan and Roe were finalists for a Pulitzer Prize for Investigative Reporting.
James Hansen, a former NASA scientist whose 1988 testimony on climate change to congressional committees helped raise broad awareness of global warming. Hansen has become a prominent advocate for regulation of greenhouse gas emissions.
John Passacantando, a former executive director of Greenpeace, an organization of environmental activists.
William O'Keefe, the chief executive officer of the George C. Marshall Institute, an organization that opposes government regulation of greenhouse gas emissions.Naomi Oreskes, a professor of the history of science, and the co-author of the book that inspired the film.
Fred Singer, a physicist and environmental scientist who founded the Science & Environmental Policy Project (SEPP) in 1990 to work against regulation of greenhouse gas emissions, among other issues.
Michael Shermer, a writer and publisher of the magazine Skeptic. Shermer was initially a "contrarian" regarding regulation of greenhouse gas emissions, but his views changed as the scientific evidence regarding climate change advanced.
Matthew Crawford, a writer and former director of the George C. Marshall Institute. Crawford resigned over the influence of the Institute's sponsors in determining the Institute's activities and positions.
Marc Morano, a political activist who has published the climate denial website ClimateDepot since 2009. To encourage complaints to scientists whose work is viewed as supporting action on greenhouse gas emission, the website publishes their addresses.
Ben Santer, Michael E. Mann, and Katharine Hayhoe, climate scientists who have received personal threats because of their work on global climate change.
Tim Phillips, the president of Americans for Prosperity, which works against government regulation of climate change and other issues.
Bob Inglis, a former U.S. representative for South Carolina's 4th congressional district who lost his seat in the primary election following his announcement that he had changed his opinion and now believes global climate change is a problem the government should address. After losing his seat in Congress, Inglis began working to gain support for action to combat global climate change in conservative areas of the United States.
Professional magic
The film embeds commentary and performances by magician Jamy Ian Swiss. The premise of these interludes is that there is an analogy between the techniques of professional magicians and the tactics of public relations organizations: magicians learn how to distract their audiences from noticing the deceptions that underlie their tricks and illusions, while the organizations distract the public from the risks associated with products. These tactics were systematically developed by the tobacco industry in the 1950s in response to scientific research showing that smoking was a significant health risk, as the research was a significant threat to tobacco sales. The principal distraction tactic has been the use of convincing personalities who claim that uncertainties about the risks justify a delay in taking action.
An unsigned review in The Boston Globe explains: "To make his point clear, Kenner follows up Swiss’s magic act and fancy patter with a snappy montage of various experts over the years denying that cigarettes cause cancer, or extolling the virtues of pesticide, or proclaiming that asbestos is 'designed to last a lifetime — a trouble free lifetime.' And then the inevitable parade of climate change deniers bloviating in Congress or on cable news, all backed by Sinatra singing 'That Old Black Magic.'" A. O. Scott wrote in The New York Times that Swiss' "presence, and the animated playing cards that sometimes fly across the screen, feel like a glib and somewhat condescending gimmick, an attempt to wring some fun out of a grim and appalling story."
Reception
Threatened lawsuit
One of the subjects of the film, Fred Singer, wrote the director indicating that he was considering a lawsuit. Although no suit was filed, Kenner noted in an interview that "when [Singer] implies litigation is very expensive, I think it's an attempt to be intimidating." In the 1990s, Singer had sued Justin Lancaster over his statements regarding the inclusion of Roger Revelle as a co-author of a climate change paper with Singer and Chauncey Starr; Revelle had died shortly after the paper was published. That lawsuit ended when Lancaster withdrew his statements as "unwarranted", although Lancaster later expressed regret over the settlement.
Critics' views
The film was widely reviewed in the mainstream U.S. media and garnered mostly positive reviews. On review aggregator website Rotten Tomatoes, it holds an approval rating of 86% based on 90 reviews, with an average score of 7.0/10; the site's "critics consensus" reads: "Merchants of Doubt is a thought-provoking documentary assembled with energy and style, even if it doesn't dig as deep as it could." On Metacritic, the film has a weighted average score of 70 out of 100 based on 24 critics, indicating "generally favorable reviews".Justin Chang wrote for Variety that the film is "An intelligent, solidly argued and almost too-polished takedown of America’s spin factory — that network of professional fabricators, obfuscators and pseudo-scientists who have lately attempted to muddle the scientific debate around global warming — this is a movie so intrigued by its designated villains that it almost conveys a perverse form of admiration, and the fascination proves contagious." William Goss wrote for The Austin Chronicle that "Merchants spends much of its running time exposing trends of political subterfuge before working in an earnest call to action regarding climate change. Using the same type of tinkling score and shots of children at play as campaign ads shown earlier in the film, this late-inning agenda comes off as noble as it is hypocritical. Regardless of one’s personal beliefs, it’s tough to respect a movie that ultimately invites viewers to question every case of propaganda except its own."
Home media
Merchants of Doubt was released as a 2-disc Blu-ray/DVD combo pack on July 7, 2015.
References
External links
Merchants of Doubt at IMDb
Merchants of Doubt at AllMovie
Merchants of Doubt at Box Office Mojo
Merchants of Doubt at Metacritic
Merchants of Doubt at Rotten Tomatoes
electricity sector in chile

As of August 2020 Chile had diverse sources of electric power: for the National Electric System, providing over 99% of the country's electric power, hydropower represented around 26.7% of its installed capacity, biomass 1.8%, wind power 8.8%, solar 12.1%, geothermal 0.2%, natural gas 18.9%, coal 20.3%, and petroleum-based capacity 11.3%. Prior to that time, faced with natural gas shortages, Chile began in 2007 to build its first liquefied natural gas terminal and re-gasification plant at Quintero near the capital city of Santiago to secure supply for its existing and upcoming gas-fired thermal plants. In addition, it had engaged in the construction of several new hydropower and coal-fired thermal plants. But by July 2020, 91% of the new capacity under construction was renewable, 46.8% of the total solar and 25.6% wind, with most of the remainder hydro.

Chile's electricity sector reform, which served as a model for other countries, was carried out in the first half of the 1980s. Vertical and horizontal unbundling of generation, transmission and distribution and large-scale privatization led to soaring private investment. The 1982 Electricity Act was amended three times, in 1999, 2004 and 2005, after major electricity shortages. Further amendments are envisaged.
Electricity supply and demand
Installed capacity
There are four separate electricity systems in Chile:
the Central Interconnected System (SIC, Sistema Interconectado Central), which serves the central part of the country (75.8% of the total installed capacity and 93% of the population, 15 GW capacity and 7.5 GW peak load);
the Norte Grande Interconnected System (SING, Sistema Interconectado del Norte Grande), which serves the desert mining regions in the North (23.3% of the total installed capacity, 4 GW capacity and 2.4 GW peak load); and
the Aysén (0.3% of total capacity) and
Magallanes (0.6% of total capacity) systems, which serve small areas of the extreme southern part of the country.

The long distances between the four systems made their integration difficult, but after the 600 km SIC-SING 500 kV AC transmission project costing US$1bn came online in May 2019, Chile's northern grid (SING) and central-southern grid (SIC) are now connected into a single national wide-area synchronous grid.

Total installed nominal capacity in April 2010 was 15.94 GW. Of the installed capacity, 64.9% was thermal, 34% hydroelectric and nearly 1% wind power, with nuclear absent. The SING is mostly thermal and suffers from overcapacity, while the hydro-dominated SIC has been subject to rationing in dry years.
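The April 2010 figures above can be converted from percentage shares into absolute capacity with simple arithmetic; the following Python sketch is purely illustrative and uses only the numbers quoted in this section (the shares are approximate and do not sum to exactly 100%).

# Approximate breakdown of Chile's April 2010 installed capacity by source,
# using the total and percentage shares quoted above.
total_gw = 15.94
shares = {"thermal": 0.649, "hydroelectric": 0.34, "wind": 0.01}
for source, share in shares.items():
    print(f"{source}: {total_gw * share:.2f} GW")
# thermal: 10.35 GW, hydroelectric: 5.42 GW, wind: 0.16 GW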
Total generation in 2008 was 56.3 TW·h, 42% of which was contributed by hydropower sources. The remaining 58% was produced by thermal sources. This figure varies significantly from one year to another, depending upon the hydrology of the particular period. Electricity production grew rapidly after the start of natural gas imports from Argentina in the late 1990s.

Besides the new hydro projects (see Renewables section below), there are several large-scale thermal projects in the development pipeline for Chile. Numerous projects are being built, although other similar plants have been delayed due to opposition from locals and uncertainty about gas supply. It is this uncertainty that has directed new attention to coal-fired facilities, of which Chile already has several plants in operation, with a combined capacity of 2,042 MW. In addition, as of April 2010, there were plans to build new plants for a total of 11,852 MW of new generation capacity.
By company
The main companies involved, in terms of installed capacity, are the following:
Enel Generación Chile (formerly ENDESA; 35%, 6085 MW)
AES Andes (18%, 3157 MW)
Colbún S.A. (15%, 2621 MW)
Suez Energy Andino (12%, 2176 MW)
E.E. Guacolda (3%, 610 MW)
Pacific Hydro (3%, 551 MW)

A number of other companies account for the remaining 14% (2,418 MW).
Imports and exports
In 2003, Chile imported 2 TW·h of electricity (mainly from Argentina) while it did not have any exports.
Demand
In 2007, the country consumed 55.2 TW·h of electricity. This corresponds to 3,326 kWh per capita, which is still low by developed-country standards. Consumption grew rapidly (6% per year) until 2006, but has since stagnated.
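The total and per-capita figures are mutually consistent; dividing one by the other recovers Chile's approximate 2007 population. A minimal check, assuming only the two numbers quoted above:

# Implied population from total and per-capita consumption in 2007.
total_twh = 55.2
per_capita_kwh = 3_326
implied_population = total_twh * 1e9 / per_capita_kwh   # 1 TW·h = 1e9 kWh
print(f"Implied population: {implied_population / 1e6:.1f} million")   # ~16.6 million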
Demand and supply projections
In 2006 it was expected that electricity demand would increase at 5% per year in the period up to 2030. In that same period, the share of natural gas in the generation mix would increase to 46%. The installed capacity of natural-gas-fired electricity generation was expected to reach 14 GW in 2030 (to be achieved by the construction of 10 new combined-cycle gas-fired power plants), while coal-fired and hydroelectricity generation would each account for about 26% of the total electricity generation mix. As can be seen above, by 2020 trends were quite different.
Access to electricity
Total electricity coverage in Chile was as high as 99.3% in 2006. Most of the progress in rural areas, where 96.4% of the population now has access to electricity, has happened in the last 15 years, following the establishment of a National Program for Rural Electrification (REP) administered by the National Fund for Regional Development. Under this Fund, there is tripartite funding of the capital costs of rural connections: users pay 10%, companies 20% and the state provides the remaining 70%, with users expected to pay for running costs.
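The tripartite funding rule is easy to illustrate with a worked split; the connection cost below is a hypothetical figure chosen only for illustration, not a number from the program.

# Hypothetical split of the capital cost of one rural connection under the
# 10% user / 20% company / 70% state scheme described above.
connection_cost_usd = 1_000   # illustrative amount only
shares = {"user": 0.10, "company": 0.20, "state": 0.70}
for payer, share in shares.items():
    print(f"{payer}: ${connection_cost_usd * share:,.0f}")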
Service quality
Interruption frequency and duration
In 2002, the average number of interruptions per subscriber was 9.8, while the total duration of interruptions per subscriber was 11.5 hours in 2005. Both numbers are below the weighted averages of 13 interruptions and 14 hours for the LAC region.
Distribution and transmission losses
Distribution losses in 2005 were 6.52%, down from 8% a decade before and well below the 13.5% LAC average.
Responsibilities in the electricity sector
Policy and regulation
The National Energy Commission (CNE), created in 1978 to advise on long-term strategies, is responsible for advising the Minister of Economy on electricity policy and for setting regulated distribution charges. The Energy Superintendence (SEC) is responsible for supervising compliance with laws, regulations and technical standards for generation, production, storage, transportation and distribution of liquid fuels, gas and electricity. In turn, the Minister of Energy formally imposes the regulated tariffs and retains control over the issuing of rationing decrees during periods of drought when there is a shortage of hydro-electric generation capacity. Further responsibilities in the electricity sector are also held by the Superintendence of Securities and Insurance (SVS, Superintendencia de Valores y Seguros), which is in charge of taxation, as well as directly by the regions and municipalities.
Generation, transmission and distribution
Since the privatization of the Chilean electricity sector in 1980, all generation, transmission and distribution activities have been in private hands. There are 26 companies that participate in generation, although three main economic clusters control the sector: Enel group, AES Andes and Tractebel (Colbún). The situation is similar in the distribution sector, with approximately 25 companies, among which the major ones include CGE Distribución S.A., Chilectra S.A., Chilquinta Energía S.A., and Inversiones Eléctricas del Sur S.A. (Grupo SAESA). In transmission, there are 5 players. In the Central Interconnected System (SIC), the most important player is Transelec, a pure transmission company which controls almost the entire transmission grid that serves the SIC. In the other interconnected systems, the large generation companies or the large clients own the transmission systems.

The Central Interconnected System (SIC) serves principally household consumers, while the "Large North" Interconnected System (SING) mostly serves large industrial customers, primarily mining interests in Chile's northern regions. The largest generating company in the SING is Electroandina, owned by Tractebel and Codelco.
Renewable energy resources
In January 2006, new legislation was passed to apply the benefits included in Short Laws I & II (see Recent Developments section below for details) to renewable energy production. The new regulation provided for exemptions in transmission charges for new renewable energy sources (i.e. geothermal, wind, solar, biomass, tidal, small hydropower and cogeneration) below 20 MW of capacity. It also simplified the legal procedures for projects below 9 MW. Previously, besides hydro, no other renewable source had a significant contribution to the Chilean energy mix, but this has changed.
Hydro
At the end of 2021 Chile was the 28th country in the world in terms of installed hydroelectric power (6.8 GW).

Historically, hydroelectric plants have been the largest power source in Chile. Periodic droughts have, however, caused supply shortfalls and blackouts, which led the government to increase diversification in the country's energy mix in the 1990s, mainly through the addition of natural-gas-fired power plants. Nevertheless, hydropower projects continued to be carried out, with Endesa's 570 MW Ralco plant on the Biobío River, the largest power plant in Chile, being the most prominent example. The construction of this plant was long delayed by opposition from local residents and environmental activists, but it finally began operations in 2004, the year in which it also received approval from Chile's environmental authority to be expanded to a capacity of 690 MW.
Furthermore, Argentina's gas crisis has revitalized other hydropower projects in Chile. In 2007–8, Chilean power generator Colbun completed three hydroelectric projects, the 70 MW Quilleco plant on the Laja River and the Chiburgo and Homito plants, with 19 MW and 65 MW generation capacity respectively. In addition, in 2007 Endesa started operating the 32 MW Palmucho plant, which is to work in conjunction with Ralco's facility. Finally, Australia's Pacific Hydro and Norway's SN Power Invest are developing the 155 MW La Higuera and the 156 MW La Confluencia hydroelectric plants on the Tinguiririca River. The controversial 2,750 MW HidroAysén project was cancelled in 2014.
Solar power
At the end of 2021 Chile was the 22nd country in the world in terms of installed solar energy (4.4 GW).

As noted above, solar power has surged, with 3.104 GW of installed capacity and 2.801 GW under construction in July 2020. At a power auction in October 2015, three solar generators offered power at $65 to $68 per MWh and two wind farms offered power at $79 per MWh, vs. coal power offered at $85 per MWh, and an average price of $104.3 per MWh at an auction in 2008 with no wind or solar power offered. In the August 2016 auction, the Spanish company Solarpack was one of the winners with a proposal to sell power beginning in 2021 from a new 120 MW solar facility, Granja Solar, at $29.1 per MWh, an international record low price at the time. On March 2, 2020, Solarpack began supplying power from Granja Solar, 10 months early; rated at 123 MW, this raised Solarpack's current Chilean capacity to 181 MW.
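Taken together, the auction results above imply a steep decline in contracted prices between 2008 and 2016. A rough comparison, using only the prices cited in this paragraph (the 2015 figure is taken as the midpoint of the $65–68/MWh solar offers):

# Rough comparison of auction prices cited above, relative to the 2008 average.
price_2008 = 104.3          # US$/MWh, 2008 auction average (no wind or solar offered)
solar_2015 = 66.5           # US$/MWh, midpoint of the October 2015 solar offers
solar_2016 = 29.1           # US$/MWh, Solarpack's winning August 2016 bid
for year, price in [(2015, solar_2015), (2016, solar_2016)]:
    drop_pct = (1 - price / price_2008) * 100
    print(f"{year}: {price} $/MWh, about {drop_pct:.0f}% below the 2008 average")
# 2015: ~36% below; 2016: ~72% below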
Wind and geothermal
At the end of 2021 Chile was the 28th country in the world in terms of installed wind energy (3.1 GW).

In 2008, wind power amounted to 0.05% of total electricity generation, but was projected to grow rapidly in coming years. By August 2020, wind power accounted for 2,242 MW, or 8.8% of the National Electric System's installed generating capacity, and made up 25.6% of the 5,990 MW of additional capacity then under construction. Because southern Chile receives the prevailing westerlies of the roaring forties and furious fifties, it has some of the most promising wind power potential in the world.
At the turn of the century, there was increasing interest in the country's geothermal potential. In 2006, after a surveying campaign, a consortium formed by the National Petroleum Company (ENAP) and Enel requested a concession to develop geothermal resources in the El Tatio region in the North. In August 2020 Chile had 45 MW of installed geothermal generating capacity, 0.2% of the national generating capacity.
History of the electricity sector
Electricity sector reform of 1982
Chile represents the world's longest-running comprehensive electricity reform in the post-World War II period. The reform was led by the 1982 Electricity Act, which is still the most important law regulating the organization of the electricity sector in the country. The reform was similar to the UK's model and started with vertical and horizontal unbundling of generation, transmission and distribution in 1981. According to Cambridge economist Michael Pollitt, the reform is widely regarded as a successful example of electricity reform in a developing country and has been used as a model for other privatizations in Latin America and around the world.

In the period 1970–73, Salvador Allende's government had undertaken a process of nationalization of many large companies, including utilities and banks. By 1974, inflation, high fuel prices and price controls had led to large losses and a lack of investment in electric utilities, which were then under public ownership. The subsequent military dictatorship, under Augusto Pinochet, decided to reorganize the sector through the introduction of a different economic discipline. The government returned large state-owned companies, including electricity, to their previous owners, an action that was followed by improving rates of return on capital. In addition, the 1985 reform of the Chilean pension fund system, which operated through Pension Fund Management Companies (AFPs), preceded the privatization of utilities, which began in 1986. By the end of the 1990s, foreign firms had gained majority ownership of the Chilean electricity system.

During the initial restructuring of the electricity industry, Endesa, a state-owned company since 1944, was divided into 14 companies. Before the division, Endesa had extensive generation, transmission and distribution assets throughout the country. The companies created from Endesa's division included 6 generation companies (including Endesa and Colbún), 6 distribution companies and 2 small isolated generation and distribution companies in the South. Chilectra, privately owned since 1970, was split into 3 firms: a generation company (Gener) and two distribution companies.
The high levels of investment that have been attained since 1982 have enabled the expansion of the Central Interconnected System (SIC) from 2,713 to 6,991 MW (4.1% p.a.) and of the Northern Interconnected System (SING) from 428 up to 3,634 MW between 1982 and 2004.
Recent developments
There have been various attempts to modify the 1982 Electricity Act (Ley General de Servicios Eléctricos) with the purpose of adjusting to developments in the sector over the last 20 years. The first successful amendment came in 1999, following the electricity rationing caused by the drought of 1998–99, the worst in 40 years, which led to blackouts from November 1998 to April 1999 (with a total of 500 GW·h of electricity not supplied). However, the most important modifications date from 2004, with Law 19,940, known as Ley Corta I (Short Law), and 2005, with Law 20,018, known as Ley Corta II (Short Law II), which sought to address some of the most pressing shortcomings of the current system. Nevertheless, according to Cambridge economist Michael Pollitt, more comprehensive legislation is still needed.
Major problems have resulted from the aftermath of the 2002 Argentine crisis. In Argentina, sharp economic recovery has boosted energy demand and led to power cuts. This led Argentina to unilaterally decide, in 2004, on a reduction of its gas exports to Chile, which had been subject to a 1995 treaty between the two countries. These cuts have had serious implications for Chile, leading to an expensive substitution of fuel oil for gas in the midst of a shortage of hydroelectric capacity. In addition, gas supply shortages fueled the debate over investment in expensive liquefied natural gas (LNG) import facilities. Construction of the country's first liquefied natural gas re-gasification plant, at Quintero (Region V), near the capital city of Santiago, started in 2007 under the coordination of the state oil company Enap (National Petroleum Company). The partners are British Gas, with 40% of the shares, while ENAP, ENDESA and METROGAS have 20% each. The project is built under an Engineering, Procurement and Construction Contract by the Chicago Bridge & Iron company, while BG will be the long-term supplier of LNG. The plant received project finance for US$1.1 billion from a consortium of international banks and is due to start operating in July 2009.
The Chilean government, as an additional response to secure electricity supply, proposed a new bill to the National Congress in August 2007. The main objective of this bill is to minimize the negative consequences derived from a generator's failure to meet its contracted supply obligations (for example, due to bankruptcy). In such an event, the new law would mandate the rest of the generators to assume the obligations of the failed company. In addition, the National Energy Commission (CNE) has recently approved Resolution No. 386, a new piece of legislation that will allow regulated final consumers to receive economic incentives to reduce their electricity demand.

In 2008, a special law for non-conventional renewable energy was approved (Ley 20.257), which requires that, from 2010 onwards, at least 5% of the energy produced by the medium and large generator sector be from non-conventional renewable energy sources. This quota will increase by 0.5% per year from 2015 onward, to reach a 10% requirement in 2024. A 2015 report addresses the challenges of the system.
Tariffs, cost recovery and subsidies
Tariffs
In 2005, the average residential tariff was US$0.109/kWh, while the average industrial tariff was US$0.0805/kWh. These tariffs are very close to the LAC weighted averages of US$0.115/kWh for residential consumers and US$0.107/kWh for industrial customers.
Subsidies
Electricity subsidies in Chile aim to temper the impact of rising electricity tariffs on the poorest sectors of the population. In June 2005, Law 20,040 established an electricity subsidy for poor Chilean families. As mandated by the law, the subsidy is triggered when electricity tariffs for residential, urban or rural users face an increase equal to or above 5% during a period equal to or below six months. This measure was first applied between June 2005 and March 2006, when it targeted 40% of the total population (about 1,250,000 families). The subsidy was triggered a second time from February to March 2007, when it benefited 32,000 clients in the Second and Third Regions of the country. More recently, the government has announced a new application of the subsidy to benefit an estimated 1,000,000 households between December 2007 and March 2008. The total amount of the subsidy (US$33 million) will triple the resources committed in previous campaigns and is a response to rising electricity prices caused by the increasing use of diesel as a substitute for natural gas and the low precipitation of 2007, which hindered hydropower generation.
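The trigger condition in Law 20,040 can be expressed as a simple rule. The sketch below is an illustrative paraphrase of the condition described above, not the statutory text.

def subsidy_triggered(tariff_increase_pct: float, period_months: float) -> bool:
    """Illustrative paraphrase of the Law 20,040 trigger: a tariff increase of
    5% or more over a period of six months or less (not the legal definition)."""
    return tariff_increase_pct >= 5.0 and period_months <= 6.0

print(subsidy_triggered(5.2, 4))   # True: subsidy would be triggered
print(subsidy_triggered(3.0, 2))   # False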
Investment and financing
Investment requirements in electricity generation, transmission and distribution over the period to 2030 are estimated to be between US$38–49 billion.
Summary of private participation in the electricity sector
As a result of the 1982 reform of the electricity sector, 100% of generation, transmission and distribution activities in Chile are in the hands of private companies. Enel group (5,223 MW; 32.8%), AES Andes (2,642 MW; 16.6%), Colbún (2,591 MW; 16.3%) and Engie (1,856 MW; 11.6%) control the largest part of the generation sector, in which a total of 26 companies participate. The distribution sector, with about 25 companies, is also dominated by four main groups: CGE Distribución S.A., Chilectra S.A., Chilquinta Energía S.A., and Inversiones Eléctricas del Sur S.A. (Grupo SAESA). As for transmission, Transelec is the largest owner of the transmission grid, followed by CGE transmission. There are some minor exceptions to the 100% private production of electricity, such as the case of the Chilean Air Force providing power for the Chilean installations in Antarctica.
Electricity and the environment
Responsibility for the environment
CONAMA (National Commission for the Environment) was created in 1994 and acts as coordinator of the Government's environmental actions. CONAMA is chaired by a Minister and brings together several different Ministries (e.g. Economy, Public Works, Telecommunications, Agriculture and Health). In July 2007, faced with the need for early installation of new back-up capacity in the Central Interconnected System (SIC), the Energy Ministry urged CONAMA to grant maximum priority to the environmental assessment of the projects related to the installation of emergency turbines.
Greenhouse gas emissions
OLADE (Organización Latinoamericana de Energía) estimated that CO2 emissions from electricity production in 2003 were 13.82 million tons of CO2, which represents 25% of total emissions for the energy sector. It is estimated that, by 2030, emissions from electricity generation will account for the largest share of emissions from the energy sector, 39% (some 74 million tons) of the total.
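The percentages quoted above imply totals for the energy sector as a whole; a quick back-calculation from the two data points in this paragraph:

# Back-calculating total energy-sector emissions from the electricity shares above.
electricity_2003_mt = 13.82   # Mt CO2 from electricity in 2003, 25% of the sector
electricity_2030_mt = 74.0    # Mt CO2 projected for 2030, 39% of the sector
print(f"Implied 2003 energy-sector total: {electricity_2003_mt / 0.25:.0f} Mt CO2")   # ~55 Mt
print(f"Implied 2030 energy-sector total: {electricity_2030_mt / 0.39:.0f} Mt CO2")   # ~190 Mt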
CDM projects in electricity
Currently (September 2007), there are eight energy-related registered CDM projects in Chile, with expected total emissions reductions of about 2 million tons of CO2e per year (source: UNFCCC).
Legislation
The main legal framework for the electricity sector in Chile is the "General Law of Electric Services (DFL-4)", a rather liberal framework which enables private investment in generation, transmission and distribution. Generation has been structured as a competitive market, whilst transmission and distribution are regulated. The Chilean model for the electricity market was very innovative in its time and has served as a model for several Latin American countries. It has allowed the Chilean company, Endesa, to expand successfully in the region. See the complete "legal framework for the electricity sector in Chile".
External assistance
World Bank
Currently, the World Bank is funding a Project for Infrastructure Development in Chile. A US$50.26 million loan was approved in 2004 with the objective of increasing the effective and productive use of sustainable infrastructure services by poor rural communities from selected territories in the regions of Coquimbo, Maule, Biobío, Araucanía and Los Lagos. The project, to be completed in 2010, seeks, among other goals, to improve quality of conventional electricity services and to promote off-grid and renewable energy solutions, such as generators, solar panels and wind turbines.
Inter-American Development Bank (IDB)
The Inter-American Development Bank has provided funding for three active electricity-related projects in Chile.
A Rural Electrification Project was approved in 2003. This project, with a total loan of US$40 million from the IDB, seeks to increase the government's incentives for private investment in rural electrification. The objective is to extend the electrical networks, support auto-generation projects and assist in institutional strengthening. Projects will receive a subsidy according to the rules set forth by the Ministry of Planning and Cooperation.
A project for the Promotion of Clean Energy Market Opportunities received a US$975,000 loan from the IDB in 2006. The overall objective of the project is to increase market opportunities for small and medium enterprises and enhance their competitiveness. Promotion of the use of renewable energy and energy efficiency is to be achieved by facilitating access to financial incentives that support the use of low-carbon emitting technologies.
The Charrúa-Temuco transmission line is a 20-year concession for the construction and operation of a 200 km, 220 kV transmission line in the South of the country that was awarded to Cia. Tecnica de Engenharia Electrica ("Alusa") and Companhia Energetica de Minas Gerais ("Cemig"). The IDB approved a US$51 million loan in 2006 for the construction of this transmission line in a region that has shown strong economic growth in recent years.
See also
Energy in Chile
Economy of Chile
Renewable energy
March 2010 Chile blackout
2011 Chile blackout
References
Further reading
Asia Pacific Energy Research Center, Institute of Energy Economics, Japan, 2006. APEC Energy Supply and Demand Outlook 2006, p. 16-20
External links
National Environmental Commission (CONAMA)
Anuario Estadístico Sector Eléctrico
Administration for Fuels and Electricity (SEC)
National Energy Commission (CNE)
Energy information Chile
Regulation of electricity sector in Chile
National Petroleum Company (Enap)
Implications of Short Law I & II for non conventional sources
List of World Bank projects in Chile
List of energy generation projects in Chile
List of Inter-American Development Bank projects in Chile
Chile deregulation links
greenhouse and icehouse earth

Throughout Earth's climate history (its paleoclimate), the climate has fluctuated between two primary states: greenhouse and icehouse Earth. Both climate states last for millions of years and should not be confused with glacial and interglacial periods, which occur as alternate phases within an icehouse period and tend to last less than 1 million years. There are five known icehouse periods in Earth's climate history, known as the Huronian, Cryogenian, Andean-Saharan, Late Paleozoic, and Late Cenozoic glaciations. The main factors involved in changes of the paleoclimate are believed to be the concentration of atmospheric carbon dioxide (CO2), changes in Earth's orbit, long-term changes in the solar constant, and oceanic and orogenic changes from tectonic plate dynamics. Greenhouse and icehouse periods have played key roles in the evolution of life on Earth by directly and indirectly forcing biotic adaptation and turnover at various spatial scales across time.
Greenhouse Earth
A "greenhouse Earth" is a period during which no continental glaciers exist anywhere on the planet. Additionally, the levels of carbon dioxide and other greenhouse gases (such as water vapor and methane) are high, and sea surface temperatures (SSTs) range from 28 °C (82.4 °F) in the tropics to 0 °C (32 °F) in the polar regions. Earth has been in a greenhouse state for about 85% of its history.The state should not be confused with a hypothetical runaway greenhouse effect, which is an irreversible tipping point that corresponds to the ongoing runaway greenhouse effect on Venus. The IPCC states that "a 'runaway greenhouse effect'—analogous to [that of] Venus—appears to have virtually no chance of being induced by anthropogenic activities."
Causes
There are several theories as to how a greenhouse Earth can come about. Geologic climate proxies indicate that there is a strong correlation between a greenhouse state and high CO2 levels. However, it is important to recognize that high CO2 levels are interpreted as an indicator of Earth's climate rather than as an independent driver. Other phenomena have instead likely played a key role in influencing global climate by altering oceanic and atmospheric currents and increasing the net amount of solar radiation absorbed by Earth's atmosphere. Such phenomena include, but are not limited to, tectonic shifts that result in the release of greenhouse gases (such as CO2 and CH4) via volcanic activity: active volcanoes emit massive amounts of CO2 and methane into the atmosphere, which can trap enough heat to cause a greenhouse effect. During a greenhouse state, atmospheric concentrations of greenhouse gases like carbon dioxide (CO2) and methane (CH4) are higher, trapping solar energy in the atmosphere via the greenhouse effect. Methane, the main component of natural gas, is responsible for more than a quarter of current global warming; it is a formidable pollutant, with a global warming potential roughly 80 times higher than that of CO2 over the 20 years after it is introduced into the atmosphere. In addition, an increase in the solar constant raises the net amount of solar energy absorbed by Earth's atmosphere, as do changes in Earth's obliquity and eccentricity.
Icehouse Earth
Earth is now in an icehouse state, and ice sheets are present at both poles simultaneously. Climatic proxies indicate that greenhouse gas concentrations tend to be lower during an icehouse Earth. Similarly, global temperatures are also lower under icehouse conditions. Earth then fluctuates between glacial and interglacial periods, and the size and the distribution of continental ice sheets fluctuate dramatically. The fluctuation of the ice sheets results in changes in regional climatic conditions that affect the range and the distribution of many terrestrial and oceanic species. On scales ranging from thousands to hundreds of millions of years, the Earth's climate has transitioned from warm to chilly intervals within life-sustaining ranges. There have been three periods of glaciation in the Phanerozoic Eon (Ordovician, Carboniferous, and Cenozoic), each lasting tens of millions of years and bringing ice down to sea level at mid-latitudes. During these frigid "icehouse" intervals, sea levels were generally lower, CO2 levels in the atmosphere were lower, net photosynthesis and carbon burial were lower, and oceanic volcanism was lower than during the alternate "greenhouse" intervals. Transitions from Phanerozoic icehouse to greenhouse intervals coincided with biotic crises or catastrophic extinction events, indicating complicated biosphere-hydrosphere feedbacks. The glacial and interglacial periods tend to alternate in accordance with solar and climatic oscillation until Earth eventually returns to a greenhouse state.

Earth's current icehouse state is known as the Quaternary Ice Age and began approximately 2.58 million years ago. However, an ice sheet has existed in Antarctica for approximately 34 million years. Earth is now in a clement interglacial period that started approximately 11,800 years ago. The previous interglacial, the Eemian, occurred between 130,000 and 115,000 years ago and left evidence of forest at North Cape, Norway, and of hippopotamus in the Rhine and Thames rivers. Earth is expected to continue to transition between glacial and interglacial periods until the cessation of the Quaternary Ice Age and will then enter another greenhouse state.
Causes
It is well established that there is a strong correlation between low CO2 levels and an icehouse state. However, that does not mean that decreasing atmospheric CO2 levels are a primary driver of a transition to the icehouse state. Rather, they may be an indicator of other solar, geologic, and atmospheric processes at work.

Potential drivers of previous icehouse states include the movement of the tectonic plates and the opening and the closing of oceanic gateways. They seem to play a crucial part in driving Earth into an icehouse state, as tectonic shifts result in the transportation of cool, deep water, which circulates to the ocean surface and assists in ice sheet development at the poles. Examples of oceanic current shifts as a result of tectonic plate dynamics include the opening of the Tasmanian Gateway 36.5 million years ago, which separated Australia and Antarctica, and the opening of the Drake Passage 32.8 million years ago by the separation of South America and Antarctica, both of which are believed to have allowed for the development of the Antarctic ice sheet. The closing of the Isthmus of Panama and of the Indonesian seaway approximately 3 to 4 million years ago may also be a contributor to Earth's current icehouse state. One proposed driver of the Ordovician Ice Age was the evolution of land plants. Under that paradigm, the rapid increase in photosynthetic biomass gradually removed CO2 from the atmosphere and replaced it with increasing levels of O2, which induced global cooling. One proposed driver of the Quaternary Ice Age is the collision of the Indian Subcontinent with Eurasia to form the Himalayas and the Tibetan Plateau. Under that paradigm, the resulting continental uplift revealed massive quantities of unweathered silicate rock (CaSiO3), which reacted with CO2 to produce CaCO3 (lime) and SiO2 (silica). The CaCO3 was eventually transported to the ocean and taken up by plankton, which then died and sank to the bottom of the ocean, effectively removing CO2 from the atmosphere.
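The weathering chemistry invoked in the uplift hypothesis can be summarized by a single net reaction, in which atmospheric CO2 is consumed as calcium silicate is converted to carbonate and silica:

CaSiO3 + CO2 → CaCO3 + SiO2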
Glacials and interglacials
Within icehouse states are "glacial" and "interglacial" periods that cause ice sheets to build up or to retreat. The main causes of glacial and interglacial periods are variations in Earth's movement around the Sun. The astronomical components, discovered by the Serbian geophysicist Milutin Milanković and now known as Milankovitch cycles, include the axial tilt of Earth, the orbital eccentricity (or shape of the orbit), and the precession (or wobble) of Earth's rotation. The axial tilt fluctuates between about 21.5° and 24.5° and back over a cycle of roughly 41,000 years. The change affects seasonality on Earth: a higher tilt sharpens the seasonal contrast in solar radiation reaching high latitudes, while a lower tilt creates a more even set of seasons worldwide. The changes can be seen in ice cores, which also contain evidence that during glacial times (at the maximum extension of the ice sheets), the atmosphere had lower levels of carbon dioxide. That may be caused by an increase or redistribution in the ocean's acid-base balance between bicarbonate and carbonate ions, which governs alkalinity. During an icehouse period, only about 20% of the time is spent in interglacial, or warmer, times. Model simulations suggest that, because of anthropogenic CO2 emissions, the current interglacial climate state will persist for at least another 100,000 years and may involve the complete deglaciation of the Northern Hemisphere.
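The roughly 41,000-year obliquity cycle described above can be caricatured, to first order, as a sine wave oscillating between the quoted bounds. The short sketch below is a schematic of the cycle's shape only, not the actual astronomical solution, which is more complex.

import math

def schematic_obliquity(years_from_now: float) -> float:
    """Schematic axial tilt in degrees: a sine wave between about 21.5 and 24.5
    degrees with a ~41,000-year period (illustrative only)."""
    mean_tilt_deg, amplitude_deg, period_yr = 23.0, 1.5, 41_000
    return mean_tilt_deg + amplitude_deg * math.sin(2 * math.pi * years_from_now / period_yr)

for t in (0, 10_250, 20_500, 30_750):
    print(f"{t:>6} years: {schematic_obliquity(t):.2f} degrees")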
Snowball Earth
A "snowball Earth" is the complete opposite of greenhouse Earth in which Earth's surface is completely frozen over. However, a snowball Earth technically does not have continental ice sheets like during the icehouse state. "The Great Infra-Cambrian Ice Age" has been claimed to be the host of such a world, and in 1964, the scientist W. Brian Harland brought forth his discovery of indications of glaciers in the low latitudes (Harland and Rudwick). That became a problem for Harland because of the thought of the "Runaway Snowball Paradox" (a kind of Snowball effect) that once Earth enters the route of becoming a snowball Earth, it would never be able to leave that state. However, Joseph Kirschvink brought up a solution to the paradox in 1992. Since the continents were then huddled at the low and the middle latitudes, there was less ocean water available to absorb the higher amount solar energy hitting the tropics, and there was also an increase in rainfall because more land exposed to higher solar energy might have caused chemical weathering, which would contribute to removal of CO2 from the atmosphere. Both conditions might have caused a substantial drop in CO2 atmospheric levels which resulted in cooling temperatures and increasing ice albedo (ice reflectivity of incoming solar radiation), which would further increase global cooling (a positive feedback). That might have been the mechanism of entering Snowball Earth state. Kirschvink explained that the way to get out of Snowball Earth state could be connected again to carbon dioxide. A possible explanation is that during Snowball Earth, volcanic activity would not halt but accumulate atmospheric CO2. At the same time, global ice cover would prevent chemical weathering (particularly hydrolysis), responsible for removal of CO2 from the atmosphere. CO2 therefore accumulated in the atmosphere. Once the atmosphere accumulation of CO2 reached a threshold, temperature would rise enough for ice sheets to start melting. That would in turn reduce the ice albedo effect, which would in turn further reduce the ice cover and allow an exit from Snowball Earth. At the end of Snowball Earth, before the equilibrium "thermostat" between volcanic activity and the by then slowly resuming chemical weathering was reinstated, CO2 in the atmosphere had accumulated enough to cause temperatures to peak to as much as 60 °C, thrusting the Earth into a brief moist greenhouse state. Around the same geologic period of Snowball Earth (it is debated if it was the cause or the result of Snowball Earth), the Great Oxygenation Event (GOE) was occurring. The event known as the Cambrian Explosion followed and produced the beginnings of populous bilateral organisms, as well as a greater diversity and mobility in multicellular life. However, some biologists claim that a complete snowball Earth could not have happened since photosynthetic life would not have survived under many meters of ice without sunlight. However, sunlight has been observed to penetrate meters of ice in Antarctica. Most scientists now believe that a "hard" Snowball Earth, one completely covered by ice, is probably impossible. However, a "slushball Earth," with points of opening near the equator, is considered to be possible.
Recent studies may have again complicated the idea of a snowball Earth. In October 2011, a team of French researchers announced that the carbon dioxide during the last speculated "snowball Earth" may have been lower than originally stated, which provides a challenge in finding out how Earth got out of its state and whether a snowball or a slushball Earth occurred.
Transitions
Causes
The Eocene, which occurred between 56.0 and 33.9 million years ago, was Earth's warmest period in 100 million years. However, this "super-greenhouse" period had become an icehouse period by the late Eocene. It is believed that the decline of CO2 caused the change, but positive feedback mechanisms may have contributed to the cooling.
The best available record of a transition from an icehouse to a greenhouse period in which plant life existed is from the Permian period, around 300 million years ago. A major transition, unfolding over some 40 million years, changed Earth from a moist, icy planet in which rainforests covered the tropics to a hot, dry, and windy place in which little could survive. Professor Isabel P. Montañez of the University of California, Davis, who has researched the period, found the climate to be "highly unstable" and "marked by dips and rises in carbon dioxide."
Impacts
The Eocene-Oligocene transition was the most recent such transition and occurred approximately 34 million years ago. It resulted in rapid global cooling, the glaciation of Antarctica, and a series of biotic extinction events. The most dramatic species turnover event associated with the time period is the Grande Coupure, which saw the replacement of European tree-dwelling and leaf-eating mammal species by migratory species from Asia.
Research
Paleoclimatology is a branch of science that attempts to understand the history of greenhouse and icehouse conditions over geological time. The study of ice cores, dendrochronology, ocean and lake sediments (varves), palynology (paleobotany), isotope analysis (such as radiometric dating and stable isotope analysis), and other climate proxies allows scientists to create models of Earth's past energy budgets and the resulting climate. One study has shown that atmospheric carbon dioxide levels during the Permian swung between 250 parts per million, close to today's levels, and 2,000 parts per million. Studies on lake sediments suggest that the "hothouse" or "super-greenhouse" Eocene was in a "permanent El Niño state" after the 10 °C warming of the deep ocean and high-latitude surface temperatures shut down the Pacific Ocean's El Niño-Southern Oscillation. A theory was suggested for the Paleocene–Eocene Thermal Maximum based on the sudden decrease of the carbon isotopic composition of the global inorganic carbon pool by 2.5 parts per thousand (per mil). A hypothesis posed for this isotopic drop was a large release of methane from methane hydrates, the trigger for which remains a mystery. The increase of atmospheric methane, a potent but short-lived greenhouse gas, increased global temperatures by 6 °C with the assistance of the less potent carbon dioxide.
List of icehouse and greenhouse periods
A greenhouse period ran from 4.6 to 2.4 billion years ago.
Huronian glaciation – an icehouse period that ran from 2.4 billion to 2.1 billion years ago
A greenhouse period ran from 2.1 billion to 720 million years ago.
Cryogenian – an icehouse period that ran from 720 to 635 million years ago during which the entire Earth was at times frozen over
A greenhouse period ran from 635 million years ago to 450 million years ago.
Andean-Saharan glaciation – an icehouse period that ran from 450 million to 420 million years ago
A greenhouse period ran from 420 million years ago to 360 million years ago.
Late Paleozoic Ice Age – an icehouse period that ran from 360 million to 260 million years ago
A greenhouse period ran from 260 million years ago to 33.9 million years ago.
Late Cenozoic Ice Age – the current icehouse period, which began 33.9 million years ago
Modern conditions
Currently, Earth is in an icehouse climate state. About 34 million years ago, ice sheets began to form in Antarctica; the ice sheets in the Arctic did not start forming until 2 million years ago. Some processes that may have led to the current icehouse may be connected to the development of the Himalayan Mountains and the opening of the Drake Passage between South America and Antarctica, but climate model simulations suggest that the early opening of the Drake Passage played only a limited role, and the later constriction of the Tethys and Central American Seaways is more important in explaining the observed Cenozoic cooling. Scientists have tried to compare the past transitions between icehouse and greenhouse, and vice versa, to understand what type of climate state Earth will have next.
Without the human influence on the greenhouse gas concentration, a glacial period would be the next climate state. Predicted changes in orbital forcing suggest that in absence of human-made global warming, the next glacial period would begin at least 50,000 years from now (see Milankovitch cycles), but the ongoing anthropogenic greenhouse gas emissions mean the next climate state will be a greenhouse Earth period. Permanent ice is actually a rare phenomenon in the history of Earth and occurs only in coincidence with the icehouse effect, which has affected about 20% of Earth's history.
See also
List of periods and events in climate history
References
landfills in the united states

Municipal solid waste (MSW) – more commonly known as trash or garbage – consists of everyday items people use and then throw away, such as product packaging, grass clippings, furniture, clothing, bottles, food scraps and papers. In 2018, Americans generated about 265.3 million tonnes of waste. In the United States, landfills are regulated by the Environmental Protection Agency (EPA) and the states' environmental agencies. Municipal solid waste landfills (MSWLF) are required to be designed to protect the environment from contaminants that may be present in the solid waste stream.

Some materials may be banned from disposal in municipal solid waste landfills, including common household items such as paints, cleaners/chemicals, motor oil, batteries, pesticides, and electronics. These products, if mishandled, can be dangerous to health and the environment; leachate from landfills can contaminate water bodies and groundwater, and landfill gas contributes to air pollution and greenhouse gas emissions. Safe management of solid waste through guidance, technical assistance, regulations, permitting, environmental monitoring, compliance evaluation and enforcement is the goal of the EPA and state environmental agencies.
History
The Fresno Municipal Sanitary Landfill, opened in Fresno, California in 1937, is considered to have been the first modern, sanitary landfill in the United States, innovating the techniques of trenching, compacting, and the daily covering of waste with soil. It has been designated a National Historic Landmark, underlining the significance of waste disposal in urban society.
The first federal legislation addressing solid waste management was the Solid Waste Disposal Act of 1965 (SWDA), which created a national office of solid waste. By the mid-1970s, all states had some type of solid waste management regulations. In 1976, the U.S. House of Representatives passed the Resource Conservation and Recovery Act (RCRA), which dramatically expanded the federal government's role in managing waste disposal. RCRA divided wastes into hazardous and non-hazardous categories, and directed the EPA to develop the design and operational standards for sanitary landfills and close or upgrade existing open dumps that did not meet the sanitary landfill standards.

In 1979, the EPA developed criteria for sanitary landfills that included siting restrictions in floodplains; endangered species protection; surface water protection; groundwater protection; disease and vector (rodents, birds, insects) control; open burning prohibitions; explosive gas (methane) control; fire prevention through the use of cover materials; and prevention of bird hazards to aircraft.

The RCRA was amended in 1984. In 1991, the EPA established new federal standards for municipal solid waste landfills that updated location and operation standards, added design standards, groundwater monitoring requirements, corrective action requirements for known environmental releases, closure and post-closure requirements, and financial assurances to pay for future landfill care and maintenance.
Regulation
The EPA generally relies on the states to enforce their own operating permits and federal laws. If state agencies are not aggressive, violations can worsen, multiplying negative environmental impacts exponentially. There are some notable recorded violations in the U.S., such as a landfill in Hawaii that was fined $2.8 million in 2006 for operating violations, but this is not common.

Modern landfills are specifically designed to protect human health and the environment by controlling water and air emissions. All MSWLF must comply with the federal regulations in 40 CFR Part 258, or equivalent state regulations. Some of the federal regulations in 40 CFR Part 258 include:
Location Restrictions - landfills must be built in suitable geological areas away from faults, wetlands, flood plains or other restricted areas.
Composite Liner Requirements - include a flexible membrane (geomembrane) overlying two feet of compacted clay soil lining the bottom and sides of the landfill, to protect groundwater and the underlying soil from leachate releases.
Leachate Collection and Removal Systems - sit on top of the composite liner and remove leachate from the landfill for treatment and disposal.
Operating Practices - including the compacting and covering of waste frequently with several inches of soil to help reduce odor; control litter, insects and rodents; and protect public health.
Groundwater Monitoring Requirements - testing of groundwater wells must be done to determine whether waste materials have escaped from the landfill.
Closure and Postclosure Care Requirements - including covering landfills and providing long-term care of closed landfills.
Corrective Action Provisions - control and cleanup of landfill releases and achieves groundwater protection standards.
Financial Assurance - provides funding for environmental protection during and after landfill closure.

Under Subtitle D of RCRA, states are required to adopt and implement permit programs to ensure that landfills in their states comply with relevant federal standards. The law also requires the EPA to determine whether state permit programs are adequate to ensure such compliance. For permit programs to be approved, states must provide opportunities for public involvement during the permit application process. This may include public meetings or submission of concerns in writing to the permitting agency. In addition, states must have the power to issue permits and perform compliance monitoring and enforcement actions that ensure compliance with the federal standards.

Organizations such as the Solid Waste Association of North America's (SWANA) Landfill Management Division provide training and technical advice related to the planning, design, construction, closure and post-closure of today's landfills. The division regularly monitors, reviews and comments on current legislative and regulatory actions that could potentially affect landfill operations and new technology. Waste Management, based in Houston, Texas, manages or operates five of the 10 largest landfills and owns three of those outright (Forbes).
Leachate collection
Landfill leachate is generated from liquids existing in the waste as it enters a landfill or from rainwater that passes through the waste within the facility. The leachate consists of different organic and inorganic compounds that may be either dissolved or suspended. An important part of maintaining a landfill is managing the leachate through proper treatment methods designed to prevent pollution of surrounding ground and surface waters. For landfills receiving hazardous waste, permits require landfill liners and the installation of systems for collecting leachate. Based on recent EPA studies, a liner and leachate collection system constructed to current standards typically has a liquid removal efficiency of 99 to 100 percent and frequently exceeds 99.99 percent.

The leachate collection system collects the leachate so that it can be removed from the landfill and properly treated or disposed of. Most leachate collection systems have the following components:
Leachate collection layer - a layer of sand or gravel or a thick plastic mesh called a geonet collects leachate and allows it to drain by gravity to the leachate collection pipe system.
Filter Geotextile - a geotextile fabric, similar in appearance to felt, may be located at the top of the leachate collection pipe system to provide separation of solid particles from liquid. This prevents clogging of the pipe system.
Leachate Collection Pipe System - Perforated pipes, surrounded by a bed of gravel, transport collected leachate to specially designed low points called sumps. Pumps, located within the sumps, automatically remove the leachate from the landfill and transport it to the leachate management facilities for treatment or another proper method of disposal.

Federal requirements mandate that treatment must meet drinking water quality standards, which are set to prevent harm to public health, or more stringent state standards to protect sensitive environments (high quality streams, trout streams).
Groundwater monitoring
Nearly all municipal solid waste landfills (MSWLFs) are required to monitor the underlying groundwater for contamination during their active life and post-closure care period. The exceptions to this requirement are small landfills that receive less than 20 tons of solid waste per day, and facilities that can demonstrate that there is no potential for the migration of hazardous constituents from the unit into the groundwater. All other MSWLFs must comply with the groundwater monitoring requirements found at 40 CFR Part 258, Subpart E–Ground-Water Monitoring and Corrective Action.

The groundwater monitoring system consists of a series of wells placed upgradient and downgradient of the MSWLF. The samples from the upgradient wells show the background concentrations of constituents in the groundwater, while the downgradient wells show the extent of groundwater contamination caused by the MSWLF. The required number, spacing, and depth of wells are determined on a site-specific basis based on the aquifer thickness, groundwater flow rate and direction, and the other geologic and hydrogeologic characteristics of the site. All groundwater monitoring systems must be certified by a qualified groundwater scientist and must comply with the sampling and analytical procedures outlined in the regulations.

There are three phases of groundwater monitoring requirements (a simplified sketch of the escalation logic follows the list):
Detection Monitoring - monitoring for the 62 constituents listed in Appendix I of 40 CFR Part 258 and sampling that occurs at least semiannually throughout the landfill's active life and post-closure period. If any of the constituents is detected at a higher level than the established background level, state regulatory agencies must be notified and an assessment monitoring program begun.
Assessment monitoring - within 90 days of detection of an increase in constituents, an MSWLF must begin an assessment monitoring program. Samples must be taken from all wells and analyzed for the presence of all 214 constituents listed in Appendix II of 40 CFR Part 258. If any of the constituents are detected, the facility must establish background levels for those constituents and a groundwater protection standard (GWPS) for each. Within 90 days of establishing the background levels and GWPS, the facility must resample for all constituents in Appendix I and Appendix II. Resampling must then be completed at least semiannually. Provided that levels remain within specified limits after two sampling events, the facility may return to the detection monitoring phase. If levels remain above the standard, the owners/operators of the MSWLF must characterize the nature of the release, determine if contamination has migrated beyond the facility boundary, and begin assessing corrective measures.
Corrective measures - must be protective of human health and the environment, meet the GWPS, control the source(s) of the release to prevent further releases, and manage any solid waste generated in accordance with all applicable RCRA regulations. Remedial actions must continue until three years of consecutive compliance are met.
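The escalation logic of the three phases above can be sketched as a small state machine; the following is a simplified illustration of the sequence described in this section, not a restatement of the regulation, which contains additional conditions and deadlines.

def next_monitoring_phase(phase: str, exceedance: bool,
                          clean_sampling_events: int = 0,
                          compliant_years: int = 0) -> str:
    """Simplified sketch of the MSWLF groundwater monitoring sequence described
    above (detection -> assessment -> corrective measures). Illustrative only."""
    if phase == "detection":
        # An exceedance of background levels triggers assessment monitoring.
        return "assessment" if exceedance else "detection"
    if phase == "assessment":
        if exceedance:
            return "corrective"   # characterize the release, assess corrective measures
        # Two clean sampling events allow a return to detection monitoring.
        return "detection" if clean_sampling_events >= 2 else "assessment"
    if phase == "corrective":
        # Remedial action continues until three consecutive years of compliance;
        # a return to detection monitoring afterwards is assumed here.
        return "detection" if compliant_years >= 3 else "corrective"
    raise ValueError(f"unknown phase: {phase}")

print(next_monitoring_phase("detection", exceedance=True))   # assessment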
Landfill gas utilization
A United States Environmental Protection Agency (EPA) report indicates that as of 2016, counts of operational municipal solid waste landfills range between 1,900 and 2,000. In a nationwide study done by the Environmental Research and Education Foundation in 2013, only 1,540 operational municipal solid waste landfills were counted throughout the United States. Decomposing waste in these landfills produces landfill gas, which is a mixture of about half methane and half carbon dioxide. Landfills are the third-largest source of methane emissions in the United States, with municipal solid waste landfills representing 95 percent of this fraction.

In the U.S., the number of landfill gas projects increased from 399 in 2005 to 594 in 2012, according to the Environmental Protection Agency. These projects are popular because they control energy costs and reduce greenhouse gas emissions. These projects collect the methane gas and treat it, so it can be used for electricity or upgraded to pipeline-grade gas. (Methane gas has twenty-one times the global warming potential of carbon dioxide.) For example, in the U.S., Waste Management uses landfill gas as an energy source at 110 landfill gas-to-energy facilities. This energy production offsets almost two million tons of coal per year, creating energy equivalent to that needed by four hundred thousand homes. These projects also reduce greenhouse gas emissions into the atmosphere.

The EPA, which estimates that hundreds of landfills could support gas-to-energy projects, has also established the Landfill Methane Outreach Program. This program was developed to reduce methane emissions from landfills in a cost-effective manner by encouraging the development of environmentally and economically beneficial landfill gas-to-energy projects.
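The Waste Management figures quoted above allow a rough per-facility and per-home calculation; the sketch below simply rearranges the numbers given in this paragraph and is not an official statistic.

# Rough per-facility and per-home figures implied by the numbers quoted above.
facilities = 110
coal_offset_tons_per_year = 2_000_000    # "almost two million tons of coal per year"
homes_powered = 400_000
print(f"Coal offset per facility: {coal_offset_tons_per_year / facilities:,.0f} tons/year")   # ~18,200
print(f"Coal equivalent per home: {coal_offset_tons_per_year / homes_powered:.1f} tons/year")  # ~5.0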
Post-closure and reclamation
In the U.S., the regulatory structure for landfills specifies a 30-year post-closure monitoring period. It is presumed that at the end of the 30-year period, the landfill will be stable and will no longer require intensive monitoring. Today, landfills are designed from the start to ensure protection of the environment and public health, and the safe and productive use of the site after closure.

There are three categories of post-closure uses of landfill sites: Category 1 - open space, agricultural and passive recreation; Category 2 - active recreation, parking or industrial/commercial activities; and Category 3 - intensive uses such as residences, industry and commercial development.

Category 1 post-closures are the most numerous and may be the least recognizable, because they appear to be nothing more than an open field. Examples include the Westview Sanitary Landfill in Georgia, now a cemetery, and Griffith Park in California, used for hiking trails.
Category 2 post-closures may have utilities, light structures or paving. Examples include Settler's Hill Landfill in Illinois - now golf courses and a minor league baseball field or the Germantown Sanitary Landfill in Wisconsin that is now a ski slope.
Category 3 post-closures are usually characterized by the inclusion of major structures. Some of the best known are Mile High Stadium in Colorado, which served as the football stadium for the Denver Broncos; Brickyard Shopping Center in Illinois; and Columbia Point in Massachusetts, home of the John F. Kennedy Presidential Library and Museum, the University of Massachusetts Boston, and the State Archives building.
Statistics
The EPA has collected and reported data on the generation and disposal of waste in the United States for more than 30 years. Recent estimates state that the amount of municipal waste disposed of in US landfills per year is about 265 million tonnes (261,000,000 long tons; 292,000,000 short tons) as of 2013.

Organic materials are estimated to be the largest component of MSW. Paper and paperboard account for 29%, and yard trimmings and food scraps account for another 27%; plastics 12%; metals 9%; rubber, leather and textiles 8%; wood approximately 6.4%; and glass 5%. Other miscellaneous wastes make up approximately 3.4%.

In 2010, Americans recovered almost 65 million tons of MSW (excluding composting) through recycling. Between 1980 and 2013, the share of waste disposed in landfills decreased from 89% to under 53%. In 2013, about 32.7 million tons of MSW were combusted for energy recovery. Research has shown that leachate treatment facilities at modern landfills are capable of removing 100 percent of the trace organics and over 85 percent of the heavy metals.

The Puente Hills Landfill is the largest landfill in America. Over 150 m (490 ft) of garbage has risen from the ground since the area became a designated landfill site in 1957.

In 1986, there were 7,683 landfills in the United States. By 2009, there were just 1,908 landfills nationwide: a 75 percent decline in disposal facilities in less than 25 years. However, this number is deceptive. Much of the decrease is due to consolidation of multiple landfills into single, more efficient facilities, and technology has allowed each acre of landfill to take 30% more waste. Over this period, available landfill capacity per person has actually increased by almost 30%.
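The composition percentages above can be turned into rough tonnages by simple multiplication. The sketch below applies them to the 265-million-tonne figure quoted earlier; this is only an illustration, since the percentages describe the overall waste stream rather than exactly what reaches landfill, and the shares only sum to about 99.8% because of rounding in the source.

```python
# Illustrative breakdown of the MSW composition percentages quoted above,
# applied to the 265 million tonne total from the text (a rough sketch only).

TOTAL_MT = 265  # million tonnes, from the text

composition = {
    "paper and paperboard": 29.0,
    "yard trimmings and food scraps": 27.0,
    "plastics": 12.0,
    "metals": 9.0,
    "rubber, leather and textiles": 8.0,
    "wood": 6.4,
    "glass": 5.0,
    "other": 3.4,
}

for material, share in composition.items():
    print(f"{material:32s} {TOTAL_MT * share / 100:6.1f} million tonnes")

print(f"shares sum to {sum(composition.values()):.1f}%")  # ~99.8%, rounding in the source
```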
Notable landfills
See also
Landfill gas
Environment of the United States
Environmental issues in the United States
References
Further reading
"Municipal Solid Waste Fact Sheet" (PDF). Retrieved 4 September 2016.
"Solid Waste Section/NC DENR". Retrieved 1 March 2013.
"Title 40 - Protection of Environment Volume 26 PART 257 - CRITERIA FOR CLASSIFICATION OF SOLID WASTE DISPOSAL FACILITIES AND PRACTICES". Retrieved 6 March 2013.
"Title 40 - Protection of Environment Volume 26 PART 258 - CRITERIA FOR MUNICIPAL SOLID WASTE LANDFILLS". Retrieved 6 March 2013.
"Hawaii landfill faces $2.8 million fine for landfill violations". Retrieved 2 April 2013.
"EPA Procedures for Approving State Subtitle D Permitting Programs". Retrieved 17 March 2013.
"Legislative, Advocacy and Rulemaking". Retrieved 6 March 2013.
"Groundwater Monitoring Requirements for Municipal Solid Waste Landfills (MSWLFs)". Retrieved 1 April 2013.
"Typical Anatomy of a Landfill" (PDF). Retrieved 22 April 2018.
"America's Biggest Landfills". Forbes. Retrieved 1 March 2013.
"Trash city: Inside America's largest landfill site". Retrieved 1 March 2013.
Palmer, Brian (21 February 2011). "Landfills are safer than dumps, but trash must travel farther to reach them". The Washington Post. Archived from the original on 25 April 2014. Retrieved 29 October 2013.
External links
Landfill Methane Outreach Program at the EPA
Solid waste facilities in Massachusetts |
energy in ireland | Ireland is a net energy importer. Ireland's import dependency decreased to 85% in 2014 (from 89% in 2013).
The cost of all energy imports to Ireland was approximately €5.7 billion, down from €6.5 billion (revised) in 2013 due mainly to falling oil and, to a lesser extent, gas import prices.
Consumption of all fuels fell in 2014 with the exception of peat, renewables and non-renewable wastes. Final consumption of electricity in 2017 was 26 TWh, a 1.1% increase on the previous year. Renewable electricity generation, consisting of wind, hydro, landfill gas, biomass and biogas, accounted for 30.1% of gross electricity consumption. In 2019, it was 31 TWh, with renewables accounting for 37.6% of consumption. Energy-related CO2 emissions decreased by 2.1% in 2017 to a level 17% above 1990 levels.
Energy-related CO2 emissions were 18% below 2005 levels.
60% of Irish greenhouse gas emissions are caused by energy consumption.
Statistics
Energy plan
Ireland had a plan to reduce its greenhouse gas emissions by 30% by 2030, compared to 2005. This was later strengthened, with a new target aiming for Ireland to achieve a 7% annual average reduction in greenhouse gas emissions between 2021 and 2030.
Primary energy sources
Fossil fuels
Natural gas
There have been four commercial natural gas discoveries since exploration began offshore Ireland in the early 1970s: the Kinsale Head, Ballycotton and Seven Heads producing gas fields off the coast of Cork, and the Corrib gas field off the coast of Mayo. The main natural gas (fossil gas) fields in Ireland are the Corrib gas project and the Kinsale Head gas field. After the Corrib gas field came on stream, Ireland's energy import dependency fell from 88% in 2015 to 69% in 2016. The Corrib gas field was discovered off the west coast of Ireland in 1996. Approximately 70% of the size of the Kinsale Head field, it has an estimated producing life of just over 15 years. Production began in 2015. The project was operated by Royal Dutch Shell until 2018, and from 2018 onwards by Vermilion Energy.
The indigenous production of gas from 1990 to 2019 is shown on the graph; figures are in thousand tonnes of oil equivalent. Since 1991 Ireland has imported natural gas by pipeline from the British National Transmission System in Scotland, via Interconnector IC1, commissioned in 1991, and Interconnector IC2, commissioned in 2003.
The import of gas from 1990 to 2019 is shown on the graph. Figures are in thousand tonnes of oil equivalent.
Peat
Ireland uses peat, a fuel composed of decayed plants and other organic matter usually found in swampy lowlands known as bogs, as an energy source, which is uncommon in Europe. Peat in Ireland is used for two main purposes: to generate electricity and as a fuel for domestic heating. The raised bogs in Ireland are located mainly in the midlands.
Bord na Móna is a commercial semi-state company that was established under the Turf Development Act 1946 and was responsible for the mechanised harvesting of peat in Ireland. The National Parks and Wildlife Service (NPWS), under the remit of the Minister for Housing, Local Government and Heritage, deals with Special Areas of Conservation and Special Protection Areas under the Habitats Directive, and restrictions have been imposed on the harvesting of peat in certain areas under the relevant designations. The West Offaly Power Station was refused permission to continue burning peat for electricity and closed in December 2020. Peat-fired power generation is due to cease by 2023. Edenderry is the last peat-fuelled power plant left operating in Ireland; Bord na Móna has been co-firing peat with biomass at Edenderry for more than five years, and from 2023 it will run solely on biomass.
Coal
Coal remains an important solid fuel that is still used for home heating by a portion of households. To improve air quality, the burning of so-called 'smoky coal' is banned in certain areas. The regulations and policy relating to smoky fuel are dealt with by the Minister for the Environment, Climate and Communications. Ireland has a single coal-fired power plant at Moneypoint, Co. Clare, which is operated by ESB Power Generation. With an output of 915 MW, it is Ireland's largest power station. The station was originally built in the 1980s as part of a fuel diversity strategy and was significantly refurbished during the 2000s to meet environmental regulations and standards. Moneypoint is considered to have a useful life until at least 2025, but ESB Power Generation has indicated that it intends to close Moneypoint and convert it to a green energy hub. Coal-fired power generation is due to cease by 2025.
Oil
There have been no commercial discoveries of oil in Ireland to date. One Irish oil explorer is Providence Resources, whose CEO is Tony O'Reilly, Junior and whose main shareholders include Tony O'Reilly with a 40% stake. The oil industry in Ireland is based on the import, production and distribution of refined petroleum products. Oil and petroleum products are imported via oil terminals around the coast. Some crude oil is imported for processing at Ireland's only oil refinery at Whitegate, County Cork.
Renewable Energy
Renewable energy includes wind, solar, biomass and geothermal energy sources.
Biomass power
Non-renewable energy refers to energy generated from domestic and commercial waste in Energy-from-Waste plants. The Dublin Waste-to-Energy Facility burns waste to provide heat to generate electricity and provide district heating for areas of Dublin.
The contribution of non-renewable energy to Ireland's energy supply is shown by the graph. The quantity of energy is in thousand tonnes of oil equivalent.
Wind power
Wood
The Department of Agriculture, Food and the Marine have responsibility for the Forest Service and forestry policy in Ireland. Coillte (a commercial state company operating in forestry, land based businesses, renewable energy and panel products) and Coford (the Council for Forest Research and Development) also fall under that Department's remit.
Wood is used by households that rely on solid fuels for home heating. It is used in open fireplaces, stoves and biomass boilers.
In 2014, the Department produced a draft bioenergy strategy. In compiling the strategy, the Department worked closely with the Department of Agriculture in terms of the potential of sustainable wood biomass for energy purposes.
Electricity
Final consumption of electricity in 2014 was 24 TWh. Electricity demand, which peaked in 2008, has since returned to 2004 levels. Renewable electricity generation, consisting of wind, hydro, landfill gas, biomass and biogas, accounted for 22.7% of gross electricity consumption. The use of renewables in electricity generation in 2014 reduced CO2 emissions by 2.6 Mt. In 2014, wind generation accounted for 18.2% of electricity generated and was the second-largest source of electricity generation after natural gas. The carbon intensity of electricity has fallen by 49% since 1990, to a new low of 457 g CO2/kWh in 2014. Ireland is connected to the adjacent UK National Grid at an electricity interconnection level of 9% (transmission capacity relative to production capacity). In 2016, Ireland and France agreed to advance the planning of the Celtic Interconnector, which if realized will provide the two countries with a 700 MW transmission capacity by 2025.
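The carbon intensity and consumption figures above can be combined into a rough total for electricity-related emissions. The sketch below does that arithmetic as a consistency check only; it is indicative rather than exact, since the quoted intensity applies to generation rather than to final consumption.

```python
# Rough cross-check of the Irish electricity figures quoted above.
# All inputs come from the text; the result is indicative only.

consumption_twh = 24        # final electricity consumption, 2014
intensity_g_per_kwh = 457   # carbon intensity of electricity, 2014
renewable_share = 0.227     # renewable share of gross electricity consumption

kwh = consumption_twh * 1e9
emissions_mt = kwh * intensity_g_per_kwh / 1e12  # grams -> million tonnes
print(f"~{emissions_mt:.1f} Mt CO2 from electricity")              # ~11.0 Mt
print(f"~{consumption_twh * renewable_share:.1f} TWh renewable")   # ~5.4 TWh
```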
Energy storage
The utility ESB operates a pumped storage station in Co. Wicklow called Turlough Hill, with 292 MW peak power capacity. A compressed air energy storage project in salt caverns near Larne received €15m of funding from the EU, and won a further €90m from the EU in 2017 as a project of common interest (PCI). It was intended to provide a 250–330 MW buffer for 6–8 hours in the electricity system, but has since been cancelled because the company in charge of the project, Gaelectric, entered administration in 2017. Several battery energy storage systems are in development. Statkraft completed an 11 MW, 5.6 MWh lithium-ion battery in April 2020. ESB has developed a battery facility on the site of its Aghada gas-fired power station in Co. Cork, and ESB and partner company Fluence are developing a further 60 MWh of battery capacity at Inchicore in Dublin and 38 MWh at Aghada. These facilities are primarily aimed at providing ancillary grid services. Imported crude oil and petroleum products are stored at oil terminals around the coast. A strategic reserve of petroleum products, equivalent to 90 days' usage, is stored at some of these terminals; the National Oil Reserves Agency is responsible for the strategic reserve. The largest energy store in Ireland is the coal reserve for Moneypoint power station.
Carbon Tax
In 2010 the country's carbon tax was introduced at €15 per tonne of CO2 emissions (approx. US$20 per tonne). The tax applies to kerosene, marked gas oil, liquid petroleum gas, fuel oil, and natural gas. The tax does not apply to electricity because the cost of electricity is already included in pricing under the Single Electricity Market (SEM). Similarly, natural gas users are exempt if they can prove they are using the gas to "generate electricity, for chemical reduction, or for electrolytic or metallurgical processes". Partial relief is granted for natural gas covered by a greenhouse gas emissions permit issued by the Environmental Protection Agency; such gas is taxed at the minimum rate specified in the EU Energy Tax Directive, which is €0.54 per megawatt-hour at gross calorific value. Pure biofuels are also exempt.

The Economic and Social Research Institute (ESRI) estimated costs between €2 and €3 a week per household; a survey from the Central Statistics Office reports that Ireland's average disposable income was almost €48,000 in 2007. Activist group Active Retirement Ireland proposed a pensioner's allowance of €4 per week for the 30 weeks currently covered by the fuel allowance and that home heating oil be covered under the Household Benefit Package.

The tax is paid by companies. Payment for the first accounting period was due in July 2010. Fraudulent violation is punishable by jail or a fine. The NGO Irish Rural Link noted that according to ESRI a carbon tax would weigh more heavily on rural households. It claims that other countries have shown that carbon taxation succeeds only if it is part of a comprehensive package that includes reducing other taxes.

In 2011, the coalition government of Fine Gael and Labour raised the tax to €20/tonne. Farmers were granted tax relief. The Minister for Finance introduced, with effect from 1 May 2013, a solid fuel carbon tax (SFCT). The Revenue Commissioners have responsibility for administering the tax. It applies to coal and peat and is chargeable per tonne of product.
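A carbon tax quoted per tonne of CO2 translates into a charge per litre of fuel once an emission factor is assumed. The sketch below shows that conversion; the €20/tonne rate comes from the text, while the per-litre emission factors are approximate, typical values used here purely for illustration, not figures from the Irish legislation.

```python
# Sketch of converting a carbon tax per tonne of CO2 into cents per litre of fuel.
# Tax rate from the text (2011 rate); emission factors are assumed typical values.

def tax_per_litre(rate_eur_per_tonne: float, kg_co2_per_litre: float) -> float:
    """Carbon tax in euro per litre of fuel."""
    return rate_eur_per_tonne * kg_co2_per_litre / 1000.0

CARBON_TAX = 20.0  # euro per tonne of CO2

for fuel, factor in {"petrol": 2.3, "diesel": 2.6, "kerosene": 2.5}.items():
    print(f"{fuel:9s} ~{100 * tax_per_litre(CARBON_TAX, factor):.1f} cent/litre")
```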
Sustainable Energy Authority of Ireland (SEAI)
The Sustainable Energy Authority of Ireland (SEAI) was established as Ireland's national energy authority under the Sustainable Energy Act 2002. SEAI's mission is to play a leading role in transforming Ireland into a society based on sustainable energy structures, technologies and practices. To fulfil this mission, SEAI advises the Government and delivers programmes aimed at a wide range of stakeholders.
Nuclear energy
See also
Electricity sector in Ireland
Renewable energy in the Republic of Ireland
List of power stations in the Republic of Ireland
Oil terminals in Ireland
Whitegate refinery
Renewable energy by country
United Kingdom–Ireland natural gas interconnectors
References
ovo energy | OVO Energy is a major energy supplier based in Bristol, England.
It was founded by Stephen Fitzpatrick and began trading energy in September 2009, buying and selling electricity and gas to supply domestic properties throughout the UK. By June 2017 OVO had 680,000 customers, an increase of 10,000 over the previous year, representing a 2.5% domestic market share. In November 2018, OVO Energy acquired one of its largest competitors, Spark Energy. Although at first one of over 15 smaller energy companies competing with the Big Six which dominated the market, in January 2020 OVO completed the acquisition of the retail arm of SSE, becoming itself one of the Big Six and the country's third-largest domestic energy supply company.
History
OVO Energy is British-owned and privately backed, with its headquarters in Bristol. It has supplied gas and electricity to domestic customers since 2009, and to business customers since 2013. This sector of the UK economy is dominated by a number of larger companies known as the Big Six. On 14 February 2019, Mitsubishi Corporation bought a 20 percent stake in OVO, valuing the company at £1bn. OVO Energy is part of OVO Group, which in turn is a subsidiary of Imagination Industries Ltd, a holding company wholly owned by Stephen Fitzpatrick. In 2022, OVO Energy was ranked second worst (behind only Utilita) for customer service by Citizens Advice.
Acquisitions
The 2018 acquisition of Spark Energy included a subsidiary, Home Telecom Limited, which specialises in providing telephone and broadband services to tenants of private landlords. In September 2019, OVO agreed to pay £500 million for SSE Energy Services, the retail business of SSE plc, and the purchase – which included SSE's 8,000 employees and their phone, broadband and heating insurance customers – was completed in January 2020. This made OVO the UK's second-largest energy supply company (after British Gas) with around 5 million customers. OVO stated that the SSE brand would continue for the time being. SSE had earlier intended to merge the business with Innogy's subsidiary Npower, but this was called off in December 2018. Following OVO Energy's takeover of SSE, numerous former SSE customers whose accounts had been transferred to OVO Energy reported incorrect and inflated bills.
Regulator action
OVO Energy was required to pay redress in two consecutive years, 2020 and 2021, over failings in its charging and pricing practices.
In January 2020, OVO Energy agreed to pay £8.9m into Ofgem's voluntary redress fund, after an investigation by Ofgem found instances of undercharging and overcharging, and inaccurate annual statements sent to more than half a million customers between 2015 and 2018. Head of Ofgem enforcement, Anthony Pygram, said "The supplier did not prioritise putting these issues right whilst its business was expanding." In March 2021, as part of a wider investigation into price protection failings by energy suppliers, OVO's practices were found to have caused detriment to 240,563 customers totalling over £2m, and the company was required to pay redress of over £2.8m – the highest amount of compensation among the 18 companies investigated. Despite such regulator action, OVO Energy's overcharging of consumers remains a recurring theme in news reports.
Electricity
Electricity supplied by OVO Energy comes from various sources including wind farms in Gloucestershire and North Wales, and the burning of landfill gas. Its two tariffs include 50% green electricity (OVO Better Energy) and 100% green electricity (OVO Greener Energy).
OVO's "pay as you go" product has been branded as Boost since 2017. After taking on customers from Economy Energy in 2019, the brand had around 350,000 customers.
Gas
OVO Energy sources its gas from the national grid. The majority of the UK's gas is sourced from the North Sea; the rest comes from Norway, Continental Europe and, to a lesser extent, further afield. Increasingly, gas is imported as liquefied natural gas (LNG): natural gas cooled to about −165 °C (−265 °F) so that it condenses into a liquid, greatly reducing its volume and making it easier to transport.
Energy market competition
The entry of OVO into the UK supply market in 2009 was welcomed as it increased competition in a market that had been criticised for high prices. In October 2013, Managing Director Stephen Fitzpatrick appeared at the Energy and Climate Change Select Committee, when energy companies were asked to justify recent gas and electricity price rises. Fitzpatrick explained to the committee that the 'wholesale gas price had actually got cheaper', contrary to the Big Six energy suppliers' assertions that international global prices of gas and electricity had consistently risen. In November 2018, OVO acquired one of its rivals, Spark Energy, after the troubled supplier ceased trading. Following the collapse of Economy Energy in January 2019, regulator Ofgem announced that OVO Energy would take on Economy Energy's 235,000 customers.
Sponsorship
In 2016, OVO sponsored the Bristol leg of the Tour of Britain cycling race. In 2017, the company began sponsoring both The Women's Tour and the Tour of Britain, the longest cycle stage races taking place in the UK. In March 2018, OVO announced they would begin providing equal prize money for both tours.
They are no longer sponsoring either race as of 2021.
In October 2021, OVO Energy took over sponsorship of Glasgow's entertainment and multi-purpose indoor arena, which was rebranded as the OVO Hydro.
Management
Stacey Cartwright was appointed as chair of the board at OVO Energy in April 2020. She holds other directorships including at Savills, Genpact and the Football Association, and was deputy chair at retailer Harvey Nichols. Non-executive directors include Jonson Cox, chair of water regulator Ofwat.
References
External links
Official website |
energy policy of the european union | The energy policy of the European Union focuses on energy security, sustainability, and integrating the energy markets of member states. An increasingly important part of it is climate policy. A key energy policy adopted in 2009 is the 20/20/20 objectives, binding for all EU Member States. The target involved increasing the share of renewable energy in final energy use to 20%, reducing greenhouse gases by 20% and increasing energy efficiency by 20%. After this target was met, a new target for 2030 was set at a 55% reduction of greenhouse gas emissions as part of the European Green Deal. After the Russian invasion of Ukraine, the EU's energy policy turned more towards energy security in the REPowerEU policy package, which boosts both renewable deployment and fossil fuel infrastructure for alternative suppliers. The EU Treaty of Lisbon of 2007 legally includes solidarity in matters of energy supply and changes to the energy policy within the EU. Prior to the Treaty of Lisbon, EU energy legislation had been based on the EU's authority in the area of the common market and environment. However, in practice many policy competencies in relation to energy remain at national member state level, and progress in policy at European level requires voluntary cooperation by member states. In 2007, the EU was importing 82% of its oil and 57% of its gas, which then made it the world's leading importer of these fuels. Only 3% of the uranium used in European nuclear reactors was mined in Europe. Russia, Canada, Australia, Niger and Kazakhstan were the five largest suppliers of nuclear materials to the EU, supplying more than 75% of the total needs in 2009. In 2015, the EU imported 53% of the energy it consumed. The European Investment Bank took part in energy financing in Europe in 2022: as part of the REPowerEU package, it is to support up to €115 billion of energy investment through 2027, in addition to its regular lending operations in the sector; in 2022, the EIB sponsored €17 billion in energy investments throughout the European Union. The history of energy markets in Europe started with the European Coal and Steel Community, which was created in 1951 to lessen hostilities between France and Germany by making them economically intertwined. The 1957 Treaty of Rome established the free movement of goods, but three decades later, integration of energy markets had yet to take place. The start of an internal market for gas and electricity took place in the 1990s.
History
Early days
The history of energy markets in Europe started with the European Coal and Steel Community, which was created in 1951 in the aftermath of World War II to lessen hostilities between France and Germany by making them economically intertwined. A second key moment was the formation of Euratom in 1957, to collaborate on nuclear energy. A year later, the Treaty of Rome established the free movement of goods, which was intended to create a single market also for energy. However, three decades later, integration of energy markets had yet to take place. In the late 1980s, the European Commission proposed a set of policies (called directives in the EU context) on integrating the European market. One of the key ideas was that consumers would be able to buy electricity from outside of their own country. This plan encountered opposition from the Council of Ministers, as the policy sought to liberalise what was regarded as a natural monopoly. The less controversial parts of the directives, those on price transparency and transit rights for grid operators, were adopted in 1990.
Start of an internal market
The 1992 Treaty of Maastricht, which founded the European Union, included no chapter specific to energy. Such a chapter had been rejected by member states who wanted to retain autonomy over energy, specifically those with larger energy reserves. A directive for an internal electricity market was passed in 1996 by the European Parliament, and another on the internal gas market two years later. The directive for the electricity market contained the requirement that network operation and energy generation should not be carried out by a single (monopolistic) company. Keeping energy generation separate would allow for competition in that sector, whereas network operation would remain regulated.
Renewable energy and the 20/20/20 target
In 2001, the first Renewable Energy Directive was passed, in the context of the 1997 Kyoto Protocol against climate change. It included a target of doubling the share of renewable energy in the EU's energy mix from 6% to 12% by 2010. The increase for the electricity sector was even higher, with a goal of 22%. Two years later a directive was passed which increased the share of biofuels in transport. These directives were replaced in 2009 with the 20-20-20 targets, which sought to increase the share of renewables to 20% by 2020. Additionally, greenhouse gas emissions needed to drop by 20% compared to 1990, and energy efficiency needed to improve by 20%. The package included mandatory targets which differed by member state. While not all national governments reached their targets, overall the EU surpassed all three: greenhouse gas emissions, for instance, were 34% lower in 2020 than in 1990.
Energy Union
The Energy Union Strategy is a project of the European Commission to coordinate the transformation of European energy supply. It was launched in February 2015, with the aim of providing secure, sustainable, competitive and affordable energy. Donald Tusk, President of the European Council, introduced the idea of an energy union when he was Prime Minister of Poland. Commission Vice President Maroš Šefčovič called the Energy Union the biggest energy project since the European Coal and Steel Community. The EU's reliance on Russia for its energy, and the annexation of Crimea by Russia, have been cited as strong reasons for the importance of this policy.
The European Council concluded on 19 March 2015 that the EU is committed to building an Energy Union with a forward-looking climate policy on the basis of the commission's framework strategy, with five priority dimensions:
Energy security, solidarity and trust
A fully integrated European energy market
Energy efficiency contributing to moderation of demand
Decarbonising the economy
Research, innovation and competitiveness.
The strategy includes a minimum 10% electricity interconnection target for all member states by 2020, which the Commission hopes will put downward pressure on energy prices, reduce the need to build new power plants, reduce the risk of blackouts or other forms of electrical grid instability, improve the reliability of renewable energy supply, and encourage market integration. EU Member States agreed on 25 January 2018 to the commission's proposal to invest €873 million in clean energy infrastructure. The projects are financed by the CEF (Connecting Europe Facility).
€578 million for the construction of the Biscay Gulf France-Spain interconnection, a 280 km long off-shore section and a French underground land section. This new link will increase the interconnection capacity between both countries from 2.8 GW to 5 GW.
€70 million to construct the SüdOstLink, 580 km of high-voltage cables laid fully underground. The power line will create an urgently needed link between the wind power generated in the north and the consumption centres in the south of Germany.
€101 million for the CyprusGas2EU project to provide natural gas to Cyprus
European Green Deal
The European Green Deal, approved in 2020, is a set of policy initiatives by the European Commission with the overarching aim of making the European Union (EU) climate neutral in 2050. The plan is to review each existing law on its climate merits, and also introduce new legislation on the circular economy, building renovation, farming and innovation. The president of the European Commission, Ursula von der Leyen, stated that the European Green Deal would be Europe's "man on the moon moment". Von der Leyen appointed Frans Timmermans as Executive Vice President of the European Commission for the European Green Deal. On 13 December 2019, the European Council decided to press ahead with the plan, with an opt-out for Poland. On 15 January 2020, the European Parliament voted to support the deal as well, with requests for higher ambition. A year later, the European Climate Law was passed, which legislated that greenhouse gas emissions should be 55% lower in 2030 compared to 1990. The Fit for 55 package is a large set of proposed legislation detailing how the European Union plans to reach this target, including major proposals for energy sectors such as renewable energy and transport. After the Russian invasion of Ukraine, the EU launched REPowerEU to quickly reduce import dependency on Russia for oil and gas. While the policy proposal includes a substantial acceleration of renewable energy deployment, it also contains expansion of fossil fuel infrastructure from alternative suppliers.
Earlier proposals
The possible principles of an Energy Policy for Europe were elaborated in the commission's green paper, A European Strategy for Sustainable, Competitive and Secure Energy, on 8 March 2006. As a result of the decision to develop a common energy policy, the first proposals, Energy for a Changing World, were published by the European Commission, following a consultation process, on 10 January 2007.
It is claimed that they will lead to a 'post-industrial revolution', or a low-carbon economy, in the European Union, as well as increased competition in the energy markets, improved security of supply, and improved employment prospects. The commission's proposals were approved at a meeting of the European Council on 8 and 9 March 2007. Key proposals include:
A cut of at least 20% in greenhouse gas emissions from all primary energy sources by 2020 (compared to 1990 levels), while pushing for an international agreement to succeed the Kyoto Protocol aimed at achieving a 30% cut by all developed nations by 2020.
A cut of up to 95% in carbon emissions from primary energy sources by 2050, compared to 1990 levels.
A minimum target of 10% for the use of biofuels by 2020.
That the energy supply and generation activities of energy companies should be 'unbundled' from their distribution networks to further increase market competition.
Improving energy relations with the EU's neighbours, including Russia.
The development of a European Strategic Energy Technology Plan to develop technologies in areas including renewable energy, energy conservation, low-energy buildings, fourth generation nuclear reactor, coal pollution mitigation, and carbon capture and sequestration (CCS).
Developing an Africa-Europe Energy partnership, to help Africa 'leap-frog' to low-carbon technologies and to help develop the continent as a sustainable energy supplier.
Many of the underlying proposals are designed to limit global temperature changes to no more than 2 °C above pre-industrial levels, of which 0.8 °C has already taken place and another 0.5–0.7 °C is already committed. 2 °C is usually seen as the upper temperature limit for avoiding 'dangerous global warming'. Because of only minor efforts in global climate change mitigation, it is highly likely that the world will not be able to reach this particular target. The EU might then be forced not only to accept a less ambitious global target but also, because the planned emissions reductions in the European energy sector (95% by 2050) have been derived directly from the 2 °C target since 2007, to revise its energy policy paradigm. In 2014, negotiations about binding EU energy and climate targets for 2030 were set to start.
The European Parliament voted in February 2014 in favour of binding 2030 targets on renewables, emissions and energy efficiency: a 40% cut in greenhouse gases compared with 1990 levels, at least 30% of energy to come from renewable sources, and a 40% improvement in energy efficiency.
Current policies
Energy sources
Under the requirements of the Directive on Electricity Production from Renewable Energy Sources, which entered into force in October 2001, the member states are expected to meet "indicative" targets for renewable energy production. Although there is significant variation in national targets, the average is that 22% of electricity should be generated from renewables by 2010 (compared to 13.9% in 1997). The European Commission has proposed in its Renewable Energy Roadmap a binding target of increasing the level of renewable energy in the EU's overall mix from less than 7% today to 20% by 2020. Europe spent €406 billion in 2011 and €545 billion in 2012 on importing fossil fuels. This is around three times more than the cost of the Greek bailout up to 2013. In 2012, wind energy avoided €9.6 billion of fossil fuel costs. EWEA recommends a binding renewable energy target to support the replacement of fossil fuels with wind energy in Europe by providing a stable regulatory framework. In addition, it recommends setting a minimum emission performance standard for all new-build power installations. For over a decade, the European Investment Bank has managed the European Local Energy Assistance (ELENA) facility on behalf of the European Commission, which provides technical assistance to any private or public entity in order to help prepare energy-efficient and renewable energy investments in buildings or innovative urban transportation projects. The EU Modernisation Fund, formed in 2018 as part of the new EU Emissions Trading System (ETS) Directive and with direct engagement from the EIB, targets such investments as well as energy efficiency and a fair transition across 10 Member States.
The European Investment Bank took part in energy financing in Europe in 2022: as part of the REPowerEU package, it is to support up to €115 billion of energy investment through 2027, in addition to its regular lending operations in the sector. The European Investment Bank Group invested about €134 billion in the energy sector of the European Union over the last ten years (2010–2020), in addition to extra funding for renewable energy projects in various countries. These initiatives are currently assisting Europe in surviving the crisis brought on by the sudden interruption of Russian gas supply.
Energy markets
The EU promotes electricity market liberalisation and security of supply through Directive 2019/944.
The 2004 Gas Security Directive has been intended to improve security of supply in the natural gas sector.
Energy efficiency
Energy taxation
IPEEC
At the Heiligendamm Summit in June 2007, the G8 acknowledged an EU proposal for an international initiative on energy efficiency tabled in March 2007, and agreed to explore, together with the International Energy Agency, the most effective means to promote energy efficiency internationally. A year later, on 8 June 2008, the G8 countries, China, India, South Korea and the European Community decided to establish the International Partnership for Energy Efficiency Cooperation, at the Energy Ministerial meeting hosted by Japan in the frame of the 2008 G8 Presidency, in Aomori.
Buildings
Buildings account for around 40% of EU energy requirements and have been the focus of several initiatives. From 4 January 2006, the 2002 Directive on the energy performance of buildings requires member states to ensure that new buildings, as well as large existing buildings undergoing refurbishment, meet certain minimum energy requirements. It also requires that all buildings should undergo 'energy certification' prior to sale, and that boilers and air conditioning equipment should be regularly inspected.
As part of the EU's SAVE Programme, aimed at promoting energy efficiency and encouraging energy-saving behaviour, the Boiler Efficiency Directive specifies minimum levels of efficiency for boilers fired with liquid or gaseous fuels. Originally, from June 2007, all homes (and other buildings) in the UK would have to undergo Energy Performance Certification before they are sold or let, to meet the requirements of the European Energy Performance of Buildings Directive (Directive 2002/91/EC).
Transport
EU policies include the voluntary ACEA agreement, signed in 1998, to cut carbon dioxide emissions for new cars sold in Europe to an average of 140 grams of CO2/km by 2008, a 25% cut from the 1995 level. Because the target was unlikely to be met, the European Commission published new proposals in February 2007, requiring a mandatory limit of 130 grams of CO2/km for new cars by 2012, with 'complementary measures' being proposed to achieve the target of 120 grams of CO2/km that had originally been expected. In the area of fuels, the 2001 Biofuels Directive requires that 5.75% of all transport fossil fuels (petrol and diesel) should be replaced by biofuels by 31 December 2010, with an intermediate target of 2% by the end of 2005. In February 2007 the European Commission proposed that, from 2011, suppliers would have to reduce carbon emissions per unit of energy by 1% a year from 2010 levels, resulting in a cut of 10% by 2020, alongside stricter fuel standards to combat climate change and reduce air pollution.
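The vehicle figures above are internally consistent and the implied 1995 baseline can be recovered with simple arithmetic. The sketch below is only a consistency check on the numbers quoted in the text, reading the fuel-supplier reduction as a simple (non-compounding) 1% per year.

```python
# Consistency check on the vehicle CO2 figures quoted above.
# 140 g/km is described as a 25% cut from the 1995 fleet average.

target_2008 = 140.0   # g CO2/km, ACEA agreement target
cut = 0.25            # 25% reduction from the 1995 level
baseline_1995 = target_2008 / (1 - cut)
print(f"implied 1995 fleet average: {baseline_1995:.1f} g CO2/km")  # ~186.7

# Fuel suppliers: 1% per year from 2010 levels over ten years, read linearly,
# matches the 10% cut by 2020 cited in the text.
print(f"10 x 1% = {10 * 1}% total reduction")
```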
Flights
Airlines can be charged for their greenhouse gas emissions on flights to and from Europe, according to a court ruling in October 2011. Historically, EU aviation fuel was tax free and no VAT was applied. Fuel taxation in the EU was banned in 2003 under the Energy Taxation Directive, except for domestic flights and intra-EU flights covered by bilateral agreements; no such agreements exist. In 2018 Germany applied 19% VAT on domestic airline tickets, while many other member states applied 0% VAT. Unlike air travel, VAT is applied to bus and rail, which creates economic distortions, increasing demand for air travel relative to other forms of transport. This increases aviation emissions and constitutes a state aid subsidy. An air fuel tax of 33 cents per litre, equal to that on road traffic, would raise €9.5 billion, and applying a 15% VAT to all air traffic within and from Europe would raise about €15 billion.
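The revenue estimates above imply rough sizes for the underlying tax bases, obtained by simple division. The sketch below derives them as a consistency check only; the results are no better than the quoted figures themselves.

```python
# Implied tax bases behind the aviation revenue estimates quoted above.
# All rates and revenue figures come from the text; results are indicative only.

fuel_tax_rate = 0.33       # euro per litre, equal to road fuel per the text
fuel_tax_revenue = 9.5e9   # euro
vat_rate = 0.15
vat_revenue = 15e9         # euro

print(f"implied jet fuel volume: {fuel_tax_revenue / fuel_tax_rate / 1e9:.1f} billion litres")
print(f"implied ticket spending: {vat_revenue / vat_rate / 1e9:.0f} billion euro")
```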
Industry
The European Union Emission Trading Scheme, introduced in 2005 under the 2003 Emission Trading Directive, sets member state-level caps on greenhouse gas emissions for power plants and other large point sources.
Consumer goods
A further area of energy policy has been in the area of consumer goods, where energy labels were introduced to encourage consumers to purchase more energy-efficient appliances.
External energy relations
Beyond the bounds of the European Union, EU energy policy has included negotiating and developing wider international agreements, such as the Energy Charter Treaty, the Kyoto Protocol, the post-Kyoto regime and a framework agreement on energy efficiency; extension of the EC energy regulatory framework or principles to neighbours (Energy Community, Baku Initiative, Euro-Med energy cooperation) and the emission trading scheme to global partners; and the promotion of research and the use of renewable energy. The EU-Russia energy cooperation will be based on a new comprehensive framework agreement within the post-Partnership and Cooperation Agreement (PCA), which will be negotiated in 2007. The energy cooperation with other third energy producer and transit countries is facilitated with different tools, such as the PCAs, the existing and foreseen Memorandums of Understanding on Energy Cooperation (with Ukraine, Azerbaijan, Kazakhstan and Algeria), the Association Agreements with Mediterranean countries, the European Neighbourhood Policy Action Plans, Euromed energy cooperation, the Baku initiative, and the EU-Norway energy dialogue. For the cooperation with African countries, a comprehensive Africa-Europe Energy partnership would be launched at the highest level, with the integration of Europe's Energy and Development Policies.
For ensuring efficient follow-up and coherence in pursuing the initiatives and processes, for sharing information in case of an external energy crisis, and for assisting the EU's early response and reactions in case of energy security threats, the network of energy correspondents in the Member States was established in early 2007. After the Russian-Ukrainian Gas Crisis of 2009 the EU decided that the existing external measures regarding gas supply security should be supplemented by internal provisions for emergency prevention and response, such as enhancing gas storage and network capacity or the development of the technical prerequisites for reverse flow in transit pipelines.
Just Transition Fund
The Just Transition Fund (JTF) was created in 2020 to boost investment in low-carbon energy. The fund was criticised both for its blanket ban on low-carbon nuclear power and for introducing a loophole for fossil gas. Having the largest workforce dedicated to the coal industry, Poland, followed by Germany and Romania, is the fund's largest recipient. Amounting to €17.5 billion, the fund was approved by the European Parliament in May 2021.
Solar anti-dumping levies
In 2013, a two-year investigation by the European Commission concluded that Chinese solar panel exporters were selling their products in the EU up to 88% below market prices, backed by state subsidies. In response, the European Council imposed tariffs on solar panels imported from China at an average rate of 47.6%, beginning 6 June that year. The Commission reviewed these measures in December 2016 and proposed to extend them for two years until March 2019. However, in January 2017, 18 out of 28 EU member states voted in favour of shortening the extension period. In February 2017, the commission announced its intention to extend its anti-dumping measures for a reduced period of 18 months.
Research and development
The European Union is active in the areas of energy research, development and promotion, via initiatives such as CEPHEUS (ultra-low energy housing), and programs under the umbrella titles of SAVE (energy saving) ALTENER (new and renewable energy sources), STEER (transport) and COOPENER (developing countries). Through Fusion for Energy, the EU is participating in the ITER project.
SET Plan
The Seventh Framework Programme (FP7), which ran until 2013, reserved only a moderate amount of funding for energy research, even though energy had emerged as one of the key issues of the European Union. A large part of FP7 energy funding was devoted to fusion research, a technology that will not be able to help meet European climate and energy objectives until beyond 2050. The European Commission tried to redress this shortfall with the SET plan. The SET plan initiatives included a European Wind Initiative, the Solar Europe Initiative, the Bioenergy Europe Initiative, the European electricity grid initiative and an initiative for sustainable nuclear fission. The budget for the SET plan is estimated at €71.5 billion. The IEA raised its concern that demand-side technologies do not feature at all in the six priority areas of the SET Plan.
Public opinion
In a poll carried out for the European Commission in October and November 2005, 47% of the citizens questioned in the 27 countries of the EU (including the 2 states that joined in 2007) were in favour of taking decisions on key energy policy issues at a European level. 37% favoured national decisions and 8% preferred that they be tackled locally. A similar survey of 29,220 people in March and May 2006 indicated that the balance had changed in favour of national decisions in these areas (42% in favour), with 39% backing EU policy making and 12% preferring local decisions. There was significant national variation, with 55% in favour of European-level decision making in the Netherlands, but only 15% in Finland.
A comprehensive public opinion survey was performed in May and June 2006. The authors propose following conclusions:
Energy issues are considered to be important but not at first glance.
EU citizens perceive great future promise in the use of renewable energies. Despite majority opposition, nuclear energy also has its place in the future energy mix.
Citizens appear to opt for changing the energy structure, enhancing research and development and guaranteeing the stability of the energy field rather than saving energy as the way to meet energy challenges.
The possible future consequences of energy issues do not generate deep fears in Europeans' minds.
Europeans appear to be fairly familiar with energy issues, although their knowledge seems somewhat vague.
Energy issues touch everybody, and it is therefore hard to distinguish clear groups with differing perceptions. Nevertheless, a rough distinction between groups of citizens is sketched.
Example European countries
Germany
In September 2010, the German government adopted a set of ambitious goals to transform the national energy system and to reduce national greenhouse gas emissions by 80 to 95% by 2050 (relative to 1990). This transformation became known as the Energiewende. Subsequently, the government decided to phase out the nation's fleet of nuclear reactors by 2022. As of 2014, the country was making steady progress on this transition.
See also
CHP Directive
Directorate-General for Energy
Energy Charter Treaty
Energy Community
Energy diplomacy
Energy in Europe
EU Energy Efficiency Directive 2012
European Climate Change Programme
European Commissioner for Energy
European countries by electricity consumption per person
European countries by fossil fuel use (% of total energy)
European Ecodesign Directive
European Pollutant Emission Register (EPER)
Global strategic petroleum reserves
Internal Market in Electricity Directive
INOGATE
List of electricity interconnection level
Renewable energy in the European Union
Special economic zone
Transport in Europe
References
External links
Official website
European information campaign on the opening of the energy markets and on energy consumers' right.
European Strategic Energy Technology Plan, Towards A Low Carbon Future.
Eurostat – Statistics Explained – all articles on energy
ManagEnergy, for energy efficiency and renewable energies at the local and regional level.
BBC Q&A: EU energy proposals
2006 Energy Green Paper
Collective Energy Security: A New Approach for Europe
Berlin Forum on Fossil Fuels.
Netherlands Environmental Assessment Agency – Meeting the European Union 2 °C climate target: global and regional emission implications
German Institute for International and Security Affairs – Perspectives for the European Union's External Energy Policy
The Liberalisation of the Power Industry in the European Union and its Impact on Climate Change – A Legal Analysis of the Internal Market in Electricity.
In the media
8 Sep 2008 New Europe (neurope.eu) : Energy security and Europe.
10 Jan 2007, Reuters: EU puts climate change at heart of energy policy
14 Dec 2006, opendemocracy.net: Russia, Germany and European energy policy
20 Nov 2006, eupolitix.com: Barroso calls for strong EU energy policy |
environmental issues in the european union | Environmental issues in the European Union include the environmental issues identified by the European Union as well as its constituent states. The European Union has several federal bodies which create policy and practice across the constituent states.
Issues
Air pollution
A report from the European Environment Agency shows that road transport remains Europe's single largest air polluter. National Emission Ceilings (NEC) for certain atmospheric pollutants are regulated by Directive 2001/81/EC (NECD). As part of the preparatory work associated with the revision of the NECD, the European Commission is assisted by the NECPI working group (National Emission Ceilings – Policy Instruments). Directive 2008/50/EC of the European Parliament and of the Council of 21 May 2008 on ambient air quality and cleaner air for Europe (the new Air Quality Directive) entered into force on 11 June 2008. Individual citizens can force their local councils to tackle air pollution, following an important ruling in July 2009 from the European Court of Justice (ECJ). The court was asked to judge the case of a resident of Munich, Dieter Janecek, who said that under the 1996 EU Air Quality Directive (Council Directive 96/62/EC of 27 September 1996 on ambient air quality assessment and management) the Munich authorities were obliged to take action to stop pollution exceeding specified targets. Janecek took his case to the ECJ, whose judges said European citizens are entitled to demand air quality action plans from local authorities in situations where there is a risk that EU limits will be exceeded.
Legislation
Climate change
Protected areas
Policy
Renewable energy
European Green Deal
Pesticides
Invasive species
Government organizations
EEA
Climate Programme
Directorate General
By state
See also
Bonn Agreement
Coordination of Information on the Environment
Directorate-General for the Environment (European Commission)
European Commissioner for the Environment
European Federation for Transport and Environment
European Week for Waste Reduction (EWWR)
References
External links
Environment at EUROPA, the portal site of the European Union
European Commission - Nature and Biodiversity
recyclingportal.eu |
asia-pacific partnership on clean development and climate | The Asia-Pacific Partnership on Clean Development and Climate, also known as the APP, was an international, voluntary, public-private partnership among Australia, Canada, India, Japan, the People's Republic of China, South Korea, and the United States. It was announced on July 28, 2005 at an Association of South East Asian Nations (ASEAN) Regional Forum meeting and launched on January 12, 2006, at the Partnership's inaugural Ministerial meeting in Sydney. The Partnership formally concluded on 5 April 2011, although a number of individual projects continue. The conclusion of the APP and the cancellation of many of its projects attracted almost no media comment.
Foreign, Environment and Energy Ministers from partner countries agreed to co-operate on the development and transfer of technology which enables reduction of greenhouse gas emissions, consistent with and complementary to the UN Framework Convention on Climate Change and other relevant international instruments, and intended to complement but not replace the Kyoto Protocol. Ministers agreed to a Charter, Communiqué and Work Plan that "outline a ground-breaking new model of private-public task forces to address climate change, energy security and air pollution."
Member countries account for over 50% of the world's greenhouse gas emissions, energy consumption, GDP and population. Unlike the Kyoto Protocol (currently unratified by the United States), which imposes mandatory limits on greenhouse gas emissions, the Partnership engages member countries to accelerate the development and deployment of clean energy technologies, with no mandatory enforcement mechanism. This has led other governments, climate scientists and environmental groups to criticise the Partnership as worthless. Proponents, on the other hand, argue that unrestricted economic growth and emission reductions can only be brought about through active engagement by all major polluters, including India and China; within the Kyoto Protocol framework, neither India nor China is yet required to reduce emissions.
Canada became the 7th member of the APP at the Second Ministerial Meeting in New Delhi on October 15, 2007. Canada's Prime Minister Stephen Harper earlier expressed his intention to join the Partnership in August 2007, despite some domestic opposition.
Aims
Former U.S. President George W. Bush called it a "new results-oriented partnership" that he said "will allow our nations to develop and accelerate deployment of cleaner, more efficient energy technologies to meet national pollution reduction, energy security and climate change concerns in ways that reduce poverty and promote economic development." John Howard, the former Australian Prime Minister, described the pact as "fair and effective". However, the World Wide Fund for Nature stated that "a deal on climate change that doesn't limit pollution is the same as a peace plan that allows guns to be fired", whilst the British Government's chief scientific adviser, Sir David King, said in a BBC interview that he doubted the new deal could work without setting caps on emissions, but added that it should be seen as a sign of progress on climate change. Compared to the Kyoto Protocol, which so far requires no emission reductions from India and China, the APP actively engages both countries by building market incentives to reduce greenhouse emissions along with building capacity and providing clean technology transfers. Proponents argue that this approach creates a greater likelihood that both India and China will, sooner rather than later, effectively cut their greenhouse emissions even though they are not required to do so under the Kyoto Protocol.
Areas for collaboration
The intent is to create a voluntary, non-legally binding framework for international cooperation to facilitate the development, diffusion, deployment, and transfer of existing, emerging and longer-term cost-effective, cleaner, more efficient technologies and practices among the Partners through concrete and substantial cooperation so as to achieve practical results; to promote and create enabling environments to assist in such efforts; to facilitate attainment of the Partners' respective national pollution reduction, energy security and climate change objectives; and to provide a forum for exploring the Partners' respective policy approaches relevant to addressing interlinked development, energy, environment, and climate change issues within the context of clean development goals, and for sharing experiences in developing and implementing respective national development and energy strategies.
The Partnership's inaugural Ministerial meeting established eight government/business taskforces through its Work Plan, posted on the APP website.
cleaner fossil energy
renewable energy and distributed generation
power generation and transmission
steel
aluminum
cement
coal mining
buildings and appliances
Ministerial meetings
The inaugural ministerial meeting was held at the Four Seasons Hotel and Government House in Sydney, Australia on January 11 and 12, 2006. Asia-Pacific Partnership Ministers agreed to and released a:
Charter that provides the framework and structure of the Partnership;
Communiqué that highlights key outcomes from this meeting; and
Work Plan that maps out an intensive agenda of work for the taskforces in the near term.
Partnership Ministers met again in New Delhi, India on October 15, 2007, where they released a second communiqué and admitted Canada as a Partner.
The Ministers also met in Shanghai, China on October 26–27, 2009 where they discussed the accomplishments of the Partnership since the New Delhi Ministerial, and received the results of a report analyzing and evaluating the progress of the APP flagship projects.
Criticism
The Partnership has been criticized by environmentalists, who have rebuked the proceedings as ineffectual without mandatory limits on greenhouse-gas emissions. A coalition of national environment groups and networks from all of the APP countries issued a challenge to their governments to make the APP meaningful by agreeing to mandatory targets, creating financial mechanisms with incentives for the dissemination of clean energy technologies, and creating an action plan to overcome the key barriers to technology transfer. U.S. Senator John McCain said the Partnership "[amounted] to nothing more than a nice little public relations ploy", while the Economist described the Partnership as a "patent fig-leaf for the refusal of America and Australia to ratify Kyoto".
Successes
Proponents of the Partnership have lauded the APP's achievements since its inception in 2006. In just over three years, the Partnership established a record of achievement in promoting collaboration between the partner governments and the private sector in key energy-intensive sectors and activities. The Partnership has worked to develop and implement detailed action plans across key sectors of the energy economy, and to date has endorsed 175 collaborative projects, including 22 flagship projects, across all seven Partner countries. These projects have, inter alia, helped power plant managers improve the efficiency of their operations, trained cement plant operators to save energy at their facilities, assisted in pushing solar photovoltaics toward commercialization, and improved the design, equipment and operation of buildings and appliances. The Partnership has been noted for its innovative work in public-private sector cooperation, and proponents cite it as an example of the benefits of international cooperative efforts in addressing climate change.
References
External links
Asia-Pacific Partnership Website - Includes information on the work of the Partnership, individual Task Force and other meeting pages, and documents in all Partner languages
Australian Department of Foreign Affairs Website Partnership Webpage - Includes documents
China's Asia-Pacific Partnership Website - Includes documents in Chinese
Japan's Asia-Pacific Partnership Website - Includes information in Japanese
Korea's Asia-Pacific Partnership Website - Includes information in Korean
U.S. Government's Asia-Pacific Partnership Website
Joint Press Release: Asia-Pacific Partnership Sets New Path to Address Climate Change
& China - The Lure of Cheap Coal
Wikinews July 28, 2005
US agrees climate deal with Asia - BBC
Climate pact: For good or bad? - BBC
CNN July 27, 2005
First meeting for 'Kyoto rival' - BBC
AAP report
Sydney Morning Herald report
AusBC report |
germany | Germany, officially the Federal Republic of Germany, is a country in the western region of Central Europe. It is the second-most populous country in Europe after Russia, and the most populous member state of the European Union. Germany is situated between the Baltic and North seas to the north, and the Alps to the south. Its 16 constituent states are bordered by Denmark to the north, Poland and the Czech Republic to the east, Austria and Switzerland to the south, and France, Luxembourg, Belgium, and the Netherlands to the west. The nation's capital and most populous city is Berlin and its main financial centre is Frankfurt; the largest urban area is the Ruhr.
Various Germanic tribes have inhabited the northern parts of modern Germany since classical antiquity. A region named Germania was documented before AD 100. In 962, the Kingdom of Germany formed the bulk of the Holy Roman Empire. During the 16th century, northern German regions became the centre of the Protestant Reformation. Following the Napoleonic Wars and the dissolution of the Holy Roman Empire in 1806, the German Confederation was formed in 1815.
Formal unification of Germany into the modern nation-state began on 18 August 1866 with the North German Confederation Treaty, which established the Prussia-led North German Confederation, transformed in 1871 into the German Empire. After World War I and the German Revolution of 1918–1919, the Empire was in turn transformed into the semi-presidential Weimar Republic. The Nazi seizure of power in 1933 led to the establishment of a totalitarian dictatorship, World War II, and the Holocaust. After the end of World War II in Europe and a period of Allied occupation, in 1949, Germany as a whole was organized into two separate polities with limited sovereignty: the Federal Republic of Germany, generally known as West Germany, and the German Democratic Republic, known as East Germany, while Berlin continued its de jure Four Power status. The Federal Republic of Germany was a founding member of the European Economic Community and the European Union, while the German Democratic Republic was a communist Eastern Bloc state and member of the Warsaw Pact. After the fall of the communist-led government in East Germany, German reunification saw the former East German states join the Federal Republic of Germany on 3 October 1990.
Germany has been described as a great power with a strong economy; it has the largest economy in Europe, the world's third-largest economy by nominal GDP and the fifth-largest by PPP. As a global power in industrial, scientific and technological sectors, it is both the world's third-largest exporter and importer. As a developed country it offers social security, a universal health care system and a tuition-free university education. Germany is a member of the United Nations, European Union, NATO, Council of Europe, G7, G20, and OECD. It has the third-greatest number of UNESCO World Heritage Sites.
Etymology
The English word Germany derives from the Latin Germania, which came into use after Julius Caesar adopted it for the peoples east of the Rhine. The German term Deutschland, originally diutisciu land ('the German lands') is derived from deutsch (cf. Dutch), descended from Old High German diutisc 'of the people' (from diot or diota 'people'), originally used to distinguish the language of the common people from Latin and its Romance descendants. This in turn descends from Proto-Germanic *þiudiskaz 'of the people' (see also the Latinised form Theodiscus), derived from *þeudō, descended from Proto-Indo-European *tewtéh₂- 'people', from which the word Teutons also originates.
History
The pre-human ape Danuvius guggenmosi, present in Germany over 11 million years ago, is theorized to be among the earliest species to walk on two legs. Ancient humans were present in Germany at least 600,000 years ago. The first non-modern human fossil (the Neanderthal) was discovered in the Neander Valley. Similarly dated evidence of modern humans has been found in the Swabian Jura, including 42,000-year-old flutes, which are the oldest musical instruments ever found, the 40,000-year-old Lion Man, and the 35,000-year-old Venus of Hohle Fels. The Nebra sky disk, created during the European Bronze Age, has been attributed to a German site.
Germanic tribes and the Frankish Empire
The Germanic peoples are thought to date from the Nordic Bronze Age, early Iron Age, or the Jastorf culture. From southern Scandinavia and northern Germany, they expanded south, east, and west, coming into contact with the Celtic, Iranian, Baltic, and Slavic tribes. Under Augustus, the Roman Empire began to invade lands inhabited by the Germanic tribes, creating a short-lived Roman province of Germania between the Rhine and Elbe rivers. In 9 AD, three Roman legions were defeated by Arminius in the Battle of the Teutoburg Forest. The outcome of this battle dissuaded the Romans from their ambition of conquering Germania, and is thus considered one of the most important events in European history. By 100 AD, when Tacitus wrote Germania, Germanic tribes had settled along the Rhine and the Danube (the Limes Germanicus), occupying most of modern Germany. However, Baden-Württemberg, southern Bavaria, southern Hesse and the western Rhineland had been incorporated into Roman provinces. Around 260, Germanic peoples broke into Roman-controlled lands. After the invasion of the Huns in 375, and with the decline of Rome from 395, Germanic tribes moved farther southwest: the Franks established the Frankish Kingdom and pushed east to subjugate Saxony and Bavaria, and areas of what is today eastern Germany were inhabited by Western Slavic tribes.
East Francia and the Holy Roman Empire
Charlemagne founded the Carolingian Empire in 800; it was divided in 843. The eastern successor kingdom of East Francia stretched from the Rhine in the west to the Elbe river in the east and from the North Sea to the Alps. Subsequently, the Holy Roman Empire emerged from it. The Ottonian rulers (919–1024) consolidated several major duchies. In 996, Gregory V became the first German Pope, appointed by his cousin Otto III, whom he shortly after crowned Holy Roman Emperor. The Holy Roman Empire absorbed northern Italy and Burgundy under the Salian emperors (1024–1125), although the emperors lost power through the Investiture controversy.Under the Hohenstaufen emperors (1138–1254), German princes encouraged German settlement to the south and east (Ostsiedlung). Members of the Hanseatic League, mostly north German towns, prospered in the expansion of trade. The population declined starting with the Great Famine in 1315, followed by the Black Death of 1348–1350. The Golden Bull issued in 1356 provided the constitutional structure of the Empire and codified the election of the emperor by seven prince-electors.Johannes Gutenberg introduced moveable-type printing to Europe, laying the basis for the democratization of knowledge. In 1517, Martin Luther incited the Protestant Reformation and his translation of the Bible began the standardization of the language; the 1555 Peace of Augsburg tolerated the "Evangelical" faith (Lutheranism), but also decreed that the faith of the prince was to be the faith of his subjects (cuius regio, eius religio). From the Cologne War through the Thirty Years' Wars (1618–1648), religious conflict devastated German lands and significantly reduced the population.The Peace of Westphalia ended religious warfare among the Imperial Estates; their mostly German-speaking rulers were able to choose Catholicism, Lutheranism, or Calvinism as their official religion. The legal system initiated by a series of Imperial Reforms (approximately 1495–1555) provided for considerable local autonomy and a stronger Imperial Diet. The House of Habsburg held the imperial crown from 1438 until the death of Charles VI in 1740. Following the War of the Austrian Succession and the Treaty of Aix-la-Chapelle, Charles VI's daughter Maria Theresa ruled as empress consort when her husband, Francis I, became emperor.From 1740, dualism between the Austrian Habsburg monarchy and the Kingdom of Prussia dominated German history. In 1772, 1793, and 1795, Prussia and Austria, along with the Russian Empire, agreed to the Partitions of Poland. During the period of the French Revolutionary Wars, the Napoleonic era and the subsequent final meeting of the Imperial Diet, most of the Free Imperial Cities were annexed by dynastic territories; the ecclesiastical territories were secularised and annexed. In 1806 the Imperium was dissolved; France, Russia, Prussia, and the Habsburgs (Austria) competed for hegemony in the German states during the Napoleonic Wars.
German Confederation and Empire
Following the fall of Napoleon, the Congress of Vienna founded the German Confederation, a loose league of 39 sovereign states. The appointment of the emperor of Austria as the permanent president reflected the Congress's rejection of Prussia's rising influence. Disagreement within restoration politics partly led to the rise of liberal movements, followed by new measures of repression by Austrian statesman Klemens von Metternich. The Zollverein, a tariff union, furthered economic unity. In light of revolutionary movements in Europe, intellectuals and commoners started the revolutions of 1848 in the German states, raising the German question. King Frederick William IV of Prussia was offered the title of emperor, but with a loss of power; he rejected the crown and the proposed constitution, a temporary setback for the movement.King William I appointed Otto von Bismarck as the Minister President of Prussia in 1862. Bismarck successfully concluded the war with Denmark in 1864; the subsequent decisive Prussian victory in the Austro-Prussian War of 1866 enabled him to create the North German Confederation which excluded Austria. After the defeat of France in the Franco-Prussian War, the German princes proclaimed the founding of the German Empire in 1871. Prussia was the dominant constituent state of the new empire; the King of Prussia ruled as its Kaiser, and Berlin became its capital.In the Gründerzeit period following the unification of Germany, Bismarck's foreign policy as chancellor of Germany secured Germany's position as a great nation by forging alliances and avoiding war. However, under Wilhelm II, Germany took an imperialistic course, leading to friction with neighbouring countries. A dual alliance was created with the multinational realm of Austria-Hungary; the Triple Alliance of 1882 included Italy. Britain, France and Russia also concluded alliances to protect against Habsburg interference with Russian interests in the Balkans or German interference against France. At the Berlin Conference in 1884, Germany claimed several colonies including German East Africa, German South West Africa, Togoland, and Kamerun. Later, Germany further expanded its colonial empire to include holdings in the Pacific and China. The colonial government in South West Africa (present-day Namibia), from 1904 to 1907, carried out the annihilation of the local Herero and Namaqua peoples as punishment for an uprising; this was the 20th century's first genocide.The assassination of Austria's crown prince on 28 June 1914 provided the pretext for Austria-Hungary to attack Serbia and trigger World War I. After four years of warfare, in which approximately two million German soldiers were killed, a general armistice ended the fighting. In the German Revolution (November 1918), Wilhelm II and the ruling princes abdicated their positions, and Germany was declared a federal republic. Germany's new leadership signed the Treaty of Versailles in 1919, accepting defeat by the Allies. Germans perceived the treaty as humiliating, which was seen by historians as influential in the rise of Adolf Hitler. Germany lost around 13% of its European territory and ceded all of its colonial possessions in Africa and the Pacific.
Weimar Republic and Nazi Germany
On 11 August 1919, President Friedrich Ebert signed the democratic Weimar Constitution. In the subsequent struggle for power, communists seized power in Bavaria, but conservative elements elsewhere attempted to overthrow the Republic in the Kapp Putsch. Street fighting in the major industrial centres, the occupation of the Ruhr by Belgian and French troops, and a period of hyperinflation followed. A debt restructuring plan and the creation of a new currency in 1924 ushered in the Golden Twenties, an era of artistic innovation and liberal cultural life.The worldwide Great Depression hit Germany in 1929. Chancellor Heinrich Brüning's government pursued a policy of fiscal austerity and deflation which caused unemployment of nearly 30% by 1932. The Nazi Party led by Adolf Hitler became the largest party in the Reichstag after a special election in 1932 and Hindenburg appointed Hitler as chancellor of Germany on 30 January 1933. After the Reichstag fire, a decree abrogated basic civil rights and the first Nazi concentration camp opened. On 23 March 1933, the Enabling Act gave Hitler unrestricted legislative power, overriding the constitution, and marked the beginning of Nazi Germany. His government established a centralised totalitarian state, withdrew from the League of Nations, and dramatically increased the country's rearmament. A government-sponsored programme for economic renewal focused on public works, the most famous of which was the Autobahn.In 1935, the regime withdrew from the Treaty of Versailles and introduced the Nuremberg Laws which targeted Jews and other minorities. Germany also reacquired control of the Saarland in 1935, remilitarised the Rhineland in 1936, annexed Austria in 1938, annexed the Sudetenland in 1938 with the Munich Agreement, and in violation of the agreement occupied Czechoslovakia in March 1939. Kristallnacht (Night of Broken Glass) saw the burning of synagogues, the destruction of Jewish businesses, and mass arrests of Jewish people.In August 1939, Hitler's government negotiated the Molotov–Ribbentrop Pact that divided Eastern Europe into German and Soviet spheres of influence. On 1 September 1939, Germany invaded Poland, beginning World War II in Europe; Britain and France declared war on Germany on 3 September. In the spring of 1940, Germany conquered Denmark and Norway, the Netherlands, Belgium, Luxembourg, and France, forcing the French government to sign an armistice. The British repelled German air attacks in the Battle of Britain in the same year. In 1941, German troops invaded Yugoslavia, Greece and the Soviet Union. By 1942, Germany and its allies controlled most of continental Europe and North Africa, but following the Soviet victory at the Battle of Stalingrad, the Allied reconquest of North Africa and invasion of Italy in 1943, German forces suffered repeated military defeats. In 1944, the Soviets pushed into Eastern Europe; the Western allies landed in France and entered Germany despite a final German counteroffensive. Following Hitler's suicide during the Battle of Berlin, Germany signed the surrender document on 8 May 1945, ending World War II in Europe and Nazi Germany. Following the end of the war, surviving Nazi officials were tried for war crimes at the Nuremberg trials.In what later became known as the Holocaust, the German government persecuted minorities, including interning them in concentration and death camps across Europe. 
In total 17 million people were systematically murdered, including 6 million Jews, at least 130,000 Romani, 275,000 disabled people, thousands of Jehovah's Witnesses, thousands of homosexuals, and hundreds of thousands of political and religious opponents. Nazi policies in German-occupied countries resulted in the deaths of an estimated 2.7 million Poles, 1.3 million Ukrainians, 1 million Belarusians and 3.5 million Soviet prisoners of war. German military casualties have been estimated at 5.3 million, and around 900,000 German civilians died. Around 12 million ethnic Germans were expelled from across Eastern Europe, and Germany lost roughly one-quarter of its pre-war territory.
East and West Germany
After Nazi Germany surrendered, the Allies de jure abolished the German state and partitioned Berlin and Germany's remaining territory into four occupation zones. The western sectors, controlled by France, the United Kingdom, and the United States, were merged on 23 May 1949 to form the Federal Republic of Germany (German: Bundesrepublik Deutschland); on 7 October 1949, the Soviet Zone became the German Democratic Republic (GDR) (German: Deutsche Demokratische Republik; DDR). They were informally known as West Germany and East Germany. East Germany selected East Berlin as its capital, while West Germany chose Bonn as a provisional capital, to emphasise its stance that the two-state solution was temporary. West Germany was established as a federal parliamentary republic with a "social market economy". Starting in 1948, West Germany became a major recipient of reconstruction aid under the American Marshall Plan. Konrad Adenauer was elected the first federal chancellor of Germany in 1949. The country enjoyed prolonged economic growth (Wirtschaftswunder) beginning in the early 1950s. West Germany joined NATO in 1955 and was a founding member of the European Economic Community. On 1 January 1957, the Saarland joined West Germany. East Germany was an Eastern Bloc state under political and military control by the Soviet Union via occupation forces and the Warsaw Pact. Although East Germany claimed to be a democracy, political power was exercised solely by leading members (Politbüro) of the communist-controlled Socialist Unity Party of Germany, supported by the Stasi, an immense secret service. While East German propaganda was based on the benefits of the GDR's social programmes and the alleged threat of a West German invasion, many of its citizens looked to the West for freedom and prosperity. The Berlin Wall, built in 1961, prevented East German citizens from escaping to West Germany, becoming a symbol of the Cold War. Tensions between East and West Germany were reduced in the late 1960s by Chancellor Willy Brandt's Ostpolitik. In 1989, Hungary decided to dismantle the Iron Curtain and open its border with Austria, causing the emigration of thousands of East Germans to West Germany via Hungary and Austria. This had devastating effects on the GDR, where regular mass demonstrations received increasing support. In an effort to help retain East Germany as a state, the East German authorities eased border restrictions, but this actually led to an acceleration of the Wende reform process, culminating in the Two Plus Four Treaty under which Germany regained full sovereignty. This permitted German reunification on 3 October 1990, with the accession of the five re-established states of the former GDR. The fall of the Wall in 1989 became a symbol of the Fall of Communism, the Dissolution of the Soviet Union, German reunification and Die Wende ("the turning point").
Reunified Germany and the European Union
United Germany was considered the enlarged continuation of West Germany, so it retained its memberships in international organisations. Based on the Berlin/Bonn Act (1994), Berlin again became the capital of Germany, while Bonn obtained the unique status of a Bundesstadt (federal city) retaining some federal ministries. The relocation of the government was completed in 1999, and modernisation of the East German economy was scheduled to last until 2019. Since reunification, Germany has taken a more active role in the European Union, signing the Maastricht Treaty in 1992 and the Lisbon Treaty in 2007, and co-founding the eurozone. Germany sent a peacekeeping force to secure stability in the Balkans and sent German troops to Afghanistan as part of a NATO effort to provide security in that country after the ousting of the Taliban. In the 2005 elections, Angela Merkel became the first female chancellor. In 2009, the German government approved a €50 billion stimulus plan. Among the major German political projects of the early 21st century are the advancement of European integration, the energy transition (Energiewende) for a sustainable energy supply, the debt brake for balanced budgets, measures to increase the fertility rate (pronatalism), and high-tech strategies for the transition of the German economy, summarised as Industry 4.0. During the 2015 European migrant crisis, the country took in over a million refugees and migrants.
Geography
Germany is the seventh-largest country in Europe, bordering Denmark to the north, Poland and the Czech Republic to the east, Austria to the southeast, and Switzerland to the south-southwest. France, Luxembourg and Belgium are situated to the west, with the Netherlands to the northwest. Germany is also bordered by the North Sea and, at the north-northeast, by the Baltic Sea. German territory covers 357,022 km2 (137,847 sq mi), consisting of 348,672 km2 (134,623 sq mi) of land and 8,350 km2 (3,224 sq mi) of water.
Elevation ranges from the mountains of the Alps (highest point: the Zugspitze at 2,963 metres or 9,721 feet) in the south to the shores of the North Sea (Nordsee) in the northwest and the Baltic Sea (Ostsee) in the northeast. The forested uplands of central Germany and the lowlands of northern Germany (lowest point: in the municipality Neuendorf-Sachsenbande, Wilstermarsch at 3.54 metres or 11.6 feet below sea level) are traversed by such major rivers as the Rhine, Danube and Elbe. Significant natural resources include iron ore, coal, potash, timber, lignite, uranium, copper, natural gas, salt, and nickel.
Climate
Most of Germany has a temperate climate, ranging from oceanic in the north and west to continental in the east and southeast. Winters range from cold in the southern Alps to cool, and are generally overcast with limited precipitation, while summers can vary from hot and dry to cool and rainy. The northern regions have prevailing westerly winds that bring in moist air from the North Sea, moderating the temperature and increasing precipitation. Conversely, the southeast regions have more extreme temperatures. Between February 2019 and February 2020, average monthly temperatures in Germany ranged from a low of 3.3 °C (37.9 °F) in January 2020 to a high of 19.8 °C (67.6 °F) in June 2019. Average monthly precipitation ranged from 30 litres per square metre in February and April 2019 to 125 litres per square metre in February 2020. Average monthly hours of sunshine ranged from 45 in November 2019 to 300 in June 2019.
Biodiversity
The territory of Germany can be divided into five terrestrial ecoregions: Atlantic mixed forests, Baltic mixed forests, Central European mixed forests, Western European broadleaf forests, and Alps conifer and mixed forests. As of 2016 51% of Germany's land area is devoted to agriculture, while 30% is forested and 14% is covered by settlements or infrastructure.Plants and animals include those generally common to Central Europe. According to the National Forest Inventory, beeches, oaks, and other deciduous trees constitute just over 40% of the forests; roughly 60% are conifers, particularly spruce and pine. There are many species of ferns, flowers, fungi, and mosses. Wild animals include roe deer, wild boar, mouflon (a subspecies of wild sheep), fox, badger, hare, and small numbers of the Eurasian beaver. The blue cornflower was once a German national symbol.The 16 national parks in Germany include the Jasmund National Park, the Vorpommern Lagoon Area National Park, the Müritz National Park, the Wadden Sea National Parks, the Harz National Park, the Hainich National Park, the Black Forest National Park, the Saxon Switzerland National Park, the Bavarian Forest National Park and the Berchtesgaden National Park. In addition, there are 17 Biosphere Reserves, and 105 nature parks. More than 400 zoos and animal parks operate in Germany. The Berlin Zoo, which opened in 1844, is the oldest in Germany, and claims the most comprehensive collection of species in the world.
Politics
Germany is a federal, parliamentary, representative democratic republic. Federal legislative power is vested in the parliament consisting of the Bundestag (Federal Diet) and Bundesrat (Federal Council), which together form the legislative body. The Bundestag is elected through direct elections using the mixed-member proportional representation system. The members of the Bundesrat represent and are appointed by the governments of the sixteen federated states. The German political system operates under a framework laid out in the 1949 constitution known as the Grundgesetz (Basic Law). Amendments generally require a two-thirds majority of both the Bundestag and the Bundesrat; the fundamental principles of the constitution, as expressed in the articles guaranteeing human dignity, the separation of powers, the federal structure, and the rule of law, are valid in perpetuity.The president, currently Frank-Walter Steinmeier, is the head of state and invested primarily with representative responsibilities and powers. He is elected by the Bundesversammlung (federal convention), an institution consisting of the members of the Bundestag and an equal number of state delegates. The second-highest official in the German order of precedence is the Bundestagspräsident (President of the Bundestag), who is elected by the Bundestag and responsible for overseeing the daily sessions of the body. The third-highest official and the head of government is the chancellor, who is appointed by the Bundespräsident after being elected by the party or coalition with the most seats in the Bundestag. The chancellor, currently Olaf Scholz, is the head of government and exercises executive power through his Cabinet.Since 1949, the party system has been dominated by the Christian Democratic Union and the Social Democratic Party of Germany. So far every chancellor has been a member of one of these parties. However, the smaller liberal Free Democratic Party and the Alliance 90/The Greens have also been junior partners in coalition governments. Since 2007, the democratic socialist party The Left has been a staple in the German Bundestag, though they have never been part of the federal government. In the 2017 German federal election, the right-wing populist Alternative for Germany gained enough votes to attain representation in the parliament for the first time.
Constituent states
Germany is a federation and comprises sixteen constituent states which are collectively referred to as Länder. Each state (Land) has its own constitution, and is largely autonomous in regard to its internal organisation. As of 2017 Germany is divided into 401 districts (Kreise) at a municipal level; these consist of 294 rural districts and 107 urban districts.
Law
Germany has a civil law system based on Roman law with some references to Germanic law. The Bundesverfassungsgericht (Federal Constitutional Court) is the German Supreme Court responsible for constitutional matters, with power of judicial review. Germany's supreme court system is specialised: for civil and criminal cases, the highest court of appeal is the inquisitorial Federal Court of Justice, and for other affairs the courts are the Federal Labour Court, the Federal Social Court, the Federal Fiscal Court and the Federal Administrative Court.Criminal and private laws are codified on the national level in the Strafgesetzbuch and the Bürgerliches Gesetzbuch respectively. The German penal system seeks the rehabilitation of the criminal and the protection of the public. Except for petty crimes, which are tried before a single professional judge, and serious political crimes, all charges are tried before mixed tribunals on which lay judges (Schöffen) sit side by side with professional judges.Germany has a low murder rate with 1.18 murders per 100,000 as of 2016. In 2018, the overall crime rate fell to its lowest since 1992.Same-sex marriage has been legal in Germany since 2017, and LGBT rights are generally protected in the nation.
Foreign relations
Germany has a network of 227 diplomatic missions abroad and maintains relations with more than 190 countries. Germany is a member of NATO, the OECD, the G7, the G20, the World Bank and the IMF. It has played an influential role in the European Union since its inception and has maintained a strong alliance with France and all neighbouring countries since 1990. Germany promotes the creation of a more unified European political, economic and security apparatus. The governments of Germany and the United States are close political allies. Cultural ties and economic interests have crafted a bond between the two countries resulting in Atlanticism. After 1990, Germany and Russia worked together to establish a "strategic partnership" in which energy development became one of the most important factors. As a result of the cooperation, Germany imported most of its natural gas and crude oil from Russia.The development policy of Germany is an independent area of foreign policy. It is formulated by the Federal Ministry for Economic Cooperation and Development and carried out by the implementing organisations. The German government sees development policy as a joint responsibility of the international community. It was the world's second-biggest aid donor in 2019 after the United States.
Military
Germany's military, the Bundeswehr (Federal Defence), is organised into the Heer (Army and special forces KSK), Marine (Navy), Luftwaffe (Air Force), Zentraler Sanitätsdienst der Bundeswehr (Joint Medical Service), Streitkräftebasis (Joint Support Service) and Cyber- und Informationsraum (Cyber and Information Domain Service) branches. In absolute terms, German military expenditure is the eighth-highest in the world. In 2018, military spending was $49.5 billion, about 1.2% of the country's GDP, well below the NATO target of 2%. However, in response to the 2022 Russian invasion of Ukraine, Chancellor Olaf Scholz announced that German military expenditure would be increased past the NATO target of 2%, along with a one-time 2022 infusion of 100 billion euros, representing almost double the 53 billion euro military budget for 2021. As of January 2020, the Bundeswehr has a strength of 184,001 active soldiers and 80,947 civilians. Reservists are available to the armed forces and participate in defence exercises and deployments abroad. Until 2011, military service was compulsory for men at age 18, but this has been officially suspended and replaced with a voluntary service. Since 2001, women may serve in all functions of service without restriction. According to the Stockholm International Peace Research Institute, Germany was the fourth-largest exporter of major arms in the world from 2014 to 2018. In peacetime, the Bundeswehr is commanded by the Minister of Defence. In a state of defence, the Chancellor would become commander-in-chief of the Bundeswehr. The role of the Bundeswehr is described in the Constitution of Germany as defensive only. However, following a ruling of the Federal Constitutional Court in 1994, the term "defence" has been defined to include not only protection of the borders of Germany but also crisis reaction and conflict prevention, or, more broadly, guarding the security of Germany anywhere in the world. As of 2017, the German military has about 3,600 troops stationed in foreign countries as part of international peacekeeping forces, including about 1,200 supporting operations against Daesh, 980 in the NATO-led Resolute Support Mission in Afghanistan, and 800 in Kosovo.
Economy
Germany has a social market economy with a highly skilled labour force, a low level of corruption, and a high level of innovation. It is the world's third-largest exporter and third-largest importer, and has the largest economy in Europe, which is also the world's fourth-largest economy by nominal GDP, and the fifth-largest by PPP. Its GDP per capita measured in purchasing power standards amounts to 121% of the EU27 average. The service sector contributes approximately 69% of the total GDP, industry 31%, and agriculture 1% as of 2017. The unemployment rate published by Eurostat amounts to 3.2% as of January 2020, which is the fourth-lowest in the EU. Germany is part of the European single market, which represents more than 450 million consumers. In 2017, the country accounted for 28% of the eurozone economy according to the International Monetary Fund. Germany introduced the common European currency, the euro, in 2002. Its monetary policy is set by the European Central Bank, which is headquartered in Frankfurt. Being home to the modern car, the automotive industry in Germany is regarded as one of the most competitive and innovative in the world, and is the sixth-largest by production as of 2021. Germany is home to Volkswagen Group, the world's second-largest automotive manufacturer in 2022 by both vehicle production and sales, and is the third-largest exporter of cars as of 2023. The top ten exports of Germany are vehicles, machinery, chemical goods, electronic products, electrical equipment, pharmaceuticals, transport equipment, basic metals, food products, and rubber and plastics. Of the world's 500 largest stock-market-listed companies measured by revenue in 2019, the Fortune Global 500, 29 are headquartered in Germany. Thirty major Germany-based companies are included in the DAX, the German stock market index operated by the Frankfurt Stock Exchange. Well-known international brands include Mercedes-Benz, BMW, Volkswagen, Audi, Siemens, Allianz, Adidas, Porsche, Bosch and Deutsche Telekom. Berlin is a hub for startup companies and has become the leading location for venture-capital-funded firms in the European Union. Germany is recognised for its large share of specialised small and medium enterprises, known as the Mittelstand model. These companies represent 48% of the global market leaders in their segments, labelled hidden champions. Research and development efforts form an integral part of the German economy. In 2018, Germany ranked fourth globally in terms of the number of science and engineering research papers published. Research institutions in Germany include the Max Planck Society, the Helmholtz Association, the Fraunhofer Society, and the Leibniz Association. Germany is the largest contributor to the European Space Agency. Germany was ranked 8th in the Global Innovation Index in 2023.
Infrastructure
With its central position in Europe, Germany is a transport hub for the continent. Its road network is among the densest in Europe. The motorway (Autobahn) is widely known for having no general federally mandated speed limit for some classes of vehicles. The Intercity Express or ICE train network serves major German cities as well as destinations in neighbouring countries with speeds up to 300 km/h (190 mph). The largest German airports are Frankfurt Airport and Munich Airport. The Port of Hamburg is one of the twenty largest container ports in the world. In 2019, Germany was the world's seventh-largest consumer of energy. All nuclear power plants were phased out in 2023. Germany meets about 40% of its power demand from renewable sources, and it has been called an "early leader" in solar and offshore wind. Germany is committed to the Paris Agreement and several other treaties promoting biodiversity, low emission standards, and water management. The country's household recycling rate is among the highest in the world, at around 65%. The country's greenhouse gas emissions per capita were the ninth-highest in the EU in 2018, but these numbers have been trending downward. The German energy transition (Energiewende) is the recognised move to a sustainable economy by means of energy efficiency and renewable energy.
Tourism
Germany is the ninth-most visited country in the world as of 2017, with 37.4 million visits. Domestic and international travel and tourism combined directly contribute over €105.3 billion to German GDP. Including indirect and induced impacts, the industry supports 4.2 million jobs. Germany's most visited and popular landmarks include Cologne Cathedral, the Brandenburg Gate, the Reichstag, the Dresden Frauenkirche, Neuschwanstein Castle, Heidelberg Castle, the Wartburg, and Sanssouci Palace. The Europa-Park near Freiburg is Europe's second-most popular theme park resort.
Demographics
With a population of 80.2 million according to the 2011 German Census, rising to 83.7 million as of 2022, Germany is the most populous country in the European Union, the second-most populous country in Europe after Russia, and the nineteenth-most populous country in the world. Its population density stands at 227 inhabitants per square kilometre (590 inhabitants/sq mi). The fertility rate of 1.57 children born per woman (2022 estimates) is below the replacement rate of 2.1 and is one of the lowest fertility rates in the world. Since the 1970s, Germany's death rate has exceeded its birth rate. However, Germany has witnessed increased birth and migration rates since the beginning of the 2010s. Germany has the third-oldest population in the world, with an average age of 47.4 years. Four sizeable groups of people are referred to as national minorities because their ancestors have lived in their respective regions for centuries: there is a Danish minority in the northernmost state of Schleswig-Holstein; the Sorbs, a Slavic population, are in the Lusatia region of Saxony and Brandenburg; the Roma and Sinti live throughout the country; and the Frisians are concentrated on Schleswig-Holstein's western coast and in the north-western part of Lower Saxony. After the United States, Germany is the second-most popular immigration destination in the world. In 2015, following the 2015 refugee crisis, the Population Division of the United Nations Department of Economic and Social Affairs listed Germany as host to the second-highest number of international migrants worldwide, about 5% or 12 million of all 244 million migrants. Refugee crises have resulted in substantial population increases; for example, following the 2022 Russian invasion of Ukraine, over 1.06 million refugees from Ukraine were recorded in Germany as of April 2023. As of 2019, Germany ranks seventh among EU countries in terms of the percentage of migrants in the country's population, at 13.1%. In 2022 there were 23.8 million people, 28.7 percent of the total population, who had a migration background. Germany has a number of large cities. There are 11 officially recognised metropolitan regions. The country's largest city is Berlin, while its largest urban area is the Ruhr.
Religion
Christianity was introduced to the area of modern Germany by 300 AD and became fully Christianized by the time of Charlemagne in the eighth and ninth century. After the Reformation started by Martin Luther in the early 16th century, many people left the Catholic Church and became Protestant, mainly Lutheran and Calvinist.According to the 2011 census, Christianity was the largest religion in Germany, with 66.8% of respondents identifying as Christian, of which 3.8% were not church members. 31.7% declared themselves as Protestants, including members of the Protestant Church in Germany (which encompasses Lutheran, Reformed, and administrative or confessional unions of both traditions) and the free churches (Evangelische Freikirchen); 31.2% declared themselves as Roman Catholics, and Orthodox believers constituted 1.3%. According to data from 2016, the Catholic Church and the Evangelical Church claimed 28.5% and 27.5%, respectively, of the population. Islam is the second-largest religion in the country.In the 2011 census, 1.9% of respondents (1.52 million people) gave their religion as Islam, but this figure is deemed unreliable because a disproportionate number of adherents of this faith (and other religions, such as Judaism) are likely to have made use of their right not to answer the question. Most of the Muslims are Sunnis and Alevites from Turkey, but there are a small number of Shi'ites, Ahmadiyyas and other denominations. Other religions comprise less than one per cent of Germany's population.A study in 2018 estimated that 38% of the population are not members of any religious organization or denomination, though up to a third may still consider themselves religious. Irreligion in Germany is strongest in the former East Germany, which used to be predominantly Protestant before the enforcement of state atheism, and in major metropolitan areas.
Languages
German is the official and predominant spoken language in Germany. It is one of 24 official and working languages of the European Union, and one of the three procedural languages of the European Commission. German is the most widely spoken first language in the European Union, with around 100 million native speakers. Recognised native minority languages in Germany are Danish, Low German, Low Rhenish, Sorbian, Romani, North Frisian and Saterland Frisian; they are officially protected by the European Charter for Regional or Minority Languages. The most used immigrant languages are Turkish, Arabic, Kurdish, Polish, Greek, Serbo-Croatian, Bulgarian and other Balkan languages, as well as Russian. Germans are typically multilingual: 67% of German citizens claim to be able to communicate in at least one foreign language and 27% in at least two.
Education
Responsibility for educational supervision in Germany is primarily organised within the individual states. Optional kindergarten education is provided for all children between three and six years old, after which school attendance is compulsory for at least nine years depending on the state. Primary education usually lasts for four to six years. Secondary schooling is divided into tracks based on whether students pursue academic or vocational education. A system of apprenticeship called Duale Ausbildung leads to a skilled qualification which is almost comparable to an academic degree. It allows students in vocational training to learn in a company as well as in a state-run trade school. This model is well regarded and reproduced all around the world.Most of the German universities are public institutions, and students traditionally study without fee payment. The general requirement for attending university is the Abitur. According to an OECD report in 2014, Germany is the world's third leading destination for international study. The established universities in Germany include some of the oldest in the world, with Heidelberg University (established in 1386), Leipzig University (established in 1409) and the University of Rostock (established in 1419) being the oldest. The Humboldt University of Berlin, founded in 1810 by the liberal educational reformer Wilhelm von Humboldt, became the academic model for many Western universities. In the contemporary era Germany has developed eleven Universities of Excellence.
Health
Germany's system of hospitals, called Krankenhäuser, dates from medieval times, and today, Germany has the world's oldest universal health care system, dating from Bismarck's social legislation of the 1880s. Since the 1880s, reforms and provisions have ensured a balanced health care system. The population is covered by a health insurance plan provided by statute, with criteria allowing some groups to opt for a private health insurance contract. According to the World Health Organization (WHO), Germany's health care system was 77% government-funded and 23% privately funded as of 2013. In 2014, Germany spent 11.3% of its GDP on health care. Germany ranked 21st in the world in 2019 in life expectancy with 78.7 years for men and 84.8 years for women according to the WHO, and it had a very low infant mortality rate (4 per 1,000 live births). In 2019, the principal cause of death was cardiovascular disease, at 37%. Obesity in Germany has been increasingly cited as a major health issue. A 2014 study showed that 52 per cent of the adult German population was overweight or obese.
Culture
Culture in German states has been shaped by major intellectual and popular currents in Europe, both religious and secular. Historically, Germany has been called Das Land der Dichter und Denker ('the land of poets and thinkers'), because of the major role its scientists, writers and philosophers have played in the development of Western thought. A global opinion poll for the BBC revealed that Germany was recognised for having the most positive influence in the world in 2013 and 2014. Germany is well known for such folk festival traditions as the Oktoberfest and Christmas customs, which include Advent wreaths, Christmas pageants, Christmas trees, Stollen cakes, and other practices. As of 2016, UNESCO had inscribed 41 properties in Germany on the World Heritage List. There are a number of public holidays in Germany determined by each state; 3 October has been a national day of Germany since 1990, celebrated as the Tag der Deutschen Einheit (German Unity Day).
Music
German classical music includes works by some of the world's most well-known composers. Dieterich Buxtehude, Johann Sebastian Bach and Georg Friedrich Händel were influential composers of the Baroque period. Ludwig van Beethoven was a crucial figure in the transition between the Classical and Romantic eras. Carl Maria von Weber, Felix Mendelssohn, Robert Schumann and Johannes Brahms were significant Romantic composers. Richard Wagner was known for his operas. Richard Strauss was a leading composer of the late Romantic and early modern eras. Karlheinz Stockhausen and Wolfgang Rihm are important composers of the 20th and early 21st centuries.As of 2013, Germany was the second-largest music market in Europe, and fourth-largest in the world. German popular music of the 20th and 21st centuries includes the movements of Neue Deutsche Welle, pop, Ostrock, heavy metal/rock, punk, pop rock, indie, Volksmusik (folk music), schlager pop and German hip hop. German electronic music gained global influence, with Kraftwerk and Tangerine Dream pioneering in this genre. DJs and artists of the techno and house music scenes of Germany have become well known (e.g. Paul van Dyk, Felix Jaehn, Paul Kalkbrenner, Robin Schulz and Scooter).
Art, design and architecture
German painters have influenced Western art. Albrecht Dürer, Hans Holbein the Younger, Matthias Grünewald and Lucas Cranach the Elder were important German artists of the Renaissance, Johann Baptist Zimmermann of the Baroque, Caspar David Friedrich and Carl Spitzweg of Romanticism, Max Liebermann of Impressionism and Max Ernst of Surrealism. Several German art groups formed in the 20th century; Die Brücke (The Bridge) and Der Blaue Reiter (The Blue Rider) influenced the development of expressionism in Munich and Berlin. The New Objectivity arose in response to expressionism during the Weimar Republic. After World War II, broad trends in German art include neo-expressionism and the New Leipzig School.German designers became early leaders of modern product design. The Berlin Fashion Week and the fashion trade fair Bread & Butter are held twice a year.Architectural contributions from Germany include the Carolingian and Ottonian styles, which were precursors of Romanesque. Brick Gothic is a distinctive medieval style that evolved in Germany. Also in Renaissance and Baroque art, regional and typically German elements evolved (e.g. Weser Renaissance). Vernacular architecture in Germany is often identified by its timber framing (Fachwerk) traditions and varies across regions, and among carpentry styles. When industrialisation spread across Europe, classicism and a distinctive style of historicism developed in Germany, sometimes referred to as Gründerzeit style. Expressionist architecture developed in the 1910s in Germany and influenced Art Deco and other modern styles. Germany was particularly important in the early modernist movement: it is the home of Werkbund initiated by Hermann Muthesius (New Objectivity), and of the Bauhaus movement founded by Walter Gropius. Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century; he conceived of the glass façade skyscraper. Renowned contemporary architects and offices include Pritzker Prize winners Gottfried Böhm and Frei Otto.
Literature and philosophy
German literature can be traced back to the Middle Ages and the works of writers such as Walther von der Vogelweide and Wolfram von Eschenbach. Well-known German authors include Johann Wolfgang von Goethe, Friedrich Schiller, Gotthold Ephraim Lessing and Theodor Fontane. The collections of folk tales published by the Brothers Grimm popularised German folklore on an international level. The Grimms also gathered and codified regional variants of the German language, grounding their work in historical principles; their Deutsches Wörterbuch, or German Dictionary, sometimes called the Grimm dictionary, was begun in 1838 and the first volumes published in 1854.Influential authors of the 20th century include Gerhart Hauptmann, Thomas Mann, Hermann Hesse, Heinrich Böll, and Günter Grass. The German book market is the third-largest in the world, after the United States and China. The Frankfurt Book Fair is the most important in the world for international deals and trading, with a tradition spanning over 500 years. The Leipzig Book Fair also retains a major position in Europe.German philosophy is historically significant: Gottfried Leibniz's contributions to rationalism; the enlightenment philosophy by Immanuel Kant; the establishment of classical German idealism by Johann Gottlieb Fichte, Georg Wilhelm Friedrich Hegel and Friedrich Wilhelm Joseph Schelling; Arthur Schopenhauer's composition of metaphysical pessimism; the formulation of communist theory by Karl Marx and Friedrich Engels; Friedrich Nietzsche's development of perspectivism; Gottlob Frege's contributions to the dawn of analytic philosophy; Martin Heidegger's works on Being; Oswald Spengler's historical philosophy; and the development of the Frankfurt School have all been very influential.
Media
The largest internationally operating media companies in Germany are the Bertelsmann enterprise, Axel Springer SE and ProSiebenSat.1 Media. Germany's television market is the largest in Europe, with some 38 million TV households. Around 90% of German households have cable or satellite TV, with a variety of free-to-view public and commercial channels. There are more than 300 public and private radio stations in Germany; Germany's national radio network is the Deutschlandradio and the public Deutsche Welle is the main German radio and television broadcaster in foreign languages. Germany's print market of newspapers and magazines is the largest in Europe. The papers with the highest circulation are Bild, Süddeutsche Zeitung, Frankfurter Allgemeine Zeitung and Die Welt. The largest magazines include ADAC Motorwelt and Der Spiegel. Germany has a large video gaming market, with over 34 million players nationwide. The Gamescom is the world's largest gaming convention.German cinema has made major technical and artistic contributions to film. The first works of the Skladanowsky Brothers were shown to an audience in 1895. The renowned Babelsberg Studio in Potsdam was established in 1912, thus being the first large-scale film studio in the world. Early German cinema was particularly influential with German expressionists such as Robert Wiene and Friedrich Wilhelm Murnau. Director Fritz Lang's Metropolis (1927) is referred to as the first major science-fiction film. After 1945, many of the films of the immediate post-war period can be characterised as Trümmerfilm (rubble film). East German film was dominated by state-owned film studio DEFA, while the dominant genre in West Germany was the Heimatfilm ("homeland film"). During the 1970s and 1980s, New German Cinema directors such as Volker Schlöndorff, Werner Herzog, Wim Wenders, and Rainer Werner Fassbinder brought West German auteur cinema to critical acclaim.
The Academy Award for Best Foreign Language Film ("Oscar") went to the German production The Tin Drum (Die Blechtrommel) in 1979, to Nowhere in Africa (Nirgendwo in Afrika) in 2002, and to The Lives of Others (Das Leben der Anderen) in 2007. Various Germans won an Oscar for their performances in other films. The annual European Film Awards ceremony is held every other year in Berlin, home of the European Film Academy. The Berlin International Film Festival, known as "Berlinale", awarding the "Golden Bear" and held annually since 1951, is one of the world's leading film festivals. The "Lolas" are annually awarded in Berlin, at the German Film Awards.
Cuisine
German cuisine varies from region to region and often neighbouring regions share some culinary similarities, including with the southern regions of Bavaria and Swabia, Switzerland, and Austria. International varieties such as pizza, sushi, Chinese food, Greek food, Indian cuisine, and doner kebab are popular.
Bread is a significant part of German cuisine and German bakeries produce about 600 main types of bread and 1,200 types of pastries and rolls (Brötchen). German cheeses account for about 22% of all cheese produced in Europe. In 2012, over 99% of all meat produced in Germany was either pork, chicken or beef. Germans produce their ubiquitous sausages in almost 1,500 varieties, including Bratwursts and Weisswursts. The national alcoholic drink is beer. German beer consumption per person stood at 110 litres (24 imp gal; 29 US gal) in 2013 and remains among the highest in the world. German beer purity regulations date back to the 16th century. Wine has become popular in many parts of the country, especially close to German wine regions. In 2019, Germany was the ninth-largest wine producer in the world. The 2018 Michelin Guide awarded eleven restaurants in Germany three stars, giving the country a cumulative total of 300 stars.
Sports
Football is the most popular sport in Germany. With more than 7 million official members, the German Football Association (Deutscher Fußball-Bund) is the largest single-sport organisation worldwide, and the German top league, the Bundesliga, attracts the second-highest average attendance of all professional sports leagues in the world. The German men's national football team won the FIFA World Cup in 1954, 1974, 1990, and 2014, the UEFA European Championship in 1972, 1980 and 1996, and the FIFA Confederations Cup in 2017.Germany is one of the leading motor sports countries in the world. Constructors like BMW and Mercedes are prominent manufacturers in motor sport. Porsche has won the 24 Hours of Le Mans race 19 times, and Audi 13 times (as of 2017). The driver Michael Schumacher has set many motor sport records during his career, having won seven Formula One World Drivers' Championships. Sebastian Vettel is also among the most successful Formula One drivers of all time.German athletes historically have been successful contenders in the Olympic Games, ranking third in an all-time Olympic Games medal count when combining East and West German medals prior to German reunification. In 1936 Berlin hosted the Summer Games and the Winter Games in Garmisch-Partenkirchen. Munich hosted the Summer Games of 1972.
See also
Outline of Germany
Notes
References
External links
Official site of the Federal Government
Official tourism site
Germany from BBC News
Germany. The World Factbook. Central Intelligence Agency.
Germany from the OECD
Germany at the EU
Geographic data related to Germany at OpenStreetMap |
coal-fired power station | A coal-fired power station or coal power plant is a thermal power station which burns coal to generate electricity. Worldwide there are over 2,400 coal-fired power stations, totaling over 2,000 gigawatts of capacity. They generate about a third of the world's electricity, but cause many illnesses and more early deaths than any other source of electricity, mainly from air pollution. A coal-fired power station is a type of fossil fuel power station. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The furnace heat converts boiler water to steam, which is then used to spin turbines that turn generators. Thus chemical energy stored in coal is converted successively into thermal energy, mechanical energy and, finally, electrical energy.
Coal-fired power stations emit over 10 billion tonnes of carbon dioxide each year, about one fifth of world greenhouse gas emissions, making them the single largest cause of climate change. More than half of all the coal-fired electricity in the world is generated in China. In 2020 the total number of plants began to fall, as plants are being retired in Europe and America while still being built in Asia, almost all in China. Some remain profitable because the costs to other people of the health and environmental impact of the coal industry are not priced into the cost of generation, but newer plants risk becoming stranded assets. The UN Secretary General has said that OECD countries should stop generating electricity from coal by 2030, and the rest of the world by 2040. Vietnam is among the few coal-dependent fast-developing countries that have fully pledged to phase out unabated coal power by the 2040s or as soon as possible thereafter.
History
The first coal-fired power stations were built in the late 19th century and used reciprocating engines to generate direct current. Steam turbines allowed much larger plants to be built in the early 20th century and alternating current was used to serve wider areas.
Transport and delivery of coal
Coal is delivered by highway truck, rail, barge, collier ship or coal slurry pipeline. Generating stations are sometimes built next to a mine, especially one mining a coal, such as lignite, which is not valuable enough to transport long distances; such stations may receive coal by conveyor belt or massive diesel-electric-drive trucks. A large coal train called a "unit train" may be 2 km long, containing 130–140 cars with around 100 tonnes of coal in each one, for a total load of over 10,000 tonnes. A large plant under full load requires at least one coal delivery this size every day. Plants may get as many as three to five trains a day, especially in "peak season" during the hottest summer or coldest winter months (depending on local climate) when power consumption is high.
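As a rough cross-check of these figures, the Python sketch below estimates how much coal a large plant burns per day and how many unit-train deliveries that implies. The plant size, net efficiency and coal energy content are illustrative assumptions rather than figures from the text.

# Rough estimate of daily coal deliveries for a large plant (illustrative assumptions).
plant_output_mw = 2000           # assumed net electrical output, MW
efficiency = 0.38                # assumed net thermal efficiency
coal_energy_mj_per_kg = 24       # assumed energy content of hard coal, MJ/kg
train_load_tonnes = 10_000       # unit-train load quoted above

heat_input_mw = plant_output_mw / efficiency              # thermal power needed, MW (MJ/s)
coal_kg_per_s = heat_input_mw / coal_energy_mj_per_kg     # coal burn rate, kg/s
coal_tonnes_per_day = coal_kg_per_s * 86400 / 1000
trains_per_day = coal_tonnes_per_day / train_load_tonnes
print(f"coal burned: {coal_tonnes_per_day:,.0f} t/day, about {trains_per_day:.1f} unit trains per day")
# With these assumptions the result is roughly 19,000 t/day, i.e. about two
# 10,000-tonne trains, consistent with "at least one coal delivery this size every day".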
Modern unloaders use rotary dump devices, which eliminate problems with coal freezing in bottom dump cars. The unloader includes a train positioner arm that pulls the entire train to position each car over a coal hopper. The dumper clamps an individual car against a platform that swivels the car upside down to dump the coal. Swiveling couplers enable the entire operation to occur while the cars are still coupled together. Unloading a unit train takes about three hours.
Shorter trains may use railcars with an "air-dump", which relies on air pressure from the engine plus a "hot shoe" on each car. This "hot shoe" when it comes into contact with a "hot rail" at the unloading trestle, shoots an electric charge through the air dump apparatus and causes the doors on the bottom of the car to open, dumping the coal through the opening in the trestle. Unloading one of these trains takes anywhere from an hour to an hour and a half. Older unloaders may still use manually operated bottom-dump rail cars and a "shaker" attached to dump the coal.
A collier (cargo ship carrying coal) may hold 41,000 tonnes (40,000 long tons) of coal and takes several days to unload. Some colliers carry their own conveying equipment to unload their own bunkers; others depend on equipment at the plant. For transporting coal in calmer waters, such as rivers and lakes, flat-bottomed barges are often used. Barges are usually unpowered and must be moved by tugboats or towboats.
For start up or auxiliary purposes, the plant may use fuel oil as well. Fuel oil can be delivered to plants by pipeline, tanker, tank car or truck. Oil is stored in vertical cylindrical steel tanks with capacities as high as 14,000 cubic metres (90,000 bbl). The heavier no. 5 "bunker" and no. 6 fuels are typically steam-heated before pumping in cold climates.
Operation
As a type of thermal power station, a coal-fired power station converts chemical energy stored in coal successively into thermal energy, mechanical energy and, finally, electrical energy. The coal is usually pulverized and then burned in a pulverized coal-fired boiler. The heat from the burning pulverized coal converts boiler water to steam, which is then used to spin turbines that turn generators. Compared to a thermal power station burning other fuel types, coal specific fuel processing and ash disposal is required.
For units over about 200 MW capacity, redundancy of key components is provided by installing duplicates of the forced and induced draft fans, air preheaters, and fly ash collectors. On some units of about 60 MW, two boilers per unit may instead be provided. The hundred largest coal power stations range in size from 3,000 MW to 6,700 MW.
Fuel processing
Coal is prepared for use by crushing the rough coal to pieces less than 5 cm in size. The coal is then transported from the storage yard to in-plant storage silos by conveyor belts at rates up to 4,000 tonnes per hour.
In plants that burn pulverized coal, silos feed coal to pulverizers (coal mills) that take the larger 5 cm pieces, grind them to the consistency of talcum powder, sort them, and mix them with primary combustion air, which transports the coal to the boiler furnace and preheats the coal in order to drive off excess moisture content. A 500 MWe plant may have six such pulverizers, five of which can supply coal to the furnace at 250 tonnes per hour under full load.
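The quoted mill throughput can be sanity-checked against the fuel a 500 MWe unit needs at full load. In the Python sketch below, the net efficiency and coal energy content are assumed typical values, not figures from the text.

# Cross-check of mill throughput for a 500 MWe unit (illustrative assumptions).
unit_output_mw = 500            # net electrical output from the text
efficiency = 0.38               # assumed net thermal efficiency
coal_energy_mj_per_kg = 19      # assumed lower-grade coal, MJ/kg

heat_input_mw = unit_output_mw / efficiency                        # MW thermal (MJ/s)
coal_t_per_h = heat_input_mw / coal_energy_mj_per_kg * 3600 / 1000
print(f"full-load coal demand is roughly {coal_t_per_h:.0f} t/h")
# About 250 t/h, matching the combined pulverizer capacity quoted above.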
In plants that do not burn pulverized coal, the larger 5 cm pieces may be directly fed into the silos which then feed either mechanical distributors that drop the coal on a traveling grate or the cyclone burners, a specific kind of combustor that can efficiently burn larger pieces of fuel.
Boiler operation
Plants designed for lignite (brown coal) are used in locations as varied as Germany, Victoria, Australia, and North Dakota. Lignite is a much younger form of coal than black coal. It has a lower energy density than black coal and requires a much larger furnace for equivalent heat output. Such coals may contain up to 70% water and ash, yielding lower furnace temperatures and requiring larger induced-draft fans. The firing systems also differ from black coal and typically draw hot gas from the furnace-exit level and mix it with the incoming coal in fan-type mills that inject the pulverized coal and hot gas mixture into the boiler.
Ash disposal
The ash is often stored in ash ponds. Although the use of ash ponds in combination with air pollution controls (such as wet scrubbers) decreases the amount of airborne pollutants, the structures pose serious health risks for the surrounding environment. Power utility companies have often built the ponds without liners, especially in the United States, and therefore chemicals in the ash can leach into groundwater and surface waters. Since the 1990s, power utilities in the U.S. have designed many of their new plants with dry ash handling systems. The dry ash is disposed of in landfills, which typically include liners and groundwater monitoring systems. Dry ash may also be recycled into products such as concrete, structural fills for road construction and grout.
Fly ash collection
Fly ash is captured and removed from the flue gas by electrostatic precipitators or fabric bag filters (or sometimes both) located at the outlet of the furnace and before the induced draft fan. The fly ash is periodically removed from the collection hoppers below the precipitators or bag filters. Generally, the fly ash is pneumatically transported to storage silos and stored on site in ash ponds, or transported by trucks or railroad cars to landfills.
Bottom ash collection and disposal
At the bottom of the furnace, there is a hopper for collection of bottom ash. This hopper is kept filled with water to quench the ash and clinkers falling down from the furnace. Arrangements are included to crush the clinkers and convey the crushed clinkers and bottom ash to on-site ash ponds, or off-site to landfills. Ash extractors are used to discharge ash from municipal solid waste–fired boilers.
Flexibility
A well-designed energy policy, energy law and electricity market are critical for flexibility. Although the flexibility of some coal-fired power stations could technically be improved, they are less able to provide dispatchable generation than most gas-fired power plants. The most important flexibility feature is a low minimum load; however, some flexibility improvements may be more expensive than renewable energy with batteries.
Coal power generation
As of 2020 two-thirds of coal burned is used to generate electricity. In 2020 coal was the largest source of electricity at 34%. Over half of coal generation in 2020 was in China. About 60% of electricity in China, India and Indonesia is from coal. In 2020, worldwide, 2,059 GW of coal power was operational, 50 GW was commissioned and 25 GW started construction (most of these in China), while 38 GW was retired (mostly in the USA and EU).
Efficiency
The four main types of coal-fired power station, in increasing order of efficiency, are: subcritical, supercritical, ultra-supercritical and cogeneration (also called combined heat and power or CHP). Subcritical is the least efficient type; however, recent innovations have allowed retrofits of older subcritical plants to meet or even exceed the efficiency of supercritical plants.
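To illustrate what those efficiency differences mean in fuel terms, the Python sketch below converts assumed typical net efficiencies into coal burned per MWh; the efficiency values and coal energy content are illustrative assumptions, not figures from the text.

# Illustrative effect of plant efficiency on coal use per MWh (assumed typical values).
coal_energy_mj_per_kg = 24                    # assumed hard coal energy content
efficiencies = {"subcritical": 0.36,          # assumed typical net efficiencies
                "supercritical": 0.42,
                "ultra-supercritical": 0.45}
for name, eta in efficiencies.items():
    fuel_mj_per_mwh = 3600 / eta              # 1 MWh of electricity = 3600 MJ
    coal_kg_per_mwh = fuel_mj_per_mwh / coal_energy_mj_per_kg
    print(f"{name}: about {coal_kg_per_mwh:.0f} kg of coal per MWh")
# Higher steam temperatures and pressures raise efficiency, so less coal (and CO2)
# is needed for each MWh generated.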
Integrated gasification combined cycle design
Integrated gasification combined cycle (IGCC) is a coal power generation technology that uses a high pressure gasifier to turn coal (or other carbon based fuels) into pressurized gas—synthesis gas (syngas). Converting the coal to gas enables the use of a combined cycle generator, typically achieving high efficiency. The IGCC process can also enable removal of some pollutants from the syngas prior to the power generation cycle. However, the technology is costly compared with conventional coal-fired power stations.
Carbon dioxide emissions
As coal is mainly carbon, coal-fired power stations have a high carbon intensity. On average, coal power stations emit far more greenhouse gas per unit of electricity generated than other energy sources (see also life-cycle greenhouse-gas emissions of energy sources). In 2018 coal burnt to generate electricity emitted over 10 Gt CO2 of the 34 Gt total from fuel combustion (the overall total of greenhouse gas emissions for 2018 was 55 Gt CO2e).
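A rough worked example shows where a carbon intensity of this order comes from. In the Python sketch below, the emission factor and net efficiency are assumed typical values for hard coal plants, not figures from the text.

# Rough carbon-intensity estimate for a coal-fired unit (illustrative assumptions).
emission_factor_kg_per_gj = 95     # assumed emission factor for hard coal, kg CO2 per GJ of fuel
efficiency = 0.38                  # assumed net thermal efficiency

fuel_gj_per_mwh = 3.6 / efficiency                    # 1 MWh of electricity = 3.6 GJ
co2_kg_per_mwh = emission_factor_kg_per_gj * fuel_gj_per_mwh
print(f"about {co2_kg_per_mwh:.0f} kg CO2 per MWh")   # roughly 900 kg CO2/MWh
# At around 0.9 t CO2/MWh, some 10,000 TWh of coal generation gives emissions of
# the order of 10 Gt CO2 per year, consistent with the totals quoted above.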
Mitigation
Phase out
From 2015 to 2020, although coal generation hardly fell in absolute terms, some of its market share was taken by wind and solar. In 2020 only China increased coal power generation, and globally it fell by 4%. The UN Secretary General has said that OECD countries should stop generating electricity from coal by 2030 and the rest of the world by 2040, otherwise limiting global warming to 1.5 °C, a target of the Paris Agreement, would be extremely difficult. Phasing out in Asia can be a financial challenge as plants there are relatively young: in China the co-benefits of closing a plant vary greatly depending on its location.
Ammonia co-firing
Ammonia has a high hydrogen density and is easy to handle. It can be stored and used as a carbon-free fuel in thermal power generation, and co-firing it with coal can help significantly reduce CO₂ emissions.
In Japan, the first major four-year test project was started in June 2021 to develop technology for co-firing a significant amount of ammonia at a large-scale commercial coal-fired plant. However, low-carbon hydrogen and ammonia are also in demand for sustainable shipping, which, unlike electricity generation, has few other clean options.
Conversion
Some power stations are being converted to burn gas, biomass or waste, and conversion to thermal storage will be trialled in 2023.
Carbon capture
Retrofitting some existing coal-fired power stations with carbon capture and storage was being considered in China in 2020, but this is very expensive, reduces the energy output and for some plants is not technically feasible.
Pollution
In some countries pollution is somewhat controlled by best available techniques, for example in the EU through its Industrial Emissions Directive. In the United States, coal-fired plants are governed at the national level by several air pollution regulations, including the Mercury and Air Toxics Standards (MATS) regulation, by effluent guidelines for water pollution, and by solid waste regulations under the Resource Conservation and Recovery Act (RCRA). Coal-fired power stations continue to pollute in lightly regulated countries and regions, such as the Western Balkans, India, Russia and South Africa, causing over a hundred thousand early deaths each year.
Local air pollution
Damage to health from particulates, sulphur dioxide and nitrogen oxides occurs mainly in Asia and is often due to burning low-quality coal, such as lignite, in plants lacking modern flue gas treatment. Early deaths due to air pollution have been estimated at 200 per GW-year; however, they may be higher around power plants where scrubbers are not used, or lower if plants are far from cities.
Water pollution
Pollutants such as heavy metals leaching into groundwater from unlined coal ash storage ponds or landfills pollute water, possibly for decades or centuries. Pollutant discharges from ash ponds to rivers (or other surface water bodies) typically include arsenic, lead, mercury, selenium, chromium, and cadmium. Mercury emissions from coal-fired power plants can fall back onto the land and water in rain, and then be converted into methylmercury by bacteria. Through biomagnification, this mercury can then reach dangerously high levels in fish. More than half of atmospheric mercury comes from coal-fired power plants. Coal-fired power plants also emit sulfur dioxide and nitrogen oxides. These emissions lead to acid rain, which can restructure food webs and lead to the collapse of fish and invertebrate populations.
Mitigation of local pollution
As of 2018 local pollution in China, which has by far the most coal-fired power stations, is forecast to be reduced further in the 2020s and 2030s, especially if small and low efficiency plants are retired early.
Economics
Subsidies
Coal power plants tend to serve as base load technology, as they have high availability factors, and are relatively difficult and expensive to ramp up and down. As such, they perform poorly in real-time energy markets, where they are unable to respond to changes in the locational marginal price. In the United States, this has been especially true in light of the advent of cheap natural gas, which can serve as a fuel in dispatchable power plants that substitute the role of baseload on the grid. In 2020 the coal industry was subsidized US$18 billion.
Finance
Coal financing is the financial support provided for coal-related projects, encompassing coal mining and coal-fired power stations. Its role in shaping the global energy landscape and its environmental and climate impacts have made it a subject of concern. The misalignment of coal financing with international climate objectives, particularly the Paris Agreement, has garnered attention. The Paris Agreement aims to restrict global warming to well below 2 degrees Celsius and ideally limit it to 1.5 degrees Celsius. Achieving these goals necessitates a substantial reduction in coal-related activities. Studies, including finance-based accounting of coal emissions, have revealed a misalignment of coal financing with climate objectives. Major nations, such as China, Japan, and the U.S., have extended financial support to overseas coal power infrastructure. The largest backers are Chinese banks under the Belt and Road Initiative (BRI). This support has led to significant long-term climate and financial risks and harms the objectives of reducing CO2 emissions set by the Paris Agreement, of which China, the United States and Japan are signatories. A substantial portion of the associated CO2 emissions is anticipated to occur after 2019. Coal financing poses challenges to the global decarbonization of the power generation sector. As renewable energy technologies become cost-competitive, the economic viability of coal projects diminishes, making past fossil fuel investments less attractive. To address these concerns and align with climate goals, there is a growing call for stricter policies regarding overseas coal financing. Countries, including Japan and the U.S., have faced criticism for permitting the financing of certain coal projects. Strengthening these policies, potentially by banning public financing of coal projects entirely, would enhance their climate efforts and credibility. In addition, enhanced transparency in disclosing financing details is crucial for evaluating their environmental impacts.
Capacity factors
In India capacity factors are below 60%. In 2020 coal-fired power stations in the United States had an overall capacity factor of 40%; that is, they operated at a little less than half of their cumulative nameplate capacity.
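Capacity factor is simply the energy actually generated divided by what the plant would produce running flat out for the whole period. The Python sketch below uses hypothetical numbers chosen to reproduce the 40% figure quoted for the 2020 US fleet.

# Capacity factor = actual generation / (nameplate capacity x hours in the period).
def capacity_factor(generation_mwh, nameplate_mw, hours=8760):
    return generation_mwh / (nameplate_mw * hours)

# Hypothetical 1,000 MW plant generating 3.5 TWh (3,500,000 MWh) in a year:
print(f"{capacity_factor(3_500_000, 1000):.0%}")   # prints 40%, like the 2020 US average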
Stranded assets
If global warming is limited to well below 2 °C as specified in the Paris Agreement, coal plant stranded assets of over US$500 billion are forecast by 2050, mostly in China. In 2020 think tank Carbon Tracker estimated that 39% of coal-fired plants were already more expensive than new renewables and storage and that 73% would be by 2025. As of 2020 about half of China's coal power companies are losing money and old and small power plants "have no hope of making profits". As of 2021 India is keeping potential stranded assets operating by subsidizing them.
Politics
In May 2021, the G7 committed to end support for coal-fired power stations within the year. The energy policy of China regarding coal, and coal in China generally, are the most important factors for the future of coal-fired power stations, because the country has so many. According to one analysis, local officials overinvested in coal-fired power in the mid-2010s because the central government guaranteed operating hours and set a high wholesale electricity price. In democracies, coal power investment follows an environmental Kuznets curve. The energy policy of India regarding coal is an issue in the politics of India.
Protests
In the 21st century people have often protested against opencast mining, for example at Hambach Forest, Akbelen Forest and Ffos-y-fran; and at sites of proposed new plants, such as in Kenya and China.
See also
Powering Past Coal Alliance
Global Energy Monitor
References
External links
Coal fired power plant Energy Education by the University of Calgary
How a coal plant works video by the Tennessee Valley Authority
How a coal plant works video by Ontario Power Generation
Electricity from coal by the World Coal Association
World's coal power plants mapped by Carbon Brief
End Coal Archived 1 May 2022 at the Wayback Machine by various environmental, social justice and health advocates
Coal-fired power by the International Energy Agency
Economics of coal by Carbon Tracker
Centre for Research on Energy and Clean Air |
cadent gas | Cadent Gas is a British regional gas distribution company that owns, operates and maintains the largest natural gas distribution network in the United Kingdom, transporting gas to 11 million homes and businesses across North West England, the West Midlands, the East Midlands, the East of England and North London. Cadent Gas Limited represents four of the eight gas distribution networks in the United Kingdom. Following production and importation, all gas in the UK passes through National Grid's national transmission system before entering the distribution networks. The distribution network providers, one of which is Cadent Gas Limited, are responsible for the safe and efficient transportation of the gas to the end consumer, on behalf of the chosen supplier.
The company does not produce or own the gas that passes through its pipeline networks, but 50% of UK gas customers are served by its pipeline system.
The company also manages the national gas emergency service freephone line on behalf of the gas industry in the UK, taking calls and giving safety advice. In 2017/18, 1.952 million gas emergency calls were answered. The company invests in raising awareness of the dangers of carbon monoxide poisoning through community and school initiatives, as well as improving services to protect and support customers in vulnerable situations.
In 2017, the company launched a two-year fundraising partnership with Alzheimer's Society, committing to create 1,000 Dementia Friends and to raise £100,000 company-wide for the charity. Cadent Gas Limited also sponsors the 'EmployAbility - Let's Work Together' scheme, which changes young disabled people's lives for the better. It is founded on relationships with local schools, Dorothy Goodman in Leicestershire and Oakwood, Woodlands and Exhall Grange in Warwickshire, and is an employee-led supported internship scheme for young people aged 17 to 19. Since the scheme began in 2014, an average of 71% of interns have gained paid employment either with the company or with other local employers, compared to the national average of 6%. The company was recently awarded 'Most Supportive Employer' by the National Autistic Society. In 2017/18 Cadent Gas Limited replaced and improved 1,625 kilometres (1,010 mi) of mains pipe. In 2018, Cadent Gas Limited was named the top UK company for apprentices to work for and was among the top 20 companies for graduates.
History
1986: Transfer of assets of British Gas Corporation to British Gas plc (integrated gas company for UK), with trading of shares in British Gas plc commencing in December
1997: Demerger of Centrica from British Gas
2000: Demerger of Lattice from British Gas
2002: Merger of National Grid and Lattice Transco.
2005: Sale of four gas distribution networks, and adoption of National Grid as the single name for principal businesses
2016: Creation of National Grid Gas Distribution Ltd as part of National Grid
2017: Sale of a majority stake of National Grid Gas Distribution in March, with operations beginning under the Cadent brand from May
2019: Sale of National Grid's remaining stake in Cadent
2022: A major outage took place on the Cadent Gas network in Stannington, Sheffield, affecting thousands of properties for more than a week amid below-freezing temperatures; Sheffield City Council declared a major incident. The outage was caused by a burst water main flooding the complex local gas main network.
References
External links
Media related to Cadent Gas at Wikimedia Commons
Official website |
carbon pricing in australia | A carbon pricing scheme in Australia was introduced by the Gillard Labor minority government in 2011 as the Clean Energy Act 2011 which came into effect on 1 July 2012. Emissions from companies subject to the scheme dropped 7% upon its introduction. As a result of being in place for such a short time, and because the then Opposition leader Tony Abbott indicated he intended to repeal "the carbon tax", regulated organizations responded rather weakly, with very few investments in emissions reductions being made. The scheme was repealed on 17 July 2014, backdated to 1 July 2014. In its place the Abbott government set up the Emission Reduction Fund in December 2014. Emissions thereafter resumed their growth evident before the tax.The carbon price was part of a broad energy reform package called the Clean Energy Futures Plan, which aimed to reduce greenhouse gas emissions in Australia by 5% below 2000 levels by 2020 and 80% below 2000 levels by 2050. Although Australia does not levy a direct carbon price, the plan set out to achieve these targets by encouraging Australia's largest emitters to increase energy efficiency and invest in sustainable energy. The scheme was administered by the Clean Energy Regulator. Compensation to industry and households was funded by the revenue derived from the charge. The scheme required entities which emit over 25,000 tonnes of carbon dioxide equivalent greenhouse gases per year, and which were not in the transport or agriculture sectors, to obtain emissions permits, called carbon units. Carbon units were either purchased from the government or issued free as part of industry assistance measures. As part of the scheme, personal income tax was reduced for those earning less than A$80,000 per year and the tax-free threshold was increased from A$6,000 to A$18,200. Initially the price of a permit for one tonne of carbon was fixed at A$23 for the 2012–13 financial year, with unlimited permits being available from the government. The fixed price rose to A$24.15 for 2013–14.
The government had announced that the scheme was part of a transition to an emissions trading scheme in 2014–15, where the available permits will be limited in line with a pollution cap. The scheme primarily applied to electricity generators and industrial sectors. It did not apply to road transport and agriculture. The Department of Climate Change and Energy Efficiency stated that in June 2013 only 260 entities were subject to the scheme, of which approximately 185 were liable to pay for carbon units. Domestic aviation did not face the carbon price scheme, but was subject to an additional fuel excise levy of approximately 6 cents per liter.
In February 2012, the Sydney Morning Herald reported that the Clean Energy Future carbon price scheme had not deterred new investment in the coal industry, as spending on exploration had increased by 62% in 2010–2011, more than for any other mineral commodity. The government agency Geoscience Australia reported that investment in coal prospecting reached $520 million in 2010–2011. Falls in carbon emissions were observed following implementation of this policy. It was noted that emissions from sectors subject to the pricing mechanism were 1.0% lower, and nine months after the introduction of the pricing scheme, Australia's carbon dioxide emissions from electricity generation had fallen to a 10-year low, with coal generation down 11% from 2008 to 2009. However, attribution of these trends to carbon pricing has been disputed, with Frontier Economics claiming the trends are largely explained by factors unrelated to the carbon tax. Electricity demand had been falling and in 2012 was at the lowest level seen since 2006 in the National Electricity Market.
History
In October 2006 the Stern Review on the effect of climate change on the world's economy was released for the British government. This report recommended a range of measures including ecotaxes to address the market failure represented by climate change with the least amount of economic and social disruption. In response to this report and subsequent pressure from the Kim Beazley led Labor opposition, in December 2006 the Howard government established the Prime Ministerial Task Group on Emissions Trading, chaired by Peter Shergold, to advise on the implementation of an emissions trading scheme (ETS) in Australia. In opposition, Kevin Rudd called for a cut to greenhouse gas emissions by 60% before 2050. Both the incumbent Howard government and the Rudd Labor opposition promised to implement an emissions trading scheme (ETS) before the 2007 federal election. Following the release of the final Shergold report, the Howard government committed to introduce an ETS in June 2007.Going into the 2007 federal election, the Labor opposition party presented itself as a "pro-climate" alternative to the Government, with Kevin Rudd, who had by then deposed Beazley as leader, famously describing climate change as "the great moral challenge of our generation". Labor differentiated itself from the government by promising an ETS with an earlier start date of 2010 rather than the 2012 timeframe advocated by Howard. It also promised ratification of the Kyoto Protocol, investment in clean coal and renewable energy, and slightly more aggressive targets for renewable energy.Labor won the election on 24 November 2007, and on 3 December 2007 the Rudd government signed the ratification of the Kyoto Protocol at the 2007 United Nations Climate Change Conference. By ratifying the Kyoto Protocol, Australia committed to keeping emissions to no more than 108% of its 1990 emissions level by 2012. Australia's ratification came into effect on 11 March 2008.The Rudd government began negotiating the passage of an ETS through the Parliament. The Opposition led by Brendan Nelson called for the vote on the government's ETS be delayed until after the United Nations climate change summit in Copenhagen in December 2009. Prime Minister Rudd said in response that it would be "an act of absolute political cowardice, an absolute failure of leadership not to act on climate change until other nations had done so" and the government pursued the early introduction of the Scheme.On 16 July 2008, the Rudd government released a green paper for its Carbon Pollution Reduction Scheme (CPRS) (also known as Australia's ETS), outlining the intended design of the scheme. The CPRS was criticised by those who were both for and against action to mitigate climate change. Environmental lobby groups protested that the emissions reductions targets were too low, and that the level of assistance to polluters was too high. Industry and business lobby groups however argued for more permits and assistance to offset the economic impacts of the scheme on many enterprises, particularly during the financial crisis of 2007–2008. Malcolm Turnbull became the new Liberal Opposition Leader on 18 September 2008. On 30 September 2008, the Garnaut Climate Change Review, commissioned in April 2007 by Rudd when he was leader of the Opposition, released its final report. Garnaut recommended a price between $20 and $30 per tonne of carbon dioxide (CO2) equivalent with a rise of 4% each year. A more detailed white paper on the CPRS was released on 15 December 2008.
Unable to secure the support of the crossbench for their preferred model, the government entered negotiations with Turnbull, and in the lead-up to the Copenhagen Conference presented an amended CPRS scheme with Turnbull's support. The Turnbull-led Opposition supported the CPRS scheme in principle, although at times during 2009 it indicated disagreement with various details, including the timing of implementation of the scheme, the timing of the vote on the relevant legislation, and the level of assistance to be provided to polluting industries. The Opposition was able to negotiate greater compensation for polluters affected by the scheme in November 2009. Shortly before the Senate was due to vote on the carbon bills, on 1 December 2009 Tony Abbott replaced Turnbull as leader of the Liberal Party. Abbott immediately called a secret ballot among Coalition MPs, in which support for the ETS was overwhelmingly rejected. The Coalition then withdrew its support for the carbon pricing policy and joined the Greens and Independents in voting against the relevant legislation in the Parliament of Australia on 2 December 2009. As the Rudd government required the support of either the Coalition or the Greens to secure passage of the bill, it was defeated in the Senate. Abbott described Labor's ETS plan as a 'Great big tax on everything'.
Abbott announced a new Coalition policy on carbon emission reduction in February 2010, which committed the Coalition to a 5% reduction in emissions by 2020. Abbott proposed the creation of an 'emissions reduction fund' to provide 'direct' incentives to industry and farmers to reduce carbon emissions. In April 2010, Rudd deferred attempts to advance the scheme to at least 2013, opting not to present the legislation to the Senate a second time, creating a trigger for a double dissolution election. In June 2010, Julia Gillard replaced Rudd as leader of the Labor Party and became Prime Minister. Factional leader and key Gillard supporter Bill Shorten said that the sudden announcement of change of policy on the ETS was a factor that had contributed to a collapse in support for Rudd's leadership.Shortly afterwards Gillard called a federal election for 21 August 2010. During the election campaign Gillard stated that she supported a price on carbon emissions and that she would prosecute the case for action for as long as she needed to win community support. However, she also indicated that she would not introduce carbon pricing until there was a sufficient consensus on the issue, that any carbon price legislated would not come into effect until after the 2013 election, and she specifically ruled out the introduction of a "carbon tax".The result of the election left Australia with its first hung parliament in 70 years. To form a majority in the House of Representatives both of the major parties needed to acquire the support of cross-benchers, including the Greens. After two weeks of negotiations Gillard had enough support to gain a majority including the support of the Greens and their single MP in the House, Adam Bandt. Gillard, therefore, remained Prime Minister and Abbott remained in Opposition. One of the conditions for Greens support was that the formation of a cross-party parliamentary committee to determine policy on climate change. Gillard honoured that agreement and on 27 September 2010 the Multi-Party Climate Change Committee (MPCCC) was formed, its terms of reference including that it was to report to Cabinet on ways to introduce a carbon price. The MPCCC agreed on the introduction of a fixed carbon price commencing 1 July 2012, transitioning to a flexible-price cap-and-trade ETS on 1 July 2015. Initially the price of permits is fixed and the quantity unlimited i.e. there is no cap; the scheme thus functions similarly, and is popularly referred to as a tax.
In February 2011, the government proposed the Clean Energy Bill, which the opposition claimed to be a broken election promise. The Liberal Party vowed to overturn the bill if elected. The Gillard government had asked the Productivity Commission to report on the steps taken by eight major economies to address climate change. In June 2011, the report found that more than 1,000 climate policies were already enacted across the globe. It also supported a market-based carbon price as being the most cost-effective way to reduce emissions. The report's findings were one of the major reasons that support for the carbon tax was provided by independent Tony Windsor. Windsor made it clear that he would not support the clean energy legislation if it included a carbon tax on transport fuels. He did not want to penalise people who lived in rural areas, where there was no public transport as an alternative to private vehicles.
The Clean Energy Plan was released on 10 July 2011. The Clean Energy Bill 2011 passed the Australian House of Representatives in October 2011 and the Australian Senate in November 2011, and was thus brought into law. On 1 July 2012 the Australian Federal government introduced a carbon price scheme. To offset the impact of the tax on some sectors of society, the government reduced income tax (by increasing the tax-free threshold) and increased pensions and welfare payments slightly to cover expected price increases, as well as introducing compensation for some affected industries. On 17 July 2014, a report by the Australian National University estimated that the Australian scheme had cut carbon emissions by as much as 17 million tonnes in 2013, the biggest annual reduction in greenhouse gas emissions in 24 years of records, as the carbon tax helped drive a large drop in pollution from the electricity sector. On 17 July 2014, the Abbott government passed repeal legislation through the Senate to abolish the carbon pricing scheme. In its place the government set up the Emission Reduction Fund, paid by taxpayers from consolidated revenue, which, according to RepuTex, a markets consultancy, may only meet a third of the emissions reduction challenge if Australia is to cut emissions by 5% below 2000 levels by 2020.
Scope and covered emissions
The carbon price came into effect on 1 July 2012 and applied to direct emissions from a facility (scope-1 emissions), but not to indirect emissions (scope-2 emissions). The scheme only applied to facilities which emit more than 25,000 tonnes CO2-e per year, and did not apply to agriculture or to transport fuels. The carbon price was set at AUD$23 per tonne of emitted CO2-e on selected fossil fuels consumed by major industrial emitters and government bodies such as councils.
Agricultural emissions were exempt due to difficulty in tracking emissions and the related complexity of administering such a scheme. Household and business use of light vehicles did not incur a carbon price. However, changes to the fuel tax regime were proposed to effectively impose a carbon tax on business liquid and gaseous fuel emissions. There were plans for heavy on-road vehicles to pay from 1 July 2014. In effect, the scope of the scheme meant that only a small number of large electricity generators and larger industrial plants were subject to the carbon price scheme. The tax was payable by surrendering carbon units, which had been either purchased (at A$23 per tonne in 2012–13) or acquired free under an industry assistance program. The pricing mechanism was expected to cover 60% of Australia's carbon emissions. 75% of each company's annual obligation was to be paid by 15 June each year, with the remaining 25% by the following 1 February.
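To make the mechanics concrete, the Python sketch below works through a liable entity's obligation for one year under the fixed-price period. The facility's emissions and its free allocation are hypothetical figures chosen for illustration; only the A$23 unit price and the 75%/25% timing come from the text.

# Sketch of one year's obligation under the fixed-price period (hypothetical facility).
covered_emissions_t = 1_000_000     # hypothetical covered emissions, t CO2-e (well above the 25,000 t threshold)
free_units = 300_000                # hypothetical free allocation under an assistance program
price_per_unit = 23.0               # A$ fixed price for 2012-13

units_to_buy = covered_emissions_t - free_units
cost = units_to_buy * price_per_unit
print(f"units to purchase: {units_to_buy:,}, costing A${cost:,.0f}")
print(f"75% of the {covered_emissions_t:,} units are due by 15 June, the remaining 25% by 1 February")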
A list of companies which had paid the carbon tax, and the amount which each had paid, was published by the Clean Energy Regulator (CER). This was called the Liable Entities Public Information Database or LEPID. The LEPID for 2012–13 was updated on 12 July 2013 and the companies which were the fifteen largest payers of carbon tax in 2012–13 are shown in the summary below (related companies are grouped together where identifiable).
The Climate Change Authority, a statutory agency, was created to advise the government on the setting of carbon pollution caps, to conduct periodic reviews of the carbon pricing process, and to report on progress towards meeting national targets. These pollution caps were to form the basis for the cap-and-trade structure to commence in 2015.
Industry assistance programs
The Government ran several major 'Industry Assistance' programs to reduce the impact of the carbon tax on the roughly 185 affected companies. These had the effect of significantly reducing the carbon tax actually raised.
Jobs and Competitiveness Program
The 'Jobs and Competitiveness Program' was for the non-electricity sector and was targeted at the 'emissions-intensive trade-exposed' activities – that is, companies which emitted a lot of CO2 and were exposed to imports or who trade internationally. There was a list of 48 trade-exposed activities, including business such as steel making, alumina refining, cement making and similar activities.
Depending on whether a company was 'highly' or 'moderately' emissions intensive, it received 94.5% or 66% of 'average industry carbon costs' supplied as free carbon units.
Overall in 2012–13 under the 'Jobs and Competitiveness Program', there were 104 million free carbon units issued to 123 applicants, valued at approximately $2.4 billion. The fifteen largest recipients of free carbon units in 2012–13, with related companies grouped together where identifiable, were:
To put this into context, the LEPID list indicated that the total amount of carbon units to be surrendered would be 283 million units for 2012–13. 37% of these were awarded for free under the Jobs and Competitiveness Program.
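Using the figures above, the share of 2012–13 obligations met with free units follows directly, as the short Python sketch below shows.

# Share of 2012-13 carbon units issued free under the Jobs and Competitiveness Program (figures from the text).
free_units = 104_000_000
total_units_to_surrender = 283_000_000
print(f"{free_units / total_units_to_surrender:.0%} of units were issued free")   # about 37%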
Coal Fired Generation Assistance
Under the 'Coal Fired Generation Assistance' program for coal-based electricity generating companies, the Government gave out 42 million free carbon units each year, valued at almost $5 billion. These were issued only to the generators with the highest CO2 emissions intensity, above 1.0 tonne of CO2 per MWh of energy. These were primarily the brown coal-fired generators in Victoria's Latrobe Valley.
The free units were shared according to their size and the amount of CO2 produced compared to a more efficient black coal-fired power station. The list of companies which received the free units was published by the Clean Energy Regulator. Nine power stations qualified – the big four brown coal plants in Victoria, and five other much smaller plants. The four big brown coal plants in Victoria received the majority share of free carbon units, around 37 million of the 42 million free carbon units in September each year.
With an average emissions intensity of 1.3, that effectively meant there was no carbon tax on the first 20 TWh (or approximately 50%) they collectively produced each year.
Steel Transformation Plan package
The Steel Transformation Plan was a A$500 million package for Australia's two steelmakers. In 2012, payments of A$160 million were made, A$200 M to BlueScope and A$70 M to OneSteel.
Effect of the carbon price
Reduction in emissions of greenhouse gases
Because the Australian carbon tax did not apply to all fossil fuels usage, it only had an effect on some of the emitters of greenhouse gases. Among those emitters to which it applied, emissions were significantly lower after introduction of the tax. According to the Investor Group on Climate Change, emissions from companies subject to the tax went down 7% with the introduction of the tax, and the tax was "the major contributor" to this reduction.
Continuing growth in greenhouse emissions
Australia's total greenhouse gas emissions increased by 0.3% in the first six months of the Carbon Tax to December 2012 to 276.5 Mt CO2 equiv, while Australia's gross domestic product grew at a rate of 2.5% per year.
Greenhouse emissions from stationary energy (excluding electricity) and transport grew by 4% in the first six months of the carbon tax to December 2012.
However, there is a five-year trend for emissions from the electricity generation sector in Australia to decline. Electricity emissions peaked at 38% of the national total in the September quarter of 2008, coinciding with the start of the Global Financial Crisis. In December 2012, electricity emissions were just 33% of national emissions. The decline is due partly to an almost 6% reduction in electricity demand in the National Electricity Market since 2008. This fall in electricity demand followed:
Retail electricity prices rising by approximately 80% over the past five years;
Reduced economic activity and the closure of the Kurri Kurri aluminium smelter in mid-2012; and
A burst in residential solar PV generation following generous State Government incentives, now all curtailed.
Other factors contributing to the five-year fall in greenhouse emissions from the electricity sector are:
An increase in wind generation supported by the Renewable Energy Target subsidies; and
Fuel switching from coal to gas.
The Australian Government said in July 2013 that the carbon tax was a factor in reducing the emissions intensity of the National Electricity Market from 0.92 t of CO2 per MWh to 0.87 in the 11 months following its introduction. Since the carbon tax was introduced, wholesale electricity prices in the National Electricity Market increased significantly. The Energy Users Association of Australia, in its June 2013 paper, said that electricity generators have been able to pass through more than 100% of the cost of the carbon tax: "If the outcomes observed in the spot market persist then it can be unequivocally concluded that both fossil fuel generators and renewable generators will have gained as a result of emission pricing, at users' expense. Surely this is not what was intended."
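The intensity figures quoted above can be turned into a rough scale of the change, as in the Python sketch below; the annual generation figure used for scale is an assumption, not from the text, and the sources cited here dispute how much of the change is attributable to the carbon price.

# Relative change in National Electricity Market emissions intensity (intensity figures from the text).
before, after = 0.92, 0.87              # t CO2 per MWh
assumed_nem_generation_twh = 190        # assumed annual NEM generation, for scale only
reduction = (before - after) / before
avoided_mt = (before - after) * assumed_nem_generation_twh   # t/MWh x TWh = Mt CO2
print(f"intensity fell by about {reduction:.1%}, roughly {avoided_mt:.0f} Mt CO2 per year at that output")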
Alternative explanations of emissions reductions
Frontier Economics said the reduction in emissions from the electricity sector in the first year of the carbon tax was 'largely explained by factors unrelated to the carbon tax'. The Energy Users Association of Australia (EUAA) said in June 2013: "we suggest that it cannot be said that pricing emissions has reduced emissions in stationary energy to any meaningful extent".
Significant announcements which have, or may have, relevance to the carbon tax
AGL – In relation to its purchase of the Loy Yang brown coal-fired power station in 2012, one of the single largest emitters of CO2 in Australia, AGL stated: "On the supply side of the business, the most significant strategic development was the decision to buy the Loy Yang A power station. ... The Board also recognised that coal fired generation would be required for decades to come if the demand from Australian households and businesses for electricity was to continue to be satisfied."
Adelaide Brighton (Australia's second largest cement producer) – "Adelaide Brighton expects it will significantly mitigate the impact of the carbon tax over the next five years by:
Enhancing its import flexibility;
Reducing reliance on domestic manufacture;
Increasing the use of alternative fuels and cementitious substitutes."
BlueScope (Australia's largest steelmaker) – "When funds from the Steel Transformation Plan are taken into account, the Company does not expect to face a net carbon liability over the period."
Investments as a result of carbon tax
David Kassulke, the manager of AJ Bush & Sons, expressed grave concerns over the carbon tax during the lead up to its implementation. However, he later stated the carbon tax has had a positive impact on the business. The company expects to cut carbon emissions from 85,000 to 30,000 tonnes per year with the construction of a new biogas plant in 2013.
"The end result of the introduction of the new biogas technology will not only be a saving of millions of dollars in energy and carbon costs, but also an opportunity for the company to be positioned at the cutting edge of renewable energy technology in the rendering industry, Mr Kassulke said." "It means companies are now looking at ways to use less energy which equates to less cost and a subsequent reduction in the tax that is being levied."That has been the intention of the tax and clearly from that perspective it is working and working well".
Political and industry response
The introduction of a carbon price in Australia was controversial. The day before the 2010 federal election, Prime Minister, Julia Gillard sent out a message regarding carbon pricing, stating "I don't rule out the possibility of legislating a Carbon Pollution Reduction Scheme, a market-based mechanism." However the article also articulated her position on that term of government. "While any carbon price would not be triggered until after the 2013 election... She would legislate the carbon price next term if sufficient consensus existed", and the federal opposition accused the government of breaking an election promise to not introduce a carbon tax. Julia Gillard responded to these accusations by saying that circumstances changed following the 2010 election. Then opposition leader Tony Abbott criticised the carbon pricing policy on economic grounds referring to it as "toxic" and likening it to an octopus embracing the whole of the economy. He pledged to repeal the tax after the 18 clean energy bills passed through the House of Representatives and stated that the next election would be a referendum on the "carbon tax".The opposition (and since the 2013 election the Abbott government) proposed an alternative "direct-action" carbon emissions reduction scheme. Modelling produced by the Department of the Treasury indicated that this scheme would cost twice as much as the Clean Energy Futures Plan. Abbott was unable to find an Australian economist who supported his policy, although he did cite international economists who are supportive. Tony Abbott's "Direct Action Plan" has been criticised because there is no disincentive to continue polluting at the same rate, meaning that emissions will increase rather than decrease by 2020. In addition, "under Direct Action it is the public, not polluters who pay."The Australian Renewable Energy Agency (ARENA) was established as part of the Clean Energy Fund, and commenced operations on 1 July 2012. It consolidated existing renewable energy technology innovation programs. It had funds to provide financial assistance to research, develop, demonstrate, deploy and commercialise renewable energy in Australia and related technologies. The government-established but independent Clean Energy Finance Corporation (CEFC) commenced investment operations from 1 July 2013, with a focus on investments in renewable energy, low-emissions and energy efficiency technology and the manufacturing companies that produce materials used in such technologies.The majority of big emitters in Australia supported a price on carbon as at July 2012. However business groups and some big emitters, especially in the mining sector, were opposed to the pricing scheme.Research by Preston Teeter and Jorgen Sandberg at the University of Queensland revealed that liable organisations responded with very few investments in emissions reduction activities, largely due to the great deal of policy uncertainty surrounding the scheme.
One criticism of the carbon pricing scheme has been that Australia should not proceed with its introduction ahead of other countries. However, according to the Department of Climate Change and Energy Efficiency, Australia would be one of around 50 jurisdictions implementing similar schemes worldwide. The starting price of $23 per tonne has also been a point of contention. Emissions figures from the 2010–11 financial year suggest the electricity generation sector may be due to pay around A$3.9 billion. Loans have been made available so that electricity generators can purchase carbon permits. Macquarie Generation, an electricity generator owned by the Government of New South Wales, wrote down the value of its assets by about A$1 billion as a result of the carbon tax. Power generators in the Latrobe Valley also faced substantial write-downs.
Modelling undertaken by the Virgin Australia airline calculated that the average increase per flight would be A$3. They responded by implementing a surcharge of between $1.00 and 5.00 to a one-way flight starting in July 2012. Qantas is raising its ticket prices by between $1.50 and 5.50.
In a survey conducted by the Economic Society of Australia, 60% of economists thought the carbon pricing proposal was sound economic policy, while 25% disagreed. A number of public protests both in support of and against the carbon price (or tax) were held in the run up to its introduction. These include the No Carbon Tax Climate Sceptics rallies and Say Yes demonstrations.
Effects and impacts
The carbon pricing scheme was intended to improve energy efficiency, convert electricity generation from coal to alternatives and shift economic activity towards a low-carbon economy. Its impact on business was forecast to leave output 0.1–0.2% lower than under the business-as-usual scenario. The scheme aimed to prevent 160 million tonnes of carbon dioxide from entering the atmosphere by 2020, as well as generating $24 billion over three years. In May 2012, the Australian Competition & Consumer Commission (ACCC) reported it was investigating about 100 cases where customers had possibly been misled into paying excessive price rises falsely claimed to be a result of the carbon tax. By the middle of June, the commission was investigating about 200 cases. The consumer watchdog also set up a phone hotline and online form for complaints regarding excess pricing claimed to be due to the carbon tax. The ACCC had forecast that home construction costs would be at the lower end of the 0.7% to 1.8% range predicted by building companies. The Housing Industry Association estimated an average new house would experience a price increase of between 0.8% and 1.7% due to the carbon price. Housing construction was expected to be significantly impacted by the carbon tax because new homes require cement, bricks, aluminium, and glass, which are all typically energy-intensive materials. A forecast by the Centre for International Economics predicted the housing construction industry could decline by 12.6% as a result of the carbon price. The coal industry was expected to be impacted due to the emissions produced as coal is mined; however, a similar expense was not expected to be incurred by Australia's coal-exporting competitors. The Institute of Public Affairs claimed that the Australian coal industry would lose jobs to overseas competitors and that mines would be closed. Despite the announcement of the scheme, spending on mineral exploration in the March quarter was the highest ever at $1.086 billion. The impact on the LNG industry in Australia was expected to be minor to moderate. No major projects were expected to be cancelled as a result of the introduction of the carbon pricing scheme. Dairy farmers would be impacted because of higher power costs for milk processing. Household bills were expected to rise by an average of around $5 per week. Energy retailer Synergy said the carbon price would result in a 7.1% rise to power bills.
Compensation
Because carbon pricing would indirectly flow through to consumers, the Australian government implemented household assistance measures.
The measures included changes to income tax: the tax-free threshold increased from A$6,000 to A$18,200 on 1 July 2012, and was scheduled to rise to A$19,400 from 1 July 2015. The changes meant those earning less than A$20,000 received a tax cut, with those earning up to A$5,000 receiving the greatest tax reduction. The changes were described as the biggest overhaul of taxation since the Goods and Services Tax was introduced in 2000. Other steps included direct payments into bank accounts beginning in May 2012. The payments, called the Clean Energy Advance, were targeted at low- and middle-income households. Some industries received direct compensation. As part of the Energy Security Fund, A$1 billion was promised to highly emissions-intensive coal-fired generators. Most of that funding was intended for coal-fired power generators in Victoria. Research by the Grattan Institute suggested that no black coal mining or liquefied natural gas projects would be scrapped as a result of carbon pricing, regardless of industry compensation; it further claimed that, if coupled with compensation, the carbon pricing regime would in fact leave the steel industry better off. Under the Carbon Farming Initiative, farmers and graziers would have been able to plant trees to earn carbon credits, which could have been on-sold to companies liable to pay a carbon price. The Clean Technology Investment Program was touted as helping the manufacturing sector to support investments in "energy-efficient capital equipment and low emission technologies, processes and products". Companies in the food sector would also have been able to apply for grants to improve their energy efficiency.
Emissions reduction
Six months after the introduction of carbon pricing, the Department of Climate Change and Renewable Energy reported a 9% decrease in emissions from electricity generators. Nine months after the introduction of the pricing scheme, Australia's emissions of carbon dioxide due to electricity generation fell to a 10-year low, with coal generation down 6% from 2008 to 2009.
Repeal
Heading into the 2013 Australian federal election, the Liberal Party platform included the removal of the 'Carbon Tax', claiming that the election was in effect a referendum on carbon pricing in Australia. The incoming Liberal government placed removing the carbon pricing scheme at the head of its legislative program.
The carbon tax repeal legislation received Royal Assent on 17 July 2014 and the bills which were part of the package became law, with effect from 1 July 2014.
See also
Climate change in Australia
Energy development
Economics of climate change mitigation
List of climate change initiatives
New South Wales Greenhouse Gas Abatement Scheme
References
External links
Clean Energy Future archived on 7 August 2013 by the Internet Archive. |
electricity sector in argentina | The electricity sector in Argentina constitutes the third largest power market in Latin America. It relies mostly on thermal generation (60% of installed capacity) and hydropower generation (36%). The prevailing natural gas-fired thermal generation is at risk due to the uncertainty about future gas supply.
Faced with rising electricity demand (over 6% annually) and declining reserve margins, the government of Argentina is in the process of commissioning large projects in both the generation and transmission sectors. To keep up with rising demand, it is estimated that about 1,000 MW of new generation capacity are needed each year. A significant number of these projects are being financed by the government through trust funds, while independent private initiative remains limited, as it has not yet fully recovered from the effects of the 2002 Argentine economic crisis.
The electricity sector was unbundled into generation, transmission and distribution by the reforms carried out in the early 1990s. Generation occurs in a competitive and mostly liberalized market in which 75% of the generation capacity is owned by private utilities. In contrast, the transmission and distribution sectors are highly regulated and much less competitive than generation.
Electricity supply and demand
Generation
Thermal plants fueled by natural gas (CCGT) are the leading source of electricity generation in Argentina. Argentina generates electricity using thermal power plants based on fossil fuels (60%), hydroelectric plants (36%), and nuclear plants (3%), while wind and solar power accounted for less than 1%. Installed nominal capacity in 2019 was 38,922 MW. However, this reliance on gas is likely to change as supplies tighten, owing to existing "bottlenecks" in exploration and production (E+P) and in pipeline capacity. Gas output dropped for the first time in 2005 (-1.4%) and gas reserves dropped to ten years of consumption by the end of 2004 (down from an average of 30 years in the 1980s). Today, gas reserves are 43% lower than in 2000. This situation is further aggravated by the uncertainty surrounding the gas deals with Bolivia and the plans to build new regional pipeline connections. Total generation in 2005 was 96.65 TW·h. In 2015, the Atucha II Nuclear Power Plant reached 100% power, increasing the percentage of nuclear power in Argentina from 7% to 10%.
The generators are divided into eight regions: Cuyo (CUY), Comahue (COM), Northwest (NOA), Center (CEN), Buenos Aires/Gran Buenos Aires (GBA-BAS), Littoral (LIT), Northeast (NEA) and Patagonia (PAT).
Installed capacity in the wholesale market as of December 2020:
The above table does not include off-grid installed capacity or distributed generation (small biogas/biomass facilities, rooftop solar panels, etc.).
While CAMMESA categorises hydropower plants larger than 50 MW as non-renewable, classifying large hydropower as renewable is in line with international standards and with how other countries classify their hydropower. Pumped storage is considered non-renewable because it consumes electricity from the grid.
Imports and exports
In 2005, Argentina imported 6.38 TW·h of electricity while it exported 3.49 TW·h. Net energy imports thus were about 3% of consumption.
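The "about 3%" figure can be verified with simple arithmetic. The following minimal sketch (Python) uses only values quoted in this article, taking the 94.3 TW·h consumption figure from the Demand subsection below; it is illustrative rather than sourced.

    # Back-of-the-envelope check of the "about 3% of consumption" statement,
    # using the 2005 figures quoted in this article.
    imports_twh = 6.38
    exports_twh = 3.49
    consumption_twh = 94.3  # 2005 consumption, quoted under "Demand"

    net_imports_twh = imports_twh - exports_twh        # ~2.9 TWh
    share = net_imports_twh / consumption_twh          # ~0.031
    print(f"Net imports: {net_imports_twh:.2f} TWh ({share:.1%} of consumption)")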
Argentina also imports electricity from Paraguay, produced by the jointly built Yacyretá Dam. On 18 September 2006 Paraguay agreed to settle its debt of US$11 billion owed to Argentina for the construction of Yacyretá by paying in electricity, at the rate of 8,000 GWh per year for 40 years.
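As an illustration only, the quoted settlement terms imply the following totals; the per-MWh value is a back-of-the-envelope figure derived here, not a term of the agreement.

    # Illustrative arithmetic from the quoted Yacyretá settlement terms.
    debt_usd = 11_000_000_000   # debt owed to Argentina
    gwh_per_year = 8_000        # electricity delivered per year
    years = 40

    total_gwh = gwh_per_year * years                     # 320,000 GWh (320 TWh)
    implied_usd_per_mwh = debt_usd / (total_gwh * 1_000)
    print(f"Total energy: {total_gwh:,} GWh")
    print(f"Implied value: about US${implied_usd_per_mwh:.0f} per MWh")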
Demand
Electricity demand in Argentina has grown steadily since 1991, with only a temporary decline caused by the economic crisis of 2001-2002, followed by rapid growth (6%-8% annual increases) over the last five years, partly driven by the economic recovery. In 2005, the country consumed 94.3 TW·h of electricity, which corresponds to 2,368 kWh per capita. Residential consumption accounted for 29% of the total, while industrial consumption represented 43% and commercial and public consumption 26%.
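The quoted totals can be cross-checked against each other: dividing total consumption by per-capita consumption implies a population of roughly 40 million, consistent with Argentina's population at the time. A minimal sketch:

    # Consistency check of the 2005 demand figures quoted above.
    consumption_twh = 94.3
    per_capita_kwh = 2_368

    implied_population = consumption_twh * 1e9 / per_capita_kwh
    print(f"Implied population: ~{implied_population / 1e6:.1f} million")  # ~39.8 million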
Demand and supply projections
Argentina currently faces a tight supply/demand scenario, as reserve margins have declined from above 30% in 2001 to less than 10%. This fact, together with the deterioration of distribution companies' assets (cables, transformers, etc.), has the potential to endanger supply. To sustain a 6-8% annual increase in demand, it is estimated that the system should incorporate about 1,000 MW of generation capacity each year.
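A rough sketch of why roughly 1,000 MW per year is required: assuming 7% annual growth (the mid-point of the quoted 6-8% range) on the roughly 94 TW·h consumed in 2005 and an average capacity factor of about 70% for new plant. Both of these inputs are assumptions made here for illustration, not figures from the source.

    # Illustrative estimate of new generation capacity needed each year.
    consumption_twh = 94.3   # 2005 consumption
    demand_growth = 0.07     # assumed mid-point of the 6-8% range
    capacity_factor = 0.70   # assumed average for new plant
    hours_per_year = 8760

    extra_twh = consumption_twh * demand_growth                      # ~6.6 TWh/year
    needed_mw = extra_twh * 1e6 / (hours_per_year * capacity_factor)
    print(f"Extra demand: {extra_twh:.1f} TWh/year -> ~{needed_mw:,.0f} MW of new capacity")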
Transmission and distribution
In Argentina, there are two main wide-area synchronous grid systems: SADI (Sistema Argentino de Interconexión, Argentine Interconnected System) in the north and centre-south of the country, and SIP (Sistema de Interconexión Patagónico, Patagonian Interconnected System) in the south. The two systems have been interconnected since March 2006. The electricity market in the SADI area is managed by the MEM (Mercado Eléctrico Mayorista).
Access to electricity
Total electricity coverage in Argentina was close to 100% of the population in 2016. However, access to electricity remains more limited in certain rural areas. The Renewable Energy in the Rural Market Project (PERMER) was, in 2012, one of several programs being implemented to expand electricity coverage in rural areas (see World Bank projects below).
Service quality
Interruption frequency and duration
Interruption frequency and duration are considerably below the averages for the LAC region. In 2002, the average number of interruptions per subscriber was 5.15, while duration of interruptions per subscriber was 5.25 hours. The weighted averages for LAC were 13 interruptions and 14 hours respectively.
Distribution and transmission losses
Distribution losses in 2005 were 13.6%, down from 17% a decade before. In 2014, losses were about 3.3%.
Responsibilities in the Electricity Sector
Policy and regulation
The Energy Secretariat (SENER) is responsible for policy setting, while the National Electricity Regulator (ENRE) is the independent entity within the Energy Secretariat responsible for applying the regulatory framework established by Law 24,065 of 1992. ENRE is in charge of regulation and overall supervision of the sector under federal control, while provincial regulators regulate the remaining utilities. ENRE and the provincial regulators set tariffs and supervise compliance of regulated transmission and distribution entities with safety, quality, technical and environmental standards. CAMMESA (Compañía Administradora del Mercado Mayorista Eléctrico) is the administrator of the wholesale electricity market. Its main functions include the operation and dispatch of generation and price calculation in the spot market, the real-time operation of the electricity system and the administration of commercial transactions in the electricity market.
The Electric Power Federal Council (CFEE), created in 1960, also plays a very important role in the sector. It administers funds that specifically target electricity operations (i.e. the National Fund for Electric Power, see Investment and financing below) and is also an adviser to the national and provincial governments on issues relating to the power industry, public and private energy services, priorities in the execution of new projects and studies, concessions and authorizations, and electricity tariffs and prices. It is also an adviser on legislative modifications in the power industry.
The Argentine power sector is one of the most competitive and deregulated in South America. However, the fact that the Energy Secretariat has veto power over CAMMESA has the potential to alter the functioning of the competitive market. The functions of generation, transmission, and distribution are open to the private sector, but there are restrictions on cross-ownership between these three functions. Argentine law guarantees access to the grid in order to create a competitive environment and to allow generators to serve customers anywhere in the country.
Generation
Private and state-owned companies carry out generation in a competitive, mostly liberalized electricity market, with 75% of total installed capacity in private hands. The share in public hands corresponds to nuclear generation and to the two bi-national hydropower plants: Yacyretá (Argentina-Paraguay) and Salto Grande (Argentina-Uruguay). The generation sector is highly fragmented, with more than ten large companies, each providing less than 15% of the system's total capacity. Power generators sell their electricity in the wholesale market operated by CAMMESA.
Transmission and distribution
The transmission and distribution sectors are highly regulated and less competitive than generation. In transmission, the Compañía Nacional de Transporte Energético en Alta Tension (Transener) operates the national electricity transmission grid under a long-term agreement with the Argentine government. In the distribution sector, three private companies, Edenor (Empresa Distribuidora y Comercializadora Norte), Edesur (Electricidad Distribuidora Sur) and Edelap (Empresa de Electricidad de la Plata), dominate a market with 75% control by private firms.
Other important distribution companies at the provincial level are:
Public provincial: EPEC (Empresa Provincial de Energía de Córdoba), EPESFI (Empresa Provincial de Energía de Santa Fé)
Private provincial: ESJ (Energía San Juan), EDET (Empresa de Distribución Eléctrica de Tucumán), EDEN (Empresa Distribuidora de Energía Norte), EDEA (Empresa Distribuidora de Energía Atlántica), EDES (Empresa Distribuidora de Energía Sur)
Renewable energy resources
The National Promotion Direction (DNPROM) within the Energy Secretariat (SENER) is responsible for the design of programs and actions conducive to the development of renewable energy (through the Renewable Energy Coordination) and energy efficiency (through the Energy Efficiency Coordination) initiatives. Complementarily, the Secretariat of Environment and Sustainable Development is responsible for environmental policy and the preservation of renewable and non-renewable resources.
The most important legal instruments for the promotion of renewable energy are Law 25,019 of 1998 and Law 26,190 of 2007. The 1998 law, known as the "National Wind and Solar Energy Rules", declared wind and solar generation to be of national interest and introduced a mechanism that established an additional payment per generated kWh which, in 1998, amounted to a 40% premium over the market price. It also granted certain tax exemptions for a period of 15 years from the law's promulgation. The 2007 law complemented the earlier one, declaring of national interest the generation of electricity from any renewable source intended to deliver a public service. This law also set an 8% target for renewable energy consumption to be reached within 10 years and mandated the creation of a trust fund whose resources would be allocated to pay a premium for electricity produced from renewable sources.
Hydropower
At the end of 2021 Argentina was the 21st country in the world in terms of installed hydroelectric power (11.3 GW). Argentina's hydroelectric potential is only partially exploited: while the identified potential is 170,000 GW·h/year, in 2006 hydroelectric production amounted to just 42,360 GW·h. There are also untapped mini-hydropower resources, whose potential is estimated at 1.81% of overall electricity production (in contrast with its current 0.88%). (For a comprehensive list of plants see Hydroelectric power stations in Argentina.)
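The share of the identified potential that was actually exploited in 2006 follows directly from the two figures quoted above; a minimal check:

    # Share of identified hydro potential exploited in 2006, from the quoted figures.
    potential_gwh_per_year = 170_000
    production_2006_gwh = 42_360
    print(f"Exploited share: {production_2006_gwh / potential_gwh_per_year:.0%}")  # ~25%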
Wind
At the end of 2021 Argentina was the 26th country in the world in terms of installed wind energy (3.2 GW). As of 2020 Argentina had an installed wind energy capacity of 1.6 GW, with 931 MW installed in 2019 alone. Electricity production from onshore wind power in Argentina increased from 1.41 TWh in 2018 to 9.42 TWh in 2020.
The technical potential for offshore wind in Argentina has been estimated at 2.5 TW, but no offshore turbines have been built so far.
The Argentine Patagonia region has a very large wind potential. The Chubut Wind Power Regional Center (CREE) estimated the theoretical potential for the region at 500 GW of electricity generation. However, this large potential is still largely unexploited. One of the reasons for this underdevelopment is that existing tariffs and incentives do not yet make wind power development attractive enough. The main deterrent to wind power development in the region, however, has been the lack of transmission lines connecting the Patagonia region with the National Interconnected System. The completion of the Choele-Choel-Puerto Madryn high voltage line, the first section of the Línea Patagónica under the framework of the Plan Federal de Transporte de Energía Eléctrica, eliminated this bottleneck in March 2006.
Nevertheless, wind power has increased significantly in Argentina during the last decade. Total operating wind power capacity in 2005 was 26.6 MW, shared by 13 plants. This is still only about 0.05% of the theoretical potential of wind energy in Argentina. In 2007, the distribution of plants and total capacity by province was:
Buenos Aires Province: 6 plants, 6,100 kW
Chubut Province: 4 plants, 17,460 kW
Santa Cruz Province: 1 plant, 2,400 kW
La Pampa Province: 1 plant, 1,800 kW
Neuquen Province: 1 plant, 400 kW
Of the 13 plants, only three have been commissioned after the year 2000, with the remaining 10 built during the 1990s. (See Wind power regime map for Argentina.)
Solar
At the end of 2021 Argentina was the 43rd country in the world in terms of installed solar energy (1.0 GW). For many years solar power was present only in remote areas: just 81 MW·h were generated in 2005, less than 0.1% of total electricity production. In 2012 the first of four 5 MW stages of Cañada Honda was completed, as part of a plan to install 117 MW of renewable energy.
History of the electricity sector
The reforms of 1991/92
Prior to 1991, the electricity sector in Argentina was vertically integrated. The sector experienced a serious crisis in the summers of 1988/1989, primarily due to the lack of maintenance of the country's thermal power plants (50% were unavailable). Shortly after the crisis, the government of Carlos Menem introduced a new legal framework for the electricity sector through Law 24,065, which included the following elements: vertical and horizontal unbundling of generation, transmission and distribution; opening up of all segments to the private sector; and separation of the regulatory function from policy setting. As a result of the new law, there was substantial private investment which, together with the public power plants that started production in the 1990s, transformed a situation of power shortage and low quality into one of abundance and reliability at lower prices.
ENRE (Electricity National Regulatory Entity) was created in 1992. The Wholesale Electricity Market (MEM), which covers up to 93% of total demand corresponding to the Argentine Interconnected System (SADI), was also created in 1992. The remaining 7% share of the demand corresponds to Patagonia, which had its own interconnected market, the Patagonian Wholesale Electricity Market (MEMSP), now interconnected with the MEM. CAMMESA (Wholesale Electricity Market Administration Company) was also created that year and assigned the responsibilities of coordinating dispatch operations, setting wholesale prices and administering economic transactions performed through the Argentine Interconnected System.
The reforms implemented in the 1990s led to high investment, which allowed for a 75% increase in generation capacity and resulted in a decrease of wholesale market prices from US$40/MW·h in 1992 to US$23/MW·h in 2001. However, the reforms failed to deliver the necessary increase in transmission capacity. Only one relevant project, the addition of the 1,300 km high voltage line between Comahue and Buenos Aires, was built in the 1990s. Distribution networks were also renovated and expanded, which resulted in efficiency and quality improvements.
Tariff freeze
As a response to the 2001 economic crisis, electricity tariffs were converted to the Argentine peso and frozen in January 2002 through the Public Emergency and Exchange Regime Law. Combined with high inflation (see Economy of Argentina) and the devaluation of the peso, this left many companies in the sector with high levels of debt in foreign currency under a scenario in which their revenues remained stable while their costs increased. This situation led to severe underinvestment and an inability to keep up with increasing demand, factors that contributed to the 2003-2004 energy crisis. Since 2003, the government has been in the process of introducing modifications that allow for tariff increases. Industrial and commercial consumers' tariffs have already been raised (by nearly 100% in nominal terms and 50% in real terms), but residential tariffs still remain the same.
Creation of Enarsa
In 2004, President Néstor Kirchner created Energía Argentina Sociedad Anónima (Enarsa), a company managed by the national state of Argentina for the exploitation and commercialization of petroleum and natural gas, but also the generation, transmission and trade of electricity. Through the creation of Enarsa, the state would regain a significant role in the energy market, which had been largely privatized during the 1990s.
Energy Plus and Gas Plus programs
In September 2006, SENER launched the Energy Plus (Energía Plus) program with the objective of increasing generation capacity and meeting the rising demand for electricity. The program applies to consumption levels above those of 2005. CAMMESA requires all large users (above 300 kW) to contract the difference between their current demand and their demand in the year 2005 in the Energy Plus market. In this new, de-regulated market, only energy produced from new generation plants is traded. The aim of the program is twofold: on the one hand, it seeks to guarantee supply to residential consumers, public entities, and small and medium enterprises; on the other hand, it aims to encourage self-generation by the industrial sector and electricity cogeneration.
In March 2008, the government approved Resolution 24/2008, which created a new natural gas market called "Gas Plus" to encourage private investment in natural gas exploration and production. The Gas Plus regime applies to new discoveries and to "tight gas" fields. The price of the new gas, whose commercialization will be restricted to the domestic market, will not be subject to the conditions established in the "Agreement with Natural Gas Producers 2007-2011" but will be based on costs and a reasonable profit. Experts believe that, if the Gas Plus regime is successful, it could stimulate new investments in electricity generation plants under the Energy Plus regime, as it could ensure fuel supply to the new plants.
The National Program for the Rational and Efficient Use of Energy (PRONUREE)
In December 2007, the government launched the National Program for the Rational and Efficient Use of Energy (PRONUREE, Decree 140/2007). This decree declared the rational and efficient use of energy to be in the national interest and is also part of the energy sector strategy to counter the supply/demand imbalance. The PRONUREE, under the responsibility of the Secretariat of Energy, aims to be a vehicle for improving energy efficiency in the energy-consuming sectors and acknowledges that energy efficiency needs to be promoted with a long-term commitment and vision. It also acknowledges the connection between energy efficiency and sustainable development, including the reduction of greenhouse gas emissions. The program also recognizes the need for individual behavioral changes to be promoted through an educational strategy, with the public sector setting the example by assuming a leadership role in the implementation of energy conservation measures in its facilities.
The PRONUREE includes short- and long-term measures aimed at improving energy efficiency in the industrial, commercial, transport, residential and service sectors and in public buildings. It also supports educational programs on energy efficiency; enhanced regulations to expand cogeneration activities; labeling of energy-using equipment and appliances; improvements to energy efficiency regulations; and broader utilization of the Clean Development Mechanism (CDM) to support the development of energy efficiency projects. The objective of the program is to reduce electricity consumption by 6%.
One of the first activities defined under PRONUREE is the national program to phase out incandescent bulbs in Argentina by 2011. The program, financed by the government, aims to replace incandescent bulbs with energy-efficient compact fluorescent lamps (CFLs) in all households connected to the electricity grid and in selected public buildings. The program, which initially underwent a pilot phase and expects to replace 5 million incandescent lamps in the next six months, foresees the distribution of 25 million lamps overall. Staff from the distribution companies will visit each household to replace the incandescent lamps and to inform residential users of the advantages of replacing the bulbs and of the efficient use of energy in general.
Recent tariff increases, 2008
In Argentina, retail tariffs for the distribution utilities in the Metropolitan Area of Buenos Aires and La Plata city (i.e. Edenor, Edesur and Edelap) are regulated by the national regulatory agency (ENRE), while provincial utilities are regulated by local regulators. While the utilities under ENRE's jurisdiction had not been allowed to raise residential tariffs since they were frozen in 2002 as a result of the Emergency and Exchange Regime Law, some provincial regulators had recently approved additional charges to residential tariffs. In particular, the Public Service Regulatory Agency in the Province of Córdoba (ERSeP) agreed in February 2008 to a 17.4% additional charge to residential customers. Likewise, Santa Fé approved increases between 10% and 20%; Mendoza between 0 and 5% below 300 kWh and between 10% and 27% above 300 kWh; Jujuy between 22% and 29%; and Tucumán between 10% and 24%. Other provinces (i.e. San Juan, Chaco, Formosa, Corrientes, La Pampa, Neuquén, Río Negro and Entre Ríos) are expected to raise tariffs in the near future.
Recently, in August 2008, after a 7-year tariff freeze, residential electricity tariffs in the Buenos Aires metropolitan area (served by the Edenor, Edesur and Edelap utilities) were increased by 10-30% for households that consume more than 650 kWh every two months. For consumption between 651 kWh and 800 kWh, the increase will be 10%; at the other end, for users over 1,201 kWh, the increase amounts to 30%. The increase affects around 24% of all Edenor, Edesur and Edelap customers (1,600,000 households). For commercial and industrial users the increase will be 10%.
At the end of August 2008, ENRE also approved increases in transmission tariffs in the 17%-47% range. The increase granted by ENRE was below the increase determined by the Energy Secretariat for some transmission companies (e.g. Transener, Transba, Distrocuyo and Transnoa). Some of them (i.e. Transener and Transba) will most likely challenge ENRE's decision. An overall tariff revision is still pending and has been put off until February 2009.
Tariffs, cost recovery and subsidies
Tariffs
Electricity tariffs in Argentina are well below the LAC average. In 2004, the average residential tariff was US$0.0380 per kWh, very similar to the average industrial tariff, which was US$0.0386 per kWh in 2003. Weighted averages for LAC were US$0.115 per kWh for residential consumers and US$0.107 per kWh for industrial customers.
(See History of the electricity sector for more information on the evolution of tariffs).
Subsidies
See Fund for the Electric Development of the Interior (FEDEI) below.
Investment and financing
In 1991, the Government of Argentina created the National Fund for Electric Power (FNEE, Fondo Nacional de la Energía Eléctrica), to be funded by a share of the petrol tax and a surcharge on sales from the wholesale market. This Fund, which is administered by CFEE (Electric Power Federal Council), provides funding to the following other funds at the shares indicated:
47.4%: Subsidiary Fund for Regional Tariff Compensation to Final Users (FCT), for homogenization of tariffs across the country (this created a de facto subsidy for those consumers in the areas with higher electricity costs)
31.6%: Fund for the Electric Development of the Interior (FEDEI), for generation, transmission and rural and urban distribution works. Most funds have been directed to rural electrification
19.75%: Fiduciary Fund for Federal Electricity Transmission (FFTEF) (created in 2000), for co-financing of projects in electricity transmission.
1.26%: Wind Energy Fund (created in 2002), for the development of wind energy.
In addition, CAMMESA, the administrator of the wholesale electricity market, had projected that by 2007 the country's energy demand would require an additional capacity of 1,600 MW. Faced with the need for specific investments but also with a lack of private investment, the Energy Secretariat (SENER) enacted Resolutions 712 and 826 in 2004, which created FONINVEMEM, the Fund for the Investment Needed to Increase the Supply of Electricity in the Wholesale Market. The Fund, which sought to encourage participation from creditors of the wholesale market, invited those creditors, mainly generation companies, to participate with their credit in the creation of the Fund itself.
Ongoing projects
There are several projects that are part of the government's response to the predicted electricity shortages. If all those plans are completed as expected, the capacity requirements for the next few years will be met.
Thermal power
Two new CCGT plants of 830 MW each, the José de San Martín Thermoelectric and Manuel Belgrano Thermoelectric plants, are under construction and expected to start full operations at the beginning of 2009. Endesa, Total S.A., AES Corporation, Petrobras, EDF and Duke Energy are the main shareholders in the plants, which have been financed through the FONINVEMEM (total investment of up to US$1,097 million).
In addition, the Planning Ministry announced in July 2007 the commissioning of five new thermal plants with a total capacity of 1.6 GW and an overall investment of US$3,250 million. These dual-fuel turbine (gas or fuel oil) plants, which are expected to start operations in 2008, will be located in Ensenada (540 MW), Necochea (270 MW), Campana (540 MW), Santa Fe (125 MW) and Córdoba (125 MW). Finally, Enarsa has recently launched bidding for eleven small and transportable generation units (15-30 MW each) and for three larger generation units (50-100 MW) to be installed on barges. These new units, whose base price is still unknown, will add between 400 and 500 MW of new generation capacity.
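A quick tally of the additions described above; the last term uses the 400-500 MW range quoted for the small transportable and barge-mounted units rather than per-unit assumptions, so the total is an illustrative range, not a sourced figure.

    # Rough tally of the announced thermal generation additions.
    ccgt_mw = 2 * 830                              # José de San Martín + Manuel Belgrano
    dual_fuel_mw = 540 + 270 + 540 + 125 + 125     # Ensenada, Necochea, Campana, Santa Fe, Córdoba
    small_units_mw = (400, 500)                    # quoted range for Enarsa's small/barge units

    low = ccgt_mw + dual_fuel_mw + small_units_mw[0]
    high = ccgt_mw + dual_fuel_mw + small_units_mw[1]
    print(f"Announced thermal additions: roughly {low:,}-{high:,} MW")  # ~3,660-3,760 MW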
Nuclear power
In 2006, the Argentine government launched a plan to boost nuclear energy. The Atucha II nuclear power plant, whose construction started in 1981, was to be completed and to add 750 MW of generation capacity by 2010. The plant started producing power in June 2014. In addition, the Embalse nuclear power plant, with 648 MW of generation capacity, was to be refurbished to extend its operational life beyond 2011.
Hydropower
On the hydropower side, the Yacyretá dam's reservoir was elevated by 7 m to the height of 83 m contemplated in its original design, which increased its capacity from 1,700 to 3,100 MW and raised its electricity output by about 60% (from 11,450 GW·h to 18,500 GW·h). The reservoir rise was completed in February 2011, despite serious controversy regarding the resettlement of people. Additionally, in 2006, bidding for the expansion of Yacyretá with the construction of a new 3-turbine plant in the Añá Cuá arm of the Paraná River was announced by the Government. This expansion, to be finalized in 2010, would add 300 MW of new generation capacity.
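The quoted output figures are consistent with the stated increase of about 60%; a minimal check:

    # Consistency check of Yacyretá's quoted output increase.
    output_before_gwh = 11_450
    output_after_gwh = 18_500
    increase = output_after_gwh / output_before_gwh - 1
    print(f"Output increase: {increase:.0%}")  # ~62%, roughly the quoted 60%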
Transmission
In regard to transmission, the Federal Plan for Transport of Electric Energy at 500 kV is under implementation under the umbrella of the FFTEF (Fondo Fiduciario para el Transporte Eléctrico Federal). The main lines of the plan (Línea Patagónica, Línea Minera, Yacyretá, Puerto Madryn – Pico Truncado, NEA-NOA, Comahue – Cuyo, Pico Truncado – Río Turbio – Río Gallegos) are already built or currently under construction. The lines built between 2007 and 2009 will add 4,813 new kilometers of high voltage transmission capacity.
In addition, the Federal Plan for Transport of Electric Energy II, defined in 2003 and updated in 2006, has the objective of addressing the constraints faced by the regional transmission networks in the period up to 2010. This complementary plan has prioritized the necessary works according to their ability to address short-term demand issues. 109 of the 240 works identified in 2003 were considered of high priority and have already been completed or are under execution. Initially, investment for high priority works was estimated at US$376 million, while estimated investment for the rest of the works totaled US$882.2 million. However, this budget is under revision due to the increasing costs of materials such as steel and aluminum and of labor.
Summary of private participation in the electricity sector
Prior to 1991, the electricity sector in Argentina was vertically integrated. The new legal framework for the electricity sector included: vertical and horizontal unbundling of generation, transmission and distribution; opening up of all segments to the private sector; and separation of the regulatory function from policy setting.
Currently, private and state-owned companies carry out generation in a competitive, mostly liberalized electricity market, with 75% of total installed capacity in private hands. The publicly owned share corresponds to nuclear generation and to the two bi-national hydropower plants: Yacyretá (Argentina-Paraguay) and Salto Grande (Argentina-Uruguay). On the other hand, the transmission and distribution sectors are highly regulated and less competitive than generation. In transmission, the Compañía Nacional de Transporte Energético en Alta Tension (Transener) operates the national electricity transmission grid, while in the distribution sector, three private companies, Edenor (Empresa Distribuidora y Comercializadora Norte), Edesur (Electricidad Distribuidora Sur) and Edelap (Empresa de Electricidad de la Plata), dominate a market with 75% control by private firms.
Electricity and the Environment
Responsibility for the environment
The Secretariat of Environment and Sustainable Development holds responsibility for the environment in Argentina.
Greenhouse gas emissions
OLADE (Organización Latinoamericana de Energía) estimated that CO2 emissions from electricity production in 2003 were 20.5 million tons of CO2, which represents 17% of total emissions for the energy sector.
In 2011, according to the International Energy Agency, actual CO2 emissions from electricity generation were 67.32 million metric tons, a share of 36.7% of the country's total CO2 emissions from fuel combustion.
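The two IEA figures imply a national total for CO2 emissions from fuel combustion; the value below is derived here for illustration and is not itself quoted by the agency.

    # Implied national total from the quoted 2011 electricity-sector figures.
    electricity_co2_mt = 67.32   # million tonnes from electricity generation
    share_of_total = 0.367       # share of total CO2 from fuel combustion
    implied_total_mt = electricity_co2_mt / share_of_total
    print(f"Implied total CO2 from fuel combustion: ~{implied_total_mt:.0f} Mt")  # ~183 Mt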
CDM projects in electricity
As of August 2007, there were only three energy-related registered CDM projects in Argentina, with expected total emissions reductions of 673,650 tons of CO2e per year. Of the three projects, only one is large-scale: the 10.56 MW Antonio Morán wind power plant in the Patagonia region. The two existing small-scale projects are electricity production from biomass waste at Aceitera General Deheza and methane recovery with electricity generation at the Norte III-B landfill.
External assistance
World Bank
The only active energy project financed by the World Bank in Argentina is the Renewable Energy in the Rural Market Project (PERMER). This project has the objective of guaranteeing access to electricity to 1.8 million people (314,000 households) and to 6,000 public services (schools, hospitals, etc.) located far from electricity distribution centers. Electrification of this dispersed market will be mostly carried out through the installation of solar photovoltaic systems, but also through other technologies such as micro-hydraulic turbines, wind and, eventually, diesel generators. The project, which started in 1999 and is expected to end in December 2008, has received a US$10 million grant from GEF and a US$30 million loan from the World Bank. The Argentine Energy Secretariat has recently presented an Energy Efficiency project to the GEF. The objective of the project is to improve energy use, reducing its costs to consumers and contributing to the sustainability of the energy sector in the long term. A reduction in greenhouse gas emissions is also sought.
Inter-American Development Bank
In November 2006, the Inter-American Development Bank approved a $580 million loan for the construction of a new 760-mile transmission line in northern Argentina that will connect separate grids in the northeastern and northwestern parts of the country, the Norte Grande Electricity Transmission Program.
Andean Development Corporation (CAF)
In 2006, Argentina received financing from CAF (Andean Development Corporation) for two electricity projects: the Electricity Interconnection Comahue-Cuyo (US$200 million) and the Electricity Interconnection Rincón Santa María-Rodríguez (US$300 million), two of the high voltage transmission lines included in the Federal Transportation Plan. In the same year, Argentina also borrowed US$210 million from CAF for a program that aims at repairing the country's hydroelectric infrastructure.
In June 2007, CAF approved a US$45 million loan to the Buenos Aires province for partial financing of the electricity transport capacity in the North of the province.
Sources
Cámara Argentina de la Construcción, 2006. La construcción como herramienta del crecimiento continuado. Sector eléctrico. Evaluación de las inversiones necesarias para el sector eléctrico nacional en el mediano plazo. Consultor: Dr. Ing. Alberto del Rosso.
Coordinación de Energías Renovables, 2006. Potencial de los aprovechamientos energéticos en la República Argentina
Oxford Analytica, 2006. Argentina: Energy issues threaten sustained growth
Secretaria de Energía, 2006. Informe del Sector Eléctrico 2005.
Secretaría de Energía, 2007. Balance Energético Nacional. Avance 2006
See also
Argentina
Argentine economic crisis (1999-2002)
Argentine energy crisis (2004)
Water supply and sanitation in Argentina
National Atomic Energy Commission
Renewable energy by country
2019 Argentina and Uruguay blackout
Notes
External links
Public entities
Energy Secretariat
Energy Office - Province of Buenos Aires
Secretariat of Environment and Sustainable Development
Electric Power Federal Council (CFEE)
Administrator of the Wholesale Electricity Market (CAMMESA)
National Electricity Regulator (ENRE)
Transener
Electricity distributors
Edenor
Edesur
Edelap
Federación Argentina de Cooperativas de Electricidad y Otros Servicios Públicos Limitada
External partners
List of World Bank projects in Argentina
List of Inter-American Development Bank projects in Argentina
Other
Energy Efficiency Programs and Projects
Climate Change Scenarios for Argentina
La Energía Eléctrica en la República Argentina
Early History of the electricity sector in Argentina
(in Spanish) CNEA - Electricity Bulletin |
energiewende | The Energiewende (German for 'energy turnaround') (pronounced [ʔenɐˈɡiːˌvɛndə]) is the ongoing transition by Germany to a low carbon, environmentally sound, reliable, and affordable energy supply. The new system intends to rely heavily on renewable energy (particularly wind, photovoltaics, and hydroelectricity), energy efficiency, and energy demand management.
Legislative support for the Energiewende was passed in late 2010 and included greenhouse gas (GHG) reductions of 80–95% by 2050 (relative to 1990) and a renewable energy target of 60% by 2050.
Germany had already made significant progress on its GHG emissions reduction target before the introduction of the program, achieving a 27% decrease between 1990 and 2014. However, the country would need to maintain an average GHG emissions abatement rate of 3.5% per year to reach its Energiewende goal, equal to the maximum historical value thus far.
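The 3.5% figure follows from simple compound-rate arithmetic. The sketch below assumes a 2014 starting point 27% below the 1990 level and a constant annual percentage reduction through 2050; these modelling choices are made here for illustration and are not taken from an official methodology.

    # Rough sketch of the arithmetic behind the ~3.5% per year abatement figure.
    def required_annual_abatement(start_year, end_year, start_fraction, target_fraction):
        """Constant yearly reduction rate taking emissions from start_fraction
        to target_fraction of the 1990 level over the given period."""
        years = end_year - start_year
        return 1 - (target_fraction / start_fraction) ** (1 / years)

    rate_80 = required_annual_abatement(2014, 2050, 0.73, 0.20)  # 80% reduction target
    rate_95 = required_annual_abatement(2014, 2050, 0.73, 0.05)  # 95% reduction target
    print(f"80% target: {rate_80:.1%} per year")   # ~3.5%
    print(f"95% target: {rate_95:.1%} per year")   # ~7.2%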
Germany's overall energy mix still has a high CO2 intensity due to a large share of coal and fossil gas. As part of the Energiewende, Germany phased out nuclear power in 2023, and plans to retire all existing coal power plants by 2038.
The early retirement of the country's nuclear reactors was particularly controversial.
Etymology
The term Energiewende is regularly used in English-language publications without being translated (a loanword).
The term Energiewende first appeared in the title of a 1980 publication by the German Öko-Institut, which called for the complete abandonment of nuclear and petroleum energy.
Its most groundbreaking claim was that economic growth was possible without increased energy consumption. On 16 February 1980, the German Federal Ministry of the Environment also hosted a symposium in Berlin, called Energiewende – Atomausstieg und Klimaschutz (Energy Transition: Nuclear Phase-Out and Climate Protection). The Öko-Institut was funded by both environmental and religious organizations, and religious and conservative figures such as Wolf von Fabeck and Peter Ahmels played a crucial role. In the following decades, the term Energiewende expanded in scope; in its present form it dates back to at least 2002.
Energiewende designated a significant change in energy policy. The term encompassed a reorientation of policy from demand to supply and a shift from centralized to distributed generation (for example, producing heat and power in small co-generation units), which should replace overproduction and avoidable energy consumption with energy-saving measures and increased efficiency.
In a broader sense, this transition also entailed a democratization of energy. In the traditional energy industry, a few large companies with large centralized power stations were perceived as dominating the market as an oligopoly and consequently amassing a worrisome level of both economic and political power. Renewable energies, in contrast, can, in theory, be established in a decentralized manner. Public wind farms and solar parks can involve many citizens directly in energy production. Photovoltaic systems can even be set up by individuals. Municipal utilities can also benefit citizens financially, while the profits of the conventional energy industry flow to a relatively small number of shareholders. Also significant is that the decentralized structure of renewable energy enables value to be created locally and minimizes capital outflows from a region. Renewable energy sources therefore play an increasingly important role in municipal energy policy, and local governments often promote them.
Status
The key policy document outlining the Energiewende was published by the German government in September 2010, some six months before the Fukushima nuclear accident. Legislative support was passed in September 2010. On 6 June 2011, following Fukushima, the government removed the use of nuclear power as a bridging technology from its policy. The program was later described as "Germany's vendetta against nuclear" and attributed to the growing influence of ideologically anti-nuclear green movements in mainstream politics. In 2014, then-Federal Minister for Economic Affairs and Energy Sigmar Gabriel lobbied Swedish company Vattenfall to continue investments in brown coal mines in Germany, explaining that "we cannot simultaneously quit nuclear energy and coal-based power generation". A similar statement by Gabriel was recalled by James Hansen in his 2009 book "Storms of my grandchildren": Gabriel argued that "coal use was essential because Germany was going to phase out nuclear power. Period. It was a political decision, and it was non-negotiable".
In 2011, the Ethical Committee on Secure Energy Supply was tasked with assessing the feasibility of the nuclear phase-out and transition to renewable energy, and it concluded:
The Ethics Committee is firmly convinced that the phase-out of nuclear energy can be completed within a decade using the energy transition measures presented here.
In 2019, Germany's Federal Court of Auditors determined the program had cost €160 billion over the previous five years and criticized the expenses for being "in extreme disproportion to the results". Despite widespread initial support, the program was perceived as "expensive, chaotic, and unfair", and as a "massive failure", as of 2019.
Russian fossil gas was perceived as a "safe, cheap, and temporary" fuel to replace nuclear power in the initial phase of the Energiewende, as part of the German policy of integrating Russia with the European Union through mutually beneficial trade relations. German dependency on Russian gas imports was presented as "mutual dependency".
Initial phase 2013–2016
After the 2013 federal elections, the new CDU/CSU and SPD coalition government continued the Energiewende, with only minor modifications of its goals in the coalition agreement. An intermediate target was introduced of a 55–60% share of renewable energy in gross electricity consumption in 2035. These targets were described as "ambitious". The Berlin-based policy institute Agora Energiewende noted that "while the German approach is not unique worldwide, the speed and scope of the Energiewende are exceptional". A particular characteristic of the Energiewende compared to other planned energy transitions was the expectation that the transition would be driven by citizens rather than large energy utilities. Germany's switch to renewables was described as a "democratization of the energy supply". The Energiewende also sought greater transparency in relation to national energy policy formation.
As of 2013, Germany was spending €1.5 billion per year on energy research to solve the technical and social issues raised by the transition, with funding provided by the individual federal states, universities, and the federal government, which contributed €400 million per year. The government's contribution was increased to €800 million in 2017.
Important aspects included (as of November 2016):
In addition, there was an associated research and development drive. A chart showing German energy legislation in 2016 is available.
These targets went well beyond European Union legislation and the national policies of other European states. The policy objectives were embraced by the German federal government and resulted in a huge expansion of renewables, particularly wind power. Germany's share of renewables increased from around 5% in 1999 to 22.9% in 2012, surpassing the OECD average renewables share of 18%.
Producers have been guaranteed a fixed feed-in tariff for 20 years, providing a stable income. Energy co-operatives have been created, and efforts were made to decentralize control and profits. However, in some cases poorly designed investments have caused bankruptcies and low returns, and some projects have fallen far short of their promised returns.
Several nuclear power plants were closed, and the nine remaining plants were scheduled to close earlier than originally planned, in 2022.
One factor that has inhibited efficient employment of new renewable energy has been the lack of accompanying investment in the power infrastructure needed to bring the power to market. It is believed that 8,300 km of power lines must be built or upgraded. Legislation passed in 2010 sought the construction or upgrading of 7,700 km of new grid lines, but only 950 km had been built by 2019, and in 2017 only 30 km were built. The different German states have varying attitudes to the construction of new power lines.
Industry has had its rates frozen, and so the increased costs of the Energiewende have been passed on to consumers, who have faced rising electricity bills. Germans in 2013 had some of the highest electricity prices (including taxes) in Europe. In comparison, Germany's neighbours (Poland, Sweden, Denmark and nuclear-reliant France) have some of the lowest costs (excluding taxes) in the EU.
On 1 August 2014, a revised Renewable Energy Sources Act entered into force. Specific deployment corridors now stipulate the extent to which renewable energy is to be expanded in the future, and the funding rates (feed-in tariffs) are no longer fixed by the government but are determined by auction.
Market redesign was perceived as a key part of the Energiewende, and the German electricity market needed to be reworked to suit.
Among other things, wind and solar power cannot, in principle, be refinanced under the current marginal-cost-based market. Carbon pricing is also central to the Energiewende, and the European Union Emissions Trading Scheme (EU ETS) needs to be reformed to create a genuine scarcity of certificates.
The German federal government is calling for such reform.
Most of the computer scenarios used to analyse the Energiewende rely on a substantial carbon price to drive the transition to low-carbon technologies.
Coal-fired generation needs to be retired as part of the Energiewende. Some argue for an explicit negotiated phase-out of coal plants, along the lines of the well-publicized nuclear phase-out, but as the German minister of economy noted, "we cannot shut down both our nuclear and coal-fired power plants". Coal comprised 42% of electricity generation in 2015. If Germany is to limit its contribution to a global temperature increase to 1.5 °C above pre-industrial levels, as declared in the 2015 Paris Agreement, a complete phase-out of fossil fuels together with a shift to 100% renewable energy is required by about 2040.
The Energiewende is made up of various technical building blocks and assumptions. Electricity storage, while too expensive at the beginning of the program, was hoped to become a useful technology in the future. As of 2019, however, a number of potential storage projects (power-to-gas, hydrogen storage and others) were still at the prototype stage, with losses of up to 40% of the stored energy in the existing small-scale installations.
Energy efficiency plays a key but under-recognised role, and improved energy efficiency is one of Germany's official targets. Greater integration with adjoining national electricity networks can offer mutual benefits; indeed, systems with high shares of renewables can use geographical diversity to offset intermittency.
Germany invested €1.5 billion in energy research in 2013.
Of that, the German federal government spent €820 million supporting projects ranging from basic research to applications. The federal government also foresees an export role for German expertise in the area.
The social and political dimensions of the Energiewende have been the subject of study. Strunz argues that the underlying technological, political and economic structures will need to change radically, a process he calls regime shift.
Schmid, Knopf, and Pechan analyse the actors and institutions that will be decisive in the Energiewende and how latency in the national electricity infrastructure may restrict progress.
On 3 December 2014, the German federal government released its National Action Plan on Energy Efficiency (NAPE) in order to improve the uptake of energy efficiency.
The areas covered are the energy efficiency of buildings, energy conservation for companies, consumer energy efficiency, and transport energy efficiency. German industry is expected to make a sizeable contribution.
An official federal government report on progress under the Energiewende, updated for 2014, notes that:
energy consumption fell by 4.7% in 2014 (from 2013) and, at 13,132 petajoules, reached its lowest level since 1990
renewable generation is the number-one source of electricity
energy efficiency increased by an average annual 1.6% between 2008 and 2014
final energy consumption in the transport sector was 1.7% higher in 2014 than in 2005
for the first time in more than ten years, electricity prices for household customers fell at the beginning of 2015
A commentary on the progress report expands on many of the issues raised.
Slowdown from 2016
Slow progress on transmission network reinforcement has led to a deferment of new windfarms in northern Germany. The German cabinet earlier approved costly underground cabling in October 2015 in a bid to dispel local resistance against above-ground pylons and to speed up the expansion process.
Analysis by Agora Energiewende in late 2016 suggested that Germany would probably miss several of its key Energiewende targets, despite recent reforms to the Renewable Energy Sources Act and the wholesale electricity market. The goal to cut emissions by 40% by 2020 "will most likely be missed ... if no further measures are taken" and the 55–60% share of renewable energy in gross electricity consumption by 2035 is "unachievable" with the current plans for renewables expansion. In November 2016, Agora Energiewende reported on the impact of the new EEG (2017) and several other related new laws. It concluded that this new legislation would bring "fundamental changes" for large sections of the energy industry, but have limited effect on the economy and on consumers.
The 2016 Climate Action Plan for Germany, adopted on 14 November 2016, introduced sector targets for greenhouse gas (GHG) emissions. The goal for the energy sector is shown in the table. The plan states that the energy supply must be "almost completely decarbonised" by 2050, with renewables as its main source. For the electricity sector, "in the long-term, electricity generation must be based almost entirely on renewable energies" and "the share of wind and solar power in total electricity production will rise significantly". Notwithstanding, during the transition, "less carbon-intensive natural gas power plants and the existing most modern coal power plants play an important role as interim technologies".
The fifth monitoring report on the Energiewende for 2015 was published in December 2016. The expert commission which wrote the report warns that Germany will probably miss its 2020 climate targets and believes that this could threaten the credibility of the entire endeavour. The commission puts forward a number of measures to address the slowdown, including a flat national CO2 price imposed across all sectors, a greater focus on transport, and full market exposure for renewable generation. Regarding the carbon price, the commission thinks that a reformed EU ETS would be better, but that achieving agreement across Europe is unlikely.
After 2017
Since 2017, it has become clear that the Energiewende is not progressing at the anticipated speed, with the country's climate policy regarded as "lackluster" and the energy transition as "stalling". High electricity prices, growing resistance against the use of wind turbines over their environmental and potential health impacts, and regulatory hurdles have been identified as causes. As of 2017 Germany imported more than half of its energy.
A 2018 European Commission case study report on the Energiewende noted a 27% decrease in CO2 emissions against 1990 levels, with a slight increase over the few preceding years, and concluded that achieving the intended 40% reduction target by 2020 was unfeasible, primarily due to the "simultaneous nuclear phase-out and increased energy consumption". A 50% increase in electricity prices was also observed (compared to base 2007 prices). Germany's energy sector remains the largest single source of CO2 emissions, contributing over 40%.
In 2018, the slowdown in deployment of new renewable energy was partially attributed to high demand for land, which a WWF report highlighted as a potential "downside".
In March 2019, Chancellor Merkel formed a so-called climate cabinet to find a consensus on new emissions reduction measures to meet the 2030 targets. The result was the Climate Action Program 2030, which Berlin adopted on 9 October 2019. The Program contains plans for a carbon pricing system for the heating and transportation sectors, which are not covered by the EU ETS. It also includes tax and other incentives to encourage energy-efficient building renovations, higher EV subsidies, and more public transport investments. The IEA report concludes that "[t]he package represents a clear step in the right direction towards Germany meeting its 2030 targets." The German Coal Commission, composed of 28 industrial, environmental, and regional organizations, voted on the coal phase-out date. Ultimately, 27 members voted in favor of a 2038 coal phase-out date, with only one regional organization from Lusatia voting against, and with Greenpeace voting in favor and later releasing a non-binding "dissenting opinion".
As a result of phasing out nuclear power and, in the long term, coal, Germany declared increased reliance on fossil gas.
We will have phased out nuclear energy by 2022. We have a very difficult problem, namely that almost the only sources of energy that will be able to provide baseload power are coal and lignite. Naturally, we cannot do without baseload energy. Natural gas will therefore play a greater role for another few decades. I believe we would be well advised to admit that if we phase out coal and nuclear energy then we have to be honest and tell people that we'll need more natural gas.
A similar statement was voiced by SPD MP Udo Bullmann, who explained that Germany has to stick with fossil fuels as it is trying to replace both coal and nuclear "at the same time", while countries that rely on nuclear power have an "easier task replacing fossil fuels". In 2020 Agora Energiewende also declared that a number of new fossil gas plants would be required to "guarantee supply security as Germany relies more and more on intermittent renewable electricity". In January 2019 Germany's Economy Minister Peter Altmaier said he did not want to import "cheap nuclear power" from other countries to compensate for the planned phase-out of coal. In 2021 Green MEP Sven Giegold admitted that Germany may require new fossil gas power plants in order to "stabilise the more fluctuating power supply of renewables".
The 2020 climate goals were successful in the following areas:
closure of nuclear plants
increasing renewable energy share
reducing greenhouse gas emissions
The following climate goals, however, were not met:
increasing renewable energy share in the transport sector
reducing primary energy consumption
final energy productivity
In 2020 a number of previously shut-down fossil gas plants (Irsching units 4 and 5) were restarted due to "heavy fluctuations of the level of power generated from the wind and sun", and a new fossil gas power plant was announced by RWE near the former Biblis nuclear power plant, which was shut down in 2017. The project is described as part of a "decarbonization plan" in which renewable energy capacity is accompanied by fossil gas plants to cover for intermittency. In 2020 a new coal power plant unit, Datteln 4, was also connected to the grid. A new fossil gas power plant will also be opened from 2023 in Leipheim, Bavaria, to compensate for the loss of power caused by the "nuclear exit" in this region. In 2021, the planned decommissioning of the Heyden 4 coal power plant was cancelled and the plant remains online to compensate for the shutdown of the Grohnde nuclear power station. In 2022, another coal power plant was restarted in Schongau for the same reasons.
In June 2021 professor André Thess of Stuttgart university published an open letter accusing Klaus Töpfer and Matthias Kleiner, the authors of the 2011 Ethical Committee for Secure Energy Supply report that served as the scientific background for the "nuclear exit" decision, of disregarding the basic rules of scientific independence. The analysis had promised that the phase-out of nuclear energy and full transition to renewables "can be completed within a decade". Thess highlighted that the authors lacked the expertise necessary to properly understand and "balance between the risk of more rapid climate change without nuclear energy and the risk of slower climate change with nuclear energy".
High average amounts of wind in 2019 and 2020 were presented in Germany as a success of renewables, but when the amount of wind was low in the first half of 2021, the use of coal rose by 21% compared to the previous years. In the first half of 2021 coal, gas and nuclear power delivered 56% of overall electricity in Germany, with proportionally higher CO2 intensity due to high inputs from coal and fossil gas. According to another analysis, by Oekomoderne, Germany produced nearly 260 TWh of electricity from coal in the first half of 2021, making it the single largest source of energy in that period, as it used "one billion tons" of coal. This situation once again raised questions about the future of a weather-dependent electricity system that is also highly dependent on fossil sources for stability, and about its contradiction with the initial objectives of decarbonization.
A projections report published in 2021 predicted that Germany would miss its 2030 target by 16 percentage points (a 49% reduction versus the 65% planned) and the 2040 target by 21 percentage points (67% versus 88% planned). Reduction of emissions in other sectors of the economy is also expected to miss the original targets.
In October 2021 over 20 climate scientists and activists signed an open letter to the German government asking it to reconsider the nuclear exit, as it would lead to an extra 60 million tonnes of CO2 emissions each year and hinder decarbonization efforts even further.
The new coalition formed after the 2021 elections proposed an earlier phase-out of coal and of internal combustion cars by 2035, 65% of energy generated from renewables by 2030 and 80% by 2040. In addition, 2% of the land surface is to be set aside for onshore wind power, and offshore wind capacity is to be increased to 75 GW.
The role of fossil gas was reinforced as an "indispensable" transition fuel, with low-carbon nuclear power imported from France to ensure stability of supplies. By the end of 2021, the single largest source of electricity in Germany was coal (9.5% hard and 20.2% brown), an increase of 20% compared to 2020 due to a significant drop in wind (−14.5%) and solar (−5%) power output in that year. Solar power produced only 9.9% of electricity, while nuclear power produced 13% even as it was going through the process of decommissioning.

In 2022 Agora Energiewende warned that Germany had missed its 2020 emission targets and is likely to miss the 2030 targets, and that an increase in total emissions after 2022 is likely. The previously celebrated record low emissions of 2020 were described as a one-off effect of favorable weather and lower demand due to the COVID-19 pandemic. The nuclear phase-out, skyrocketing gas prices, and low wind and solar output resulting in increased reliance on coal were also cited as contributing to the increase in emissions.

In January 2022 the new coalition government reiterated its opposition to the inclusion of nuclear power in the EU sustainable taxonomy, but requested that fossil gas instead be included as a "transitional" fuel and that carbon intensity thresholds for gas be relaxed. As the subsidies for gas were ultimately upheld, a number of new fossil gas plants plan to benefit from them, while expecting increased profits thanks to "rising wholesale electricity prices" resulting from "the last nuclear power plants to be removed from the grid" at the same time.
Post-2022
Following the 2022 Russian invasion of Ukraine, Germany announced it would re-open 10 GW of coal power capacity, allegedly to "conserve natural gas" following the shortage in Europe. This led to criticism of the Energiewende's strategy and of its impact on other countries in Europe. Michael Kretschmer (CDU) declared the Energiewende a failure, highlighting that renewable generation is insufficient and that baseload capabilities have reached their limits. He called for the nuclear power phase-out to be cancelled and the remaining reactors restarted until a new feasible strategy is created. From February 2022 there was a heated debate about pausing the nuclear phase-out and restarting still-operational reactors in order to better cope with the energy crisis caused by the Russian invasion of Ukraine. In August 2022 German counter-intelligence started an investigation into two high-ranking officials at the German ministry of energy suspected of representing the interests of Russia.

In October 2022 the German ministry of energy approved the extension of RWE's brown coal open-pit mine at Lützerath, claiming it is "necessary for energy security". In the same month the government also declared that the still-operational nuclear power plants would not be shut down at the end of 2022, but would instead operate until 15 April 2023 to help cope with electricity demand through the winter. In 2023 the government declared that it plans to remove a key clause from the law that binds all ministries to reduce carbon emissions within their areas of responsibility; the only binding target will be the overall 2030 emissions reduction target. The largest source of CO2 emissions in Germany is electricity production, and in that sector emissions have been growing since 2020.
Criticism
The Energiewende has been criticized for its high costs, for the early nuclear phase-out which increased carbon emissions, for the continued or even increased use of fossil fuels, for risks to power supply stability, and for the environmental damage of biomass. The German association of local utilities VKU said the strategy creates significant risks to the stability of the power supply in case of "lengthy periods" of weather unsuitable for wind and solar generation, since energy storage in Germany is "largely non-existent". After the introduction of the original Renewable Energy Act in 2000 there was a focus on long-term costs; in later years this shifted to a focus on short-term costs and the "financial burden" of the Energiewende, while ignoring the environmental externalities of fossil fuels.
Electricity prices for household customers in Germany have generally increased over the last decade.
The renewable energy levy that finances green power investment is added to Germans' electricity unit price. The surcharge (22.1% of the unit price in 2016) pays the state-guaranteed price for renewable energy to producers and amounted to 6.35 cents per kWh in 2016. A comprehensive study, published in Energy Policy in 2013, reported that Germany's nuclear power phase-out, to be complete by 2022, is contradictory to the goal of the climate portion of the program.

In June 2019, an open letter to "the leadership and people of Germany", written by almost 100 Polish environmentalists and scientists, urged Germany to "reconsider the decision on the final decommissioning of fully functional nuclear power plants" for the benefit of the fight against global warming. Former German Economy and Energy Minister Sigmar Gabriel said in 2014: "For a country like Germany with a strong industrial base, exiting nuclear and coal-fired power generation at the same time would not be possible."

As nuclear and coal power plants are phased out, the government has begun to promote the use of fossil gas in order to bridge the gap between other fossil fuels and low-carbon energy sources. This move has been criticised by international observers, who argue that fossil gas is "essentially methane, which constitutes at least one-third of global warming and is leaking into the atmosphere all across the gas production and delivery chain". Methane is also a more potent greenhouse gas than carbon dioxide. It is also feared that the European Union, and particularly Germany, is making itself overly dependent on Russia for gas supplies via Nord Stream 2, thereby undermining its energy security. In light of the 2022 Russian invasion of Ukraine the Nord Stream 2 project was first postponed indefinitely and ultimately cancelled. The Scholz cabinet has spent considerable effort since February 2022 to find replacements for Russian fossil gas in both the near and the long term.
Germany's electricity transmission network is currently inadequately developed and therefore lacks the capacity to deliver offshore wind energy produced on the northern coast to industrial regions in the country's south. The transmission system operators plan to build 4,000 additional kilometres of transmission lines by 2030. The slow reduction of CO2 emissions in Germany, especially in the energy sector, has been contrasted with France's successful decarbonization of its energy sector under the Messmer plan (from 1973) and with the United Kingdom's carbon tax, which saw a drastic reduction of coal-powered energy from 88% in 1973 to below 1% in 2019.

A German federal audit office report published in March 2021 highlighted the very high costs of the Energiewende for household users, with taxes and fees accounting for 50% of bills and an energy price 43% higher than the EU average. It also noted a predicted shortfall of 4.5 GW between 2022 and 2025 as a result of the planned shutdown of nuclear power plants. A study found that if Germany had postponed the nuclear phase-out and phased out coal first, it could have saved 1,100 lives and €3 to €8 billion in social costs per year. The study concludes that policymakers would have to overestimate the risk or cost of a nuclear accident to conclude that the benefits of the phase-out exceed its social costs. An open letter by a number of climate scientists published in 2021 argued against the shutdown of the remaining nuclear reactors in Germany, which would lead to a 5% increase in CO2 emissions from the electricity sector.
Biomass
Biomass made up 7.0% of Germany's power generation mix in 2017. Biomass has the potential to be a carbon-neutral fuel because growing biomass absorbs carbon dioxide from the atmosphere and a portion of the carbon absorbed remains in the ground after harvest. However, using biomass as a fuel produces air pollution in the form of carbon monoxide, carbon dioxide, NOx (nitrogen oxides), VOCs (volatile organic compounds), particulates and other pollutants, although biomass produces less sulfur dioxide than coal.

Between 2004 and 2011, policies led to around 7,000 km2 of new maize fields for biomass energy, created in part by ploughing up at least 2,700 km2 of permanent grassland. This released large amounts of climate-active gases and caused losses of biodiversity and of groundwater recharge potential. There are also attempts to use biogas as a partially renewable fuel, with Green Planet Energy selling gas containing 10% biogas, 1% hydrogen and 90% imported fossil gas.
Citizen support and participation
As of 2016, citizen support for the Energiewende remained high, with surveys indicating that about 80–90% of the public are in favor.
One reason for the high acceptance was the substantial participation of German citizens in the Energiewende, as private households, land owners, or members of energy cooperatives (Genossenschaft).
A 2016 survey showed that roughly one in two Germans would consider investing in community renewable energy projects.
Manfred Fischedick, Director of the Wuppertal Institute for Climate, Environment and Energy has commented that "if people participate with their own money, for example in a wind or solar power plant in their area, they will also support [the Energiewende]." A 2010 study shows the benefits to municipalities of community ownership of renewable generation in their locality.
Estimates for 2012 suggested that almost half the renewable energy capacity in Germany was owned by citizens through energy cooperatives and private initiatives. More specifically, citizens accounted for nearly half of all installed biogas and solar capacity and half of the installed onshore wind capacity.

According to a 2014 survey conducted by TNS Emnid for the German Renewable Energies Agency among 1,015 respondents, 94 percent of Germans supported a stronger expansion of renewable energies. More than two-thirds of the interviewees would accept renewable power plants close to their homes.
The share of total final energy from renewables was 11% in 2014.: 137  However, changes in energy policy, starting with the Renewable Energy Sources Act in 2014, have jeopardized the efforts of citizens to participate. The share of citizen-owned renewable energy had dropped to 42.5% as of 2016.

The Renewable Energy Sources Act provides compensation to wind turbine operators for every kilowatt-hour of electricity not produced when wind power surpasses peak grid capacity, while grid operators must feed electricity from renewable sources into the grid even in times of low or no demand for it. This can lead to a negative price of electricity, and grid operators may pass the associated costs on to customers, estimated at an extra €4 billion in 2020. This has resulted in greater resistance to certain Energiewende policies, specifically wind power. By 2019 Germany also saw a significant increase in organized opposition to onshore wind farms, especially in Bavaria and Baden-Württemberg.
Computer studies
Much of the policy development for the Energiewende is underpinned by computer models, run mostly by universities and research institutes. The models are usually based on scenario analysis and are used to investigate different assumptions regarding the stability, sustainability, cost, efficiency, and public acceptability of various sets of technologies. Some models cover the entire energy sector, while others are confined to electricity generation and consumption. A 2016 book investigates the usefulness and limitations of energy scenarios and energy models within the context of the Energiewende.

A number of computer studies confirm the feasibility of the German electricity system being 100% renewable in 2050. Some investigate the prospect of the entire energy system (all energy carriers) being fully renewable too.
2009 WWF study
In 2009 WWF Germany published a quantitative study prepared by the Öko-Institut, Prognos, and Hans-Joachim Ziesing.
The study presumes a 95% reduction in greenhouse gases by the year 2050 and covers all sectors. The study shows that the transformation from a high-carbon to a low-carbon economy is possible and affordable. It notes that by committing to this transformation path, Germany could become a model for other countries.
2011 German Advisory Council on the Environment study
A 2011 report from the German Advisory Council on the Environment (SRU) concludes that Germany can attain 100% renewable electricity generation by 2050. The German Aerospace Center (DLR) REMix high-resolution energy model was used for the analysis. A range of scenarios was investigated, and the report concludes that a cost-competitive transition with good security of supply is possible.
The authors presume that the transmission network will continue to be reinforced and that cooperation with Norway and Sweden would allow their hydro generation to be used for storage. The transition does not require Germany's nuclear phase-out (Atomausstieg) to be extended nor the construction of coal-fired plants with carbon capture and storage (CCS). Conventional generation assets need not be stranded and an orderly transition should prevail. Stringent energy efficiency and energy saving programs can bring down the future costs of electricity.
2015 Deep Decarbonization Pathways Project study
The Deep Decarbonization Pathways Project (DDPP) aims to demonstrate how countries can transform their energy systems by 2050 in order to achieve a low-carbon economy.
The 2015 German country report, produced in association with the Wuppertal Institute, examines the official target of reducing domestic GHG emissions by 80% to 95% by 2050 (compared with 1990). Decarbonization pathways for Germany are illustrated by means of three ambitious scenarios with energy-related emission reductions between 1990 and 2050 varying between 80% and more than 90%. Three strategies strongly contribute to GHG emission reduction:
energy efficiency improvements (in all sectors but especially in buildings)
increased use of domestic renewables (with a focus on electricity generation)
electrification and (in two of the scenarios also) use of renewable electricity-based synthetic fuels (especially in the transport and industry sectors)
In addition, some scenarios also rely on more controversial measures:
final energy demand reductions through behavioral changes (modal shift in transport, changes in eating and heating habits)
net imports of electricity from renewable sources or of bioenergy
use of carbon capture and storage (CCS) technology to reduce industry-sector GHG emissions (including from cement production)
Potential co-benefits for Germany include increased energy security, higher competitiveness of and global business opportunities for companies, job creation, stronger GDP growth, smaller energy bills for households, and less air pollution.
2015 Fraunhofer ISE study
Using the model REMod-D (Renewable Energy Model – Germany), this 2015 Fraunhofer ISE study investigates several system transformation scenarios and their related costs. The guiding question of the study is: how can a cost-optimised transformation of the German energy system — with consideration of all energy carriers and consumer sectors — be achieved while meeting the declared climate protection targets and ensuring a secure energy supply at all times? Carbon capture and storage (CCS) is explicitly excluded from the scenarios. A future energy scenario emitting 85% less CO2 than 1990 levels is compared with a reference scenario, which assumes that the German energy system operates in 2050 in the same way as it does today. Under this comparison, primary energy supply drops by 42%. The total cumulative costs depend on the future prices for carbon and oil. If the penalty for CO2 emissions increases to €100/tonne by 2030 and thereafter remains constant, and fossil fuel prices increase annually by 2%, then the total cumulative costs of today's energy system are 8% higher than the costs required for the minus-85% scenario up to 2050. The report also notes:
From the macroeconomic perspective, the transformation of Germany's energy system demands a significant shift in cash flow, moving the cash spent on energy imports today to spend it instead on new investments in systems, their operation and maintenance. In this respect a transformed energy system requires a large expenditure for local added value, a factor which also does not appear in the shown cost analysis.: 8
2015 DIW study
A 2015 study uses DIETER (Dispatch and Investment Evaluation Tool with Endogenous Renewables), developed by the German Institute for Economic Research (DIW) in Berlin. The study examines the power storage requirements for renewables uptake ranging from 60% to 100%. Under the baseline scenario of 80% (the German government target for 2050), grid storage requirements remain moderate, and other options on both the supply side and the demand side offer flexibility at low cost. Nonetheless, storage plays an important role in the provision of reserves. The role of storage becomes more pronounced under higher shares of renewables, but depends strongly on the costs and availability of other flexibility options, particularly on biomass availability. The model is fully described in the study report.
2016 acatech study
A 2016 acatech-led study focused on so-called flexibility technologies used to balance the fluctuations inherent in power generation from wind and photovoltaics. Set in 2050, several scenarios use gas power plants as the stabilising backbone of the energy system, ensuring supply security during several weeks of low wind and solar radiation. Other scenarios investigate a 100% renewable system and show it to be possible but more costly. Flexible consumption and storage control (demand-side management) in households and the industrial sector is the most cost-efficient means of balancing short-term power fluctuations. Long-term storage systems, based on power-to-X, are only viable if carbon emissions are to be reduced by more than 80%. On the question of costs, the study notes:
Assuming that the price of emissions allowances in 2050 will significantly surpass its current level, a power generation system boasting a high percentage of wind and photovoltaics will, as a rule, come cheaper than a system dominated by fossil fuel power plants.: 7
2016 Stanford University study
The Atmosphere/Energy Program at Stanford University has developed roadmaps for 139 countries to achieve energy systems powered only by wind, water, and sunlight (WWS) by 2050. In the case of Germany, the total end-use load drops from 375.8 GW under business-as-usual to 260.9 GW under a fully renewable transition. Load shares in 2050 would be: onshore wind 35%, offshore wind 17%, wave 0.08%, geothermal 0.01%, hydroelectric 0.87%, tidal 0%, residential PV 6.75%, commercial PV 6.48%, utility PV 33.8%, and concentrating solar power 0%. The study also assesses avoided air pollution, eliminated global climate change costs, and net job creation. These co-benefits are substantial.
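As a quick consistency check, the load shares listed above should account for essentially the whole end-use load. The short script below (illustrative only) sums the quoted percentages and converts each share into gigawatts of the 260.9 GW total; no figures beyond those quoted above are used.

```python
# Consistency check: the 2050 WWS load shares quoted above should sum to ~100%.
shares = {
    "onshore wind": 35.0,
    "offshore wind": 17.0,
    "wave": 0.08,
    "geothermal": 0.01,
    "hydroelectric": 0.87,
    "tidal": 0.0,
    "residential PV": 6.75,
    "commercial PV": 6.48,
    "utility PV": 33.8,
    "concentrating solar power": 0.0,
}

total_share = sum(shares.values())          # ≈ 99.99%
end_use_load_gw = 260.9                     # end-use load quoted in the study summary

print(f"Sum of shares: {total_share:.2f}%")
for name, pct in shares.items():
    print(f"{name:>26}: {pct / 100 * end_use_load_gw:6.1f} GW")
```

The shares sum to about 99.99%, so the rounding in the published figures accounts for the whole load.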
See also
References
Further reading
Energy Concept for an Environmentally Sound, Reliable and Affordable Energy Supply, 28 September 2010 (English translation of the German policy document)
Morris, Craig; Jungjohann, Arne (2016). Energy democracy: Germany's Energiewende to renewables. Cham, Switzerland: Springer International Publishing. doi:10.1007/978-3-319-31891-2. ISBN 978-3-319-31890-5.
Sturm, Christine (2020). Inside the Energiewende: Twists and Turns on Germany's Soft Energy Path. Cham, Switzerland: Springer. ISBN 978-3030427290.
External links
Clean Energy Wire (CLEW) – a news service covering the energy transition in Germany
Energy Topics – hosted by the Federal Ministry for Economic Affairs and Energy (BMWi)
German Energy Blog – a legal blog covering the Energiewende
German Energy Transition – a comprehensive website maintained by the Heinrich Böll Foundation
Presentation (30:47) by Amory Lovins to the Berlin Energy Transition Dialogue 2016, 17–18 March 2016
Strom-Report.de – a statistics website covering renewable energy topics as well as the energy transition in Germany
ruminant
Ruminants are herbivorous grazing or browsing artiodactyls belonging to the suborder Ruminantia that are able to acquire nutrients from plant-based food by fermenting it in a specialized stomach prior to digestion, principally through microbial actions. The process, which takes place in the front part of the digestive system and therefore is called foregut fermentation, typically requires the fermented ingesta (known as cud) to be regurgitated and chewed again. The process of rechewing the cud to further break down plant matter and stimulate digestion is called rumination. The word "ruminant" comes from the Latin ruminare, which means "to chew over again".
The roughly 200 species of ruminants include both domestic and wild species. Ruminating mammals include cattle, all domesticated and wild bovines, goats, sheep, giraffes, deer, gazelles, and antelopes. It has also been suggested that notoungulates also relied on rumination, as opposed to other atlantogenates that rely on the more typical hindgut fermentation, though this is not entirely certain.

Taxonomically, the suborder Ruminantia is a lineage of herbivorous artiodactyls that includes the most advanced and widespread of the world's ungulates. The suborder Ruminantia includes six different families: Tragulidae, Giraffidae, Antilocapridae, Cervidae, Moschidae, and Bovidae.
Taxonomy and evolution
Hofmann and Stewart divided ruminants into three major categories based on their feed type and feeding habits: concentrate selectors, intermediate types, and grass/roughage eaters, with the assumption that feeding habits in ruminants cause morphological differences in their digestive systems, including salivary glands, rumen size, and rumen papillae. However, Woodall found that there is little correlation between the fiber content of a ruminant's diet and morphological characteristics, meaning that the categorical divisions of ruminants by Hofmann and Stewart warrant further research.

Some mammals are pseudoruminants, which have a three-compartment stomach instead of the four found in ruminants. The Hippopotamidae (comprising hippopotamuses) are well-known examples. Pseudoruminants, like traditional ruminants, are foregut fermenters, and most ruminate or chew cud. However, their anatomy and method of digestion differ significantly from those of a four-chambered ruminant.

Monogastric herbivores, such as rhinoceroses, horses, guinea pigs, and rabbits, are not ruminants, as they have a simple single-chambered stomach. These hindgut fermenters digest cellulose in an enlarged cecum. In smaller hindgut fermenters of the order Lagomorpha (rabbits, hares, and pikas) and in caviomorph rodents (guinea pigs, capybaras, etc.), cecotropes formed in the cecum are passed through the large intestine and subsequently reingested to allow another opportunity to absorb nutrients.
Phylogeny
Ruminantia is a crown group of ruminants within the order Artiodactyla, cladistically defined by Spaulding et al. as "the least inclusive clade that includes Bos taurus (cow) and Tragulus napu (mouse deer)". Ruminantiamorpha is a higher-level clade of artiodactyls, cladistically defined by Spaulding et al. as "Ruminantia plus all extinct taxa more closely related to extant members of Ruminantia than to any other living species." This is a stem-based definition for Ruminantiamorpha, and is more inclusive than the crown group Ruminantia. As a crown group, Ruminantia only includes the last common ancestor of all extant (living) ruminants and their descendants (living or extinct), whereas Ruminantiamorpha, as a stem group, also includes more basal extinct ruminant ancestors that are more closely related to living ruminants than to other members of Artiodactyla. When considering only living taxa (neontology), this makes Ruminantiamorpha and Ruminantia synonymous, and only Ruminantia is used. Thus, Ruminantiamorpha is only used in the context of paleontology. Accordingly, Spaulding grouped some genera of the extinct family Anthracotheriidae within Ruminantiamorpha (but not in Ruminantia), but placed others within Ruminantiamorpha's sister clade, Cetancodontamorpha.

Ruminantia's placement within Artiodactyla can be represented in the following cladogram:
Within Ruminantia, the Tragulidae (mouse deer) are considered the most basal family, with the remaining ruminants classified as belonging to the infraorder Pecora. Until the beginning of the 21st century it was understood that the family Moschidae (musk deer) was sister to Cervidae. However, a 2003 phylogenetic study by Alexandre Hassanin (of the National Museum of Natural History, France) and colleagues, based on mitochondrial and nuclear analyses, revealed that Moschidae and Bovidae form a clade sister to Cervidae. According to the study, Cervidae diverged from the Bovidae-Moschidae clade 27 to 28 million years ago. The following cladogram is based on a large-scale ruminant genome sequence study from 2019:
Classification
ORDER ARTIODACTYLA
Suborder Tylopoda: camels and llamas, 7 living species in 3 genera
Suborder Suina: pigs and peccaries
Suborder Cetruminantia: ruminants, whales and hippos
unranked Ruminantia
Infraorder Tragulina (paraphyletic)
Family †Leptomerycidae
Family †Hypertragulidae
Family †Praetragulidae
Family †Gelocidae
Family †Bachitheriidae
Family Tragulidae: chevrotains, 6 living species in 4 genera
Family †Archaeomerycidae
Family †Lophiomerycidae
Infraorder Pecora
Family Cervidae: deer and moose, 49 living species in 16 genera
Family †Palaeomerycidae
Family †Dromomerycidae
Family †Hoplitomerycidae
Family †Climacoceratidae
Family Giraffidae: giraffe and okapi, 2 living species in 2 genera
Family Antilocapridae: pronghorn, one living species in one genus
Family Moschidae: musk deer, 4 living species in one genus
Family Bovidae: cattle, goats, sheep, and antelope, 143 living species in 53 genera
Digestive system of ruminants
The primary difference between ruminants and nonruminants is that ruminants' stomachs have four compartments:
rumen—primary site of microbial fermentation
reticulum
omasum—receives chewed cud, and absorbs volatile fatty acids
abomasum—true stomach
The first two chambers are the rumen and the reticulum. These two compartments make up the fermentation vat and are the major site of microbial activity. Fermentation is crucial to digestion because it breaks down complex carbohydrates, such as cellulose, and enables the animal to use them. Microbes function best in a warm, moist, anaerobic environment with a temperature range of 37.7 to 42.2 °C (100 to 108 °F) and a pH between 6.0 and 6.4. Without the help of microbes, ruminants would not be able to use nutrients from forages. The food is mixed with saliva and separates into layers of solid and liquid material. Solids clump together to form the cud or bolus.
The cud is then regurgitated and chewed to completely mix it with saliva and to break down the particle size. Smaller particle size allows for increased nutrient absorption. Fiber, especially cellulose and hemicellulose, is primarily broken down in these chambers by microbes (mostly bacteria, as well as some protozoa, fungi, and yeast) into the three volatile fatty acids (VFAs): acetic acid, propionic acid, and butyric acid. Protein and nonstructural carbohydrate (pectin, sugars, and starches) are also fermented. Saliva is very important because it provides liquid for the microbial population, recirculates nitrogen and minerals, and acts as a buffer for the rumen pH. The type of feed the animal consumes affects the amount of saliva that is produced.
Though the rumen and reticulum have different names, they have very similar tissue layers and textures, making it difficult to visually separate them. They also perform similar tasks. Together, these chambers are called the reticulorumen. The degraded digesta, which is now in the lower liquid part of the reticulorumen, then passes into the next chamber, the omasum. This chamber controls what is able to pass into the abomasum. It keeps the particle size as small as possible in order to pass into the abomasum. The omasum also absorbs volatile fatty acids and ammonia.

After this, the digesta is moved to the true stomach, the abomasum. This is the gastric compartment of the ruminant stomach. The abomasum is the direct equivalent of the monogastric stomach, and digesta is digested here in much the same way. This compartment releases acids and enzymes that further digest the material passing through. This is also where the ruminant digests the microbes produced in the rumen. Digesta is finally moved into the small intestine, where the digestion and absorption of nutrients occurs. The small intestine is the main site of nutrient absorption. The surface area of the digesta is greatly increased here because of the villi that are in the small intestine. This increased surface area allows for greater nutrient absorption. Microbes produced in the reticulorumen are also digested in the small intestine. After the small intestine is the large intestine. The major roles here are breaking down mainly fiber by fermentation with microbes, absorption of water (ions and minerals) and other fermented products, and also expelling waste. Fermentation continues in the large intestine in the same way as in the reticulorumen.
Only small amounts of glucose are absorbed from dietary carbohydrates. Most dietary carbohydrates are fermented into VFAs in the rumen. The glucose needed as energy for the brain and for lactose and milk fat in milk production, as well as other uses, comes from nonsugar sources, such as the VFA propionate, glycerol, lactate, and protein. The VFA propionate is used for around 70% of the glucose and glycogen produced and protein for another 20% (50% under starvation conditions).
Abundance, distribution, and domestication
Wild ruminants number at least 75 million and are native to all continents except Antarctica and Australia. Nearly 90% of all species are found in Eurasia and Africa. Species inhabit a wide range of climates (from tropic to arctic) and habitats (from open plains to forests).

The population of domestic ruminants is greater than 3.5 billion, with cattle, sheep, and goats accounting for about 95% of the total population. Goats were domesticated in the Near East circa 8000 BC. Most other species were domesticated by 2500 BC, either in the Near East or southern Asia.
Ruminant physiology
Ruminating animals have various physiological features that enable them to survive in nature. One feature of ruminants is their continuously growing teeth. During grazing, the silica content in forage causes abrasion of the teeth. This is compensated for by continuous tooth growth throughout the ruminant's life, as opposed to humans or other nonruminants, whose teeth stop growing after a particular age. Most ruminants do not have upper incisors; instead, they have a thick dental pad to thoroughly chew plant-based food. Another feature of ruminants is the large ruminal storage capacity that gives them the ability to consume feed rapidly and complete the chewing process later. This is known as rumination, which consists of the regurgitation of feed, rechewing, resalivation, and reswallowing. Rumination reduces particle size, which enhances microbial function and allows the digesta to pass more easily through the digestive tract.
Rumen microbiology
Vertebrates lack the ability to hydrolyse the β(1→4) glycosidic bonds of plant cellulose because they lack the enzyme cellulase. Thus, ruminants depend completely on the microbial flora present in the rumen or hindgut to digest cellulose. Digestion of food in the rumen is primarily carried out by the rumen microflora, which contains dense populations of several species of bacteria, protozoa, sometimes yeasts and other fungi – 1 ml of rumen fluid is estimated to contain 10–50 billion bacteria and 1 million protozoa, as well as several yeasts and fungi.

Since the environment inside a rumen is anaerobic, most of these microbial species are obligate or facultative anaerobes that can decompose complex plant material, such as cellulose, hemicellulose, starch, and proteins. The hydrolysis of cellulose results in sugars, which are further fermented to acetate, lactate, propionate, butyrate, carbon dioxide, and methane.
As bacteria conduct fermentation in the rumen, they consume about 10% of the carbon, 60% of the phosphorus, and 80% of the nitrogen that the ruminant ingests. To reclaim these nutrients, the ruminant then digests the bacteria in the abomasum. The enzyme lysozyme has adapted to facilitate digestion of bacteria in the ruminant abomasum. Pancreatic ribonuclease also degrades bacterial RNA in the ruminant small intestine as a source of nitrogen.

During grazing, ruminants produce large amounts of saliva – estimates range from 100 to 150 litres of saliva per day for a cow. The role of saliva is to provide ample fluid for rumen fermentation and to act as a buffering agent. Rumen fermentation produces large amounts of organic acids, thus maintaining the appropriate pH of rumen fluids is a critical factor in rumen fermentation. After digesta passes through the rumen, the omasum absorbs excess fluid so that digestive enzymes and acid in the abomasum are not diluted.
Tannin toxicity in ruminant animals
Tannins are phenolic compounds that are commonly found in plants. Found in the leaf, bud, seed, root, and stem tissues, tannins are widely distributed in many different species of plants. Tannins are separated into two classes: hydrolysable tannins and condensed tannins. Depending on their concentration and nature, either class can have adverse or beneficial effects. Tannins can be beneficial, having been shown to increase milk production, wool growth, ovulation rate, and lambing percentage, as well as reducing bloat risk and reducing internal parasite burdens.

Tannins can be toxic to ruminants, in that they precipitate proteins, making them unavailable for digestion, and they inhibit the absorption of nutrients by reducing the populations of proteolytic rumen bacteria. Very high levels of tannin intake can produce toxicity that can even cause death. Animals that normally consume tannin-rich plants can develop defensive mechanisms against tannins, such as the strategic deployment of lipids and extracellular polysaccharides that have a high affinity to binding to tannins. Some ruminants (goats, deer, elk, moose) are able to consume food high in tannins (leaves, twigs, bark) due to the presence in their saliva of tannin-binding proteins.
Religious importance
The Law of Moses in the Bible allowed the eating of some mammals that had cloven hooves (i.e. members of the order Artiodactyla) and "that chew the cud", a stipulation preserved to this day in Jewish dietary laws.
Other uses
The verb 'to ruminate' has been extended metaphorically to mean to ponder thoughtfully or to meditate on some topic. Similarly, ideas may be 'chewed on' or 'digested'. 'Chew the (one's) cud' is to reflect or meditate. In psychology, "rumination" refers to a pattern of thinking, and is unrelated to digestive physiology.
Ruminants and climate change
Methane is produced in the rumen by a type of archaea called methanogens, as described above, and this methane is released to the atmosphere. The rumen is the major site of methane production in ruminants. Methane is a strong greenhouse gas with a global warming potential of 86 compared to CO2 over a 20-year period.

As a by-product of consuming cellulose, cattle belch out methane, thereby returning carbon sequestered by plants back to the atmosphere. After about 10 to 12 years, that methane is broken down and converted back to CO2. Once converted to CO2, plants can again perform photosynthesis and fix that carbon back into cellulose. From here, cattle can eat the plants and the cycle begins once again. In essence, the methane belched from cattle is not adding new carbon to the atmosphere; rather, it is part of the natural cycling of carbon through the biogenic carbon cycle.

In 2010, enteric fermentation accounted for 43% of the total greenhouse gas emissions from all agricultural activity in the world, 26% of the total greenhouse gas emissions from agricultural activity in the U.S., and 22% of the total U.S. methane emissions. The meat from domestically raised ruminants has a higher carbon-equivalent footprint than other meats or vegetarian sources of protein, based on a global meta-analysis of lifecycle assessment studies. Methane production by meat animals, principally ruminants, is estimated at 15–20% of global methane production, unless the animals were hunted in the wild. The current U.S. domestic beef and dairy cattle population is around 90 million head, approximately 50% higher than the peak wild population of American bison of 60 million head in the 1700s, which primarily roamed the part of North America that now makes up the United States.
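To illustrate what the 20-year global warming potential of 86 quoted earlier in this section means in practice, the short sketch below converts a hypothetical methane quantity into its CO2-equivalent. The one-tonne figure is an arbitrary example chosen for illustration, not a value from this article.

```python
# Convert a methane emission into CO2-equivalent using the 20-year GWP quoted above.
GWP20_METHANE = 86  # global warming potential of CH4 over 20 years, relative to CO2

def co2_equivalent(methane_tonnes: float, gwp: float = GWP20_METHANE) -> float:
    """Return the CO2-equivalent mass (tonnes) of a given methane emission."""
    return methane_tonnes * gwp

# Example: 1 tonne of belched methane counts as 86 tonnes of CO2e on a 20-year horizon.
print(co2_equivalent(1.0))  # 86.0
```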
See also
Monogastric
Pseudoruminant
References
External links
Digestive Physiology of Herbivores – Colorado State University (Last updated on 13 July 2006)
Britannica, The Editors of Encyclopaedia. "Ruminant". Encyclopædia Britannica, https://www.britannica.com/animal/ruminant. Accessed 22 February 2021.
"Ruminantia" . Encyclopædia Britannica (11th ed.). 1911. |
reference re greenhouse gas pollution pricing act
In Reference re Greenhouse Gas Pollution Pricing Act 2021 SCC 11, the Supreme Court of Canada ruled on 25 March 2021 that the federal carbon pricing law is constitutional.
Background
In response to Canada's 2016 ratification of the Paris Agreement which set greenhouse gas emission reduction targets, the Canadian federal government under Prime Minister Justin Trudeau passed the Greenhouse Gas Pollution Pricing Act (GHGPPA), which came into effect on 21 June 2018, establishing national standards for a carbon price.
Procedural history
The province of Saskatchewan, under Premier Scott Moe, referred a reference question to the Court of Appeal for Saskatchewan regarding the law's constitutionality. On 3 May 2019 the Court of Appeal ruled in favour of the federal government, concluding that the GHGPPA is "not unconstitutional either in whole or in part" and was a legitimate exercise of federal jurisdiction under the "peace, order, and good government" (POGG) branch of the constitution (Justices Ottenbreit and Caldwell dissenting). Saskatchewan appealed this decision to the Supreme Court of Canada on 31 May 2019.

The province of Ontario, under the premiership of Doug Ford, also referred a reference question to the Court of Appeal for Ontario, seeking a finding that the GHGPPA was unconstitutional. On 28 June 2019, the ONCA issued its advisory opinion, finding the law constitutionally valid (Justice Huscroft dissenting). Ontario was granted leave to appeal this decision to the Supreme Court of Canada.
The province of Alberta, led by then-Premier Jason Kenney, referred its own reference question to the Court of Appeal of Alberta on 20 June 2019. On 24 February 2020, that court issued an opinion which found the GHGPPA unconstitutional (Justice Feehan dissenting). The Attorney General of British Columbia appealed this decision to the SCC.
Breakdown of the decision
The majority, which found the Act to be constitutional, comprised Chief Justice Richard Wagner and Justices Rosalie Silberman Abella, Michael J. Moldaver, Andromache Karakatsanis, Sheilah L. Martin, and Nicholas Kasirer. Justices Russell Brown and Malcolm Rowe dissented, and Justice Suzanne Côté dissented in part.
Climate change is real
The Supreme Court said that "all of the parties agree that global climate change is real. It's caused by greenhouse gas emissions resulting from human activities and it poses a grave threat to the future of humanity."
See also
List of Supreme Court of Canada cases (Wagner Court)
Carbon pricing in Canada
Environmental law
Canadian federalism
Notes and references
Notes
References
world energy resources
World energy resources are the estimated maximum capacity for energy production given all available resources on Earth. They can be divided by type into fossil fuel, nuclear fuel and renewable resources.
Fossil fuel
Remaining reserves of fossil fuel are estimated as:
These are the proven energy reserves; real reserves may be four or more times larger. These numbers are very uncertain. Estimating the remaining fossil fuels on the planet depends on a detailed understanding of Earth's crust. With modern drilling technology, we can drill wells in up to 3 km of water to verify the exact composition of the geology; but half of the ocean is deeper than 3 km, leaving about a third of the planet beyond the reach of detailed analysis.
There is uncertainty in the total amount of reserves, but also in how much of these can be recovered gainfully, for technological, economic and political reasons, such as the accessibility of fossil deposits, the levels of sulfur and other pollutants in the oil and the coal, transportation costs, and societal instability in producing regions. In general the easiest to reach deposits are the first extracted.
Coal
Coal is the most abundant and most widely burned fossil fuel. This was the fuel that launched the industrial revolution, and its use has continued to grow; China, which already has many of the world's most polluted cities, was in 2007 building about two coal-fired power plants every week. Coal's large reserves would make it a popular candidate to meet the energy demand of the global community, were it not for global warming concerns and other pollutants.
Natural gas
Natural gas is a widely available fossil fuel, with an estimated 850,000 km³ of recoverable reserves and at least as much again recoverable using enhanced methods to release shale gas. Improvements in technology and wider exploration led to a major increase in recoverable natural gas reserves as shale fracking methods were developed. At present usage rates, natural gas could supply most of the world's energy needs for between 100 and 250 years, depending on the increase in consumption over time.
Oil
It is estimated that there may be 57 zettajoules (ZJ) of oil reserves on Earth, although estimates vary from a low of 8 ZJ, consisting of currently proven and recoverable reserves, to a maximum of 110 ZJ, consisting of available but not necessarily recoverable reserves and including optimistic estimates for unconventional sources such as oil sands and oil shale. Current consensus among the 18 recognized estimates of supply profiles is that the peak of extraction will occur in 2020 at a rate of 93 million barrels per day (mbd). Current oil consumption is at the rate of 0.18 ZJ per year (31.1 billion barrels), or 85 mbd.
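A crude reserves-to-consumption ratio puts these figures in perspective. The sketch below (illustrative only) divides the low, central, and high reserve estimates quoted above by the stated consumption of 0.18 ZJ per year; it ignores any growth or decline in demand, so it is an illustration rather than a forecast.

```python
# Static years-of-supply estimate: reserves divided by current annual consumption.
consumption_zj_per_year = 0.18   # ≈ 31.1 billion barrels, or 85 million barrels per day

reserve_estimates_zj = {"low": 8, "central": 57, "high": 110}

for label, reserves in reserve_estimates_zj.items():
    years = reserves / consumption_zj_per_year
    print(f"{label:>7}: {reserves:4d} ZJ -> ~{years:.0f} years at constant consumption")
# low: ~44 years, central: ~317 years, high: ~611 years
```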
There is growing concern that peak oil production may be reached in the near future, resulting in severe oil price increases.
A 2005 French Economics, Industry and Finance Ministry report suggested a worst-case scenario that could occur as early as 2013.
There are also theories that the peak of global oil production may occur in as little as 2–3 years. The ASPO predicted the peak year to be 2010. Some other theories present the view that it had already taken place in 2005. World crude oil production (including lease condensates), according to US EIA data, decreased from a peak of 73.720 mbd in 2005 to 73.437 mbd in 2006 and 72.981 mbd in 2007, before recovering to 73.697 mbd in 2008. According to peak oil theory, increasing production will lead to a more rapid collapse of production in the future, while decreasing production will lead to a slower decrease, as the bell-shaped curve will be spread out over more years.
With the stated goal of raising oil prices to $75/barrel, after they had fallen from a high of $147 to a low of $40, OPEC announced a production cut of 2.2 mbd beginning 1 January 2009.
Sustainability
Political considerations over the security of supplies, environmental concerns related to global warming and sustainability are expected to move the world's energy consumption away from fossil fuels. The concept of peak oil shows that about half of the available petroleum resources have been produced, and predicts a decrease of production.
A government moving away from fossil fuels would most likely create economic pressure through carbon emissions and green taxation. Some countries are taking action as a result of the Kyoto Protocol, and further steps in this direction are proposed. For example, the European Commission has proposed that the energy policy of the European Union should set a binding target of increasing the level of renewable energy in the EU's overall mix from less than 7% in 2007 to 20% by 2020.The antithesis of sustainability is a disregard for limits, commonly referred to as the Easter Island Effect, which is the concept of being unable to develop sustainability, resulting in the depletion of natural resources. Some estimate that, assuming current consumption rates, current oil reserves could be completely depleted by 2050.
Nuclear energy
The International Atomic Energy Agency estimates the remaining uranium resources to be equal to 2500 ZJ. This assumes the use of breeder reactors, which are able to create more fissile material than they consume. The IPCC estimated currently proven, economically recoverable uranium deposits for once-through fuel cycle reactors to be only 2 ZJ. The ultimately recoverable uranium is estimated to be 17 ZJ for once-through reactors and 1000 ZJ with reprocessing and fast breeder reactors.

Resources and technology do not constrain the capacity of nuclear power to contribute to meeting the energy demand for the 21st century. However, political and environmental concerns about nuclear safety and radioactive waste started to limit the growth of this energy supply at the end of the last century, particularly due to a number of nuclear accidents. Concerns about nuclear proliferation (especially with plutonium produced by breeder reactors) mean that the development of nuclear power by countries such as Iran and Syria is being actively discouraged by the international community.

Although at the beginning of the 21st century uranium is the primary nuclear fuel worldwide, others such as thorium and hydrogen have been under investigation since the middle of the 20th century.
Thorium reserves significantly exceed those of uranium, and of course hydrogen is abundant. Thorium is also considered by many to be easier to obtain than uranium: while uranium mines are enclosed underground and thus very dangerous for the miners, thorium is taken from open pits, and it is estimated to be roughly three times as abundant as uranium in the Earth's crust. Since the 1960s, numerous facilities throughout the world have burned thorium.
Nuclear fusion
Alternatives for energy production through fusion of hydrogen have been under investigation since the 1950s. No materials can withstand the temperatures required to ignite the fuel, so it must be confined by methods that use no materials. Magnetic and inertial confinement are the main alternatives (see Cadarache, inertial confinement fusion), both of which were hot research topics in the early years of the 21st century.
Fusion power is the process driving the sun and other stars. It generates large quantities of heat by fusing the nuclei of hydrogen or helium isotopes, which may be derived from seawater. The heat can theoretically be harnessed to generate electricity. The temperatures and pressures needed to sustain fusion make it a very difficult process to control. Fusion is theoretically able to supply vast quantities of energy, with relatively little pollution. Although both the United States and the European Union, along with other countries, are supporting fusion research (such as investing in the ITER facility), according to one report, inadequate research has stalled progress in fusion research for the past 20 years.
Renewable resources
Renewable resources are available each year, unlike non-renewable resources, which are eventually depleted. A simple comparison is a coal mine and a forest. While the forest could be depleted, if it is managed it represents a continuous supply of energy, whereas the coal mine, once exhausted, is gone. Most of Earth's available energy resources are renewable resources. Renewable resources account for more than 93 percent of total U.S. energy reserves. Annual renewable resources were multiplied by thirty years for comparison with non-renewable resources. In other words, if all non-renewable resources were uniformly exhausted over 30 years, they would account for only 7 percent of available resources each year, if all available renewable resources were developed.
Biomass
Production of biomass and biofuels are growing industries as interest in sustainable fuel sources grows. Utilizing waste products avoids a food-versus-fuel trade-off, and burning methane gas reduces greenhouse gas emissions, because although burning it releases carbon dioxide, methane is about 23 times more potent as a greenhouse gas than carbon dioxide. Biofuels represent a sustainable partial replacement for fossil fuels, but their net impact on greenhouse gas emissions depends on the agricultural practices used to grow the plants used as feedstock to create the fuels. While it is widely believed that biofuels can be carbon neutral, there is evidence that biofuels produced by current farming methods are substantial net carbon emitters. Geothermal and biomass are the only two renewable energy sources that require careful management to avoid local depletion.
Geothermal
Estimates of exploitable worldwide geothermal energy resources vary considerably, depending on assumed investments in technology and exploration and guesses about geological formations. According to a 1998 study, this might amount to between 65 and 138 GW of electrical generation capacity 'using enhanced technology'. Other estimates range from 35 to 2000 GW of electrical generation capacity, with a further potential for 140 EJ/year of direct use.

A 2006 report by MIT that took into account the use of Enhanced Geothermal Systems (EGS) concluded that it would be affordable to generate 100 GWe (gigawatts of electricity) or more by 2050, just in the United States, for a maximum investment of 1 billion US dollars in research and development over 15 years. The MIT report calculated the world's total EGS resources to be over 13 YJ, of which over 0.2 YJ would be extractable, with the potential to increase this to over 2 YJ with technology improvements – sufficient to provide all the world's energy needs for several thousand years. The total heat content of the Earth is 13,000,000 YJ.
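As a rough check on the claim that 2 YJ would cover the world's energy needs for several thousand years, the sketch below divides that figure by an assumed world primary energy demand of about 0.5 ZJ per year; the demand figure is an outside assumption for illustration, not a value given in this article.

```python
# Rough lifetime of the potentially extractable EGS resource at an assumed world demand.
egs_extractable_yj = 2.0                         # extractable EGS resource (YJ) from the MIT report
egs_extractable_zj = egs_extractable_yj * 1000   # 1 YJ = 1000 ZJ

assumed_world_demand_zj_per_year = 0.5           # assumed world primary energy use (~500 EJ/yr)

years = egs_extractable_zj / assumed_world_demand_zj_per_year
print(f"~{years:.0f} years of world energy supply")   # ~4000 years
```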
Hydropower
In 2005, hydroelectric power supplied 16.4% of world electricity, down from 21.0% in 1973, but only 2.2% of the world's energy.
Solar energy
Renewable energy sources are even larger than the traditional fossil fuels and in theory can easily supply the world's energy needs. 89 PW of solar power falls on the planet's surface. While it is not possible to capture all, or even most, of this energy, capturing less than 0.02% would be enough to meet the current energy needs. Barriers to further solar generation include the high price of making solar cells and reliance on weather patterns to generate electricity. Also, current solar generation does not produce electricity at night, which is a particular problem in high northern and southern latitude countries; energy demand is highest in winter, while availability of solar energy is lowest. This could be overcome by buying power from countries closer to the equator during winter months, and may also be addressed with technological developments such as the development of inexpensive energy storage. Globally, solar generation is the fastest growing source of energy, seeing an annual average growth of 35% over the past few years. China, Europe, India, Japan, and the United States are the major growing investors in solar energy. Solar power's share of worldwide electricity usage at the end of 2014 was 1%.
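The claim that capturing less than 0.02% of the incident solar power would cover current needs can be checked with a line of arithmetic. In the sketch below, the comparison value of roughly 18 TW for average world primary power demand is an assumption, not a figure from this article.

```python
# What does capturing 0.02% of the incident solar power amount to?
incident_solar_pw = 89          # solar power reaching the planet's surface (PW), as quoted above
captured_fraction = 0.0002      # 0.02%

captured_tw = incident_solar_pw * 1000 * captured_fraction   # 1 PW = 1000 TW
assumed_world_demand_tw = 18    # assumed average world primary power demand

print(f"Captured: {captured_tw:.1f} TW, assumed demand: ~{assumed_world_demand_tw} TW")
# Captured: 17.8 TW
```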
Wave and tidal power
At the end of 2005, 0.3 GW of electricity was produced by tidal power. Due to the tidal forces created by the Moon (68%) and the Sun (32%), and Earth's relative rotation with respect to the Moon and Sun, there are fluctuating tides. These tidal fluctuations result in dissipation at an average rate of about 3.7 TW.

Another physical limitation is the energy available in the tidal fluctuations of the oceans, which is about 0.6 EJ (exajoule). Note this is only a tiny fraction of the total rotational energy of Earth. Without forcing, this energy would be dissipated (at a dissipation rate of 3.7 TW) in about four semi-diurnal tide periods. So, dissipation plays a significant role in the tidal dynamics of the oceans. Therefore, this limits the available tidal energy to around 0.8 TW (20% of the dissipation rate) in order not to disturb the tidal dynamics too much.

Waves are derived from wind, which is in turn derived from solar energy, and at each conversion there is a drop of about two orders of magnitude in available energy. The total power of waves that wash against Earth's shores adds up to 3 TW.
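The remark that, without forcing, the stored tidal energy would be dissipated in about four semi-diurnal tide periods follows from the 0.6 EJ and 3.7 TW quoted above, as the short check below shows. The 12.42-hour period used for the principal semi-diurnal tide is an assumed standard value, not given in this article.

```python
# Check: how long does 0.6 EJ of tidal energy last at a dissipation rate of 3.7 TW?
tidal_energy_j = 0.6e18            # ~0.6 EJ stored in oceanic tidal fluctuations
dissipation_w = 3.7e12             # ~3.7 TW average dissipation rate

seconds = tidal_energy_j / dissipation_w
hours = seconds / 3600
semidiurnal_periods = hours / 12.42   # assumed M2 tidal period of ~12.42 hours

print(f"{hours:.0f} hours ≈ {semidiurnal_periods:.1f} semi-diurnal periods")  # ~45 h ≈ 3.6 periods
```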
Wind power
The available wind energy estimates range from 300 TW to 870 TW. Using the lower estimate, just 5% of the available wind energy would supply the current worldwide energy needs. Most of this wind energy is available over the open ocean. The oceans cover 71% of the planet and wind tends to blow more strongly over open water because there are fewer obstructions.
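The wind figures can be put into the same terms as the solar example above; the assumed demand range below is an outside assumption used only for comparison, not a value from this article.

```python
# What does 5% of the quoted wind-resource estimates amount to?
wind_estimates_tw = {"low": 300, "high": 870}
assumed_world_demand_tw = (15, 18)   # assumed range for average world primary power demand

for label, resource in wind_estimates_tw.items():
    print(f"5% of the {label} estimate: {0.05 * resource:.1f} TW")
print(f"Assumed world demand: {assumed_world_demand_tw[0]}–{assumed_world_demand_tw[1]} TW")
# 5% of 300 TW = 15.0 TW; 5% of 870 TW = 43.5 TW
```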
References
car
A car, or an automobile, is a motor vehicle with wheels. Most definitions of cars state that they run primarily on roads, seat one to eight people, have four wheels, and mainly transport people, not cargo. French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769, while French-born Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808.
The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when German inventor Carl Benz patented his Benz Patent-Motorwagen. Commercial cars became widely available during the 20th century. One of the first cars affordable by the masses was the 1908 Model T, an American car manufactured by the Ford Motor Company. Cars were rapidly adopted in the US, where they replaced horse-drawn carriages. In Europe and other parts of the world, demand for automobiles did not increase until after World War II. The car is considered an essential part of the developed economy.
Cars have controls for driving, parking, passenger comfort, and a variety of lamps. Over the decades, additional features and controls have been added to vehicles, making them progressively more complex. These include rear-reversing cameras, air conditioning, navigation systems, and in-car entertainment. Most cars in use in the early 2020s are propelled by an internal combustion engine, fueled by the combustion of fossil fuels. Electric cars, which were invented early in the history of the car, became commercially available in the 2000s and are predicted to cost less to buy than petrol-driven cars before 2025. The transition from fossil fuels to electric cars features prominently in most climate change mitigation scenarios, such as Project Drawdown's 100 actionable solutions for climate change.

There are costs and benefits to car use. The costs to the individual include acquiring the vehicle, interest payments (if the car is financed), repairs and maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance. The costs to society include maintaining roads, land use, road congestion, air pollution, noise pollution, public health, and disposing of the vehicle at the end of its life. Traffic collisions are the largest cause of injury-related deaths worldwide. Personal benefits include on-demand transportation, mobility, independence, and convenience. Societal benefits include economic benefits, such as job and wealth creation from the automotive industry, transportation provision, societal well-being from leisure and travel opportunities, and revenue generation from taxes. People's ability to move flexibly from place to place has far-reaching implications for the nature of societies. There are around one billion cars in use worldwide. Car usage is increasing rapidly, especially in China, India, and other newly industrialized countries.
Etymology
The English word car is believed to originate from Latin carrus/carrum "wheeled vehicle" or (via Old North French) Middle English carre "two-wheeled cart", both of which in turn derive from Gaulish karros "chariot". It originally referred to any wheeled horse-drawn vehicle, such as a cart, carriage, or wagon.

"Motor car", attested from 1895, is the usual formal term in British English. "Autocar", a variant likewise attested from 1895 and literally meaning "self-propelled car", is now considered archaic. "Horseless carriage" is attested from 1895.

"Automobile", a classical compound derived from Ancient Greek autós (αὐτός) "self" and Latin mobilis "movable", entered English from French and was first adopted by the Automobile Club of Great Britain in 1897. It fell out of favour in Britain and is now used chiefly in North America, where the abbreviated form "auto" commonly appears as an adjective in compound formations like "auto industry" and "auto mechanic".
History
The first steam-powered vehicle was designed by Ferdinand Verbiest, a Flemish member of a Jesuit mission in China around 1672. It was a 65-centimetre-long (26 in) scale-model toy for the Kangxi Emperor that was unable to carry a driver or a passenger. It is not known with certainty if Verbiest's model was successfully built or run.Nicolas-Joseph Cugnot is widely credited with building the first full-scale, self-propelled mechanical vehicle in about 1769; he created a steam-powered tricycle. He also constructed two steam tractors for the French Army, one of which is preserved in the French National Conservatory of Arts and Crafts. His inventions were limited by problems with water supply and maintaining steam pressure. In 1801, Richard Trevithick built and demonstrated his Puffing Devil road locomotive, believed by many to be the first demonstration of a steam-powered road vehicle. It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of true cars. A variety of steam-powered road vehicles were used during the first part of the 19th century, including steam cars, steam buses, phaetons, and steam rollers. In the United Kingdom, sentiment against them led to the Locomotive Acts of 1865.
In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but installed it in a boat on the river Saone in France. Coincidentally, in 1807, the Swiss inventor François Isaac de Rivaz designed his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture of Lycopodium powder (dried spores of the Lycopodium plant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture of hydrogen and oxygen. Neither design was successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Etienne Lenoir, who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.In November 1881, French inventor Gustave Trouvé demonstrated a three-wheeled car powered by electricity at the International Exposition of Electricity. Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on cars at about the same time, the year 1886 is regarded as the birth year of the modern car—a practical, marketable automobile for everyday use—when the German Carl Benz patented his Benz Patent-Motorwagen; he is generally acknowledged as the inventor of the car.In 1879, Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle. His first Motorwagen was built in 1885 in Mannheim, Germany. He was awarded the patent for its invention as of his application on 29 January 1886 (under the auspices of his major company, Benz & Cie., which was founded in 1883). Benz began promotion of the vehicle on 3 July 1886, and about 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a cheaper model. They also were powered with four-stroke engines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz car to his line of products. Because France was more open to the early cars, initially more were built and sold in France through Roger than Benz sold in Germany. In August 1888, Bertha Benz, the wife of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.
In 1896, Benz designed and patented the first internal-combustion flat engine, called boxermotor. During the last years of the 19th century, Benz was the largest car company in the world with 572 units produced in 1899 and, because of its size, Benz & Cie., became a joint-stock company. The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.
Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890, and sold their first car in 1892 under the brand name Daimler. It was a horse-drawn stagecoach built by another manufacturer, which they retrofitted with an engine of their design. By 1895, about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after disputes with their backers. Benz, Maybach, and the Daimler team seem to have been unaware of each other's early work. They never worked together; by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG. Daimler died in 1900 and later that year, Maybach designed an engine named Daimler-Mercedes that was placed in a specially ordered model built to specifications set by Emil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG car was produced and the model was named Mercedes after the Maybach engine, which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to the Daimler brand name were sold to other manufacturers.
In 1890, Émile Levassor and Armand Peugeot of France began producing vehicles with Daimler engines, and so laid the foundation of the automotive industry in France. In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler powered Peugeot Type 3 completed 2,100 kilometres (1,300 mi) from Valentigney to Paris and Brest and back again. They were attached to the first Paris–Brest–Paris bicycle race, but finished six days after the winning cyclist, Charles Terront.
The first design for an American car with a petrol internal combustion engine was made in 1877 by George Selden of Rochester, New York. Selden applied for a patent for a car in 1879, but the patent application expired because the vehicle was never built. After a delay of 16 years and a series of attachments to his application, on 5 November 1895, Selden was granted a US patent (U.S. Patent 549,160) for a two-stroke car engine, which hindered, more than encouraged, development of cars in the United States. His patent was challenged by Henry Ford and others, and overturned in 1911.
In 1893, the first running, petrol-driven American car was built and road-tested by the Duryea brothers of Springfield, Massachusetts. The first public run of the Duryea Motor Wagon took place on 21 September 1893, on Taylor Street in Metro Center Springfield. Studebaker, a subsidiary of a long-established wagon and coach manufacturer, started to build cars in 1897 and commenced sales of electric vehicles in 1902 and petrol vehicles in 1904.
In Britain, there had been several attempts to build steam cars with varying degrees of success, with Thomas Rickett even attempting a production run in 1860. Santler from Malvern is recognized by the Veteran Car Club of Great Britain as having made the first petrol-driven car in the country in 1894, followed by Frederick William Lanchester in 1895, but these were both one-offs. The first production vehicles in Great Britain came from the Daimler Company, a company founded by Harry J. Lawson in 1896, after purchasing the right to use the name of the engines. Lawson's company made its first car in 1897, and they bore the name Daimler.
In 1892, German engineer Rudolf Diesel was granted a patent for a "New Rational Combustion Engine". In 1897, he built the first diesel engine. Steam-, electric-, and petrol-driven vehicles competed for a few decades, with petrol internal combustion engines achieving dominance in the 1910s. Although various pistonless rotary engine designs have attempted to compete with the conventional piston and crankshaft design, only Mazda's version of the Wankel engine has had more than very limited success.
All in all, it is estimated that over 100,000 patents created the modern automobile and motorcycle.
Mass production
Large-scale, production-line manufacturing of affordable cars was started by Ransom Olds in 1901 at his Oldsmobile factory in Lansing, Michigan, and based upon stationary assembly line techniques pioneered by Marc Isambard Brunel at the Portsmouth Block Mills, England, in 1802. The assembly line style of mass production and interchangeable parts had been pioneered in the US by Thomas Blanchard in 1821, at the Springfield Armory in Springfield, Massachusetts. This concept was greatly expanded by Henry Ford, beginning in 1913 with the world's first moving assembly line for cars at the Highland Park Ford Plant.
As a result, Ford's cars came off the line in 15-minute intervals, much faster than previous methods, increasing productivity eightfold, while using less manpower (from 12.5 manhours to 1 hour 33 minutes). It was so successful, paint became a bottleneck. Only Japan black would dry fast enough, forcing the company to drop the variety of colors available before 1913, until fast-drying Duco lacquer was developed in 1926. This is the source of Ford's apocryphal remark, "any color as long as it's black". In 1914, an assembly line worker could buy a Model T with four months' pay.Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury. The combination of high wages and high efficiency is called "Fordism" and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the US. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was dominating, and quickly spread worldwide seeing the founding of Ford France and Ford Britain in 1911, Ford Denmark 1923, Ford Germany 1925; in 1921, Citroën was the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines, or risk going broke; by 1930, 250 companies which did not, had disappeared.Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electric ignition and the electric self-starter (both by Charles Kettering, for the Cadillac Motor Company in 1910–1911), independent suspension, and four-wheel brakes.
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced car design. It was Alfred P. Sloan who established the idea of different makes of cars produced by one company, called the General Motors Companion Make Program, so that buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s, LaSalles, sold by Cadillac, used cheaper mechanical parts made by Oldsmobile; in the 1950s, Chevrolet shared bonnet, doors, roof, and windows with Pontiac; by the 1990s, corporate powertrains and shared platforms (with interchangeable brakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such as Apperson, Cole, Dorris, Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with the Great Depression, by 1940, only 17 of those were left.In Europe, much the same would happen. Morris set up its production line at Cowley in 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practice of vertical integration, buying Hotchkiss (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors, such as Wolseley: in 1925, Morris had 41 per cent of total British car production. Most British small-car assemblers, from Abbey to Xtra, had gone under. Citroën did the same in France, coming to cars in 1919; between them and other cheap cars in reply such as Renault's 10CV and Peugeot's 5CV, they produced 550,000 cars in 1925, and Mors, Hurtu, and others could not compete. Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Rüsselsheim in 1924, soon making Opel the top car builder in Germany, with 37.5 per cent of the market.In Japan, car production was very limited before World War II. Only a handful of companies were producing vehicles in limited numbers, and these were small, three-wheeled for commercial uses, like Daihatsu, or were the result of partnering with European companies, like Isuzu building the Wolseley A-9 in 1922. Mitsubishi was also partnered with Fiat and built the Mitsubishi Model A based on a Fiat vehicle. Toyota, Nissan, Suzuki, Mazda, and Honda began as companies producing non-automotive products before the war, switching to car production during the 1950s. Kiichiro Toyoda's decision to take Toyoda Loom Works into automobile manufacturing would create what would eventually become Toyota Motor Corporation, the largest automobile manufacturer in the world. Subaru, meanwhile, was formed from a conglomerate of six companies who banded together as Fuji Heavy Industries, as a result of having been broken up under keiretsu legislation.
Fuel and propulsion technologies
The transport sector is a major contributor to air pollution, noise pollution and climate change.Most cars in use in the early 2020s run on petrol burnt in an internal combustion engine (ICE). The International Organization of Motor Vehicle Manufacturers says that, in countries that mandate low sulphur motor spirit, petrol-fuelled cars built to late 2010s standards (such as Euro-6) emit very little local air pollution. Some cities ban older petrol-driven cars and some countries plan to ban sales in future. However, some environmental groups say this phase-out of fossil fuel vehicles must be brought forwards to limit climate change. Production of petrol-fuelled cars peaked in 2017.Other hydrocarbon fossil fuels also burnt by deflagration (rather than detonation) in ICE cars include diesel, autogas, and CNG. Removal of fossil fuel subsidies, concerns about oil dependence, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for cars. This includes hybrid vehicles, plug-in electric vehicles and hydrogen vehicles. Out of all cars sold in 2021, nine per cent were electric, and by the end of that year there were more than 16 million electric cars on the world's roads. Despite rapid growth, less than two per cent of cars on the world's roads were fully electric and plug-in hybrid cars by the end of 2021. Cars for racing or speed records have sometimes employed jet or rocket engines, but these are impractical for common use.
Oil consumption has increased rapidly in the 20th and 21st centuries because there are more cars; the 1980s oil glut even fuelled the sales of low-economy vehicles in OECD countries. The BRIC countries are adding to this consumption.
As of 2023 few production cars use wheel hub motors.
Batteries
In almost all hybrid (even mild hybrid) and pure electric cars regenerative braking recovers and returns to a battery some energy which would otherwise be wasted by friction brakes getting hot. Although all cars must have friction brakes (front disc brakes and either disc or drum rear brakes) for emergency stops, regenerative braking improves efficiency, particularly in city driving.
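As a rough illustration of why regenerative braking matters in stop-and-go driving, the sketch below estimates the energy recoverable from a single stop using the kinetic-energy relation E = ½mv². The vehicle mass, speed, and round-trip recovery efficiency are illustrative assumptions, not figures from this article.

```python
# Illustrative estimate of energy recoverable by regenerative braking.
# Mass, speed, and efficiency below are hypothetical assumptions.

def recoverable_energy_kwh(mass_kg: float, speed_kmh: float, efficiency: float) -> float:
    """Kinetic energy of the moving car times an assumed round-trip recovery efficiency."""
    speed_ms = speed_kmh / 3.6                  # km/h -> m/s
    kinetic_j = 0.5 * mass_kg * speed_ms ** 2   # E = 1/2 m v^2
    return kinetic_j * efficiency / 3.6e6       # joules -> kWh

# Hypothetical 1,500 kg car braking to a stop from 50 km/h with 60% recovery
print(round(recoverable_energy_kwh(1500, 50, 0.6), 4), "kWh recovered per stop")
```

Each individual stop recovers only a small amount of energy, but over hundreds of stops in city driving this is energy that friction brakes would otherwise dissipate as heat.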
User interface
Cars are equipped with controls used for driving, passenger comfort, and safety, normally operated by a combination of the use of feet and hands, and occasionally by voice on 21st-century cars. These controls include a steering wheel, pedals for operating the brakes and controlling the car's speed (and, in a manual transmission car, a clutch pedal), a shift lever or stick for changing gears, and a number of buttons and dials for turning on lights, ventilation, and other functions. Modern cars' controls are now standardized, such as the location for the accelerator and brake, but this was not always the case. Controls are evolving in response to new technologies, for example, the electric car and the integration of mobile communications.
Some of the original controls are no longer required. For example, all cars once had controls for the choke valve, clutch, ignition timing, and a crank instead of an electric starter. However, new controls have also been added to vehicles, making them more complex. These include air conditioning, navigation systems, and in-car entertainment. Another trend is the replacement of physical knobs and switches for secondary functions with touchscreen controls such as BMW's iDrive and Ford's MyFord Touch. Another change is that while early cars' pedals were physically linked to the brake mechanism and throttle, in the early 2020s, cars have increasingly replaced these physical linkages with electronic controls.
Electronics and interior
Cars are typically equipped with interior lighting which can be toggled manually or be set to light up automatically with doors open, an entertainment system which originated from car radios, side windows which can be lowered or raised electrically (manually on earlier cars), and one or multiple auxiliary power outlets for supplying portable appliances such as mobile phones, portable fridges, power inverters, and electrical air pumps from the on-board electrical system. More expensive upper-class and luxury cars receive features such as massage seats and collision avoidance systems earlier than mass-market models.
Dedicated automotive fuses and circuit breakers prevent damage from electrical overload.
Lighting
Cars are typically fitted with multiple types of lights. These include headlights, which are used to illuminate the way ahead and make the car visible to other users, so that the vehicle can be used at night; in some jurisdictions, daytime running lights; red brake lights to indicate when the brakes are applied; amber turn signal lights to indicate the turn intentions of the driver; white-colored reverse lights to illuminate the area behind the car (and indicate that the driver will be or is reversing); and on some vehicles, additional lights (e.g., side marker lights) to increase the visibility of the car. Interior lights on the ceiling of the car are usually fitted for the driver and passengers. Some vehicles also have a boot light and, more rarely, an engine compartment light.
Weight
During the late 20th and early 21st century, cars increased in weight due to batteries, modern steel safety cages, anti-lock brakes, airbags, and "more-powerful—if more efficient—engines" and, as of 2019, typically weigh between 1 and 3 tonnes (1.1 and 3.3 short tons; 0.98 and 2.95 long tons). Heavier cars are safer for the driver from a crash perspective, but more dangerous for other vehicles and road users. The weight of a car influences fuel consumption and performance, with more weight resulting in increased fuel consumption and decreased performance. The Wuling Hongguang Mini EV, a typical city car, weighs about 700 kilograms (1,500 lb). Heavier cars include SUVs and extended-length SUVs like the Suburban.
Some places tax heavier cars more: as well as improving pedestrian safety this can encourage manufacturers to use materials such as recycled aluminium instead of steel. It has been suggested that one benefit of subsidizing charging infrastructure is that cars can use lighter batteries.
Seating and body style
Most cars are designed to carry multiple occupants, often with four or five seats. Cars with five seats typically seat two passengers in the front and three in the rear. Full-size cars and large sport utility vehicles can often carry six, seven, or more occupants depending on the arrangement of the seats. On the other hand, sports cars are most often designed with only two seats. Utility vehicles like pickup trucks, combine seating with extra cargo or utility functionality. The differing needs for passenger capacity and their luggage or cargo space has resulted in the availability of a large variety of body styles to meet individual consumer requirements that include, among others, the sedan/saloon, hatchback, station wagon/estate, coupe, and minivan.
Safety
Traffic collisions are the largest cause of injury-related deaths worldwide. Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland, and Henry Bliss one of the US's first pedestrian car casualties in 1899 in New York City. There are now standard tests for safety in new cars, such as the Euro and US NCAP tests, and insurance-industry-backed tests by the Insurance Institute for Highway Safety (IIHS).
Costs and benefits
The costs of car usage, which may include the cost of: acquiring the vehicle, repairs and auto maintenance, fuel, depreciation, driving time, parking fees, taxes, and insurance, are weighed against the cost of the alternatives, and the value of the benefits—perceived and real—of vehicle usage. The benefits may include on-demand transportation, mobility, independence, convenience, and emergency power. During the 1920s, cars had another benefit: "[c]ouples finally had a way to head off on unchaperoned dates, plus they had a private space to snuggle up close at the end of the night."
Similarly, the costs to society of car use may include: maintaining roads, land use, air pollution, noise pollution, road congestion, public health, health care, and disposing of the vehicle at the end of its life; these can be balanced against the value of the benefits to society that car use generates. Societal benefits may include economic benefits, such as job and wealth creation from car production and maintenance, transportation provision, societal well-being derived from leisure and travel opportunities, and revenue generation from taxes. The ability of humans to move flexibly from place to place has far-reaching implications for the nature of societies.
Environmental effects
Cars are a major cause of urban air pollution, with all types of cars producing dust from brakes, tyres, and road wear, although these may be limited by vehicle emission standards. While there are different ways to power cars, most rely on petrol or diesel, and they consume almost a quarter of world oil production as of 2019. Both petrol and diesel cars pollute more than electric cars. Cars and vans caused 8% of direct carbon dioxide emissions in 2021. As of 2021, due to greenhouse gases emitted during battery production, electric cars must be driven tens of thousands of kilometers before their lifecycle carbon emissions are less than fossil fuel cars; however this varies considerably and is expected to improve in future due to lower carbon electricity, and longer lasting batteries produced in larger factories. Many governments use fiscal policies, such as road tax, to discourage the purchase and use of more polluting cars; and many cities are doing the same with low-emission zones. Fuel taxes may act as an incentive for the production of more efficient, hence less polluting, car designs (e.g., hybrid vehicles) and the development of alternative fuels. High fuel taxes or cultural change may provide a strong incentive for consumers to purchase lighter, smaller, more fuel-efficient cars, or to not drive.The lifetime of a car built in the 2020s is expected to be about 16 years, or about 2 million km (1.2 million miles) if driven a lot. According to the International Energy Agency the average rated fuel consumption of new light-duty vehicles fell by only 0.9% between 2017 and 2019, far smaller than the 1.8% annual average reduction between 2010 and 2015. Given slow progress to date, the IEA estimates fuel consumption will have to decrease by 4.3% per year on average from 2019 to 2030. The increase in sales of SUVs is bad for fuel economy. Many cities in Europe have banned older fossil fuel cars and all fossil fuel vehicles will be banned in Amsterdam from 2030. Many Chinese cities limit licensing of fossil fuel cars, and many countries plan to stop selling them between 2025 and 2050.The manufacture of vehicles is resource intensive, and many manufacturers now report on the environmental performance of their factories, including energy usage, waste and water consumption. Manufacturing each kWh of battery emits a similar amount of carbon as burning through one full tank of petrol. The growth in popularity of the car allowed cities to sprawl, therefore encouraging more travel by car, resulting in inactivity and obesity, which in turn can lead to increased risk of a variety of diseases.Animals and plants are often negatively affected by cars via habitat destruction and pollution. Over the lifetime of the average car, the "loss of habitat potential" may be over 50,000 square metres (540,000 sq ft) based on primary production correlations. Animals are also killed every year on roads by cars, referred to as roadkill. More recent road developments are including significant environmental mitigation in their designs, such as green bridges (designed to allow wildlife crossings) and creating wildlife corridors.
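The break-even distance mentioned earlier in this section follows from a simple relation; the figures plugged in below are illustrative assumptions, not values from this article:

$$ d_{\text{break-even}} = \frac{\Delta E_{\text{manufacturing}}}{e_{\text{ICE}} - e_{\text{EV}}} $$

For example, with an assumed extra manufacturing burden of 6,000 kg CO2 for the battery, about 0.20 kg CO2 per km for a petrol car, and about 0.08 kg CO2 per km for an electric car charged from an average grid, the break-even distance would be 6,000 / 0.12 = 50,000 km, i.e. tens of thousands of kilometres, consistent with the statement above.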
Growth in the popularity of cars and commuting has led to traffic congestion. Moscow, Istanbul, Bogotá, Mexico City and São Paulo were the world's most congested cities in 2018 according to INRIX, a data analytics company.
Social issues
Mass production of personal motor vehicles in the United States and other developed countries with extensive territories such as Australia, Argentina, and France vastly increased individual and group mobility and greatly increased and expanded economic development in urban, suburban, exurban and rural areas.In the United States, the transport divide and car dependency resulting from domination of car-based transport systems presents barriers to employment in low-income neighbourhoods, with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income. The historic commitment to a car-based transport system continued during the presidency of Joe Biden. Dependency on automobiles by African Americans may result in exposure to the hazards of driving while black and other types of racial discrimination related to buying, financing and insuring them.
Emerging car technologies
Although intensive development of conventional battery electric vehicles is continuing into the 2020s, other car propulsion technologies that are under development include wireless charging, hydrogen cars, and hydrogen/electric hybrids. Research into alternative forms of power includes using ammonia instead of hydrogen in fuel cells.New materials which may replace steel car bodies include aluminium, fiberglass, carbon fiber, biocomposites, and carbon nanotubes. Telematics technology is allowing more and more people to share cars, on a pay-as-you-go basis, through car share and carpool schemes. Communication is also evolving due to connected car systems.
Autonomous car
Fully autonomous vehicles, also known as driverless cars, already exist as robotaxis but have a long way to go before they are in general use.
Open source development
There have been several projects aiming to develop a car on the principles of open design, an approach to designing in which the plans for the machinery and systems are publicly shared, often without monetary compensation. None of the projects have succeeded in developing a car as a whole including both hardware and software, and no mass production ready open-source based designs have been introduced. Some car hacking through on-board diagnostics (OBD) has been done so far.
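For readers curious what OBD-based experimentation looks like in practice, below is a minimal read-only sketch. It assumes the third-party python-obd package and an ELM327-compatible adapter; supported commands vary by vehicle, so treat this as an illustration rather than a reference implementation.

```python
# Minimal read-only OBD-II queries, assuming the third-party 'obd' package
# (pip install obd) and an ELM327-compatible USB or Bluetooth adapter.
import obd

connection = obd.OBD()  # auto-detects the adapter and connects to the car's OBD port
for cmd in (obd.commands.RPM, obd.commands.SPEED, obd.commands.COOLANT_TEMP):
    response = connection.query(cmd)      # request the parameter from the ECU
    if not response.is_null():
        print(cmd.name, response.value)   # e.g. "RPM 850.0 revolutions_per_minute"
```

Reading diagnostic parameters like this is the benign end of car hacking; writing to vehicle buses is a far more invasive step.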
Car sharing
Car-share arrangements and carpooling are increasingly popular in the US and Europe. For example, in the US, some car-sharing services experienced double-digit growth in revenue and membership between 2006 and 2007. Services like car sharing allow residents to "share" a vehicle rather than own a car in already congested neighbourhoods.
Industry
The automotive industry designs, develops, manufactures, markets, and sells the world's motor vehicles, more than three-quarters of which are cars. In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year.
The automotive industry in China produces by far the most (20 million in 2020), followed by Japan (seven million), then Germany, South Korea and India. The largest market is China, followed by the US.
Around the world, there are about a billion cars on the road; they burn over a trillion litres (0.26×10^12 US gal; 0.22×10^12 imp gal) of motor spirit and diesel fuel yearly, consuming about 50 exajoules (14,000 TWh) of energy. The numbers of cars are increasing rapidly in China and India. In the opinion of some, urban transport systems based around the car have proved unsustainable, consuming excessive energy, affecting the health of populations, and delivering a declining level of service despite increasing investment. Many of these negative effects fall disproportionately on those social groups who are also least likely to own and drive cars. The sustainable transport movement focuses on solutions to these problems. The car industry is also facing increasing competition from the public transport sector, as some people re-evaluate their private vehicle usage.
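As a quick consistency check using only the figures quoted in this paragraph:

$$ 50\ \text{EJ} = \frac{50 \times 10^{18}\ \text{J}}{3.6 \times 10^{15}\ \text{J/TWh}} \approx 13{,}900\ \text{TWh} \approx 14{,}000\ \text{TWh}, $$

and a trillion litres of fuel spread over roughly a billion cars works out to about $10^{12}\ \text{L} / 10^{9}\ \text{cars} \approx 1{,}000$ litres per car per year.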
Alternatives
Established alternatives for some aspects of car use include public transport such as buses, trolleybuses, trains, subways, tramways and light rail, as well as cycling and walking. Bicycle sharing systems have been established in China and many European cities, including Copenhagen and Amsterdam. Similar programs have been developed in large US cities. Additional individual modes of transport, such as personal rapid transit, could serve as an alternative to cars if they prove to be socially accepted.
See also
Notes
References
Further reading
Halberstam, David (1986). The Reckoning. New York: Morrow. ISBN 0-688-04838-2.
Kay, Jane Holtz (1997). Asphalt nation : how the automobile took over America, and how we can take it back. New York: Crown. ISBN 0-517-58702-5.
Williams, Heathcote (1991). Autogeddon. New York: Arcade. ISBN 1-55970-176-5.
Sachs, Wolfgang (1992). For love of the automobile: looking back into the history of our desires. Berkeley: University of California Press. ISBN 0-520-06878-5.
Margolius, Ivan (2020). "What is an automobile?". The Automobile. 37 (11): 48–52. ISSN 0955-1328.
Cole, John; Cole, Francis (2013). A Geography of the European Union. London: Routledge. p. 110. ISBN 9781317835585. – Number of cars in use (in millions) in various European countries in 1973 and 1992
Latin America: Economic Growth Trends. US: Agency for International Development, Office of Statistics and Reports. 1972. p. 11. – Number of motor vehicles registered in Latin America in 1970
World Motor Vehicle Production and Registration. US: Business and Defense Services Administration, Transportation Equipment Division. p. 3. – Number of registered passenger cars in various countries in 1959-60 and 1969-70
External links
Media related to Automobiles at Wikimedia Commons
Fédération Internationale de l'Automobile
Forum for the Automobile and Society
Transportation Statistics Annual Report 1996: Transportation and the Environment by Fletcher, Wendell; Sedor, Joanne; p. 219 (contains figures on vehicle registrations in various countries in 1970 and 1992) |
science based targets initiative | The Science Based Targets initiative (SBTi) is a collaboration between CDP (formerly the Carbon Disclosure Project), the United Nations Global Compact, the World Resources Institute (WRI) and the World Wide Fund for Nature (WWF). Since 2015, more than 1,000 companies have joined the initiative to set science-based climate targets.
Organization
The Science Based Targets initiative was established in 2015 to help companies set emission reduction targets in line with climate science and the Paris Agreement goals. It is funded by the IKEA Foundation, Amazon, the Bezos Earth Fund, the We Mean Business coalition, the Rockefeller Brothers Fund and the UPS Foundation. In October 2021, SBTi developed and launched the world's first net zero standard, providing the framework and tools for companies to set science-based net zero targets and limit global temperature rise to 1.5 °C above pre-industrial levels. Best practice as identified by SBTi is for companies to adopt transition plans covering scope 1, 2 and 3 emissions, set out short-term milestones, ensure effective board-level governance, and link executive compensation to the company's adopted milestones.
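To make the idea of short-term milestones concrete, the toy sketch below derives interim targets from a base-year inventory under an assumed constant annual reduction rate. The rate, the inventory figure, and the simple compounding approach are illustrative assumptions only; SBTi's actual criteria and sector pathways are considerably more detailed.

```python
# Toy milestone calculation. This is NOT the SBTi methodology, only an
# illustration of deriving interim targets from a base-year emissions
# inventory under an assumed constant annual reduction rate.

def milestones(base_year: int, base_emissions_tco2e: float,
               annual_reduction: float, years: range) -> dict:
    """Target emissions per milestone year, compounding the assumed reduction rate."""
    return {y: base_emissions_tco2e * (1 - annual_reduction) ** (y - base_year)
            for y in years}

# Hypothetical company: 100,000 tCO2e across scopes 1-3 in 2020, 4% cut per year
for year, target in milestones(2020, 100_000, 0.04, range(2025, 2036, 5)).items():
    print(year, round(target), "tCO2e")
```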
Sector-specific guidance
SBTi developed separate sector-specific methodologies, frameworks and requirements for different industries. As of December 2021, guidance is available for:
Aviation
Apparel and footwear
Financial institutions
Information and Communication Technology
See also
World Resources Institute
Carbon Disclosure Project
United Nations Global Compact
Paris Agreement
Carbon neutrality
== References == |
electricity sector in brazil | Brazil has the largest electricity sector in Latin America.
Its capacity at the end of 2021 was 181,532 MW.
Installed capacity has grown from 11,000 MW in 1970, an average increase of 5.8% per year.
Brazil has the largest capacity for water storage in the world, being dependent on hydroelectricity generation capacity, which meets over 60% of its electricity demand. The national grid runs at 60 Hz and is powered 83% from renewable sources.
This dependence on hydropower makes Brazil vulnerable to power supply shortages in drought years, as was demonstrated by the 2001–2002 energy crisis.
The National Interconnected System (SIN) comprises the electricity companies in the South, South-East, Center-West, North-East and part of the North region. Only 3.4% of the country's electricity production is located outside the SIN, in small isolated systems located mainly in the Amazonian region.
Electricity supply and demand
Installed capacity
At the end of 2021 Brazil was the 2nd country in the world in terms of installed hydroelectric power (109.4 GW) and biomass (15.8 GW), the 7th country in the world in terms of installed wind power (21.1 GW) and the 14th country in the world in terms of installed solar power (13.0 GW) - on track to also become one of the top 10 in the world in solar energy. At the end of 2021, Brazil was the 4th largest producer of wind energy in the world (72 TWh), behind only China, USA and Germany, and the 11th largest producer of solar energy in the world (16.8 TWh).
The main characteristic of the Brazilian energy matrix is that it is much more renewable than that of the world. While in 2019 the world matrix was only 14% made up of renewable energy, Brazil's was at 45%. Petroleum and oil products made up 34.3% of the matrix; sugar cane derivatives, 18%; hydraulic energy, 12.4%; natural gas, 12.2%; firewood and charcoal, 8.8%; varied renewable energies, 7%; mineral coal, 5.3%; nuclear, 1.4%, and other non-renewable energies, 0.6%.
In the electric energy matrix, the difference between Brazil and the world is even greater: while the world only had 25% of renewable electric energy in 2019, Brazil had 83%. The Brazilian electric matrix was composed of: hydraulic energy, 64.9%; biomass, 8.4%; wind energy, 8.6%; solar energy, 1%; natural gas, 9.3%; oil products, 2%; nuclear, 2.5%; coal and derivatives, 3.3%.
Generation capacity in Brazil is still dominated by hydroelectric plants, which accounted for 77% of total installed capacity, with 24 plants above 1,000 MW. Historically, about 88 percent of the electricity fed into the national grid was estimated to come from hydroelectric generation, with over 25% coming from a single hydropower plant, the massive 14 GW Itaipu dam facility, located between Brazil and Paraguay on the Paraná River. Natural gas generation is second in importance, representing about 10% of total capacity, close to the 12% goal for 2010 established in 1993 by the Ministry of Mines and Energy.
This reliance on abundant hydroelectric resources is said to reduce overall generation costs. However, this large dependence on hydropower made the country especially vulnerable to supply shortages in low-rainfall years (see The 2001–2002 crisis below).
By the end of 2016, the breakdown of generation by source was:
Source: Ministry of Mines and Energy, 2016
As summarized in the table above, Brazil has two nuclear power plants, Angra 1 (657 MW) and Angra 2 (1,350 MW), both of them owned by Eletronuclear, a subsidiary of the state-owned (Mixed economy) Eletrobrás.
New generation projects
Brazil needs to add 6000 MW of capacity every year in order to satisfy growing demand from an increasing and more prosperous population. The Brazilian Ministry of Energy has decided to generate 50% of new supplies from hydropower, 30% from wind and biomass such as bagasse, and 20% from gas and other sources. Wind in the North-East is strongest during the dry season when hydropower plants produce less, so the two energy sources are seasonally complementary.
Hydroelectric plants
Brazil has an untapped hydropower potential of 180,000 MW, including about 80,000 MW in protected regions for which there are no development plans. The government expects to develop the rest by 2030. Most new hydropower plants are run-of-river plants that are less damaging to the environment, because their reservoirs are small. However, they are more vulnerable to droughts and less efficient, because only a fraction of their capacity can be used during the dry season.The National Agency for Electricity (ANEEL) has commissioned feasibility studies for several hydroelectric plants (small, medium and large) in the period 2006–2008. These studies correspond to a total potential capacity of 31,000 MW. In 2007, Ibama, the environmental agency, gave approval for the construction of two new dams, the Jirau Dam (3,300 MW) and the Santo Antônio Dam (3,150 MW), on the Madeira River in the state of Rondônia. The bid for the Santo Antônio plant was awarded in December 2007 to Madeira Energy, with a 39% participation from state-owned Furnas, while the bid for the Jirau plant will be launched in May 2008. The government is also pursuing development of the controversial 11,000 MW Belo Monte Dam in the state of Pará, on the Xingu River. IBAMA approved Belo Monte's provisional environmental license in February 2010 despite internal uproar from technicians over incomplete data.
Nuclear plants
Also in 2007, Eletronuclear was granted permission to resume construction of Angra 3, a 1,350 MW plant, and is currently in the process of selecting a site for a fourth nuclear power plant. In February 2014, Eletrobras Eletronuclear awarded contracts to begin construction, with an estimated completion date of 2018.
Thermoelectric plants
Currently, the development of gas-fired thermoelectric power is somewhat jeopardized by the lack of secure gas supplies. In fact, having a secure gas contract is a prerequisite to build a new thermoelectric plant and to participate in a new energy auction (see Energy auctions below). In order to counter the risk of unavailability of gas supplies, Brazil is in the initial stages of planning to build two LNG terminals that would likely come on-stream around 2010. However, in the meantime, several thermoelectric plants are converting their machinery to dual-fuel capacity (oil and gas).
Demand
Total electricity consumed in 2007 was 410 terawatt hour (TWh), while annual consumption per capita for the same year averaged 2,166 kWh.
The share of consumption by sector was as follows:
Residential: 40% (including 6% for the rural sector)
Industrial: 25%
Commercial: 22%
Rural: 6%
Public: 13%Electricity demand is expected to grow an average of 3.6% in the next few years, leading to total estimated consumption of 504 TWh and average per capita consumption of 2,527 kWh.In Brazil, capacity addition traditionally lagged behind demand growth.
Electricity demand is expected to continue to grow at a quick pace.
The income elasticity of demand for electricity is estimated by Eletrobras at above unity.
Between 1980 and 2000, electricity demand increased on average by 5.4 percent per year while GDP grew by 2.4 percent on average per year.
Investment is therefore needed to boost generation and transmission capacity because there is limited excess supply, despite the reduction in demand following the energy rationing program implemented in 2001 in response to the energy crisis.
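A rough check of the elasticity figure follows directly from the growth rates quoted above:

$$ \varepsilon \approx \frac{\%\Delta\ \text{electricity demand}}{\%\Delta\ \text{GDP}} = \frac{5.4\%}{2.4\%} \approx 2.3 > 1, $$

consistent with Eletrobras's estimate that the income elasticity of demand for electricity is above unity.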
Access to electricity
Brazil, together with Chile, has the highest electricity access rate in Latin America. The power sector in Brazil serves more than 50 million customers, which corresponds to about 97% of the country's households having access to reliable electricity.
Service quality
Interruption frequency and duration
Interruption frequency and duration are very close to the averages for the LAC region. In 2005, the average number of interruptions per subscriber was 12.5, while duration of interruptions per subscriber was 16.5 hours. The weighted averages for LAC were 13 interruptions and 14 hours respectively.
Distribution losses
Distribution losses in 2005 were 14%, well in line with the 13.5% average for the LAC region but about double those of an OECD country such as Great Britain, which has 7% distribution losses.
Responsibilities in the electricity sector
Policy and regulation
The Ministry of Energy and Mines (MME) has the overall responsibility for policy setting in the electricity sector while ANEEL, which is linked to the Ministry of Mines and Energy, is the Brazilian Electricity Regulatory Agency created in 1996 by Law 9427. ANEEL's function is to regulate and control the generation, transmission and distribution of power in compliance with the existing legislation and with the directives and policies dictated by the Central Government. The National Council for Energy Policies (CNPE), is an advisory body to the MME in charge of approving supply criteria and "structural" projects while the Electricity Industry Monitoring Committee (CMSE) monitors supply continuity and security.ANEEL and the Ministry of Environment play almost no part in which investment projects go ahead, but they only influence how projects are executed once the decision has been taken. Both have had their bosses resign rather than supporting infrastructure projects in the Amazon.The Operator of the National Electricity System (ONS) is a non-profit private entity created in August 1998 that is responsible for the coordination and control of the generation and transmission installations in the National Interconnected System (SIN). The ONS is under ANEEL's control and regulation.The Power Commercialization Chamber (CCEE), successor of MAE (Mercado Atacadista de Energia Electrica), is the operator of the commercial market. The initial role of the operator was to create a single, integrated commercial electricity market, to be regulated under published rules. This role has become more active since now CCEE is in charge of the auction system. The rules and commercialization procedures that regulate CCEE's activities are approved by ANEEL.Finally, the Power Research Company (EPE) was created in 2004 with the specific mission of developing an integrated long-term planning for the power sector in Brazil. Its mission is to carry out studies and research services in the planning of the energy sector in areas such as power, oil and natural gas and its derivates, coal, renewable energy resources and energy efficiency, among others. Its work serves as input for the planning and implementation of actions by the Ministry of Energy and Mines in the formulation of the national energy policyThe Brazilian electricity model is fully deregulated, which allows generators to sell all of their "assured energy" via freely negotiated contracts with consumers above 3 MW or via energy auctions administered by CCEE (see energy auctions below). . Under this model, distributors are required to contract 100% of their expected demand. Currently, Brazilian generation supply can be sold under four types of markets:
"Old energy"* auction contracts (long term): approximately 41% of the 2006 market
"New energy"* auction contracts (long term):delivery starts in 2008
Free-market contracts (long term): approximately 27% of 2006 market
Spot Market Sales (size uncertain)
(*) The government identifies two types of generation capacity, "old energy" and "new energy". Old energy represents existing plants that were already contracted in the 1990s, while new energy refers to energy produced by plants that have not yet been built, or by existing plants that meet certain criteria.
Generation
In Brazil, large government-controlled companies dominate the electricity sector. Federally owned Eletrobras holds about 40% of capacity (including 50% of the Itaipu dam), with state companies CESP, Cemig and Copel controlling 8%, 7% and 5% of generation capacity respectively.
Generation capacity is shared among the different companies as follows:
Source: Eletrobras, CESP, Cemig, Copel, Tractebel Energia, AES Tiete, Ministry of Energy and Mines
(1) Considering 6,300MW of Iguaçú
Currently, about 27 percent of the generation assets are in the hands of private investors. Considering the plants under construction, as well as the concessions and licenses already granted by ANEEL, this figure is expected to grow to 31 percent in the medium term and to reach almost 44 percent over 5–6 years. Private capital participation in the generation business will likely represent 50 percent of installed capacity in the years to come.
Transmission
Brazil's transmission system is growing in importance since adequate transmission capacity is essential to manage the effects of regional droughts, allowing power to be moved from areas where rainfall is plentiful to areas facing shortages. As a matter of fact, the rationing that occurred in Brazil in 2001–2002 (see The 2001–2002 crisis below) could have largely been averted if there had been adequate transmission capacity between the south (excess supply) and the southeast (severe deficit).
Transmission has remained almost exclusively under government control through both federal (Eletrobras) and state companies (mainly Sao-Paulo-CTEEP, Minas Gerais-Cemig, and Parana-Copel) until recently. However, under the new sector regulatory model, there are about 40 transmission concessions in Brazil. Most of them are still controlled by the government, with subsidiaries under federal company Eletrobras holding 69% of total transmission lines.
Source: Bear Stearns 2007
Distribution
In Brazil, there are 49 utilities with distribution concessions and about 64% of Brazilian distribution assets are controlled by private sector companies. The following table lists Brazil's most important distribution companies:
Source: Bear Stearns, 2007
Renewable energy resources
In Brazil, hydroelectricity supplies about 60% of total electricity demand. It is estimated that about 70% of the overall hydroelectricity potential of the country, is still unexploited.At the end of 2021 Brazil was the 2nd country in the world in terms of installed hydroelectric power (109.4 GW) and biomass (15.8 GW), the 7th country in the world in terms of installed wind power (21.1 GW) and the 14th country in the world in terms of installed solar power (13.0 GW) - on track to also become one of the top 10 in the world in solar energy. From 2013, Brazil started to deploy wind energy on a large scale, and from 2017, it started to deploy solar energy on a large scale, to diversify its energy portfolio and avoid the problems arising from dependence on hydroelectricity.The potential for wind energy, which is concentrated in the Northeast, is very large. Brazil's gross wind resource potential was estimated, in 2019, to be about 522 GW (this, only onshore), enough energy to meet three times the country's current demand.
PROINFA
In 2002, the government of Brazil created a Program to Foster Alternative Sources of Electric Power (PROINFA). The program aims to increase the participation of wind power sources, biomass sources and small hydropower systems in the supply of the Brazilian grid system through Autonomous Independent Producers (PIA). The medium to long-term objective (i.e. 20 years) of the program is that the defined sources supply 15% of the annual market growth until they reach 10% of the nation's annual electric power demand/total consumption.
History
Situation prior to the reforms: the state-dominated model
The power sector in Brazil was essentially in government's hands until the early 1990s. The sector had seen remarkable development in the 1970s. However, by the late 1980s, the state-ownership model was on the verge of collapse. This delicate situation was the result of heavily subsidized tariffs and a revenue shortfall in the sector of about US$35 billion, which led to the delay in the construction of about 15 large hydro plants due to lack of funds for investment. Efforts to address the deterioration of the sector were not successful, a situation that further intensified the need for deep reforms. A major commitment was made by President Cardoso to carry out a substantial reform of the Brazilian electricity sector. The first reforms introduced in the power sector were aimed to allow the participation of private capital and also to improve its economic situation.
1990s reforms
The Project for Restructuring the Brazilian Electric Sector, RESEB, which laid down the first steps for the implementation of the power sector reform, was initiated in 1996 during the administration of President Cardoso. The objective of the reform was to build a more competitive power sector with the creation of a level playing field for private sector participation. In addition, state-owned utilities and assets were privatized. Although transmission assets were not privatized, most of the expansion of the transmission network has been carried out by private capital. This reform also led to the creation, in 1996, of ANEEL (Brazil's National Electricity Regulatory Agency), a quasi-independent regulatory body in charge of overseeing the electricity sector. However, the main restructuring steps were taken with the enactment of the 1998 Law (Law 9648/98). Those steps included the creation of an independent operator of the national transmission system (ONS) and an operator of the commercial market (MAE), which did not become operational until 2001.As a result of the reforms of the power sector, new capital was attracted, both in terms of privatization and greenfield projects. Some of the state-owned generation capacity was acquired by foreign investors such as Tractebel, AES, Prisma Energy, El Paso and Duke, which became significant producers. In addition, local investors such as industrial groups, large customers, utilities and pension funds also invested heavily in the national generation sector. Other companies such as EdF (Électricité de France), Endesa and Chilectra focused on the distribution segment, a segment in which privatization brought improved quality of service and a reduction of theft, non-payments and technical losses.However, the reforms were not successful in preventing the energy crisis that was to unfold in 2001. Installed capacity expanded by only 28 percent between 1990 and 1999, whereas electricity demand increased by 45 percent. In 1999, as the power shortage was already foreseen, the President Cardoso Administration made efforts to increase private investment in the electricity sector through a Priority Thermal Power Program (PPT) that aimed at the expeditious construction of more than 40 gas-fired thermal plants. Unfortunately, the needed investment did not materialize and the crisis became unavoidable.
2001–2002 crisis and the government's response
Brazil was faced with one of the most serious energy crises in its history in 2001–2002. The crisis was the direct result of a sequence of a few years drier than average in a country with over 80% of hydroelectric generation capacity. Additionally, several delays in the commissioning of new generation plants and transmission problems in the third circuit from the Itaipu hydropower plant accounted for a third of the energy deficit. Reservoir levels reached such low levels that supply could not be ensured for more than four months.It was soon clear that strict demand reduction programs would be needed to avoid widespread blackouts. In June 2001, the government created the Crisis Management Board (CGE), chaired by President Cardoso himself. The CGE received special powers among which was the authority to set up special tariffs, implement compulsory rationing and blackouts, and bypass normal bidding procedures of the purchase of new plant equipment. Instead of resorting to rolling blackouts, the government chose to apply a quota system. Quotas were established for all the consumers based on historical and target consumption level, applying bonuses for consumption well below the prescribed level, penalties for over-consumption and some freedom for the large users to trade their quotas in a secondary market. The government's goal of reducing historical consumption levels by at least 20% for an eight-month period was successfully achieved, with the government having to pay over US$200 million in bonuses to residential, industrial, and commercial customers. This achievement allowed the system to overcome that long period without blackouts and brownouts and proved the potential of demand-side management and energy efficiency efforts, which were able to create a virtual capacity of 4,000 MW, helping the country to bridge the supply demand gap in a very economic way. In addition, the government launched a program for contracting emergency generation capacity, with bids for a total of 2,100MW of new thermal capacity accepted.However, the crisis affected numerous actors. Generators and distributors experienced a 20% reduction in their revenues due to the contraction in consumption. This situation was eventually addressed by an increase of tariffs approved by the government. The financial situation of distributors was also damaged, with customers also suffering from the increase in electricity prices (140% in nominal terms between 1995 and 2002).
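The quota mechanism can be sketched as a simple settlement rule. The 20% reduction target comes from the account above, but the bonus and penalty rates and the tariff used below are hypothetical placeholders; the actual 2001 rules were more elaborate.

```python
# Schematic settlement under the 2001 rationing quotas. The 20% reduction
# target is from the text; bonus/penalty rates and the tariff are hypothetical.

def settle(historical_kwh: float, actual_kwh: float,
           reduction_target: float = 0.20,
           bonus_rate: float = 0.05, penalty_rate: float = 0.50,
           tariff_per_kwh: float = 0.15) -> float:
    """Positive result = bonus paid to the consumer; negative = penalty charged."""
    quota = historical_kwh * (1 - reduction_target)
    if actual_kwh <= quota:
        return (quota - actual_kwh) * tariff_per_kwh * (1 + bonus_rate)   # reward under-use
    return -(actual_kwh - quota) * tariff_per_kwh * (1 + penalty_rate)    # charge over-use

print(settle(historical_kwh=300, actual_kwh=220))  # consumed well below quota -> bonus
print(settle(historical_kwh=300, actual_kwh=260))  # exceeded the quota -> penalty
```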
2003–2004 reforms: energy auctions
In January 2003, the new administration led by Luiz Inácio Lula da Silva took over among criticism of the reforms introduced in the electricity sector by the administration of President Cardoso, supporting a model in which the system should be fully regulated. The pending privatizations of three generation subsidiaries of the large state-owned utility, Eletrobras, were stopped. However, despite initial expectations, the new administration opted for a model that clearly aims to attract long-term private investment to the sector and that heavily relies on competition. In addition, the existing institutions were preserved and in some cases strengthened, with a new company, EPE, created with the specific mission of developing an integrated long-term planning for the power sector in Brazil.The new legislative framework was defined by Law 10,848/2004, which established clear, stable and transparent rules aimed at ensuring supply and the continuous expansions of the intrinsic sector activities (generation, transmission and distribution). The expansion was linked to a fair return on investments and to universal service access, together with tariff adjustments. Decree 5,081/2004 approved the regulatory framework for the power sector, specifying specific provisions to achieve the objectives of the reform.
One of the defining elements of the model adopted by the new administration is the establishment of energy auctions as the main procurement mechanism for distribution companies to acquire energy to serve their captive consumers. This initiative assisted in the introduction of competition in the power sector and also helped to address some of the existing market imperfections. Under this system, auctions of capacity from new generation projects will be held three to five years in advance of delivery dates. The Ministry of Mines and Energy wants to ensure that the totality of future expansion needs is met and that plants are only built once they have won bids in energy auctions and are guaranteed long-term contracts. The first auction was held in December 2004, with contracts for a total of about 40 GW traded.
Tariffs and subsidies
Tariffs
Average electricity tariffs for the different sectors in 2007 were as follows:
Residential: 15.3 US¢/kWh
Industrial: 11.3 US¢/kWh
Commercial: 14.2 US¢/kWh
Rural: 9.1 US¢/kWh
Investment and financing
In the last 20 years, Brazil has been one of the main recipients of private capital investment in its power sector. Total investment by private actors in the power sector between 1994 and 2006 amounted to US$56,586 million across 124 projects. However, despite Brazil's deregulation and higher tariffs in the "new energy" auction system, investment, particularly in generation, has slowed significantly. This situation is not considered to be the result of concerns about the regulatory model or auction price caps; rather, it reflects the lack of available projects. Delays in granting environmental licenses and uncertainties over Bolivian gas supply largely explain the lack of hydroelectric and gas-fired thermoelectric projects respectively.

The investment required in power generation over the next 10 years is R$40 billion, or around US$24.2 billion (April 29, 2008). This high level of investment will only be realized if the government succeeds in attracting greater private-sector investment.
Summary of private participation in the electricity sector
In Brazil, large government-controlled companies dominate the electricity sector. Federally owned Eletrobras holds about 40% of capacity (including 50% of Itaipu), while the state-owned companies CESP, Cemig and Copel control 8%, 7% and 5% of generation capacity respectively. About 27% of generation assets are currently in the hands of private investors.
Transmission has remained almost exclusively under government control, through both federal (Eletrobras) and state companies (mainly CTEEP in Sao Paulo, Cemig in Minas Gerais, and Copel in Parana), until recently. Under the new sector regulatory model, however, there are about 40 transmission companies. As for distribution, there are 49 utilities with distribution concessions, and about 64% of distribution assets are controlled by private sector companies.
Electricity and the environment
Responsibility for the environment
The Ministry of the Environment holds the environmental responsibilities in Brazil. One of its associated institutions is Ibama, the Brazilian Institute for the Environment and Renewable Natural Resources, which is in charge of executing the environmental policies set by the Ministry regarding, among other things, environmental licensing, environmental quality control, authorization of the use of natural resources, and environmental monitoring and control.
Greenhouse gas emissions
OLADE (the Latin American Energy Organization) estimated that CO2 emissions from electricity production in 2003 were 20 million tons, which corresponds to less than 7% of total emissions from the energy sector. This low contribution from electricity production, in comparison with other countries in the region, is due to the high share of hydroelectric generation.
CDM projects in electricity
Brazil hosts the largest number of CDM projects in the Latin American region. Registered projects represent 40% of the total in the region and account for 45% of Certified Emission Reductions (CERs) (up to 2012).

As for the power sector, there were 91 projects registered in March 2008, adding up to an estimated total of 9 million tons of CO2 per year. The distribution of projects by category is as follows:
Source: UNFCCC
Energy costing of Brazilian electricity
An exergoeconomic assessment accounting for the total and non-renewable unit exergy costs and specific CO2 emissions of Brazilian electricity was performed by Flórez-Orrego et al. (2014), covering thermal, nuclear, hydro and biomass-fired power plants as well as wind farms. The analysis starts from fuel obtainment and continues through the different stages of construction, fuel transportation and processing, operation and decommissioning of the plant, with electricity generation as the desired output. This approach allows the calculation of direct CO2 emissions as well as the upstream and downstream emissions, which play an important role in some technologies. In this way, a better comparison between the use of different fuels for electricity generation can be achieved. An iterative calculation procedure is used to determine the unit exergy costs of electricity and processed fuels, since both electricity and processed fuel are used in their own production routes.
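The structure of that iterative procedure can be illustrated with a minimal sketch (not the authors' code): because electricity and processed fuel each appear as inputs to the other's production route, the unit exergy costs form a coupled system that can be solved by fixed-point iteration. The function name, data layout and all numbers below are illustrative assumptions, not values from Flórez-Orrego et al. (2014).

# Minimal sketch of a fixed-point iteration for unit exergy costs.
# exergy_inputs[p][q] = kJ of product q consumed per kJ of product p produced.
# external_costs[p]   = kJ of primary (external) exergy per kJ of product p.
# All figures are placeholders chosen only to make the example converge.

def solve_unit_exergy_costs(exergy_inputs, external_costs, tol=1e-9, max_iter=1000):
    """Return unit exergy costs (kJ of primary exergy per kJ of product)."""
    costs = {p: 1.0 for p in external_costs}  # initial guess
    for _ in range(max_iter):
        new_costs = {
            p: external_costs[p]
               + sum(frac * costs[q] for q, frac in exergy_inputs.get(p, {}).items())
            for p in external_costs
        }
        if all(abs(new_costs[p] - costs[p]) < tol for p in costs):
            return new_costs
        costs = new_costs
    raise RuntimeError("unit exergy costs did not converge")

# Illustrative two-product example: processed fuel needs some electricity,
# and electricity is generated from the processed fuel.
example = solve_unit_exergy_costs(
    exergy_inputs={"electricity": {"fuel": 2.4}, "fuel": {"electricity": 0.02}},
    external_costs={"electricity": 0.1, "fuel": 1.05},
)
print(example)

In the study itself the same idea is applied across all fuel and electricity routes of the Brazilian mix; the sketch only shows the shape of the calculation.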
As expected, fossil-fired power plants present the highest specific CO2 emissions, with coal-fired power plants leading the group. However, even though fossil-fired power plants have the most marked environmental impacts, their total unit exergy costs are much lower than those of sugar cane bagasse-fired power plants. This shows that, although almost renewable, the typical configurations of sugar cane bagasse-fired power plants are far from being efficient technologies. Hydro and wind farms present the lowest specific CO2 emissions as well as the lowest unit exergy cost. Due to the high share of renewable sources in the production of electricity (near 89% of the total), the emissions of the Brazilian electricity mix are found to be 7.5 and 11.8 times lower than those of the European and world electricity mixes, respectively. Also, owing to the higher efficiency of hydroelectric power plants, which account for the major part of electricity generation in Brazil, the total unit exergy cost is lower, and thus the exergy efficiency of electricity generation is higher, compared with countries that rely on fossil fuels for electricity generation.
The total unit exergy costs of wind and natural gas-fired technologies are almost the same but, unlike wind power plants, the non-renewable unit exergy cost of gas-fired power plants is practically equal to the total cost. This result is a consequence of the efficiency assumed for wind power plants. If energy storage were taken into account for intermittent technologies such as wind farms, the total exergy cost could be slightly increased. The upstream and downstream CO2 emissions in the coal route represent a very small part of the total CO2 emissions compared with the direct emissions from coal burning in the power plant. Finally, it is pointed out that controversies related to the flooding of vast zones with complex ecosystems by dam reservoirs should be carefully analysed since, according to the results reported by Dones et al., GHG emissions could increase to levels comparable to those of combined-cycle gas power plants.
External assistance
Inter-American Development Bank
The Inter-American Development Bank (IDB) is currently (April 2008) supporting several projects and contributing to various technical assistance initiatives in the power sector in Brazil. The most relevant projects with financing from the IDB are:
The Renewable Energy Service Delivery Project is a technical cooperation that seeks to implement several pilot projects that demonstrate three promising, private-sector-led business models to provide renewable energy services to isolated communities in Brazil. The IDB supports this US$45 million technical assistance with US$2.25 million.
The Celpa Capital Investment Program aims to expand and improve Celpa's electrical distribution system, enabling the company to (i) provide electricity to new customers, mostly in rural areas; (ii) achieve productivity gains and reduce costs; and (iii) improve the quality and reliability of its distribution network. The IDB supports this US$400 million project with a US$75 million loan.
In February 2008, the IDB approved a US$95.5 million loan for the ATE III Transmission Project, a US$402 million project for the development, construction, erection, commissioning, operation and maintenance of approximately 459 kilometers of transmission lines from the State of Pará to the State of Tocantins.
World Bank
The World Bank is currently (April 2008) supporting three rural poverty reduction projects that include the provision of access to electricity services:
Rural Poverty Reduction Project in Pernambuco: US$60 million loan (10% electricity component)
Rural Poverty Reduction Project in the State of Ceara: US$50 million loan (10% electricity component)
Bahia State Integrated Project – Rural Poverty: US$54.35 million (16% electricity component)
Sources
Economist Intelligence Unit, 2007. Industry Briefing. Brazil: Energy and electricity forecast. Aug 22, 2007
Economist Intelligence Unit, 2008. Industry Briefing. Brazil: Energy and electricity profile. Jan 30, 2008
Millán, J. 2006. Entre el mercado y el estado. Tres décadas de reformas en el sector eléctrico de América Latina [Between the market and the state: three decades of reform in Latin America's electricity sector]. Chapter 3: La reforma en Brasil [The reform in Brazil]. Inter-American Development Bank.
"Capacidade Instalada de Geração Elétrica. Brasil e Mundo (2016)". Ministry of Mines and Energy. 2017. Archived from the original on 27 October 2017.
World Bank, 2007. Closing the Electricity Supply-Demand Gap. Case Study: Brazil.
See also
2009 Brazil and Paraguay blackout
Economy of Brazil
Energy policy of Brazil
Ethanol fuel in Brazil
Water supply and sanitation in Brazil
Water resources management in Brazil
Irrigation in Brazil
Environment of Brazil
History of Brazil
Notes
External links
National Regulatory Agency (ANEEL)
National System Operator (ONS)
Ministry of Environment
Brazilian Institute for the Environment and Renewable Natural Resources (Ibama)
Power Commercialization Chamber (CCEE) Archived 18 April 2008 at the Wayback Machine
Brazilian Association of Energy Traders (ABRACEEL)
Brazilian Association of Electricity Generation Companies (ABRADEE)
Brazilian Association of Electricity Generation Companies (ABRAGE)
Brazilian Association of Hydric Resources (ABRH)
Brazilian Association of Big Industrial Energy Consumers (ABRACE)
Brazilian Association of IPPs (APINE)
Electrobras Archived 27 April 2008 at the Wayback Machine
Sao Paulo Electricity Company (CESP)
Minas Gerais Energy Company (CEMIG)
Electrical Energy Research Center (CEPEL)
AES Tiete
Tractebel Energia
List of World Bank projects in Brazil
List of IDB projects in Brazil
Western Climate Initiative
Western Climate Initiative, Inc. (WCI) is a 501(c)(3) non-profit corporation which administers the shared emissions trading market between the American state of California and the Canadian province of Quebec as well as separately administering the individual emissions trading systems in the Canadian province of Nova Scotia and American state of Washington. It also provides administrative, technical and infrastructure services to support the implementation of cap-and-trade programs in other North American jurisdictions. The organization was originally founded in February 2007 by the governors of five western states with the goal of developing a multi-sector, market-based program to reduce greenhouse gas emissions; it was incorporated in its current form in 2011.
Structure
Since its reincorporation in 2011 as a non-profit corporation, WCI is governed by a Board of Directors appointed by the participating jurisdictions. Each jurisdiction appoints two voting directors to the Board.
History
The Western Climate Initiative was founded as the Western Regional Climate Action Initiative on February 26, 2007, by the governors of Arizona, California, New Mexico, Oregon and Washington. The founding agreement stated that the goal of the WRCAI was to evaluate and implement ways to reduce their states' emissions of greenhouse gases and achieve related co-benefits. These states and future participants in the initiative (collectively known as WCI "partners") also committed to set an overall regional goal to reduce emissions (set in August 2007 as 15 percent below 2005 emission levels by 2020), participate in a cross-border greenhouse gas registry to consistently measure and track emissions, and adopt clean tailpipe standards for passenger vehicles. By July 2008, the initiative had expanded to include two more U.S. states (Montana and Utah) and four Canadian provinces (British Columbia, Manitoba, Ontario and Quebec). Together, these partners comprised 20 percent of the U.S. GDP and 76 percent of the Canadian GDP.
Goals and design
The most ambitious and controversial objective of the WCI was to develop a multi-sector, market-based program to reduce greenhouse gas emissions. Detailed design recommendations for a regional cap-and-trade program to reduce greenhouse gas emissions were released by the WCI in September 2008 and July 2010. By December 2011, California and Quebec had adopted regulations based on these recommendations. (The WCI has no regulatory authority of its own.) Key administrative aspects of the regional cap-and-trade program were implemented in 2012. Power plants, refineries, and other large emitters were required to comply with the cap from 2013; other greenhouse gas emission sources, such as suppliers of transportation fuels, from 2015. Among other things, the Western Climate Initiative lays the foundation for a North American cap-and-trade program, not only in its design and implementation, but in its potential acceptance of greenhouse gas emission offsets from projects across North America.
Criticisms of WCI
Some observers described the entire project as greenwash designed to avoid committing to the Kyoto Protocol. They cited evidence that much more drastic cuts, of up to 40%, could be achieved without affecting investment yields in equities, a good indicator that such cuts would not affect economic prospects in the economy as a whole.
Partners vs. observers
Several U.S. partners, although active participants in the design of the program, announced in 2010 that they would either delay or not implement the program in their jurisdictions. The partnership was therefore streamlined to include only California and the four Canadian provinces actively working to implement the program. As of January 2012, regulations had not been issued by British Columbia, Manitoba, or Ontario, although British Columbia's carbon tax was set to increase to $30/tonne of CO2 equivalent in July 2012. Several WCI partners also remain active in the International Carbon Action Partnership, an international coordinating body for several such regional carbon trading bodies.
Alberta and Saskatchewan object to cap-and-trade and in July 2008 called WCI's plan a "cash grab by some of Canada's resource-poor provinces." However, Alberta had already legislated its own emissions trading system for large emitters in 2007. The objections seem to relate more to the reporting and disclosure requirements, which would be much higher for a North American project than for one based strictly in Alberta. Some of the states that withdrew by late 2011 also intended to develop oil shale, hydraulic fracturing of natural gas, and coal resources whose development would have broad impacts beyond climate, including on water and ocean acidification.
Until late 2011, the initiative included two types of participants: partners and observers.

For several years, the partners were the U.S. states of California, Montana, New Mexico, Oregon, Utah, and Washington, and the Canadian provinces of British Columbia, Manitoba, Ontario, and Quebec. All of the U.S. states except California withdrew in 2011. See below: Membership changes.
The observers included at various times Alaska, Colorado, Idaho, Kansas, Nevada, Wyoming, the province of Saskatchewan (which objects to WCI plans for a cap and trade system), and the Mexican states of Baja California, Chihuahua, Coahuila, Nuevo Leon, Sonora and Tamaulipas.
Membership changes
26 February 2007: Arizona, California, New Mexico, Oregon, and Washington form the Western Regional Climate Action Initiative.
24 April 2007: British Columbia joined with the five western states, turning the WCI into an international partnership.
21 May 2007: Utah became the sixth state to join the WCI when Governor Jon Huntsman Jr. signed the Initiative. Huntsman was the second Republican governor to join, after California governor Arnold Schwarzenegger.
13 June 2007: Manitoba said that it would be the second Canadian province to join the WCI.
24 September 2007: Alaska joined the WCI as an observer.
19 November 2007: The Governor of Montana announced that his state would also join.
January 2008: Montana officially joins the WCI.
18 April 2008: Quebec, previously an observer, became a partner.
18 July 2008: Ontario, previously an observer, became a partner.
2 February 2010: Arizona announces it will not implement a cap-and-trade program, particularly during the economic downturn, but maintains its membership in the WCI as a partner.
29 June 2011: California announces that enforcement of the cap will be delayed one year to allow the necessary elements of the program to be in place and fully functional. The stringency of the cap and expected amount of emission reductions, however, will remain unchanged.
6 July 2011: In order to maintain a coordinated approach to implementing the regional program, Quebec announces a one-year delay in enforcement of the cap.
18 November 2011: Arizona, Montana, New Mexico, Oregon, Utah and Washington formally leave the WCI. Participation by British Columbia was also cast into doubt, as officials indicated satisfaction with the existing carbon tax approach and had not committed to implementing a cap-and-trade system to replace it. As of December 2011, the remaining WCI partners were California and the Canadian provinces of British Columbia, Manitoba, Ontario, and Quebec.
11 May 2018: Nova Scotia becomes a WCI partner; its cap-and-trade program takes effect on 1 January 2019.
11 October 2018: Ontario passes the Cap and Trade Cancellation Act, 2018 and formally withdraws from the WCI.

After British Columbia ceased participation in emissions trading in 2018, it remained a participating jurisdiction under WCI by-laws until amendments were made. As of 2022, the participating WCI jurisdictions are the American states of California and Washington and the Canadian provinces of Nova Scotia and Quebec.
See also
List of climate change initiatives
Intergovernmental Panel on Climate Change
Midwestern Greenhouse Gas Accord
Regional Greenhouse Gas Initiative (Eastern North America)
The Climate Registry
United States Carbon Cap and Trade Program
Project Vulcan
Regional climate change initiatives in the United States
United States Climate Alliance
References
Environmentally friendly
Environmentally friendly processes, also referred to as environment-friendly, eco-friendly, nature-friendly, or green processes, are sustainability and marketing terms referring to goods and services, laws, guidelines and policies that claim reduced, minimal, or no harm upon ecosystems or the environment.

Companies use these ambiguous terms to promote goods and services, sometimes with additional, more specific certifications, such as ecolabels. Their overuse can be referred to as greenwashing. To ensure the successful meeting of the Sustainable Development Goals (SDGs), companies are advised to employ environmentally friendly processes in their production. Specifically, Sustainable Development Goal 12 measures 11 targets and 13 indicators "to ensure sustainable consumption and production patterns".

The International Organization for Standardization has developed ISO 14020 and ISO 14024 to establish principles and procedures for environmental labels and declarations that certifiers and eco-labellers should follow. In particular, these standards relate to the avoidance of financial conflicts of interest, the use of sound scientific methods and accepted test procedures, and openness and transparency in the setting of standards.
Regional variants
Europe
Products sold in European Union member states can use the EU Ecolabel pending the EU's approval. EMAS is another EU label, which signifies whether an organization's management is green, as opposed to the product. Germany also uses the Blue Angel, based on Germany's own standard.

In Europe, companies use environmentally friendly processes and eco-friendly labels, and change their guidelines, to ensure that less harm is done to the environment and to ecosystems while their products are being made. Many European companies, for example, already use EMAS labels to show that their products are environmentally friendly.
Companies
Many companies in Europe make putting eco-labels on their products a top priority, since it can result in an increase in sales. A study conducted in Europe shows a connection between eco-labels and the purchasing of fish: "Our results show a significant connection between the desire for eco-labeling and seafood features, especially the freshness of the fish, the geographical origin of the fish and the wild vs farmed origin of the fish". This suggests that eco-labels not only reflect a positive impact on the environment in the creation and preservation of products, but can also increase sales. However, not all European countries agree on whether certain products, especially fish, should carry eco-labels. The same study remarks: "Surprisingly, the country effect on the probability of accepting a fish eco-label is tricky to interpret. The countries with the highest level of eco-labeling acceptability are Belgium and France". According to the same analysis, consumers in France and Belgium are the most likely to accept these eco-labels.
North America
In the United States, environmental marketing claims require caution. Ambiguous titles such as environmentally friendly can be confusing without a specific definition; some regulators are providing guidance. The United States Environmental Protection Agency has deemed some ecolabels misleading in determining whether a product is truly "green".

In Canada, one label is that of the Environmental Choice Program. Created in 1988, the program allows only approved products to display the label.

Mexico was one of the first countries in the world to pass a specific law on climate change. The law set an obligatory target of reducing national greenhouse gas emissions by 30% by 2020. The country also has a National Climate Change Strategy, which is intended to guide policymaking over the next 40 years.
Oceania
The Energy Rating Label is a Type III label that provides information on "energy service per unit of energy consumption". It was first created in 1986, but negotiations led to a redesign in 2000.

Oceania generates the second-most e-waste per capita, 16.1 kg, while having the third-lowest recycling rate, 8.8%. Within Oceania, only Australia has a policy in place to manage e-waste: the Product Stewardship Act 2011, which aimed to manage the impact of products, mainly their disposal and the resulting waste. Under the Act the National Television and Computer Recycling Scheme (NTCRS) was created, which made manufacturers and importers of electrical and electronic equipment (EEE) importing 5,000 or more products or 15,000 or more peripherals liable and required them to pay the NTCRS for retrieving and recycling materials from electronic products.
New Zealand does not have any law that directly manages its e-waste; instead it has voluntary product stewardship schemes such as supplier trade-back and trade-in schemes and voluntary recycling drop-off points. Though this has helped, it costs the provider money, with labor making up 90% of the cost of recycling. In addition, e-waste is currently not considered a priority product, a designation that would encourage the enforcement of product stewardship. In the Pacific Island Regions (PIR), e-waste management is a hard task because the islands lack adequate land to dispose of it properly, even though, owing to their income levels and populations, they produce some of the lowest amounts of e-waste in the world. As a result, there are large stockpiles of waste that cannot be recycled safely.
The Secretariat of the Pacific Regional Environment Programme (SPREP), the organization in charge of managing the natural resources and environment of the Pacific region, is currently responsible for regional coordination and for managing the e-waste of the Oceania region. SPREP uses Cleaner Pacific 2025 as a framework to guide the various governments in the region. It also works with PacWaste (Pacific Hazardous Waste) to identify and resolve the various waste management issues on the islands, which largely stem from the lack of government enforcement and knowledge on the matter. SPREP has proposed that a mandatory product stewardship policy be put in place along with an advance recycling fee, which would incentivize local and industrial recycling. It also takes the view that the islands should collaborate and share resources and experience to assist in the endeavor.
Although the situation has improved with the help of the NTCRS, observers have been vocal that the responsibilities of stakeholders need to be more clearly defined. There are also differences between state and federal regulations: only South Australia, the Australian Capital Territory, and Victoria have banned e-waste from landfill, although a federal decision could extend this to the rest of the country. Advocates have also called for reasonable access to waste collection points, as in some cases there is only one collection point within a 100 km radius; it has been shown that the reason some residents do not recycle is their distance from a collection point. In addition, there have been few recycling campaigns. One exception is MobileMuster, a voluntary collection program managed by the Australian Mobile Telecommunications Association that has aimed to collect phones before they go to landfill since 1999. A later study found that only 46% of the public was aware of the program, a figure that increased to 74% in 2018, but only after an investment of $45 million by the Australian Mobile Telecommunications Association.
Asia
"Economic growth in Asia has increased in the past three decades and has heightened energy demand, resulting in rising greenhouse gas emissions and severe air pollution. To tackle these issues, fuel switching and the deployment of renewables are essential." However, as countries continue to advance, it leads to more pollution as a result of increased energy consumption. In recent years, the biggest concern for Asia is its air pollution issues. Major Chinese cities such as Beijing have received the worst air quality rankings (Li et al., 2017). Seoul, the capital of South Korea, also suffers from air pollution (Kim et al., 2017). Currently, Indian cities such as Mumbai and Delhi are overtaking Chinese cities in the ranking of worst air quality. In 2019, 21 of the world's 30 cities with the worst air quality were in India."
In Asia, environmentally friendly trends are marketed with a different color association, using the color blue for clean air and clean water, as opposed to green in Western cultures. Japanese- and Korean-built hybrid vehicles use the color blue instead of green throughout the vehicle, and use the word "blue" indiscriminately.
China
According to Shen, Li, Wang, and Liao, the emissions trading system China introduced was piloted in certain government-approved districts and proved successful there. This shows how China has tried to introduce innovative systems to reduce environmental harm; even where initial measures did not succeed, they led to more successful processes that benefited the environment. However, China still needs to implement policies along the lines the authors suggest: "The 'fee-to-tax' process should be accelerated, however, and the design and implementation of the environmental tax system should be improved. This would form a positive incentive mechanism in which a low level of pollution correlates with a low level of tax." Policies like these give companies a stronger incentive not to over-pollute and instead to create more eco-friendly workplaces: less pollution is emitted and the environment is cleaner, while companies, preferring lower taxes and lower costs, are encouraged to avoid polluting as much as possible.
International
Energy Star is a program with a primary goal of increasing energy efficiency and indirectly decreasing greenhouse gas emissions. Energy Star has different sections for different nations or areas, including the United States, the European Union and Australia. The program, which was founded in the United States, also exists in Canada, Japan, New Zealand, and Taiwan. Additionally, the United Nations Sustainable Development Goal 17 has a target to promote the development, transfer, dissemination, and diffusion of environmentally friendly technologies to developing countries as part of the 2030 Agenda.
See also
References
Air pollution in Turkey
In Turkey, air pollution is the most lethal of the nation's environmental issues, with almost everyone across the country exposed to levels above World Health Organization guidelines. Over 30,000 people die each year from air pollution-related illnesses, over 8% of the country's deaths. Air pollution is particularly damaging to children's health. Researchers estimate that reducing air pollution to World Health Organization limits would save seven times the number of lives that were lost in traffic accidents in 2017.
Road transport in Turkish cities and coal in Turkey are major polluters, but the main factor affecting air pollution levels is vehicle density. The number of vehicles traversing Turkey's roads has increased from 4 million in 1990 to 25 million in 2020. Additionally, ambient air quality and national emissions ceilings do not meet EU standards, and unlike in other European countries, many air pollution indicators are not available in Turkey. There is no limit on very small airborne particles (PM2.5), which cause lung diseases, and as of 2021 they had not been completely inventoried and were not officially reported. Cars and lorries emit diesel exhaust, particulates, nitrogen oxides (NOx) and other fumes in cities, but the first of several Turkish national electric cars is planned to start production in 2022. Low-quality lignite coal, burnt in cities and in the oldest of the country's coal-fired power stations, is also a big part of the problem.

In early 2020 air pollution in major cities fell significantly due to COVID-19 restrictions, but it started to rise again by the middle of the year. Right to Clean Air Platform Turkey and the Chamber of Environmental Engineers are among the organisations campaigning for cleaner air.
Sources of air pollution
Traffic
In 2019 Istanbul had a dangerously high level of NO2, over three times WHO guidelines. Although Istanbul's urban smog had cleared by early 2020, air pollution in the city increased again once COVID-19 restrictions had been eased. Increasing the proportion of electric cars in use in Turkey to 10% by 2030 would also reduce the country's greenhouse gas emissions. There are high purchase taxes on new cars, and in 2019 about 45% of cars were over 10 years old and energy-inefficient. Continued electrification of the rail network and more high-speed lines are among the countermeasures being taken. In 2020 strict enforcement of diesel truck emissions standards was suggested by Sabancı University as a way to get old, polluting vehicles off the road; as of that year, tractors had a legal exemption allowing them to burn 1,000 ppm sulfur diesel.
Heating and cooking
As of 2018, Turkish coal is still burnt for home heating in low-income districts of Ankara and some other cities, which is particularly harmful because Turkish coal is of very low quality.
Coal-fired power stations
Emissions from coal-fired power stations cause severe impacts on public health. A report from the Health and Environment Alliance (HEAL) estimates that in 2019, there were almost 5,000 premature deaths caused by pollution from coal-fired power stations in Turkey, and over 1.4 million work-days lost to illness. HEAL's Director for Strategy and Campaigns said: "Pollution from coal power plants puts everyone at risk of cardiovascular diseases, stroke, chronic obstructive pulmonary disease, lung cancer as well as acute respiratory infections. But it particularly affects those most vulnerable – pregnant women, children, the elderly, those already ill or poor."

The HEAL report estimates that the health costs of illness caused by coal-fired power stations make up between 13 and 27 percent of Turkey's total annual health expenditure (including both public and private sectors). Greenpeace Mediterranean say that the coal-fired power plants in Afşin-Elbistan pose the highest health risk of any power plants in a European country, followed by Soma power station.
Flue gas emission limits
Since January 2020 flue gas emission limits in mg/Nm3 (milligrams per normal cubic metre) have been:
These limits allow more pollution than the EU Industrial Emissions Directive. In China (which has a similar income per person), the limits for particulate matter (PM), sulfur dioxide (SO2) and NOx emissions are 10, 35, and 50 mg/m3, respectively.
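As an illustration of how concentration limits of this kind are applied, the sketch below compares measured stack concentrations against per-pollutant ceilings. It is a minimal example, not an official tool: the limit values are the Chinese figures quoted above used purely as placeholders (the Turkish limit table is not reproduced here), and the stack measurements are invented.

# Minimal sketch of a per-pollutant concentration check against regulatory limits.
# Limits below are the Chinese values quoted in the text (mg/m3); substitute the
# applicable national limits (and matching reference conditions) for real use.

CHINA_LIMITS = {"PM": 10, "SO2": 35, "NOx": 50}

def check_flue_gas(measured, limits):
    """Return pollutant -> (measured, limit, compliant?) for pollutants with limits."""
    return {
        pollutant: (value, limits[pollutant], value <= limits[pollutant])
        for pollutant, value in measured.items()
        if pollutant in limits
    }

# Example with made-up stack measurements in mg/m3:
report = check_flue_gas({"PM": 8.2, "SO2": 41.0, "NOx": 47.5}, CHINA_LIMITS)
for pollutant, (measured, limit, ok) in report.items():
    print(f"{pollutant}: {measured} (limit {limit}) -> {'OK' if ok else 'EXCEEDED'}")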
Passive smoking
More than a quarter of adults smoke in Turkey, and secondhand smoking, also known as passive smoking, is a danger in itself and increases the risk of respiratory infection.
Industry and construction
Air pollution from cement production is one of the environmental impacts of concrete. Although asbestos was completely banned in 2010, it can still be a risk when older buildings are demolished, in dumps, and in buildings in some rural areas where it occurs naturally.
Types and levels
Levels across the country are above World Health Organization guidelines. There is no limit on PM2.5 and limits for other pollutants (except SO2) are above WHO guidelines:
Although there is some monitoring of air pollution, many air pollution indicators are not available. The air quality index in Turkey does not include particles smaller than 2.5 microns (PM 2.5), but does include nitrogen dioxide, sulfur dioxide, carbon monoxide, tropospheric ozone and particles between 10 and 2.5 microns in diameter (PM10). According to the OECD Turkey plans to meet EU limits by 2024.
Particulates
As in other countries, particulates, such as those from the tyre wear of vehicles in cities, are a danger to people's lungs. Regulations in Turkey do not contain restrictions on particles less than 2.5 microns in diameter (PM2.5), which cause lung diseases. As of 2016 the average PM2.5 concentration was 42 μg/m3, whereas the World Health Organization (WHO) guideline is 10 μg/m3, and concentrations are at dangerous levels in Batman, Hakkari, Siirt, Iğdır, Afyon, Gaziantep, Karaman, and Isparta.
Nitrogen oxides
Asthma is expensive to treat and can be caused by nitrogen oxides. NO2 in cities such as Ankara is visible from satellites. Existing diesel vehicles emit NOx and other air pollutants in their exhaust in cities, but the first model of the Turkish national electric car is planned to start production in 2022.
Sulfur dioxide
Emissions are mostly from coal-fired power stations; they rose 14% in 2019 to over a megatonne, out of a world total of 29 megatonnes. Kemerköy power station and the Afşin-Elbistan power stations polluted their surrounding areas with 300 kilotonnes each in 2019.
Volatile organic compounds
As of 2014 levels of volatile organic compounds (VOCs) in Istanbul were on average similar to those in London and Paris but more variable, with maxima usually exceeding 10 ppb.
Persistent organic pollutants
The emission levels of persistent organic pollutants are regulated, but totals for these emissions were not reported in 2019.
Greenhouse gases
As of 2018 Turkey emits one percent of the world's greenhouse gas emissions. Because most of the air pollution is caused by burning fossil fuels, greenhouse gas emissions by Turkey would also be reduced by, for example, low emission zones for city traffic, and replacing the distribution of free coal with a different support for poor families. In other words, helping to limit climate change would be a co-benefit of the main health benefits, and health improvement would be a co-benefit of climate change mitigation.
Monitoring and reporting
In 2018 air quality data was available on the website of the Ministry of Environment and Urbanization for 16% of districts and the ministry plans for it to be available for all districts by 2023, increasing the number of monitoring stations to 380. In 2023 the Right to Clean Air Platform said that half of the 360 monitoring stations were not working properly. The Chamber of Environmental Engineers publishes a report every year based on this data. The ministry also continuously monitors smokestack emissions from 305 power plants and industrial sites to ensure they do not surpass the limits, but this data is not published as Turkey has not ratified the Gothenburg Protocol on air pollution. There are hourly, daily and yearly average limits for various pollutants in the area around a coal-fired power station, defined as a radius 50 times the chimney height:
Some industrial companies reach Global Reporting Initiative (GRI) 305 emissions standard.
Medical dangers
About 8% of all deaths have been estimated to be due to air pollution, although estimates of annual excess mortality vary between 37,000 and 60,000. The Right to Clean Air Platform estimates at least 48,000 early deaths in 2021. Air pollution is a health risk mainly due to the burning of fossil fuels, such as coal and diesel. Researchers estimate that reducing air pollution to World Health Organization limits would save seven times the number of lives that were lost in traffic accidents in 2017. Although in many places the health effects of air pollution cannot be estimated because there is not enough monitoring of PM10 and PM2.5 particulates, the average excess loss of life (compared with what would be lost if WHO air pollution guidelines were followed) is estimated to be 0.4 years per person; this will vary by location because, as of 2019, air pollution is severe in some cities. In general it increases the risk from respiratory infections, such as COVID-19, especially in highly polluted cities such as Zonguldak, but this is disputed for some places and more research is needed.
Cities
Many cities in Turkey are more polluted than typical European cities. For example, the capital of neighbouring Bulgaria is introducing a low-emission zone and restricting coal and wood burning.
Istanbul
Pollution has lessened since the 1990s. But as of 2019, measured with the air quality index, Istanbul's air affects the hearts and respiratory systems even of healthy individuals during busy traffic. NO2 is visible in measurements by Orbiting Carbon Observatory 3.
Bursa
As of 2020, industry located within the city of Bursa is a particular problem, and it is said to have the worst air pollution in the country. Breathing the air there is equivalent to smoking 38 packs of cigarettes a year. NO2 is visible in satellite measurements.
Ereğli
A higher rate of multiple sclerosis may be related to local industry in Ereğli.
Relationship to climate change
Some of the sulfur compounds emitted from the chimneys of Turkey's coal-fired power stations become stratospheric sulfur aerosols, a type of short-lived climate forcer that reflects sunlight back into space. However, this cooling effect is temporary, as short-lived climate forcers are almost all gone from the atmosphere after 30 years. Significant amounts of coal were burnt over 30 years ago, so the effect of that burning on global warming is now dominated by CO2, even though there were no limits on sulfur compounds until 2004. Between 2004 and 2020, the limit on concentrations of sulfur compounds in flue gas was greatly reduced.
Politics
The Climate Change and Air Management Coordination Board is responsible for coordination between government departments. As of 2019, however, according to the EU, better coordinated policies need to be established and implemented.
Economics
The impact of air pollution on the economy via damage to health may be billions of dollars, and an attempt to estimate this more precisely began in 2019. A study of 2015-16 hospital admissions in Erzincan estimated direct costs of air pollution as 2.5% of the total health-related expenditures for the 15–34 and over 65 age groups, but stated that the total cost is likely much higher: for example, the economic costs of the reductions in the intelligence of adults and children have not been estimated. According to medical group Health and Environment Alliance (HEAL), reducing PM 2.5 air pollution in the country would substantially increase GDP. According to the OECD, in 2019 bitumen's exemption from special consumption tax was a subsidy of 5.9 billion lira. Bitumen, also known as asphalt, is used for road surfaces and in hot weather releases secondary organic aerosols, which can damage people's health in cities.
International
As of 2019, ambient air quality and national emissions ceilings are not up to EU standards. As of 2020 Turkey has not ratified the Gothenburg Protocol, although it has ratified the original Convention on Long-Range Transboundary Air Pollution and those reports are public. Pollution affects neighbouring countries. The Armenian Nuclear Power Plant, 16 km over the border, is old and said to be insufficiently earthquake proof and vulnerable to military attack.
Proposed solutions
In the Constitution of Turkey, Article 56 reads, "Everyone has the right to live in a healthy and balanced environment. It is the duty of the State and citizens to improve the natural environment, to protect the environmental health and to prevent environmental pollution."

According to the Eleventh Development Plan (2019–2023), all districts will be monitored by 2023 and:
Air quality management practices will be enabled to prevent air pollution from production, heating and traffic, and air quality will be improved by controlling emissions.
Air quality action plans will be prepared at local level and legislation on pollution and emission control will be updated.
Air quality management capacity will be improved by strengthening regional clean air centres.
Research on air quality modelling and monitoring will be conducted and infrastructure will be developed.
Quitting coal is said to be essential, and the market share of diesel cars is falling. Strengthening environmental laws is said to benefit the economy of Turkey. The Ministry of the Environment has drafted a law limiting PM2.5, but it has not yet been passed. According to HEAL, over 500 premature deaths could be avoided per year by shutting down three power stations in Muğla.

Electric ferryboats have been proposed for the Bosphorus. A low-emission zone for road traffic has been suggested for Istanbul, and changing Turkey's vehicle tax system to better charge for pollution has also been proposed. More green space is suggested for cities. Seven regional clean air centers have been created, and the deputy environment minister said in 2020 that low-emission areas would be created and bike lanes increased.
History
Lead was first smelted around 5000 BC in Anatolia and in 535 AD Justinian I acknowledged the importance of clean air. In the 19th century air pollution was thought of in terms of miasma, the idea that foul smells could cause disease. Due to the high cost of oil after the 1970s oil crisis, cities burnt more lignite for residential heating. An Air Pollution Control Regulation was issued in the 1980s and air quality monitoring began in that decade. In early 2020 most air pollution in major cities fell significantly due to the COVID-19 restrictions, but tropospheric ozone (a leading cause of smog) increased as there were fewer particles to block the sunlight. Air pollution started to rise again by the middle of the year.
References
Sources
"Turkey 2019 Report" (PDF). European Commission (EC). May 2019.The European environment — state and outlook 2020 (Report). European Environment Agency (EEA). 2019.Jensen, Genon K. (December 2018). Lignite coal – health effects and recommendations from the health sector (PDF) (Report). Health and Environment Alliance (HEAL).Saygın, Değer; Tör, Osman Bülent; Teimourzadeh, Saeed; Koç, Mehmet; Hildermeier, Julia; Kolokathis, Christos (December 2019). Transport sector transformation: Integrating electric vehicles into Turkey's distribution grids (PDF). SHURA Energy Transition Center (Report). Archived from the original (PDF) on 2020-08-01. Retrieved 2019-12-26.Air Pollution Report 2018 (Report). Chamber of Environmental Engineers.Hava Kirliliği Raporu 2019 [Air Pollution Report 2019] (PDF) (Report) (in Turkish). Chamber of Environmental Engineers.Türki̇ye'ni̇n Enerji̇ Görünümü 2020 [Turkey's energy overview 2020] (PDF) (in Turkish). TMMOB Maki̇na Mühendi̇sleri̇ Odasi. May 2020. ISBN 978-605-01-1367-9.Aytaç, Orhan (May 2020). Ülkemi̇zdeki̇ Kömür Yakitli Santrallar Çevre Mevzuatiyla uyumlu mu? [Are Turkey's coal-fired power stations in accordance with environmental laws?] (PDF) (Report) (in Turkish). TMMOB Maki̇na Mühendi̇sleri̇ Odasi.OECD (2021). OECD Economic Surveys: Turkey 2021 (Report). OECD. ISSN 1999-0480.IEA (March 2021). Turkey 2021 – Energy Policy Review (Technical report). International Energy Agency.
External links
Official Air Quality Map
Methane map
Turkey fact sheets European Environment Agency
Chamber of Environmental Engineers
2019 World Clean Air Congress in Istanbul
Regulation of greenhouse gases under the Clean Air Act
The United States Environmental Protection Agency (EPA) began regulating greenhouse gases (GHGs) under the Clean Air Act ("CAA" or "Act") from mobile and stationary sources of air pollution for the first time on January 2, 2011. Standards for mobile sources have been established pursuant to Section 202 of the CAA, and GHGs from stationary sources are currently controlled under the authority of Part C of Title I of the Act. The basis for the regulations was upheld by the United States Court of Appeals for the District of Columbia Circuit in June 2012.

Various regional climate change initiatives in the United States have been undertaken by state and local governments, in addition to federal Clean Air Act regulations.
History
Initial petition and initial denial
Section 202(a)(1) of the Clean Air Act requires the Administrator of the EPA to establish standards "applicable to the emission of any air pollutant from…new motor vehicles or new motor vehicle engines, which in [her] judgment cause, or contribute to, air pollution which may reasonably be anticipated to endanger public health or welfare" (emphasis added). On October 20, 1999, the International Center for Technology Assessment (ICTA) and several other parties (petitioners) petitioned the EPA to regulate greenhouse gases from new motor vehicles under the Act. The petitioners argued that carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), and hydrofluorocarbons meet the definition of an air pollutant under section 302(g) of the Act, and that statements made by the EPA, other federal agencies, and the United Nations Intergovernmental Panel on Climate Change (IPCC) amounted to a finding that these pollutants are reasonably anticipated to endanger public health and welfare. Based on these factors, the petitioners asserted that EPA had a mandatory duty to regulate greenhouse gases under section 202 of the Act, and asked the agency to carry out that duty.On September 8, 2003, EPA denied the ICTA petition on the ground that it did not have authority under the CAA to promulgate regulations to address global climate change and that CO2 and other GHGs therefore could not be considered air pollutants under the provisions of the CAA, including section 202. The agency further stated that even if it had the authority to regulate GHGs from motor vehicles, it would decline to do so as a matter of policy. The agency maintained that regulating motor vehicle GHG emissions would neither address the global problem effectively, nor be consistent with President Bush's policies for addressing climate change, which centered on non-regulatory efforts such as voluntary reductions in GHGs, public-private partnerships aimed at reducing the economy's reliance on fossil fuels, and research to probe into scientific uncertainties regarding climate change.
Supreme Court requires regulation
The agency's action on the ICTA petition set the stage for a prolonged legal battle, which was eventually argued before the Supreme Court on November 29, 2006.
In a 5–4 decision in Massachusetts v. Environmental Protection Agency, the Supreme Court held that "greenhouse gases fit well within the Act's capacious definition of 'air pollutant' " and that EPA therefore has statutory authority to regulate GHG emissions from new motor vehicles. The court further ruled that "policy judgments have nothing to do with whether greenhouse gas emissions contribute to climate change and do not amount to a reasoned justification for declining to form a scientific judgment." In EPA's view, this required the agency to make a positive or negative endangerment finding under Section 202(a) of the CAA.
EPA endangerment findings
On December 7, 2009, the EPA Administrator found that under section 202(a) of the Clean Air Act greenhouse gases threaten both the public health and the public welfare, and that greenhouse gas emissions from motor vehicles contribute to that threat. This final action has two distinct findings, which are:
1) The Endangerment Finding, in which the Administrator found that the mix of atmospheric concentrations of six key, well-mixed greenhouse gases threatens both the public health and the public welfare of current and future generations. These six greenhouse gases are: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6). These greenhouse gases in the atmosphere constitute the "air pollution" that threatens both public health and welfare.
2) The Cause or Contribute Finding, in which the Administrator found that the combined greenhouse gas emissions from new motor vehicles and motor vehicle engines contribute to the atmospheric concentrations of these key greenhouse gases and hence to the threat of climate change.
The EPA issued these endangerment findings in response to the 2007 Supreme Court case Massachusetts v. EPA, in which the court determined that greenhouse gases are air pollutants under the Clean Air Act. The court held that the EPA must determine whether greenhouse gas emissions from new motor vehicles "cause or contribute to air pollution which may reasonably be anticipated to endanger public health or welfare, or whether the science is too uncertain to make a reasoned decision" (EPA's Endangerment Finding).
The EPA determined that, according to this decision, there are six greenhouse gases that need to be regulated. These include:
carbon dioxide (CO2)
methane (CH4)
nitrous oxide (N2O)
hydrofluorocarbons (HFCs)
perfluorocarbons (PFCs)
sulfur hexafluoride (SF6)

This action allowed the EPA to set the greenhouse gas emission standards for light-duty vehicles proposed jointly with the Department of Transportation's Corporate Average Fuel Economy (CAFE) standards in 2009.
Advocacy group opinion
The Center for Biological Diversity and 350.org had requested earlier in December that the EPA set the NAAQS for carbon dioxide at no greater than 350 ppm. In addition to the six pollutants mentioned in the lawsuit, they proposed that nitrogen trifluoride (NF3) be added as a seventh regulated greenhouse gas.
Inflation Reduction Act of 2022
In June 2022, the Supreme Court found in West Virginia v. EPA that Congress had not authorized the EPA to regulate through "outside the fence" options, such as requiring a shift to renewable sources, when regulating power plants, as the EPA had proposed in the Obama administration's Clean Power Plan. The Court did acknowledge that, per Massachusetts, the EPA could still regulate carbon dioxide as a pollutant under the CAA. As a result, building upon an economic stimulus bill to support Joe Biden's policies, Congress passed the Inflation Reduction Act of 2022 in August of that year. In its language, the bill specifically identifies carbon dioxide and other greenhouse gases earlier defined by the EPA as regulated pollutants under the EPA's remit. The bill also gives the EPA more than $27 billion in funding for regulation under the CAA, through a green bank for carbon dioxide and direct grants for methane.
Timeline
Contributors to and consequences of climate change
Physical and social contributors to climate change
"Suddenly, combustion of fossil fuels for transportation, power generation, industrial processes, and heating our homes has been abruptly transformed from a great solution to a huge problem…. We have based our cities, or businesses, and our lifestyle on the convenience and relatively low cost of gasoline, coal-fired electricity, and plastic. It's no wonder that the specter of global warming appears so devastating: it threatens the roots of our culture."While there is a general consensus among the scientific community that anthropogenic GHG emissions are forcing changes in the global climate system, there is much less agreement about what should be done to address the problem because both the causes and potential solutions involve significant economic, political, and as Ingrid Kelley points out, social and cultural issues. Addressing climate change has been, and will continue to be particularly difficult for the United States because the U.S. was born during the Industrial Revolution and its growth has been powered by fossil fuels. Coal, for example, was first commercially mined in the United States in 1748, and within a few years after the ratification of the U.S. Constitution, Pittsburgh became the first industrial center in the country to use coal-fired steam power in its manufacturing operations. Electric generating plants first used pulverized coal in 1918, and that same basic method of generation is still in use today. In 2009, coal was used to produce 45% of electricity in the U.S. and that proportion is projected to remain more or less constant through 2035. Largely because of the Country's reliance on coal, electricity generation accounted for 34% of U.S. GHG emissions in 2007, followed by transportation sources (28%) and other industrial sources (19%). In 2005, the United States emitted 18% of the world's total GHG emissions, making it the second largest emitter after China. While the technology that powers our "fossil fuel culture" is part of the climate change problem, and more sustainable technologies will undoubtedly be part of the solution, another important factor in our efforts to stop climate change is the "blindness" that our economic system exhibits toward environmental degradation.According to Al Gore, "[f]ree market capitalist economics is arguably the most powerful tool used by civilization" but it is also, "the single most powerful force behind what seem to be irrational decisions about the global environment." The force to which Gore refers draws strength from the fact that environmental amenities such as clean air and other public goods do not have a price tag and are feely shared by everyone. The lack of price mechanisms to signal the scarcity of such goods means there are no economic incentives to conserve them. In what is known as the tragedy of the commons, rational individuals are motivated to over-utilize these resources for short-term economic gain despite the potential for collective action to deplete the resource or result in other adverse consequences in the long run. What this means for GHGs is that their effects on climate stability are often disregarded in economic analyses as externalities. As a frequently used measure of a country's well-being, the GDP, for example, places value on the goods and services produced within a country but fails to account for the GHG emissions and other environmental effects created in the process. 
Nations are thus incentivized to utilize their resources and promote consumption at ever increasing rates despite the consequences of those actions on atmospheric concentrations of GHGs and climate stability. In this respect, the ongoing UN climate change negotiations are fundamentally about the economic future of the nations involved.
Greenhouse gas effect on public health and welfare
On December 15, 2009, EPA Administrator Lisa P. Jackson made two important findings under section 202(a) of the CAA:
that six greenhouse gases in the atmosphere – CO2, CH4, N2O, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride – may reasonably be anticipated to endanger both public health and public welfare; and
that GHG emissions from mobile sources covered under CAA section 202(a) contribute to the total greenhouse gas air pollution, and thus to the problem of climate change.

In the Agency's view, the body of work produced by the IPCC, the U.S. Global Change Research Program, and the National Research Council of the U.S. National Academy of Sciences represents the most comprehensive, advanced, and thoroughly reviewed documents on the science of climate change. Accordingly, the Administrator relied primarily upon assessment reports and other scientific documents produced by these entities in reaching her conclusions.

The IPCC defines "climate change" as "a change in the state of the climate that can be identified (e.g. using statistical tests) by changes in the mean and/or the variability of its properties, and that persists for an extended period, typically decades or longer. It refers to any change in climate over time, whether due to natural variability or as a result of human activity." In its latest assessment on climate change, the IPCC found that "warming of the climate system is unequivocal" and that the anthropogenic component of this warming over the last thirty years has likely had a discernible influence on observed changes in many physical and biological systems. The properties of greenhouse gases are such that they retain heat in the atmosphere, which would otherwise escape to space. GHGs accumulate in the atmosphere when they are emitted faster than they can be naturally removed, and that accumulation prompts changes in the climate system. Once emitted into the atmosphere, GHGs influence the Earth's energy balance for a period of decades to centuries. Consistent with their long-lasting impacts, the IPCC found that even if the concentrations of all GHGs had been kept constant at year 2000 levels, a further warming of approximately 0.1 °C per decade would be expected. The fact that GHGs are long-lived in the atmosphere also means that they become well mixed across the globe – hence the global nature of the problem.

After considering the scientific evidence before her, Administrator Jackson found that greenhouse gases could be reasonably anticipated to endanger the health of the U.S. population in several ways. They are:
Direct temperature effects – Heat is the leading cause of weather-related deaths in the U.S. and severe heat waves are projected to intensify over the portions of the country where these events already occur.
Effects on air quality – Ground level ozone (the main component of smog) can induce chest pain, coughing, throat irritation, and congestion, and it can exacerbate respiratory illnesses such as bronchitis, emphysema, and asthma. Elevated temperatures associated with climate change are expected to intensify ground level ozone formation in polluted areas of the U.S.
Effects on extreme weather events – The IPCC reports evidence of an increase in intense tropical cyclone activity in the North Atlantic since approximately 1970. Increases in tropical cyclone intensity are associated with increased death, injury, water- and food-borne disease, and post-traumatic stress disorder.
Effects on climate-sensitive diseases – Expected changes in the climate will likely increase the spread of food- and water-borne pathogens among susceptible populations.

Administrator Jackson also found that GHGs could be reasonably anticipated to endanger public welfare in the following ways:
Agriculture – While higher atmospheric CO2 concentrations may stimulate plant growth, climatic changes may also promote the spread of pests and weeds, increase ground level ozone formation (which is detrimental to plant life), and change temperature and precipitation patterns. Uncertainty remains about the extent to which these factors will balance each other but the evidence suggests a net disbenefit, with the potential for future crop failure.
Forestry – As with agriculture, uncertainties remain but there is evidence of an increase in the size and occurrence of wildfires, insect outbreaks, and tree mortality in parts of the U.S. These effects are expected to continue with future changes in climate.
Water resources – The effects of climate change on the water cycle have already been observed. For example, there is "well-documented evidence of shrinking snowpack due to warming" in the western U.S. These changes in snowfall are likely to affect areas such as California that rely on snowmelt for their water supply. Climate change is also expected to impact the water supply in other areas of the country, increasing competition for its use.
Sea level rise – The greatest risk to the U.S. associated with sea level rise is the extent to which it will exacerbate storm-surge flooding. Areas along the Atlantic and Gulf coasts including New Orleans, Miami, and New York City are particularly vulnerable to such effects.
Energy – Climate change is expected to increase peak electricity demand. This may further constrain water resources as power plants rely heavily on water for cooling. A large portion of U.S. energy infrastructure is located in coastal areas and may be at risk to damage from flooding.
Ecosystems and wildlife – Changes in habitat range, timing of migration, and reproductive behavior have already been observed and are expected to increase with further warming. Ocean warming and acidification are expected to impair marine species such as corals, and the loss of arctic sea ice will reduce habitat for a number of species. Spruce-fir forests are, "likely to disappear from the contiguous United States."
Regulatory approaches under the Clean Air Act
From mobile sources
LDV Rule (2010)
EPA's endangerment finding in 2009 did not impose any limitations on GHGs by itself, but was instead a prerequisite for establishing regulations for GHGs from mobile sources under CAA section 202(a). Actual emissions requirements came later on May 7, 2010, when the EPA and the National Highway Traffic Safety Administration finalized the Light-Duty Vehicle Greenhouse Gas Emission Standards and Corporate Average Fuel Economy Standards Rule (LDV Rule). The LDV Rule applies to light-duty vehicles, light-duty trucks, and medium-duty passenger vehicles (e.g., cars, sport utility vehicles, minivans, and pickup trucks used for personal transportation) for model years 2012 through 2016. The EPA estimated that the rule will keep 960 million metric tons of CO2-equivalent emissions out of the atmosphere and save 1.8 billion barrels of oil over the lifetime of the vehicles subject to the rule.

The LDV Rule accomplishes its objectives primarily through a traditional command-and-control approach. The most substantial requirements come in the form of two separate CO2 standards (one for cars and the other for trucks, expressed on a gram per mile basis) that apply to a manufacturer's fleet of vehicles. To determine compliance with the requirements of the rule, manufacturers must calculate a production-weighted fleet average emissions rate at the end of a model year and compare it to a fleet average emission standard. The emission standard for a manufacturer in a given model year is calculated based on the footprints of the vehicles in its fleet and the number of vehicles produced by the manufacturer at each footprint. The standards are designed so that they gradually become more stringent each year from 2012 to 2016. The LDV Rule also includes standards of 0.010 gram/mile and 0.030 gram/mile for N2O and CH4, respectively. These standards were put in place primarily as an anti-backsliding measure, as N2O and CH4 are already emitted from motor vehicles in relatively low amounts.

Although prescriptive regulation is among the most common policy approaches for environmental regulation, it is not without its drawbacks. Prescriptive regulations are often criticized, for example, as being overly rigid and uneconomical because they do not necessarily encourage regulated entities to reduce emissions beyond the minimum standards and they may not achieve the intended benefits at the lowest possible cost. The LDV Rule addresses these criticisms in a number of ways. First, as noted above, the emissions standards gradually become more stringent over time. This not only prevents the stagnation that might occur with a single standard, it gives manufacturers sufficient lead time to adapt to the most stringent requirements. In addition, the LDV Rule includes a number of regulatory flexibilities, the most significant of which is a program that allows for banking and trading of credits. In general terms, the LDV Rule allows manufacturers to bank emissions credits in instances where their fleet average CO2 emissions are less than the applicable standard. Manufacturers can then use the credits themselves where certain vehicle models fall short of the standard, or they can sell the credits to another manufacturer. According to EPA, these banking and trading provisions promote the environmental objectives of the rule by addressing issues associated with technological feasibility, lead time, and cost of compliance with the standards.
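The fleet-averaging arithmetic described above can be illustrated with a short sketch in Python. The model mix, emission rates, footprint-based targets, and lifetime-mileage figure below are hypothetical placeholders, not values from the rule; the sketch only shows how a production-weighted fleet average is compared against a production-weighted standard and converted into credits.

# Illustrative sketch of the LDV Rule's fleet-averaging arithmetic (hypothetical numbers).
fleet = [
    # (vehicles produced, measured CO2 g/mile, footprint-based target g/mile)
    (400_000, 235.0, 250.0),  # small car model
    (250_000, 290.0, 280.0),  # midsize car model
    (150_000, 330.0, 320.0),  # light truck model
]

total_units = sum(units for units, _, _ in fleet)

# Production-weighted fleet average of actual emission rates
fleet_average = sum(units * actual for units, actual, _ in fleet) / total_units

# Production-weighted standard derived from each model's footprint-based target
fleet_standard = sum(units * target for units, _, target in fleet) / total_units

# Credits (in megagrams of CO2) scale with the surplus or shortfall, the production
# volume, and an assumed per-vehicle lifetime mileage (placeholder value).
ASSUMED_LIFETIME_MILES = 190_000
credits_mg = (fleet_standard - fleet_average) * total_units * ASSUMED_LIFETIME_MILES / 1_000_000

print(f"fleet average:  {fleet_average:.1f} g/mile")
print(f"fleet standard: {fleet_standard:.1f} g/mile")
print(f"credit balance: {credits_mg:,.0f} Mg CO2 (positive = bankable)")

A positive balance could be banked for later years or sold to another manufacturer; a negative balance would have to be covered with banked or purchased credits.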
State regulation from motor vehicles
With one exception, the responsibility for regulating emissions from new motor vehicles under the Clean Air Act rests with the EPA. Section 209(a) of the Act states in part: "No state or any political subdivision thereof shall adopt or attempt to enforce any standard relating to the control of emissions from new motor vehicles or new motor vehicle engines subject to this part." Section 209(b) of the Act provides for the exception; it grants the EPA the authority to waive this prohibition for any state that had adopted emissions standards for new motor vehicles or engines prior to March 30, 1966. California is the only state that meets this eligibility requirement and is thus the only state in the nation that can seek a waiver from the EPA. In order to obtain a waiver and establish its own emissions requirements, the State must demonstrate, among other things, that its standards will be at least as protective of public health as any applicable federal standards. Once California obtains a waiver for a particular standard, other states may generally adopt that standard as their own.
On September 24, 2004, the California Air Resources Board (CARB) adopted emissions standards for GHGs from new passenger cars, light-duty trucks and medium-duty vehicles. Not unlike the LDV Rule, California's regulations establish standards for CO2 equivalent emissions from two classes of vehicles on a gram per mile basis. Also like those in the LDV Rule, California's standards become more stringent over time. CARB initially requested a waiver from EPA for these standards on December 21, 2005. EPA denied that request on March 6, 2008, stating that the State did not need the standards to address compelling and extraordinary conditions (as is required by CAA section 209(b)(1)(B)) because the effects of climate change in California were not extraordinary compared to the effects in the rest of the country. Upon reconsideration, EPA later withdrew its previous denial and approved California's waiver request on July 8, 2009. Fifteen states have adopted California's standards. Further, California and EPA have worked together so that the two programs converge and allow automakers to produce a single national fleet that complies with both programs.

State policies outside the Clean Air Act, such as gasoline tax rates, also affect emissions. As of 2020, several states in the northeastern United States were discussing a regional cap and trade system for carbon emissions from motor vehicle fuel sources, called the Transportation Climate Initiative. In 2021, Massachusetts withdrew from this initiative, citing as one of the reasons that it was no longer necessary.
From stationary sources
"New Source Review" (NSR) is a permitting program established by the CAA, which requires the owners or operators of "major" stationary sources of air pollution to obtain permits prior to the construction or modification of those sources. The major source NSR program has two parts:
the Prevention of Significant Deterioration (PSD) Program, which applies to a) sources located in areas of the country that meet the National Ambient Air Quality Standards (NAAQS), and b) to pollutants for which there are no NAAQS; and
the Nonattainment New Source Review (NNSR) program, which applies to sources located in areas that do not meet the NAAQS.

PSD permits are issued by EPA or a state or local government agency, depending on who has jurisdiction over the area in which the facility is located. In order to obtain a PSD permit, applicants must demonstrate that the proposed new major source, or major modification to an existing source, meets several regulatory requirements. Among those requirements are the use of the Best Available Control Technology (BACT) to limit air pollutant emissions, and a demonstration through air quality modeling that the source or modification will not cause or contribute to a violation of the NAAQS.
Under federal regulations, the PSD program applies only to sources that emit one or more "regulated NSR pollutants." In 2008, EPA Administrator Stephen Johnson issued a memorandum to document the Agency's interpretation of this regulatory text. In particular, Administrator Johnson stated that a pollutant becomes a "regulated NSR pollutant" when a provision of the Act, or regulations established under the Act, require actual control of that pollutant but not when the Act or such regulations simply require monitoring or reporting of emissions of that pollutant. Upon request to reconsider this interpretation, EPA Administrator Lisa Jackson confirmed that the Agency would continue to apply the interpretation expressed in the 2008 memorandum, but she further clarified that a pollutant becomes a "regulated NSR pollutant" when the requirements that control emissions of the pollutant take effect, rather than upon the promulgation of those requirements. Because the LDV Rule requires vehicle manufacturers to meet applicable GHG standards for model year 2012 vehicles, and January 2, 2011, is the first day upon which model year 2012 vehicles can be introduced into commerce, the six GHGs regulated by that rule became regulated NSR pollutants as of January 2, 2011, for purposes of the PSD program.

Among the components of the PSD program, the one that primarily applies to GHGs is the requirement that source owners or operators utilize BACT to limit GHG emissions from the source. Permitting authorities generally establish BACT through a five-step analytical process, the result of which is the selection of one or more methods to reduce emissions of the pollutant in question, and the setting of one or more emission limits and operational restrictions for the emissions units undergoing review. Because the PSD program and its requirement to utilize BACT apply to any new source or modification at an existing source that meets established applicability criteria, the PSD program represents a traditional command-and-control approach to the regulation of GHGs. However, BACT is established by a permitting authority on a case-by-case basis considering site- and source-specific factors. During the course of a BACT analysis, a permitting authority may, for example, temper the stringency of the final emission limit if there are compelling adverse economic, energy, or environmental considerations.
The PSD program is complex, and obtaining a permit can be costly for both the applicant and the permitting authority. It has been estimated, for example, that permit applicants expend 866 hours of labor and $125,120 on the average PSD permit. The administrative cost to the permitting authority for the same permit is 301 hours of labor and $23,280. Traditionally, this permitting process has been focused on controlling emissions from large industrial sources of air pollution such as fossil fuel-fired power plants, petroleum refineries, and a wide range of manufacturing plants that emit more than 250 tons per year of the regulated pollutants (in some cases the applicability threshold is 100 tons per year). Prior to the regulation of GHGs under the CAA, approximately 280 such permits were issued each year. However, GHGs are generally emitted from sources in amounts far greater than other pollutants regulated under the PSD program – so much so that sources like office buildings and large shopping malls could easily cross the 250 ton per year threshold and become subject to PSD permitting requirements. In fact, without any action to change how the PSD program is applied, the EPA estimated that as many as 41,000 sources might require permits every year with the addition of GHGs as a regulated pollutant. To prevent the unbearable administrative burdens on permitting authorities associated with such "absurd results," the EPA took action on June 3, 2010, to modify the applicability criteria in the PSD regulations. This action is known as the Prevention of Significant Deterioration and Title V Greenhouse Gas Tailoring Rule (Tailoring Rule). Through the Tailoring Rule, EPA raised the major source regulatory threshold for GHGs from the 100/250 ton per year levels to 100,000 tons per year of CO2 equivalent emissions.
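A back-of-the-envelope calculation using only the per-permit figures and permit counts quoted above conveys the scale of the burden the Tailoring Rule was meant to avoid. It assumes, purely for illustration, that per-permit costs would stay constant as the program expanded, which EPA's own analysis did not assume.

# Rough comparison of annual PSD permitting burden, using the figures cited above.
per_permit_applicant_cost = 125_120   # dollars, average applicant cost
per_permit_authority_cost = 23_280    # dollars, average permitting-authority cost
per_permit_hours = 866 + 301          # applicant plus permitting-authority labor hours

historical_permits_per_year = 280
untailored_ghg_permits_per_year = 41_000  # EPA's estimate absent the Tailoring Rule

def annual_burden(permits):
    dollars = permits * (per_permit_applicant_cost + per_permit_authority_cost)
    hours = permits * per_permit_hours
    return dollars, hours

for label, n in [("historical", historical_permits_per_year),
                 ("GHG, untailored", untailored_ghg_permits_per_year)]:
    dollars, hours = annual_burden(n)
    print(f"{label:>16}: {n:>6} permits/yr, about ${dollars / 1e6:,.0f}M and {hours / 1e6:.1f}M labor-hours")

On those assumptions the untailored program would involve roughly $6 billion and nearly 50 million labor-hours per year, compared with tens of millions of dollars historically – the "absurd results" EPA sought to avoid.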
In a 5-4 decision authored by Justice Scalia, the Supreme Court remanded the Tailoring Rule to EPA on the grounds that the Clean Air Act did not authorize the agency to regulate all the sources encompassed by the Rule. The Court determined that EPA could only require the "anyway" sources – those that participated in the PSD program because of their emissions of non-GHG pollutants – to comply with PSD program and Title V permitting requirements for GHGs. This effectively excluded the "Step 2" sources identified in the final rule.
Suitability of the Clean Air Act
Because the LDV Rule and the application of the PSD program to GHGs took effect only recently, it is too soon to assess by how much they have reduced actual GHG emissions and at what cost. However, given the tremendous implications that regulation of GHGs has for almost every aspect of our society, it should not be surprising that much has already been written about the adequacy of the CAA for regulating global pollutants. It should also be no surprise that opinions on this question are widely varied. Allen and Lewis, for example, contend the CAA is wholly unsuitable for regulating GHGs because it was not designed to do so and the costs of such regulation potentially far exceed the costs associated with climate change legislation recently considered in Congress. As support for their argument that the CAA is a poor fit for GHGs, they point to the "absurd results" that EPA itself acknowledged and sought to avoid through passage of the Tailoring Rule:
"EPA is entirely correct: Congress did not intend to apply PSD and Title V to small entities, did not intend for those programs to crash under their own weight, and did not intend for PSD to stifle economic growth. [footnote omitted] And Congress never intended for EPA to control CO2 emissions under the CAA! [footnote omitted] Congressional support for regulatory climate policy is much stronger today than it was in 1970 and 1977, when Congress enacted and amended CAA section 202. [footnote omitted] Yet even today, the prospects for cap-and-trade legislation and for U.S. ratification of a successor treaty to the Kyoto Protocol remain in doubt. [footnote omitted] The notion that Congress, in 1970 or 1977, implicitly authorized EPA to adopt economy-wide, or even industry-specific, controls on CO2 is ludicrously unfounded. [footnote omitted]"Economists Dallas Burtraw and Arthur G. Fraas with the nonprofit and nonpartisan research organization Resources for the Future offer a different perspective on the subject of GHG regulation under the CAA. Although they agree that new legislation specifically designed to address climate change is the best long-term option, they characterize the CAA as a, "potentially potent alternative" in the absence of such legislation for the short-term. They note, for example, that EPA has already identified improvements in energy efficiency as the most attractive short-term option for mitigating GHGs at existing facilities in many industrial sectors. They go on to state that such improvements would most likely be among the first moves made by regulated entities under a legislative approach so it is unlikely that requiring these moves through regulation would result in comparatively higher costs. Based on their research, they conclude that domestic GHG reductions of up to 10% relative to 2005 levels could be achieved at moderate costs, which is comparable to reductions that would have been achieved under the Waxman-Markey climate change bill that was passed by the House of Representatives in 2009. In their view, the success of regulating GHGs under the CAA as it exists today rests with the EPA:
"We see substantial opportunities under the Clean Air Act for domestic emissions reductions that can be achieved at what will probably be moderate cost. However, enthusiasm about the act as a vehicle for carbon regulation should be tempered. First, this paper suggests that achieving meaningful emissions benefits at reasonable cost is possible, but it will require EPA to be bold. The agency must interpret sections of the act to enable use of flexible mechanisms, must be ambitious in setting emissions targets, and must shift its focus to a new regulatory program. In short, to do all of this well, the agency will need to innovate…. Second, EPA action under the CAA is inferior to new legislation from Congress, especially over the long term. Although it is possible to identify some readily available opportunities for emissions reductions and push them via regulation (with market tools to keep costs down), it quickly becomes difficult to identify what steps should be taken next…. With those reservations, however, the Clean Air Act—if used wisely by EPA—can be a useful vehicle for short-term greenhouse gas regulation."
Influence of stakeholders
Because EPA's authority to regulate GHG emissions has such significant implications for the economy, the environment, and our society at large, it is a topic of interest to a broad range of organizations including Congress, the courts, the states, environmental organizations, and the regulated industry. All of these entities have had a direct hand in shaping the laws, regulations, and policies concerning GHGs into what they are today and will likely continue to do so in the future. As discussed above, California has played a large role in shaping the motor vehicle regulations. EPA's authority to regulate GHGs under the CAA is also a topic of continuing political debate in both chambers of Congress. On January 21, 2010, Senator Lisa Murkowski (R-AK) introduced a disapproval resolution under the Congressional Review Act, which would have nullified EPA's endangerment finding. The resolution was defeated by a vote of 53–47, with six Democrats voting in favor of it. Later that year on March 4, Senator Jay Rockefeller (D-WV) introduced a bill that would suspend for two years any EPA action to regulate CO2 or CH4 under the CAA except for the vehicle standards under section 202. He re-introduced similar legislation on January 31, 2011, the same day Senator John Barrasso (R-WY) introduced broad legislation to preempt regulation of GHGs under federal law. On February 2, 2011, Representative Fred Upton (R-MI), Representative Ed Whitfield (R-KY), and Senator James Inhofe (R-OK) released a draft bill that would amend the CAA to "prohibit the Administrator of the Environmental Protection Agency from promulgating any regulation concerning, taking action relating to, or taking into consideration the emission of a greenhouse gas due to concerns regarding possible climate change, and for other purposes."

That the major opposition to regulation of GHGs under the CAA is headed largely by a contingent of elected officials from major coal, oil, and gas states exemplifies the political warfare that can erupt when leaders attempt to appeal to their core constituencies even though doing so may impede action on pressing national problems. It also underscores Al Gore's view that the political system will be the "decisive factor" in our efforts to address global climate change: "the real work must be done by individuals, and politicians need to assist citizens in their efforts to make new and necessary choices." Whether Congress will act any time soon to pass cap-and-trade legislation or revoke EPA's authority to regulate GHGs is questionable. However, even inaction by Congress in this area leaves EPA's future options open.
In his book Earth in the Balance, Al Gore observes that "the American people often give their leaders permission to take action [on an issue] by signaling agreement in principle while reserving the right to object strenuously to each and every specific sacrifice necessary to follow through." As a recent illustration of Gore's point, consider the results of a public opinion poll conducted in June 2010. One thousand people were asked the question: "How important is the issue of global warming to you personally?" Seventy-six percent of respondents said they considered global warming to be extremely important, very important, or somewhat important. Sixty-eight percent of people in the same survey also said that the United States should take action on global warming even if other major industrial countries such as China and India do not agree to do equally effective things. However, when asked, "[P]lease tell me whether you favor or oppose the federal government…[increasing] taxes on gasoline so people either drive less, or buy cars that use less gas," seventy-one percent of the survey respondents said they opposed increased gasoline taxes, despite the fact that such a tax would be "one of the logical first steps" we would likely take in an effort to reduce oil consumption and address climate change. Thus, there is an apparent discrepancy between the public's feelings about the threat of climate change and its willingness to make personal sacrifices to address it. The reluctance of both the American public and Congress to make sometimes difficult choices to address climate change has left opportunities wide open for other stakeholders to influence climate change policy; among the most influential thus far are non-governmental organizations (NGOs).
In his article "Learning to Live with NGOs," P.J. Simmons wrote that NGOs can "make the impossible possible by doing what governments cannot or will not." History shows that this is particularly true where climate change regulation is concerned. While NGOs cannot themselves pass climate change regulations, they have played a clear role in forcing EPA's hand through action in the courts. EPA's endangerment finding, the LDV Rule, and the consequent regulation of GHGs from stationary sources under the PSD program are a direct result of the Supreme Court's decision in Massachusetts v. EPA. As discussed supra, that case is founded on the petition submitted to EPA in 1999 by the International Center for Technology Assessment and nineteen other NGOs. With passage of the LDV Rule, EPA did what ICTA and its fellow petitioners demanded more than ten years earlier. And a number of NGOs have continued to apply pressure to EPA. On December 23, 2010, EPA announced that it would establish GHG standards for new and modified electric generating units (EGUs) and petroleum refineries under section 111(b) of the CAA, and that it would set GHG emissions guidelines for existing sources in those same categories under CAA section 111(d). These emissions standards and guidelines will be established according to a schedule set forth in settlement agreements that EPA entered to resolve legal challenges brought by several NGOs and states after EPA failed to establish GHG standards when it revised its rules for EGUs and refineries in 2006 and 2008. Under the terms of the agreement, EPA will issue final standards for EGUs and refineries in May 2012 and November 2012, respectively.
EPA's actions to address climate change are also being challenged by those most directly affected – the regulated community. Over eighty claims have been filed by thirty-five different petitioners against EPA related to the endangerment finding, the LDV Rule, the Tailoring Rule, and another rule related to the PSD program. A large number of the parties in these cases are businesses and industry associations (acting in the interests of businesses) such as Peabody Energy Company, Gerdau Ameristeel U.S. Inc., Coalition for Responsible Regulation, National Association of Manufacturers, Portland Cement Association, National Mining Association, American Farm Bureau Association, and the U.S. Chamber of Commerce. These claims have been consolidated into Coalition for Responsible Regulation v. U.S. Environmental Protection Agency (CRR v. EPA) under three main docket numbers, 09–1322, 10–1092, and 10–1073. The arguments put forth by the petitioners are numerous and varied, depending on the particular case, but most are fundamentally about the economic impacts of regulation. A summary of the arguments has been compiled by Gregory Wannier of the Center for Climate Change Law at Columbia Law School. On June 26, 2012, the Court of Appeals for the District of Columbia Circuit issued an opinion in CRR v. EPA dismissing the challenges in these cases to the EPA's endangerment finding and the related GHG regulations. The three-judge panel unanimously upheld the EPA's central finding that GHGs such as carbon dioxide endanger public health and were likely responsible for the global warming experienced over the past half century.
See also
Climate change policy of the United States
Climate change in the United States
Environmental policy of the Donald Trump administration
== References == |
avoiding dangerous climate change (2005 conference) | In 2005, an international conference titled Avoiding Dangerous Climate Change: A Scientific Symposium on Stabilisation of Greenhouse Gases examined the link between atmospheric greenhouse gas concentration and global warming and its effects. The conference name was derived from Article 2 of the charter for the United Nations Framework Convention on Climate Change. The conference explored the possible impacts at different levels of greenhouse gas emissions and how the climate might be stabilized at a desired level. The conference took place under the United Kingdom's presidency of the G8, with the participation of around 200 "internationally renowned" scientists from 30 countries. It was chaired by Dennis Tirpak and hosted by the Hadley Centre for Climate Prediction and Research in Exeter, from 1 February to 3 February.
The conference was one of many meetings leading up to the 2015 Paris Agreement, at which the international community agreed to limit global warming to no more than 2 °C in order to have a 50-50 chance of avoiding dangerous climate change. However, a study published in 2018 points to a threshold at which temperatures could rise to 4 or 5 degrees through self-reinforcing feedbacks in the climate system, suggesting that this threshold (or 'tipping point') lies below the 2 degree temperature target.
Objectives
The conference was called to bring together the latest research into what would be necessary to achieve the objective of the 1992 United Nations Framework Convention on Climate Change:
to achieve, in accordance with the relevant provisions of the Convention, stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.

It was also intended to encourage further research in the area. In the 2001 IPCC Third Assessment Report, an initial assessment of the subject had been included; however, the topic had received relatively little international discussion.

Specifically, the conference explored three issues:
For different levels of climate change what are the key impacts, for different regions and sectors and for the world as a whole?
What would such levels of climate change imply in terms of greenhouse gas stabilisation concentrations and emission pathways required to achieve such levels?
What options are there for achieving stabilisation of greenhouse gases at different stabilisation concentrations in the atmosphere, taking into account costs and uncertainties?
Conclusions
Among the conclusions reached, the most significant was a new assessment of the link between the concentration of greenhouse gases in the atmosphere and the increase in global temperature levels. Some researchers have argued that the most serious consequences of global warming might be avoided if global average temperatures rose by no more than 2 °C (3.6 °F) above pre-industrial levels (1.4 °C above present levels). It had generally been assumed that this would occur if greenhouse gas concentrations rose above 550 ppm carbon dioxide equivalent by volume. This concentration was, for example, informing government policy in certain jurisdictions, including the European Union.

The conference concluded that, at the level of 550 ppm, it was likely that 2 °C would be exceeded, according to the projections of more recent climate models. Stabilising greenhouse gas concentrations at 450 ppm would only result in a 50% likelihood of limiting global warming to 2 °C, and it would be necessary to achieve stabilisation below 400 ppm to give a relatively high certainty of not exceeding 2 °C.

The conference also concluded that, if action to reduce emissions is delayed by 20 years, rates of emission reduction may need to be 3 to 7 times greater to meet the same temperature target.
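The link between stabilisation concentrations and expected warming can be roughly illustrated with the textbook logarithmic relation between CO2 concentration and equilibrium temperature change. The climate sensitivity used below is an assumed central estimate, and CO2-equivalent concentrations are treated as if they were CO2 alone; the conference's conclusions rest on full climate-model ensembles and probability ranges, not on this simplification.

import math

# Equilibrium warming estimated from the simple relation dT = S * log2(C / C0),
# where S is an assumed climate sensitivity (warming per doubling of CO2)
# and C0 is the pre-industrial concentration. Illustrative only.
S = 3.0     # degrees C per CO2 doubling (assumed central estimate)
C0 = 280.0  # ppm, pre-industrial

for c in (400, 450, 550):
    warming = S * math.log2(c / C0)
    print(f"{c} ppm -> about {warming:.1f} C of equilibrium warming above pre-industrial")

With these assumptions, 550 ppm implies roughly 2.9 °C of eventual warming and 450 ppm sits close to the 2 °C line, which is consistent with the conference's conclusion that 450 ppm gives only about even odds of staying below 2 °C.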
Reaction
As a result of changing opinion on the 'safe' atmospheric concentration of greenhouse gases, to which this conference contributed, the UK government changed the target in the Climate Change Act from 60% to 80% by 2050.
See also
4 Degrees and Beyond International Climate Conference
Action on climate change
Climate change mitigation scenarios
Environmental impact of aviation
Hypermobility (travel)
Index of climate change articles
References
Further reading
Related book: Avoiding Dangerous Climate Change, Editors: Hans Joachim Schellnhuber, Wolfgang Cramer, Nebojsa Nakicenovic, Tom Wigley, and Gary Yohe, Cambridge University Press, February 2006, ISBN 9780521864718.
PDF version at the Wayback Machine (archived 2007-09-26)
External links
Avoiding Dangerous Climate Change - official conference website at the Library of Congress Web Archives (archived 2005-07-19)
Tyndall Centre - A strategic assessment of scientific and behavioural perspectives on 'dangerous' climate change
WWF-UK - 2°C Is Too Much! Evidence and Implications of Dangerous Climate Change in the Arctic
Netherlands Environmental Assessment Agency - Meeting the European Union 2°C climate target: global and regional emission implications
Dr. James Hansen, Climate Scientist's web page at the Wayback Machine (archived 2004-12-04)NewsApril 19, 2007, Reuters: World needs to axe greenhouse gases by 80 pct: report
February 1, 2006, Euractive: UK chief scientific adviser: Keeping CO2 concentration below 450ppm is 'unfeasible'
January 30, 2006, BBC: Stark warning over climate change
January 30, 2006, BBC: Climate report: the main points
January 29, 2006, Washington Post: Debate on Climate Shifts to Issue of Irreparable Change
January 1, 2006, Times online: World has only 20 years to stop climate disaster
February 3, 2005, Guardian Unlimited: Climate conference hears degree of danger |
the first global revolution | The First Global Revolution is a book written by Alexander King and Bertrand Schneider, and published by Pantheon Books in 1991. The book follows up the earlier 1972 work-product from the Club of Rome titled The Limits to Growth. The book's tagline is A Report by the Council of the Club of Rome. The book was intended as a blueprint for the 21st century putting forward a strategy for world survival at the onset of what they called the world's first global revolution.
Contents
The Problematique
The Whirlwind of Change
Some Areas of Acute Concern
The International Mismanagement of the World Economy
Intimations of Solidarity
The Vacuum
The Human Malaise
Conclusion: The Challenge
The Resolutique
Introduction
The Three Immediacies
Governance and the Capacity to Govern
Agents of the Resolutique
Motivations and Values
Learning Our Way Into a New Era
Overview
The book is a blueprint for the twenty-first century, written at a time when the Club of Rome thought that the onset of the first global revolution was upon them. The authors saw the world coming into a global-scale societal revolution amid social, economic, technological, and cultural upheavals that had started to push humanity into an unknown. The goal of the book was to outline a strategy for mobilizing the world's governments for environmental security and clean energy by purposefully converting the world from a military to a civil economy, tackling global warming and solving the energy problem, dealing with world poverty, and addressing disparities between the Northern Hemisphere and the Southern Hemisphere.
The book saw humankind at the center of the revolution centered on:
Global economic growth
New technologies
Governments and the ability to govern
Mass Media
Global food security
Water availability
Environment
Energy
Population growth
Learning systems
Values/Religions
Materials

The product of a think tank, the book attempted to transcend the nation-state governance paradigm of the nineteenth and twentieth centuries and sought a way to eliminate some of the challenges seen as inherent in those older systems of global governance. As such, it explored new and sometimes controversial viewpoints.
Because of the sudden absence of traditional enemies, "new enemies must be identified." "In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill...All these dangers are caused by human intervention, and it is only through changed attitudes and behavior that they can be overcome. The real enemy then is humanity itself."
Later editions
An English language edition of this book was published in 1993 (ISBN 978-0001160323) by Orient Longman of Hyderabad, India.
See also
Politics of global warming
United Nations Framework Convention on Climate Change and accompanying Kyoto Protocol (CO2 Regulations)
Post–Kyoto Protocol negotiations on greenhouse gas emissions
Green Climate Fund
== References == |
ipcc fifth assessment report | The Fifth Assessment Report (AR5) of the United Nations Intergovernmental Panel on Climate Change (IPCC) is the fifth in a series of such reports and was completed in 2014. As had been the case in the past, the outline of the AR5 was developed through a scoping process which involved climate change experts from all relevant disciplines and users of IPCC reports, in particular representatives from governments. Governments and organizations involved in the Fourth Report were asked to submit comments and observations in writing with the submissions analysed by the panel. Projections in AR5 are based on "Representative Concentration Pathways" (RCPs). The RCPs are consistent with a wide range of possible changes in future anthropogenic greenhouse gas emissions. Projected changes in global mean surface temperature and sea level are given in the main RCP article.
The IPCC Fifth Assessment Report followed the same general format as the Fourth Assessment Report, with three Working Group reports and a Synthesis report. The report was delivered in stages, starting with the report from Working Group I in September 2013. It reported on the physical science basis, based on 9,200 peer-reviewed studies. The Synthesis Report was released on 2 November 2014, in time to pave the way for negotiations on reducing carbon emissions at the UN Climate Change Conference in Paris during late 2015.
The report's Summary for Policymakers stated that warming of the climate system is 'unequivocal' with changes unprecedented over decades to millennia, including warming of the atmosphere and oceans, loss of snow and ice, and sea level rise. Greenhouse gas emissions, driven largely by economic and population growth, have led to greenhouse gas concentrations that are unprecedented in at least the last 800,000 years. These, together with other anthropogenic drivers, are "extremely likely" (where that means more than 95% probability) to have been the dominant cause of the observed global warming since the mid-20th century.

Conclusions of the fifth assessment report are summarized below:
Working Group I: "Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia". "Atmospheric concentrations of carbon dioxide, methane, and nitrous oxide have increased to levels unprecedented in at least the last 800,000 years". Human influence on the climate system is clear. It is extremely likely (95–100% probability) that human influence was the dominant cause of global warming between 1951 and 2010.
Working Group II: "Increasing magnitudes of [global] warming increase the likelihood of severe, pervasive, and irreversible impacts". "A first step towards adaptation to future climate change is reducing vulnerability and exposure to present climate variability". "The overall risks of climate change impacts can be reduced by limiting the rate and magnitude of climate change"
Working Group III: Without new policies to mitigate climate change, projections suggest an increase in global mean temperature in 2100 of 3.7 to 4.8 °C, relative to pre-industrial levels (median values; the range is 2.5 to 7.8 °C including climate uncertainty). "(T)he current trajectory of global annual and cumulative emissions of GHGs is not consistent with widely discussed goals of limiting global warming at 1.5 to 2 degrees Celsius above the pre-industrial level." Pledges made as part of the Cancún Agreements are broadly consistent with cost-effective scenarios that give a "likely" chance (66–100% probability) of limiting global warming (in 2100) to below 3 °C, relative to pre-industrial levels.
Current status
The Fifth Assessment Report (AR5) consists of three Working Group (WG) Reports and a Synthesis Report. The first Working Group Report was published in 2013 and the rest were completed in 2014. The summaries for policy makers were released on 27 September 2013 for the first report, on 31 March 2014 for the second report entitled "Impacts, Adaptation, and Vulnerability", and on 14 April 2014 for the third report entitled "Mitigation of Climate Change".
WG I: The Physical Science Basis – 30 September 2013, Summary for Policymakers published 27 September 2013.
WG II: Impacts, Adaptation and Vulnerability – 31 March 2014
WG III: Mitigation of Climate Change – 15 April 2014
AR5 Synthesis Report (SYR) – 2 November 2014

The AR5 provides an update of knowledge on the scientific, technical and socio-economic aspects of climate change.
More than 800 authors, selected from around 3,000 nominations, were involved in writing the report. Lead authors' meetings and a number of workshops and expert meetings, in support of the assessment process, were held. A schedule of AR5-related meetings, review periods, and other important dates was published.

A key statement of the report was that:
Continued emission of greenhouse gases will cause further warming and long-lasting changes in all components of the climate system, increasing the likelihood of severe, pervasive and irreversible impacts for people and ecosystems. Limiting climate change would require substantial and sustained reductions in greenhouse gas emissions which, together with adaptation, can limit climate change risks.
Authors and editors
The IPCC was established in 1988 by the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) to assess scientific, technical and socio-economic information concerning climate change, its potential effects and options for adaptation and mitigation.
In March 2010, the IPCC received approximately 3,000 author nominations from experts around the world. At the bureau session held in Geneva, 19–20 May 2010, the three working groups presented their selected authors and review editors for the AR5. Each of the selected scientists, specialists and experts was nominated in accordance with IPCC procedures, by respective national IPCC focal-points, by approved observer organizations, or by the bureau. The IPCC received 50% more nominations of experts to participate in AR5 than it did for AR4. A total of 559 authors and review editors had been selected for AR4 from 2,000 proposed nominees. On 23 June 2010 the IPCC announced the release of the final list of selected coordinating lead authors, comprising 831 experts who were drawn from fields including meteorology, physics, oceanography, statistics, engineering, ecology, social sciences and economics. In comparison to the Fourth Assessment Report (AR4), participation from developing countries was increased, reflecting the ongoing efforts to improve regional coverage in the AR5. About 30% of authors came from developing countries or economies in transition. More than 60% of the experts chosen were new to the IPCC process, bringing fresh knowledge and perspectives.
Climate change 2013: report overview
On 23 June 2010 the IPCC announced the release of the final list of selected coordinating lead authors, comprising 831 experts. The working group reports would be published during 2013 and 2014. These experts would also provide contributions to the Synthesis Report published in late 2014.

The Fifth Assessment Report (Climate Change 2013) would be released in four distinct sections:
Working Group I Report (WGI): Focusing on the physical science basis and including 258 experts.
Working Group II Report (WGII): Assessing the impacts, adaptation strategies and vulnerability related to climate change and involving 302 experts.
Working Group III Report (WGIII): Covering mitigation response strategies in an integrated risk and uncertainty framework, with assessments carried out by 271 experts.
The Synthesis Report (SYR): Final summary and overview.
Working group I contribution
The full text of Climate Change 2013: The Physical Science Basis was released in an unedited form on Monday, 30 September 2013. It was over 2,000 pages long and cited 9,200 scientific publications. The full, edited report was released online in January 2014 and published in physical form by Cambridge University Press later in the year.
Summary for Policymakers
A concise overview of Working Group I's findings was published as the Summary for Policymakers on 27 September 2013. The level of confidence in each finding was rated on a confidence scale, qualitatively from very low to very high and, where possible, quantitatively from exceptionally unlikely to virtually certain (determined based on statistical analysis and expert judgement).
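The calibrated likelihood terms used in the summary can be read against the probability ranges defined in the IPCC's guidance note on the treatment of uncertainty. The mapping below follows that published scale and is offered only as a reading aid, not as a reproduction of the report's full uncertainty framework.

# IPCC calibrated likelihood language mapped to assessed probability ranges.
LIKELIHOOD_SCALE = {
    "virtually certain":      (0.99, 1.00),
    "extremely likely":       (0.95, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "extremely unlikely":     (0.00, 0.05),
    "exceptionally unlikely": (0.00, 0.01),
}

def describe(term):
    lo, hi = LIKELIHOOD_SCALE[term]
    return f"'{term}' denotes an assessed probability of {lo:.0%} to {hi:.0%}"

print(describe("extremely likely"))  # the attribution finding quoted above
print(describe("likely"))            # e.g. warming exceeding 1.5 C by 2100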
The principal findings were:
General
Warming of the atmosphere and ocean system is unequivocal. Many of the associated impacts such as sea level change (among other metrics) have occurred since 1950 at rates unprecedented in the historical record.
There is a clear human influence on the climate
It is extremely likely that human influence has been the dominant cause of observed warming since 1950, with the level of confidence having increased since the fourth report.
IPCC pointed out that the longer we wait to reduce our emissions, the more expensive it will become.
Historical climate metrics
It is likely (with medium confidence) that 1983–2013 was the warmest 30-year period for 1,400 years.
It is virtually certain the upper ocean warmed from 1971 to 2010. This ocean warming accounts, with high confidence, for 90% of the energy accumulation between 1971 and 2010.
It can be said with high confidence that the Greenland and Antarctic ice sheets have been losing mass in the last two decades and that Arctic sea ice and Northern Hemisphere spring snow cover have continued to decrease in extent.
There is high confidence that the sea level rise since the middle of the 19th century has been larger than the mean sea level rise of the prior two millennia.
Concentration of greenhouse gases in the atmosphere has increased to levels unprecedented on earth in 800,000 years.
Total radiative forcing of the earth system, relative to 1750, is positive and the most significant driver is the increase in CO2's atmospheric concentration.
Models
Climate model simulations in support of AR5 use a different approach to account for increasing greenhouse gas concentrations than in the previous report. Instead of the scenarios from the Special Report on Emissions Scenarios the models are performing simulations for various Representative Concentration Pathways.
AR5 relies on the Coupled Model Intercomparison Project Phase 5 (CMIP5), an international effort among the climate modeling community to coordinate climate change experiments. Most of the CMIP5 and Earth System Model (ESM) simulations for AR5 WGI were performed with prescribed CO2 concentrations reaching 421 ppm (RCP2.6), 538 ppm (RCP4.5), 670 ppm (RCP6.0), and 936 ppm (RCP8.5) by the year 2100 (IPCC AR5 WGI, page 22); a rough conversion of these concentrations into radiative forcing is sketched after this list.
Climate models have improved since the prior report.
Model results, along with observations, provide confidence in the magnitude of global warming in response to past and future forcing.
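The RCP labels refer to the approximate total anthropogenic radiative forcing, in watts per square metre, reached by 2100. A rough sketch using the widely used simplified expression for CO2 forcing shows how the prescribed CO2 concentrations relate to those labels; because each RCP also includes non-CO2 forcing agents, the CO2-only values fall short of the labels, most visibly for RCP8.5. The pre-industrial baseline is an assumed value.

import math

# Simplified CO2-only radiative forcing: dF = 5.35 * ln(C / C0) W/m^2.
C0 = 278.0  # ppm, assumed pre-industrial CO2 concentration
rcp_co2_2100 = {"RCP2.6": 421, "RCP4.5": 538, "RCP6.0": 670, "RCP8.5": 936}

for name, c in rcp_co2_2100.items():
    forcing = 5.35 * math.log(c / C0)
    print(f"{name}: {c} ppm CO2 in 2100 -> about {forcing:.1f} W/m^2 from CO2 alone")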
Projections
Further warming will continue if emissions of greenhouse gases continue.
The global surface temperature increase by the end of the 21st century is likely to exceed 1.5 °C relative to the 1850 to 1900 period for most scenarios, and is likely to exceed 2.0 °C for many scenarios
The global water cycle will change, with increases in disparity between wet and dry regions, as well as wet and dry seasons, with some regional exceptions.
The oceans will continue to warm, with heat extending to the deep ocean, affecting circulation patterns.
Decreases are very likely in Arctic sea ice cover, Northern Hemisphere spring snow cover, and global glacier volume
Global mean sea level will continue to rise at a rate very likely to exceed the rate of the past four decades
Climate change will affect carbon cycle processes in ways that exacerbate the increase of CO2 in the atmosphere. Continued uptake of carbon by the oceans will increase ocean acidification.
Future surface temperatures will be largely determined by cumulative CO2, which means climate change will continue even if CO2 emissions are stopped.

The summary also detailed the range of forecasts for warming, and climate impacts with different emission scenarios. Compared to the previous report, the lower bounds for the sensitivity of the climate system to emissions were slightly lowered, though the projections for global mean temperature rise (compared to pre-industrial levels) by 2100 exceeded 1.5 °C in all scenarios.

In August 2020 scientists reported that observed ice-sheet losses in Greenland and Antarctica track worst-case scenarios of the IPCC Fifth Assessment Report's sea-level rise projections.
Reception
On 14 December 2012, drafts of the Working Group 1 (WG1) report were leaked and posted on the Internet. The release of the summary for policymakers occurred on 27 September 2013. Halldór Thorgeirsson, a UN official, warned that, because big companies are known to fund the undermining of climate science, scientists should be prepared for an increase in negative publicity at the time. "Vested interests are paying for the discrediting of scientists all the time. We need to be ready for that," he said.

Marking the finalization of the Physical Science Basis, UN Secretary-General Ban Ki-moon addressed the IPCC at Stockholm on 27 September 2013. He stated that "the heat is on. We must act". Jennifer Morgan, from the World Resources Institute, said "Hopefully the IPCC will inspire leadership, from the Mom to the business leader, to the mayor to the head of state." US Secretary of State John Kerry responded to the report saying "This is yet another wakeup call: those who deny the science or choose excuses over action are playing with fire."

Reporting on the publication of the report, The Guardian said that:
In the end it all boils down to risk management. The stronger our efforts to reduce greenhouse gas emissions, the lower the risk of extreme climate impacts. The higher our emissions, the larger climate changes we'll face, which also means more expensive adaptation, more species extinctions, more food and water insecurities, more income losses, more conflicts, and so forth.
The New York Times reported that:
In Washington, President Obama's science adviser, John P. Holdren, cited increased scientific confidence "that the kinds of harm already being experienced from climate change will continue to worsen unless and until comprehensive and vigorous action to reduce emissions is undertaken worldwide."
It went on to say that Ban Ki-moon, the United Nations secretary general, had declared his intention to call a meeting of heads of state in 2014 to develop such a treaty. The last such meeting, in Copenhagen in 2009, the NY Times reported, had ended in disarray.
See also
Renewable Energy Sources and Climate Change Mitigation – IPCC special report, 2011
IPCC Sixth Assessment Report
References
Sources
IPCC AR5 WG3 (2014), Edenhofer, O.; et al. (eds.), Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III (WG3) to the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), Cambridge University Press, archived from the original on 2014-10-29.
IPCC AR5 WG2 A (2014), Field, C.B.; et al. (eds.), Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II (WG2) to the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), Cambridge University Press, archived from the original on 2016-04-28.
IPCC AR5 WG1 (2013), Stocker, T.F.; et al. (eds.), Climate Change 2013: The Physical Science Basis. Working Group 1 (WG1) Contribution to the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), Cambridge University Press.
Climate Change 2013 Working Group 1 website
External links
Official IPCC WGI AR5 website |
lliuya v rwe ag | Lliuya v RWE AG (2015) Case No. 2 O 285/15 (Landgericht Essen) is a German tort law and climate litigation case, concerning liability for climate damage in Peru from a melting glacier, against Germany's largest coal-burning power company, RWE, which is said to have caused approximately 0.47% of all historic greenhouse gas emissions. It is currently on appeal in the Upper State Court, the Oberlandesgericht Hamm.
Facts
Lliuya, a Peruvian farmer from Huaraz, a city of more than 100,000 people, claimed against RWE, Germany's largest electric company, that it knowingly contributed to climate damage by emitting GHGs and was partly responsible for melting the mountain glaciers near his town. As a result, Palcacocha, a glacial lake above the city, had increased in size since 1975, with the growth accelerating since 2003. Lliuya argued that the emissions were a nuisance under the German Civil Code, BGB §1004, and that RWE should reimburse a portion of the costs incurred to establish flood protection, namely 0.47% of the total cost, corresponding to RWE's estimated share of historic global GHG emissions.
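The proportional-liability arithmetic behind the claim is straightforward: RWE's asserted share of historic emissions applied to the cost of flood defences for the town. The total cost used below is a placeholder, not a figure taken from the pleadings.

# Sketch of the proportional-liability calculation (placeholder cost figure).
rwe_emission_share = 0.0047               # 0.47% asserted share of historic GHG emissions
total_protection_cost_eur = 3_500_000     # assumed cost of flood-protection works

claimed_contribution = rwe_emission_share * total_protection_cost_eur
print(f"claimed contribution: EUR {claimed_contribution:,.0f}")

On this placeholder figure the claimed contribution would be on the order of tens of thousands of euros; the legal significance lies in the principle of attribution, not the amount.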
Judgment
State Court
The Landgericht Essen (Essen Regional Court) dismissed Lliuya's claim for an injunction and damages. It stated that it could not provide effective redress, because Lliuya's situation would be the same even if RWE stopped emitting, and there was no 'linear causal chain' within the complex causal relationship between particular emissions and climate change impacts. RWE was not a 'disturber by conduct' under BGB §1004, and given the number of contributors to climate change, attributing individual damage to specific actors was impossible.
Upper State Court
The Oberlandesgericht Hamm held on 30 November 2017 that the claim was admissible. The case would proceed to an evidential phase to determine (a) whether Lliuya's home is threatened by floods or mudslides and (b) the extent to which RWE contributed to that threat. The court will review expert opinion on RWE's CO2 emissions, their contribution to climate change and their impact on the glacier.
Significance
This is potentially the first case to hold a fossil fuel corporation directly responsible in tort for damage it knowingly causes by releasing greenhouse gases while making a profit. If Lliuya succeeds in German law, the same principle would apply to other companies responsible for greenhouse gas emissions and the damage they cause, and could be applied in similar ways under the tort laws of other countries.
See also
EU law
Urgenda v State of Netherlands (20 December 2019) duty of state to cut emissions in line with Paris Agreement and right to life
Neubauer v Germany (24 March 2021) 1 BvR 2656/18, duty on state to reduce carbon emissions faster than government required in Act to protect right to life and environment
Smith v Fonterra Co-operative Group Ltd [2021] NZCA 552 duty of oil and power companies in tort to pay for climate damage
Milieudefensie v Royal Dutch Shell (26 May 2021) duty of oil company in tort to cut emissions in line with Paris Agreement and right to life
McGaughey and Davies v Universities Superannuation Scheme Ltd [2022] EWHC 1233 (Ch), directors' duties to plan to divest fossil fuels in light of Paris Agreement and right to life
Notes
References
External links
Lliuya v RWE AG on the Sabin Center database
D Collyns, 'Climate change has turned Peru's glacial lake into a deadly flood timebomb' (2018) Guardian
International Maritime Organization
The International Maritime Organization (IMO; French: Organisation maritime internationale; Spanish: Organización Marítima Internacional) is a specialised agency of the United Nations responsible for regulating shipping. The IMO was established following agreement at a UN conference held in Geneva in 1948, and it came into existence ten years later, meeting for the first time on 17 March 1958. Headquartered in London, United Kingdom, the IMO currently has 175 Member States and three Associate Members.
The IMO's primary purpose is to develop and maintain a comprehensive regulatory framework for shipping, and its remit today includes maritime safety, environmental concerns, legal matters, technical co-operation, maritime security and the efficiency of shipping. The IMO is governed by an assembly of members which meets every two years. Its finance and organization are administered by a council of 40 members elected from the assembly. The work of the IMO is conducted through five committees, which are supported by technical subcommittees. Other UN organisations may observe the proceedings of the IMO, and observer status is granted to qualified non-governmental organisations.
The IMO is supported by a permanent secretariat of employees who are representative of the organisation's members. The secretariat is composed of a Secretary-General, who is periodically elected by the assembly, and various divisions such as those for marine safety, environmental protection and a conference section.
History
The IMO was established in 1948 following a UN conference in Geneva, in order to bring the regulation of the safety of shipping into an international framework. Hitherto such international conventions had been initiated piecemeal, notably the Safety of Life at Sea Convention (SOLAS), first adopted in 1914 following the Titanic disaster. Under the name of the Inter-Governmental Maritime Consultative Organization (IMCO), the IMO's first task was to update the SOLAS convention; the resulting 1960 convention was subsequently recast and updated in 1974, and it is that convention that has since been modified and updated to adapt to changes in safety requirements and technology.
Since 1978, the last Thursday of September has been celebrated each year as World Maritime Day, commemorating the establishment of the International Maritime Organization in 1958.
When IMCO began its operations in 1959, certain other pre-existing conventions were brought under its aegis, most notably the International Convention for the Prevention of Pollution of the Sea by Oil (OILPOL) 1954. In January 1959, IMO began to maintain and promote the 1954 OILPOL Convention. Under the guidance of IMO, the convention was amended in 1962, 1969, and 1971. The first meetings of the newly formed IMCO were held in London in 1959.
As oil trade and industry developed, many people in the industry began to recognise a need for further improvements in oil pollution prevention at sea. This became increasingly apparent in 1967, when the tanker Torrey Canyon spilled 120,000 tons of crude oil after running aground while entering the English Channel. The Torrey Canyon grounding was the largest oil pollution incident recorded up to that time, and it prompted a series of new conventions.
The IMO held an emergency session of its council to deal with the need to readdress regulations pertaining to maritime pollution. In 1969, the IMO Assembly decided to host an international gathering in 1973 dedicated to this issue. The goal was to develop an international agreement for controlling general environmental contamination by ships at sea. During the next few years the IMO brought to the forefront a series of measures designed to prevent large ship accidents and to minimise their effects. It also detailed how to deal with the environmental threat caused by routine ship duties such as the cleaning of oil cargo tanks or the disposal of engine room wastes; by tonnage, these were a bigger problem than accidental pollution. The most significant development to come out of this conference was the International Convention for the Prevention of Pollution from Ships, 1973. It covers not only accidental and operational oil pollution but also pollution by chemicals, goods in packaged form, sewage, garbage and air pollution. The original MARPOL was signed on 17 February 1973, but did not come into force due to a lack of ratifications. The current convention is a combination of the 1973 Convention and the 1978 Protocol. It entered into force on 2 October 1983. As of January 2018, 156 states, representing 99.42 per cent of the world's shipping tonnage, are signatories to the MARPOL convention.
As well as updates to MARPOL and SOLAS, the IMO facilitated several updated international maritime conventions in the mid to late 20th century, including the International Convention on Load Lines in 1966 (replacing an earlier 1930 Convention), the International Regulations for Preventing Collisions at Sea in 1972 (also replacing an earlier set of rules) and the STCW Convention in 1978. In 1975, the assembly of the IMO decided that future conventions of the International Convention for the Safety of Life at Sea (SOLAS) and other IMO instruments should use SI units only. Nevertheless, sea transportation remains one of the few industrial areas that still commonly uses non-metric units such as the nautical mile (nmi) for distance and the knot (kn) for speed.
In 1982, IMCO was renamed the International Maritime Organization (IMO). Throughout its existence, the IMO has continued to produce new and updated conventions across a wide range of maritime issues, covering not only safety of life and marine pollution but also safe navigation, search and rescue, wreck removal, tonnage measurement, liability and compensation, ship recycling, the training and certification of seafarers, and piracy. More recently SOLAS has been amended to bring an increased focus on maritime security through the International Ship and Port Facility Security (ISPS) Code. The IMO has also increased its focus on air emissions from ships. In 1983, the IMO established the World Maritime University in Malmö, Sweden, and also facilitated the adoption of the IGC Code. In 1991, the IMO facilitated the adoption of the International Grain Code.
In December 2002, new amendments to the 1974 SOLAS Convention were enacted by the IMO. These amendments gave rise to the International Ship and Port Facility Security (ISPS) Code, which went into effect on 1 July 2004. The concept of the code is to provide layered and redundant defences against smuggling, terrorism, piracy, stowaways, etc.
The ISPS Code required most ships and port facilities engaged in international trade to establish and maintain strict security procedures as specified in ship and port specific Ship Security Plans and Port Facility Security Plans.
Headquarters
The IMO headquarters are located in a large purpose-built building facing the River Thames on the Albert Embankment, in Lambeth, London. The organisation moved into its new headquarters in late 1982, with the building being officially opened by Queen Elizabeth II on 17 May 1983. The architects of the building were Douglass Marriott, Worby & Robinson. The front of the building is dominated by a seven-metre high, ten-tonne bronze sculpture of the bow of a ship, with a lone seafarer maintaining a look-out. The previous headquarters of IMO were at 101 Piccadilly (now the home of the Embassy of Japan), prior to that at 22 Berners Street in Fitzrovia and originally in Chancery Lane.
Structure
The IMO consists of an Assembly, a Council and five main Committees. The organization is led by a Secretary-General. A number of Sub-Committees support the work of the main technical committees.
Governance of IMO
The governing body of the International Maritime Organization is the Assembly, which meets every two years. In between Assembly sessions a Council, consisting of 40 Member States elected by the Assembly, acts as the governing body. The technical work of the International Maritime Organization is carried out by a series of Committees. The Secretariat consists of some 300 international civil servants headed by a Secretary-General.
The current Secretary-General is Kitack Lim (South Korea), elected for a four-year term at the 114th session of the IMO Council in June 2015 and at the 29th session of the IMO's Assembly in November 2015. His mandate started on 1 January 2016. At the 31st session of the Assembly in 2019 he was re-appointed for a second term, ending on 31 December 2023.
Technical committees
The technical work of the International Maritime Organisation is carried out by five principal Committees. These are:
The Maritime Safety Committee (MSC)
The Marine environment Protection Committee (MEPC)
The Legal Committee
The Technical Cooperation Committee, for capacity building
The Facilitation Committee, to simplify the documentation and formalities required in international shipping.
Maritime Safety Committee
The Maritime Safety Committee is regulated by Article 28 of the Convention on the IMO:
ARTICLE 28
(a) The Maritime Safety Committee shall consider any matter within the scope of the Organization concerned with aids to navigation, construction and equipment of vessels, manning from a safety standpoint, rules for the prevention of collisions, handling of dangerous cargoes, maritime safety procedures and requirements, hydrographic information, log-books and navigational records, marine casualty investigation, salvage and rescue, and any other matters directly affecting maritime safety.
(b) The Maritime Safety Committee shall provide machinery for performing any duties assigned to it by this Convention, the Assembly or the Council, or any duty within the scope of this Article which may be assigned to it by or under any other international instrument and accepted by the Organization.
(c) Having regard to the provisions of Article 25, the Maritime Safety Committee, upon request by the Assembly or the Council or, if it deems such action useful in the interests of its own work, shall maintain such close relationship with other bodies as may further the purposes of the Organization
The Maritime Safety Committee is the most senior of these and is the main Technical Committee; it oversees the work of its nine sub-committees and initiates new topics. One broad topic it deals with is the effect of the human element on casualties; this work has been put to all of the sub-committees, but meanwhile, the Maritime Safety Committee has developed a code for the management of ships which will ensure that agreed operational procedures are in place and followed by the ship and shore-side staff.
Sub-Committees
The MSC and MEPC are assisted in their work by a number of sub-committees which are open to all Member States. The sub-committees are:
Sub-Committee on Human Element, Training and Watchkeeping (HTW)
Sub-Committee on Implementation of IMO Instruments (III)
Sub-Committee on Navigation, Communications and Search and Rescue (NCSR)
Sub-Committee on Pollution Prevention and Response (PPR)
Sub-Committee on Ship Design and Construction (SDC)
Sub-Committee on Ship Systems and Equipment (SSE)
Sub-Committee on Carriage of Cargoes and Containers (CCC)
The names of the IMO sub-committees were changed in 2013. Prior to 2013 there were nine sub-committees, as follows:
Bulk Liquids and Gases (BLG)
Carriage of Dangerous Goods, Solid Cargoes and Containers (DSC)
Fire Protection (FP)
Radio-communications and Search and Rescue (COMSAR)
Safety of Navigation (NAV)
Ship Design and Equipment (DE)
Stability and Load Lines and Fishing Vessels Safety (SLF)
Standards of Training and Watchkeeping (STW)
Flag State Implementation (FSI)
Membership
To become a member of the IMO, a state ratifies a multilateral treaty known as the Convention on the International Maritime Organization. As of 2020, there are 175 member states of the IMO, which include 174 of the UN member states plus the Cook Islands. The first state to ratify the convention was Canada in 1948. The three most recent members to join were Armenia and Nauru (which became IMO members in January and May 2018, respectively) and Botswana (which joined the IMO in October 2021).
The three associate members of the IMO are the Faroe Islands, Hong Kong and Macao.
In 1961, the territories of Sabah and Sarawak, which had been included through the participation of the United Kingdom, became joint associate members. In 1963 they became part of Malaysia.
Most UN member states that are not members of the IMO are landlocked countries. These include Afghanistan, Andorra, Bhutan, Burkina Faso, Burundi, Central African Republic, Chad, Eswatini, Kyrgyzstan, Laos, Lesotho, Liechtenstein, Mali, Niger, Rwanda, South Sudan, Tajikistan and Uzbekistan. The Federated States of Micronesia, an island nation in the Pacific Ocean, is also a non-member. Taiwan is neither a member of the IMO nor of the UN, although it has a major shipping industry.
Legal instruments
The IMO is the source of approximately 60 legal instruments that guide the regulatory development of its member states to improve safety at sea, facilitate trade among seafaring states and protect the maritime environment. The best known are the International Convention for the Safety of Life at Sea (SOLAS) and the International Convention for the Prevention of Pollution from Ships (MARPOL). Others include the International Oil Pollution Compensation Funds (IOPC). The IMO also functions as a depositary of treaties yet to be ratified, such as the International Convention on Liability and Compensation for Damage in Connection with the Carriage of Hazardous and Noxious Substances by Sea, 1996 (HNS Convention) and the Nairobi International Convention on the Removal of Wrecks (2007).
The IMO regularly enacts regulations, which are broadly enforced by national and local maritime authorities in member countries, such as the International Regulations for Preventing Collisions at Sea (COLREG). The IMO has also enacted a Port State Control (PSC) authority, allowing domestic maritime authorities such as coast guards to inspect foreign-flagged ships calling at the ports of many port states. Memoranda of Understanding (protocols) were signed by some countries unifying Port State Control procedures among the signatories.
Conventions, Codes and Regulations include:
MARPOL Convention
Marpol Annex I
SOLAS Convention
IMDG Code
ISM Code
ISPS Code
Polar Code
IGF Code
IGC Code
STCW Convention
International Code of Signals
International Ballast Water Management Convention
International Convention on Civil Liability for Oil Pollution Damage (CLC Convention)
International Convention on Maritime Search and Rescue (SAR Convention)
International Convention on Oil Pollution Preparedness, Response and Co-operation (OPRC)
HNS Convention
International Regulations for Preventing Collisions at Sea (COLREG)
International Convention on Load Lines (CLL)
International Convention on the Establishment of an International Fund for Compensation for Oil Pollution Damage (FUND92)
Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation (SUA Convention)
International Convention on the Control of Harmful Anti-fouling Systems on Ships (AFS Convention)
The Casualty Investigation Code, enacted through Resolution MSC.255(84) of 16 May 2008. Its full title is the Code of the International Standards and Recommended Practices for a Safety Investigation into a Marine Casualty or Marine Incident.
The IMO is also responsible for publishing the International Code of Signals for use between merchant and naval vessels.
Current priorities
Recent initiatives at the IMO have included amendments to SOLAS, which among other things upgraded fire protection standards on passenger ships; to the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers (STCW), which establishes basic requirements on training, certification and watchkeeping for seafarers; and to the Convention on the Prevention of Maritime Pollution (MARPOL 73/78), which required double hulls on all tankers.
Under the banner of e-Navigation, the IMO has worked to harmonise the information available to seafarers and shore-side traffic services. An e-Navigation strategy was ratified in 2005, and an implementation plan was developed through three IMO sub-committees. The plan was completed by 2014 and implemented in November of that year. The IMO has also served as a key partner and enabler of US international and interagency efforts to establish maritime domain awareness.
Environmental issues
The IMO has a role in tackling international climate change. The First Intersessional Meeting of IMO's Working Group on Greenhouse Gas Emissions from Ships took place in Oslo, Norway (23–27 June 2008). It was tasked with developing the technical basis for the reduction mechanisms that may form part of a future IMO regime to control greenhouse gas emissions from international shipping, and a draft of the actual reduction mechanisms themselves, for further consideration by IMO's Marine Environment Protection Committee (MEPC). The IMO participated in the 2015 United Nations Climate Change Conference in Paris, seeking to establish itself as the "appropriate international body to address greenhouse gas emissions from ships engaged in international trade". Nonetheless, there has been widespread criticism of the IMO's relative inaction since the conclusion of the Paris conference, with the initial data-gathering step of a three-stage process to reduce maritime greenhouse gas emissions expected to last until 2020. In 2018, the Initial IMO Strategy on the reduction of GHG emissions from ships was adopted. In 2021, The New York Times wrote that the IMO "has repeatedly delayed and watered down climate regulations".
The IMO has also taken action to mitigate the global effects of ballast water and sediment discharge through the 2004 Ballast Water Management Convention, which entered into force in September 2017.
Fishing safety
The IMO Cape Town Agreement is an International Maritime Organization legal instrument established in 2012 that sets out minimum safety requirements for fishing vessels of 24 metres in length and over, or the equivalent in gross tonnage. The Agreement is not yet in force, but the IMO is encouraging more member states to ratify it.
See also
Active Shipbuilding Experts' Federation
IMO ship identification number
International Hydrographic Organization
International Maritime Law Institute
International Maritime Rescue Federation
United Nations Convention on the Law of the Sea
Standard Marine Communication Phrases developed by the IMO, to improve safety at sea
NAVAREA
Notes and references
Further reading
"Convention on the International Maritime Organization". International Maritime Organization. 6 March 1948.
Corbett, Jack; Ruwet, Mélodie; Xu, Yi-Chong; Weller, Patrick (2020). "Climate governance, policy entrepreneurs and small states: explaining policy change at the International Maritime Organisation". Environmental Politics. 29 (5): 825–844. doi:10.1080/09644016.2019.1705057. hdl:10072/390551. S2CID 212837341.
Mankabady, Samir (1986). The International Maritime Organization. London: Routledge. ISBN 978-0-7099-3591-9.
Nordquist, Myron H.; Moore, John Morton (1999). Current Maritime Issues and the International Maritime Organization. The Hague: Martinus Nijhoff Publishers. ISBN 978-90-411-1293-4. OCLC 42652709.
External links
International Maritime Organization
International Maritime Organization | Flickr page
European Union law
European Union law is a system of rules operating within the member states of the European Union (EU). Since the founding of the European Coal and Steel Community following World War II, the EU has developed the aim to "promote peace, its values and the well-being of its peoples". The EU has political institutions, social and economic policies, which transcend nation states for the purpose of cooperation and human development. According to its Court of Justice the EU represents "a new legal order of international law".
The EU's legal foundations are the Treaty on European Union and the Treaty on the Functioning of the European Union, currently unanimously agreed on by the governments of 27 member states. New members may join if they agree to follow the rules of the union, and existing states may leave according to their "own constitutional requirements". Citizens are entitled to participate through the Parliament, and their respective state governments through the Council, in shaping the legislation the EU makes. The Commission has the right to propose new laws (the right of initiative), the Council of the European Union represents the elected member-state governments, the Parliament is elected by European citizens, and the Court of Justice is meant to uphold the rule of law and human rights. As the Court of Justice has said, the EU is "not merely an economic union" but is intended to "ensure social progress and seek the constant improvement of the living and working conditions of their peoples".
History
Democratic ideals of integration for international and European nations are as old as the modern nation state. Ancient concepts of European unity were generally undemocratic, and founded on domination, like the Empire of Alexander the Great, the Roman Empire, or the Catholic Church controlled by the Pope in Rome. In the Renaissance, medieval trade flourished in organisations like the Hanseatic League, stretching from English towns like Boston and London, to Frankfurt, Stockholm and Riga. These traders developed the lex mercatoria, spreading basic norms of good faith and fair dealing through their business. In 1517, the Protestant Reformation triggered a hundred years of crisis and instability. Martin Luther nailed a list of demands to the church door of Wittenberg, King Henry VIII declared a unilateral split from Rome with the Act of Supremacy 1534, and conflicts flared across the Holy Roman Empire until the Peace of Augsburg 1555 guaranteed each principality the right to its chosen religion (cuius regio, eius religio). This unstable settlement unravelled in the Thirty Years' War (1618–1648), killing around a quarter of the population in central Europe. The Treaty of Westphalia 1648, which brought peace according to a system of international law inspired by Hugo Grotius, is generally acknowledged as the beginning of the nation-state system. Even then, the English Civil War broke out and only ended with the Glorious Revolution of 1688, by Parliament inviting William of Orange and Mary to the throne, and passing the Bill of Rights 1689. In 1693 William Penn, a Quaker from London who founded Pennsylvania in North America, argued that to prevent ongoing wars in Europe a "European dyet, or parliament" was needed.
The French diplomat Charles-Irénée Castel de Saint-Pierre, who worked negotiating the Treaty of Utrecht at the end of the War of Spanish Succession, proposed, through "Perpetual Union", "an everlasting peace in Europe", a project taken up by Jean-Jacques Rousseau, and Immanuel Kant after him. After the Napoleonic Wars and the Revolutions of 1848 in the 19th century, Victor Hugo at the International Peace Congress in 1849 envisioned a day when there would be a "United States of America and the United States of Europe face to face, reaching out for each other across the seas". World War I devastated Europe's society and economy, and the Versailles Treaty failed to establish a workable international system in the League of Nations or any European integration, and imposed punishing terms of reparation payments on the losing countries. After another economic collapse and the rise of fascism led to a Second World War, European civil society was determined to create a lasting union to guarantee world peace through economic, social and political integration.
To "save succeeding generations from the scourge of war, which twice.. brought untold sorrow to mankind", the United Nations Charter was passed in 1945, and the Bretton Woods Conference set up a new system of integrated World Banking, finance and trade. Also, the Council of Europe, formed by the Treaty of London 1949, adopted a European Convention on Human Rights, overseen by a new transnational court in Strasbourg in 1950. Already in 1946, Winston Churchill, who had been defeated as UK Prime Minister in 1945, had called for a "United States of Europe", though this did not mean the UK would sever its ties to the Commonwealth. In 1950, the French Foreign Minister Robert Schuman proposed that, beginning with integration of French and German coal and steel production, there should be "an organisation open to the participation of the other countries of Europe", where "solidarity in production" would make war "not merely unthinkable, but materially impossible". The 1951 Treaty of Paris created the first European Coal and Steel Community (ECSC), signed by France, West Germany, Belgium, the Netherlands, Luxembourg and Italy, with Jean Monnet as its president. Its theory was simply that war would be impossibly costly if ownership and production of every country's economy was mixed together. It established an Assembly (now the European Parliament) to represent the people, a Council of Ministers for the member states, a Commission as the executive, and a Court of Justice to interpret the law. In the East, the Soviet Union had installed dictatorial governments, controlling East Germany, and the rest of Eastern Europe. Although Stalin died in 1953 and the new general secretary Nikita Khrushchev had denounced him in 1956, Soviet tanks crushed a democratic Hungarian Revolution of 1956, and repressed every other attempt of its people to win democracy and human rights.
In the West, the decision was made through the 1957 Treaty of Rome to launch the first European Economic Community. It shared the Assembly and Court with the Coal and Steel Community, but set up parallel bodies for the Council and Commission. Based on the Spaak Report of 1956, it sought to break down all barriers to trade in a common market for goods, services, labour and capital, and prevent distortion of competition and regulate areas of common interest like agriculture, energy and transport. A separate treaty was signed for a European Atomic Energy Community to manage nuclear production. In 1961 the United Kingdom, Denmark, Ireland and Norway applied for membership only to be vetoed in 1963 by France's Charles de Gaulle. Spain also applied and was rejected as it was still led by the Franco dictatorship. The same year, the Court of Justice proclaimed that the Community constituted a "new legal order of international law". The Merger Treaty finally placed the ECSC and Euratom within the EEC. Shortly after, de Gaulle boycotted the commission, which he believed was pressing supranationalism too far. The Luxembourg compromise in 1966 agreed that France (or other countries) could veto issues of "very important national interest", particularly relating to the Common Agricultural Policy, instead of making decisions by "qualified majority". But after the May 1968 events in France and de Gaulle's resignation, the way was free for the United Kingdom, Ireland, and Denmark to join in 1973. Norway had rejected joining in a 1972 referendum, while the UK confirmed its membership in a 1975 referendum.
Aside from the European Economic Community itself, the European continent underwent a profound transition towards democracy. The dictators of Greece and Portugal were deposed in 1974, and Spain's dictator died in 1975, enabling their accession in 1981 and 1986. In 1979, the European Parliament had its first direct elections, reflecting a growing consensus that the EEC should be less a union of member states, and more a union of peoples. The 1986 Single European Act increased the number of treaty issues in which qualified majority voting (rather than consensus) would be used to legislate, as a way to accelerate trade integration. The Schengen Agreement of 1985 (not initially signed by Italy, the UK, Ireland, Denmark or Greece) allowed movement of people without any border checks. Meanwhile, in 1987, the Soviet Union's Mikhail Gorbachev announced policies of "transparency" and "restructuring" (glasnost and perestroika). This revealed the depths of corruption and waste. In April 1989, the People's Republic of Poland legalised the Solidarity organisation, which captured 99% of available parliamentary seats in June elections. These elections, in which anti-communist candidates won a striking victory, inaugurated a series of peaceful anti-communist revolutions in Central and Eastern Europe that eventually culminated in the fall of communism. In November 1989, protestors in Berlin began taking down the Berlin Wall, which became a symbol of the collapse of the Iron Curtain, with most of Eastern Europe declaring independence and moving to hold democratic elections by 1991.
The Treaty of Maastricht renamed the EEC as the "European Union", and expanded its powers to include a social chapter, set up a European Exchange Rate Mechanism, and limit government spending. The UK initially opted out of the social provisions, and then monetary union after the 1992 sterling crisis ("Black Wednesday") where speculators bet against the British currency. Sweden, Finland and Austria joined in 1995, but Norway again chose not to do so after its 1994 referendum, instead remaining part of the European Free Trade Area (EFTA) and thus the European Economic Area (EEA), abiding by most EU law but without any voting rights. At the Treaty of Amsterdam, with a new Labour government, the UK joined the social chapter. A newly confident EU then sought to expand. First, the Treaty of Nice made voting weight more proportionate to population. Second, the Euro currency went into circulation in 2002. Third came the accession of Malta, Cyprus, Slovenia, Poland, the Czech Republic, Slovakia, Hungary, Latvia, Estonia, and Lithuania. Fourth, in 2005 a Treaty establishing a Constitution for Europe was proposed. This proposed "constitution" was largely symbolic, but was rejected by referendums in France and the Netherlands. Most of its technical provisions were inserted into the Treaty of Lisbon, without the emotive symbols of federalism or the word "constitution". In the same year, Bulgaria and Romania joined.
During the subprime mortgage crisis and the financial crisis of 2007–2008, European banks that were invested in derivatives were put under severe pressure. British, French, German, and other governments were forced to turn some banks into partially or wholly state-owned banks. Some governments instead guaranteed their banks' debts. In turn, the European debt crisis developed when international investment withdrew and Greece, Spain, Portugal, and Ireland saw international bond markets charge unsustainably high interest rates on government debt. Eurozone governments and staff of the European Central Bank believed that it was necessary to save their banks by taking over Greek debt, and to impose "austerity" and "structural adjustment" measures on debtor states. This exacerbated further contraction in the economies. In 2011 two new treaties, the European Fiscal Compact and the European Stability Mechanism, were signed among the nineteen Eurozone states. In 2013, Croatia entered the union. However, a further crisis was triggered after the UK's Conservative government chose to hold a referendum in 2016, and campaigners for "leave" (or "Brexit") won 51.89 per cent of votes on a 72.2 per cent turnout. This referendum was politically inconclusive given the UK's system of Parliamentary sovereignty, with no agreement after the 2017 election, until the 2019 UK general election brought a Conservative majority with a manifesto commitment to drive through Brexit. The UK left the EU on 31 January 2020, with uncertain economic, territorial and social consequences.
Constitutional law
Although the European Union does not have a codified constitution, like every political body it has laws which "constitute" its basic governance structure. The EU's primary constitutional sources are the Treaty on European Union and the Treaty on the Functioning of the European Union, which have been agreed or adhered to among the governments of all 27 member states. The Treaties establish the EU's institutions, list their powers and responsibilities, and explain the areas in which the EU can legislate with Directives or Regulations. The European Commission has the right to propose new laws, formally called the right of legislative initiative. During the ordinary legislative procedure, the Council (which consists of ministers from member state governments) and the European Parliament (elected by citizens) can make amendments and must give their consent for laws to pass.
The Commission oversees departments and various agencies that execute or enforce EU law. The "European Council" (rather than the Council of the European Union, made up of different government Ministers) is composed of the Prime Ministers or executive presidents of the member states. It appoints the Commissioners and the board of the European Central Bank. The European Court of Justice is the supreme judicial body which interprets EU law, and develops it through precedent. The Court can review the legality of the EU institutions' actions, in compliance with the Treaties. It can also decide upon claims for breach of EU laws from member states and citizens.
Treaties
The Treaty on European Union (TEU) and the Treaty on the Functioning of the European Union (TFEU) are the two main sources of EU law. Representing agreements between all member states, the TEU focuses more on principles of democracy, human rights, and summarises the institutions, while the TFEU expands on all principles and fields of policy in which the EU can legislate. In principle, the EU treaties are like any other international agreement, which will usually be interpreted according to principles codified by the Vienna Convention 1969. They can be amended by unanimous agreement at any time, but the TEU itself, in article 48, sets out an amendment procedure through proposals via the Council and a Convention of national Parliament representatives. Under TEU article 5(2), the "principle of conferral" says the EU can do nothing except the things which it has express authority to do. The limits of its competence are governed by the Court of Justice, and the courts and Parliaments of member states.
As the European Union has grown from 6 to 27 member states, a clear procedure for accession of members is set out in TEU article 49. The European Union is only open to a "European" state which respects the principles of "human dignity, freedom, democracy, equality, the rule of law, and respect for human rights, including the rights of persons belonging to minorities". Countries whose territory is wholly outside the European continent cannot therefore apply. Nor can any country without fully democratic political institutions which ensure standards of "pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail". Article 50 says any member state can withdraw in accord "with its own constitutional requirements", by negotiated "arrangements for its withdrawal, taking account of the framework for its future relationship with the Union". This indicates that the EU is not entitled to demand a withdrawal, and that member states should follow constitutional procedures, for example, through Parliament or a codified constitutional document. Once article 50 is triggered, there is a two-year time limit to complete negotiations, a procedure which would leave a seceding member without any bargaining power in negotiations, because the costs of having no trade treaty would be proportionally greater for the individual state than for the remaining EU bloc.
Article 7 allows member states to be suspended for a "clear risk of a serious breach" of the values in article 2 (for example, democracy, equality, human rights) with a four-fifths vote of the Council of the European Union and the consent of the Parliament. Within the treaties' framework, sub-groups of member states may make further rules that only apply to those member states who want them. For example, the Schengen Agreements of 1985 and 1990 allow people to move without any passport or ID checks anywhere in the EU, but did not apply to the UK or Ireland. During the European debt crisis, the Treaty Establishing the European Stability Mechanism 2012 and the Treaty on Stability, Co-ordination and Governance 2012 (the "Fiscal Compact") were adopted only for member states which had the Euro (i.e. not Denmark, Sweden, the UK, Poland, the Czech Republic, Hungary, Romania or Bulgaria). This required, among other things, a pledge to balance the government budget and limit structural deficits to 0.5 per cent of GDP, with fines for non-compliance. The jurisdiction for these rules remains with the Court of Justice.
Executive institutions
The European Commission is the main executive body of the European Union. Article 17(1) of the Treaty on European Union states the commission should "promote the general interest of the Union" while Article 17(3) adds that Commissioners should be "completely independent" and not "take instructions from any Government". Under Article 17(2), "Union legislative acts may only be adopted on the basis of a Commission proposal, except where the Treaties provide otherwise". This means that the commission has a monopoly on initiating the legislative procedure, although the council or Parliament are the "de facto catalysts of many legislative initiatives".
The commission's President (as of 2021 Ursula von der Leyen) sets the agenda for its work. Decisions are taken by a simple majority vote, often through a "written procedure" of circulating the proposal and adopting it if there are no objections. In response to Ireland's initial rejection of the Treaty of Lisbon, it was agreed to keep the system of one Commissioner from each of the member states, including the President and the High Representative for Foreign and Security Policy (currently Josep Borrell). The Commission President is elected by the European Parliament by an absolute majority of its members, following the parliamentary elections every five years, on the basis of a proposal by the European Council. The latter must take account of the results of the European elections, in which European political parties announce the name of their candidate for this post. Hence, in 2014, Juncker, the candidate of the European People's Party which won the most seats in Parliament, was proposed and elected.
The remaining commissioners are appointed by agreement between the president-elect and each national government, and are then, as a block, subject to a qualified majority vote of the council and majority approval of the Parliament. The Parliament can only approve or reject the whole commission, not individual commissioners, but it conducts public hearings with each of them prior to its vote, which in practice often triggers changes to individual appointments or portfolios. TFEU art 248 says the president may reshuffle commissioners without member state approval, though this is uncommon. A proposal that the commissioners be drawn from the elected Parliament was not adopted in the Treaty of Lisbon, though in practice several invariably are, relinquishing their seat in order to serve. Commissioners have various privileges, such as being exempt from member state taxes (but not EU taxes), and having immunity from prosecution for doing official acts. Commissioners have sometimes been found to have abused their offices, particularly since the Santer Commission was censured by Parliament in 1999, and it eventually resigned due to corruption allegations. This resulted in one main case, Commission v Edith Cresson, where the European Court of Justice held that a Commissioner giving her dentist a job, for which he was clearly unqualified, did not in fact break any law. By contrast to the ECJ's strictly legalistic approach, a Committee of Independent Experts found that a culture had developed where few Commissioners had 'even the slightest sense of responsibility'. This led to the creation of the European Anti-fraud Office. In 2012, it investigated the Maltese Commissioner for Health, John Dalli, who quickly resigned after allegations that he received a €60m bribe in connection with a Tobacco Products Directive.
Beyond the commission, the European Central Bank has relative executive autonomy in its conduct of monetary policy for the purpose of managing the euro. It has a six-person board appointed by the European Council, on the Council's recommendation. The president of the council and a commissioner can sit in on ECB meetings, but do not have voting rights.
Legislature
While the Commission has a monopoly on initiating legislation, the European Parliament and the Council of the European Union have powers of amendment and veto during the legislative process. According to the Treaty on European Union articles 9 and 10, the EU observes "the principle of equality of its citizens" and is meant to be founded on "representative democracy". In practice, equality and democracy are still in development because the elected representatives in the Parliament cannot initiate legislation against the commission's wishes, citizens of the smallest countries have greater voting weight in Parliament than citizens of the largest countries, and "qualified majorities" or consensus of the council are required to legislate. This "democratic deficit" has encouraged numerous proposals for reform, and is usually perceived as a hangover from earlier days of integration led by member states. Over time, the Parliament gradually assumed more voice: from being an unelected assembly, to its first direct elections in 1979, to having increasingly more rights in the legislative process. Citizens' rights are therefore limited compared to the democratic polities within all European member states: under TEU article 11, citizens and associations have the right to publicise their views and the right to submit an initiative that must be considered by the Commission if it has received at least one million signatures. TFEU article 227 contains a further right for citizens to petition the Parliament on issues which affect them.
Parliament elections take place every five years, and votes for Members of the European Parliament (MEP) in member states must be organised by proportional representation or a single transferable vote. There are 750 MEPs and their numbers are "degressively proportional" according to member state size. This means – although the council is meant to be the body representing member states – in the Parliament citizens of smaller member states have more voice than citizens in larger member states, as the illustration below shows. MEPs divide, as they do in national Parliaments, along political party lines: the conservative European People's Party is currently the largest, and the Party of European Socialists leads the opposition. Parties do not receive public funds from the EU, as the Court of Justice held in Parti écologiste "Les Verts" v European Parliament that this was entirely an issue to be regulated by the member states. The Parliament's powers include calling inquiries into maladministration and appointing an Ombudsman pending any court proceedings. It can require the Commission to respond to questions and by a two-thirds majority can censure the whole Commission (as happened to the Santer Commission in 1999). In some cases, the Parliament has explicit consultation rights, which the Commission must genuinely follow. However, its participation in the legislative process still remains limited because no member can actually initiate or pass legislation without the Commission and Council, meaning power ("kratia") is not in the hands of directly elected representatives of the people ("demos"): in the EU it is not yet true that "the administration is in the hands of the many and not of the few".
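As a rough numerical illustration of degressive proportionality, the short Python sketch below compares seats per million citizens for the largest and smallest member states; the seat counts reflect the Lisbon Treaty's caps of at most 96 and at least 6 seats per state, and the population figures are approximate.

# Approximate figures, to illustrate "degressive proportionality" only.
seats = {"Germany": 96, "Malta": 6}
population_millions = {"Germany": 83.2, "Malta": 0.52}

for state in seats:
    per_million = seats[state] / population_millions[state]
    print(f"{state}: {per_million:.1f} MEPs per million citizens")
# Germany: about 1.2 MEPs per million citizens; Malta: about 11.5 - so a Maltese
# citizen's vote carries roughly ten times the weight of a German citizen's.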
The second main legislative body is the Council of the European Union, which is composed of different ministers of the member states. The heads of government of member states also convene a "European Council" (a distinct body), which TEU article 15 says shall provide the 'necessary impetus for its development and shall define the general political directions and priorities'. It meets every six months and its President (currently former Belgian Prime Minister Charles Michel) is meant to 'drive forward its work', but it does not itself exercise 'legislative functions'. The Council does this: in effect it is the governments of the member states, with a different minister attending each meeting depending on the topic discussed (e.g. for environmental issues, the member states' environment ministers attend and vote; for foreign affairs, the foreign ministers, etc.). The minister must have the authority to represent and bind the member states in decisions. When voting takes place it is weighted inversely to member state size, so smaller member states are not dominated by larger member states. In total there are 352 votes, but for most acts there must be a qualified majority vote, if not consensus. TEU article 16(4) and TFEU article 238(3) define this to mean at least 55 per cent of the Council members (not votes) representing 65 per cent of the population of the EU: currently this means around 74 per cent, or 260 of the 352 votes. This is critical during the legislative process.
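The double-majority test in TEU article 16(4) and TFEU article 238(3) can be stated compactly in code. The sketch below is a minimal model of that rule only (it ignores the older weighted-vote system and special blocking-minority provisions), and the member names and populations are hypothetical.

def qualified_majority(votes_for: set, populations: dict) -> bool:
    # At least 55% of Council members in favour, together representing
    # at least 65% of the EU population (TEU art 16(4) / TFEU art 238(3)).
    share_of_states = len(votes_for) / len(populations)
    share_of_population = sum(populations[s] for s in votes_for) / sum(populations.values())
    return share_of_states >= 0.55 and share_of_population >= 0.65

# Hypothetical member states with populations in millions:
populations = {"A": 83, "B": 67, "C": 60, "D": 47, "E": 38, "F": 17, "G": 10, "H": 5}
print(qualified_majority({"A", "B", "C", "D", "E"}, populations))
# -> True: 5 of 8 states (62.5%) representing about 90% of the population.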
To make new legislation, TFEU article 294 defines the "ordinary legislative procedure" that applies for most EU acts. The essence is that there are three readings, starting with a Commission proposal, where the Parliament must vote by a majority of all MEPs (not just those present) to block or suggest changes, and the Council must vote by qualified majority to approve changes, but by unanimity to block a Commission amendment. Where the different institutions cannot agree at any stage, a "Conciliation Committee" is convened, representing MEPs, ministers and the commission, to try to get agreement on a joint text: if this works, it will be sent back to the Parliament and Council to approve by absolute and qualified majority. This means legislation can be blocked by a majority in Parliament, a minority in the council, or a majority in the commission: it is harder to change EU law than for it to stay the same. A different procedure exists for budgets. For "enhanced cooperation" among a sub-set of at least nine member states, authorisation must be given by the Council. Member state governments should be informed by the Commission at the outset before any proposals start the legislative procedure. The EU as a whole can only act within its power set out in the Treaties. TEU articles 4 and 5 state that powers remain with the member states unless they have been conferred, although there is a debate about the Kompetenz-Kompetenz question: who ultimately has the "competence" to define the EU's "competence". Many member state courts believe they decide, other member state Parliaments believe they decide, while within the EU, the Court of Justice believes it has the final say.
Judiciary
The judiciary of the EU has played an important role in the development of EU law. It interprets the treaties, and has accelerated economic and political integration. Today the Court of Justice of the European Union (CJEU) is the main judicial body, within which there is a higher Court of Justice that deals with cases of greater public importance, and a General Court that deals with issues of detail but without general importance, as well as a separate Court of Auditors. Under the Treaty on European Union article 19(2) there is one judge from each member state in the Court of Justice and General Court (27 on each at present). Judges should "possess the qualifications required for appointment to the highest judicial offices" (or for the General Court, the "ability required for appointment to high judicial office"). A president is elected by the judges for three years. While TEU article 19(3) says the Court of Justice is the ultimate court to interpret questions of EU law, in practice most EU law is applied by member state courts (e.g. the English Court of Appeal, the German Bundesgerichtshof, the Belgian Cour du travail, etc.). Member state courts can refer questions to the CJEU for a preliminary ruling. The CJEU's duty is to "ensure that in the interpretation and application of the Treaties the law is observed", although realistically it has the ability to expand and develop the law according to the principles it develops consistently with democratic values. Examples of landmark, and frequently controversial, judgments include Van Gend en Loos (holding that EU law created a new legal order, and that citizens could sue for treaty rights), Mangold v Helm (establishing equality as a general principle of EU law), and Kadi v Commission (confirming that international law had to conform with basic principles of EU law). Until 2016, there was the European Union Civil Service Tribunal, which dealt with EU institutions' staff issues.
The Statute of the Court and the TFEU require that judges be appointed only if they have no political occupation, with independence "beyond doubt". They are selected for renewable six-year terms by "common accord" of governments, with the advice of seven EU or member state judges that the Council and Parliament select. The Rules of Procedure of the Court of Justice, article 11, say the court is usually organised into chambers of 3 or 5 judges each. A "grand chamber" of 15 more senior judges sits on questions of "difficulty or importance", or those requested by member states. The court's president and vice-president are elected by the other judges for renewable 3-year terms by secret ballot. Judges can only be dismissed if all other judges and Advocates General unanimously agree. Advocates General are appointed by the court to give reasoned submissions on cases, especially those involving new points of law. Unlike judges on the Court, they write opinions as themselves, rather than collectively, often with a command of prose and reason, and while not binding these opinions are often followed in practice. In addition, each judge has secretaries or referendaires who research and write. Unlike the UK, where judges always write their own opinions, referendaires often assist in drafting the judgments in the Court of Justice. The Court's Translation Directorate will translate every final judgment into the 24 official languages of the European Union. The Court of Justice gives three main kinds of judgments: (1) preliminary rulings, requested by the courts of member states; (2) rulings in enforcement actions, brought by the commission or member states against the EU, a member state, or any other party that is alleged to violate EU law; and (3) final rulings in other direct actions, where the EU or a member state is involved as a party to the dispute. The Rules of Procedure of the Court of Justice, modelled on the International Court of Justice, begin with the submission of written cases to the court, followed by a short oral hearing. In each case a judge is designated to actively manage the hearing (called a rapporteur) and draft the judgment (probably with help from referendaires). The court always deliberates and votes before the final opinion is written and published. Cases in the General Court can be appealed to the Court of Justice on points of law. While there is no formal appeal procedure from the Court of Justice, in practice its actions are subject to scrutiny by both the supreme courts of member states and the European Court of Human Rights, even if the final balance of power is unresolved.
Conflict of laws
Since its founding, the EU has operated among an increasing plurality of member state and globalising legal systems. This has meant both the European Court of Justice and the supreme courts of the states have had to develop principles to resolve conflicts of laws between different systems. Within the EU itself, the Court of Justice's view is that if Union law conflicts with a provision of State law, then Union law has primacy. In the first major case in 1964, Costa v ENEL, a Milanese lawyer and former shareholder of an energy company, Mr Costa, refused to pay his electricity bill to Enel, as a protest against the nationalisation of the Italian energy corporations. He claimed the Italian nationalisation law conflicted with the Treaty of Rome, and requested a reference be made to both the Italian Constitutional Court and the Court of Justice under TFEU article 267. The Italian Constitutional Court gave an opinion that because the nationalisation law was from 1962, and the treaty was in force from 1958, Costa had no claim. By contrast, the Court of Justice held that ultimately the Treaty of Rome in no way prevented energy nationalisation, and in any case under the Treaty provisions only the commission could have brought a claim, not Mr Costa. However, in principle, Mr Costa was entitled to plead that the Treaty conflicted with national law, and the court would have a duty to consider his claim to make a reference if there would be no appeal against its decision. The Court of Justice, repeating its view in Van Gend en Loos, said member states "have limited their sovereign rights, albeit within limited fields, and have thus created a body of law which binds both their nationals and themselves" on the "basis of reciprocity". EU law would not "be overridden by domestic legal provisions, however framed... without the legal basis of the community itself being called into question". This made any "subsequent unilateral act" of the member state inapplicable. Similarly, in Amministrazione delle Finanze dello Stato v Simmenthal SpA, a company, Simmenthal SpA, claimed that a public health inspection fee under an Italian law of 1970 for importing beef from France to Italy was contrary to two Regulations from 1964 and 1968. In "accordance with the principle of the precedence of Community law", said the Court of Justice, the "directly applicable measures of the institutions" (such as the Regulations in the case) "render automatically inapplicable any conflicting provision of current national law". This was necessary to prevent a "corresponding denial" of Treaty "obligations undertaken unconditionally and irrevocably by member states", that could "imperil the very foundations of the" EU. But despite the views of the Court of Justice, the national courts of member states have not accepted the same analysis.
Generally speaking, while all member states recognise that EU law takes primacy over national law where this is agreed in the Treaties, they do not accept that the Court of Justice has the final say on foundational constitutional questions affecting democracy and human rights. In the United Kingdom, the basic principle is that Parliament, as the sovereign expression of democratic legitimacy, can decide whether it wishes to expressly legislate against EU law. This, however, would only happen in the case of an express wish of the people to withdraw from the EU.
It was held in R (Factortame Ltd) v Secretary of State for Transport that "whatever limitation of its sovereignty Parliament accepted when it enacted the European Communities Act 1972 was entirely voluntary" and so "it has always been clear" that UK courts have a duty "to override any rule of national law found to be in conflict with any directly enforceable rule of Community law". In 2014, in R (HS2 Action Alliance Ltd) v Secretary of State for Transport, the Supreme Court of the United Kingdom noted that although the UK constitution is uncodified, there could be "fundamental principles" of common law, and Parliament "did not either contemplate or authorise the abrogation" of those principles when it enacted the European Communities Act 1972. The view of the German Constitutional Court in the Solange I and Solange II decisions is that if the EU does not comply with its basic constitutional rights and principles (particularly democracy, the rule of law and the social state principles) then it cannot override German law. However, as the nicknames of the judgments suggest, "so long as" the EU works towards the democratisation of its institutions, and has a framework that protects fundamental human rights, it would not review EU legislation for compatibility with German constitutional principles. Most other member states have expressed similar reservations. This suggests the EU's legitimacy rests on the ultimate authority of member states, its factual commitment to human rights, and the democratic will of the people.
In contrast to its relation with member state law, the relation of EU law to international law is debated, particularly regarding the European Convention on Human Rights and the United Nations. All individual EU member states are party to both organisations through international treaties. The Treaty on European Union article 6(2) requires the EU to accede to the ECHR, but this would "not affect the Union's competences as defined in the Treaties". This was thought necessary before the Treaty of Lisbon to ensure that the EU gave adequate protection to human rights, overseen by the external European Court of Human Rights in Strasbourg. However, in Opinion 2/13, after a request by the commission to review its plan to accede, the Court of Justice (in Luxembourg) produced five main reasons why it felt that the accession agreement as it stood was incompatible with the treaties. The reasoning was regarded by a majority of commentators as a thinly veiled attempt of the Court of Justice to clutch onto its own power, but it has meant the commission is redrafting a new accession agreement. Under TEU articles 3(5), 21, 34 and 42, the EU must also respect the principles of the United Nations Charter. After the September 11 attacks on the World Trade Center in New York City, the UN Security Council adopted a resolution to freeze the assets of suspected terrorists linked to Osama bin Laden. This included a Saudi national, Mr Kadi. Sweden froze his assets pursuant to an EU Regulation, which gave effect to the UN Security Council resolution. In Kadi v Commission, Mr Kadi claimed there was no evidence that he was connected to terrorism, and he had not had a fair trial: a fundamental human right. The opinion of AG Maduro recalled the words of Aharon Barak, of the Supreme Court of Israel, that it "is when the cannons roar that we especially need the laws". The Court of Justice held that even obligations flowing from UN membership cannot contravene "the principles that form part of the very community legal order". In effect the EU has developed a rule that, within the boundaries of certain jus cogens principles, other courts may take primacy. The content of those core principles remains open to ongoing judicial dialogue among the senior courts in the Union.
Administrative law
While constitutional law concerns the European Union's governance structure, administrative law binds EU institutions and member state governments to follow the law. Both member states and the Commission have a general legal right or "standing" (locus standi) to bring claims against EU institutions and other member states for breach of the treaties. From the EU's foundation, the Court of Justice also held that the Treaties allowed citizens or corporations to bring claims against EU and member state institutions for violation of the Treaties and Regulations, if they were properly interpreted as creating rights and obligations. However, in 1986 it was held that citizens or corporations could not rely on Directives to bring claims against other non-state parties. This meant that courts of member states were not bound to apply a Union law, where a state law conflicted, if doing so would impose an obligation on another citizen or corporation, even though the member state government itself could be sued. These rules on "direct effect" limit the extent to which member state courts are bound to administer EU law. All actions by EU institutions can be subject to judicial review, and judged by standards of proportionality, particularly where general principles of law, or fundamental rights, are engaged. The remedy for a claimant where there has been a breach of the law is often monetary damages, but courts can also require specific performance or grant an injunction, in order to ensure the law is as effective as possible.
Direct effect
Although it is generally accepted that EU law has primacy, not all EU laws give citizens standing to bring claims: that is, not all EU laws have "direct effect". In Van Gend en Loos v Nederlandse Administratie der Belastingen it was held that the provisions of the Treaties (and EU Regulations) are directly effective, if they are (1) clear and unambiguous, (2) unconditional, and (3) do not require EU or national authorities to take further action to implement them. Van Gend en Loos, a postal company, claimed that what is now TFEU article 30 prevented the Dutch Customs Authorities charging tariffs when it imported urea-formaldehyde plastics from Germany to the Netherlands. After a Dutch court made a reference, the Court of Justice held that even though the Treaties did not "expressly" confer a right on citizens or companies to bring claims, they could do so. Historically, international treaties had only allowed states to have legal claims for their enforcement, but the Court of Justice proclaimed "the Community constitutes a new legal order of international law". Because article 30 clearly, unconditionally and immediately stated that no quantitative restrictions could be placed on trade, without a good justification, Van Gend en Loos could recover the money it paid for the tariff. EU Regulations are the same as Treaty provisions in this sense, because as TFEU article 288 states, they are 'directly applicable in all Member States'. Member states come under a duty not to replicate Regulations in their own law, in order to prevent confusion. For instance, in Commission v Italy the Court of Justice held that Italy had breached a duty under the Treaties, both by failing to operate a scheme to pay farmers a premium to slaughter cows (to reduce dairy overproduction), and by reproducing the rules in a decree with various additions. "Regulations", held the Court of Justice, "come into force solely by virtue of their publication" and implementation could have the effect of "jeopardizing their simultaneous and uniform application in the whole of the Union". On the other hand, some Regulations may themselves expressly require implementing measures, in which case those specific rules should be followed.
While the Treaties and Regulations will have direct effect (if clear, unconditional and immediate), Directives do not generally give citizens (as opposed to the member state) standing to sue other citizens. In theory, this is because TFEU article 288 says Directives are addressed to the member states and usually "leave to the national authorities the choice of form and methods" to implement. In part this reflects that Directives often create minimum standards, leaving member states free to apply higher standards. For example, the Working Time Directive requires that every worker has at least 4 weeks' paid holiday each year, but most member states go beyond this minimum in national law. However, on the current position adopted by the Court of Justice, citizens have standing to make claims based on national laws that implement Directives, but not based on Directives themselves. Directives do not have so-called "horizontal" direct effect (i.e. between non-state parties). This view was instantly controversial, and in the early 1990s three Advocates General persuasively argued that Directives should create rights and duties for all citizens. The Court of Justice refused, but there are five large exceptions.
First, if a Directive's deadline for implementation is not met, the member state cannot enforce conflicting laws, and a citizen may rely on the Directive in such an action (so-called "vertical" direct effect). So, in Pubblico Ministero v Ratti, because the Italian government had failed to implement Directive 73/173/EEC on packaging and labelling solvents by the deadline, it was estopped from enforcing a conflicting national law from 1963 against Mr Ratti's solvent and varnish business. A member state could "not rely, as against individuals, on its own failure to perform the obligations which the Directive entails". Second, a citizen or company can also invoke a Directive as a defence in a dispute with another citizen or company (not just a public authority) which is attempting to enforce a national law that conflicts with a Directive. So, in CIA Security v Signalson and Securitel the Court of Justice held that a business called CIA Security could defend itself from allegations by competitors that it had not complied with a Belgian decree from 1991 about alarm systems, on the basis that the decree had not been notified to the commission as a Directive required. Third, if a Directive gives expression to a "general principle" of EU law, it can be invoked between private non-state parties before its deadline for implementation. This follows from Kücükdeveci v Swedex GmbH & Co KG, where the German Civil Code §622 stated that the years people worked under the age of 25 would not count towards the increasing statutory notice before dismissal. Ms Kücükdeveci worked for 10 years, from age 18 to 28, for Swedex GmbH & Co KG before her dismissal. She claimed that the law not counting her years under age 25 was unlawful age discrimination under the Employment Equality Framework Directive. The Court of Justice held that the Directive could be relied on by her because equality was also a general principle of EU law. Fourth, if the defendant is an emanation of the state, even if not central government, it can still be bound by Directives. In Foster v British Gas plc the Court of Justice held that Mrs Foster was entitled to bring a sex discrimination claim against her employer, British Gas plc, which made women retire at age 60 and men at 65, if (1) pursuant to a state measure, (2) it provided a public service, and (3) it had special powers. This could also be true if the enterprise is privatised, as was held of a water company that was responsible for basic water provision.

Fifth, national courts have a duty to interpret domestic law "as far as possible in the light of the wording and purpose of the directive". Textbooks (though not the Court itself) often call this "indirect effect". In Marleasing SA v La Comercial SA the Court of Justice held that a Spanish court had to interpret its general Civil Code provisions, on contracts lacking cause or defrauding creditors, to conform with the First Company Law Directive article 11, which required that incorporations could only be nullified for a fixed list of reasons. The Court of Justice quickly acknowledged that the duty of interpretation cannot contradict plain words in a national statute. But, if a member state has failed to implement a Directive, a citizen may not be able to bring claims against other non-state parties. The citizen must instead sue the member state itself for failure to implement the law.
In sum, the Court of Justice's position on direct effect means that governments and taxpayers must bear the cost when private parties, mostly corporations, refuse to follow the law.
References and remedies
Litigation often begins and is resolved by member state courts. They interpret and apply EU law, and award remedies of compensation and restitution (remedying loss or stripping gains), injunctions and specific performance (making somebody stop or do something). If, however, the position in EU law appears unclear, member state courts can refer questions to the Court of Justice for a "preliminary ruling" on EU law's proper interpretation. TFEU article 267 says a court "may" refer "if it considers" this "is necessary to enable it to give judgment", and "shall bring the matter before the Court" if there is no possibility for further appeal and remedy. Any "court or tribunal of a Member State" can refer. This is widely interpreted. It obviously includes bodies like the UK Supreme Court, a High Court, or an Employment Tribunal. In Vaassen v Beambtenfonds Mijnbedrijf the Court of Justice also held that a mining worker pension arbitration tribunal could make a reference. By contrast, and oddly, in Miles v European Schools the Court of Justice held that a Complaints Board of European Schools, set up under an international agreement, the European Schools Convention, could not refer because, though it was a court, it was not "of a member state" (even though all member states had signed that Convention).
On the other hand, courts and tribunals of last resort are theoretically under a duty to refer questions. In the UK, for example, Lord Denning MR considered it appropriate to refer if the outcome of a case depended on a correct answer, and the Civil Procedure Rules entitle the High Court to refer at any stage of proceedings. The view of the Court of Justice in the leading case, CILFIT v Ministry of Health, is that a national court has no duty to refer if the law is an acte clair (a clear rule), or "so obvious as to leave no scope for any reasonable doubt as to the manner in which the question raised is to be resolved". In Kenny Roland Lyckeskog the Court of Justice held that the duty to refer did not fall on the Swedish Court of Appeal, the hovrätt, since Sweden's Supreme Court (Högsta domstol) could still give permission for appeals to continue. The practical difficulty is that judges differ on their views of whether or not the law is clear. In a significant case, Three Rivers DC v Governor of the Bank of England, the UK House of Lords felt confident that it was clear under the First Banking Directive that depositors did not have direct rights to sue the Bank of England for alleged failure to carry out adequate prudential regulation. Their Lordships highlighted that while some uncertainty might exist, the costs of delay in making a reference outweighed the benefits from total certainty. By contrast, in ParkingEye Ltd v Beavis, a majority of the Supreme Court apparently felt able to declare that the law under the Unfair Terms in Consumer Contracts Directive was acte clair, and declined to make a reference, even though a senior Law Lord delivered a powerfully reasoned dissent. However, in addition to a reluctance to make references, a general scepticism has grown among senior member state judiciaries about the mode of reasoning used by the Court of Justice. The UK Supreme Court in R (HS2 Action Alliance Ltd) v Secretary of State for Transport devoted large parts of its judgment to criticising what it viewed as an unpredictable 'teleological' mode of reasoning, which could decrease confidence in maintaining a dialogue within a plural and transnational judicial system. It added that it might not interpret the European Communities Act 1972 to abridge basic principles and understandings of constitutional functioning – in effect implying that it might decline to follow unreasonable Court of Justice judgments on important issues. Similarly, the German Constitutional Court in the Outright Monetary Transactions case referred a question for preliminary ruling on whether the European Central Bank's plan to buy Greek and other government bonds on secondary markets, despite the Treaty prohibition on buying them directly, was unlawful. In a highly unusual move, the two most senior judges dissented that the ECB's plan could be lawful, while the majority closely guided the Court of Justice on the appropriate mode of reasoning.
If references are made, the Court of Justice will give a preliminary ruling, in order for the member state court to conclude the case and award a remedy. The right to an effective remedy is a general principle of EU law, enshrined in the Charter of Fundamental Rights article 47. Most of the time Regulations and Directives will set out the relevant remedies to be awarded, or they will be construed from the legislation according to the practices of the member state. It could also be that the government is responsible for failure to properly implement a Directive or Regulation, and must therefore pay damages. In Francovich v Italy, the Italian government had failed to set up an insurance fund for employees to claim unpaid wages if their employers had gone insolvent, as the Insolvency Protection Directive required. Francovich, the former employee of a bankrupt Venetian firm, was therefore allowed to claim 6 million lira from the Italian government in damages for his loss. The Court of Justice held that if a Directive would confer identifiable rights on individuals, and there is a causal link between a member state's violation of EU law and a claimant's loss, damages must be paid. The fact that the incompatible law is an Act of Parliament is no defence. So, in Factortame it was irrelevant that Parliament had legislated to require a quota of British ownership of fishing vessels in primary legislation. Similarly, in Brasserie du Pêcheur v Germany the German government was liable to a French beer company for damages from prohibiting its imports, which did not comply with the fabled beer purity law. It was not decisive that the German Parliament had not acted wilfully or negligently. It was merely necessary that there was (1) a rule intended to confer rights, (2) a sufficiently serious breach, and (3) a causal link between the breach and damage. The Court of Justice advised that a breach is to be regarded as 'sufficiently serious' by weighing a range of factors, such as whether it was voluntary, or persistent. In Köbler v Republik Österreich the Court of Justice added that member state liability could also flow from judges failing to adequately implement the law. On the other hand, it is also clear that EU institutions, such as the commission, may be liable according to the same principles for failure to follow the law. The only institution whose decisions appear incapable of generating a damages claim is the Court of Justice itself.
Judicial review
As well as preliminary rulings on the proper interpretation of EU law, an essential function of the Court of Justice is judicial review of the acts of the EU itself. Under the Treaty on the Functioning of the European Union (TFEU) article 263(1) the Court can review the legality of any EU legislative or other "act" against the Treaties or general principles, such as those in the Charter of Fundamental Rights of the European Union. This includes legislation, and most other acts that have legal consequences for people. For example, in Société anonyme Cimenteries CBR Cementsbedrijven NV v Commission the commission made a decision to withdraw an assurance to a Dutch cement company that it would be immune from competition law fines for vertical agreements. The cement company challenged the decision, and the Commission argued this was not really an "act", and so could not be challenged. The Court of Justice held a challenge could be made, and it was an act, because it "deprived [the cement company] of the advantages of a legal situation... and exposed them to a grave financial risk". Similarly, in Deutsche Post v Commission the Commission demanded information on state aid given by Germany to Deutsche Post within 20 days. When both challenged this, the Commission argued that the demand for information could not be an act as there was no sanction. The Court of Justice disagreed, and held judicial review could proceed because the request produced "binding legal effects", since the information supplied (or not) could be relied upon as evidence in a final decision. By contrast, in IBM v Commission the Court of Justice held that a letter from the commission to IBM stating that it would sue IBM for abusing a dominant position contrary to competition law was not a reviewable act, but just a preliminary statement of intent to act. In any case, if a reviewable act of an EU institution is found incompatible with the law, under article 264 it will be declared void.
However, only a limited number of people can bring claims for judicial review. Under TFEU article 263(2), a member state, the Parliament, the Council or the Commission have automatic rights to seek judicial review. But under article 263(4) a "natural or legal person" must show a "direct and individual concern" in the act they challenge. "Direct" concern means that someone is affected by an EU act without "the interposition of an autonomous will between the decision and its effect", for instance by a national government body. In Piraiki-Patraiki v Commission, a group of Greek textile businesses, who exported cotton products to France, challenged a Commission decision allowing France to limit exports. The Commission argued that the exporters were not directly concerned, because France might decide not to limit exports, but the Court of Justice held this possibility was "entirely theoretical". A challenge could be brought. By contrast, in Municipality of Differdange v Commission a municipality wanted to challenge the Commission's decision to aid steel firms that reduced production, since this would probably reduce its tax collections. But the Court of Justice held that because Luxembourg had discretion, and its decision to reduce capacity was not inevitable, the municipality had no "direct" concern (its complaint was with the Luxembourg government instead). "Individual" concern requires that someone is affected specifically, not as a member of a group. In Plaumann & Co v Commission the Court of Justice held that a clementine importer was not individually concerned when the Commission refused permission to Germany to stop import custom duties. This made it more expensive for Mr Plaumann to import clementines, but it was equally expensive for everyone else. This decision heavily restricted the number of people who could claim for judicial review. In Unión de Pequeños Agricultores, Advocate General Jacobs proposed a broader test of allowing anyone to claim if there was a "substantial adverse effect" on the claimant's interests. Here, a group of Spanish olive oil producers challenged Council Regulation No 1638/98, which withdrew subsidies. Because Regulations are not implemented in national law, but have direct effect, they argued the requirement for individual concern would deny them effective judicial protection. The Court of Justice held that direct actions were still not allowed: if this was unsatisfactory the member states would have to change the treaties. Individual concern is not needed, however, under article 263(4), if an act is not legislation but just a "regulatory act". In Inuit Tapiriit Kanatami v Parliament and Council the Court of Justice affirmed that a Regulation does not count as a "regulatory act" within the Treaty's meaning: that term is only meant for acts of lesser importance. Here, a Canadian group representing the Inuit wished to challenge a Regulation on seal products, but was not allowed to. It would have to show both direct and individual concern as normal. Thus, without a treaty change, EU administrative law remains one of the most restrictive in Europe.
Human rights and principles
Although access to judicial review is restricted for ordinary questions of law, the Court of Justice has gradually developed a more open approach to standing for human rights. Human rights have also become essential in the proper interpretation and construction of all EU law. If there are two or more plausible interpretations of a rule, the one which is most consistent with human rights should be chosen. The Treaty of Lisbon 2007 made rights underpin the Court of Justice's competence, and required the EU's accession to the European Convention on Human Rights, overseen by the external Strasbourg Court. Initially, reflecting their primarily economic focus, the treaties made no reference to rights. However, in 1969, particularly after concern from Germany, the Court of Justice declared in Stauder v City of Ulm that 'fundamental human rights' were 'enshrined in the general principles of Community law'. This meant that Mr Stauder, who received subsidised butter under an EU welfare scheme only by showing a coupon with his name and address, was entitled to claim that this violated his dignity: he was entitled not to have to go through the humiliation of proving his identity to get food. While those 'general principles' were not written down in EU law, and were simply declared to exist by the court, this accords with a majority philosophical view that 'black letter' rules, or positive law, necessarily exist for reasons that the society which made them wants: these give rise to principles, which inform the law's purpose. Moreover, the Court of Justice has clarified that its recognition of rights was 'inspired' by member states' own 'constitutional traditions' and international treaties. These include rights found in member state constitutions, bills of rights, foundational Acts of Parliament, landmark court cases, the European Convention on Human Rights, the European Social Charter 1961, the Universal Declaration of Human Rights 1948, and the International Labour Organization's Conventions. The EU itself must accede to the ECHR, although in Opinion 2/13 the Court of Justice delayed this, because of perceived difficulties in retaining an appropriate balance of competences.
Many of the most important rights were codified in the Charter of Fundamental Rights of the European Union in 2000. While the UK has opted out of direct application of the Charter, this has little practical relevance since the Charter merely reflected pre-existing principles and the Court of Justice uses the Charter to interpret all EU law. For example, in Test-Achats ASBL v Conseil des ministres, the Court of Justice held that the Equal Treatment in Goods and Services Directive 2004 article 5(2), which purported to allow a derogation from equal treatment so that men and women could be charged different car insurance rates, was unlawful. It contravened the principle of equality in CFREU 2000 articles 21 and 23, and had to be regarded as ineffective after a transition period. By contrast, in Deutsches Weintor eG v Land Rheinland-Pfalz wine producers claimed that a direction to stop marketing their brands as 'easily digestible' (bekömmlich) by the state food regulator (acting under EU law) contravened their right to occupational and business freedom under CFREU 2000 articles 15 and 16. The Court of Justice held that, in fact, the right to health for consumers in article 35 also had to be taken into account, and was to be given greater weight, particularly given the health effects of alcohol. Some rights in the Charter, however, are not expressed with sufficient clarity to be regarded as directly binding. In AMS v Union locale des syndicats CGT a French trade union claimed that the French Labour Code should not exclude casual workers from counting towards the threshold for the right to set up a works council, which an employing entity must inform and consult. They said this contravened the Information and Consultation of Employees Directive and also CFREU article 27. The Court of Justice agreed that the French Labour Code was incompatible with the Directive, but held that article 27 was expressed too generally to create direct rights. On this view, legislation was necessary to make abstract human rights principles concrete, and legally enforceable.
Beyond human rights, the Court of Justice has recognised at least five further 'general principles' of EU law. The categories of general principles are not closed, and may develop according to the social expectations of people living in Europe.
Legal certainty requires that judgments should be prospective, open and clear.
Decision-making must be "proportionate" toward a legitimate aim when reviewing any discretionary act of a government or powerful body. For example, if a government wishes to change an employment law in a neutral way, yet this could have a disproportionate negative impact on women rather than men, the government must show a legitimate aim, and that its measures are (1) appropriate or suitable for achieving it, (2) do no more than necessary, and (3) reasonable in balancing the conflicting rights of different parties.
Equality is regarded as a fundamental principle: this matters particularly for labour rights, political rights, and access to public or private services.
Right to a fair hearing.
Professional privilege between lawyers and clients.
Free movement and trade
While the "social market economy" concept was only put into EU law by the 2007 Treaty of Lisbon, free movement and trade were central to European development since the Treaty of Rome in 1957. The standard theory of comparative advantage says two countries can both benefit from trade even if one of them has a less productive economy in all respects. Like the North American Free Trade Association, or the World Trade Organization, EU law breaks down barriers to trade, by creating rights to free movement of goods, services, labour and capital. This is meant to reduce consumer prices and raise living standards. Early theorists argued a free trade area would give way to a customs union, which led to a common market, then monetary union, then union of monetary and fiscal policy, and eventually a full union characteristic of a federal state. But in Europe those stages were mixed, and it is unclear whether the "endgame" should be the same as a state. Free trade, without rights to ensure fair trade, can benefit some groups within countries (particularly big business) more than others, and disadvantages people who lack bargaining power in an expanding market, particularly workers, consumers, small business, developing industries, and communities. For this reason, the European has become "not merely an economic union", but creates binding social rights for people to "ensure social progress and seek the constant improvement of the living and working conditions of their peoples". The Treaty on the Functioning of the European Union articles 28 to 37 establish the principle of free movement of goods in the EU, while articles 45 to 66 require free movement of persons, services and capital. These "four freedoms" were thought to be inhibited by physical barriers (e.g. customs), technical barriers (e.g. differing laws on safety, consumer or environmental standards) and fiscal barriers (e.g. different Value Added Tax rates). Free movement and trade is not meant to be a licence for unrestricted commercial profit. Increasingly, the Treaties and the Court of Justice aim to ensure free trade serves higher values such as public health, consumer protection, labour rights, fair competition, and environmental improvement.
Goods
Free movement of goods within the European Union is achieved by a customs union and the principle of non-discrimination. The EU manages imports from non-member states, duties between member states are prohibited, and imports circulate freely. In addition, under the Treaty on the Functioning of the European Union article 34, 'Quantitative restrictions on imports and all measures having equivalent effect shall be prohibited between Member States'. In Procureur du Roi v Dassonville the Court of Justice held that this rule meant all "trading rules" that are "enacted by Member States" which could hinder trade "directly or indirectly, actually or potentially" would be caught by article 34. This meant that a Belgian law requiring Scotch whisky imports to have a certificate of origin was unlikely to be lawful. It discriminated against parallel importers like Mr Dassonville, who could not get certificates from authorities in France, where they bought the Scotch. This "wide test", to determine what could potentially be an unlawful restriction on trade, applies equally to actions by quasi-government bodies, such as the former "Buy Irish" company that had government appointees. It also means states can be responsible for private actors. For instance, in Commission v France French farmer vigilantes were continually sabotaging shipments of Spanish strawberries, and even Belgian tomato imports. France was liable for these hindrances to trade because the authorities 'manifestly and persistently abstained' from preventing the sabotage.

Generally speaking, if a member state has laws or practices that directly discriminate against imports (or exports under TFEU article 35) then they must be justified under article 36. The justifications include public morality, policy or security, "protection of health and life of humans, animals or plants", "national treasures" of "artistic, historic or archaeological value" and "industrial and commercial property". In addition, although not clearly listed, environmental protection can justify restrictions on trade as an overriding requirement derived from TFEU article 11. More generally, it has been increasingly acknowledged that fundamental human rights should take priority over all trade rules. So, in Schmidberger v Austria the Court of Justice held that Austria did not infringe article 34 by failing to ban a protest that blocked heavy traffic passing over the A13, Brenner Autobahn, en route to Italy. Although many companies, including Mr Schmidberger's German undertaking, were prevented from trading, the Court of Justice reasoned that freedom of association is one of the 'fundamental pillars of a democratic society', against which the free movement of goods had to be balanced, and was probably subordinate. If a member state does appeal to the article 36 justification, the measures it takes have to be applied proportionately. This means the rule must pursue a legitimate aim and (1) be suitable to achieve the aim, (2) be necessary, so that a less restrictive measure could not achieve the same result, and (3) be reasonable in balancing the interests of free trade with interests in article 36.
Often rules apply to all goods neutrally, but may have a greater practical effect on imports than domestic products. For such "indirect" discriminatory (or "indistinctly applicable") measures the Court of Justice has developed more justifications: either those in article 36, or additional "mandatory" or "overriding" requirements such as consumer protection, improving labour standards, protecting the environment, press diversity, fairness in commerce, and more: the categories are not closed. In the noted case Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein, the Court of Justice found that a German law requiring all spirits and liqueurs (not just imported ones) to have a minimum alcohol content of 25 per cent was contrary to TFEU article 34, because it had a greater negative effect on imports. German liqueurs were over 25 per cent alcohol, but Cassis de Dijon, which Rewe-Zentrale AG wished to import from France, only had 15 to 20 per cent alcohol. The Court of Justice rejected the German government's arguments that the measure proportionately protected public health under TFEU article 36, because stronger beverages were available and adequate labelling would be enough for consumers to understand what they bought. This rule primarily applies to requirements about a product's content or packaging. In Walter Rau Lebensmittelwerke v De Smedt PVBA the Court of Justice found that a Belgian law requiring all margarine to be in cube-shaped packages infringed article 34, and was not justified by the pursuit of consumer protection. The argument that Belgians would believe it was butter if it was not cube shaped was disproportionate: it would "considerably exceed the requirements of the object in view" and labelling would protect consumers "just as effectively".

In a 2003 case, Commission v Italy, Italian law required that cocoa products that included other vegetable fats could not be labelled as "chocolate"; they had to be labelled "chocolate substitute". All Italian chocolate was made from cocoa butter alone, but British, Danish and Irish manufacturers used other vegetable fats. They claimed the law infringed article 34. The Court of Justice held that a low content of vegetable fat did not justify a "chocolate substitute" label. This was derogatory in the consumers' eyes. A 'neutral and objective statement' was enough to protect consumers. If member states place considerable obstacles on the use of a product, this can also infringe article 34. So, in a 2009 case, Commission v Italy, the Court of Justice held that an Italian law prohibiting motorcycles or mopeds pulling trailers infringed article 34. Again, the law applied neutrally to everyone, but disproportionately affected importers, because Italian companies did not make trailers. This was not a product requirement, but the Court reasoned that the prohibition would deter people from buying trailers: it would have "a considerable influence on the behaviour of consumers" that "affects the access of that product to the market". It would require justification under article 36, or as a mandatory requirement.
In contrast to product requirements or other laws that hinder market access, the Court of Justice developed a presumption that "selling arrangements" do not fall within TFEU article 34, if they apply equally to all sellers and affect them in the same manner in fact. In Keck and Mithouard two importers claimed that their prosecution under a French competition law, which prevented them selling Picon beer below wholesale price, was unlawful. The aim of the law was to prevent cut-throat competition, not to hinder trade. The Court of Justice held that, as "in law and in fact" it was an equally applicable "selling arrangement" (not something that alters a product's content), it was outside the scope of article 34, and so did not need to be justified. Selling arrangements can be held to have an unequal effect "in fact" particularly where traders from another member state are seeking to break into the market, but there are restrictions on advertising and marketing. In Konsumentombudsmannen v De Agostini the Court of Justice reviewed Swedish bans on advertising to children under age 12, and misleading commercials for skin care products. While the bans have remained (justifiable under article 36 or as a mandatory requirement), the Court emphasised that complete marketing bans could be disproportionate if advertising were "the only effective form of promotion enabling [a trader] to penetrate" the market. In Konsumentombudsmannen v Gourmet AB the Court suggested that a total ban on advertising alcohol on the radio, TV and in magazines could fall within article 34 where advertising was the only way for sellers to overcome consumers' "traditional social practices and to local habits and customs" to buy their products, but again the national courts would decide whether it was justified under article 36 to protect public health. Under the Unfair Commercial Practices Directive, the EU harmonised restrictions on marketing and advertising, forbidding conduct that distorts average consumer behaviour or is misleading or aggressive, and setting out a list of examples that count as unfair. Increasingly, states have to give mutual recognition to each other's standards of regulation, while the EU has attempted to harmonise minimum ideals of best practice. It is hoped that the attempt to raise standards will avoid a regulatory "race to the bottom", while allowing consumers access to goods from around the continent.
Workers
Since its foundation, the Treaties sought to enable people to pursue their life goals in any country through free movement. Reflecting the economic nature of the project, the European Community originally focused upon free movement of workers: as a "factor of production". However, from the 1970s, this focus shifted towards developing a more "social" Europe. Free movement was increasingly based on "citizenship", so that people had rights to empower them to become economically and socially active, rather than economic activity being a precondition for rights. This means the basic "worker" rights in TFEU article 45 function as a specific expression of the general rights of citizens in TFEU articles 18 to 21. According to the Court of Justice, a "worker" is anybody who is economically active, which includes everyone in an employment relationship, "under the direction of another person" for "remuneration". A job, however, need not be paid in money for someone to be protected as a worker. For example, in Steymann v Staatssecretaris van Justitie, a German man claimed the right to residence in the Netherlands, while he volunteered to do plumbing and household duties in the Bhagwan community, which provided for everyone's material needs irrespective of their contributions. The Court of Justice held that Mr Steymann was entitled to stay, so long as there was at least an "indirect quid pro quo" for the work he did. Having "worker" status means protection against all forms of discrimination by governments and employers in access to employment, tax, and social security rights. By contrast a citizen, who is "any person having the nationality of a Member State" (TFEU article 20(1)), has rights to seek work and vote in local and European elections, but more restricted rights to claim social security. In practice, free movement has become politically contentious as nationalist political parties have manipulated fears about immigrants taking away people's jobs and benefits (paradoxically at the same time). Nevertheless, practically "all available research finds little impact" of "labour mobility on wages and employment of local workers".
The Free Movement of Workers Regulation articles 1 to 7 set out the main provisions on equal treatment of workers. First, articles 1 to 4 generally require that workers can take up employment, conclude contracts, and not suffer discrimination compared to nationals of the member state. In a famous case, Belgian Football Association v Bosman, a Belgian footballer named Jean-Marc Bosman claimed that he should be able to transfer from R.F.C. de Liège to USL Dunkerque when his contract finished, regardless of whether Dunkerque could afford to pay Liège the habitual transfer fees. The Court of Justice held "the transfer rules constitute[d] an obstacle to free movement" and were unlawful unless they could be justified in the public interest, but this was unlikely. In Groener v Minister for Education the Court of Justice accepted that a requirement to speak Gaelic to teach in a Dublin design college could be justified as part of the public policy of promoting the Irish language, but only if the measure was not disproportionate. By contrast, in Angonese v Cassa di Risparmio di Bolzano SpA a bank in Bolzano, Italy, was not allowed to require Mr Angonese to have a bilingual certificate that could only be obtained in Bolzano. The Court of Justice, giving "horizontal" direct effect to TFEU article 45, reasoned that people from other countries would have little chance of acquiring the certificate, and because it was "impossible to submit proof of the required linguistic knowledge by any other means", the measure was disproportionate. Second, article 7(2) requires equal treatment in respect of tax. In Finanzamt Köln Altstadt v Schumacker the Court of Justice held that it contravened TFEU article 45 to deny tax benefits (e.g. for married couples, and social insurance expense deductions) to a man who worked in Germany, but was resident in Belgium, when other German residents got the benefits. By contrast, in Weigel v Finanzlandesdirektion für Vorarlberg the Court of Justice rejected Mr Weigel's claim that a re-registration charge upon bringing his car to Austria violated his right to free movement. Although the tax was "likely to have a negative bearing on the decision of migrant workers to exercise their right to freedom of movement", because the charge applied equally to Austrians, and in the absence of EU legislation on the matter, it had to be regarded as justified. Third, people must receive equal treatment regarding "social advantages", although the Court has approved residential qualifying periods. In Hendrix v Employee Insurance Institute the Court of Justice held that a Dutch national was not entitled to continue receiving incapacity benefits when he moved to Belgium, because the benefit was "closely linked to the socio-economic situation" of the Netherlands. Conversely, in Geven v Land Nordrhein-Westfalen the Court of Justice held that a Dutch woman living in the Netherlands, but working between 3 and 14 hours a week in Germany, did not have a right to receive German child benefits, even though the wife of a man who worked full-time in Germany but was resident in Austria could. The general justifications for limiting free movement in TFEU article 45(3) are "public policy, public security or public health", and there is also a general exception in article 45(4) for "employment in the public service".
Citizens
Beyond the right of free movement to work, the EU has increasingly sought to guarantee rights of citizens, and rights held simply by virtue of being a human being. But although the Court of Justice stated that 'Citizenship is destined to be the fundamental status of nationals of the Member States', political debate remains on who should have access to public services and welfare systems funded by taxation. Union citizenship has been criticised for not being inclusive enough and for having failed to establish a truly borderless space of social solidarity. In 2008, just 8 million people from 500 million EU citizens (1.7 per cent) had in fact exercised rights of free movement, the vast majority workers. According to TFEU article 20, citizenship of the EU derives from nationality of a member state. Article 21 confers general rights to free movement in the EU and to reside freely within limits set by legislation. This applies for citizens and their immediate family members. This triggers four main groups of rights: (1) to enter, depart and return, without undue restrictions, (2) to reside, without becoming an unreasonable burden on social assistance, (3) to vote in local and European elections, and (4) the right to equal treatment with nationals of the host state, but for social assistance only after 3 months of residence.
First, the Citizens Rights Directive 2004 article 4 says every citizen has the right to depart a member state with a valid passport. This has historical importance for central and eastern Europe, where the Soviet Union and the Berlin Wall once denied citizens the freedom to leave. Article 5 gives every citizen a right of entry, subject to national border controls. Schengen Area countries (not the UK and Ireland) abolished the need to show documents, and police searches at borders, altogether. These reflect the general principle of free movement in TFEU article 21. Second, article 6 allows every citizen to stay three months in another member state, whether economically active or not. Article 7 allows stays over three months with evidence of "sufficient resources... not to become a burden on the social assistance system". Articles 16 and 17 give a right to permanent residence after 5 years without conditions. Third, TEU article 10(3) requires the right to vote in the local constituencies for the European Parliament wherever a citizen lives.
Fourth, and more debated, article 24 requires that the longer an EU citizen stays in a host state, the more rights they have to access public and welfare services, on the basis of equal treatment. This reflects general principles of equal treatment and citizenship in TFEU articles 18 and 20. In a simple case, Sala v Freistaat Bayern, the Court of Justice held that a Spanish lady who had lived in Germany for 25 years and had a baby was entitled to child support, without the need for a residence permit, because Germans did not need one. In Trojani v Centre public d'aide sociale de Bruxelles, a French man who lived in Belgium for two years was entitled to the "minimex" allowance from the state for a minimum living wage. In Grzelczyk v Centre Public d'Aide Sociale d'Ottignes-Louvain-la-Neuve a French student, who had lived in Belgium for three years, was entitled to receive the "minimex" income support for his fourth year of study. Similarly, in R (Bidar) v London Borough of Ealing the Court of Justice held that it was lawful to require that a French UCL economics student had lived in the UK for three years before receiving a student loan, but not that he had to have additional "settled status". Likewise, in Commission v Austria, Austria was not entitled to restrict its university places to Austrian students to avoid "structural, staffing and financial problems" if (mainly German) foreign students applied, unless it proved there was an actual problem. However, in Dano v Jobcenter Leipzig, the Court of Justice held that the German government was entitled to deny child support to a Romanian mother who had lived in Germany for 3 years, but had never worked. Because she had lived in Germany for over 3 months but under 5 years, she had to show evidence of "sufficient resources", since the Court reasoned that the right to equal treatment in article 24 within that time depended on lawful residence under article 7.
Establishment and services
As well as creating rights for "workers" who generally lack bargaining power in the market, the Treaty on the Functioning of the European Union also protects the "freedom of establishment" in article 49, and "freedom to provide services" in article 56. In Gebhard v Consiglio dell’Ordine degli Avvocati e Procuratori di Milano the Court of Justice held that to be "established" means to participate in economic life "on a stable and continuous basis", while providing "services" meant pursuing activity more "on a temporary basis". This meant that a lawyer from Stuttgart, who had set up chambers in Milan and was censured by the Milan Bar Council for not having registered, should claim for breach of establishment freedom, rather than service freedom. However, the requirements to be registered in Milan before being able to practice would be allowed if they were non-discriminatory, "justified by imperative requirements in the general interest" and proportionately applied. All people or entities that engage in economic activity, particularly the self-employed, or "undertakings" such as companies or firms, have a right to set up an enterprise without unjustified restrictions. The Court of Justice has held that both a member state government and a private party can hinder freedom of establishment, so article 49 has both "vertical" and "horizontal" direct effect. In Reyners v Belgium the Court of Justice held that a refusal to admit a lawyer to the Belgian bar because he lacked Belgian nationality was unjustified. TFEU article 49 says states are exempt from infringing others' freedom of establishment when they exercise "official authority". But regulation of an advocate's work (as opposed to a court's) was not official. By contrast in Commission v Italy the Court of Justice held that a requirement for lawyers in Italy to comply with maximum tariffs unless there was an agreement with a client was not a restriction. The Grand Chamber of the Court of Justice held the commission had not proven that this had any object or effect of limiting practitioners from entering the market. Therefore, there was no prima facie infringement freedom of establishment that needed to be justified.
In regard to companies, the Court of Justice held in R (Daily Mail and General Trust plc) v HM Treasury that member states could restrict a company moving its seat of business, without infringing TFEU article 49. This meant the Daily Mail newspaper's parent company could not evade tax by shifting its residence to the Netherlands without first settling its tax bills in the UK. The UK did not need to justify its action, as rules on company seats were not yet harmonised. By contrast, in Centros Ltd v Erhvervs- og Selskabsstyrelsen the Court of Justice found that a UK limited company operating in Denmark could not be required to comply with Denmark's minimum share capital rules. UK law only required £1 of capital to start a company, while Denmark's legislature took the view companies should only be started up if they had 200,000 Danish krone (around €27,000) to protect creditors if the company failed and went insolvent. The Court of Justice held that Denmark's minimum capital law infringed Centros Ltd's freedom of establishment and could not be justified, because a company in the UK could admittedly provide services in Denmark without being established there, and there were less restrictive means of achieving the aim of creditor protection. This approach was criticised as potentially opening the EU to unjustified regulatory competition, and a race to the bottom in legal standards, like the US state of Delaware, which is argued to attract companies with the worst standards of accountability, and unreasonably low corporate tax. Appearing to meet the concern, in Überseering BV v Nordic Construction GmbH the Court of Justice held that a German court could not deny a Dutch building company the right to enforce a contract in Germany, simply because it was not validly incorporated in Germany. Restrictions on freedom of establishment could be justified by creditor protection, labour rights to participate in work, or the public interest in collecting taxes. But in this case denial of capacity went too far: it was an "outright negation" of the right of establishment. Setting a further limit, in Cartesio Oktató és Szolgáltató bt the Court of Justice held that because corporations are created by law, they must be subject to any rules for formation that a state of incorporation wishes to impose. This meant the Hungarian authorities could prevent a company from shifting its central administration to Italy, while it still operated and was incorporated in Hungary. Thus, the court draws a distinction between the right of establishment for foreign companies (where restrictions must be justified), and the right of the state to determine conditions for companies incorporated in its territory, although it is not entirely clear why.
The "freedom to provide services" under TFEU article 56 applies to people who give services "for remuneration", especially commercial or professional activity. For example, in Van Binsbergen v Bestuur van de Bedrijfvereniging voor de Metaalnijverheid a Dutch lawyer moved to Belgium while advising a client in a social security case, and was told he could not continue because Dutch law said only people established in the Netherlands could give legal advice. The Court of Justice held that the freedom to provide services applied, it was directly effective, and the rule was probably unjustified: having an address in the member state would be enough to pursue the legitimate aim of good administration of justice. The Court of Justice has held that secondary education falls outside the scope of article 56 because usually the state funds it, but higher education does not. Health care generally counts as a service. In Geraets-Smits v Stichting Ziekenfonds Mrs Geraets-Smits claimed she should be reimbursed by Dutch social insurance for costs of receiving treatment in Germany. The Dutch health authorities regarded the treatment unnecessary, so she argued this restricted the freedom (of the German health clinic) to provide services. Several governments submitted that hospital services should not be regarded as economic, and should not fall within article 56. But the Court of Justice held health was a "service" even though the government (rather than the service recipient) paid for the service. National authorities could be justified in refusing to reimburse patients for medical services abroad if the health care received at home was without undue delay, and it followed "international medical science" on which treatments counted as normal and necessary. The Court requires that the individual circumstances of a patient justify waiting lists, and this is also true in the context of the UK's National Health Service. Aside from public services, another sensitive field of services are those classified as illegal. Josemans v Burgemeester van Maastricht held that the Netherlands' regulation of cannabis consumption, including the prohibitions by some municipalities on tourists (but not Dutch nationals) going to coffee shops, fell outside article 56 altogether. The Court of Justice reasoned that narcotic drugs were controlled in all member states, and so this differed from other cases where prostitution or other quasi-legal activity was subject to restriction.
If an activity does fall within article 56, a restriction can be justified under article 52, or by overriding requirements developed by the Court of Justice. In Alpine Investments BV v Minister van Financiën a business that sold commodities futures (with Merrill Lynch and other banking firms) attempted to challenge a Dutch law prohibiting cold calling customers. The Court of Justice held the Dutch prohibition pursued a legitimate aim to prevent "undesirable developments in securities trading", including protecting the consumer from aggressive sales tactics, and thus maintaining confidence in the Dutch markets. In Omega Spielhallen GmbH v Bonn a "laserdrome" business was banned by the Bonn council. It bought fake laser gun services from a UK firm called Pulsar Ltd, but residents had protested against "playing at killing" entertainment. The Court of Justice held that the German constitutional value of human dignity, which underpinned the ban, did count as a justified restriction on freedom to provide services. In Liga Portuguesa de Futebol v Santa Casa da Misericórdia de Lisboa the Court of Justice also held that the state monopoly on gambling, and a penalty for a Gibraltar firm that had sold internet gambling services, was justified to prevent fraud and because people's views on gambling were highly divergent. The ban was proportionate as it was an appropriate and necessary way to tackle the serious problems of fraud that arise over the internet. In the Services Directive a group of justifications developed by the case law were codified in article 16.
Capital
Free movement of capital was traditionally seen as the fourth freedom, after goods, workers and persons, services and establishment. The original Treaty of Rome required that restrictions on free capital flows only be removed to the extent necessary for the common market. From the Treaty of Maastricht, now in TFEU article 63, "all restrictions on the movement of capital between Member States and between Member States and third countries shall be prohibited". This means capital controls of various kinds are prohibited, including limits on buying currency, limits on buying company shares or financial assets, or government approval requirements for foreign investment. By contrast, taxation of capital, including corporate tax, capital gains tax and financial transaction taxes, is not affected so long as it does not discriminate by nationality. The Capital Movement Directive 1988, Annex I, covers 13 categories of capital which must move freely. In Baars v Inspecteur der Belastingen Particulieren the Court of Justice held that for investments in companies, the capital rules, rather than freedom of establishment rules, were engaged if an investment did not enable a "definite influence" through shareholder voting or other rights by the investor. That case held that a Dutch Wealth Tax Act 1964 unjustifiably exempted Dutch investments, but not Mr Baars' investments in an Irish company, from the tax: the wealth tax, or exemptions, had to be applied equally. On the other hand, TFEU article 65(1) does not prevent taxes that distinguish taxpayers based on their residence or the location of an investment (as taxes commonly focus on a person's actual source of profit) or any measures to prevent tax evasion. Apart from tax cases, largely following from cases originating in the UK, a series of cases held that government-owned golden shares were unlawful. In Commission v Germany the Commission claimed the 1960 German Volkswagen Act violated article 63, in that §2(1) capped any party's voting rights at 20% of the company, and §4(3) allowed a minority of 20% of shares held by the Lower Saxony government to block any decisions. Although this was not an impediment to actual purchase of shares, or receipt of dividends by any shareholder, the Court of Justice's Grand Chamber agreed that it was disproportionate for the government's stated aim of protecting workers or minority shareholders. Similarly, in Commission v Portugal the Court of Justice held that Portugal infringed free movement of capital by retaining golden shares in Portugal Telecom that enabled disproportionate voting rights, by creating a "deterrent effect on portfolio investments" and reducing "the attractiveness of an investment". This suggested the Court's preference that a government, if it sought public ownership or control, should nationalise the desired proportion of a company in full, in line with TFEU article 345. The final stage of completely free movement of capital was thought to require a single currency and monetary policy, eliminating the transaction costs and fluctuations of currency exchange. Following a Report of the Delors Commission in 1988, the Treaty of Maastricht made economic and monetary union an objective, first by completing the internal market, second by creating a European System of Central Banks to coordinate common monetary policy, and third by locking exchange rates and introducing a single currency, the euro.
Today, 19 member states have adopted the euro, while 9 member states have either determined to opt out or delayed their accession, particularly since the European debt crisis. According to TFEU articles 119 and 127, the objective of the European Central Bank and other central banks ought to be price stability. This has been criticised for apparently taking priority over the objective of full employment in the Treaty on European Union article 3.
Social and market regulations
While the European Economic Community originally focused on free movement and dismantling barriers to trade, more EU law today concerns regulation of the "social market economy". In 1976 the Court of Justice said in Defrenne v Sabena that the goal was "not merely an economic union", but to "ensure social progress and seek the constant improvement of the living and working conditions of their peoples". On this view, stakeholders in each member state might not have the capacity to take advantage of expanding trade in a globalising economy. Groups with greater bargaining power can exploit weaker legal rights in other member states. For example, a corporation could shift production to member states with a lower minimum wage, to increase shareholder profit, even if production costs more and workers are paid less. This would mean an aggregate loss of social wealth, and a "race to the bottom" in human development. To make globalisation fair, the EU establishes a minimum floor of rights for the stakeholders in enterprise: for consumers, workers, investors, shareholders, creditors, and the public. Each field of law is vast, so EU law is designed to be subsidiary to comprehensive rules in each member state. Member states may go beyond the harmonised minimum, acting as "laboratories of democracy". EU law sets basic standards of "exit" (where markets operate), rights (enforceable in court), and "voice" (especially through votes) in enterprise. Rules of competition law balance the interests of different groups, generally to favour consumers, for the larger purpose in the Treaty on European Union article 3(3) of a "highly competitive social market economy". The EU is bound by the Treaty on the Functioning of the European Union article 345 to "in no way prejudice the rules in Member States governing the system of property ownership". This means the EU is bound to be neutral on member states' choices to take enterprises into public ownership, or to privatise them. While there have been academic proposals for a European Civil Code, and projects to frame non-binding principles of contract and tort, harmonisation has only occurred for conflict of laws and intellectual property.
Consumer protection
Protection of European consumers has been a central part of developing the EU internal market. The Treaty on the Functioning of the European Union article 169 enables the EU to follow the ordinary legislative procedure to protect consumers' "health, safety and economic interests" and promote rights to "information, education and to organise themselves in order to safeguard their interests". All member states may grant higher protection, and a "high level of consumer protection" is regarded as a fundamental right. Beyond these general principles, and outside specific sectors, there are four main Directives: the Product Liability Directive 1985, the Unfair Terms in Consumer Contracts Directive 1993, the Unfair Commercial Practices Directive 2005 and the Consumer Rights Directive 2011, requiring information and cancellation rights for consumers. As a whole, the law is designed to ensure that consumers in the EU are entitled to the same minimum rights wherever they shop, and largely follows inspiration from theories of consumer protection developed in California and the Consumer Bill of Rights proclaimed by John F. Kennedy in 1962. The Court of Justice has continually affirmed the need for more consumer rights (than in commercial contracts) both because consumers tend to lack information, and because they have less bargaining power.
The Product Liability Directive 1985 was the first consumer protection measure. It creates strict enterprise liability for all producers and retailers for any harm to consumers from products, as a way to promote basic standards of health and safety. Any producer, or supplier if the ultimate producer is insolvent, of a product is strictly liable to compensate a consumer for any damage caused by a defective product. A "defect" is anything which falls below what a consumer is entitled to expect, and this essentially means that products should be safe for their purpose. A narrow defence is available if a producer can show that a defect could not be known by any scientific method, though this has never been successfully invoked, because it is generally thought a profit-making enterprise should not be able to externalise the risks of its activities.
The Unfair Terms in Consumer Contracts Directive 1993 was the second main measure. Under article 3(1) a term is unfair, and not binding, if it is not "individually negotiated" and "if, contrary to the requirement of good faith, it causes a significant imbalance in the parties' rights and obligations arising under the contract, to the detriment of the consumer". The Court of Justice has continually affirmed that the Directive, as recital 16 states, "is based on the idea that the consumer is in a weak position vis-à-vis the seller or supplier, as regards both his bargaining power and his level of knowledge". Terms which are very skewed are to be conclusively regarded as contrary to "good faith" and therefore unfair. For example, in RWE AG v Verbraucherzentrale NRW eV clauses in gas supply contracts enabling the corporation, RWE, to unilaterally vary prices were advised by the Court of Justice to be not sufficiently transparent, and therefore unfair. In Brusse v Jahani BV the Court of Justice advised that clauses in a tenancy contract requiring tenants to pay €25 per day were likely unfair, and would have to be entirely void without replacement if they were not substituted with more precise mandatory terms in national legislation. In Aziz v Caixa d'Estalvis de Catalunya, following the financial crisis of 2007–2008, the European Court of Justice advised that even terms regarding repossession of homes in Spain had to be assessed for fairness by national courts. In Kušionová v SMART Capital a.s., the Court of Justice held that consumer law was to be interpreted in the light of fundamental rights, including the right to housing, if a home could be repossessed. Because consumer law operates through Directives, national courts have the final say on applying the general principles set out by the Court of Justice's case law.
Unfair Commercial Practices Directive 2005/29/EC
Consumer Rights Directive 2011/83/EU
Payment Services Directive 2007/64/EC
Late Payments Directive 2011/7/EU
Labour rights
While free movement of workers was central to the first European Economic Community agreement, the development of European labour law has been a gradual process. Originally, the Ohlin Report of 1956 recommended that labour standards did not need to be harmonised, although a general principle of anti-discrimination between men and women was included in the early Treaties. Increasingly, the absence of labour rights was seen as inadequate given the capacity for a "race to the bottom" in international trade if corporations can shift jobs and production to countries with low wages. Today, the EU is required under TFEU article 147 to contribute to a "high level of employment by encouraging cooperation between Member States". This has not resulted in legislation, as significant change usually requires taxation and fiscal stimulus, while the European Central Bank's monetary policy has been acutely controversial during the European debt crisis. Under article 153(1), the EU is able to use the ordinary legislative procedure on a list of labour law fields. This notably excludes wage regulation and collective bargaining. Generally, four main fields of EU regulation of labour rights touch on (1) individual labour rights, (2) anti-discrimination regulations, (3) rights to information, consultation, and participation at work, and (4) rights to job security. In virtually all cases, the EU follows the principle that member states can always create rights more beneficial to workers. This is because the fundamental principle of labour law is that employees' unequal bargaining power justifies substitution of rules in property and contract with positive social rights so that people may earn a living to fully participate in a democratic society. The EU's competences generally follow principles codified in the Community Charter of the Fundamental Social Rights of Workers 1989, introduced in the "social chapter" of the Treaty of Maastricht. Initially the UK had opted out, because of opposition by the Conservative Party, but acceded through the Treaty of Amsterdam after the Labour Party won the 1997 general election.
The first group of Directives create a range of individual rights in EU employment relationships. The Employment Information Directive 1991 requires that every employee (however defined by member state law) has the right to a written statement of their employment contract. While there is no wage regulation, the Institutions for Occupational Retirement Provision Directive 2003 requires that pension benefits are protected through a national insurance fund, that information is provided to beneficiaries, and that minimum standards of governance are observed. Most member states go far beyond these requirements, particularly by requiring a vote for employees in who manages their money. Reflecting basic standards in the Universal Declaration of Human Rights and ILO Conventions, the Working Time Directive 2003 requires a minimum of 4 weeks (totalling 28 days) of paid holidays each year, a minimum of 20-minute paid rest breaks for 6-hour work shifts, limits on night work or time spent on dangerous work, and a maximum 48-hour working week unless a worker individually consents. The Parental Leave Directive 2010 creates a bare minimum of 4 months of unpaid leave for parents (mothers, fathers, or legal guardians) to care for children before they turn 8 years old, and the Pregnant Workers Directive 1992 creates a right for mothers to a minimum of 14 weeks' paid leave to care for children. Finally, the Safety and Health at Work Directive 1989 sets basic requirements to prevent and insure against workplace risks, with employee consultation and participation, and this is complemented by specialised Directives, ranging from work equipment to dangerous industries. In almost all cases, member states go significantly beyond this minimum. The objective of transnational regulation is therefore to progressively raise the minimum floor in line with economic development. Second, equality was affirmed by the Court of Justice in Kücükdeveci v Swedex GmbH & Co KG to be a general principle of EU law. Further to this, the Part-time Work Directive 1997, Fixed-term Work Directive 1999 and Temporary Agency Work Directive 2008 generally require that people who do not have ordinary full-time, permanent contracts are treated no less favourably than their colleagues. However, the scope of the protected worker is left to member state law, and the TAWD 2008 only applies to "basic working conditions" (mostly pay, working hours and participation rights) and enables member states to have a qualifying period. The Race Equality Directive 2000, Equality Framework Directive 2000 and Equal Treatment Directive 2006 prohibit discrimination based on sexual orientation, disability, religion or belief, age, race and gender. As well as "direct discrimination", there is a prohibition on "indirect discrimination", where employers apply a neutral rule to everybody but it has a disproportionate impact on the protected group. The rules are not consolidated, and on gender pay they are potentially limited in not enabling a hypothetical comparator, or comparators in outsourced businesses. Equality rules do not yet apply to child care rights, which only give women substantial time off, and which consequently hinder equality between men and women in caring for children after birth and pursuing their careers.
Third, the EU is formally not enabled to legislate on collective bargaining, although the EU, with all member states, is bound by the jurisprudence of the European Court of Human Rights on freedom of association. In Wilson and Palmer v United Kingdom the Court held that any detriment for membership of a trade union was incompatible with article 11, and in Demir and Baykara v Turkey the Court held "the right to bargain collectively with the employer has, in principle, become one of the essential elements" of article 11. This approach, which includes affirmation of the fundamental right to strike in all democratic member states, has been seen as lying in tension with some of the Court of Justice's previous case law, notably ITWF v Viking Line ABP and Laval Un Partneri Ltd v Svenska Byggnadsarbetareforbundet. These controversial decisions, quickly disapproved by legislative measures, suggested the fundamental right of workers to take collective action was subordinate to business freedom to establish and provide services. More positively, the Information and Consultation Directive 2002 requires that workplaces with over 20 or 50 staff have the right to set up elected work councils with a range of binding rights, the European Works Council Directive 2009 enables works councils transnationally, and the Employee Involvement Directive 2001 requires representation of workers on company boards in some European Companies. If a company transforms from a member state corporation to incorporate under the European Company Regulation 2001, employees are entitled to no less favourable representation than under the member state's existing board participation laws. This is practically important as a majority of EU member states require employee representation on company boards. Fourth, minimum job security rights are provided by three Directives. The Collective Redundancies Directive 1998 specifies that minimum periods of notice and consultation occur if more than a set number of jobs in a workplace are at risk. The Transfers of Undertakings Directive 2001 requires that staff retain all contractual rights, unless there is an independent economic, technical or organisational reason, if their workplace is sold from one company to another. Last, the Insolvency Protection Directive 2008 requires that employees' wage claims are protected in case their employer becomes insolvent. This last Directive gave rise to Francovich v Italy, where the Court of Justice affirmed that member states which fail to implement the minimum standards in EU Directives are liable to pay compensation to employees who should have rights under them.
Companies and investment
Like labour regulation, European company law is not a complete system and there is no such thing as a self-standing European corporation. Instead, a series of Directives require minimum standards, usually protecting investors, to be implemented in national corporate laws. The largest companies in Europe remain member state incorporations, such as the UK "plc", the German "AG" or the French "SA". There is, however, a "European Company" (or Societas Europaea, abbreviated to "SE") created by the Statute for a European Company Regulation 2001. This sets out basic provisions on the method of registration (e.g. by merger or reincorporation of an existing company) but then states that wherever the SE has its registered office, the law of that member state supplements the rules of the Statute. The Employee Involvement Directive 2001
also adds that, when an SE is incorporated, employees have the default right to retain all existing representation on the board of directors that they have, unless they negotiate by collective agreement a different or better plan than is provided for in existing member state law. Other than this, the most important standards in a typical company law are left to member state law, so long as they comply with further minimum requirements in the company law directives. Duties owed by the board of directors to the company and its stakeholders, or the right to bring derivative claims to vindicate constitutional rights, are not generally regulated by EU law. Nor are rights of pre-emption to buy shares, nor rights of any party regarding claims by tort, contract or piercing the corporate veil to hold directors and shareholders accountable. However, Directives do require minimum rights on company formation, capital maintenance, accounting and audit, market regulation, board neutrality in a takeover bid, rules on mergers, and management of cross-border insolvency. The omission of minimum standards is important since the Court of Justice held in Centros that freedom of establishment requires companies to operate in any member state they choose. This has been argued to risk a "race to the bottom" in standards, although the Court of Justice soon affirmed in Inspire Art that companies must still comply with proportionate requirements that are in the "public interest".
Among the most important governance standards are rights to vote for who is on the board of directors, for investors of labour and capital. A Draft Fifth Company Law Directive proposed in 1972, which would have required EU-wide rights for employees to vote for boards, stalled mainly because it attempted to require two-tier board structures, although most EU member states have codetermination today with unified boards. The Shareholder Rights Directive 2007 requires that shareholders be able to make proposals, ask questions at meetings, vote by proxy and vote through intermediaries. This has become increasingly important as most company shares are held by institutional investors (primarily asset managers or banks, depending on the member state) who are holding "other people's money". A large proportion of this money comes from employees and other people saving for retirement, but who do not have an effective voice. Unlike Switzerland after a 2013 people's initiative, or the U.S. Dodd-Frank Act 2010 in relation to brokers, the EU has not yet prevented intermediaries casting votes without the express instructions of beneficiaries. This concentrates power into a small number of financial institutions, and creates the potential for conflicts of interest where financial institutions sell retirement, banking or other products to companies in which they cast votes with other people's money. A series of rights for ultimate investors exist in the Institutions for Occupational Retirement Provision Directive 2003. This requires duties of disclosure in how a retirement fund is run, and funding and insurance to guard against insolvency, but not yet that voting rights are only cast on the instructions of investors. By contrast, the Undertakings for Collective Investment in Transferable Securities Directive 2009 does suggest that investors in a mutual fund (or "collective investment scheme") should control the voting rights. The UCITS Directive 2009 is primarily concerned with creating a "passport". If a firm complies with rules on authorisation, and governance of the management and investment companies in an overall fund structure, it can sell its shares in a collective investment scheme across the EU. This forms part of a broader package of Directives on securities and financial market regulation, much of which has been shaped by experience in the financial crisis of 2007–2008. Additional rules on remuneration practices, separating depositary bodies in firms from management and investment companies, and more penalties for violations were inserted in 2014. These measures are meant to decrease the risk to investors that an investment goes insolvent. The Markets in Financial Instruments Directive 2004 applies to other businesses selling financial instruments. It requires similar authorisation procedures to have a "passport" to sell in any EU country, and transparency of financial contracts through duties to disclose material information about products being sold, including disclosure of potential conflicts of interest with clients. The Alternative Investment Fund Managers Directive 2011 applies to firms with massive quantities of capital, over €100 million, essentially hedge funds and private equity firms. Similarly, it requires authorisation to sell products EU-wide, then basic transparency requirements on products being sold, and requirements in remuneration policies for fund managers that are perceived to reduce "risk" or make pay "performance" related. They do not, however, require limits to pay.
There are general prohibitions on conflicts of interest, and specialised prohibitions on asset stripping. The Solvency II Directive 2009 is directed particularly at insurance firms, requiring minimum capital and best practices in valuation of assets, again to avoid insolvency. The Capital Requirements Directives contain analogous rules, with similar goals, for banks. To administer the new rules, the European System of Financial Supervision was established in 2011, and consists of three main branches: the European Securities and Markets Authority in Paris, the European Banking Authority (originally in London, relocated to Paris after Brexit) and the European Insurance and Occupational Pensions Authority in Frankfurt.
Competition law
Competition law aims "to prevent competition from being distorted to the detriment of the public interest, individual undertakings and consumers", especially by limiting big business power. It covers all types of enterprise or "undertaking" regardless of legal form, or "every entity engaged in an economic activity", but not non-profit organisations based on the principle of solidarity, or bodies carrying out a regulatory function. Employees and trade unions are not undertakings, and are outside the scope of competition law, as are solo self-employed workers, because on the long-standing consensus in international law labour is not a commodity, and workers have structurally unequal bargaining power compared to business and employers. A legal professional body setting regulatory standards was held to be outside competition law, and so were the rules of the International Olympic Committee and the International Swimming Federation in prohibiting drugs, because although drugs might increase "competition", the "integrity and objectivity of competitive sport" was more important. EU competition law only regulates activities where trade between member states is affected to an "appreciable" degree, but member states may have higher standards that comply with social objectives. The four most important sets of rules relate to monopolies and enterprises with a dominant position, mergers and takeovers, cartels or collusive practices, and state aid.
First, Article 102 of the Treaty on the Functioning of the European Union prohibits "abuse by one or more undertakings of a dominant position". A "dominant position" is presumed to exist with over a 50% market share, and may exist with a 39.7% market share. There may also be dominance through control of data, or by a group of undertakings acting collectively, and a corporate group will be treated as a "single economic unit" for the purpose of calculating market share. The prohibited categories of "abuse" are open-ended, but article 102 spells out the ban on (a) "unfair purchase or selling prices", (b) "limiting production", (c) "applying dissimilar conditions to equivalent transactions", and (d) imposing unconnected "supplementary obligations" to contracts. In a leading case on (a) unfair prices, United Brands Co v Commission held that, although a banana company had a dominant position in its product and geographic markets (because bananas were not easily substituted with other fruit, and its relevant market share was 40 to 45%), prices 7% higher than rivals' were not enough to be an abuse. By contrast, prices 25% higher than a company's estimated costs were found to be unfair. Unfair pricing also includes predatory pricing, where a company cuts its own selling prices to bankrupt a competitor: there is a presumption of abuse if a company prices "below average variable costs", namely "those which vary depending on the quantities produced". There is no requirement to show losses might be recouped. A leading case on (b) limiting production is AstraZeneca plc v Commission, where a drug company was fined €60 million for misleading public authorities to secure a longer patent for a medicine it called Losec, so limiting public use. In 2022, in Google LLC v Commission the General Court upheld a €4.125 billion fine against Google for the "obstruction of development and distribution of competing Android operating systems" by paying manufacturers to not install any version other than Google's own. Refusal to supply goods or services may also be abusive, as in Commercial Solvents Corporation v Commission, where the subsidiary of CSC stopped selling an ingredient for a drug to combat tuberculosis to a competitor after it itself entered the drug market. Similarly, in Microsoft Corp v Commission, Microsoft was fined €497 million for, among other things, refusing to give Sun Microsystems and other competitors information needed to build servers after Microsoft itself entered the server market.
Under the third type of abuse, (c) unlawful discrimination, in British Airways plc v Commission it was held that British Airways abused its dominant position by giving some travel agents extra payment to promote its tickets over others. This made "market entry very difficult" and frustrated the ability of "co-contractors to choose between various sources of supply or commercial partners". Under (d), examples of the abuse of imposing supplementary obligations include the Microsoft Corp v Commission case, where Microsoft bundled a pre-installed media player into Windows OS sales, which had the effect of damaging competitor businesses such as RealPlayer. By contrast, in Intel Corp v Commission, Intel was fined €1.06 billion by the Commission for giving rebates on x86 computer processors if manufacturers bought over 80% of their chips only from Intel. This had the effect of "tying customers to the undertaking in a dominant position". However, the fine was annulled on the ground that the Commission had not adequately proven an anti-competitive effect, so in 2023 the Commission imposed a smaller €376 million fine. Second, the Merger Regulation 2004 applies to "concentrations" (any merger or acquisition) that generally have a value of at least €100 million turnover in the EU, and prohibits them if they "would significantly impede effective competition" by creating or strengthening a dominant position. While mergers between direct ("horizontal") competitors are carefully scrutinised upon mandatory notification to the Commission, vertical or conglomerate mergers are often allowed where a competitor is not removed. This has led to increasingly large business groups, with ever greater power.
Third, Article 101 of the TFEU prohibits cartels or collusive practices, including competitors engaging in (a) price fixing, (b) limiting production, (c) sharing markets, (d) applying dissimilar conditions to equivalent transactions, and (e) making contracts subject to unconnected obligations. According to Article 101(2) any such agreements between undertakings are automatically void. Article 101(3) establishes exemptions, if the collusion is for distributional or technological innovation, gives consumers a "fair share" of the benefit and does not include unreasonable restraints that risk eliminating competition anywhere. For example, in Parker ITR Srl v Commission eleven corporations that manufactured marine hoses for offshore oil rigs were fined €131 million for rigging bids and sharing markets worldwide - they would designate a "bid champion" in each case to raise prices. Secret cartels are often hard to prove, so the courts allow competition regulators to establish collusion where there is no other plausible explanation for price rises. Some agreements among businesses, however, can be highly beneficial. For instance, in a decision on the Conseil Européen de la Construction d'Appareils Domestiques, the Commission held that an agreement among washing machine makers to phase out production of low-efficiency machines was lawful, especially since it would lead to "reduced pollution from electricity generation". Fourth, TFEU article 106(1) requires that the state may not grant special or exclusive rights for undertakings that distort competition, and article 106(2) states that competition law applies to services of general economic interest, unless it obstructs their tasks in law or fact (e.g. in providing public services). Under TFEU article 107(1) no state aid that distorts competition is allowed, but aid is allowed under article 107(2) for individual consumers, without discrimination, and under article 107(3) for economic development, particularly to tackle underemployment. The Procurement Directive 2014/24/EU, on government procurement in the EU, sets standards for open tenders when outsourcing public services to private companies.
Commerce and intellectual property
While EU law has not yet developed a civil code for contracts, torts, unjust enrichment, real or personal property, or commerce in general, European legal scholars have drafted common principles, including the Principles of European Contract Law and the Principles of European Tort Law, that are common to member states. In the absence of harmonisation, there is a comprehensive system of conflict of laws to settle the jurisdiction of courts, and the applicable law, for most commercial disputes. The Brussels I Regulation 2012 determines the jurisdiction of courts depending upon where a person is domiciled or has operations. The applicable law for consensual obligations is then determined by the Rome I Regulation, where article 3 states the principle that a choice of law can be made expressly in a contract, unless this affects provisions that cannot be derogated from, such as employment, consumer, tenancy or other rights. The Rome II Regulation determines the applicable law in the case of non-consensual obligations, such as torts and unjust enrichment. Under article 4 the general rule is that the law applies where "the damage occurred", although under article 7 in the case of "environmental damage or damage sustained by persons or property as a result" there is a choice to bring an action under the law of the tortfeasor. Unlike other property forms, intellectual property rights are comprehensively regulated by a series of directives on copyrights, patents and trademarks. The Copyright Term Directive 2006 article 1 states the principle that copyrights last for 70 years after the death of the author. The Copyright and Information Society Directive 2001 was passed to regulate copyright over the internet, and the effect of article 5 is that internet service providers are not liable for data they transmit even if it infringes copyright. However, under article 6, member states must give "adequate legal protection" for copyrights. The Resale Rights Directive creates a right to royalties for authors where works are resold. The Enforcement Directive requires that member states have effective remedies and legal processes. Under the European Patent Convention, which is separate from the EU, the general patent term is 20 years from the date that a patent (of an invention) is filed with an official register, and the development of an EU patent attempts to harmonise standards around these norms. The Trade Marks Directive enables a common system of trade mark registration so that, with exceptions, a registered trade mark applies across all EU member states.
Public regulation
A major part of EU law, and most of the EU's budget, concerns public regulation of enterprise and public services. A basic norm of the Treaty on the Functioning of the EU, in article 345, is that the "Treaties shall in no way prejudice the rules in Member States governing the system of property ownership", meaning the EU remains neutral between private or public ownership, but that it can require common standards. In the cases of education and health, member states generally organise public services and the EU requires free movement. There is a unified European Central Bank that funds private banks, and adopts a common monetary policy for price stability, employment and sustainability. The EU's policies on energy, agriculture and forestry, transport and buildings are crucial to end climate damage and shift completely to clean energy that does not heat the planet. Among these, 33% of the entire EU budget is spent on agricultural subsidies to farm corporations and owners. The EU also has an increasing number of policies to raise standards for communications, the internet, data protection, and online media. It has limited involvement in the military and security, but maintains a Common Foreign and Security Policy.
Education and health
Education and health are provided mainly by member states, but shaped by common minimum standards in EU law. In the case of education, the European Social Charter, like the Universal Declaration and the International Bill of Human Rights, says that "everyone" has the right to education, and that primary, secondary and higher education should be made "free", for instance "by reducing or abolishing any fees or charges" and "granting financial assistance". While education was historically confined to a wealthy elite, today most member states have tuition-free university. There are no common rules for university finance or governance, although there is a right to free movement and universities have voluntarily harmonised standards. In 1987, the Erasmus Programme was created to fund students to study in other countries, with a budget of €30 billion from 2021 to 2027. From 1999, the Bologna Declaration and Process led to the creation of the European Higher Education Area, where member state universities adopted a common degree structure (bachelor, master, and doctoral degree) with a goal to have similar expectations in learning outcomes. Member states may not impose different fees on students from other member states or limit their numbers, and this appears to have worked even without a system for countries to reimburse one another if costs differ widely. However, if member states have grants or student loans, R (Bidar) v London Borough of Ealing held there may be a minimum residency requirement, such as three years. Most of the world's best universities enable majority staff representation, and significant alumni or student voice, in university governance. For instance, the French Education Code requires that universities have a board of management with 24 to 36 members, with 8 to 16 elected by professors, 4 to 6 by non-academic staff, 4 to 6 by students, and 8 external members, and have an academic council elected by staff with powers to set important rules, such as on training or examinations. Secondary, primary, and pre-school education are generally free from fees. More successful school systems tend to be well-funded and public, and do not have barriers to children based on wealth, such as private fees for school. Most schools enable staff and parents to vote for representatives on their children's school governing bodies.
As in education, there is a universal human right to ‘health and well-being’ including ‘medical care and necessary social services’, although human rights law does not say what the best system of health governance is. Among EU member states there are two main traditions of health care provision, based on public service or insurance. First, healthcare may be seen as a public service free at the point of use, with hospitals largely owned by the public health service and doctors publicly funded (the "Beveridge model"). This is the system, for example, in Finland, Sweden, Denmark, Spain, Italy, Portugal, Greece or Ireland. Second, healthcare can be provided through insurance, where hospitals and doctors are owned and run separately from the service provider (the "Bismarck model"). There is a large spectrum between systems based mainly on public insurance, usually creating public option hospitals or requiring no profit (such as France, Belgium, Luxembourg, Slovenia, the Czech Republic or Estonia), and those that allow substantial private and profit-making insurance and hospital or doctor provision (the Netherlands and Germany). In all cases, health coverage is universal, and subsidised or free wherever people cannot afford insurance premiums, unlike the notorious case of the United States, which still does not have universal healthcare. Healthcare outcomes vary greatly between different systems: while there is generally higher life expectancy with more investment, healthcare tends to have worse outcomes and cost more where there is more private business or profit. Under the Treaty on the Functioning of the European Union article 56 there is the right to receive services, with rules codified in the Patients’ Rights Directive 2011. Article 4 requires that people are treated, article 5 requires reimbursement of costs by the person's country of origin, and article 6 requires national contact points to connect healthcare providers or insurers and patient organisations, but under article 8 member states may require prior authorisation for people to travel abroad for treatment where the costs are high or planning is needed. A European Health Insurance Card is also available for free to receive health care across the EU. This system was developed after R (Watts) v Bedford Primary Care Trust, where in 2003 Mrs Watts travelled from the UK to France, paid £3900 for a hip replacement operation, and claimed she should be reimbursed. The UK's National Health Service waiting lists were 4 to 6 months at the time. The Court of Justice's Grand Chamber held that health care counted as a ‘service’ under TFEU article 56, and that in principle there was a right to receive those services abroad. However, high demand could justify waiting lists in a national health system, but the individual circumstances of the patient had to be assessed. For non-EU nationals, the European Court of Human Rights held in N v United Kingdom that it was not inhuman and degrading treatment contrary to ECHR article 3 to deport someone to a country where they were unlikely to live longer than two years without treatment. There is no duty ‘through provision of free and unlimited health care to all aliens without a right to stay within its jurisdiction' to avoid 'too great a burden on the Contracting States.’ However, if someone's death would be imminent, the European Court of Human Rights has held that a decision to remove them would violate ECHR article 3.
Banking, monetary and fiscal policy
Banking, monetary and fiscal policy is overseen by the European Central Bank, member states, and the EU Commission. This is vital for European society as it affects the human rights to full employment, to fair wages, to housing, and to an adequate standard of living. When the Eurozone and the common currency of the euro were established, there was no political agreement to develop a full EU fiscal policy (i.e. tax and spending), under which governments would pool money and lend to countries in trouble, because it was thought that wealthier member states should not have to subsidise poorer member states. However, a common central bank was planned, which would aim to have common interest rates. The ECB, based in Frankfurt, controls the monetary policy that underpins the euro. Member states also have central banks (such as the Bundesbank, Banque de France, Banco de España), and these 19 Eurozone member state central banks have a duty to act compatibly with ECB policy. The ECB has an executive board with a president, vice president and four other members, all appointed by the European Council by qualified majority, after consulting the European Parliament and the Governing Council of the ECB. The Governing Council is made up of the ECB executive board and the member state central banks using the euro; they have 8-year terms and can be removed only for gross misconduct.
The European Central Bank's ‘primary objective... shall be to maintain price stability. Without prejudice to that objective, it shall support the general economic policies in the Union’, such as ‘balanced economic growth and price stability, a highly competitive social market economy, aiming at full employment and social progress, and a high level of protection and improvement of the quality of the environment.’ There are three main powers to achieve these goals. First, the ECB can require other banks to hold reserves proportionate to their type of lending. Second, it may lend money to other banks, or conduct 'credit operations'. Third, it may ‘operate in the financial markets by buying and selling’ securities. For example, in Gauweiler v Deutscher Bundestag a German politician claimed that the ECB's purchase of Greek government debt on secondary markets violated TFEU article 123, which prohibits directly lending money to member state governments. The Court of Justice rejected the argument that the ECB had engaged in 'economic policy' (i.e. fiscal transfers) rather than monetary policy, which it was allowed to conduct. So far the ECB has failed to use these powers to eliminate investment in fossil fuels, despite the inflation that gas, oil and coal cause given their price volatility in international markets.
Beyond the central bank, the Credit Institutions Directive 2013 requires authorisation and prudence provisions for other banks in all EU member states. Under the Basel III programme, created by an international banker group, banks must hold more money in reserves based on the risk profile of the assets they hold, as determined by the member state regulator. More risky assets require more reserves, and the Capital Requirements Regulation 2013 codifies these standards, for instance by mandating that proportionally less in reserves is needed if more government debt is held, but more if mortgage-backed securities are held. To guard against the risk of bank runs, the Deposit Guarantee Directive 2014 creates an EU-wide minimum guarantee of €100,000 for bank deposits, so that if anyone's bank goes insolvent, the state will pay the deposit up to this amount. There are not yet rules requiring higher reserving and accounting practices for climate risk, or for the possibility that gas, oil or coal reserves become worthless as Europe replaces fossil fuels with renewable energy.
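As a rough illustration of how risk-weighted capital requirements of this kind operate, the sketch below applies illustrative risk weights and a stylised 8% minimum ratio to a small portfolio. The specific weights (0% for government debt, 35% for residential mortgage exposures, 100% for corporate loans) and the ratio are simplified figures of the standardised approach, used here only as assumptions; the Capital Requirements Regulation 2013 sets out far more detailed asset classes, conditions and buffers.

```python
# Stylised sketch of risk-weighted capital: figures below are illustrative
# assumptions, not the Regulation's full standardised-approach tables.

ILLUSTRATIVE_RISK_WEIGHTS = {
    "eu_government_debt": 0.00,    # government debt attracts a lower weight
    "residential_mortgage": 0.35,  # mortgage exposures weigh more
    "corporate_loan": 1.00,
}

MINIMUM_CAPITAL_RATIO = 0.08  # stylised 8% of risk-weighted assets

def required_capital(exposures: dict[str, float]) -> float:
    """Return the stylised minimum capital (EUR) for a portfolio of exposures."""
    risk_weighted_assets = sum(
        amount * ILLUSTRATIVE_RISK_WEIGHTS[asset_class]
        for asset_class, amount in exposures.items()
    )
    return MINIMUM_CAPITAL_RATIO * risk_weighted_assets

# A bank holding 100m of government debt, 200m of mortgages and 50m of
# corporate loans has 0 + 70m + 50m = 120m of risk-weighted assets,
# so it needs 9.6m of capital under these illustrative figures.
print(required_capital({
    "eu_government_debt": 100_000_000,
    "residential_mortgage": 200_000_000,
    "corporate_loan": 50_000_000,
}))  # 9600000.0
```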
The budget of the European Union is set in 7-year cycles, and in 2022 around €170bn was spent, of which nearly one-third was agricultural policy, including regional development. EU member state government expenditures are far higher as a proportion of Gross Domestic Product, but are constrained by the Fiscal Compact, which requires no more than a 3% budget deficit compared to GDP in any given year, and aiming for surpluses or balanced budgets. As a result of the Eurozone crisis, a Treaty establishing the European Stability Mechanism created a fund to assist countries with severe fiscal problems. The results of the "strict conditionality" attached to loans (or so-called structural adjustment), which required privatisation and cuts to welfare and wages in Greece, Spain, Portugal or Ireland, were particularly negative. The EU's main metric for economic performance has been GDP, which adds up market exchange values in firm accounts and government expenditures according to the Gross National Income Regulation 2019, even though this fails to discount polluting and harmful economic activities, such as energy and industry that damage the climate, the environment and human health. The EU's budget mainly comes from contributions of around 0.7% of GDP per member state, as well as a share of EU value added tax and customs duties. The EU does not yet have a more comprehensive system for preventing tax evasion, or for fair taxation of multinational or financial corporations.
Electricity and energy
Like the rest of the world, the EU's greatest task is to replace fossil fuels with clean energy as fast as technology allows, since protection of "life" and "improvement of the quality of the environment" are fundamental rights and the highest policy goals of the EU. In international law, there is also ‘the inherent right of all peoples to enjoy and utilize fully and freely their natural wealth and resources’, such as clean air, and the right to ‘the benefits of scientific progress’, such as clean energy. The EU's overall targets are to reduce toxic greenhouse gas emissions by 50-55% by 2030, to be carbon neutral or negative by 2050, and to reach 32% renewable energy by 2030, though a 45% renewable target for 2030 was proposed by the Commission and backed by Parliament in 2022. Since the 2022 Russian invasion of Ukraine the EU aims to eliminate Russian fossil fuel imports as fast as possible. However, laws such as the Hydrocarbons Directive 1994 still enable gas and oil extraction. The Directive requires that licences are awarded based on technical and financial capability, methods, price, and previous conduct, that applicants are treated equally by objective and non-discriminatory criteria, and that advertisements for tenders are public. It has not yet required that existing licensees pay for the pollution and climate damage they have caused, nor sought to end extraction of gas and oil.
A growing number of cases seek to enforce liability on gas, oil and coal polluters. In Friends of the Earth v Royal Dutch Shell plc, the Hague District Court held that Shell was bound by the tort provisions of the Dutch Civil Code, Book 6, section 162(2), interpreted in light of the Paris Agreement 2015 article 2(1) and ECHR articles 2 and 8 (rights to life and home), to immediately start cutting all of its emissions by 45% by 2030, whether generated directly by its corporate group (scope 1), indirectly from its purchases (scope 2), or indirectly from its value chain or the purchase and use of its products (scope 3). It emphasised that the ‘serious and irreversible consequences of dangerous climate change in the Netherlands... pose a threat to the human rights of Dutch residents’. After this loss, Shell dropped "Royal Dutch" from its name, and moved its headquarters to London. In Lliuya v RWE AG Mr Lliuya, who lives in Huaraz, Peru, has claimed that RWE AG should pay 0.47% of the costs of flood defences against a melting mountain glacier that increases the size of Lake Palcacocha, because RWE is responsible for 0.47% of historic global greenhouse gas emissions. The Higher Regional Court of Essen gave leave to appeal on whether there is causation of damage, and in 2022 visited the lake. There has also been heightened responsibility on member state governments. In Urgenda v State of Netherlands the Dutch Supreme Court held that the Dutch government must reduce greenhouse gas emissions by 25% before 2020, following the IPCC 2007 minimum recommendations, and that failure to do so would violate the right to life and home in ECHR articles 2 and 8. In the Klimaschutz case, the German Constitutional Court held that the German government must speed up its climate protection measures to protect the rights to life and the environment under the Grundgesetz 1949, articles 2 and 20a. However, the EU and member states have so far failed to codify liability to prevent pollution and climate damage by corporations that profit, and the EU Emissions Trading System has failed to adequately price carbon for the damage it causes (prices traded under €98 per metric ton until the end of 2022).
As clean energy from wind, solar or hydro storage replaces pollution from gas, oil and coal, EU law sets standards for generation and distribution networks. First, in generation, the Renewable Energy Directive 2018 still enables biomass and biofuel to count toward "renewable" energy statistics, based on the argument that trees or plants absorb greenhouse gases when they grow, even though biomass burning (usually in ex-coal plants) releases more greenhouse gases than coal, biomass transport is not clean, forests take decades to replenish, and smoke damages human health. Second, the EU does not yet have a feed-in tariff system requiring energy grids and retailers to pay a fair price to households or businesses with solar or wind generation. However, in PreussenElektra AG v Schleswag AG the Court of Justice held that member states could fix any price they chose, so that energy companies would have to reimburse producers for the energy they received. A company now owned by E.ON claimed the feed-in tariff was state aid under TFEU article 107, and should have to pass the rules for exemption, as a way to hinder renewable energy funding. The Court rejected this, because although the policy might have ‘negative repercussions’ for big energy companies it ‘cannot be regarded’ as giving to small producers ‘a particular advantage at the expense of the state’.
The third main set of standards is that the EU requires electricity or gas enterprises to acquire a licence from member state authorities. There must be legal separation of network owners from retailers into different entities, although they can be owned by the same enterprise, to ensure transparency of accounting. Then, different enterprises have rights to access the infrastructure of network owners on fair and transparent terms, as a way to ensure different member state networks and supplies can become integrated across the EU. Most EU operators are publicly owned, and the Court of Justice in Netherlands v Essent NV emphatically rejected that there was any violation of EU law on free movement of capital by a Dutch Act that required electricity and gas distributors to be publicly owned, prohibited system operators from being connected by ownership to generators, and limited their level of debt.
The Court of Justice held a public ownership requirement was justified by ‘overriding reasons in the public interest’, ‘to protect consumers’ and for the ‘security of energy supply’. It further pointed to the foundational case of Costa v ENEL, where the Court held in 1964 that the treaties do ‘not prohibit the creation of any state monopolies’ so long as they do not operate commercially and discriminate. The approach of EU law is that even where energy companies are privatised, they still are subject to the same rules as the state on direct effect, because it remains that they are ‘providing a public service’. The evidence suggests "consumers pay lower electricity net-of-tax prices in countries where there are still incumbents owned by national governments." With the sharp rise in fossil fuel prices that came from the 2022 Russian invasion of Ukraine and the fossil fuel cartel OPEC deciding to restrict supply, the EU Commission proposed a windfall fossil fuel tax. There are not yet common standards on energy enterprise governance, although a number of member states ensure that workers and energy bill payers have the right to vote for directors.
Agriculture, forestry and water
Everyone has the right to food and water, and under the Charter of Fundamental Rights of the EU "the improvement of the quality of the environment must be integrated into the policies of the Union". The Common Agricultural Policy's origins lay in ensuring that all farm workers had fair wages and everyone had food, since in 1960 a third of employment and a fifth of GDP was in agriculture, and after WW2 Europe had been on the brink of starvation. In 2020, the agricultural workforce was 4.2% of the EU total. The CAP's objectives are still to increase production, to ensure "a fair standard of living for the agricultural community", to stabilise markets and supplies, and to secure "reasonable prices" for consumers. In 2021, the CAP was 33.1% of the entire EU budget, at €55.1 billion. However, there are no requirements for subsidies to be used so that farm workers (as opposed to owners) have fair pay scales, few requirements for rural development, and minimal standards for environmental improvement.
The CAP has three main parts. First, the European Agricultural Guarantee Fund distributes ‘direct payments’, which are 70.9% of the CAP budget. The Direct Payments Regulation 2013 gives payments to an ‘active farmer’ that carries out agricultural activity, grazing or cultivation, does not operate airports, rail, waterworks, real estate, sport or recreation grounds, and has the land at their disposal. The farm must have at least 1 hectare and receive €100 for each, though member states can set higher thresholds (e.g. 5 hectares and €200). If payments exceed €150,000, the amount above that threshold is reduced by at least 5%. This favours large farm corporations, and the largest 1% typically receive around 10 to 15% of all subsidies in member states. As conditions of receiving subsidies, farms can be required to keep land in good condition, for public, animal, and plant health, and maintain environment standards. For minimal biodiversity, farmers must have at least two crops if they have over 10 hectares, not farm at least 5% of land intensively (an ‘ecological focus area’) if they have over 15 hectares, and have three crops over 30 hectares. Environmentally sensitive grasslands, as designated by the Habitats Directive 1992 and the Wild Birds Directive 2009, should not have more than 5% converted into agricultural area. The second main part, also carried out by the EAGF, is ‘market measures’. Under the Agricultural Products Regulation 2013 certain crops and meat are eligible for purchase by member state authorities, to be ‘stored by them until disposed of’, with extra aid for storage. The goal of this is to restrict supply and therefore raise prices, particularly in response to unexpected drops in demand, a health scare, or international market volatility. In 2018, this was 4.59% of the CAP budget. The benefits of many of these subsidies go to the parties in the food supply chains with most bargaining power, which is usually supermarkets. The Agricultural Unfair Trading Practices Directive 2019 article 3 prohibits practices such as late payments by buyers of food to suppliers, cancellations at short notice, unilateral alteration of terms, threats of commercial retaliation, and payments by suppliers to the buyers (i.e. from farmers to supermarkets) for stocking, adverts, marketing or staff. These rules limit supermarkets’ abuse of a dominant position but do not ensure subsidies reach farm communities. The Food Safety Regulation 2002 article 14 requires that food is not placed on the market if it is ‘injurious to health’ or is ‘unfit for human consumption’, but there is no requirement that supermarkets or others eliminate harmful packaging such as plastic. The third main part, administered by the European Agricultural Fund for Rural Development, is ‘rural development’ payments, which are 24.4% of the CAP budget. Following the ‘Europe 2020 Strategy by promoting sustainable rural development’, payments are made for knowledge transfer, advice, asset investment, and business development aid. Priorities may include improving water and energy use. The courts give the EU a wide discretion to implement policy, so judicial review is possible only if agricultural measures are ‘manifestly inappropriate’. EU law does not yet have a systematic plan or subsidies to rewild depleted environments, and to move to complete clean energy infrastructure.
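As a rough illustration of how the per-hectare payment and the reduction above €150,000 described above interact, the following Python sketch computes a single farm's basic payment. The flat €100 per hectare rate, the 1 hectare minimum and the 5% reduction applied only to the amount above €150,000 follow the simplified description in this section; real CAP payments involve many further components (greening, young farmer and coupled support), so this is illustrative only.

def direct_payment(hectares, rate_per_ha=100.0, min_hectares=1.0,
                   threshold=150_000.0, reduction=0.05):
    """Simplified CAP direct payment estimate (illustrative only)."""
    if hectares < min_hectares:
        return 0.0  # below the minimum farm size, no payment
    gross = hectares * rate_per_ha
    if gross > threshold:
        # only the amount above the threshold is reduced by 5%
        gross = threshold + (gross - threshold) * (1 - reduction)
    return gross

# A hypothetical 2,000 hectare farm: €200,000 gross, reduced to €197,500
print(direct_payment(2000))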
Outside farms, forests cover just 43.52% of the EU's land, compared to 80% forest cover historically across Europe. There is no requirement yet to undertake any reforesting or rewilding of land, while the Land Use and Forestry Directive 2018 merely requires that member states keep accounts of land use and forestry changes based on greenhouse gas emissions, and that emissions do not exceed removals of greenhouse gases. Globally, the Timber Regulation 2010 requires that all timber traders know their supply chains and keep records for 5 years, to ensure that any illegally harvested timber is banned in the EU, however there is not yet any ban on imports of goods (such as beef or palm oil) from countries that continue to deforest their landscape. For water resources, in nature or for drinking, the Water Framework Directive 2000 sets common standards and provides that member states should oversee water industry standards. The Drinking Water Quality Directive 2020 requires water that is "wholesome and clean", and article 4 defines this as free from micro-organisms and parasites dangerous to health, and compliant with chemical and biological standards in Annex I. The Bathing Waters Directive 2006 sets standards for quality of bathing waters, namely rivers and beaches, to be free from toxic waste or sewage. There must be adequate remedies for breaches, so in Commission v United Kingdom (1992) it was held that the UK's approach of accepting undertakings from water companies to behave better in future, instead of using enforcement orders, was inadequate to comply with EU law. Fines can be and often are significant, ranging into hundreds of thousands or millions of euros for breach.
Transport and buildings
Clean road, rail, sea and air transport are fundamental goals of the EU, given its commitment to human rights for 'improvement of the quality of the environment', 'services of general economic interest', and the right to 'the benefits of scientific progress'. However, the pace of reform is slow compared to the urgency of reversing global heating. The Renewable Energy Directive 2018 article 25 requires that final energy consumption in transport in each member state is ‘at least 14%’ renewable by 2030. This is within the 2030 target for 32% "share of energy from renewable sources in the Union's gross final consumption of energy". In 2022, the EU promised to ban sale of new petrol and diesel vehicles only by 2035, enabling manufacturing corporations to profit from toxic emissions for another 13 years, though many member states have higher standards. There is not yet a plan for full rail electrification, or clean shipping or air travel, even where technology exists.
In road transport, the Emission Performance Regulation 2019 says manufacturers of "new passenger cars" should not allow emissions to exceed 95 grams of CO2 per kilometre, and 147 grams of CO2 per kilometre for new light commercial vehicles, but this is merely an "EU fleet-wide target" rather than a requirement for each vehicle. Manufacturers can agree to pool their production quotas, so as to meet their targets on average, but there is no legal sanction for failure to meet the target. Member states are simply required to record the relevant success or failure, and manufacturers’ performance is published. By contrast the Vehicles Emissions Regulation 2007 sets the "Euro 6" standards for the maximum emissions that new cars may have. Since the 'Euro 1' standard was introduced in 1992, standards became cleaner every 4 to 5 years, but recently stalled. Article 2 states this applies to vehicles under 2,610 kilograms, while the Heavy Vehicle Emission Regulation 2019 applies to heavier vehicles, with looser CO2 limits. Article 4 states manufacturers must ‘demonstrate that all new vehicles sold, registered or put into service in the Community are type approved in accordance with this Regulation’. Article 6 requires manufacturers to ‘provide unrestricted and standardised access to vehicle repair and maintenance information’. Article 13 requires that penalties imposed by member states for any non-compliance are ‘effective, proportionate and dissuasive’, and breaches include any ‘false declarations’ as well as ‘use of defeat devices’. This reference follows the "Dieselgate" scandal where Volkswagen and manufacturers around Europe and the world fraudulently concealed their true emissions. In 2007, Commission v Germany held that the German Volkswagen Act 1959 violated free movement of capital in TFEU article 63 by ensuring that the state of Lower Saxony had a golden share to exercise public control over the company's governance, and by limiting the voting rights of individual shareholders to 20% of the company. The German government's justification that the restrictions were an overriding public interest, for instance, to protect workers, was rejected. A justification for environmental protection was not offered. After this, the Porsche family dominated Volkswagen, and in 2007 a new CEO, Martin Winterkorn, took up his post and aimed in ‘Strategie 2018’ for Volkswagen to become the world's largest auto-manufacturer, and the company began to install cheat devices.
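The fleet-wide averaging (rather than a per-vehicle limit) can be shown with a short Python sketch. The unweighted average and the example figures are assumptions for illustration only; the Regulation in fact adjusts each manufacturer's specific target by average vehicle mass and allows formal pooling agreements.

def fleet_average_co2(registrations):
    """registrations: list of (co2_g_per_km, units_sold) tuples for one pool."""
    total_units = sum(units for _, units in registrations)
    total_emissions = sum(co2 * units for co2, units in registrations)
    return total_emissions / total_units

# Two hypothetical manufacturers pooling their new passenger cars
pool = [(120.0, 50_000), (70.0, 60_000)]
average = fleet_average_co2(pool)
print(f"Pool average: {average:.1f} g/km; 95 g/km target met: {average <= 95.0}")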
People need to have a driving licence to drive on a road, and there is a common system of recognition around the EU. For delivery vehicle workers, the Road Transport Regulation 2006 limits driving time to 9 hours a day and a maximum of 56 hours a week, and requires at least a 45 minute break after 4 and a half hours. Drivers may also not be paid according to distance travelled if this would endanger road safety. Taxi enterprises are usually regulated separately in each member state, and the attempts of the app-based firm Uber to evade regulation by arguing it was not a "transport service" but rather an "Information Society Service" failed. Most bus networks are publicly owned or procured, but there are common rights. If buses are delayed in journeys over 250 kilometres, the Bus Passenger Rights Regulation 2011 entitles passengers to compensation. Under article 19, a delay over two hours must result in compensation of 50% of the ticket price, as well as rerouting and reimbursement. Article 6 says ‘Carriers may offer contract conditions that are more favourable for the passenger’, although it is not clear many take up this option. Article 7 says member states cannot set maximum compensation for death or injury lower than €220,000 per passenger or €1200 per item of luggage. There is not yet a requirement for the major bus, delivery, or taxi enterprises to electrify their fleets even though this would create the fastest reduction of emissions and would be cheaper for business in total operating costs.
In rail transport, the Single European Railway Directive 2012 requires that ownership of tracks and operating companies are separated to prevent conflicts of interest and pricing, particularly to ensure that trains can run from one member state to another. Most European railways are publicly owned, and each train enterprise must have separate accounts and member states should run railways ‘at the lowest possible cost for the quality of service required’. The Rail Passenger Rights Regulation 2007 article 17 states that 25% of a ticket price should be refunded if there is a one hour delay, and 50% over two hours, with a threshold of €4 to claim. Passengers have a right to take bicycles on trains where they are not overcrowded, there must be clear information on tickets, and there are rights to make reservations. Finally, in air transport, under the Flight Compensation Regulation (EC) No 261/2004 there is a minimum right of €250 compensation for a 2 hour delay on flights up to 1500 km, €400 compensation for a delay of 3 hours or more on a 1500–3500 km flight, and €600 for 4 hours on flights over 3500 km, plus the right to refreshments, hotels, and alternative transport. There are not yet duties on airline companies to invest in research for clean fuels, and to eliminate unnecessary flight paths when clean land transport alternatives (such as high-speed rail) exist.
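The compensation bands can be expressed as a small Python sketch following the summary above; the delay thresholds, distance bands and the many exceptions in the Regulation and its case law are more detailed than this illustration.

def flight_compensation(distance_km, delay_hours):
    """Rough compensation under Regulation (EC) No 261/2004 (illustrative only)."""
    if distance_km <= 1500 and delay_hours >= 2:
        return 250
    if distance_km <= 3500 and delay_hours >= 3:
        return 400
    if distance_km > 3500 and delay_hours >= 4:
        return 600
    return 0

print(flight_compensation(1200, 2.5))   # 250
print(flight_compensation(2800, 3.5))   # 400
print(flight_compensation(6000, 5.0))   # 600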
Finally, the 'right to housing assistance' is a basic part of EU law. House prices are affected by monetary policy (above), but otherwise the EU's involvement is so far limited to minimal environmental standards. The Energy Performance of Buildings Directive 2010 aims to eliminate unclean materials and energy waste to have "nearly zero-energy buildings", particularly by setting standards for new buildings since 2020 and upgrading existing buildings by 2050. There is, however, no requirement yet that all buildings replace gas heating with electric or heat-pumps, have solar or wind energy generation, electric vehicle charging, and particular insulation standards, wherever possible.
Communications and data
The right ‘to seek, receive and impart information and ideas of all kinds, regardless of frontiers’ is a basic part of freedom of expression, as much as the right against ‘arbitrary or unlawful interference with [our] privacy, family, home or correspondence’, whether interference is by business, government or anyone else. Communication networks, from the post to telephone lines to the internet, are crucial for friends, families, business and government, and EU law sets standards for their construction and use. For example, the Postal Services Directive 1997 article 3 requires 'universal service' at minimum standards by the main postal provider. For mobile phone access anywhere in the EU, the Roaming Regulation 2022 eliminates extra charges for mobile calls, texts and data when abroad in other member states, and wholesale charges must be fair. To ensure internet service providers do not slow speeds for some websites to gain unfair profit, the Net Neutrality Regulation 2015 states providers 'of internet access services shall treat all traffic equally’ but this shall not prevent ‘reasonable traffic management measures’.
Since today's communications have mostly merged into the internet, the Electronic Communications Code Directive 2018 is critical for EU infrastructure. Article 5 requires that a member state regulator or a "competent authority" is set up that will license use of the radio spectrum, through which mobile and internet signals travel. A regulator must also enable access and interconnection to other infrastructure (such as telecoms and broadband cables), protect end-user rights, and monitor "competition issues regarding open internet access" to ensure rights such as universal service and portability of phone numbers. Articles 6-8 require that the regulators are independent, with dismissal of heads only for a good reason, and articles 10-11 require cooperation with other authorities. Articles 12-13 require that use of electronic communication networks is authorised by a regulator, and that conditions attached are non-discriminatory, proportionate and transparent. The owner of a communication network has duties to allow access and interconnection on fair terms, and so article 17 requires that its accounts and financial reports are separate from other activities (if the enterprise does other business), article 74 foresees that regulators can control prices, and article 84 says member states should "ensure that all consumers in their territories have access at an affordable price, in light of specific national conditions, to an available adequate broadband internet access service and to voice communications services". While some EU member states have privatised all, and some part, of their telecoms infrastructure, publicly or community-owned internet providers (such as in Denmark or Romania) tend to have the fastest web speeds.
Historically to protect people's privacy and correspondence, the post banned tampering with letters, and excluded post offices from responsibility for letters even if the contents were for something illegal. As the internet developed, the original Information Society Directive 1998 aimed for something similar, so that internet service providers or email hosts, for instance, protected privacy. After this the Electronic Commerce Directive 2000 also sought to ensure free movement for an "information society service", requiring member states to not restrict them unless it was to fulfill a public policy, prevent crime, fight incitement to hatred, protect individual dignity, protect health, or protect consumers or investors. Articles 12 to 14 further said that an ISS operating as a "mere conduit" for information, doing "caching" or "hosting" is ‘not liable for information stored’ if the ‘provider does not have actual knowledge of illegal activity’ and ‘is not aware of facts or circumstances from which the illegal activity or information is apparent’, but must act quickly to remove or disable access 'upon obtaining knowledge or awareness'. Article 15 states that member states should ‘not impose a general obligation on providers... to monitor the information which they transmit or store’ nor ‘seek facts’ on illegality. However the meaning of who is an "ISS" was not clearly defined in law, and has become a problem with social media, which was not meant to be protected like private communication. An internet service provider has been held to be an ISS, and so has a Wi-Fi host. The Electronic Commerce Directive 2000 recital 11 states that email services, search engines, data storage, and streaming are information society services, while an individual email is not, and the Information Society Directive 2015 makes clear that TV and radio stations do not count as ISSs. None of these definitions include advertising, which is never "at the request of a recipient of services" as the 2015 Directive requires; however, various cases have decided that eBay, Facebook, and Airbnb may count as ISSs, but the cab app Uber does not.

The main rights to data privacy are found in the General Data Protection Regulation 2016. First, there is the right to have data about someone processed only with their 'consent', or based on other justifiable grounds, such as a lawful purpose. It has been held that consent is not given if there is ‘a pre-checked checkbox which the user must deselect to refuse’. Under the Privacy and Electronic Communications Directive 2002 a well-known result is that websites must not install "cookies" into someone's internet browser unless they positively accept cookies. The EU has not yet simply enabled people to block all cookies within a browser, and required that websites give people this option without thousands of annoying buttons to click. Second, people have the right to be informed about data kept on them. Third, there is a right to be forgotten and for the data to be deleted. Where legal standards do not exist, Alphabet, Facebook or Microsoft have largely been uncontrolled in privacy invasion, for instance, Gmail pioneering surveillance of emails for ads as its first business model, and Facebook abolishing service-user voting rights over changes to its privacy policies in 2012.
There are no rights yet in EU law for service-users to vote for representatives on boards of big tech companies that take their data, or to have decision-rights over use of their data, in contrast to the rights of service-users of websites like Wikipedia.
Media and markets
Pluralism and regulation of the media, such as through ‘the licensing of broadcasting, television or cinema enterprises’, have long been seen as essential to protect freedom of opinion and expression, to ensure that citizens have a more equal voice, and ultimately to support the universal ‘right to take part in the government’. In almost all member states there is a well-funded, public and independent broadcaster for TV and radio, and there are common standards for all TV and radio, which are designed to support open, fact-based discussion and deliberative democracy. However, the same standards have not yet been applied to equivalent internet television, radio or "social media" such as the platforms controlled by YouTube (owned by Alphabet), Facebook or Instagram (owned by Meta), or Twitter (owned by Elon Musk), all of which have spread conspiracy theories, discrimination, far-right, extremist, terrorist, and hostile military content.
General standards for broadcasting are found in the Audiovisual Media Services Directive 2010. It defines an audiovisual media service to mean one ‘devoted to providing programmes, under the editorial responsibility of a media service provider, in order to inform, entertain or educate, to the general public by electronic communications networks’, either on TV or an ‘on-demand’ service. An ‘on-demand’ service involves ‘viewing of programmes at the moment chosen by the user and at his individual request on the basis of a catalogue of programmes selected by the media service provider’. Member states must ensure audiovisual services ‘do not contain any incitement to hatred’ based on race, sex, religion, nationality or other protected characteristics. Article 9 prohibits media with ‘surreptitious’ communication or ‘subliminal’ techniques, that ‘prejudice respect for human dignity’, that would ‘promote any discrimination’, that prejudice health and safety or ‘encourage behaviour grossly prejudicial to the protection of the environment’. Social media on Facebook, YouTube or Twitter may be thought to be exempt as they lack ‘editorial responsibility’; however, each uses algorithms to exert ‘effective control’ and profit from the arrangement of media. After 2018 new provisions on "video-sharing platform service" providers were introduced, with duties on member states to ensure under article 28b that video-sharing platform providers protect (a) minors from content that "may impair their physical, mental or moral development", (b) the general public from content "containing incitement to violence or hatred", and (c) the general public from content whose dissemination is criminal in EU law, such as terrorism, child pornography or offences concerning racism or xenophobia. Under the Digital Services Act Regulation 2022 the rules from the Electronic Commerce Directive 2000 were repeated, so that a platform's or "gatekeeper's" liability is limited unless the platform has failed to act with due diligence to stop certain illegal content, complying with transparent terms and algorithms. New codes of conduct should be drawn up for best practice. Fines for large platforms go up to 6% of annual turnover. These rules fall short of most TV standards that restrict inaccurate news (such as flat Earth conspiracies or global warming denial), discriminatory content short of incitement to hatred, systematic bias, or propaganda from dictatorships or corporations. By contrast, Wikipedia's online content has user-regulated policies preventing uncontrolled use of bots, preventing personal attacks by suspending or banning users that break rules, and ensuring Wikipedia maintains a neutral point of view.
The EU has also begun to regulate marketplaces that operate online, both through competition law and the Digital Markets Act Regulation 2022. First, in a series of Commission decisions, Google and Amazon were fined for competition violations. In the Google Shopping case, the Commission fined Google €2.4 billion for giving preference to its own shopping results over others in Google's search, leading to huge increases in traffic for Google over rivals. In the Google Android case the Commission fined Alphabet Inc (by then Google's rebranded parent name) €4.34 billion, or 4.5% of worldwide turnover, for paying phone manufacturers to pre-install its apps, such as Google search or Chrome, as a condition to license its app marketplace Google Play. In the Google AdSense case, the Commission fined Google €1.49 billion for stopping third-party websites displaying their adverts in Google's embedded search widgets, given that it was dominant in the ad market, unfairly excluding competitors from results. In the Amazon Marketplace case an investigation for abuse of dominant position was launched for Amazon using other traders' data to benefit its own retail business, and preferencing itself in its "Buy Box" and in access to "Prime" seller status. This was settled after Amazon committed in 2022 "not to use non-public data relating to, or derived from, the independent sellers' activities on its marketplace, for its retail business", and to not discriminate against third parties in its Buy Box and Prime services. The Digital Markets Act codifies many of these standards.
Foreign, security and trade policy
Common Foreign and Security Policy
Common Security and Defence Policy
European Defence Fund
European Commissioner for Trade
European Neighbourhood Policy
European Union free trade agreements
Common Commercial Policy (EU)
Criminal law
In 2006, a toxic waste spill off the coast of Côte d'Ivoire, from a European ship, prompted the commission to look into legislation against toxic waste. Environment Commissioner Stavros Dimas stated that "Such highly toxic waste should never have left the European Union". With countries such as Spain not even having a crime against shipping toxic waste, Franco Frattini, the Justice, Freedom and Security Commissioner, proposed with Dimas to create criminal sentences for "ecological crimes". The competence for the Union to do this was contested in 2005 at the Court of Justice, resulting in a victory for the commission. That ruling set a precedent that the commission, on a supranational basis, may legislate in criminal law – something never done before. So far, the only other proposal has been the draft intellectual property rights directive. Motions were tabled in the European Parliament against that legislation on the basis that criminal law should not be an EU competence, but were rejected at the vote. However, in October 2007, the Court of Justice ruled that the commission could not propose what the criminal sanctions could be, only that there must be some.
See also
Notes
References
Butler, Graham; Wessel, Ramses A (2022). EU External Relations Law: The Cases in Context. Oxford: Hart Publishing/Bloomsbury. ISBN 9781509939695.
Craig, Paul; de Búrca, Gráinne (2020). EU Law: Text, Cases, and Materials (7th ed.). Oxford University Press. ISBN 9780198714927.
McGaughey, Ewan (2022). Principles of Enterprise Law: the Economic Constitution and Human Rights. Cambridge University Press. ISBN 9781009045735.
Tobler, Christa; Beglinger, Jacques (2020). Essential EU Law in Charts. Budapest HVG-ORAC. ISBN 978-9632584898.
Weiler, JHH (1991). "The Transformation of Europe". Yale Law Journal. 100 (8): 2403–2483. doi:10.2307/796898. ISSN 0044-0094. JSTOR 796898.
Barnard, Catherine (2013). The substantive law of the EU : the four freedoms (4th ed.). Oxford University Press. ISBN 9780199670765. (later editions are available)
Craig, Paul; de Búrca, Gráinne (2011). The evolution of EU Law (2nd ed.). Oxford University Press. ISBN 9780199592968. (later editions are available)
Craig, Paul; de Búrca, Gráinne (2015). The evolution of EU Law (2nd ed.). Oxford University Press. ISBN 9780198821182.
Hartley, Trevor (2014). The foundations of European Union law : an introduction to the constitutional and administrative law of European Union. Oxford University Press. ISBN 9780198734673.
External links
EUR-Lex – online access to existing and proposed European Union legislation
Treaties
Summaries of EU legislation
Evolution of European Union legislation
The Principle of Loyalty in EU Law, 2014, by Marcus Klamert, Legal Officer, European Commission |
green hydrogen | Green hydrogen (GH2) is hydrogen produced by the electrolysis of water, using renewable electricity. Production of green hydrogen causes significantly lower greenhouse gas emissions than production of grey hydrogen, which is derived from fossil fuels without carbon capture.

Green hydrogen’s principal purpose is to help limit global warming to 1.5 °C, reduce fossil fuel dependence by replacing grey hydrogen, and provide for an expanded set of end-uses in specific economic sectors, sub-sectors and activities. These end-uses may be technically difficult to decarbonize through other means such as electrification with renewable power. Its main applications are likely to be in heavy industry (e.g. high temperature processes alongside electricity, feedstock for production of green ammonia and organic chemicals, as an alternative to coal-derived coke for steelmaking), long-haul transport (e.g. shipping, aviation and to a lesser extent heavy goods vehicles), and long-term energy storage.

As of 2021, green hydrogen accounted for less than 0.04% of total hydrogen production. Its cost relative to hydrogen derived from fossil fuels is the main reason green hydrogen is in less demand.
For example, hydrogen produced by electrolysis powered by solar power was about 25 times more expensive than that derived from hydrocarbons in 2018.
Definition
Most commonly, green hydrogen is defined as hydrogen produced by the electrolysis of water, using renewable electricity. In this article, the term green hydrogen is used with this meaning.
Precise definitions sometimes add other criteria. The global Green Hydrogen Standard defines green hydrogen as “hydrogen produced through the electrolysis of water with 100% or near 100% renewable energy with close to zero greenhouse gas emissions.”

A broader, less-used definition of green hydrogen also includes hydrogen produced through various other methods that produce relatively low emissions and meet other sustainability criteria. For example, these production methods may involve nuclear energy or biomass feedstocks.
Uses
There is potential for green hydrogen to play a significant role in decarbonising energy systems where there are challenges and limitations to replacing fossil fuels with direct use of electricity. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon reducing agent replacing coal-derived coke. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and to a lesser extent heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol, and fuel cell technology. For light duty vehicles including passenger cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in future.

Green hydrogen can also be used for long-duration grid energy storage, and for long-duration seasonal energy storage.
Market
As of 2022, the global hydrogen market was valued at $155 billion and was expected to grow at a compound annual growth rate (CAGR) of 9.3% between 2023 and 2030.
Of this market, green hydrogen accounted for about $4.2 billion (2.7%).
Due to the higher cost of production, green hydrogen represents a smaller fraction of the hydrogen produced compared to its share of market value.
The majority of hydrogen produced in 2020 was derived from fossil fuel. 99% came from carbon-based sources. Electrolysis-driven production represents less than 0.1% of the total, of which only a part is powered by renewable electricity.
The current high cost of production is the main factor limiting the use of green hydrogen. A price of $2/kg is considered by many to be a potential tipping point that would make green hydrogen competitive against grey hydrogen. It is cheapest to produce green hydrogen with surplus renewable power that would otherwise be curtailed, which favours electrolysers capable of responding to low and variable power levels (such as proton exchange membrane electrolysers). The cost of electrolysers fell by 60% from 2010 to 2022, and green hydrogen production costs are forecast to fall significantly to 2030 and 2050, driving down the cost of green hydrogen alongside the falling cost of renewable power generation. Goldman Sachs analysis observed in 2022, just prior to Russia’s invasion of Ukraine, that the “unique dynamic in Europe with historically high gas and carbon prices is already leading to green H2 cost parity with grey across key parts of the region”, and anticipated that green hydrogen would achieve global cost parity with grey hydrogen by 2030, or earlier if a global carbon tax were placed on grey hydrogen.

As of 2021, the green hydrogen investment pipeline was estimated at 121 gigawatts of electrolyser capacity across 136 projects in planning and development phases, totaling over $500 billion. If all projects in the pipeline were built, they could account for 10% of hydrogen production by 2030.
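As a back-of-the-envelope illustration of why the electricity price dominates, the Python sketch below estimates the electricity-only cost of a kilogram of green hydrogen. The figure of roughly 50 kWh of electricity per kilogram is an assumed, typical electrolyser requirement, and capital, water, compression and distribution costs are ignored, so real delivered costs are higher.

def electricity_cost_per_kg_h2(price_usd_per_kwh, kwh_per_kg=50.0):
    """Electricity-only cost of electrolytic hydrogen in USD per kg (illustrative)."""
    return price_usd_per_kwh * kwh_per_kg

for price in (0.05, 0.04, 0.02):
    print(f"${price:.2f}/kWh -> ${electricity_cost_per_kg_h2(price):.2f}/kg H2")
# At about $0.04/kWh the electricity alone already costs ~$2/kg,
# which is why cheap or otherwise-curtailed renewable power matters.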
The market could be worth over $1 trillion a year by 2050 according to Goldman Sachs.
An energy market analyst suggested in early 2021 that the price of green hydrogen would drop 70% by 2031 in countries that have cheap renewable energy.
Projects
Australia
In 2020, the Australian government fast-tracked approval for the world's largest planned renewable energy export facility in the Pilbara region. In 2021, energy companies announced plans to construct a "hydrogen valley" in New South Wales at a cost of $2 billion to replace the region's coal industry.

As of July 2022, the Australian Renewable Energy Agency (ARENA) had invested $88 million in 35 hydrogen projects ranging from university research and development to first-of-a-kind demonstrations. In 2022, ARENA is expected to close on two or three of Australia’s first large-scale electrolyser deployments as part of its $100 million hydrogen deployment round.
Canada
World Energy GH2’s Project Nujio'qonik aims to be Canada’s first commercial green hydrogen / ammonia producer created from three gigawatts of wind energy on the west coast of Newfoundland and Labrador, Canada. Nujio’qonik is the Mi’kmaw name for Bay St. George, where the project is proposed. Since August 2022, the project has been undergoing environmental assessment according to regulatory guidelines issued by the Government of Newfoundland and Labrador. Project Nujio'qonik is expected to produce the first green hydrogen in late-2025 and start international export in 2026.
Chile
Chile's goal to use only clean energy by the year 2050 includes the use of green hydrogen. The EU Latin America and Caribbean Investment Facility provided a €16.5 million grant and the EIB and KfW are in the process of providing up to €100 million each to finance green hydrogen projects.
China
In 2022 China was the leader of the global hydrogen market with an output of 33 million tons (a third of global production), mostly using fossil fuel.
As of 2021, several companies have formed alliances to increase production of the fuel fifty-fold in the next six years.

Sinopec aimed to generate 500,000 tonnes of green hydrogen by 2025. Hydrogen generated from wind energy could provide a cost-effective alternative for coal-dependent regions like Inner Mongolia. As part of preparations for the 2022 Winter Olympics a hydrogen electrolyser, described as the "world's largest" began operations to fuel vehicles used at the games. The electrolyser was powered by onshore wind.
Germany
Germany invested €9 billion to construct 5 GW of electrolyzer capacity by 2030.
India
Reliance Industries announced its plan to use about 3 gigawatts (GW) of solar energy to generate 400,000 tonnes of hydrogen. Gautam Adani, founder of the Adani Group announced plans to invest $70 billion to become the world's largest renewable energy company, and produce the cheapest hydrogen across the globe. The power ministry of India has stated that India intends to produce a cumulative 5 million tonnes of green hydrogen by 2030.

In April 2022, the public sector Oil India Limited (OIL), which is headquartered in eastern Assam’s Duliajan, set up India’s first 99.99% pure green hydrogen pilot plant in keeping with the goal of “making the country ready for the pilot-scale production of hydrogen and its use in various applications” while “research and development efforts are ongoing for a reduction in the cost of production, storage and the transportation” of hydrogen.
Mauritania
Mauritania launched two major projects on green hydrogen. The NOUR Project would become one of the world’s largest hydrogen projects with 10 GW of capacity by 2030 in cooperation with Chariot company. The second is the AMAN Project, which includes 12GW of wind capacity and 18GW of solar capacity to produce 1.7 million tons per annum of green hydrogen or 10 million tons per annum of green ammonia for local use and export, in cooperation with Australian company CWP.
Namibia
Namibia has commissioned a green hydrogen production project with German support. The 10 billion dollar project involves the construction of wind farms and photovoltaic plants with a total capacity of 7 gigawatts (GW). It aims to produce 2 million tonnes of green ammonia and hydrogen derivatives by 2030 and will create 15,000 jobs, of which 3,000 will be permanent.
Oman
An association of companies announced a $30 billion project in Oman, which would become one of the world's largest hydrogen facilities. Construction was to begin in 2028. By 2038 the project was to be powered by 25 GW of wind and solar energy.
Portugal
In April 2021, Portugal announced plans to construct the first solar-powered plant to produce hydrogen by 2023. Lisbon based energy company Galp Energia announced plans to construct an electrolyser to power its refinery by 2025.
Saudi Arabia
In 2021, Saudi Arabia, as a part of the NEOM project, announced an investment of $5bn to build a green hydrogen-based ammonia plant, which would start production in 2025.
Spain
In February 2021, thirty companies announced a pioneering project to provide hydrogen bases in Spain. The project intended to supply 93 GW of solar and 67 GW of electrolysis capacity by the end of the decade.
United Arab Emirates
In 2021, in collaboration with Expo 2020 Dubai, a pilot project was launched which is the first "industrial scale", solar-driven green hydrogen facility in the Middle East and North Africa.
United Kingdom
In March 2021, a proposal emerged to use offshore wind in Scotland to power converted oil and gas rigs into a "green hydrogen hub" which would supply fuel to local distilleries.

In June 2021, Equinor announced plans to triple UK hydrogen production. In March 2022 National Grid announced a project to introduce green hydrogen into the grid with a 200m wind turbine powering an electrolyser to produce gas for about 300 homes.

Vattenfall planned to generate green hydrogen from a test offshore wind turbine near Aberdeen in 2025.
United States
The federal Infrastructure Investment and Jobs Act, which became law in November 2021, allocated $9.5 billion to green hydrogen initiatives. In 2021, the U.S. Department of Energy (DOE) was planning the first demonstration of a hydrogen network in Texas. The department had previously attempted a hydrogen project known as Hydrogen Energy California. Texas is considered a key part of green hydrogen projects in the country as the state is the largest domestic producer of hydrogen and has a hydrogen pipeline network. In 2020, SGH2 Energy Global announced plans to use plastic and paper via plasma gasification to produce green hydrogen near Los Angeles.

In 2021, then-New York governor Andrew Cuomo announced a $290 million investment to construct a green hydrogen fuel production facility. State authorities backed plans for developing fuel cells to be used in trucks and research on blending hydrogen into the gas grid. In March 2022 the governors of Arkansas, Louisiana, and Oklahoma announced the creation of a hydrogen energy hub between the states. Woodside announced plans for a green hydrogen production site in Ardmore, Oklahoma. The Inflation Reduction Act of 2022 established a 10-year production tax credit, which includes a $3.00/kg subsidy for green hydrogen.
Public-private projects
In October 2023, Siemens announced that it had successfully performed the first test of an industrial turbine powered by 100 per cent green hydrogen generated by a 1 megawatt electrolyser. The turbine also operates on gas and any mixture of gas and hydrogen.
Government support
In 2020, the European Commission adopted a dedicated strategy on hydrogen. The "European Green Hydrogen Acceleration Center" is tasked with developing a €100 billion a year green hydrogen economy by 2025.

In December 2020, the United Nations together with RMI and several companies, launched Green Hydrogen Catapult, with a goal to reduce the cost of green hydrogen below US$2 per kilogram (equivalent to $50 per megawatt hour) by 2026.

In 2021, with the support of the governments of Austria, China, Germany, and Italy, UN Industrial Development Organization (UNIDO) launched its Global Programme for Hydrogen in Industry. Its goal is to accelerate the deployment of GH2 in industry.
In 2021, the British government published its policy document, a "Ten Point Plan for a Green Industrial Revolution," which included investing to create 5 GW of low carbon hydrogen by 2030. The plan included working with industry to complete the necessary testing that would allow up to 20% blending of hydrogen into the gas distribution grid by 2023. A BEIS consultation in 2022 suggested that grid blending would only have a "limited and temporary" role due to an expected reduction in the use of natural gas.

The Japanese government planned to transform the nation into a "hydrogen society". Energy demand would require the government to import/produce 36 million tons of liquefied hydrogen. At the time Japan's commercial imports were projected to be 100 times less than this amount by 2030, when the use of fuel was expected to commence. Japan published a preliminary road map that called for hydrogen and related fuels to supply 10% of the power for electricity generation as well as a significant portion of the energy for uses such as shipping and steel manufacture by 2050. Japan created a hydrogen highway consisting of 135 subsidized hydrogen fuel stations and planned to construct 1,000 by the end of the 2020s.

In October 2020, the South Korean government announced its plan to introduce the Clean Hydrogen Energy Portfolio Standards (CHPS) which emphasizes the use of clean hydrogen. During the introduction of the Hydrogen Energy Portfolio Standard (HPS), it was voted on by the 2nd Hydrogen Economy Committee. In March 2021, the 3rd Hydrogen Economy Committee was held to pass a plan to introduce a clean hydrogen certification system based on incentives and obligations for clean hydrogen.

Morocco, Tunisia, Egypt and Namibia have proposed plans to include green hydrogen as a part of their climate change agenda. Namibia is partnering with European countries such as the Netherlands and Germany for feasibility studies and funding.

In July 2020, the European Union unveiled the Hydrogen Strategy for a Climate-Neutral Europe. A motion backing this strategy passed the European Parliament in 2021. The plan is divided into three phases. From 2020 to 2024, the program aims to decarbonize existing hydrogen production. From 2024 to 2030 green hydrogen would be integrated into the energy system. From 2030 to 2050 large-scale deployment of hydrogen would occur. Goldman Sachs estimated hydrogen could supply up to 15% of the EU energy mix by 2050.

Six European Union member states: Germany, Austria, France, the Netherlands, Belgium and Luxembourg, requested hydrogen funding be backed by legislation. Many member countries have created plans to import hydrogen from other nations, especially from North Africa. These plans would increase hydrogen production, but were accused of trying to export the necessary changes needed within Europe. The European Union required that starting in 2021, all new gas turbines made in the bloc must be ready to burn a hydrogen–natural gas blend.

In November 2020, Chile's president presented the "National Strategy for Green Hydrogen," stating he wanted Chile to become "the most efficient green hydrogen producer in the world by 2030". The plan includes HyEx, a project to make solar based hydrogen for use in the mining industry.
Regulations and standards
In the European Union, certified ‘renewable’ hydrogen, defined as produced from non-biological feedstocks, requires an emission reduction of at least 70% below the fossil fuel it is intended to replace. This is distinct in the EU from ‘low carbon’ hydrogen, which is defined as made using fossil fuel feedstocks. For it to be certified, low carbon hydrogen must achieve at least a 70% reduction in emissions compared with the grey hydrogen it replaces.

In the United Kingdom, just one standard is proposed, for ‘low carbon’ hydrogen. Its threshold GHG emissions intensity of 20 g CO2 equivalent per megajoule should be easily met by renewably-powered electrolysis of water for green hydrogen production, but has been set at a level to allow for and encourage other ‘low carbon’ hydrogen production, principally blue hydrogen. Blue hydrogen is grey hydrogen with added carbon capture and storage, which to date has not been produced with carbon capture rates in excess of 60%. To meet the UK’s threshold, its government has estimated that an 85% carbon capture rate would be necessary.

In the United States, planned tax credit incentives for green hydrogen production are to be tied to the emissions intensity of ‘clean’ hydrogen produced, with greater levels of support on offer for lower greenhouse gas intensities.
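To put the UK threshold in mass terms, it can be converted using the lower heating value of hydrogen (roughly 120 MJ/kg, an assumption used here only for illustration): 20 g CO2e/MJ × 120 MJ/kg ≈ 2,400 g CO2e, or about 2.4 kg of CO2-equivalent per kilogram of hydrogen produced.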
Research
A 2023 study reported two uses of a conductive adhesive-barrier (CAB) that converted >99% of photoelectric power to chemical reactions. One experiment examined halide perovskite-based photoelectrochemical cells that achieved efficiency of 13.4% and 16.3 h to t60. The second was formed using a monolithic, stacked, silicon-perovskite tandem (two layered cell, with each layer absorbing a different frequency range), achieving peak efficiency of 20.8% and continuous operation of 102 h.
See also
Hydrogen economy
Alternative fuel
Carbon-neutral fuel
Fossil fuel phase-out
Combined cycle hydrogen power plant
References
External links
Green hydrogen explainer video from Scottish Power |
hydrotreated vegetable oil | Hydrotreated vegetable oil (HVO) is a biofuel made by the hydrocracking or hydrogenation of vegetable oil. Hydrocracking breaks big molecules into smaller ones using hydrogen while hydrogenation adds hydrogen to molecules. These methods can be used to create substitutes for gasoline, diesel, propane, kerosene and other chemical feedstock. Diesel fuel produced from these sources is known as green diesel or renewable diesel.
Diesel fuel created by hydrotreating is called green diesel and is distinct from the biodiesel made through esterification.
Feedstock
The majority of plant and animal oils are triglycerides, suitable for refining. Refinery feedstock includes canola, algae, jatropha, salicornia, palm oil, tallow and soybeans. One type of algae, Botryococcus braunii, produces a different type of oil, known as a triterpene, which is transformed into alkanes by a different process.
Chemical analysis
Synthesis
The production of hydrotreated vegetable oils is based on introducing hydrogen molecules into the raw fat or oil molecule. This process is associated with the reduction of the carbon compound. When hydrogen is used to react with triglycerides, different types of reactions can occur, and different resulting products are obtained. The second step of the process involves converting the triglycerides/fatty acids to hydrocarbons by hydrodeoxygenation (removing oxygen as water) and/or decarboxylation (removing oxygen as carbon dioxide).
A formulaic example of this is: C3H5(RCOO)3 + 12 H2 → C3H8 + 3 RCH3 + 6 H2O
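Written for a single fatty acid chain R–COOH rather than a whole triglyceride (a simplification of the chemistry described above), the two oxygen-removal routes can be sketched as:

Hydrodeoxygenation: R–COOH + 3 H2 → R–CH3 + 2 H2O (oxygen removed as water)
Decarboxylation: R–COOH → R–H + CO2 (oxygen removed as carbon dioxide)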
Chemical composition
The chemical formula for HVO Diesel is CnH2n+2
Chemical properties
Hydrotreated oils are characterized by very good low temperature properties, with a cloud point below −40 °C. Therefore, these fuels are suitable for the preparation of premium fuel with a high cetane number and excellent low temperature properties. The cold filter plugging point (CFPP) virtually corresponds to the cloud point value, which is why the cloud point is significant in the case of hydrotreated oils.
Comparison to biodiesel
Both HVO diesel (green diesel) and biodiesel are made from the same vegetable oil feedstock. However, the processing technologies and chemical makeup of the two fuels differ. The chemical reaction commonly used to produce biodiesel is known as transesterification. The production of biodiesel also makes glycerol, but the production of HVO does not.
Commercialization
Renewable hydrocarbon fuels produced by hydrotreating are at various stages of commercialization throughout the energy industry. Some commercial examples of vegetable oil refining are:
Neste NExBTL
Topsoe HydroFlex technology
Axens Vegan technology
H-Bio, the ConocoPhillips process
UOP/Eni Ecofining process.

Neste is the largest manufacturer, producing 2 million tonnes annually (2013). Neste completed their first NExBTL plant in the summer 2007 and the second one in 2009. Petrobras planned to use 256 megalitres (1,610,000 bbl) of vegetable oils in the production of H-Bio fuel in 2007. ConocoPhillips is processing 42,000 US gallons per day (1,000 bbl/d) of vegetable oil. Other companies working on the commercialization and industrialization of renewable hydrocarbons and biofuels include Neste, REG Synthetic Fuels, LLC, ENI, UPM Biofuels, and Diamond Green Diesel, partnered with countries across the globe. Manufacturers of these renewable diesels report greenhouse gas emissions reductions of 40-90% compared to fossil diesel, as well as better cold-flow properties to work in colder climates. In addition, all of these green diesels can be introduced into any diesel engine or infrastructure without many mechanical modifications at any ratio with petroleum-based diesels.

Renewable diesel from vegetable oil is a growing substitute for petroleum. California fleets used over 200,000,000 gallons of renewable diesel in 2017. The California Air Resources Board predicts that over 2 billion gallons of fuel will be consumed in the state under its Low Carbon Fuel Standard requirements in the next ten years. Fleets operating on Renewable Diesel from various refiners and feedstocks are reported to see lower emissions, reduced maintenance costs, and nearly identical experience when driving with this fuel.
Sustainability concerns
A number of issues have been raised about the sustainability of HVO, primarily concerning the sourcing of its lipid feedstocks. Waste oils such as used cooking oil are a limited resource and their use cannot be scaled up beyond a certain point. Further demand for HVO would have to be met with crop-based virgin vegetable oils, but the diversion of vegetable oils from the food market into the biofuels sector has been linked to increased global food prices, and to global agricultural expansion and intensification. This is associated with a variety of ecological and environmental implications; moreover, greenhouse gas emissions from land use change may in some circumstances negate or exceed any benefit from the displacement of fossil fuels.

A 2022 study published by the International Council on Clean Transportation found that the anticipated scale-up of renewable diesel capacity in the U.S. would quickly exhaust the available supply of waste and residual oils, and increasingly rely on domestic and imported soy oil. The report also noted that increased U.S. renewable diesel production risked indirectly driving the expansion of palm oil cultivation in Southeast Asia, where the palm oil industry is still endemically associated with deforestation and peat destruction.
See also
Biodiesel
Indirect land use change impacts of biofuels
Algae fuel
Renewable hydrocarbon fuels via decarboxylation/decarbonylation
Sustainable oils
Vegetable oil fuel
References
External links
University Of Wisconsin / College Of Engineering (June 6, 2005). Green Diesel: New Process Makes Liquid Transportation Fuel From Plants. ScienceDaily. Retrieved August 10, 2010
Renewable Diesel Primer. Retrieved August 10, 2010 |
environmental policy of the stephen harper government | The environmental policy of the Stephen Harper government was implemented when Stephen Harper was the Prime Minister of Canada from 2006 to 2015, under two minority governments until 2011 when the Conservative Party of Canada won a majority in the 2011 Canadian federal election. During the term of Stephen Harper, Canada's greenhouse gas emissions decreased from 730 to 723 Mt of carbon dioxide equivalent. In contrast, during the period from 1993 until 2006, under various Liberal governments, Canada's greenhouse gas emissions increased from 617 to 730 Mt of carbon dioxide equivalent.

The Harper government took credit for the 7 Mt overall reduction in greenhouse gases, while critics claimed that the Harper government was against measures to curb climate change and global warming. Some point to the Financial crisis of 2007–2008 and the Province of Ontario closing its coal power plants as the reason for the reduction of greenhouse gas emissions during the premiership of Stephen Harper, factors that were outside his control.
Funding
Departmental funding
The Harper administration reduced funding for environmental research and monitoring by $83.3 million for 2012-2013, by $117.9 million for 2013-2014, and by $180.5 million per year from 2014-2015 onwards. The government has also made significant cuts at Fisheries and Oceans Canada, including cutting $100 million in work related to water protection. The reduced functioning of climate monitoring programmes resulted in gaps in data collection, amongst other effects.
Research
As part of the 2008 budget on February 26, 2008, $250 million was announced for research in developing more fuel-efficient vehicles and $300 million for the development of a more advanced nuclear reactor and to improve safety at the nuclear facility in Chalk River, Ontario, which shut down during the fall of 2007 over safety concerns.
Environmental research groups
In 2012, the Conservative government revised the Canadian Environmental Assessment Act, reducing its scope to facilitate the approval of projects that would contribute to economic growth. The number of agencies who could conduct environmental reviews was reduced from 40 to three, and approximately 3000 assessments were cancelled due to the reduction in purview. These revisions raised concern among the opposition and environmental groups, who stated the revisions reduced the government’s responsibility towards the environment, “gutting Canada's environmental assessment process.” For example, the Conservatives have cut funding for the National Round Table on the Environment and the Economy (NRTEE) because the research group promoted carbon taxing. Foreign Affairs Minister John Baird explained the government's position: "Why should taxpayers have to pay for more than 10 reports promoting a carbon tax, something that the people of Canada have repeatedly rejected?"
Funding for provinces
The previous government pledged funding to several provinces including Ontario and Quebec. Quebec's Environment Minister Claude Béchard vowed to encourage the Tory government to continue with the $328 million funding previously committed for the province. The government announced $1.5 billion for supporting provincial projects including the $328 million Quebec requested.
Renewable energy
Lower Churchill Project
In 2011, the federal government pledged a loan guarantee towards the Lower Churchill Project in Labrador, which is scheduled for completion in 2017. On April 17, 2013, the 41st Parliament voted in favour of a loan guarantee to Newfoundland and Labrador for the Lower Churchill Project. The Conservative Party of Canada, the Liberal Party of Canada, and the New Democratic Party voted in favour of the loan guarantee. The Green Party of Canada's only MP, Elizabeth May, abstained from voting. The Bloc Québécois voted against the project. The vote passed 271 to 5.
Media coverage of climate change
According to an Environment Canada document, reported by the Montreal Gazette, "Media coverage of climate change science, our most high-profile issue, has been reduced by over 80 per cent" from 2007 to 2010. The Canadian government was accused of "muzzling" its scientists, because journalists needed to file a request to government officials before being allowed to interview scientists, which requests were often denied or only allowed after the news story had already been published by the journalist.
Position on the Kyoto Accord
One prominent policy of the government since it came to power was its position on the Kyoto Accord, which the federal government had signed in the late 1990s. The Conservative government criticized the Accord for having negative impacts on the economy while not providing concrete results as far as greenhouse gas emission reductions, and proposed a new policy which met with criticism from various environmental organizations and the opposition parties.
Harper and the Conservative government criticized the Kyoto Accord on measures to fight against global warming, saying that the economy would be crippled if Canada was forced to meet the Accord's timetable to reduce greenhouse gas emissions. In 2002, Harper wrote a letter to members of the former Canadian Alliance party, mentioning that the Accord was a "socialist conspiracy" and questioning climate science, and in a meeting with other Commonwealth countries in Uganda commented that Kyoto was a mistake that should not be repeated. He also stated that the Accord "focuses on carbon dioxide, which is essential to life, rather than upon pollutants." Harper considered that the objectives Canada had adopted to meet its goals were not realistic, and later further criticized the accord for not setting any targets for the world's biggest polluters. He proposed a "Made in Canada" plan that would concentrate its efforts on reducing smog pollution from vehicles. In a CTV report in October, however, the Conservatives had mentioned that it would be an approach rather than a plan. While repeatedly mentioning that the goals would not be achieved before the timeline, John Baird mentioned on March 17, 2007, that the government had no plans to abandon the Kyoto Accord. The Conservatives' position was backed by five independent economists, including Toronto-Dominion Bank chief economist Don Drummond. Drummond, who has been consulted by political parties of all stripes, said that the "economic cost [of implementing Kyoto] would be at least as deep as the recession in the early 1980s", agreeing with the results of a study compiled by the environment department.
Opposition members led by Liberal MP Pablo Rodriguez tabled Bill C-288, which would force the government to respect the measures of the Kyoto Accord and to present its plans within 60 days. The bill passed third reading on February 14, 2007, by a vote of 161–113. The Conservatives had appealed to the Speaker of the House, Peter Milliken, to rule the bill invalid on the grounds that it would force the government to spend money against its will, but the appeal was denied. While criticizing the Opposition bill as an empty law with no action plan and no authority to spend, Harper announced that he would respect the law, despite earlier threats by the government not to. Toronto-Dominion Bank chief economist Don Drummond dismissed Bill C-288 as unworkable. On April 19, 2007, Baird told the Senate of Canada environment committee that respecting the Kyoto Accord would have a negative impact on the economy, claiming that Canada would return to a recession similar to that of the early 1980s and that gasoline and natural gas prices would skyrocket, despite a United Nations report that said the impact would be minimal.
In the 2007 Throne Speech, the government officially abandoned the Kyoto objectives in favour of its own policies and accords with Asian and Pacific countries: Harper joined the US-led Asia-Pacific Partnership on Clean Development and Climate on September 24, 2007, alongside the United States, China, Japan, India, South Korea and Australia, several of which are among the world's biggest polluters. The APP's goals are less stringent than those of the Kyoto Protocol and centre on the introduction of newer and cleaner technology, including solar, coal and nuclear power. The Conservatives withdrew Canada from the Kyoto Protocol in December 2011.
Clean Air Act
On October 10, 2006, in Vancouver, Harper announced tougher measures than those of the previous Liberal government, such as tax credits for environmentally friendly measures, a repackaged air quality health index and a program to retrofit diesel school buses. Harper said that these measures would "move industry from voluntary compliance to strict enforcement; replace the current ad hoc, patchwork system with clear, consistent, and comprehensive national standards, and institute a holistic approach that doesn't treat the related issues of pollutants and greenhouse gas emissions in isolation." Prior to the announcement, activist groups had listed a series of recommendations, including regulations on big industries and compliance with the Kyoto Protocol.
Details of the Clean Air Act were revealed on October 19, 2006, by Harper along with Environment Minister Rona Ambrose and Transport Minister Lawrence Cannon. Its main plan was to reduce greenhouse gas emissions by about 45–65% from 2003 levels. The goal was set for the year 2050, while a decrease in greenhouse gas emissions was expected to be noticeable by 2020. Regulations were also set for vehicle fuel consumption for 2011, while new measures for industries would begin in 2010. Finally, oil companies would have to reduce emissions for each barrel produced, although companies could increase their production until 2020. The plan was heavily criticized by opposition parties and several environmental groups, with New Democratic Party leader Jack Layton stating that the act did little to prevent climate change. Since the opposition threatened to turn this into an election issue, the Conservative Party agreed to rework the act.
The Conservatives released a detailed and revised plan called "Turning the Corner" on April 25, 2007, after a speech that John Baird was to deliver on April 26 was leaked when some Liberal MPs received a fax of it. The new plan sought to stop the increase of greenhouse gas emissions before 2012 and reduce them by as much as 20% by 2020. Targets would be imposed on industries before 2015, while home appliances would need to become more energy-efficient. There were also rewards for companies that had reduced their emissions since 2006. The next day, Baird announced additional measures, including one that would force industries to reduce greenhouse gas emissions by 18 percent by 2010, while auto makers would face a mandatory fuel-efficiency standard by 2011. Later in 2007, Baird revealed other plans and deadlines that industries must meet. The plan specified that over 700 big-polluter companies, including oil and gas, pulp and paper, electricity, and iron and steel companies, would have to reduce greenhouse gas emissions by six percent from 2008 to 2010 and would have to report data on their emissions every May 31.
However, critics including the World Wildlife Fund said that greenhouse gas emissions in 2020 would still be higher than 1990 levels, and that Canada would not meet its Kyoto targets before 2025, 13 years after the deadline. High-profile figures including David Suzuki and former US Vice-President Al Gore also criticized the plan as insufficient. In 2019, Canada's GHG emissions were 730 Mt compared to 739 Mt in 2005, representing a reduction of only approximately 5%.
Clean energy technology funding
On December 20, 2006, Ambrose and Agriculture Minister Chuck Strahl announced $345 million of funding and other measures to promote the use of biodiesel and ethanol under policies related to the Clean Air Act. Among them, diesel fuel, regular fuel and heating oil would be required to contain a small proportion of cleaner energy by 2012. Measures also helped farmers diversify their agriculture and farming equipment.
On January 17, 2007, Natural Resources Minister Gary Lunn and new Environment Minister John Baird announced an additional $230 million for the development of clean energy technology. Two days later, Harper, Baird and Lunn presented a new program called the ecoEnergy Renewable Initiative, which would concentrate on increasing cleaner energy sources such as wind, biomass, small hydro and ocean energy. The cost of the program was about $1.5 billion. Some money was also planned as incentives for companies and industries that adopted cleaner energy sources.
On January 21, 2007, the government made another related funding announcement, pledging $300 million to help homeowners across the country become more energy-efficient, including cash rewards for those implementing efficiency measures. Critics of the measures, such as Friends of the Earth Canada and Liberal environment critic David McGuinty, noted however that the Conservatives had reused some of the programs and strategies planned by opposition parties, including a remake of the EnerGuide program launched by the Liberals.
Response to climate change report
Harper later proposed a discussion with NDP leader Jack Layton in light of growing concern raised by Tony Blair's United Kingdom government, as well as a report by Sir Nicholas Stern, a former chief economist of the World Bank, which predicted a drop of up to 20% in the global economy. Layton tabled a private member's bill, the Climate Change Accountability Act (Bill C-224), which contained plans to respect Kyoto's targets. After their meeting, they agreed on a formal review of the Clean Air Act.
Meetings on global warming
Harper cancelled a planned meeting on the environment with European Union members in Helsinki, Finland, a meeting at which he was expected to condemn the Kyoto Accord. Harper's director of communications said that his legislative agenda forced him to withdraw from the meeting. Ambrose attended a two-week UN summit in Nairobi, Kenya, in November 2006 on the issue of the Kyoto Accord and its targets; opposition members claimed that her presence was an embarrassment for Canada.
In late 2007, Harper attended the Commonwealth Summit Meeting in Uganda. While Harper called Kyoto a mistake, he rejected claims that Canada would be a holdout on climate change action. A deal was reached among the 53 members of the organization, but a proposal to exempt developing countries from complying with emission reductions was blocked. He commented that the deal in Uganda would set the stage for the meeting in Indonesia. John Baird, who was also at the meeting, said that any agreement would have to include reduction targets with which the biggest polluters, such as the United States, China and India, must comply. After difficult discussions, a last-ditch agreement was reached late in the Summit, consisting of a two-year plan that would lead to a new treaty replacing the Kyoto Protocol, as well as additional negotiations until 2009 that would require countries to set basic parameters for greenhouse gas reduction goals. Baird, while saying that the last-minute talks were a positive step toward a future agreement, stated that he was disappointed that parts of the agreement had been watered down and that "the deal was almost completely stripped of any reference to numbers and targets that could have been the starting point for the discussion". Climate change was also a topic at the G8 meeting in Japan in July 2008, where the organization agreed to set an objective of reducing greenhouse gas emissions by 50 per cent by 2050, although it was not clear whether the goal was based on 1990 or current (2008) levels.
Climate change in the Arctic
On March 1, 2007, while launching the International Polar Year, a worldwide program focused on intensive research in the Arctic regions, including the effects of climate change, the government announced $150 million over four years in funding for over 40 projects related to the IPY program. In the 2009 federal budget, the government introduced $85 million over two years for key Arctic research stations, and $2 million over two years for a feasibility study for a world-class Arctic research station.
Clean-car rebate
As part of the 2007 budget on March 19, 2007, Flaherty announced a rebate of up to $2,000 for people who purchase fuel-efficient vehicles. He also announced a new levy to penalize consumers who purchase vehicles with high fuel consumption: if a vehicle consumed more than 13 litres of fuel per 100 kilometres in the city, a levy of $1,000 for every litre per 100 kilometres above that threshold would be imposed, up to a total of $4,000. However, the 2008 budget announced that the clean-car rebate would be scrapped in 2009.
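The levy described above amounts to a simple stepped formula. The sketch below is only an illustration of that reading: the 13 L/100 km threshold, the $1,000-per-litre rate and the $4,000 cap come from the description above, but the exact bracket boundaries used in the actual Green Levy schedule are assumed here, not taken from this text.
```python
import math

def green_levy(litres_per_100km: float) -> int:
    """Illustrative levy: $1,000 per started litre/100 km above 13, capped at $4,000 (bracket details assumed)."""
    if litres_per_100km <= 13.0:
        return 0
    excess = litres_per_100km - 13.0
    return min(4000, 1000 * math.ceil(excess))

# Assumed examples: a 13.6 L/100 km vehicle would owe $1,000; a 16.5 L/100 km vehicle hits the $4,000 cap.
print(green_levy(13.6), green_levy(16.5))
```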
Critics
Due to the mounting controversy surrounding the Clean Air Act, there were reports by the Canadian Press that Ambrose would be relieved of her duties as Environment Minister and replaced by Indian Affairs Minister Jim Prentice in a future cabinet shuffle. However, on January 4, 2007, Ambrose was replaced by the President of the Treasury Board, John Baird.
In 2011, Canada's commissioner of the environment and sustainable development, Scott Vaughan, stated that the government was failing to meet not only Kyoto standards, but also those of other agreements it had signed. He also criticized the Harper government for dramatically lowering its greenhouse gas emission reduction targets since 2007, which had dropped by 90% (from 282 million tonnes to 28 million tonnes).
See also
Environment of Canada
Domestic policy of the Stephen Harper government
References
plant milk
Plant milk is a plant beverage with a color resembling that of milk. Plant milks are non-dairy beverages made from a water-based plant extract for flavoring and aroma. Plant milks are consumed as alternatives to dairy milk, and may provide a creamy mouthfeel.
As of 2021, there are about 17 different types of plant milks; almond, oat, soy, coconut, and pea are the highest-selling worldwide. Production of plant-based milks, particularly soy, oat, and pea milks, can offer environmental advantages over animal milks in terms of greenhouse gas emissions, land and water use.
Plant-based beverages have been consumed for centuries, with the term "milk-like plant juices" used since the 13th century. In the 21st century, they are commonly referred to as plant-based milk, alternative milk, non-dairy milk or vegan milk. For commerce, plant-based beverages are typically packaged in containers similar and competitive to those used for dairy milk, but cannot be labeled as "milk" within the European Union.
Across various cultures, plant milk has been both a beverage and a flavor ingredient in sweet and savory dishes, such as the use of coconut milk in curries. It is compatible with vegetarian and vegan lifestyles. Plant milks are also used to make ice cream alternatives, plant cream, vegan cheese, and yogurt-analogues, such as soy yogurt. The global plant milk market was estimated to reach US$62 billion by 2030.
History
Before the commercial production of 'milks' from legumes, beans and nuts, plant-based mixtures resembling milk had existed for centuries. The Wabanaki and other Native American tribal nations in the northeastern United States made milk and infant formula from nuts.
Horchata, a beverage originally made in North Africa from soaked, ground, and sweetened tiger nuts, spread to Iberia (now Spain) before the year 1000. In English, the word "milk" has been used to refer to "milk-like plant juices" since 1200 CE.
Recipes from the 13th-century Levant describe almond milk. Soy milk was used in China during the 14th century. In Medieval England, almond milk was used in dishes such as ris alkere (a type of rice pudding) and appears in the recipe collection The Forme of Cury. Coconut milk (and coconut cream) are traditional ingredients in many cuisines, such as those of South and Southeast Asia, and are often used in curries.
Plant milks may be regarded as milk substitutes in Western countries, but they have traditionally been consumed in other parts of the world, especially where rates of lactose intolerance are higher (see the epidemiology section of lactose intolerance).
Types
Common plant milks are almond milk, coconut milk, rice milk, and soy milk. Other plant milks include hemp milk, oat milk, pea milk, and peanut milk.
Plant milks can be made from:
Grains: barley, fonio, maize, millet, oat, rice, rye, sorghum, teff, triticale, spelt, wheat
Pseudocereals: amaranth, buckwheat, quinoa
Legumes: lupin, pea, peanut, soy
Nuts: almond, brazil, cashew, hazelnut, macadamia, pecan, pistachio, walnut
Seeds: chia seed, flax seed, hemp seed, pumpkin seed, sesame seed, sunflower seed
Other: coconut (fruit; drupe), banana (fruit; berry), potato (tuber), tiger nut (tuber)
A blend is a plant milk created by mixing two or more types together. Examples of blends are almond-coconut milk and almond-cashew milk.
Other traditional plant milk recipes include:
Kunu, a Nigerian beverage made from sprouted millet, sorghum, or maize
Sikhye, a traditional sweet Korean rice beverage
Amazake, a Japanese rice milk
Manufacturing
Although there are variations in the manufacturing of plant milks according to the starting plant material, as an example, the general technique for soy milk involves several steps, including:
cleaning, soaking and dehulling the beans
grinding of the starting material to produce a slurry, powder or emulsion
heating the processed plant material to denature lipoxidase enzymes to minimize their effects on flavor
removing sedimentable solids by filtration
adding water, sugar (or sugar substitutes) and other ingredients to improve flavour, aroma, and micronutrient content
pasteurizing the pre-final liquid
homogenizing the liquid to break down fat globules and particles for a smooth mouthfeel
packaging, labeling and storage at 1 °C (34 °F)
The actual content of the highlighted plant in commercial plant milks may be only around 2%. Other ingredients commonly added to plant milks during manufacturing include guar gum, xanthan gum, or sunflower lecithin for texture and mouthfeel, select micronutrients (such as calcium, B vitamins, and vitamin D), salt, and natural or artificial ingredients—such as flavours characteristic of the featured plant—for aroma, color, and taste. Plant milks are also used to make ice cream, plant cream, vegan cheese, and yogurt-analogues, such as soy yogurt.
The production of almond-based dairy substitutes has been criticized on environmental grounds as large amounts of water and pesticides are used. The emissions, land, and water footprints of plant milks vary, due to differences in crop water needs, farming practices, region of production, production processes, and transportation. Production of plant-based milks, particularly soy and oat milks, can offer environmental advantages over animal milks in terms of greenhouse gas emissions, land and water use.
Nutritional comparison with cow's milk
Many plant milks aim to contain the same proteins, vitamins and lipids as those produced by lactating mammals. Generally, because plant milks are manufactured using processed extracts of the starting plant, plant milks are lower in nutrient density than dairy milk and are fortified during manufacturing to add precise levels of micronutrients, commonly calcium and vitamins A and D. Animal milks are also commonly fortified, and many countries have laws mandating fortification of milk products with certain nutrients, commonly vitamins A and D.
Nutritional content of human, cow, soy, almond, and oat milks
Non-human milks are fortified
Packaging and commerce
Plant-based milks have emerged as an alternative to dairy in response to consumer dietary requests and changing attitudes about animals and the environment. Huffington Post stated that due to health and environmental reasons as well as changing consumer trends, more individuals regularly buy non-dairy alternatives to milk. Between 1974 and 2020, dairy milk consumption among people aged between 16 and 24 in the United Kingdom decreased from 94% to 73%. In Australia, there is decreased confidence within the dairy industry, with only 53% optimistic about the future profitability of and demand for dairy products, per a Dairy Australia report.
To improve competition, plant milks are typically packaged in containers similar to those of dairy milks. A scientific journal article argued that plant-milk companies send the message that plant milks are 'good and wholesome' and dairy milk is 'bad for the environment', and the article also reported that an increasing number of young people associate dairy with environmental damage. There has been increased concern that dairy production has adverse effects on biodiversity, water and land use. These negative links between dairy and the environment have also been communicated through audiovisual material against dairy production, such as 'Cowspiracy' and 'What the Health'. Animal welfare concerns have also contributed to the declining popularity of dairy milk in many Western countries. Advertising for plant milks may also contrast the intensive farming effort needed to produce dairy milk with the relative ease of harvesting plant sources, such as oats, rice or soybeans. In 2021, an advertisement for oat milk brand Oatly aired during the Super Bowl.
In the United States, plant milk sales grew steadily by 61% over the period 2012 to 2018. As of 2019, the plant-based milk industry in the US is worth $1.8 billion per year. In 2018, the value of 'dairy alternatives' around the world was said to be $8 billion. Among plant milks, almond (64% market share), soy (13% market share), and coconut (12% market share) were category leaders in the United States during 2018. Oat milk sales increased by 250% in Canada during 2019, and its growing consumption in the United States and United Kingdom led to production shortages from unprecedented consumer demand. In 2020, one major coffee retailer – Starbucks – added oat milk, coconut milk, and almond milk beverages to its menus in the United States and Canada. During 2020, oat milk sales in the United States increased to $213 million, becoming the second most consumed plant milk after almond milk ($1.5 billion in 2020 sales).
A key dietary reason for the increasing popularity of plant-based milks is lactose intolerance. For example, lactose is the most common food causing intolerance in Australia, affecting 4.5% of the population. In the United States, around 40 million people are lactose intolerant.
Labeling and terminology
One of the first reliable modern English dictionaries, Samuel Johnson's 1755 A Dictionary of the English Language, gave two definitions of the word "milk". The first described "the liquor with which animals feed their young from the breast", and the second an "emulsion made by contusion of seeds", using almond milk as an example.
As plant milks resurged in popularity in the late twentieth and early twenty-first centuries, their definition became a matter of controversy. Plant milks may be labeled to highlight their nutrient contents, or with terms reflecting their composition or absence of ingredients, such as "dairy-free", "gluten-free" or "GMO-free". Manufacturers and distributors of animal milk have advocated that plant-based milk not be labelled as "milk". They complain that consumers may be confused between the two, and that plant-based milks are not necessarily as nutritious in terms of vitamins and minerals.
Europe
In December 2013, European Union regulations stated that the terms "milk", "butter", "cheese", "cream" and "yoghurt" can only be used to market and advertise products derived from animal milk, with a small number of exceptions including coconut milk, peanut butter and ice cream. In 2017, the Landgericht Trier (Trier regional court), Germany, asked the Court of Justice of the European Union to clarify European food-labeling law (Case C-422/16), with the court stating that plant-based products cannot be marketed as milk, cream, butter, cheese or yoghurt within the European Union because these terms are reserved for animal products; exceptions to this do not include tofu and soy. Although plant-based dairy alternatives are not allowed to be called "milk", "cheese" and the like, they are allowed to be described as buttery or creamy. In the United Kingdom, strict standards are applied to food labeling for terms such as milk, cheese, cream and yogurt, which are protected to describe dairy products and may not be used to describe non-dairy produce. However, there are exceptions for each of the EU languages, based on established use of livestock terms for non-livestock products. The list's extent varies widely; for example, there is only one exception in Polish, and 20 exceptions in English.
A proposal for further restrictions failed at second reading in the European Parliament, in May 2021. The proposal, called Amendment 171, would have outlawed labels including 'yogurt-style' and 'cheese alternative'.
United States
In the United States, the dairy industry petitioned the FDA to ban the use of terms like "milk", "cheese", "cream" and "butter" on plant-based analogues (except for peanut butter). FDA commissioner Scott Gottlieb stated on July 17, 2018, that the term "milk" is used imprecisely in the labeling of non-dairy beverages such as soy milk, oat milk and almond milk: "An almond doesn't lactate", he said. In 2019, the US National Milk Producers Federation petitioned the FDA to restrict labeling of plant-based milks, claiming they should be described as "imitation". In response, the Plant-Based Foods Association stated the word "imitation" was disparaging, and that there was no evidence that consumers were misled or confused about plant-based milks. A 2018 survey by the International Food Information Council Foundation found that consumers in the United States do not typically confuse plant-based analogues with animal milk or dairy products. As of 2021, though the USDA is investigating and various state legislatures are considering regulation, various courts have determined that reasonable consumers are not confused, and the FDA has enacted no regulations against plant-based milk labels.
In 2021, the FDA issued a final rule that amends yogurt's standard of identity (which remains a product of "milk-derived ingredients"), and is expected to issue industry guidance on "Labeling of Plant-based Milk Alternatives" in 2022.
Proponents of plant-based milk assert that these labeling requirements are infantilizing to consumers and burdensome and unfair to dairy alternatives. Critics of the FDA's labeling requirements also note that there is often collusion between government officials and the dairy industry in an attempt to maintain dairy dominance in the market. For example, in 2017, Senator Tammy Baldwin (WI) introduced the "Defending Against Imitations and Replacements of Yogurt, Milk, and Cheese to Promote Regular Intake of Dairy Everyday (DAIRY PRIDE) Act", which would prevent almond milk, coconut milk and cashew milk from being labeled with terms like milk, yogurt, and cheese. Proponents of plant-based dairy alternatives note that dairy sales are decreasing faster than plant sales are increasing, and that therefore attacking plant milks as the chief reason for the decline in dairy consumption is inaccurate. A 2020 USDA study found that the "increase in sales over 2013 to 2017 of plant-based options is one-fifth the size of the decrease in Americans' purchases of cow's milk."
Health recommendations
Health authorities recommend that plant milks not be given to infants younger than 12 months, with the exception of commercially prepared infant formula such as soy infant formula. A 2020 clinical review stated that only appropriate commercial infant formulas should be used as alternatives to human milk, which is a substantial source of calcium, vitamin D and protein in the first year of life, and that plant milks "do not represent an equivalent source of such nutrients".
The Healthy Drinks, Healthy Kids 2023 guidelines state that infants younger than 12 months should not drink plant milks. They suggest that children between 12 and 24 months may consume fortified soy milk, but not other non-dairy milks such as almond, oat and rice, which are deficient in key nutrients. A 2022 review suggested that the best option for toddlers (1–3 years old) who do not consume cow's milk would be to have at least 250 mL/day of fortified soy milk.
For vegan infants younger than 12 months who are not breastfed, the New Zealand Ministry of Health recommends soy infant formula and advises against the use of plant milks. A 2019 Consensus Statement from the Academy of Nutrition and Dietetics, American Academy of Pediatric Dentistry, American Academy of Pediatrics, and the American Heart Association concluded that plant milks are not recommended for infants younger than 12 months, and that for children aged 1–5 years plant milks may be useful for those with allergies or intolerances to cow's milk but should only be consumed after consultation with a professional health care provider.
See also
References
External links
Wikibooks Cookbook category for Nut and Grain Milk recipes |
diet (nutrition)
In nutrition, diet is the sum of food consumed by a person or other organism.
The word diet often implies the use of specific intake of nutrition for health or weight-management reasons (with the two often being related). Although humans are omnivores, each culture and each person holds some food preferences or some food taboos. This may be due to personal tastes or ethical reasons. Individual dietary choices may be more or less healthy.
Complete nutrition requires ingestion and absorption of vitamins, minerals, essential amino acids from protein and essential fatty acids from fat-containing food, also food energy in the form of carbohydrate, protein, and fat. Dietary habits and choices play a significant role in the quality of life, health and longevity.
Health
A healthy diet can improve and maintain health, including aspects of both mental and physical health. Specific diets, such as the DASH diet, can be used in the treatment and management of chronic conditions.
Dietary recommendations exist for many different countries, and they usually emphasise a balanced diet which is culturally appropriate. These recommendations are different from dietary reference values, which provide information about the prevention of nutrient deficiencies.
Dietary choices
Exclusionary diets are diets with certain groups or specific types of food avoided, either due to health considerations or by choice. Many do not eat food from animal sources to varying degrees (e.g. flexitarianism, pescetarianism, vegetarianism, and veganism) for health reasons, issues surrounding morality, or to reduce their personal impact on the environment (e.g. environmental vegetarianism). People on a balanced vegetarian or vegan diet can obtain adequate nutrition, but may need to specifically focus on consuming specific nutrients, such as protein, iron, calcium, zinc, and vitamin B12. Raw foodism and intuitive eating are other approaches to dietary choices. Education, income, local availability, and mental health are all major factors for dietary choices.
Weight management
A particular diet may be chosen to promote weight loss or weight gain. Changing a person's dietary intake, or "going on a diet", can change the energy balance, and increase or decrease the amount of fat stored by the body. The terms "healthy diet" and "diet for weight management" (dieting) are often related, as the two promote healthy weight management. If a person is overweight or obese, changing to a diet and lifestyle that allows them to burn more calories than they consume may improve their overall health, possibly preventing diseases that are attributed in part to weight, including heart disease and diabetes. Within the past 10 years, obesity rates have increased by almost 10%. Conversely, if a person is underweight due to illness or malnutrition, they may change their diet to promote weight gain. Intentional changes in weight, though often beneficial, can be potentially harmful to the body if they occur too rapidly. Unintentional rapid weight change can be caused by the body's reaction to some medications, or may be a sign of major medical problems including thyroid issues and cancer among other diseases.
Eating disorders
An eating disorder is a mental disorder that interferes with normal food consumption. It is defined by abnormal eating habits, and thoughts about food that may involve eating much more or much less than needed. Common eating disorders include anorexia nervosa, bulimia nervosa, and binge-eating disorder. Eating disorders affect people of every gender, age, socioeconomic status, and body size.
Environmental dietary choices
Agriculture is a driver of environmental degradation, such as biodiversity loss, climate change, desertification, soil degradation and pollution. The food system as a whole – including refrigeration, food processing, packaging, and transport – accounts for around one-quarter of greenhouse gas emissions. More sustainable dietary choices can be made to reduce the impact of the food system on the environment. These choices may involve reducing consumption of meat and dairy products and instead eating more plant-based foods, and eating foods grown through sustainable farming practices.
Religious and cultural dietary choices
Some cultures and religions have restrictions concerning what foods are acceptable in their diet. For example, only Kosher foods are permitted in Judaism, and Halal foods in Islam. Although Buddhists are generally vegetarian, the practice varies, and meat-eating may be permitted depending on the sect. In Hinduism, vegetarianism is the ideal. Jains are strictly vegetarian, and in addition the consumption of any root vegetables (e.g., potatoes, carrots) is not permitted.
In Christianity there is no restriction on the kinds of animals that can be eaten, though various groups within Christianity have practiced specific dietary restrictions for various reasons. The most common diets used by Christians are Mediterranean and vegetarianism.
Diet classification table
Notes
See also
Diet food
Dieting
Dessert crop
Intuitive eating
Nutrition psychology
References
External links
The dictionary definition of diet at Wiktionary |
geothermal power
Geothermal power is electrical power generated from geothermal energy. Technologies in use include dry steam power stations, flash steam power stations and binary cycle power stations. Geothermal electricity generation is currently used in 26 countries, while geothermal heating is in use in 70 countries.
As of 2019, worldwide geothermal power capacity amounts to 15.4 gigawatts (GW), of which 23.9% (3.68 GW) are installed in the United States. International markets grew at an average annual rate of 5 percent over the three years to 2015, and global geothermal power capacity is expected to reach 14.5–17.6 GW by 2020. Based on current geologic knowledge and technology the Geothermal Energy Association (GEA) publicly discloses, the GEA estimates that only 6.9% of total global potential has been tapped so far, while the IPCC reported geothermal power potential to be in the range of 35 GW to 2 TW. Countries generating more than 15 percent of their electricity from geothermal sources include El Salvador, Kenya, the Philippines, Iceland, New Zealand, and Costa Rica. Indonesia has an estimated potential of 29 GW of geothermal energy resources, the largest in the world; in 2017, its installed capacity was 1.8 GW.
Geothermal power is considered to be a sustainable, renewable source of energy because the heat extraction is small compared with the Earth's heat content. The greenhouse gas emissions of geothermal electric stations average 45 grams of carbon dioxide per kilowatt-hour of electricity, or less than 5% of those of conventional coal-fired plants.
As a source of renewable energy for both power and heating, geothermal has the potential to meet 3–5% of global demand by 2050. With economic incentives, it is estimated that by 2100 it will be possible to meet 10% of global demand with geothermal power.
History and development
In the 20th century, demand for electricity led to the consideration of geothermal power as a generating source. Prince Piero Ginori Conti tested the first geothermal power generator on 4 July 1904 in Larderello, Italy. It successfully lit four light bulbs. Later, in 1911, the world's first commercial geothermal power station was built there. Experimental generators were built in Beppu, Japan and the Geysers, California, in the 1920s, but Italy was the world's only industrial producer of geothermal electricity until 1958.
In 1958, New Zealand became the second major industrial producer of geothermal electricity when its Wairakei station was commissioned. Wairakei was the first station to use flash steam technology. Over the past 60 years, net fluid production has been in excess of 2.5 km³. Subsidence at Wairakei-Tauhara has been an issue in a number of formal hearings related to environmental consents for expanded development of the system as a source of renewable energy.
In 1960, Pacific Gas and Electric began operation of the first successful geothermal electric power station in the United States at The Geysers in California. The original turbine lasted for more than 30 years and produced 11 MW net power.
The binary cycle power station was first demonstrated in 1967 in the Soviet Union and later introduced to the United States in 1981, following the 1970s energy crisis and significant changes in regulatory policies. This technology allows the use of much lower temperature resources than were previously recoverable. In 2006, a binary cycle station in Chena Hot Springs, Alaska, came on-line, producing electricity from a record low fluid temperature of 57 °C (135 °F).
Geothermal electric stations have until recently been built exclusively where high-temperature geothermal resources are available near the surface. The development of binary cycle power plants and improvements in drilling and extraction technology may enable enhanced geothermal systems over a much greater geographical range. Demonstration projects are operational in Landau-Pfalz, Germany, and Soultz-sous-Forêts, France, while an earlier effort in Basel, Switzerland was shut down after it triggered earthquakes. Other demonstration projects are under construction in Australia, the United Kingdom, and the United States of America.
The thermal efficiency of geothermal electric stations is low, around 7–10%, because geothermal fluids are at a low temperature compared with steam from boilers. By the laws of thermodynamics this low temperature limits the efficiency of heat engines in extracting useful energy during the generation of electricity. Exhaust heat is wasted, unless it can be used directly and locally, for example in greenhouses, timber mills, and district heating. The efficiency of the system does not affect operational costs as it would for a coal or other fossil fuel plant, but it does factor into the viability of the station. In order to produce more energy than the pumps consume, electricity generation requires high-temperature geothermal fields and specialized heat cycles. Because geothermal power does not rely on variable sources of energy, unlike, for example, wind or solar, its capacity factor can be quite large – up to 96% has been demonstrated. However, the global average capacity factor was 74.5% in 2008, according to the IPCC.
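The temperature limit on efficiency can be illustrated with the Carnot bound, the maximum fraction of heat any engine can convert to work between a hot source and a cold sink. The sketch below is illustrative only: the 150 °C source and 25 °C rejection temperatures are assumed for the example and are not figures from a specific plant.
```python
def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum theoretical heat-engine efficiency between two temperatures given in Celsius."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Assumed example: a 150 degC geothermal resource rejecting heat at 25 degC.
print(f"Carnot limit: {carnot_efficiency(150, 25):.0%}")   # roughly 30%
# Real geothermal stations achieve far less (about 7-10% per the text above),
# because of non-ideal cycles, parasitic pumping loads and heat-exchanger losses.
```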
Resources
The Earth's heat content is about 1×10¹⁹ TJ (2.8×10¹⁵ TWh). This heat naturally flows to the surface by conduction at a rate of 44.2 TW and is replenished by radioactive decay at a rate of 30 TW. These power rates are more than double humanity's current energy consumption from primary sources, but most of this power is too diffuse (approximately 0.1 W/m² on average) to be recoverable. The Earth's crust effectively acts as a thick insulating blanket which must be pierced by fluid conduits (of magma, water or other fluids) to release the heat underneath.
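As a rough check on the "too diffuse" figure, dividing the 44.2 TW conductive heat flow by the Earth's surface area reproduces the order of magnitude quoted above. The snippet below is a back-of-the-envelope illustration; the Earth radius used is the standard mean value and is not taken from this article.
```python
import math

heat_flow_w = 44.2e12                 # global conductive heat flow from the text (44.2 TW)
earth_radius_m = 6.371e6              # mean Earth radius (assumed standard value)
surface_area_m2 = 4 * math.pi * earth_radius_m ** 2

flux = heat_flow_w / surface_area_m2
print(f"Average geothermal heat flux: {flux:.3f} W/m^2")   # ~0.087 W/m^2, i.e. roughly 0.1 W/m^2
```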
Electricity generation requires high-temperature resources that can only come from deep underground. The heat must be carried to the surface by fluid circulation, either through magma conduits, hot springs, hydrothermal circulation, oil wells, drilled water wells, or a combination of these. This circulation sometimes exists naturally where the crust is thin: magma conduits bring heat close to the surface, and hot springs bring the heat to the surface. If a hot spring is not available, a well must be drilled into a hot aquifer. Away from tectonic plate boundaries the geothermal gradient is 25–30 °C per kilometre (km) of depth in most of the world, so wells would have to be several kilometres deep to permit electricity generation. The quantity and quality of recoverable resources improves with drilling depth and proximity to tectonic plate boundaries.
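The geothermal gradient gives a rough estimate of how deep a well must go to reach temperatures useful for electricity generation away from plate boundaries. The following sketch assumes a 15 °C mean surface temperature and a linear gradient; the 180 °C and 57 °C targets correspond to the flash-steam and record-low binary-cycle temperatures mentioned elsewhere in this article, and local anomalies are ignored.
```python
def depth_for_temperature(target_c: float, gradient_c_per_km: float, surface_c: float = 15.0) -> float:
    """Depth in km needed to reach target_c assuming a linear geothermal gradient."""
    return (target_c - surface_c) / gradient_c_per_km

# Using the 25-30 degC/km gradient quoted above.
for gradient in (25, 30):
    print(f"{gradient} degC/km: flash steam (~180 degC) at ~{depth_for_temperature(180, gradient):.1f} km, "
          f"binary cycle (~57 degC) at ~{depth_for_temperature(57, gradient):.1f} km")
# Consistent with the statement that wells would have to be several kilometres deep.
```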
In ground that is hot but dry, or where water pressure is inadequate, injected fluid can stimulate production. Developers bore two holes into a candidate site, and fracture the rock between them with explosives or high-pressure water. Then they pump water or liquefied carbon dioxide down one borehole, and it comes up the other borehole as a gas. This approach is called hot dry rock geothermal energy in Europe, or enhanced geothermal systems in North America. Much greater potential may be available from this approach than from conventional tapping of natural aquifers.
Estimates of the electricity generating potential of geothermal energy vary from 35 to 2000 GW depending on the scale of investments. This does not include non-electric heat recovered by co-generation, geothermal heat pumps and other direct use. A 2006 report by the Massachusetts Institute of Technology (MIT) that included the potential of enhanced geothermal systems estimated that investing US$1 billion in research and development over 15 years would allow the creation of 100 GW of electrical generating capacity by 2050 in the United States alone. The MIT report estimated that over 200×10⁹ TJ (200 ZJ; 5.6×10⁷ TWh) would be extractable, with the potential to increase this to over 2,000 ZJ with technology improvements – sufficient to provide all the world's present energy needs for several millennia.
At present, geothermal wells are rarely more than 3 km (1.9 mi) deep. Upper estimates of geothermal resources assume wells as deep as 10 km (6.2 mi). Drilling near this depth is now possible in the petroleum industry, although it is an expensive process. The deepest research well in the world, the Kola Superdeep Borehole (KSDB-3), is 12.261 km (7.619 mi) deep.
Wells drilled to depths greater than 4 km (2.5 mi) generally incur drilling costs in the tens of millions of dollars. The technological challenges are to drill wide bores at low cost and to break larger volumes of rock.
Geothermal power is considered to be sustainable because the heat extraction is small compared to the Earth's heat content, but extraction must still be monitored to avoid local depletion. Although geothermal sites are capable of providing heat for many decades, individual wells may cool down or run out of water. The three oldest sites, at Larderello, Wairakei, and the Geysers have all reduced production from their peaks. It is not clear whether these stations extracted energy faster than it was replenished from greater depths, or whether the aquifers supplying them are being depleted. If production is reduced, and water is reinjected, these wells could theoretically recover their full potential. Such mitigation strategies have already been implemented at some sites. The long-term sustainability of geothermal energy has been demonstrated at the Larderello field in Italy since 1913, at the Wairakei field in New Zealand since 1958, and at the Geysers field in California since 1960.
Power station types
Geothermal power stations are similar to other steam turbine thermal power stations in that heat from a fuel source (in geothermal's case, the Earth's core) is used to heat water or another working fluid. The working fluid is then used to turn a turbine of a generator, thereby producing electricity. The fluid is then cooled and returned to the heat source.
Dry steam power stations
Dry steam stations are the simplest and oldest design. There are few power stations of this type, because they require a resource that produces dry steam, but they are the most efficient, with the simplest facilities. At these sites, there may be liquid water present in the reservoir, but only steam, not water, is produced to the surface. Dry steam power directly uses geothermal steam of 150 °C or greater to turn turbines. As the turbine rotates, it powers a generator that produces electricity and adds to the field's power output. The steam is then emitted to a condenser, where it turns back into a liquid, which then cools the water. After the water is cooled it flows down a pipe that conducts the condensate back into deep wells, where it can be reheated and produced again. At The Geysers in California, after the first 30 years of power production, the steam supply had depleted and generation was substantially reduced. To restore some of the former capacity, supplemental water injection was developed during the 1990s and 2000s, including utilization of effluent from nearby municipal sewage treatment facilities.
Flash steam power stations
Flash steam stations pull deep, high-pressure hot water into lower-pressure tanks and use the resulting flashed steam to drive turbines. They require fluid temperatures of at least 180 °C, usually more. This is the most common type of station in operation today. Flash steam plants use geothermal reservoirs of water with temperatures greater than 360 °F (182 °C). The hot water flows up through wells in the ground under its own pressure. As it flows upward, the pressure decreases and some of the hot water is transformed into steam. The steam is then separated from the water and used to power a turbine/generator. Any leftover water and condensed steam may be injected back into the reservoir, making this a potentially sustainable resource.
Binary cycle power stations
Binary cycle power stations are the most recent development, and can accept fluid temperatures as low as 57 °C. The moderately hot geothermal water is passed by a secondary fluid with a much lower boiling point than water. This causes the secondary fluid to flash vaporize, which then drives the turbines. This is the most common type of geothermal electricity station being constructed today. Both Organic Rankine and Kalina cycles are used. The thermal efficiency of this type of station is typically about 10–13%. Binary cycle power plants have an average unit capacity of 6.3 MW, 30.4 MW at single-flash power plants, 37.4 MW at double-flash plants, and 45.4 MW at power plants working on superheated steam.
Worldwide production
The International Renewable Energy Agency has reported that 14,438 megawatts (MW) of geothermal power was online worldwide at the end of 2020, generating 94,949 GWh of electricity. In theory, the world's geothermal resources are sufficient to supply humans with energy. However, only a tiny fraction of the world's geothermal resources can at present be exploited on a profitable basis.
In 2021, the United States led the world in geothermal electricity production with 3,889 MW of installed capacity, a substantial increase from 2020, when it produced 2,587 MW. Indonesia follows the US as the second highest producer of geothermal power in the world, with 2,277 MW of capacity online in 2021.
Al Gore said at The Climate Project Asia Pacific Summit that Indonesia could become a superpower in electricity production from geothermal energy. In 2013, the publicly owned electricity sector in India announced a plan to develop the country's first geothermal power facility in the landlocked state of Chhattisgarh.
Geothermal power in Canada has high potential due to the country's position on the Pacific Ring of Fire. The region of greatest potential is the Canadian Cordillera, stretching from British Columbia to the Yukon, where estimates of generating output have ranged from 1,550 MW to 5,000 MW.
The geography of Japan is uniquely suited to geothermal power production. Japan has numerous hot springs that could supply geothermal power plants, but a massive investment in Japan's infrastructure would be necessary.
Utility-grade stations
The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California, United States. As of 2004, five countries (El Salvador, Kenya, the Philippines, Iceland, and Costa Rica) generate more than 15% of their electricity from geothermal sources.
Geothermal electricity is generated in the 24 countries listed in the table below. During 2005, contracts were placed for an additional 500 megawatts (MW) of electrical capacity in the United States, while there were also stations under construction in 11 other countries. Enhanced geothermal systems that are several kilometres in depth are operational in France and Germany and are being developed or evaluated in at least four other countries.
Environmental impact
Existing geothermal electric stations that fall within the 50th percentile of all total life cycle emissions studies reviewed by the IPCC produce on average 45 kg of CO2-equivalent emissions per megawatt-hour of generated electricity (kg CO2eq/MW·h). For comparison, a coal-fired power plant emits 1,001 kg of CO2 equivalent per megawatt-hour when not coupled with carbon capture and storage (CCS). As many geothermal projects are situated in volcanically active areas that naturally emit greenhouse gases, it is hypothesized that geothermal plants may actually decrease the rate of de-gassing by reducing the pressure on underground reservoirs.
Stations that experience high levels of acids and volatile chemicals are usually equipped with emission-control systems to reduce the exhaust. Geothermal stations can also inject these gases back into the earth as a form of carbon capture and storage, as is done in New Zealand and in the CarbFix project in Iceland.
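The per-megawatt-hour figures can be scaled up to see what the difference means for a single plant over a year. The sketch below assumes a hypothetical 100 MW plant and a 90% capacity factor purely for illustration; only the 45 and 1,001 kg CO2eq/MW·h intensities come from the text.
```python
GEOTHERMAL_KG_PER_MWH = 45      # median life-cycle intensity from the text
COAL_KG_PER_MWH = 1001          # coal without CCS, from the text

plant_mw = 100                  # assumed plant size (hypothetical)
capacity_factor = 0.90          # assumed; the text notes up to 96% has been demonstrated

annual_mwh = plant_mw * capacity_factor * 8760  # hours in a year
for name, intensity in (("geothermal", GEOTHERMAL_KG_PER_MWH), ("coal", COAL_KG_PER_MWH)):
    tonnes = annual_mwh * intensity / 1000
    print(f"{name}: ~{tonnes:,.0f} t CO2eq per year")
# Roughly 35 thousand tonnes versus about 790 thousand tonnes for the same annual output.
```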
Other stations, like the Kızıldere geothermal power plant, can use geothermal fluids to process carbon dioxide gas into dry ice at two nearby plants, resulting in very little environmental impact.
In addition to dissolved gases, hot water from geothermal sources may hold in solution trace amounts of toxic chemicals, such as mercury, arsenic, boron, antimony, and salts. These chemicals come out of solution as the water cools, and can cause environmental damage if released. The modern practice of injecting geothermal fluids back into the Earth to stimulate production has the side benefit of reducing this environmental risk.
Station construction can adversely affect land stability. Subsidence has occurred in the Wairakei field in New Zealand. Enhanced geothermal systems can trigger earthquakes due to water injection. The project in Basel, Switzerland was suspended because more than 10,000 seismic events measuring up to 3.4 on the Richter Scale occurred over the first 6 days of water injection. The risk of geothermal drilling leading to uplift has been experienced in Staufen im Breisgau.
Geothermal has minimal land and freshwater requirements. Geothermal stations use 404 square meters per GW·h versus 3,632 and 1,335 square meters for coal facilities and wind farms respectively. They use 20 litres of freshwater per MW·h versus over 1,000 litres per MW·h for nuclear, coal, or oil.
Geothermal power stations can also disrupt the natural cycles of geysers. For example, the Beowawe, Nevada geysers, which were uncapped geothermal wells, stopped erupting due to the development of the dual-flash station.
Local climate cooling is possible as a result of the operation of geothermal circulation systems. However, according to an estimate by the Leningrad Mining Institute in the 1980s, any such cooling would be negligible compared to natural climate fluctuations.
While volcanic activity produces geothermal energy, it is also risky. As of 2022, the Puna Geothermal Venture had still not returned to full capacity after the 2018 lower Puna eruption.
Economics
Geothermal power requires no fuel; it is therefore immune to fuel cost fluctuations. However, capital costs tend to be high. Drilling accounts for over half the costs, and exploration of deep resources entails significant risks. A typical well doublet in Nevada can support 4.5 megawatts (MW) of electricity generation and costs about $10 million to drill, with a 20% failure rate.
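The well-doublet figures above give a quick feel for drilling capital cost per megawatt once the stated failure rate is priced in. The sketch below is a simplified expected-cost calculation using only the numbers in this paragraph; real project economics would also include plant construction, exploration and financing costs.
```python
well_pair_cost = 10e6       # dollars to drill a typical Nevada doublet (from the text)
capacity_mw = 4.5           # electricity the doublet can support (from the text)
failure_rate = 0.20         # fraction of drilling attempts that fail (from the text)

# Expected drilling spend per successful doublet, assuming failed attempts are a total loss.
expected_cost = well_pair_cost / (1 - failure_rate)
print(f"Drilling cost per MW: ${expected_cost / capacity_mw / 1e6:.1f} million")
# Roughly $2.8 million per MW for drilling alone, before plant construction costs.
```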
In total, electrical station construction and well drilling costs about 2–5 million € per MW of electrical capacity, while the levelised energy cost is 0.04–0.10 € per kW·h. Enhanced geothermal systems tend to be on the high side of these ranges, with capital costs above $4 million per MW and levelized costs above $0.054 per kW·h in 2007.
Research suggests in-reservoir storage could increase the economic viability of enhanced geothermal systems in energy systems with a large share of variable renewable energy sources.
Geothermal power is highly scalable: a small power station can supply a rural village, though initial capital costs can be high.
The most developed geothermal field is the Geysers in California. In 2008, this field supported 15 stations, all owned by Calpine, with a total generating capacity of 725 MW.
See also
Enhanced geothermal system
Geothermal heating
Hot dry rock geothermal energy
Iceland Deep Drilling Project
List of renewable energy topics by country
Thermal battery
References
External links
Articles on Geothermal Energy Archived 26 October 2020 at the Wayback Machine
The Geothermal Collection by the University of Hawaii at Manoa
GRC Geothermal Library |
wipro
Wipro (stylized in lower case as wipro) is an Indian multinational corporation that provides information technology, consulting and business process services. It is one of the leading Big Tech companies. Wipro's capabilities range across cloud computing, computer security, digital transformation, artificial intelligence, robotics, data analytics, and other technology consulting services to customers in 167 countries.
History of Wipro
Early years
The company was incorporated on 29 December 1945 in Amalner, India, by Mohamed Premji. In 1966, after Mohamed Premji's death, his son Azim Premji took over Wipro as its chairperson at the age of 21.
Shift to IT Industry
During the 1970s and 1980s, the company shifted its focus to new opportunities in the IT and computing industry, which was at a nascent stage in India at the time. On 7 June 1977, the name of the company changed from Western India Vegetable Products Limited to Wipro Products Limited. In 1982, the name was changed again, from Wipro Products Limited to Wipro Limited. In 1999, Wipro was listed on the New York Stock Exchange. In 2004, Wipro became the second Indian IT company to earn US$1 billion in annual revenue.
In 2012, Wipro demerged its non-IT businesses into a separate company called Wipro Enterprises. Prior to this demerger, these businesses, mainly in consumer care, lighting, furniture, hydraulics, water treatment, and medical diagnostics, contributed about 10% of Wipro's total revenues.
In March 2023, Wipro opened its American international headquarters at Tower Center in East Brunswick, Middlesex County, New Jersey.
Notable acquisitions
In 2006, Wipro acquired California-based technology company cMango in an all-cash deal.
In 2012, Wipro acquired Australian analytics company Promax Applications Group for A$35 million in an all-cash deal.
In 2015, Wipro acquired Denmark-based design consultancy Designit for €85 million.
In 2016, Wipro acquired cloud services consultancy Appirio for $500 million.
In April 2019, Wipro acquired Filipino personal care company Splash Corporation.
In February 2020, Wipro acquired Rational Interaction, a Seattle-based digital customer experience consultancy.
In March 2021, Wipro acquired Capco, a 22-year-old global technology and management consultancy specializing in driving digital transformation in the financial services industry. Wipro has also signed an agreement to acquire Ampion for a cash consideration of $117 million, according to an exchange filing.
In April 2022, Wipro signed a definitive agreement to acquire the Stamford, Connecticut-headquartered Systems Applications and Products (SAP) consulting company Rizing Intermediate Holdings.
Listing and shareholding
Wipro's equity shares are listed on the Bombay Stock Exchange, where it is a constituent of the BSE SENSEX index, and on the National Stock Exchange of India, where it is a constituent of the NIFTY 50. The company's American Depositary Shares have been listed on the New York Stock Exchange (NYSE) since October 2000.
The table below provides the shareholding pattern as of 31 March 2022.
References
External links
Business data for Wipro: |
liquefied natural gas
Liquefied natural gas (LNG) is natural gas (predominantly methane, CH4, with some mixture of ethane, C2H6) that has been cooled down to liquid form for ease and safety of non-pressurized storage or transport. It takes up about 1/600th the volume of natural gas in the gaseous state at standard conditions for temperature and pressure.
LNG is odorless, colorless, non-toxic and non-corrosive. Hazards include flammability after vaporization into a gaseous state, freezing and asphyxia. The liquefaction process involves removal of certain components, such as dust, acid gases, helium, water, and heavy hydrocarbons, which could cause difficulty downstream. The natural gas is then condensed into a liquid at close to atmospheric pressure by cooling it to approximately −162 °C (−260 °F); maximum transport pressure is set at around 25 kPa (4 psi) (gauge pressure), which is about 1.25 times atmospheric pressure at sea level.
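The roughly 600-fold volume reduction quoted above can be reproduced by comparing the density of methane vapour at standard conditions with that of the liquid. The snippet below is an approximate check: it treats the gas as ideal, uses the molar mass of pure methane, and assumes a liquid density of about 450 kg/m³ (within the 0.41–0.5 kg/litre range given later in this article).
```python
# Rough check of the ~1/600 volume ratio for LNG, treating natural gas as pure methane.
R = 8.314            # J/(mol*K), universal gas constant
M_CH4 = 0.01604      # kg/mol, molar mass of methane
P = 101_325          # Pa, standard atmospheric pressure (assumed standard conditions)
T = 288.15           # K (15 degC), a common reference temperature for gas metering

gas_density = P * M_CH4 / (R * T)     # ideal-gas density, about 0.68 kg/m^3
liquid_density = 450.0                # kg/m^3, assumed mid-range LNG density

print(f"Gas density:  {gas_density:.2f} kg/m^3")
print(f"Volume ratio: ~1/{liquid_density / gas_density:.0f}")   # on the order of 1/600
```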
The gas extracted from underground hydrocarbon deposits contains a varying mix of hydrocarbon components, which usually includes mostly methane (CH4), along with ethane (C2H6), propane (C3H8) and butane (C4H10). Other gases also occur in natural gas, notably CO2. These gases have wide-ranging boiling points and also different heating values, allowing different routes to commercialization and also different uses. The "acidic" elements such as hydrogen sulphide (H2S) and carbon dioxide (CO2), together with oil, mud, water, and mercury, are removed from the gas to deliver a clean sweetened stream of gas. Failure to remove much or all of such acidic molecules, mercury, and other impurities could result in damage to the equipment. Corrosion of steel pipes and amalgamization of mercury to aluminum within cryogenic heat exchangers could cause expensive damage.
The gas stream is typically separated into the liquefied petroleum fractions (butane and propane), which can be stored in liquid form at relatively low pressure, and the lighter ethane and methane fractions. These lighter fractions of methane and ethane are then liquefied to make up the bulk of LNG that is shipped.
Natural gas was considered during the 20th century to be economically unimportant wherever gas-producing oil or gas fields were distant from gas pipelines or located in offshore locations where pipelines were not viable. In the past this usually meant that natural gas produced was typically flared, especially since unlike oil, no viable method for natural gas storage or transport existed other than compressed gas pipelines to end users of the same gas. This meant that natural gas markets were historically entirely local, and any production had to be consumed within the local or regional network.
Developments in production processes, cryogenic storage, and transportation effectively created the tools required to commercialize natural gas into a global market which now competes with other fuels. Furthermore, the development of LNG storage introduced a reliability in networks which was previously thought impossible. Since storage of other fuels is relatively easily secured using simple tanks, a supply of those fuels for several months could always be kept in storage. With the advent of large-scale cryogenic storage, it became possible to create long-term gas storage reserves as well. These reserves of liquefied gas can be deployed at short notice through regasification processes, and today they are the main means for networks to handle local peak-shaving requirements.
Specific energy content and energy density
The heating value depends on the source of the gas and the process used to liquefy it; it can vary by as much as ±10 to 15 percent. A typical value of the higher heating value of LNG is approximately 50 MJ/kg or 21,500 BTU/lb. A typical value of the lower heating value of LNG is 45 MJ/kg or 19,350 BTU/lb.
For the purpose of comparison of different fuels, the heating value may be expressed in terms of energy per volume, which is known as the energy density expressed in MJ/litre. The density of LNG is roughly 0.41 kg/litre to 0.5 kg/litre, depending on temperature, pressure, and composition, compared to water at 1.0 kg/litre. Using the median value of 0.45 kg/litre, the typical energy density values are 22.5 MJ/litre (based on higher heating value) or 20.3 MJ/litre (based on lower heating value).
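The energy-density figures above are simply the product of heating value and density. A minimal sketch of that arithmetic, using only the values quoted in this section:

# Energy density of LNG from heating value (MJ/kg) and density (kg/litre).
hhv = 50.0      # MJ/kg, typical higher heating value quoted above
lhv = 45.0      # MJ/kg, typical lower heating value quoted above
density = 0.45  # kg/litre, mid-range LNG density quoted above

print(f"Energy density (HHV basis): {hhv * density:.1f} MJ/litre")  # about 22.5
print(f"Energy density (LHV basis): {lhv * density:.1f} MJ/litre")  # about 20.3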
The volumetric energy density of LNG is approximately 2.4 times that of compressed natural gas (CNG), which makes it economical to transport natural gas by ship in the form of LNG. The energy density of LNG is comparable to propane and ethanol but is only 60 percent that of diesel and 70 percent that of gasoline.
History
Experiments on the properties of gases started early in the seventeenth century. By the middle of the seventeenth century Robert Boyle had derived the inverse relationship between the pressure and the volume of gases. About the same time, Guillaume Amontons started looking into temperature effects on gas. Various gas experiments continued for the next 200 years. During that time there were efforts to liquefy gases. Many new facts about the nature of gases were discovered. For example, early in the nineteenth century Cagniard de la Tour showed there was a temperature above which a gas could not be liquefied. There was a major push in the mid to late nineteenth century to liquefy all gases. A number of scientists including Michael Faraday, James Joule, and William Thomson (Lord Kelvin) did experiments in this area. In 1886 Karol Olszewski liquefied methane, the primary constituent of natural gas. By 1900 all gases had been liquefied except helium, which was liquefied in 1908.
The first large-scale liquefaction of natural gas in the U.S. was in 1918, when the U.S. government liquefied natural gas as a way to extract helium, which is a small component of some natural gas. This helium was intended for use in British dirigibles for World War I. The liquid natural gas (LNG) was not stored, but regasified and immediately put into the gas mains. The key patents having to do with natural gas liquefaction date from 1915 and the mid-1930s. In 1915 Godfrey Cabot patented a method for storing liquid gases at very low temperatures. It consisted of a Thermos bottle-type design with a cold inner tank within an outer tank, the two separated by insulation. In 1937 Lee Twomey received patents for a process for large-scale liquefaction of natural gas. The intention was to store natural gas as a liquid so it could be used for shaving peak energy loads during cold snaps. Because of the large volumes involved, it is not practical to store natural gas as a gas near atmospheric pressure. However, when liquefied, it can be stored in a volume 1/600th as large. This is a practical way to store it, but the gas must be kept at −260 °F (−162 °C).
There are two processes for liquefying natural gas in large quantities. The first is the cascade process, in which the natural gas is cooled by another gas which in turn has been cooled by still another gas, hence the name "cascade". There are usually two cascade cycles before the liquid natural gas cycle. The other method is the Linde process, with a variation called the Claude process sometimes used. In this process, the gas is cooled regeneratively by continually passing and expanding it through an orifice until it is cooled to temperatures at which it liquefies. The underlying cooling effect was studied by James Joule and William Thomson and is known as the Joule–Thomson effect. Lee Twomey used the cascade process for his patents.
Commercial operations in the United States
The East Ohio Gas Company built a full-scale commercial LNG plant in Cleveland, Ohio, in 1940, just after a successful pilot plant built by its sister company, Hope Natural Gas Company of West Virginia. This was the first such plant in the world. Originally it had three spheres, approximately 63 feet in diameter, containing LNG at −260 °F. Each sphere held the equivalent of about 50 million cubic feet of natural gas. A fourth tank, a cylinder, was added in 1942; it had an equivalent capacity of 100 million cubic feet of gas. The plant operated successfully for three years. The stored gas was regasified and put into the mains when cold snaps hit and extra capacity was needed, which prevented gas being cut off to some customers during a cold snap.
The Cleveland plant failed on October 20, 1944, when the cylindrical tank ruptured, spilling thousands of gallons of LNG over the plant and nearby neighborhood. The gas evaporated and caught fire, which caused 130 fatalities. The fire delayed further implementation of LNG facilities for several years. However, over the next 15 years new research on low-temperature alloys, and better insulation materials, set the stage for a revival of the industry. It restarted in 1959 when a U.S. World War II Liberty ship, the Methane Pioneer, converted to carry LNG, made a delivery of LNG from the U.S. Gulf Coast to energy-starved Great Britain. In June 1964, the world's first purpose-built LNG carrier, the Methane Princess, entered service. Soon after that a large natural gas field was discovered in Algeria. International trade in LNG quickly followed as LNG was shipped to France and Great Britain from the Algerian fields. One more important attribute of LNG had now been exploited. Once natural gas was liquefied it could not only be stored more easily, but it could be transported. Thus energy could now be shipped over the oceans via LNG the same way it was shipped in the form of oil.
The US LNG industry restarted in 1965 when a series of new plants were built in the U.S. The building continued through the 1970s. These plants were not only used for peak-shaving, as in Cleveland, but also for base-load supplies for places that had never had natural gas before. A number of import facilities were built on the East Coast in anticipation of the need to import energy via LNG. However, a boom in U.S. natural gas production (2010–2014), enabled by hydraulic fracturing ("fracking"), led to many of these import facilities being reconsidered as export facilities. The first U.S. LNG export was completed in early 2016.
LNG life cycle
The process begins with the pre-treatment of a feedstock of natural gas entering the system to remove impurities such as H2S, CO2, H2O, mercury and higher-chained hydrocarbons. The feedstock gas then enters the liquefaction unit, where it is cooled to between −145 °C and −163 °C. Although the type or number of cooling cycles and/or refrigerants used may vary based on the technology, the basic process involves circulating the gas through aluminum tube coils and exposing it to a compressed refrigerant. As the refrigerant is vaporized, the heat transfer causes the gas in the coils to cool. The LNG is then stored in a specialized double-walled insulated tank at atmospheric pressure, ready to be transported to its final destination. Most domestic LNG is transported by land via trucks/trailers designed for cryogenic temperatures. Intercontinental LNG transport travels by special tanker ships. LNG transport tanks comprise an inner steel or aluminum compartment and an outer carbon-steel compartment, with a vacuum between them to reduce heat transfer. Once on site, the LNG must be stored in vacuum-insulated or flat-bottom storage tanks. When ready for distribution, the LNG enters a regasification facility where it is pumped into a vaporizer and heated back into gaseous form. The gas then enters the gas pipeline distribution system and is delivered to the end-user.
Production
The natural gas fed into the LNG plant will be treated to remove water, hydrogen sulfide, carbon dioxide, benzene and other components that will freeze under the low temperatures needed for storage or be destructive to the liquefaction facility. LNG typically contains more than 90% methane. It also contains small amounts of ethane, propane, butane, some heavier alkanes, and nitrogen. The purification process can be designed to give almost 100% methane. One of the risks of LNG is a rapid phase transition explosion (RPT), which occurs when cold LNG comes into contact with water. The most important infrastructure needed for LNG production and transportation is an LNG plant consisting of one or more LNG trains, each of which is an independent unit for gas liquefaction and purification. A typical train consists of a compression area, propane condenser area, and methane and ethane areas.
The largest LNG train in operation is in Qatar, with a total production capacity of 7.8 million tonnes per annum (MTPA). LNG is loaded onto ships and delivered to a regasification terminal, where the LNG is allowed to expand and reconvert into gas. Regasification terminals are usually connected to a storage and pipeline distribution network to distribute natural gas to local distribution companies (LDCs) or independent power plants (IPPs).
LNG plant production
Information on LNG plant production is derived in part from publications by the U.S. Energy Information Administration.
See also List of LNG terminals
World total production
The LNG industry developed slowly during the second half of the 20th century because most LNG plants are located in remote areas not served by pipelines, and because of the high costs of treating and transporting LNG. Constructing an LNG plant costs at least $1.5 billion per 1 MTPA of capacity, a receiving terminal costs $1 billion per 1 bcf/day of throughput capacity, and LNG vessels cost $200–300 million each.
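As a rough order-of-magnitude illustration of those capital costs, the sketch below combines the per-unit figures quoted above into a single value-chain estimate; the plant size, terminal throughput and ship count are hypothetical inputs, not data from this article:

# Rough LNG value-chain capital cost estimate, in billions of US dollars,
# using the per-unit costs quoted above. All project sizes are assumed.
def chain_capex(plant_mtpa, terminal_bcfd, n_ships, ship_cost_bn=0.25):
    plant = 1.5 * plant_mtpa        # $1.5 bn per MTPA of liquefaction capacity
    terminal = 1.0 * terminal_bcfd  # $1 bn per bcf/day of receiving capacity
    ships = ship_cost_bn * n_ships  # $200-300 million per vessel
    return plant + terminal + ships

print(f"Hypothetical 5 MTPA chain: about ${chain_capex(5, 0.7, 4):.1f} billion")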
In the early 2000s, prices for constructing LNG plants, receiving terminals and vessels fell as new technologies emerged and more players invested in liquefaction and regasification. This tended to make LNG more competitive as a means of energy distribution, but increasing material costs and demand for construction contractors have put upward pressure on prices in the last few years.
The standard price for a 125,000 cubic meter LNG vessel built in European and Japanese shipyards used to be US$250 million. When Korean and Chinese shipyards entered the race, increased competition reduced profit margins and improved efficiency—reducing costs by 60 percent. Costs in US dollars also declined due to the devaluation of the currencies of the world's largest shipbuilders: the Japanese yen and Korean won.
Since 2004, the large number of orders has increased demand for shipyard slots, raising their price and increasing ship costs. The per-ton construction cost of an LNG liquefaction plant fell steadily from the 1970s through the 1990s, by approximately 35 percent. More recently, however, the cost of building liquefaction and regasification terminals doubled due to the increased cost of materials and a shortage of skilled labor, professional engineers, designers, managers and other white-collar professionals.
Due to natural gas shortage concerns in the northeastern U.S. and surplus natural gas in the rest of the country, many new LNG import and export terminals are being contemplated in the United States. Concerns about the safety of such facilities create controversy in some regions where they are proposed. One such location is in the Long Island Sound between Connecticut and Long Island. Broadwater Energy, an effort of TransCanada Corp. and Shell, wishes to build an LNG import terminal in the sound on the New York side. Local politicians including the Suffolk County Executive raised questions about the terminal. In 2005, New York Senators Chuck Schumer and Hillary Clinton also announced their opposition to the project. Several import terminal proposals along the coast of Maine were also met with high levels of resistance and questions. On Sep. 13, 2013 the U.S. Department of Energy approved Dominion Cove Point's application to export up to 770 million cubic feet per day of LNG to countries that do not have a free trade agreement with the U.S. In May 2014, the FERC concluded its environmental assessment of the Cove Point LNG project, which found that the proposed natural gas export project could be built and operated safely. Another LNG terminal is currently proposed for Elba Island, Ga. Plans for three LNG export terminals in the U.S. Gulf Coast region have also received conditional Federal approval. In Canada, an LNG export terminal is under construction near Guysborough, Nova Scotia.
Commercial aspects
Global Trade
In the commercial development of an LNG value chain, LNG suppliers first confirm sales to the downstream buyers and then sign long-term contracts (typically 20–25 years) with strict terms and structures for gas pricing. Only when the customers are confirmed and the development of a greenfield project is deemed economically feasible can the sponsors of an LNG project invest in its development and operation. Thus, the LNG liquefaction business has been limited to players with strong financial and political resources. Major international oil companies (IOCs) such as ExxonMobil, Royal Dutch Shell, BP, Chevron and TotalEnergies, and national oil companies (NOCs) such as Pertamina and Petronas, are active players.
LNG is shipped around the world in specially constructed seagoing vessels. The trade of LNG is completed by signing an SPA (sale and purchase agreement) between a supplier and receiving terminal, and by signing a GSA (gas sale agreement) between a receiving terminal and end-users. Most of the contract terms used to be DES or ex ship, holding the seller responsible for the transport of the gas. With low shipbuilding costs, and the buyers preferring to ensure reliable and stable supply, however, contracts with FOB terms increased. Under such terms the buyer, who often owns a vessel or signs a long-term charter agreement with independent carriers, is responsible for the transport.
LNG purchasing agreements used to be long term, with relatively little flexibility in either price or volume. If the annual contract quantity is confirmed, the buyer is obliged to take and pay for the product, or to pay for it even if not taken, in what is referred to as a take-or-pay (TOP) obligation.
In the mid-1990s, LNG was a buyer's market. At the request of buyers, the SPAs began to adopt some flexibility on volume and price. The buyers had more upward and downward flexibility in TOP, and short-term SPAs of less than 16 years came into effect. At the same time, alternative destinations for cargo and arbitrage were also allowed. By the turn of the 21st century, the market was again in favor of sellers. However, sellers have become more sophisticated and are now proposing sharing of arbitrage opportunities and moving away from S-curve pricing. There has been much discussion regarding the creation of an "OGEC" as a natural gas equivalent of OPEC. Russia and Qatar, the countries with the largest and the third largest natural gas reserves in the world, have finally supported such a move.
Until 2003, LNG prices closely followed oil prices. Since then, LNG prices in Europe and Japan have been lower than oil prices, although the link between LNG and oil is still strong. In contrast, prices in the US and the UK have recently skyrocketed, then fallen as a result of changes in supply and storage. In the late 1990s and early 2000s, the market shifted in favor of buyers, but since 2003 and 2004 it has been a strong seller's market, with net-back as the best estimation for prices.
Research from Global Energy Monitor in 2019 warned that up to US$1.3 trillion in new LNG export and import infrastructure currently under development is at significant risk of becoming stranded, as global gas risks becoming oversupplied, particularly if the United States and Canada play a larger role. The current surge in unconventional oil and gas in the U.S. has resulted in lower gas prices in the U.S. This has led to discussions in Asia's oil-linked gas markets about importing gas priced off the Henry Hub index. A recent high-level conference in Vancouver, the Pacific Energy Summit 2013, convened policy makers and experts from Asia and the U.S. to discuss LNG trade relations between these regions.
Receiving terminals exist in about 40 countries, including Belgium, Chile, China, the Dominican Republic, France, Greece, India, Italy, Japan, Korea, Poland, Spain, Taiwan, the UK, the US, among others. Plans exist for Bahrain, Germany, Ghana, Morocco, Philippines, Vietnam and others to also construct new receiving (regasification) terminals.
LNG Project Screening
Base load (large-scale, >1 MTPA) LNG projects require natural gas reserves, buyers and financing. Using proven technology and a proven contractor is extremely important for both investors and buyers. The gas reserves required are roughly 1 tcf of gas per MTPA of LNG over 20 years; a rough screening calculation is sketched below.
LNG is most cost-efficiently produced in relatively large facilities due to economies of scale, at sites with marine access allowing regular large bulk shipments direct to market. This requires a secure gas supply of sufficient capacity. Ideally, facilities are located close to the gas source, to minimize the cost of intermediate transport infrastructure and gas shrinkage (fuel loss in transport). The high cost of building large LNG facilities makes the progressive development of gas sources to maximize facility utilization essential, and the life extension of existing, financially depreciated LNG facilities cost effective. Particularly when combined with lower sale prices due to large installed capacity and rising construction costs, this makes the economic screening/justification for developing new, and especially greenfield, LNG facilities challenging, even if these could be more environmentally friendly than existing facilities and satisfy all stakeholder concerns. Due to the high financial risk, it is usual to contractually secure gas supply/concessions and gas sales for extended periods before proceeding to an investment decision.
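The reserve rule of thumb quoted above (roughly 1 tcf of gas per MTPA of LNG over a 20-year project life) can be written as a one-line screening check; the train capacity used below is only an example:

# Screening check: gas reserves needed for a base-load LNG project,
# assuming ~1 tcf of reserves per MTPA of capacity over a 20-year life.
def reserves_needed_tcf(capacity_mtpa, project_life_years=20):
    return capacity_mtpa * (project_life_years / 20.0)

print(f"A 7.8 MTPA train over 20 years needs roughly "
      f"{reserves_needed_tcf(7.8):.1f} tcf of gas reserves")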
Uses
The primary use of LNG is to simplify transport of natural gas from the source to a destination. On the large scale, this is done when the source and the destination are across an ocean from each other. It can also be used when adequate pipeline capacity is not available. For large-scale transport uses, the LNG is typically regasified at the receiving end and pushed into the local natural gas pipeline infrastructure.
LNG can also be used to meet peak demand when the normal pipeline infrastructure can meet most demand needs but not the peak demand. These plants are typically called LNG peak-shaving plants, as their purpose is to shave off part of the peak demand from what is required of the supply pipeline.
LNG can be used to fuel internal combustion engines. LNG is in the early stages of becoming a mainstream fuel for transportation needs. It is being evaluated and tested for over-the-road trucking, off-road, marine, and train applications. There are known problems with the fuel tanks and delivery of gas to the engine, but despite these concerns the move to LNG as a transportation fuel has begun. LNG competes directly with compressed natural gas as a fuel for natural gas vehicles since the engine is identical. There may be applications where LNG trucks, buses, trains and boats could be cost-effective in order to regularly distribute LNG energy together with general freight and/or passengers to smaller, isolated communities without a local gas source or access to pipelines.
Use of LNG to fuel large over-the-road trucks
China has been a leader in the use of LNG vehicles, with over 100,000 LNG-powered vehicles on the road as of Sept 2014. In the United States the beginnings of a public LNG fueling capability are being put in place. An alternative fuelling centre tracking site shows 84 public truck LNG fuel centres as of Dec 2016. It is possible for large trucks to make cross-country trips such as Los Angeles to Boston and refuel at public refuelling stations every 500 miles. The 2013 National Trucker's Directory lists approximately 7,000 truckstops, thus approximately 1% of US truckstops have LNG available.
As of December 2014, LNG fuel and NGVs had not caught on quickly within Europe, and it was questionable whether LNG would ever become the fuel of choice among fleet operators; however, trends from 2018 onwards show a different prospect.
During 2015, the Netherlands introduced LNG-powered trucks in the transport sector. The Australian government is planning to develop an LNG highway to utilise locally produced LNG and replace the imported diesel fuel used by interstate haulage vehicles. In 2015, India also made a small beginning by transporting LNG in LNG-powered road tankers in Kerala state. In 2017, Petronet LNG began setting up 20 LNG stations on highways along the Indian west coast that connect Delhi with Thiruvananthapuram, covering a total distance of 4,500 km via Mumbai and Bengaluru. In 2020, India planned to install 24 LNG fuelling stations along the 6,000 km Golden Quadrilateral highways connecting the four metros, as LNG prices fell. Japan, the world's largest importer of LNG, is set to begin use of LNG as a road transport fuel.
High-power, high-torque engines
Engine displacement is an important factor in the power of an internal combustion engine. Thus a 2.0 L engine would typically be more powerful than a 1.8 L engine, but that assumes a similar air–fuel mixture is used.
However, if a smaller engine uses an air–fuel mixture with higher energy density (such as via a turbocharger), then it can produce more power than a larger one burning a less energy-dense air–fuel mixture. For high-power, high-torque engines, a fuel that creates a more energy-dense air–fuel mixture is preferred, because a smaller and simpler engine can produce the same power.
With conventional gasoline and diesel engines the energy density of the air–fuel mixture is limited because the liquid fuels do not mix well in the cylinder. Further, gasoline and diesel fuel have autoignition temperatures and pressures relevant to engine design. An important part of engine design is the interactions of cylinders, compression ratios, and fuel injectors such that pre-ignition is prevented but at the same time as much fuel as possible can be injected, become well mixed, and still have time to complete the combustion process during the power stroke.
Natural gas does not auto-ignite at pressures and temperatures relevant to conventional gasoline and diesel engine design, so it allows more flexibility in design. Methane, the main component of natural gas, has an autoignition temperature of 580 °C (1,076 °F), whereas gasoline and diesel autoignite at approximately 250 °C (482 °F) and 210 °C (410 °F) respectively.
With a compressed natural gas (CNG) engine, the mixing of the fuel and the air is more effective since gases typically mix well in a short period of time, but at typical CNG pressures the fuel itself is less energy-dense than gasoline or diesel, so the result is a less energy-dense air–fuel mixture. For an engine of a given cylinder displacement, a normally aspirated CNG-powered engine is typically less powerful than a gasoline or diesel engine of similar displacement. For that reason turbochargers are popular in European CNG cars. Despite that limitation, the 12-litre Cummins Westport ISX12G engine is an example of a CNG-capable engine designed to pull tractor–trailer loads up to 80,000 pounds (36,000 kg), showing CNG can be used in many on-road truck applications. The original ISX G engine incorporated a turbocharger to enhance the air–fuel energy density. LNG offers a unique advantage over CNG for more demanding high-power applications by eliminating the need for a turbocharger. Because LNG boils at approximately −160 °C (−256 °F), a simple heat exchanger can convert a small amount of LNG to its gaseous form at extremely high pressure with the use of little or no mechanical energy. A properly designed high-power engine can leverage this extremely high-pressure, energy-dense gaseous fuel source to create a higher-energy-density air–fuel mixture than can be efficiently created with a CNG-powered engine. The result, when compared to CNG engines, is more overall efficiency in high-power engine applications when high-pressure direct-injection technology is used. The Westport HPDI 2.0 fuel system is an example of a high-pressure direct-injection system that does not require a turbocharger if paired with an appropriate LNG heat exchanger. The Volvo Trucks 13-litre LNG engine is another example of an LNG engine leveraging advanced high-pressure technology.
Westport recommends CNG for engines of 7 litres or smaller and LNG with direct injection for engines between 20 and 150 litres. For engines between 7 and 20 litres either option is recommended. See slide 13 of their NGV Bruxelles – Industry Innovation Session presentation. High-power engines in the oil drilling, mining, locomotive, and marine fields have been or are being developed. Paul Blomerus has written a paper concluding that as much as 40 million tonnes per annum of LNG (approximately 26.1 billion gallons/year or 71 million gallons/day) could be required just to meet the global needs of such high-power engines by 2025 to 2030.
As of the end of first quarter of 2015, Prometheus Energy Group Inc claimed to have delivered over 100 million gallons of LNG to the industrial market within the previous four years and is continuing to add new customers.
Use of LNG in maritime applications
LNG bunkering has been established in some ports via truck-to-ship fueling. This type of LNG fueling is straightforward to implement, assuming a supply of LNG is available.
Feeder and short-sea shipping company Unifeeder has been operating the world's first LNG powered container vessel, the Wes Amelie, since late 2017, transiting between the port of Rotterdam and the Baltics on a weekly schedule.
Container shipping company Maersk Group has decided to introduce LNG-powered container ships. The DEME Group has contracted Wärtsilä to power its new-generation 'Antigoon' class dredger with dual fuel (DF) engines. Crowley Maritime of Jacksonville, Florida, launched two LNG-powered ConRo ships, the Coquí and Taino, in 2018 and 2019, respectively. In 2014, Shell ordered a dedicated LNG bunker vessel, which was planned to go into service in Rotterdam in the summer of 2017. The International Convention for the Prevention of Pollution from Ships (MARPOL), adopted by the IMO, has mandated that marine vessels shall not consume fuel (bunker fuel, diesel, etc.) with a sulphur content greater than 0.5% from the year 2020 within international waters and the coastal areas of countries adopting the same regulation. Replacement of high-sulphur bunker fuel with sulphur-free LNG is required on a major scale in the marine transport sector, as low-sulphur liquid fuels are costlier than LNG. Japan is planning to use LNG as bunker fuel by 2020. BHP, one of the largest mining companies in the world, is aiming to commission minerals transport ships powered by LNG by late 2021. In January 2021, 175 sea-going LNG-powered ships were in service, with another 200 ships on order.
Use of LNG on rail
Florida East Coast Railway has 24 GE ES44C4 locomotives adapted to run on LNG fuel.
Trade
The global trade in LNG has grown rapidly, from negligible volumes in 1970 to what is expected to be a globally substantial amount by 2020. As a reference, the 2014 global production of crude oil was 14.6 million cubic metres (92 million barrels) per day, or 54,600 terawatt-hours (186.4 quadrillion British thermal units) per year.
In 1970, global LNG trade was 3 billion cubic metres (bcm) (0.11 quads). In 2011, it was 331 bcm (11.92 quads). The U.S. started exporting LNG in February 2016. The Black & Veatch October 2014 forecast is that by 2020, the U.S. alone will export between 10 and 14 billion cu ft/d (280 and 400 million m3/d), or by heating value 3.75 to 5.25 quads (1,100 to 1,540 TWh). E&Y projects that global LNG demand could hit 400 mtpa (19.7 quads) by 2020. If that occurs, the LNG market will be roughly 10% the size of the global crude oil market, and that does not count the vast majority of natural gas, which is delivered via pipeline directly from the well to the consumer.
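The bcm-to-quad figures above imply a conversion factor of roughly 0.036 quads per bcm of gas (about 38 PJ per bcm). A small sketch of that conversion, with the factor treated as an assumption derived from the numbers quoted in this section:

# Convert LNG trade volumes from billion cubic metres (bcm) of gas to
# quadrillion Btu (quads), using an assumed factor of ~0.036 quads per bcm.
QUADS_PER_BCM = 0.036

for year, bcm in [(1970, 3), (2011, 331)]:
    print(f"{year}: {bcm} bcm is roughly {bcm * QUADS_PER_BCM:.2f} quads")
# Output matches the 0.11 quads (1970) and ~11.9 quads (2011) quoted above.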
In 2004, LNG accounted for 7 percent of the world's natural gas demand. The global trade in LNG, which has increased at a rate of 7.4 percent per year over the decade from 1995 to 2005, is expected to continue to grow substantially. LNG trade is expected to increase at 6.7 percent per year from 2005 to 2020. Until the mid-1990s, LNG demand was heavily concentrated in Northeast Asia: Japan, South Korea and Taiwan. At the same time, Pacific Basin supplies dominated world LNG trade. The worldwide interest in using natural gas-fired combined cycle generating units for electric power generation, coupled with the inability of North American and North Sea natural gas supplies to meet the growing demand, substantially broadened the regional markets for LNG. It also brought new Atlantic Basin and Middle East suppliers into the trade.
By the end of 2017, there were 19 LNG exporting countries and 40 LNG importing countries. The three biggest LNG exporters in 2017 were Qatar (77.5 MT), Australia (55.6 MT) and Malaysia (26.9 MT). The three biggest LNG importers in 2017 were Japan (83.5 MT), China (39 MT) and South Korea (37.8 MT). LNG trade volumes increased from 142 MT in 2005 to 159 MT in 2006, 165 MT in 2007, 171 MT in 2008, 220 MT in 2010, 237 MT in 2013, 264 MT in 2016 and 290 MT in 2017. Global LNG production was 246 MT in 2014, most of which was used in trade between countries. Significant increases in the volume of LNG trade followed over the next several years; for example, about 59 MTPA of new LNG supply from six new plants came to market in 2009 alone, including:
Northwest Shelf Train 5: 4.4 MTPA
Sakhalin-II: 9.6 MTPA
Yemen LNG: 6.7 MTPA
Tangguh: 7.6 MTPA
Qatargas: 15.6 MTPA
Rasgas Qatar: 15.6 MTPA
In 2006, Qatar became the world's biggest exporter of LNG. As of 2012, Qatar is the source of 25 percent of the world's LNG exports. As of 2017, Qatar was estimated to supply 26.7% of the world's LNG. Investments in U.S. export facilities were increasing by 2013; these investments were spurred by increasing shale gas production in the United States and a large price differential between natural gas prices in the U.S. and those in Europe and Asia. Cheniere Energy became the first company in the United States to receive permission and export LNG in 2016. After a US–EU agreement in 2018, exports from the US to the EU increased. In November 2021, U.S. producer Venture Global LNG signed a twenty-year contract with China's state-owned Sinopec to supply liquefied natural gas. China's imports of U.S. natural gas will more than double. U.S. exports of liquefied natural gas to China and other Asian countries surged in 2021, with Asian buyers willing to pay higher prices than European importers. This reversed in 2022, when most US LNG went to Europe. US LNG export contracts are mainly made for 15–20 years. Exports from the U.S. are likely to reach 13.3 Bcf/d in 2024 due to projects coming online on the Gulf of Mexico.
Imports
In 1964, the UK and France made the first LNG trade, buying gas from Algeria and ushering in a new era of energy.
In 2014, 19 countries exported LNG. Compared with the crude oil market, in 2013 the natural gas market was about 72 percent of the size of the crude oil market (measured on a heat equivalent basis), of which LNG forms a small but rapidly growing part. Much of this growth is driven by the need for clean fuel and some substitution effect due to the high price of oil (primarily in the heating and electricity generation sectors).
Japan, South Korea, Spain, France, Italy and Taiwan import large volumes of LNG due to their shortage of energy. In 2005, Japan imported 58.6 million tons of LNG, representing some 30 percent of the LNG trade around the world that year. Also in 2005, South Korea imported 22.1 million tons, and in 2004 Taiwan imported 6.8 million tons. These three major buyers purchase approximately two-thirds of the world's LNG demand. In addition, Spain imported some 8.2 MTPA in 2006, making it the third largest importer. France also imported quantities similar to Spain's. Following the Fukushima Daiichi nuclear disaster in March 2011, Japan became a major importer, accounting for one third of the total.
European LNG imports fell by 30 percent in 2012, and fell further by 24 percent in 2013, as South American and Asian importers paid more. European LNG imports increased to new heights in 2019, remained high in 2020 and 2021, and increased even more in 2022. The main contributors were Qatar, the USA, and Russia. In 2017, global LNG imports reached 289.8 million tonnes of LNG. In 2017, 72.9% of global LNG demand was located in Asia.
Cargo diversion
Based on LNG SPAs, LNG is destined for pre-agreed destinations, and diversion of that LNG is not allowed. However, if the seller and buyer reach a mutual agreement, diversion of the cargo is permitted, subject to sharing the additional profit created by such a diversion through payment of a fee. In the European Union and some other jurisdictions, it is not permitted to apply the profit-sharing clause in LNG SPAs.
Cost of LNG plants
For an extended period of time, design improvements in liquefaction plants and tankers had the effect of reducing costs.
In the 1980s, building an LNG liquefaction plant cost $350/tpa (tonne per annum). In the 2000s, it was $200/tpa. In 2012, the cost can go as high as $1,000/tpa, partly due to the increase in the price of steel. As recently as 2003, it was common to assume that this was a "learning curve" effect and would continue into the future. But this perception of steadily falling costs for LNG has been dashed in the last several years. The construction cost of greenfield LNG projects started to skyrocket from 2004 onward, increasing from about $400 per ton per year of capacity to $1,000 per ton per year of capacity in 2008.
The main reasons for the skyrocketing costs in the LNG industry are as follows:
Low availability of EPC contractors as a result of the extraordinarily high level of ongoing petroleum projects worldwide.
High raw material prices as a result of a surge in demand for raw materials.
Lack of a skilled and experienced workforce in the LNG industry.
Devaluation of the US dollar.
Very complex nature of projects built in remote locations where construction costs are regarded as some of the highest in the world.
Excluding high-cost projects, the increase of 120% over the period 2002–2012 is more in line with cost escalation in the upstream oil and gas industry as reported by the UCCI index. The 2007–2008 global financial crisis (GFC) caused a general decline in raw material and equipment prices, which somewhat lessened the construction cost of LNG plants. However, by 2012 this was more than offset by increasing demand for materials and labor for the LNG market.
Small-scale liquefaction plants
Small-scale liquefaction plants are suitable for peakshaving on natural gas pipelines, transportation fuel, or for deliveries of natural gas to remote areas not connected to pipelines. They typically have a compact size, are fed from a natural gas pipeline, and are located close to the location where the LNG will be used. This proximity decreases transportation and LNG product costs for consumers. It also avoids the additional greenhouse gas emissions generated during long transportation.
The small-scale LNG plant also allows localized peakshaving to occur—balancing the availability of natural gas during high and low periods of demand. It also makes it possible for communities without access to natural gas pipelines to install local distribution systems and have them supplied with stored LNG.
LNG pricing
There are three major pricing systems in the current LNG contracts:
Oil indexed contract, used primarily in Japan, Korea, Taiwan and China;
Oil, oil products and other energy carriers indexed contracts, used primarily in Continental Europe; and
Market indexed contracts, used in the US and the UK.
The formula for an indexed price is as follows:
CP = BP + β X
BP: constant part or base price
β: gradient
X: indexation
The formula has been widely used in Asian LNG SPAs, where the base price represents various non-oil factors, usually a constant determined by negotiation at a level which can prevent LNG prices from falling below a certain floor. It is thus fixed regardless of oil price fluctuations.
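A minimal numerical sketch of the indexed price formula CP = BP + β X follows; the base price, gradient and crude index values are purely illustrative assumptions, not terms of any actual SPA:

# Indexed LNG contract price: CP = BP + beta * X
# bp (base price), beta (gradient) and the index values are assumed numbers.
def contract_price(bp, beta, index):
    return bp + beta * index

bp, beta = 1.0, 0.1485            # assumed $/MMBtu floor and crude gradient
for jcc in (40, 60, 80):          # assumed Japan Crude Cocktail prices, $/bbl
    print(f"JCC {jcc} $/bbl -> CP {contract_price(bp, beta, jcc):.2f} $/MMBtu")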
Henry Hub Plus
Some LNG buyers have already signed contracts for future US-based cargos at prices linked to Henry Hub prices. Cheniere Energy's LNG export contract pricing consists of a fixed fee (liquefaction tolling fee) plus 115% of Henry Hub per million British thermal units of LNG. Tolling fees in the Cheniere contracts vary: US$2.25 per million British thermal units ($7.7/MWh) with BG Group signed in 2011; $2.49 per million British thermal units ($8.5/MWh) with Spain's GNF signed in 2012; and $3.00 per million British thermal units ($10.2/MWh) with South Korea's Kogas and Centrica signed in 2013.
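The Henry Hub-linked structure described above amounts to a one-line formula. In the sketch below, the tolling fees are those quoted in this section, while the Henry Hub price is an arbitrary example value:

# Henry Hub Plus pricing: LNG price = 1.15 * Henry Hub + liquefaction tolling fee
def hh_plus(henry_hub, tolling_fee):
    return 1.15 * henry_hub + tolling_fee

hh = 3.00  # assumed Henry Hub price, $/MMBtu
for buyer, fee in [("BG Group", 2.25), ("GNF", 2.49), ("Kogas/Centrica", 3.00)]:
    print(f"{buyer}: {hh_plus(hh, fee):.2f} $/MMBtu at Henry Hub {hh:.2f} $/MMBtu")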
Oil parity
Oil parity is the LNG price that would be equal to that of crude oil on a barrel of oil equivalent (BOE) basis. If the LNG price exceeds the price of crude oil in BOE terms, then the situation is called broken oil parity. A coefficient of 0.1724 results in full oil parity. In most cases the price of LNG is less than the price of crude oil in BOE terms. In 2009, in several spot cargo deals, especially in East Asia, prices approached or even exceeded full oil parity. In January 2016, the spot LNG price of $5.461 per million British thermal units ($18.63/MWh) broke oil parity when the Brent crude price (≤US$32/bbl) fell steeply. By the end of June 2016, the LNG price had fallen to nearly 50% below its oil parity price, making it more economical than more-polluting diesel/gas oil in the transport sector.
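The 0.1724 coefficient is the reciprocal of the roughly 5.8 MMBtu contained in a barrel of oil equivalent. The sketch below compares a spot LNG price against the implied parity price; the Brent figure is an assumed example below the US$32/bbl level mentioned above:

# Oil parity: parity LNG price ($/MMBtu) = 0.1724 * crude price ($/bbl),
# since a barrel of oil equivalent holds about 5.8 MMBtu (1 / 5.8 ~ 0.1724).
def oil_parity_price(crude_usd_per_bbl):
    return 0.1724 * crude_usd_per_bbl

brent = 30.0       # $/bbl, assumed Brent price for illustration
spot_lng = 5.461   # $/MMBtu, January 2016 spot price quoted above
parity = oil_parity_price(brent)
status = "breaks" if spot_lng > parity else "is below"
print(f"Parity price {parity:.2f} $/MMBtu; spot LNG {spot_lng:.3f} {status} oil parity")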
S-curve
Most of the LNG trade is governed by long-term contracts. Many formulae include an S-curve, where the price formula is different above and below a certain oil price, to dampen the impact of high oil prices on the buyer, and low oil prices on the seller. When the spot LNG price is cheaper than long term oil price indexed contracts, the most profitable LNG end use is to power mobile engines for replacing costly gasoline and diesel consumption.
In most of the East Asian LNG contracts, price formula is indexed to a basket of crude imported to Japan called the Japan Crude Cocktail (JCC). In Indonesian LNG contracts, price formula is linked to Indonesian Crude Price (ICP).
In continental Europe, the price formula indexation does not follow the same format, and it varies from contract to contract. Brent crude price (B), heavy fuel oil price (HFO), light fuel oil price (LFO), gas oil price (GO), coal price, electricity price and in some cases, consumer and producer price indexes are the indexation elements of price formulas.
Price review
Usually there is a clause allowing parties to trigger a price revision or price reopening in LNG SPAs. In some contracts there are two options for triggering a price revision: regular and special. Regular ones are the dates agreed and defined in the LNG SPAs for the purpose of price review.
Quality of LNG
LNG quality is one of the most important issues in the LNG business. Any gas which does not conform to the agreed specifications in the sale and purchase agreement is regarded as "off-specification" (off-spec) or "off-quality" gas or LNG. Quality regulations serve three purposes:
1 – to ensure that the gas distributed is non-corrosive and non-toxic, below the upper limits for H2S, total sulphur, CO2 and Hg content;
2 – to guard against the formation of liquids or hydrates in the networks, through maximum water and hydrocarbon dewpoints;
3 – to allow interchangeability of the gases distributed, via limits on the variation range for parameters affecting combustion: content of inert gases, calorific value, Wobbe index, Soot Index, Incomplete Combustion Factor, Yellow Tip Index, etc.
In the case of off-spec gas or LNG, the buyer can refuse to accept the gas or LNG and the seller has to pay liquidated damages for the respective off-spec gas volumes.
The quality of gas or LNG is measured at delivery point by using an instrument such as a gas chromatograph.
The most important gas quality concerns involve the sulphur and mercury content and the calorific value. Because liquefaction facilities are sensitive to sulfur and mercury, the gas sent to the liquefaction process must be carefully refined and tested to ensure the minimum possible concentration of these two elements before it enters the liquefaction plant; as a result, there is not much concern about them downstream.
However, the main concern is the heating value of the gas. Natural gas markets can usually be divided into three groups in terms of heating value:
Asia (Japan, Korea, Taiwan), where gas distributed is rich, with a gross calorific value (GCV) higher than 43 MJ/m3(n), i.e. 1,090 Btu/scf,
the UK and the US, where distributed gas is lean, with a GCV usually lower than 42 MJ/m3(n), i.e. 1,065 Btu/scf,
Continental Europe, where the acceptable GCV range is quite wide: approx. 39 to 46 MJ/m3(n), i.e. 990 to 1,160 Btu/scf.
There are some methods to modify the heating value of produced LNG to the desired level. To increase the heating value, injecting propane and butane is a solution. To decrease it, injecting nitrogen and extracting butane and propane are proven solutions. Blending with gas or LNG can also be a solution; however, all of these solutions, while theoretically viable, can be costly and logistically difficult to manage at large scale. The price of lean LNG in terms of energy value is lower than that of rich LNG.
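As an illustration of the three market bands above, the sketch below classifies a gas stream by its gross calorific value. The thresholds are the approximate figures quoted in this section; since the Continental Europe band overlaps the other two, the classification is only a rough simplification:

# Classify distributed gas by gross calorific value (GCV, MJ/m3(n)),
# using the approximate, simplified market bands described above.
def gcv_market(gcv):
    if gcv > 43:
        return "rich market (Asia: Japan, Korea, Taiwan)"
    if gcv < 42:
        return "lean market (UK / US)"
    return "within the wide Continental Europe range"

for gcv in (44.5, 41.0, 42.5):   # illustrative GCV values, MJ/m3(n)
    print(f"GCV {gcv} MJ/m3(n): {gcv_market(gcv)}")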
Liquefaction technology
There are several liquefaction processes available for large, baseload LNG plants (in order of prevalence):
AP-C3MR – designed by Air Products & Chemicals, Inc. (APCI)
Cascade – designed by ConocoPhillips
AP-X – designed by Air Products & Chemicals, Inc. (APCI)
AP-SMR (Single Mixed Refrigerant) – designed by Air Products & Chemicals, Inc. (APCI)
AP-N (Nitrogen Refrigerant) – designed by Air Products & Chemicals, Inc. (APCI)
MFC (mixed fluid cascade) – designed by Linde
PRICO (SMR) – designed by Black & Veatch
AP-DMR (Dual Mixed Refrigerant) - designed by Air Products & Chemicals, Inc. (APCI)
Liquefin – designed by Air Liquide
As of January 2016, global nominal LNG liquefaction capacity was 301.5 MTPA (million tonnes per annum), with a further 142 MTPA under construction. The majority of these trains use either APCI AP-C3MR or Cascade technology for the liquefaction process. The other processes, used in a small minority of liquefaction plants, include Shell's DMR (double-mixed refrigerant) technology and the Linde technology.
APCI technology is the most-used liquefaction process in LNG plants: out of 100 liquefaction trains onstream or under-construction, 86 trains with a total capacity of 243 MTPA have been designed based on the APCI process. Phillips' Cascade process is the second most-used, used in 10 trains with a total capacity of 36.16 MTPA. The Shell DMR process has been used in three trains with total capacity of 13.9 MTPA; and, finally, the Linde/Statoil process is used in the Snohvit 4.2 MTPA single train.
Floating liquefied natural gas (FLNG) facilities float above an offshore gas field, and produce, liquefy, store and transfer LNG (and potentially LPG and condensate) at sea before carriers ship it directly to markets. The first FLNG facility is now in development by Shell, due for completion in 2018.
Storage
Modern LNG storage tanks are typically of the full containment type, which has a prestressed concrete outer wall and a high-nickel steel inner tank, with extremely efficient insulation between the walls. Large tanks are low aspect ratio (height to width) and cylindrical in design with a domed steel or concrete roof. Storage pressure in these tanks is very low, less than 10 kilopascals (1.5 psi). Sometimes more expensive underground tanks are used for storage.
Smaller quantities (say 700 cubic metres (180,000 US gal) and less) may be stored in horizontal or vertical, vacuum-jacketed, pressure vessels. These tanks may be at pressures anywhere from less than 50 to over 1,700 kPa (7.3–246.6 psi).
LNG must be kept cold to remain a liquid, independent of pressure. Despite efficient insulation, there will inevitably be some heat leakage into the LNG, resulting in vaporisation of the LNG. This boil-off gas acts to keep the LNG cold (see "Refrigeration" below). The boil-off gas is typically compressed and exported as natural gas, or it is reliquefied and returned to storage.
Transportation
LNG is transported in specially designed ships with double hulls protecting the cargo systems from damage or leaks. There are several special leak test methods available to test the integrity of an LNG vessel's membrane cargo tanks. The tankers cost around US$200 million each. Transportation and supply is an important aspect of the gas business, since natural gas reserves are normally quite distant from consumer markets. Natural gas has far more volume than oil to transport, and most gas is transported by pipelines. There is a natural gas pipeline network in the former Soviet Union, Europe and North America. Natural gas is less dense than oil, even at higher pressures. Natural gas will travel much faster than oil through a high-pressure pipeline, but can transmit only about a fifth of the amount of energy per day due to its lower density. Natural gas is usually liquefied to LNG at the end of the pipeline, before shipping.
Short LNG pipelines for use in moving product from LNG vessels to onshore storage are available. Longer pipelines, which allow vessels to offload LNG at a greater distance from port facilities, are under development. This requires pipe-in-pipe technology due to the requirement to keep the LNG cold. LNG is transported using tanker trucks, railway tanker cars, and purpose-built ships known as LNG carriers. LNG is sometimes taken to cryogenic temperatures to increase the tanker capacity. The first commercial ship-to-ship (STS) transfers were undertaken in February 2007 at the Flotta facility in Scapa Flow, with 132,000 m3 of LNG being passed between the vessels Excalibur and Excelsior. Transfers have also been carried out by Exmar Shipmanagement, the Belgian gas tanker owner, in the Gulf of Mexico, involving the transfer of LNG from a conventional LNG carrier to an LNG regasification vessel (LNGRV). Before this commercial exercise, LNG had only ever been transferred between ships on a handful of occasions as a necessity following an incident. The Society of International Gas Tanker and Terminal Operators (SIGTTO) is the responsible body for LNG operators around the world and seeks to disseminate knowledge regarding the safe transport of LNG at sea. Besides LNG vessels, LNG is also used in some aircraft.
Terminals
Liquefied natural gas is used to transport natural gas over long distances, often by sea. In most cases, LNG terminals are purpose-built ports used exclusively to export or import LNG.
The United Kingdom has LNG import facilities for up to 50 billion cubic meters per year.
Refrigeration
The insulation, as efficient as it is, will not keep LNG cold enough by itself. Inevitably, heat leakage will warm and vapourise the LNG. Industry practice is to store LNG as a boiling cryogen. That is, the liquid is stored at its boiling point for the pressure at which it is stored (atmospheric pressure). As the vapour boils off, heat for the phase change cools the remaining liquid. Because the insulation is very efficient, only a relatively small amount of boil-off is necessary to maintain temperature. This phenomenon is also called auto-refrigeration.
Boil-off gas from land based LNG storage tanks is usually compressed and fed to natural gas pipeline networks. Some LNG carriers use boil-off gas for fuel.
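As a rough illustration of boil-off, the sketch below estimates the daily boil-off from a storage tank given its volume and a boil-off rate; both numbers are hypothetical assumptions, not specifications from this article:

# Rough daily boil-off estimate for an LNG storage tank.
# Tank volume and boil-off rate (fraction of inventory per day) are assumed.
tank_volume_m3 = 160_000   # m3 of LNG, hypothetical full-containment tank
bor_per_day = 0.0005       # 0.05 % per day, assumed insulation performance
lng_density = 450.0        # kg/m3, approximate LNG density

boil_off_m3 = tank_volume_m3 * bor_per_day
boil_off_tonnes = boil_off_m3 * lng_density / 1000.0
print(f"Boil-off: about {boil_off_m3:.0f} m3 of LNG per day "
      f"(~{boil_off_tonnes:.0f} tonnes), typically recompressed or reliquefied")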
Environmental concerns
Natural gas could be considered the least environmentally harmful fossil fuel because it has the lowest CO2 emissions per unit of energy and is suitable for use in high efficiency combined cycle power stations. For an equivalent amount of heat, burning natural gas produces about 30 percent less carbon dioxide than burning petroleum and about 45 per cent less than burning coal.
Biomethane is considered roughly CO2-neutral and avoids most of the CO2-emissions issue. If liquefied (as LBM), it serves the same functions as LNG. On a per-kilometre-transported basis, emissions from LNG are lower than those from piped natural gas, which is a particular issue in Europe, where significant amounts of gas are piped several thousand kilometres from Russia. However, emissions from natural gas transported as LNG are higher than those of natural gas produced locally to the point of combustion, as emissions associated with transport are lower for the latter.
However, on the West Coast of the United States, where up to three new LNG importation terminals were proposed before the U.S. fracking boom, environmental groups, such as Pacific Environment, Ratepayers for Affordable Clean Energy (RACE), and Rising Tide had moved to oppose them. They claimed that, while natural gas power plants emit approximately half the carbon dioxide of an equivalent coal power plant, the natural gas combustion required to produce and transport LNG to the plants adds 20 to 40 percent more carbon dioxide than burning natural gas alone. A 2015 peer-reviewed study evaluated the full end-to-end life cycle of LNG produced in the U.S. and consumed in Europe or Asia. It concluded that global CO2 production would be reduced due to the resulting reduction in other fossil fuels burned. Some scientists and local residents have raised concerns about the potential effect of Poland's underground LNG storage infrastructure on marine life in the Baltic Sea. Similar concerns were raised in Croatia.
Safety and accidents
Natural gas is a fuel and a combustible substance. To ensure safe and reliable operation, particular measures are taken in the design, construction and operation of LNG facilities. In maritime transport, the regulations for the use of LNG as a marine fuel are set out in the IGF Code. In its liquid state, LNG is not explosive and cannot ignite. For LNG to burn, it must first vaporize, then mix with air in the proper proportions (the flammable range is 5 percent to 15 percent), and then be ignited. In the case of a leak, LNG vaporizes rapidly, turning into a gas (methane plus trace gases) and mixing with air. If this mixture is within the flammable range, there is a risk of ignition, which would create fire and thermal radiation hazards.
Gas venting from vehicles powered by LNG may create a flammability hazard if parked indoors for longer than a week. Additionally, due to its low temperature, refueling an LNG-powered vehicle requires training to avoid the risk of frostbite. LNG tankers have sailed over 100 million miles without a shipboard death or even a major accident. Several on-site accidents involving or related to LNG are listed below:
October 20, 1944, Cleveland, Ohio, U.S. The East Ohio Natural Gas Co. experienced a failure of an LNG tank. 128 people perished in the explosion and fire. The tank did not have a dike retaining wall, and it was made during World War II, when metal rationing was very strict. The steel of the tank was made with an extremely low amount of nickel, which meant the tank was brittle when exposed to the cryogenic nature of LNG. The tank ruptured, spilling LNG into the city sewer system. The LNG vaporized and turned into gas, which exploded and burned.
February 10, 1973, Staten Island, New York, U.S. During a cleaning operation, 42 workers were inside one of the TETCo LNG tanks, which had supposedly been completely drained ten months earlier. However, ignition occurred, causing a plume of combusting gas to rise within the tank. Two workers near the top felt the heat and rushed to the safety of scaffolding outside, while the other 40 workers were killed when the concrete cap on the tank rose 20–30 feet in the air and then came crashing back down on them.
October 6, 1979, Lusby, Maryland, US. A pump seal failed at the Cove Point LNG import facility, releasing natural gas vapors (not LNG), which entered an electrical conduit. A worker switched off a circuit breaker, which ignited the gas vapors. The resulting explosion killed a worker, severely injured another and caused heavy damage to the building. A safety analysis was not required at the time, and none was performed during the planning, design or construction of the facility. National fire codes were changed as a result of the accident.
January 19, 2004, Skikda, Algeria. Explosion at Sonatrach LNG liquefaction facility. 27 killed, 56 injured, three LNG trains destroyed, a marine berth damaged. 2004 production was reduced by 76 percent. Total loss was US$900 million. A steam boiler that was part of an LNG liquefaction train exploded, triggering a massive hydrocarbon gas explosion. The explosion occurred where propane and ethane refrigeration storage were located. Site distribution of the units caused a domino effect of explosions. It remains unclear if LNG or LNG vapour, or other hydrocarbon gases forming part of the liquefaction process initiated the explosions. One report, of the US Government Team Site Inspection of the Sonatrach Skikda LNG Plant in Skikda, Algeria, March 12–16, 2004, has cited it was a leak of hydrocarbons from the refrigerant (liquefaction) process system.
Security concerns
On 8 May 2018, the United States withdrew from the Joint Comprehensive Plan of Action with Iran, reinstating Iran sanctions against their nuclear program. In response, Iran threatened to close off the Strait of Hormuz to international shipping. The Strait of Hormuz is a strategic route through which a third of the world's LNG passes from Middle East producers.
See also
== References == |
transport in turkey | Transport in Turkey is road-dominated and mostly fuelled by diesel. Transport consumes a quarter of energy in Turkey, and is a major source of air pollution in Turkey and greenhouse gas emissions by Turkey. The World Health Organization has called for more active transport such as cycling.
Rail transport
Rail network
The TCDD – Türkiye Devlet Demir Yolları (Turkish State Railways) – possesses 10,984 km of 1,435 mm (4 ft 8+1⁄2 in) gauge track, of which 2,336 km are electrified (2005). There are regular daily passenger trains throughout the network. TCDD has started an investment program to build 5,000 km of high-speed lines by 2023. As of October 2019, three high-speed train routes are running: Ankara–Eskişehir–İstanbul, Ankara–Konya and İstanbul–Eskişehir–Konya.
Freight transportation is mainly organized as block trains on domestic routes, since TCDD discourages loads under 200 tonnes through surcharges.
Urban rail
After almost 30 years without any trams, Turkey is experiencing a revival in trams. Established in 1992, the tram system of Istanbul earned the best large-scale tram management award in 2005. Another award-winning tram network belongs to Eskişehir (EsTram), where a modern tram system opened in 2004. Several other cities are planning or constructing tram lines with modern low-floor trams.
By 2014, 12 cities in Turkey were using rail systems for urban transportation.
Cities with commuter rail systems: Istanbul, Ankara, Izmir, Gaziantep
Cities with metro systems: Istanbul, Ankara, Izmir, Bursa, Adana
Cities with light rail transit systems: Istanbul, Ankara, Izmir, Adana, Bursa, Eskişehir, Konya, Antalya, Kayseri, Gaziantep, Samsun, Kocaeli.
Railway links with adjacent countries
Azerbaijan – via Georgia – under construction
Armenia – closed (see Kars Gyumri Akhalkalaki railway line)
Bulgaria – open – 1,435 mm (4 ft 8+1⁄2 in)
Greece – open – 1,435 mm (4 ft 8+1⁄2 in) (Note: passenger services, the Express of Friendship/Filia, suspended since 13 February 2012)
Georgia – under reconstruction – break-of-gauge 1,435 mm (4 ft 8+1⁄2 in)/1,520 mm (4 ft 11+27⁄32 in).
Iran – via Lake Van train ferry – same gauge
Iraq – No direct link, traffic routed via Syria – same gauge
Syria – closed – 1,435 mm (4 ft 8+1⁄2 in) (Note: suspended after the outbreak of the Syrian Civil War, on 29 August 2011)
Road transport
Road transport is responsible for much air pollution in Turkey and almost a fifth of Turkey's greenhouse gas emissions, mainly via diesel. It is one of three G20 countries without a fuel efficiency standard. As of 2020 there are many old, inefficient, polluting trucks. Retiring old polluting vehicles by forcing all cars and trucks to meet tailpipe emission standards would reduce disease, especially from polycyclic aromatic hydrocarbons. As of 2014, the country had a roadway network of 65,623 kilometres (40,776 miles). The total length of the rail network was 10,991 kilometres (6,829 miles) in 2008, including 2,133 kilometres (1,325 miles) of electrified and 457 kilometres (284 miles) of high-speed track. The Turkish State Railways started building high-speed rail lines in 2003. The Ankara-Konya line became operational in 2011, while the Ankara-Istanbul line entered service in 2014. Opened in 2013, the Marmaray tunnel under the Bosphorus connects the railway and metro lines of Istanbul's European and Asian sides, while the nearby Eurasia Tunnel (2016) provides an undersea road connection for motor vehicles. The Bosphorus Bridge (1973), Fatih Sultan Mehmet Bridge (1988) and Yavuz Sultan Selim Bridge (2016) are the three suspension bridges connecting the European and Asian shores of the Bosphorus strait. The Osman Gazi Bridge (2016) connects the northern and southern shores of the Gulf of İzmit. The Çanakkale Bridge connects the European and Asian shores of the Dardanelles strait.
As of 2022 fuel quality and emissions standards are not as good as those in the EU. In 2023 the World Bank said the government should plan and subsidize the rollout of public electric car chargers, particularly because so many people live in flats. They said that a subsidy would provide environmental and social benefits. They also said that cities should set an end date for diesel buses.
Road network
There are three types of intercity roads in Turkey:
– The first is the historical, toll-free road network called state roads (Devlet Yolları), which is completely under the responsibility of the General Directorate of Highways except for urban sections (such as the sections falling within the ring roads of Ankara, Istanbul or İzmir). Although these roads mostly have dual carriageways and interchanges, they also have some traffic lights and at-grade intersections.
– The second type is controlled-access highways, officially named Otoyol. It is not uncommon for people in Turkey to call them Otoban (referring to the Autobahn), as this type of road entered popular culture through Turks living in Germany. They are also managed by the General Directorate of Highways, except for those financed with a build-operate-transfer (BOT) model.
– The third type is provincial roads (İl Yolları), highways of secondary importance linking the districts within a province to each other, to the provincial center, to districts in the neighboring provinces, and to the state roads, railway stations, seaports, and airports.
Motorways: 3,633 km (January 2023)
Dual carriageways: 28,986 km (January 2023)
State highways: 30,954 km (January 2023)
Provincial roads: 34,113 km (January 2023)
Motorway projects (vision for 2053): 8,325 km
As of 2023, there are 471 tunnels (total length 665 km) and 9,660 bridges (total length 739 km) on the network.
Public road transport
There are numerous private bus companies providing connections between cities in Turkey.
For local trips to villages there are dolmuşes, small vans that seat about twenty passengers.
As of 2010, the number of road vehicles was around 15 million. The number of vehicles by type and use is as follows.
Car 7,544,871
Minibus 386,973
Bus 208,510
Small truck 2,399,038
Truck 726,359
Motorcycle 2,389,488
Special Purpose vehicle 35,492
Tractor 1,404,872
Total: 15,095,603
Cycling
E-scooters
E-scooter rental is available in some cities; e-scooters can be used on cycle paths, and on urban roads without cycle paths where the speed limit is below 50 km/h.
Car ownership
As of 2020 over half the registered motor vehicles are cars – about 12.5 million – of which 4.7 million are diesel-fuelled, 4.7 million LPG, and 3 million gasoline.
Air transport
In 2013 Turkey had the tenth largest passenger air market in the world with 74,353,297 passengers. In 2013 there were 98 airports in Turkey, including 22 international airports. As of 2015, Istanbul Atatürk Airport is the 11th busiest airport in the world, serving 31,833,324 passengers between January and July 2014, according to Airports Council International. The new (third) international airport of Istanbul is planned to be the largest airport in the world, with a capacity to serve 150 million passengers per annum. Turkish Airlines, flag carrier of Turkey since 1933, was selected by Skytrax as Europe's best airline for five consecutive years in 2011, 2012, 2013, 2014 and 2015. With 435 destinations (51 domestic and 384 international) in 126 countries worldwide, Turkish Airlines is the largest carrier in the world by number of countries served as of 2016.
Airlines
Airports
Total number of Airports in Turkey: 117 (2007)
Airports – with paved runways
total: 88
over 3,047 m: 16
2,438 to 3,047 m: 33
1,524 to 2,437 m: 19
914 to 1,523 m: 16
under 914 m: 4 (2010)
Airports – with unpaved runways
total: 11
1,524 to 2,437 m: 1
914 to 1,523 m: 6
under 914 m: 4 (2010)
Heliports
20 (2010)
Water transport
About 1,200 km
Port cities
Black Sea
Hopa
Inebolu
Samsun
Trabzon
Zonguldak
Aegean Sea
İzmir
Mediterranean Sea
İskenderun
Mersin
Antalya
Sea of Marmara
Gemlik
Bandırma
Istanbul
İzmit
Derince
Air pollution
Road traffic is a major source of air pollution in Turkey, and Istanbul is one of the few European cities without a low emission zone. Transport emitted 85 megatonnes of CO2 in 2018, about one tonne per person and 16 percent of Turkey's greenhouse gas emissions. Road transport dominated transport emissions with 79 megatonnes, including agricultural vehicles.
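As a rough cross-check of the per-person figure quoted above, the 2018 transport total can be divided by Turkey's population; the population value used below (about 82 million in 2018) is an assumption, not a figure from this article.

```python
# Cross-check of "about one tonne per person" for Turkey's 2018 transport emissions.
transport_co2_t = 85e6   # 85 megatonnes of CO2 from transport in 2018 (from the text)
population = 82e6        # assumed population of Turkey in 2018 (not from the text)

per_capita_t = transport_co2_t / population
print(f"Transport CO2 per person: {per_capita_t:.2f} t")  # ~1.04 t, consistent with "about one tonne"
```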
See also
Right to Clean Air Platform Turkey
Public transport in Istanbul
List of highways in Turkey
Turkish State Highway System
List of otoyol routes in Turkey
Otoyol
List of countries by vehicles per capita (Turkey ranks 16th by total number of vehicles and 66th by vehicles per capita)
Sources
Difiglio, Prof. Carmine; Güray, Bora Şekip; Merdan, Ersin (November 2020). Turkey Energy Outlook. iicec.sabanciuniv.edu (Report). Sabanci University Istanbul International Center for Energy and Climate (IICEC). ISBN 978-605-70031-9-5.
Bisiklet Yolları Klavuzu [Bicycle Path Guidelines] (PDF) (Report) (in Turkish). Ministry of Environment and Urban Planning (Turkey). December 2019.
References
External links |
edf energy | EDF Energy is a British integrated energy company, wholly owned by the French state-owned EDF (Électricité de France), with operations spanning electricity generation and the sale of natural gas and electricity to homes and businesses throughout the United Kingdom. It employs 11,717 people, and handles 5.22 million business and residential customer accounts.
History
EDF Energy Customers (trading as EDF) is wholly owned by the French state-owned EDF (Électricité de France) and was formed in January 2002, following the acquisition and mergers of Seeboard plc (formerly the South Eastern Electricity Board), London Electricity plc (formerly the London Electricity Board or LEB), SWEB Energy plc (formerly the South Western Electricity Board) and two coal-fired power stations and a combined cycle gas turbine power station. In 2009, EDF took control of the nuclear generator in the United Kingdom, British Energy, buying share capital from the government. This made EDF one of the largest generators in the United Kingdom. The development branch of EDF was formed in April 2004, bringing together the separate infrastructure interests of what were LE Group, Seeboard and SWEB. The focus for the branch is development activity through participation in major new infrastructure projects, largely in the public sector through public-private partnership (PPP) and private finance initiative (PFI) type schemes. The electricity distribution (or downstream) networks formerly known as EDF Energy Networks were sold in November 2010 to the Hong Kong-based Cheung Kong Group (CKG), owned by billionaire Li Ka Shing. Later, EDF Energy Networks was renamed UK Power Networks. In December 2014, EDF sold three small UK-based wind farms with a combined capacity of 73 megawatts to the China General Nuclear Power Group for an estimated £100 million. In November 2017, EDF sold its majority stake in five wind farms across Cambridgeshire and Lincolnshire for £98 million. A release from EDF confirmed that in 2018 the firm lost 200,000 customers as consumers shopped around in a competitive marketplace. EDF also found that earnings for its UK business had tumbled by 16.5% to £691 million in the year to 31 December. On 4 November 2019 EDF announced the acquisition of British start-up Pivot Power, which specialises in battery storage and infrastructure for electric vehicle charging. EDF acquired a majority stake in Pod Point, one of the largest electric vehicle (EV) charging companies in the UK, in February 2020. On 31 August 2021, EDF announced the sale of its 1,332 MW combined cycle gas turbine power station and 49 MW battery at West Burton B to EIG. The UK's nuclear stations, run by EDF, reached a milestone in November 2021, clocking up 2,000 terawatt hours (TWh) of electricity – enough to power all the UK's homes for more than 18 years.
No Dash For Gas action
In February 2013, EDF sought an estimated £5 million in damages from environmental activists from the No Dash for Gas campaign, who occupied the EDF-owned West Burton CCGT power station in October 2012 and pleaded guilty to charges of aggravated trespass. It is unusual in the United Kingdom for companies to seek damages from protesters. Environmentalist George Monbiot, writing in The Guardian, said EDF was conducting a strategic lawsuit against public participation, "part of a global strategy by corporations to stifle democracy", and predicted the "disastrous unintended consequences of an attempt at censorship" could result in the Streisand effect and be comparable to the McLibel case. The activists received support in the days after the case became public, with over 6,000 signatures on a supportive petition at Change.org within the first day, and over 64,000 by the time EDF dropped its lawsuit on 13 March 2013, saying that this was "a fair and reasonable solution" after the protesters had "agreed in principle to accept a permanent injunction which prevents them from entering multiple sites operated by EDF Energy".
Electricity generation
Nuclear
Following the acquisition of British Energy in 2009, the EDF portfolio includes eight nuclear power stations. They are seven AGR power stations (Dungeness B; Hinkley Point B; Hunterston B; Hartlepool; Heysham 1; Heysham 2 & Torness) and one PWR power station (Sizewell B), totalling nearly 9,000 MW of installed capacity.
In 2007, EDF announced its intention to construct up to four new EPR-design reactors: two at Hinkley Point C (currently scheduled to start operation in 2025) and two at Sizewell C, with a further plant proposed at Bradwell B. EDF plans to build and operate the new plants through its subsidiary NNB Generation Company (NNB GenCo).
In August 2014, the company announced it had shut down four of its 15 reactors for a period of eight weeks to investigate potential cracking in the boiler spine. In 2015, EDF announced a 10-year life extension for Dungeness B, initially pushing back the closure date until 2028,
although it subsequently ceased production and commenced defuelling in June 2021.
In February 2016, EDF announced that it would keep four of its nuclear plants open in the United Kingdom. Heysham 1 and Hartlepool will have their life extended by five years until 2024, while Heysham 2 and Torness will see their closure dates pushed back by seven years to 2030. In November 2020 EDF announced Hinkley Point B power station in Somerset will move into the defuelling phase no later than 15 July 2022.
Wind
As of 2021, EDF owns and operates 37 wind farms, including the 59-turbine onshore wind farm at Dorenell in Scotland, and is developing two offshore wind projects at Codling Wind Park in Ireland and Neart na Gaoithe in Scotland. The company has plans for a floating offshore wind development at Blyth and a 22-turbine onshore wind farm, Garn Fach, in Wales.
Solar energy
EDF develops, operates and maintains solar projects. Sutton Bridge is the company's first grid-scale solar farm and will cover approximately 139 hectares. In 2019 EDF signed an agreement to install solar panels on the roofs of a number of Tesco's largest stores in England.
Fossil fuel
EDF owned and operated one 2,000 MW coal-fired power station, West Burton A Power Station, located near Retford in Nottinghamshire. Generation at West Burton A power station ended on 31 March 2023.
Energy percentages
In the period from April 2020 to March 2021, the percentage of electricity generated by EDF from each source was as follows: nuclear – 62.1%, renewable – 29%, gas – 7.5%, coal – 1.3%, with an average CO2 intensity of 42 gCO2eq/kWh. In 2020, EDF nuclear power plants provided 16.1% of UK total electricity generation, down from 17.3% in 2019. As of 2020 EDF supplied 32.4% of low-carbon energy in the whole UK energy mix.
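The 42 gCO2eq/kWh figure is a generation-weighted average of the sources in EDF's mix. A minimal sketch of that weighting is shown below; the per-source intensity factors are assumed, round illustrative values (roughly zero for nuclear and renewables, about 400 g/kWh for gas, about 900 g/kWh for coal), not EDF's published methodology.

```python
# Illustrative generation-weighted carbon intensity for the April 2020 - March 2021 mix.
shares = {"nuclear": 0.621, "renewable": 0.290, "gas": 0.075, "coal": 0.013}   # from the text
intensity_g_per_kwh = {"nuclear": 0, "renewable": 0, "gas": 400, "coal": 900}  # assumed values

weighted = sum(shares[s] * intensity_g_per_kwh[s] for s in shares)
print(f"Weighted average intensity: {weighted:.0f} gCO2eq/kWh")  # ~42, close to the quoted figure
```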
EDF Renewables
EDF Renewables in the UK is a joint venture between EDF Renewables Group and EDF.
In April 2017, EDF Renewable Energy, in a joint venture with EDF, announced the commissioning of the Corriemoillie (47.5 MW), Beck Burn (31 MW) and Pearie Law (19.2 MW) wind farms. Beck Burn was opened in July that year. Also in July 2017, EDF Renewables announced the acquisition of 11 Scottish wind farm sites from asset manager Partnerships for Renewables, with a potential capacity of 600 MW.
In May 2018, EDF Energies Nouvelles bought the Neart na Gaoithe wind farm in Scotland from Irish company Mainstream Renewable Power, following a competitive process. It will have a capacity of 450 MW. The farm is planned to go online in 2023.
EDF Renewables opened its wind farm in Blyth in July 2018, where the individual turbines are connected via 66-kilometre (41 mi) offshore cables to bring the electricity produced onshore.
Sponsorship
EDF is the ‘in Association’ sponsor of Cheltenham Science Festival and has supported the Big Bang Fair since 2015. EDF has sponsored several shows on ITV, including Soapstar Superstar and City Lights. It also sponsored coverage of the 2006 World Cup in Germany (shared with Budweiser) and coverage of the 2007 Rugby World Cup (shared with Peugeot).
EDF was the main sponsor of the Anglo-Welsh Cup – the Rugby Union domestic cup for the twelve clubs in the English Premiership and the four Welsh regions – between 2006 and 2009. In July 2007, EDF was confirmed as another Level One sponsor for London 2012 with exclusive branding rights and Olympic team sponsorship for the 2008, 2010 and 2012 games as well as being the official energy provider.
In August 2008, EDF formed a partnership with The British Red Cross to help vulnerable people to get support during power failures. In January 2011, EDF took over sponsorship from British Airways of the London Eye, on a three-year deal renaming the London Eye as the EDF Energy London Eye.
Marketing
On 4 January 2008, EDF began advertising on television through ITV, Channel 4, Channel 5 and various satellite channels. EDF used "It's not easy being green" as its slogan to target a new, greener, eco-friendly image. In 2009, with Euro RSCG London, EDF created the Team Green Britain campaign, in which Olympic athletes encouraged Britons to be more environmentally aware. On 2 April 2012, EDF launched an advert including its new mascot, Zingy. In 2020 EDF launched a new brand purpose focused on tackling climate change and aired a TV advertising campaign promoting its ambition of ‘Helping Britain achieve Net Zero’.
Distribution network operators
EDF is an energy supplier for homes across the country. It is not, however, a distribution network operator.
EDF's main locations
EDF's main offices are located in London, Croydon, Exeter, Sunderland, Hove and Barnwood in Gloucester.
See also
Energy policy of the United Kingdom
Energy use and conservation in the United Kingdom
Green electricity in the United Kingdom
References
External links
Official website |
energy policy of china | Ensuring adequate energy supply to sustain economic growth has been a core concern of the Chinese Government since the founding of the People's Republic of China in 1949. Having industrialized since the 1960s, China is currently the world's largest emitter of greenhouse gases, and coal in China is a major cause of global warming. However, from 2010 to 2015 China reduced energy consumption per unit of GDP by 18%, and CO2 emissions per unit of GDP by 20%. On a per-capita basis, it was only the world's 51st largest emitter of greenhouse gases in 2016. China is also the world's largest renewable energy producer, and the largest producer of hydroelectricity, solar power and wind power in the world. The energy policy of China is connected to its industrial policy, where the goals of China's industrial production dictate its energy demand management.
Because the country depends heavily on imported petroleum both for domestic consumption and as a raw material for light industry manufacturing, electrification is a huge component of the Chinese national energy policy. Details for the power sector are likely to be released in winter 2021/22 for the 14th five-year plan, and this is expected to determine whether the country builds more coal-fired power stations, and therefore whether global climate targets are likely to be met.
Summary
Environment and carbon emissions
Between 1980 and 2000, China's emissions density (its ratio of carbon dioxide equivalent emissions to gross domestic product) declined sharply. The country quadrupled its GDP while only doubling the energy it consumed. No other country at a similar stage of industrial development has matched this achievement. On June 19, 2007, the Netherlands Environmental Assessment Agency announced that a preliminary study had indicated that China's greenhouse gas emissions for 2006 had exceeded those of the United States for the first time. The agency calculated that China's CO2 emissions from fossil fuels increased by 9% in 2006, while those of the United States fell by 1.4%, compared to 2005. The study used energy and cement production data from British Petroleum which they believed to be 'reasonably accurate', while warning that statistics for rapidly changing economies such as China are less reliable than data on OECD countries. The Initial National Communication on Climate Change of the People's Republic of China calculated that carbon dioxide emissions in 2004 had risen to approximately 5.05 billion metric tons, with total greenhouse gas emissions reaching about 6.1 billion metric tons carbon dioxide equivalent. In 2002, China ranked 2nd (after the United States) in the list of countries by carbon dioxide emissions, with emissions of 3.3 billion metric tons, representing 14.5% of the world total. In 2006, China overtook the US, producing 8% more emissions than the US to become the world's largest emitter of CO2. However, China was ranked 51st in CO2 emissions per capita in 2016, with emissions of 7.2 tonnes per person (compared to 15.5 tonnes per person in the United States). In addition, it has been estimated that around a third of China's carbon emissions in 2005 were due to manufacturing exported goods.
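The claim that GDP quadrupled while energy use only doubled implies that energy intensity (energy per unit of GDP) roughly halved over the period. The index values below are illustrative, not actual Chinese statistics.

```python
# Arithmetic behind the energy-intensity claim for 1980-2000.
gdp_start, gdp_end = 1.0, 4.0          # GDP index: quadrupled (illustrative units)
energy_start, energy_end = 1.0, 2.0    # energy-use index: doubled (illustrative units)

intensity_change = (energy_end / gdp_end) / (energy_start / gdp_start) - 1
print(f"Change in energy intensity: {intensity_change:.0%}")  # -50%, i.e. intensity halved
```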
Energy use and carbon emissions by sector
In the industrial sector, six industries – electricity generation, steel, non-ferrous metals, construction materials, oil processing and chemicals – account for nearly 70% of energy use. In the construction materials sector, China produced about 44% of the world's cement in 2006. Cement production produces more carbon emissions than any other industrial process, accounting for around 4% of global carbon emissions.
National Action Plan on Climate Change
China has been taking action on climate change for some years, with the publication on June 4, 2007, of China's first National Action Plan on Climate Change; in that year China became the first developing country to publish a national strategy addressing global warming. The plan did not include targets for carbon dioxide emission reductions, but it has been estimated that, if fully implemented, China's annual emissions of greenhouse gases would be reduced by 1.5 billion tons of carbon dioxide equivalent by 2010. Other commentators, however, put the figure at 0.950 billion metric tons. The publication of the strategy was officially announced during a meeting of the State Council, which called on governments and all sectors of the economy to implement the plan, and for the launch of a public environmental protection awareness campaign. The National Action Plan includes increasing the proportion of electricity generation from renewable energy sources and from nuclear power, increasing the efficiency of coal-fired power stations, the use of cogeneration, and the development of coal-bed and coal-mine methane. In addition, the one-child policy in China has successfully slowed down the population increase, preventing 300 million births, the equivalent of 1.3 billion tons of CO2 emissions based on average world per-capita emissions of 4.2 tons at the 2005 level.
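The 1.3 billion tonne figure follows directly from multiplying the stated number of prevented births by the 2005 world average per-capita emissions, as the short check below shows.

```python
# Check of the avoided-emissions estimate attributed to the one-child policy.
prevented_births = 300e6          # from the text
world_avg_t_per_person = 4.2      # tonnes CO2 per person per year, 2005 world average (from the text)

avoided_gt = prevented_births * world_avg_t_per_person / 1e9
print(f"Avoided emissions: {avoided_gt:.2f} billion tonnes CO2 per year")  # ~1.26, i.e. about 1.3
```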
11th and 12th Five-Year Plans
Beginning with the 11th, each of China's Five-Year Plans has sought to move China away from energy-intensive manufacturing and into high-value sectors, and has highlighted the importance of low-carbon technology as a strategic emerging industry, particularly in the areas of wind and solar power. The Plan set a national energy intensity target of a 20% reduction. It was identified as a "binding target" and focused on throughout the Plan's implementation. Policymakers viewed emissions reductions and energy conservation as the highest-priority environmental matters under the 11th Five-Year Plan. Successful achievement of emissions and energy conservation targets in the 11th Five-Year Plan shaped policymakers' approach for the 12th Five-Year Plan, prompting expanded use of binding targets to capitalize on successes in these areas. In January 2012, as part of its 12th Five-Year Plan, China published a report, the 12th Five-Year Plan on Greenhouse Emission Control (guofa [2011] No. 41), which establishes goals of reducing carbon intensity by 17% by 2015, compared with 2010 levels, and reducing energy consumption per unit of GDP by 16%. More demanding targets were set for the most developed regions and those with most heavy industry, including Guangdong, Shanghai, Jiangsu, Zhejiang and Tianjin. China also plans to meet 11.4% of its primary energy requirements from non-fossil sources by 2015. The plan will also pilot the construction of a number of low-carbon development zones and low-carbon residential communities, which it hopes will result in a cluster effect among businesses and consumers. To facilitate carbon trading and, more broadly, to help assess emissions targets and meet the transparency requirements of the Paris Agreement, the Plan improved the system for greenhouse gas emissions monitoring. This was the first time that carbon emissions trading had featured in one of China's Five-Year Plans. The Plan also provided for the development of an ultra-high-voltage (UHV) transmission corridor to increase the integration of renewable energy from the point of generation to its point of consumption. In addition, the Government will in future include data on greenhouse emissions in its official statistics.
Carbon trading scheme
In a separate development, on January 13, 2012, the National Development and Reform Commission announced that the cities of Beijing, Tianjin, Shanghai, Chongqing and Shenzhen, and the provinces of Hubei and Guangdong, would become the first to participate in a pilot carbon cap-and-trade scheme that would operate in a similar way to the European Union Emission Trading Scheme. The development followed an unsuccessful experiment with voluntary carbon exchanges that were set up in 2009 in Beijing, Shanghai and Tianjin.
Fossil fuels
Coal
Coal remains the foundation of the Chinese energy system, covering close to 70 percent of the country's primary energy needs and representing 80 percent of the fuel used in electricity generation. China produces and consumes more coal than any other country. Analysis in 2016 shows that China's coal consumption appears to have peaked in 2014. According to Global Energy Monitor, China's government has limited the operating hours of 40% of the coal-fired power stations built in 2019, due to overcapacity in electricity generation.
Petroleum
China's oil supply was 4,855 TWh in 2009, which represented 10% of the world's supply. Although China is still a major crude oil producer, it became an oil importer in the 1990s. China became dependent on imported oil for the first time in its history in 1993 due to demand rising faster than domestic production. In 2002, annual crude petroleum production was 1,298,000,000 barrels, and annual crude petroleum consumption was 1,670,000,000 barrels. In 2006, it imported 145 million tons of crude oil, accounting for 47% of its total oil consumption. By 2014 China was importing approximately 7 million barrels of oil per day. Three state-owned oil companies – Sinopec, CNPC, and CNOOC – dominate its domestic market.
China announced on June 20, 2008, plans to raise petrol, diesel and aviation kerosene prices. This decision appeared to reflect a need to reduce the unsustainably high level of subsidies these fuels attract, given the global trend in the price of oil. The top oil producers in 2010 were: Russia 502 Mt (13%), Saudi Arabia 471 Mt (12%), US 336 Mt (8%), Iran 227 Mt (6%), China 200 Mt (5%), Canada 159 Mt (4%), Mexico 144 Mt (4%), UAE 129 Mt (3%). World oil production increased by 1.3% from 2005 to 2010 and by 3.4% from 2009 to 2010.
Natural gas
China's natural gas supply was 1,015 TWh in 2009, which was 3% of the world supply. CNPC, Sinopec, and CNOOC are all active in the upstream gas sector, as well as in LNG import, and in midstream pipelines. Branch pipelines and urban networks are run by city gas companies including China Gas Holdings, ENN Energy, Towngas China, Beijing Enterprises Holdings and Kunlun Energy.
China was the seventh-largest producer of natural gas in 2010. Issued by China's State Council in September 2013, China's Action Plan for the Prevention and Control of Air Pollution illustrates the government's desire to increase the share of natural gas in China's energy mix. In May 2014 China signed a 30-year deal with Russia to deliver 38 billion cubic metres of natural gas each year. The Power of Siberia pipeline is designed to reduce China's dependence on coal, which is more carbon intensive and causes more pollution than natural gas. The proposed western gas route from Russia's West Siberian petroleum basin to North-Western China is known as Power of Siberia 2. In November 2021, U.S. producer Venture Global LNG signed a twenty-year contract with China's state-owned Sinopec to supply liquefied natural gas (LNG). China's imports of U.S. natural gas would more than double.
Electricity generation
In 2013, China's total annual electricity output was 5.398 trillion kWh and the annual consumption was 5.380 trillion kWh with an installed capacity of 1247 GW (all the largest in the world).
This is an increase from 2009, when China's total annual electricity output was 3.71465 trillion kWh, and the annual consumption was 3.6430 trillion kWh (second largest in the world). In the same year, the total installed electricity generating capacity was 874 GW. China is undertaking substantial long-distance transmission projects with record breaking capacities, and has the goal of achieving an integrated nationwide grid in the period between 2015 and 2020.
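Dividing the 2013 output by the installed capacity gives the fleet's implied average capacity factor; this back-of-envelope figure is not stated in the source.

```python
# Implied average capacity factor of China's generating fleet in 2013.
annual_output_kwh = 5.398e12      # 5.398 trillion kWh (from the text)
installed_capacity_kw = 1247e6    # 1,247 GW expressed in kW (from the text)
hours_per_year = 8760

capacity_factor = annual_output_kwh / (installed_capacity_kw * hours_per_year)
print(f"Implied fleet-average capacity factor: {capacity_factor:.0%}")  # roughly 49%
```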
Coal
In 2015, China generated 73% of its electricity from coal-fired power stations, a share that had been dropping from a peak of 81% in 2007. In recent years, China has increased its use of coal power and continued to build new coal power plants. The National Energy Administration's early-warning risk rating for coal plants approved the establishment of new power plants in 2020. China shut down roughly 7 GW of power plants at the same time, continuing to decommission ageing coal-fired power plants.
Renewables
China is the world's leading renewable energy producer, with an installed capacity of 152 GW. China has been investing heavily in the renewable energy field in recent years. In 2007, the total renewable energy investment was US$12 billion, second only to Germany. In 2012, China invested US$65.1 billion in clean energy (20% more than in 2011), fully 30% of the total investment by the G-20, including 25% (US$31.2 billion) of global solar energy investment, 37% (US$27.2 billion) of global wind energy investment, and 47% (US$6.3 billion) of global investment in "other renewable energy" (small hydro, geothermal, marine, and biomass); 23 GW of clean generation capacity was installed. China is also the largest producer of wind turbines and solar panels. Approximately 7% of China's energy was from renewable sources in 2006, a figure targeted to rise to 10% by 2010 and to 16% by 2020. The major renewable energy source in China is hydropower. Total hydro-electric output in China in 2009 was 615.64 TWh, constituting 16.6% of all electricity generated. The country already has the most hydro-electric capacity in the world, and the Three Gorges Dam is currently the largest hydro-electric power station in the world, with a total capacity of 22.5 GW. It has been in full operation since May 2012.
Nuclear power
In 2012, China had 15 nuclear power units with a total electric capacity of 11 GW and total output of 54.8 billion kWh, accounting for 1.9% of the country's total electricity output. This rose to 17 reactors in 2013. By 2016 the number of operating nuclear reactors was 32, with 22 under construction and another dozen due to start construction that year. There are plans to increase nuclear capacity and nuclear power's share of generation, bringing total nuclear capacity to 86 GW and its share of electricity output to 4% by 2020. Plans are to increase this to 200 GWe by 2030, and 400 GWe by 2050. China has set an end-of-the-century goal of 1,500 GW of nuclear capacity, most of this from fast reactors. China has 32 reactors under construction, the highest number in the world.
Rural electrification
Following the completion of the similar Township Electrification Program in 2005, the Village Electrification Program plans to provide renewable electricity to 3.5 million households in 10,000 villages by 2010. This is to be followed by full rural electrification using renewable energy by 2015.
Renewable energy sources
Although a majority of the renewable energy in China is from hydropower, other renewable energy sources are in rapid development. In 2006, a total of 10 billion US dollars had been invested in renewable energy, second only to Germany. China is a major source of clean energy technology transfer to other developing countries.
Bioenergy
In 2006, 16 million tons of corn were used to produce first-generation biofuel (ethanol). However, because food prices in China rose sharply during 2007, China decided to ban the further expansion of the corn ethanol industry.
On February 7, a spokesman for the State Forestry Administration announced that 130,000 square kilometres (50,000 sq mi) would be devoted to biofuel production. Under an agreement reached with PetroChina in January 2007, 400 square kilometres of Jatropha curcas is to be grown for biodiesel production. Local governments are also developing oilseed projects. There were concerns that such developments may lead to environmental damage. In 2018, The Telegraph reported that the biofuel industry was growing further. There also appears to be considerable interest in biofuels that use waste material as the input source (second-generation biofuels, such as biodiesel and green jet fuel).
Solar power
China has become the world's largest consumer of solar energy. It is the largest producer of solar water heaters, accounting for 60 percent of the world's solar hot-water heating capacity, with the total installed base estimated at 30 million households. Solar PV production in China is also developing rapidly. In 2007, 0.82 GW of solar PV was produced, second only to Japan. China's Sixth Five-Year Plan (1981–1985) was the first to address government policy support for solar PV panel manufacturing. Policy support for solar panel manufacturing has been a part of every Five-Year Plan since. As part of the "Golden Sun" stimulus plan announced by the government in 2009, several developments and projects became milestones for the development of solar technology in China. These include the agreement signed by LDK for a 500 MW solar project, a new thin-film solar plant developed by Anwell Technologies in Henan province using its own proprietary solar technology, and the solar power plant project in a desert, headed by First Solar and Ordos City. The effort to drive renewable energy use in China was further assured after the speech by the Chinese President at the UN climate summit on 22 September 2009 in New York, pledging that China plans to have 15% of its energy from renewable sources within a decade. China is using solar power in houses, buildings, and cars. Because solar works well as a distributed power source, recent Chinese policies have focused on increasing the prevalence of distributed solar energy and on developing systems so that electricity from solar energy can be used at its point of generation instead of transmitted over long distances.
Wind power
China's total wind power capacity reached 2.67 gigawatts (GW) in 2006 and 44.7 GW by 2010. This figure reached 281 GW in 2020, an increase of 71.6 GW on the previous year.
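The capacity figures above imply very different growth rates in the two periods; the compound annual growth rates can be derived as follows (a derived illustration, not figures stated in the source).

```python
# Compound annual growth rate (CAGR) of China's installed wind capacity.
def cagr(start_gw, end_gw, years):
    """Return the compound annual growth rate between two capacity figures."""
    return (end_gw / start_gw) ** (1 / years) - 1

print(f"2006-2010: {cagr(2.67, 44.7, 4):.0%} per year")    # ~102% per year
print(f"2010-2020: {cagr(44.7, 281.0, 10):.0%} per year")  # ~20% per year
```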
Energy conservation
General work plan
Officials were warned that violating energy conservation and environmental protection laws would lead to criminal proceedings, while failure to achieve targets would be taken into account in the performance assessment of officials and business leaders. After achieving less than half the 4% reduction in energy intensity targeted for 2006, all companies and local and national government were asked to submit detailed plans for compliance before June 30, 2007. During the first four years of the plan, energy intensity improved by 14.4%, but dropped sharply in the first quarter of 2010. In August 2010, China announced the closing of 2,087 steel mills, cement works and other energy-intensive factories by September 30, 2010. The factory closings were made more palatable by a labor shortage in much of China making it easier for workers to find other jobs.
Space heating and air conditioning
A State Council circular issued on June 3, 2007, restricts the temperature of air conditioning in public buildings to no lower than 26 °C in summer (78.8 °F), and of heating to no higher than 20 °C (68 °F) in winter. The sale of inefficient air conditioning units has also been outlawed.
Public opinion
The Chinese results from the 1st Annual World Environment Review, published on June 5, 2007, revealed that, in a sample of 1024 people (50% male):
88% are concerned about climate change.
97% think their government should do more to tackle global warming.
63% think that China is too dependent on fossil fuels.
56% think that China is too reliant on foreign oil.
91% think that a minimum 25% of electricity should be generated from renewable energy sources.
61% are concerned about nuclear power.
79% are concerned about carbon dioxide emissions from developing countries.
62% think it appropriate for developed countries to demand restrictions on carbon dioxide emissions from developing countries.
Another survey published in August 2007 by China Youth Daily and the British Council sampled 2,500 Chinese people with an average age of 30.1. It showed that 80% of young Chinese are concerned about global warming.
See also
References |
fracking | Fracking (also known as hydraulic fracturing, fracing, hydrofracturing, or hydrofracking) is a well stimulation technique involving the fracturing of formations in bedrock by a pressurized liquid. The process involves the high-pressure injection of "fracking fluid" (primarily water, containing sand or other proppants suspended with the aid of thickening agents) into a wellbore to create cracks in the deep-rock formations through which natural gas, petroleum, and brine will flow more freely. When the hydraulic pressure is removed from the well, small grains of hydraulic fracturing proppants (either sand or aluminium oxide) hold the fractures open. Hydraulic fracturing began as an experiment in 1947, and the first commercially successful application followed in 1950. As of 2012, 2.5 million "frac jobs" had been performed worldwide on oil and gas wells, over one million of those within the U.S. Such treatment is generally necessary to achieve adequate flow rates in shale gas, tight gas, tight oil, and coal seam gas wells. Some hydraulic fractures can form naturally in certain veins or dikes. Drilling and hydraulic fracturing have made the United States a major crude oil exporter as of 2019, but leakage of methane, a powerful greenhouse gas, has dramatically increased. Increased oil and gas production from the decade-long fracking boom has led to lower prices for consumers, with near-record lows in the share of household income going to energy expenditures. Hydraulic fracturing is highly controversial. Its proponents advocate the economic benefits of more extensively accessible hydrocarbons, the replacement of coal with natural gas, which burns more cleanly and emits less carbon dioxide (CO2), and energy independence. Opponents of fracking argue that these are outweighed by the environmental impacts, which include groundwater and surface water contamination, noise and air pollution, and the triggering of earthquakes, along with the resulting hazards to public health and the environment. Research has found adverse health effects in populations living near hydraulic fracturing sites, including confirmation of chemical, physical, and psychosocial hazards such as adverse pregnancy and birth outcomes, migraine headaches, chronic rhinosinusitis, severe fatigue, asthma exacerbations and psychological stress. Adherence to regulation and safety procedures is required to avoid further negative impacts. The scale of methane leakage associated with hydraulic fracturing is uncertain, and there is some evidence that leakage may cancel out any greenhouse gas emissions benefit of natural gas relative to other fossil fuels.
Increases in seismic activity following hydraulic fracturing along dormant or previously unknown faults are sometimes caused by the deep-injection disposal of hydraulic fracturing flowback (a byproduct of hydraulically fractured wells), and produced formation brine (a byproduct of both fractured and non-fractured oil and gas wells). For these reasons, hydraulic fracturing is under international scrutiny, restricted in some countries, and banned altogether in others. The European Union is drafting regulations that would permit the controlled application of hydraulic fracturing.
Geology
Mechanics
Fracturing of rocks at great depth is frequently suppressed by pressure, due to the weight of the overlying rock strata and the cementation of the formation. This suppression is particularly significant in "tensile" (Mode 1) fractures, which require the walls of the fracture to move apart against this pressure. Fracturing occurs when effective stress is overcome by the pressure of fluids within the rock. The minimum principal stress becomes tensile and exceeds the tensile strength of the material. Fractures formed in this way are generally oriented in a plane perpendicular to the minimum principal stress, and for this reason, hydraulic fractures in wellbores can be used to determine the orientation of stresses. In natural examples, such as dikes or vein-filled fractures, the orientations can be used to infer past states of stress.
Veins
Most mineral vein systems are a result of repeated natural fracturing during periods of relatively high pore fluid pressure. The effect of high pore fluid pressure on the formation process of mineral vein systems is particularly evident in "crack-seal" veins, where the vein material is part of a series of discrete fracturing events, and extra vein material is deposited on each occasion. One example of long-term repeated natural fracturing is in the effects of seismic activity. Stress levels rise and fall episodically, and earthquakes can cause large volumes of connate water to be expelled from fluid-filled fractures. This process is referred to as "seismic pumping".
Dikes
Minor intrusions in the upper part of the crust, such as dikes, propagate in the form of fluid-filled cracks. In such cases, the fluid is magma. In sedimentary rocks with a significant water content, fluid at fracture tip will be steam.
History
Precursors
Fracturing as a method to stimulate shallow, hard rock oil wells dates back to the 1860s. Dynamite or nitroglycerin detonations were used to increase oil and natural gas production from petroleum bearing formations. On 24 April 1865, US Civil War veteran Col. Edward A. L. Roberts received a patent for an "exploding torpedo". It was employed in Pennsylvania, New York, Kentucky, and West Virginia using liquid and also, later, solidified nitroglycerin. Later still the same method was applied to water and gas wells. Stimulation of wells with acid, instead of explosive fluids, was introduced in the 1930s. Due to acid etching, fractures would not close completely resulting in further productivity increase.
20th century applications
Harold Hamm, Aubrey McClendon, Tom Ward and George P. Mitchell are each considered to have pioneered hydraulic fracturing innovations toward practical applications.
Oil and gas wells
The relationship between well performance and treatment pressures was studied by Floyd Farris of Stanolind Oil and Gas Corporation. This study was the basis of the first hydraulic fracturing experiment, conducted in 1947 at the Hugoton gas field in Grant County of southwestern Kansas by Stanolind. For the well treatment, 1,000 US gallons (3,800 L; 830 imp gal) of gelled gasoline (essentially napalm) and sand from the Arkansas River was injected into the gas-producing limestone formation at 2,400 feet (730 m). The experiment was not very successful as the deliverability of the well did not change appreciably. The process was further described by J.B. Clark of Stanolind in his paper published in 1948. A patent on this process was issued in 1949 and an exclusive license was granted to the Halliburton Oil Well Cementing Company. On 17 March 1949, Halliburton performed the first two commercial hydraulic fracturing treatments in Stephens County, Oklahoma, and Archer County, Texas. Since then, hydraulic fracturing has been used to stimulate approximately one million oil and gas wells in various geologic regimes with good success.
In contrast with large-scale hydraulic fracturing used in low-permeability formations, small hydraulic fracturing treatments are commonly used in high-permeability formations to remedy "skin damage", a low-permeability zone that sometimes forms at the rock-borehole interface. In such cases the fracturing may extend only a few feet from the borehole. In the Soviet Union, the first hydraulic proppant fracturing was carried out in 1952. Other countries in Europe and Northern Africa subsequently employed hydraulic fracturing techniques including Norway, Poland, Czechoslovakia (before 1989), Yugoslavia (before 1991), Hungary, Austria, France, Italy, Bulgaria, Romania, Turkey, Tunisia, and Algeria.
Massive fracturing
Massive hydraulic fracturing (also known as high-volume hydraulic fracturing) is a technique first applied by Pan American Petroleum in Stephens County, Oklahoma, US in 1968. The definition of massive hydraulic fracturing varies, but generally refers to treatments injecting over 150 short tons, or approximately 300,000 pounds (136 metric tonnes), of proppant. American geologists gradually became aware that there were huge volumes of gas-saturated sandstones with permeability too low (generally less than 0.1 millidarcy) to recover the gas economically. Starting in 1973, massive hydraulic fracturing was used in thousands of gas wells in the San Juan Basin, Denver Basin, the Piceance Basin, and the Green River Basin, and in other hard rock formations of the western US. Other tight sandstone wells in the US made economically viable by massive hydraulic fracturing were in the Clinton-Medina Sandstone (Ohio, Pennsylvania, and New York), and Cotton Valley Sandstone (Texas and Louisiana). Massive hydraulic fracturing quickly spread in the late 1970s to western Canada, Rotliegend and Carboniferous gas-bearing sandstones in Germany, Netherlands (onshore and offshore gas fields), and the United Kingdom in the North Sea. Horizontal oil or gas wells were unusual until the late 1980s. Then, operators in Texas began completing thousands of oil wells by drilling horizontally in the Austin Chalk, and giving massive slickwater hydraulic fracturing treatments to the wellbores. Horizontal wells proved much more effective than vertical wells in producing oil from tight chalk; sedimentary beds are usually nearly horizontal, so horizontal wells have much larger contact areas with the target formation. Hydraulic fracturing operations have grown exponentially since the mid-1990s, when technologic advances and increases in the price of natural gas made this technique economically viable.
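The proppant threshold quoted above is the same quantity expressed in three unit systems; a short conversion confirms the equivalence.

```python
# Unit conversion for the "massive" fracturing threshold of 150 short tons of proppant.
short_tons = 150
pounds = short_tons * 2000                     # 1 short ton = 2,000 lb
metric_tonnes = pounds * 0.45359237 / 1000     # 1 lb = 0.45359237 kg

print(f"{short_tons} short tons = {pounds:,.0f} lb = {metric_tonnes:.0f} t")  # 300,000 lb ≈ 136 t
```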
Shales
Hydraulic fracturing of shales goes back at least to 1965, when some operators in the Big Sandy gas field of eastern Kentucky and southern West Virginia started hydraulically fracturing the Ohio Shale and Cleveland Shale, using relatively small fracs. The frac jobs generally increased production, especially from lower-yielding wells. In 1976, the United States government started the Eastern Gas Shales Project, which included numerous public-private hydraulic fracturing demonstration projects. During the same period, the Gas Research Institute, a gas industry research consortium, received approval for research and funding from the Federal Energy Regulatory Commission. In 1997, Nick Steinsberger, an engineer of Mitchell Energy (now part of Devon Energy), applied the slickwater fracturing technique, which had been used in East Texas, to the Barnett Shale of north Texas, using more water and higher pump pressure than previous fracturing techniques. In 1998, the new technique proved to be successful when the first 90 days' gas production from the well called S.H. Griffin No. 3 exceeded production of any of the company's previous wells. This new completion technique made gas extraction widely economical in the Barnett Shale, and was later applied to other shales, including the Eagle Ford and Bakken Shale. George P. Mitchell has been called the "father of fracking" because of his role in applying it in shales. The first horizontal well in the Barnett Shale was drilled in 1991, but horizontal drilling was not widely adopted in the Barnett until it was demonstrated that gas could be economically extracted from vertical wells there. As of 2013, massive hydraulic fracturing is being applied on a commercial scale to shales in the United States, Canada, and China. Several additional countries are planning to use hydraulic fracturing.
Process
According to the United States Environmental Protection Agency (EPA), hydraulic fracturing is a process to stimulate a natural gas, oil, or geothermal well to maximize extraction. The EPA defines the broader process to include acquisition of source water, well construction, well stimulation, and waste disposal.
Method
A hydraulic fracture is formed by pumping fracturing fluid into a wellbore at a rate sufficient to increase pressure at the target depth (determined by the location of the well casing perforations) to exceed the fracture gradient (pressure gradient) of the rock. The fracture gradient is defined as the pressure increase per unit of depth, and is usually measured in pounds per square inch per foot (psi/ft) or kilopascals per metre. The rock cracks, and the fracturing fluid permeates the rock, extending the crack further. Fractures are localized as pressure drops off with the rate of frictional loss, which is relative to the distance from the well. Operators typically try to maintain "fracture width", or slow its decline following treatment, by introducing a proppant into the injected fluid – a material such as grains of sand, ceramic, or other particulate, thus preventing the fractures from closing when injection is stopped and pressure removed. Consideration of proppant strength and prevention of proppant failure becomes more important at greater depths where pressure and stresses on fractures are higher. The propped fracture is permeable enough to allow the flow of gas, oil, salt water and hydraulic fracturing fluids to the well. During the process, fracturing fluid leakoff (loss of fracturing fluid from the fracture channel into the surrounding permeable rock) occurs. If not controlled, it can exceed 70% of the injected volume. This may result in formation matrix damage, adverse formation fluid interaction, and altered fracture geometry, thereby decreasing efficiency. The location of one or more fractures along the length of the borehole is strictly controlled by various methods that create or seal holes in the side of the wellbore. Hydraulic fracturing is performed in cased wellbores, and the zones to be fractured are accessed by perforating the casing at those locations. Hydraulic-fracturing equipment used in oil and natural gas fields usually consists of a slurry blender, one or more high-pressure, high-volume fracturing pumps (typically powerful triplex or quintuplex pumps) and a monitoring unit. Associated equipment includes fracturing tanks, one or more units for storage and handling of proppant, high-pressure treating iron, a chemical additive unit (used to accurately monitor chemical addition), fracking hose (low-pressure flexible hoses), and many gauges and meters for flow rate, fluid density, and treating pressure. Chemical additives are typically 0.5% of the total fluid volume. Fracturing equipment operates over a range of pressures and injection rates, and can reach up to 100 megapascals (15,000 psi) and 265 litres per second (9.4 cu ft/s; 133 US bbl/min).
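A minimal sketch of the pressure estimate implied by the fracture-gradient definition above: the fluid pressure needed at the perforations is roughly the gradient multiplied by depth. The gradient (0.7 psi/ft) and depth (8,000 ft) used here are assumed, illustrative values, not figures from the source.

```python
# Rough fracture-initiation pressure from an assumed fracture gradient and depth.
fracture_gradient_psi_per_ft = 0.7   # assumed, typical order of magnitude for sedimentary basins
depth_ft = 8000                      # assumed depth of the perforated interval

required_pressure_psi = fracture_gradient_psi_per_ft * depth_ft
print(f"Fluid pressure needed at {depth_ft:,} ft: about {required_pressure_psi:,.0f} psi")  # ~5,600 psi
```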
Well types
A distinction can be made between conventional, low-volume hydraulic fracturing, used to stimulate high-permeability reservoirs for a single well, and unconventional, high-volume hydraulic fracturing, used in the completion of tight gas and shale gas wells. High-volume hydraulic fracturing usually requires higher pressures than low-volume fracturing; the higher pressures are needed to push out larger volumes of fluid and proppant that extend farther from the borehole. Horizontal drilling involves wellbores with a terminal drillhole completed as a "lateral" that extends parallel with the rock layer containing the substance to be extracted. For example, laterals extend 1,500 to 5,000 feet (460 to 1,520 m) in the Barnett Shale basin in Texas, and up to 10,000 feet (3,000 m) in the Bakken formation in North Dakota. In contrast, a vertical well only accesses the thickness of the rock layer, typically 50–300 feet (15–91 m). Horizontal drilling reduces surface disruptions as fewer wells are required to access the same volume of rock.
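Using the lateral lengths and layer thicknesses quoted above, the contact-length advantage of a horizontal well over a vertical well can be illustrated with a simple ratio; this is a derived comparison, not a figure from the source.

```python
# Ratio of formation contact length: horizontal lateral vs. vertical penetration of the layer.
lateral_length_ft = 5000     # upper end of Barnett Shale laterals (from the text)
layer_thickness_ft = 300     # upper end of the typical layer thickness a vertical well crosses

contact_ratio = lateral_length_ft / layer_thickness_ft
print(f"A {lateral_length_ft:,} ft lateral contacts ~{contact_ratio:.0f}x more formation "
      f"than a vertical well through a {layer_thickness_ft} ft thick layer")
```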
Drilling often plugs up the pore spaces at the wellbore wall, reducing permeability at and near the wellbore. This reduces flow into the borehole from the surrounding rock formation, and partially seals off the borehole from the surrounding rock. Low-volume hydraulic fracturing can be used to restore permeability.
Fracturing fluids
The main purposes of fracturing fluid are to extend fractures, add lubrication, change gel strength, and to carry proppant into the formation. There are two methods of transporting proppant in the fluid – high-rate and high-viscosity. High-viscosity fracturing tends to cause large dominant fractures, while high-rate (slickwater) fracturing causes small spread-out micro-fractures. Water-soluble gelling agents (such as guar gum) increase viscosity and efficiently deliver proppant into the formation.
Fluid is typically a slurry of water, proppant, and chemical additives. Additionally, gels, foams, and compressed gases, including nitrogen, carbon dioxide and air, can be injected. Typically, 90% of the fluid is water and 9.5% is sand, with chemical additives accounting for about 0.5%. However, fracturing fluids have been developed using liquefied petroleum gas (LPG) and propane. This process is called waterless fracturing. When propane is used it is turned into vapor by the high pressure and high temperature. The propane vapor and natural gas both return to the surface and can be collected, making the propane easier to reuse or resell; the chemicals used do not return to the surface. The proppant is a granular material that prevents the created fractures from closing after the fracturing treatment. Types of proppant include silica sand, resin-coated sand, bauxite, and man-made ceramics. The choice of proppant depends on the type of permeability or grain strength needed. In some formations, where the pressure is great enough to crush grains of natural silica sand, higher-strength proppants such as bauxite or ceramics may be used. The most commonly used proppant is silica sand, though proppants of uniform size and shape, such as a ceramic proppant, are believed to be more effective.
The fracturing fluid varies depending on fracturing type desired, and the conditions of specific wells being fractured, and water characteristics. The fluid can be gel, foam, or slickwater-based. Fluid choices are tradeoffs: more viscous fluids, such as gels, are better at keeping proppant in suspension; while less-viscous and lower-friction fluids, such as slickwater, allow fluid to be pumped at higher rates, to create fractures farther out from the wellbore. Important material properties of the fluid include viscosity, pH, various rheological factors, and others.
Water is mixed with sand and chemicals to create hydraulic fracturing fluid. Approximately 40,000 gallons of chemicals are used per fracturing job.
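Combining the two figures above (additives at roughly 0.5% of total fluid volume, and roughly 40,000 gallons of chemicals per job) implies a total fluid volume of several million gallons per job. The calculation below is a derived estimate, not a figure stated in the source.

```python
# Total fluid volume implied by the additive share and the chemical volume per job.
chemical_gal = 40_000        # gallons of chemical additives per fracturing job (from the text)
additive_fraction = 0.005    # additives ~0.5% of total fluid volume (from the text)

total_fluid_gal = chemical_gal / additive_fraction
water_gal = 0.90 * total_fluid_gal
print(f"Implied total fluid: {total_fluid_gal:,.0f} gal, of which ~{water_gal:,.0f} gal is water")
# ~8,000,000 gal total, ~7,200,000 gal water
```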
A typical fracture treatment uses between 3 and 12 additive chemicals. Although there may be unconventional fracturing fluids, typical chemical additives can include one or more of the following:
Acids—hydrochloric acid or acetic acid is used in the pre-fracturing stage to clean the perforations and initiate fissures in the near-wellbore rock.
Sodium chloride (salt)—delays breakdown of gel polymer chains.
Polyacrylamide and other friction reducers decrease turbulence in fluid flow and pipe friction, thus allowing the pumps to pump at a higher rate without having greater pressure on the surface.
Ethylene glycol—prevents formation of scale deposits in the pipe.
Borate salts—used for maintaining fluid viscosity during the temperature increase.
Sodium and potassium carbonates—used for maintaining effectiveness of crosslinkers.
Glutaraldehyde—a biocide that prevents pipe corrosion caused by microbial activity.
Guar gum and other water-soluble gelling agents—increase the viscosity of the fracturing fluid to deliver proppant into the formation more efficiently.
Citric acid—used for corrosion prevention.
Isopropanol—used to winterize the chemicals to ensure they do not freeze. The most common chemical used for hydraulic fracturing in the United States in 2005–2009 was methanol, while other widely used chemicals included isopropyl alcohol, 2-butoxyethanol, and ethylene glycol. Typical fluid types are:
Conventional linear gels. These gels are cellulose derivative (carboxymethyl cellulose, hydroxyethyl cellulose, carboxymethyl hydroxyethyl cellulose, hydroxypropyl cellulose, hydroxyethyl methyl cellulose), guar or its derivatives (hydroxypropyl guar, carboxymethyl hydroxypropyl guar), mixed with other chemicals.
Borate-crosslinked fluids. These are guar-based fluids cross-linked with boron ions (from aqueous borax/boric acid solution). These gels have higher viscosity at pH 9 and above and are used to carry proppant. After the fracturing job, the pH is reduced to 3–4 so that the cross-links break, making the gel less viscous so that it can be pumped out.
Organometallic-crosslinked fluids – zirconium, chromium, antimony, titanium salts – are known to crosslink guar-based gels. The crosslinking mechanism is not reversible, so once the proppant is pumped down along with cross-linked gel, the fracturing part is done. The gels are broken down with appropriate breakers.
Aluminium phosphate-ester oil gels. Aluminium phosphate and ester oils are slurried to form a cross-linked gel. These were among the first known gelling systems. For slickwater fluids the use of sweeps is common. Sweeps are temporary reductions in the proppant concentration, which help ensure that the well is not overwhelmed with proppant. As the fracturing process proceeds, viscosity-reducing agents such as oxidizers and enzyme breakers are sometimes added to the fracturing fluid to deactivate the gelling agents and encourage flowback. Such oxidizers react with and break down the gel, reducing the fluid's viscosity and ensuring that no proppant is pulled from the formation. An enzyme acts as a catalyst for breaking down the gel. Sometimes pH modifiers are used to break down the crosslink at the end of a hydraulic fracturing job, since many crosslinked gels require a pH buffer system to stay viscous. At the end of the job, the well is commonly flushed with water under pressure (sometimes blended with a friction-reducing chemical). Some (but not all) injected fluid is recovered. This fluid is managed by several methods, including underground injection control, treatment, discharge, recycling, and temporary storage in pits or containers. New technology is continually being developed to better handle wastewater and improve reusability.
Fracture monitoring
Measurements of the pressure and rate during the growth of a hydraulic fracture, combined with knowledge of the fluid properties and proppant being injected into the well, provide the most common and simplest method of monitoring a hydraulic fracture treatment. These data, along with knowledge of the underground geology, can be used to model information such as the length, width, and conductivity of a propped fracture.
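As a minimal illustration of the bookkeeping behind such monitoring, the Python sketch below accumulates injected slurry volume and proppant mass from a staged pumping schedule of rates, durations, and proppant concentrations. All of the schedule numbers are hypothetical, and real treatment monitoring also combines this with the pressure record and a fracture-mechanics model, which this sketch does not attempt.

# Minimal material-balance bookkeeping for a hydraulic fracture treatment.
# Each stage is (pump rate in barrels/min, duration in min, proppant
# concentration in pounds per gallon of slurry). All numbers are
# hypothetical; real monitoring also uses the pressure record and a
# fracture-mechanics model, which this sketch does not attempt.

GALLONS_PER_BARREL = 42.0

stages = [
    # (rate_bpm, minutes, proppant_ppg)
    (60.0, 30, 0.0),   # pad stage: no proppant
    (70.0, 45, 1.0),
    (70.0, 45, 2.0),
    (65.0, 30, 3.0),   # tail-in at higher concentration
]

total_slurry_gal = 0.0
total_proppant_lb = 0.0
for rate_bpm, minutes, ppg in stages:
    slurry_gal = rate_bpm * GALLONS_PER_BARREL * minutes
    total_slurry_gal += slurry_gal
    total_proppant_lb += slurry_gal * ppg

print(f"total slurry pumped : {total_slurry_gal:,.0f} gallons")
print(f"total proppant      : {total_proppant_lb:,.0f} lb "
      f"({total_proppant_lb / 2000:,.1f} short tons)")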
Radionuclide monitoring
Injection of radioactive tracers along with the fracturing fluid is sometimes used to determine the injection profile and location of created fractures. Radiotracers are selected to have readily detectable radiation, appropriate chemical properties, and a half-life and toxicity level that will minimize initial and residual contamination. Radioactive isotopes chemically bonded to glass (sand) and/or resin beads may also be injected to track fractures. For example, plastic pellets coated with 10 GBq of Ag-110m may be added to the proppant, or sand may be labelled with Ir-192, so that the proppant's progress can be monitored. Radiotracers such as Tc-99m and I-131 are also used to measure flow rates. The Nuclear Regulatory Commission publishes guidelines which list a wide range of radioactive materials in solid, liquid and gaseous forms that may be used as tracers and limit the amount that may be used per injection and per well of each radionuclide. A new technique in well-monitoring involves fiber-optic cables outside the casing. Using the fiber optics, temperatures can be measured every foot along the well – even while the wells are being fracked and pumped. By monitoring the temperature of the well, engineers can determine how much hydraulic fracturing fluid different parts of the well use as well as how much natural gas or oil they collect, during the hydraulic fracturing operation and when the well is producing.
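To illustrate why half-life matters for residual contamination, the short Python sketch below computes how much of an injected tracer's activity remains after a given time using the standard exponential-decay relation. The half-life figures are approximate published values, and the 10 GBq starting activity and 90-day window are arbitrary example inputs.

# Radioactive decay of candidate tracers: remaining activity after time t is
# A(t) = A0 * 0.5 ** (t / half_life). Half-lives below are approximate
# published values; the 10 GBq starting activity is just an example figure.

half_life_days = {
    "Tc-99m": 0.25,    # ~6 hours
    "I-131": 8.0,
    "Ir-192": 74.0,
}

initial_activity_gbq = 10.0
elapsed_days = 90.0

for isotope, t_half in half_life_days.items():
    remaining = initial_activity_gbq * 0.5 ** (elapsed_days / t_half)
    print(f"{isotope:>7}: {remaining:.3e} GBq after {elapsed_days:.0f} days")

# Short-lived tracers such as Tc-99m decay to negligible levels within days,
# while longer-lived ones such as Ir-192 persist for months.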
Microseismic monitoring
For more advanced applications, microseismic monitoring is sometimes used to estimate the size and orientation of induced fractures. Microseismic activity is measured by placing an array of geophones in a nearby wellbore. By mapping the location of any small seismic events associated with the growing fracture, the approximate geometry of the fracture is inferred. Tiltmeter arrays deployed on the surface or down a well provide another technology for monitoring strain. Microseismic mapping is very similar geophysically to seismology. In earthquake seismology, seismometers scattered on or near the surface of the earth record S-waves and P-waves that are released during an earthquake event. This allows for motion along the fault plane to be estimated and its location in the Earth's subsurface mapped. Hydraulic fracturing produces an increase in formation stress proportional to the net fracturing pressure, as well as an increase in pore pressure due to leakoff. Tensile stresses are generated ahead of the fracture's tip, generating large amounts of shear stress. The increases in pore water pressure and in formation stress combine and affect weaknesses near the hydraulic fracture, like natural fractures, joints, and bedding planes. Different methods have different location errors and advantages. Accuracy of microseismic event mapping is dependent on the signal-to-noise ratio and the distribution of sensors. Accuracy of events located by seismic inversion is improved by sensors placed in multiple azimuths from the monitored borehole. In a downhole array location, accuracy of events is improved by being close to the monitored borehole (high signal-to-noise ratio).
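A common first step in locating a microseismic event is to estimate its distance from a geophone using the delay between the P-wave and S-wave arrivals. The Python sketch below shows that calculation with assumed velocities roughly typical of sedimentary rock; the velocities and the arrival-time delay are illustrative assumptions, and real surveys invert arrivals at many sensors to obtain a full three-dimensional location.

# Distance to a microseismic event from the S-minus-P arrival-time delay at a
# single geophone: d = dt / (1/Vs - 1/Vp). Velocities are assumed values
# roughly typical of sedimentary rock; real surveys invert arrivals at many
# sensors to get a 3-D location rather than a single distance.

vp = 4000.0   # assumed P-wave velocity, m/s
vs = 2300.0   # assumed S-wave velocity, m/s
dt = 0.12     # observed S-P arrival-time difference, seconds

distance_m = dt / (1.0 / vs - 1.0 / vp)
print(f"estimated event distance: {distance_m:.0f} m")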
Monitoring of microseismic events induced by reservoir stimulation has become a key aspect in the evaluation and optimization of hydraulic fractures. The main goal of hydraulic fracture monitoring is to completely characterize the induced fracture structure and the distribution of conductivity within a formation. Geomechanical analysis, such as understanding a formation's material properties, in-situ conditions, and geometries, helps monitoring by providing a better definition of the environment in which the fracture network propagates. The next task is to determine the location of proppant within the fracture and the distribution of fracture conductivity. This can be monitored using multiple types of techniques to ultimately develop a reservoir model that accurately predicts well performance.
Horizontal completions
Since the early 2000s, advances in drilling and completion technology have made horizontal wellbores much more economical. Horizontal wellbores allow far greater exposure to a formation than conventional vertical wellbores. This is particularly useful in shale formations which do not have sufficient permeability to produce economically with a vertical well. Such wells, when drilled onshore, are now usually hydraulically fractured in a number of stages, especially in North America. The type of wellbore completion is used to determine how many times a formation is fractured, and at what locations along the horizontal section. In North America, shale reservoirs such as the Bakken, Barnett, Montney, Haynesville, Marcellus, and most recently the Eagle Ford, Niobrara and Utica shales are drilled horizontally through the producing intervals, completed and fractured. Fractures are most commonly placed along the wellbore by one of two methods, known as "plug and perf" and "sliding sleeve". The wellbore for a plug-and-perf job is generally composed of standard steel casing, cemented or uncemented, set in the drilled hole. Once the drilling rig has been removed, a wireline truck is used to perforate near the bottom of the well, and then fracturing fluid is pumped. Then the wireline truck sets a plug in the well to temporarily seal off that section so the next section of the wellbore can be treated. Another stage is pumped, and the process is repeated along the horizontal length of the wellbore. The wellbore for the sliding sleeve technique is different in that the sliding sleeves are included at set spacings in the steel casing at the time it is set in place. The sliding sleeves are usually all closed at this time. When the well is due to be fractured, the bottom sliding sleeve is opened using one of several activation techniques and the first stage gets pumped. Once finished, the next sleeve is opened, concurrently isolating the previous stage, and the process repeats. For the sliding sleeve method, wireline is usually not required.
These completion techniques may allow for more than 30 stages to be pumped into the horizontal section of a single well if required, which is far more than would typically be pumped into a vertical well that had far fewer feet of producing zone exposed.
Uses
Hydraulic fracturing is used to increase the rate at which substances such as petroleum or natural gas can be recovered from subterranean natural reservoirs. Reservoirs are typically porous sandstones, limestones or dolomite rocks, but also include "unconventional reservoirs" such as shale rock or coal beds. Hydraulic fracturing enables the extraction of natural gas and oil from rock formations deep below the earth's surface (generally 2,000–6,000 m (5,000–20,000 ft)), well below typical groundwater reservoir levels. At such depths, there may be insufficient permeability or reservoir pressure to allow natural gas and oil to flow from the rock into the wellbore at economically viable rates. Thus, creating conductive fractures in the rock is instrumental in extraction from naturally impermeable shale reservoirs. Permeability is measured in the microdarcy to nanodarcy range. Fractures provide a conductive path connecting a larger volume of reservoir to the well. So-called "super fracking" creates cracks deeper in the rock formation to release more oil and gas, and increases efficiency. The yield for typical shale bores generally falls off after the first year or two, but the peak producing life of a well can be extended to several decades.
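To see why microdarcy-to-nanodarcy rock will not flow economically without induced fractures, the Python sketch below applies Darcy's law to a hypothetical one-square-metre slab of reservoir rock. All of the input values (pressure drop, viscosity, flow length) are illustrative assumptions rather than properties of any particular formation; the point is the relative difference between conventional and tight rock.

# Darcy's law, q = k * A * dP / (mu * L), for single-phase flow through a
# hypothetical 1 m^2 slab of rock. All inputs are illustrative assumptions;
# the point is the ~10,000x difference between a conventional millidarcy
# sandstone and a 100-nanodarcy shale at the same pressure drop.

DARCY_M2 = 9.869e-13        # 1 darcy in square metres

area_m2 = 1.0               # assumed flow cross-section
length_m = 100.0            # assumed flow path length
delta_p_pa = 2.0e7          # assumed pressure drop (20 MPa)
viscosity_pa_s = 2.0e-5     # assumed gas viscosity

def darcy_rate(perm_darcy):
    k = perm_darcy * DARCY_M2
    return k * area_m2 * delta_p_pa / (viscosity_pa_s * length_m)  # m^3/s

for label, perm in [("conventional sandstone (1 millidarcy)", 1e-3),
                    ("tight shale (100 nanodarcy)", 100e-9)]:
    q = darcy_rate(perm)
    print(f"{label}: {q * 86400:.4f} m^3/day per m^2 of rock face")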
Non-oil/gas uses
While the main industrial use of hydraulic fracturing is in stimulating production from oil and gas wells, hydraulic fracturing is also applied:
To stimulate groundwater wells
To precondition rock or induce cave-ins in mining
As a means of enhancing waste remediation, usually hydrocarbon waste or spills
To dispose of waste by injection deep into rock
To measure stress in the Earth
For electricity generation in enhanced geothermal systems
To increase injection rates for geologic sequestration of CO2
To store electrical energy, as in pumped storage hydroelectricity
Since the late 1970s, hydraulic fracturing has been used, in some cases, to increase the yield of drinking water from wells in a number of countries, including the United States, Australia, and South Africa.
Economic effects
Hydraulic fracturing has been seen as one of the key methods of extracting unconventional oil and unconventional gas resources. According to the International Energy Agency, the remaining technically recoverable resources of shale gas are estimated to amount to 208 trillion cubic metres (7,300 trillion cubic feet), tight gas to 76 trillion cubic metres (2,700 trillion cubic feet), and coalbed methane to 47 trillion cubic metres (1,700 trillion cubic feet). As a rule, formations of these resources have lower permeability than conventional gas formations. Therefore, depending on the geological characteristics of the formation, specific technologies such as hydraulic fracturing are required. Although there are also other methods to extract these resources, such as conventional drilling or horizontal drilling, hydraulic fracturing is one of the key methods making their extraction economically viable. The multi-stage fracturing technique has facilitated the development of shale gas and light tight oil production in the United States and is believed to do so in other countries with unconventional hydrocarbon resources. A large majority of studies indicate that hydraulic fracturing in the United States has had a strong positive economic benefit so far. The Brookings Institution estimates that the benefits of shale gas alone have led to a net economic benefit of $48 billion per year. Most of this benefit is within the consumer and industrial sectors due to the significantly reduced prices for natural gas. Other studies have suggested that the economic benefits are outweighed by the externalities and that the levelized cost of electricity (LCOE) from less carbon and water intensive sources is lower. The primary benefit of hydraulic fracturing is to offset imports of natural gas and oil, where the cost paid to producers otherwise exits the domestic economy. However, shale oil and gas is highly subsidised in the US, and has not yet covered production costs – meaning that the cost of hydraulic fracturing is paid for in income taxes, and in many cases is up to double the cost paid at the pump. Research suggests that hydraulic fracturing wells have an adverse effect on agricultural productivity in the vicinity of the wells. One paper found "that productivity of an irrigated crop decreases by 5.7% when a well is drilled during the agriculturally active months within 11–20 km radius of a producing township. This effect becomes smaller and weaker as the distance between township and wells increases." The findings imply that the introduction of hydraulic fracturing wells to Alberta cost the province $14.8 million in 2014 due to the decline in crop productivity. The Energy Information Administration of the US Department of Energy estimates that 45% of US gas supply will come from shale gas by 2035 (with the vast majority of this replacing conventional gas, which has a lower greenhouse-gas footprint).
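The cubic-metre and cubic-foot resource figures above can be cross-checked with a simple unit conversion; the short Python snippet below reproduces the approximate trillion-cubic-foot values from the metric ones, using the standard conversion factor of roughly 35.31 cubic feet per cubic metre.

# Cross-check of the IEA resource figures quoted above: convert trillion
# cubic metres (tcm) to trillion cubic feet (tcf) using 1 m^3 ~= 35.31 ft^3.

M3_TO_FT3 = 35.31

resources_tcm = {"shale gas": 208, "tight gas": 76, "coalbed methane": 47}

for name, tcm in resources_tcm.items():
    print(f"{name:>15}: {tcm} tcm ~= {tcm * M3_TO_FT3:,.0f} tcf")

# Yields roughly 7,300 / 2,700 / 1,700 tcf, matching the rounded figures above.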
Public debate
Politics and public policy
Popular movement and civil society organizations
An anti-fracking movement has emerged both internationally, with involvement of international environmental organizations and nations such as France, and locally in affected areas such as Balcombe in Sussex, where the Balcombe drilling protest was in progress during mid-2013. The considerable opposition against hydraulic fracturing activities in local townships in the United States has led companies to adopt a variety of public relations measures to reassure the public, including the employment of former military personnel with training in psychological warfare operations. According to Matt Pitzarella, the communications director at Range Resources, employees trained in the Middle East have been valuable to Range Resources in Pennsylvania when dealing with emotionally charged township meetings and advising townships on zoning and local ordinances dealing with hydraulic fracturing. There have been many protests directed at hydraulic fracturing. For example, ten people were arrested in 2013 during an anti-fracking protest near New Matamoras, Ohio, after they illegally entered a development zone and latched themselves to drilling equipment. In northwest Pennsylvania, there was a drive-by shooting at a well site, in which someone shot two rounds of a small-caliber rifle in the direction of a drilling rig. In Washington County, Pennsylvania, a contractor working on a gas pipeline found a pipe bomb that had been placed where a pipeline was to be constructed, which local authorities said would have caused a "catastrophe" had they not discovered and detonated it.
U.S. government and corporate lobbying
The United States Department of State established the Global Shale Gas Initiative to persuade governments around the world to give concessions to the major oil and gas companies to set up fracking operations. A document from the United States diplomatic cables leak shows that, as part of this project, U.S. officials convened conferences for foreign government officials that featured presentations by major oil and gas company representatives and by public relations professionals with expertise on how to assuage populations of target countries whose citizens were often quite hostile to fracking on their lands. The US government project succeeded as many countries on several continents acceded to the idea of granting concessions for fracking; Poland, for example, agreed to permit fracking by the major oil and gas corporations on nearly a third of its territory. The US Export-Import Bank, an agency of the US government, provided $4.7 billion in financing for fracking operations set up since 2010 in Queensland, Australia.
Alleged Russian state advocacy
In 2014 a number of European officials suggested that several major European protests against hydraulic fracturing (with mixed success in Lithuania and Ukraine) may be partially sponsored by Gazprom, Russia's state-controlled gas company. The New York Times suggested that Russia saw its natural gas exports to Europe as a key element of its geopolitical influence, and that this market would diminish if hydraulic fracturing is adopted in Eastern Europe, as it opens up significant shale gas reserves in the region. Russian officials have on numerous occasions made public statements to the effect that hydraulic fracturing "poses a huge environmental problem".
Current fracking operations
Hydraulic fracturing is currently taking place in the United States in Arkansas, California, Colorado, Louisiana, North Dakota, Oklahoma, Pennsylvania, Texas, Virginia, West Virginia, and Wyoming. Other states, such as Alabama, Indiana, Michigan, Mississippi, New Jersey, New York, and Ohio, are either considering or preparing for drilling using this method. Maryland and Vermont have permanently banned hydraulic fracturing, and New York and North Carolina have instituted temporary bans. New Jersey currently has a bill before its legislature to extend a 2012 moratorium on hydraulic fracturing that recently expired. Although a hydraulic fracturing moratorium was recently lifted in the United Kingdom, the government is proceeding cautiously because of concerns about earthquakes and the environmental effect of drilling. Hydraulic fracturing is currently banned in France and Bulgaria.
Documentary films
Josh Fox's 2010 Academy Award-nominated film Gasland became a center of opposition to hydraulic fracturing of shale. The movie presented problems with groundwater contamination near well sites in Pennsylvania, Wyoming, and Colorado. Energy in Depth, an oil and gas industry lobbying group, called the film's facts into question. In response, a rebuttal of Energy in Depth's claims of inaccuracy was posted on Gasland's website. The Director of the Colorado Oil and Gas Conservation Commission (COGCC) offered to be interviewed as part of the film if he could review what was included from the interview in the final film, but Fox declined the offer. Exxon Mobil, Chevron Corporation and ConocoPhillips aired advertisements during 2011 and 2012 that claimed to describe the economic and environmental benefits of natural gas and argue that hydraulic fracturing was safe. The 2012 film Promised Land, starring Matt Damon, takes on hydraulic fracturing. The gas industry countered the film's criticisms of hydraulic fracturing with flyers, and Twitter and Facebook posts. In January 2013, Northern Irish journalist and filmmaker Phelim McAleer released a crowdfunded documentary called FrackNation as a response to the statements made by Fox in Gasland, claiming it "tells the truth about fracking for natural gas". FrackNation premiered on Mark Cuban's AXS TV. The premiere corresponded with the release of Promised Land. In April 2013, Josh Fox released Gasland 2, his "international odyssey uncovering a trail of secrets, lies and contamination related to hydraulic fracking". It challenges the gas industry's portrayal of natural gas as a clean and safe alternative to oil as a myth, arguing that hydraulically fractured wells inevitably leak over time, contaminating water and air, hurting families, and endangering the earth's climate with the potent greenhouse gas methane.
In 2014, Scott Cannon of Video Innovations released the documentary The Ethics of Fracking. The film covers the political, spiritual, scientific, medical, and professional points of view on hydraulic fracturing. It also digs into the way the gas industry portrays hydraulic fracturing in its advertising. In 2015, the Canadian documentary film Fractured Land had its world premiere at the Hot Docs Canadian International Documentary Festival.
Research issues
Typically the funding source of the research studies is a focal point of controversy. Concerns have been raised about research funded by foundations and corporations, or by environmental groups, which can at times lead to at least the appearance of unreliable studies. Several organizations, researchers, and media outlets have reported difficulty in conducting and reporting the results of studies on hydraulic fracturing due to industry and governmental pressure, and expressed concern over possible censoring of environmental reports. Some have argued there is a need for more research into the environmental and health effects of the technique.
Health risks
There is concern over the possible adverse public health implications of hydraulic fracturing activity. A 2013 review on shale gas production in the United States stated, "with increasing numbers of drilling sites, more people are at risk from accidents and exposure to harmful substances used at fractured wells." A 2011 hazard assessment recommended full disclosure of chemicals used for hydraulic fracturing and drilling as many have immediate health effects, and many may have long-term health effects. In June 2014, Public Health England published a review of the potential public health impacts of exposures to chemical and radioactive pollutants as a result of shale gas extraction in the UK, based on the examination of literature and data from countries where hydraulic fracturing already occurs. The executive summary of the report stated: "An assessment of the currently available evidence indicates that the potential risks to public health from exposure to the emissions associated with shale gas extraction will be low if the operations are properly run and regulated. Most evidence suggests that contamination of groundwater, if it occurs, is most likely to be caused by leakage through the vertical borehole. Contamination of groundwater from the underground hydraulic fracturing process itself (i.e. the fracturing of the shale) is unlikely. However, surface spills of hydraulic fracturing fluids or wastewater may affect groundwater, and emissions to air also have the potential to impact on health. Where potential risks have been identified in the literature, the reported problems are typically a result of operational failure and a poor regulatory environment.": iii A 2012 report prepared for the European Union Directorate-General for the Environment identified potential risks to humans from air pollution and ground water contamination posed by hydraulic fracturing. This led to a series of recommendations in 2014 to mitigate these concerns. A 2012 guidance for pediatric nurses in the US said that hydraulic fracturing had a potential negative impact on public health and that pediatric nurses should be prepared to gather information on such topics so as to advocate for improved community health. A 2017 study in The American Economic Review found that "additional well pads drilled within 1 kilometer of a community water system intake increases shale gas-related contaminants in drinking water." A 2022 study conducted by Harvard T.H. Chan School of Public Health and published in Nature Energy found that elderly people living near or downwind of unconventional oil and gas development (UOGD), which involves extraction methods including fracking, are at greater risk of experiencing early death compared with elderly persons who don't live near such operations. Statistics collected by the U.S. Department of Labor and analyzed by the U.S. Centers for Disease Control and Prevention show a correlation between drilling activity and the number of occupational injuries related to drilling and motor vehicle accidents, explosions, falls, and fires. Extraction workers are also at risk for developing pulmonary diseases, including lung cancer and silicosis (the latter because of exposure to silica dust generated from rock drilling and the handling of sand). The U.S. National Institute for Occupational Safety and Health (NIOSH) identified exposure to airborne silica as a health hazard to workers conducting some hydraulic fracturing operations.
NIOSH and OSHA issued a joint hazard alert on this topic in June 2012. Additionally, the extraction workforce is at increased risk for radiation exposure. Fracking activities often require drilling into rock that contains naturally occurring radioactive material (NORM), such as radon, thorium, and uranium. Another report, by the Canadian Medical Journal, identified 55 factors that may cause cancer, including 20 that have been shown to increase the risk of leukemia and lymphoma. The Yale Public Health analysis warns that millions of people living within a mile of fracking wells may have been exposed to these chemicals.
Environmental effects
The potential environmental effects of hydraulic fracturing include air emissions and climate change, high water consumption, groundwater contamination, land use, risk of earthquakes, noise pollution, and various health effects on humans. Air emissions are primarily methane that escapes from wells, along with industrial emissions from equipment used in the extraction process. Modern UK and EU regulation requires zero emissions of methane, a potent greenhouse gas. Escape of methane is a bigger problem in older wells than in ones built under more recent EU legislation. In December 2016, the United States Environmental Protection Agency (EPA) issued the "Hydraulic Fracturing for Oil and Gas: Impacts from the Hydraulic Fracturing Water Cycle on Drinking Water Resources in the United States (Final Report)." The EPA found scientific evidence that hydraulic fracturing activities can impact drinking water resources. A few of the main reasons why drinking water can be contaminated, according to the EPA, are:
Water removal to be used for fracking in times or areas of low water availability
Spills while handling fracking fluids and chemicals that result in large volumes or high concentrations of chemicals reaching groundwater resources
Injection of fracking fluids into wells when mishandling machinery, allowing gases or liquids to move to groundwater resources
Injection of fracking fluids directly into groundwater resources
Leak of defective hydraulic fracturing wastewater to surface water
Disposal or storage of fracking wastewater in unlined pits, resulting in contamination of groundwater resources.
Hydraulic fracturing uses between 1.2 and 3.5 million US gallons (4,500 and 13,200 m3) of water per well, with large projects using up to 5 million US gallons (19,000 m3). Additional water is used when wells are refractured. An average well requires 3 to 8 million US gallons (11,000 to 30,000 m3) of water over its lifetime. According to the Oxford Institute for Energy Studies, greater volumes of fracturing fluids are required in Europe, where the shale depths average 1.5 times greater than in the U.S. Surface water may be contaminated through spillage and improperly built and maintained waste pits, and ground water can be contaminated if the fluid is able to escape the formation being fractured (through, for example, abandoned wells, fractures, and faults) or by produced water (the returning fluids, which also contain dissolved constituents such as minerals and brine waters). The possibility of groundwater contamination from brine and fracturing fluid leakage through old abandoned wells is low. Produced water is managed by underground injection, municipal and commercial wastewater treatment and discharge, self-contained systems at well sites or fields, and recycling to fracture future wells. Typically less than half of the produced water used to fracture the formation is recovered. In the United States, there are over 12 million acres being used for fossil fuel extraction. About 3.6 hectares (8.9 acres) of land is needed per drill pad for surface installations. This is equivalent to the area of about six Yellowstone National Parks. Well pad and supporting structure construction significantly fragments landscapes, which likely has negative effects on wildlife. These sites need to be remediated after wells are exhausted. Research indicates that the cost of these effects on ecosystem services (i.e., those processes that the natural world provides to humanity) has reached over $250 million per year in the U.S. Each well pad (on average 10 wells per pad) requires about 800 to 2,500 days of noisy activity during the preparatory and hydraulic fracturing process, which affects both residents and local wildlife. In addition, noise is created by continuous truck traffic (sand, etc.) needed in hydraulic fracturing. Research is underway to determine if human health has been affected by air and water pollution, and rigorous following of safety procedures and regulation is required to avoid harm and to manage the risk of accidents that could cause harm. In July 2013, the US Federal Railroad Administration listed oil contamination by hydraulic fracturing chemicals as "a possible cause" of corrosion in oil tank cars. Hydraulic fracturing has sometimes been linked to induced seismicity or earthquakes. The magnitude of these events is usually too small to be detected at the surface, although tremors attributed to fluid injection into disposal wells have been large enough to have often been felt by people, and to have caused property damage and possibly injuries. A U.S. Geological Survey reported that up to 7.9 million people in several states have a similar earthquake risk to that of California, with hydraulic fracturing and similar practices being a prime contributing factor. Microseismic events are often used to map the horizontal and vertical extent of the fracturing.
A better understanding of the geology of the area being fracked and used for injection wells can be helpful in mitigating the potential for significant seismic events. People obtain drinking water from either surface water, which includes rivers and reservoirs, or groundwater aquifers, accessed by public or private wells. There are already numerous documented instances in which nearby groundwater has been contaminated by fracking activities, requiring residents with private wells to obtain outside sources of water for drinking and everyday use. Per- and polyfluoroalkyl substances, also known as "PFAS" or "forever chemicals", have been linked to cancer and birth defects. The chemicals used in fracking stay in the environment. Once there, those chemicals will eventually break down into PFAS. These chemicals can escape from drilling sites and into the groundwater. PFAS are able to leak into underground wells that store millions of gallons of wastewater. Despite these health concerns and efforts to institute a moratorium on fracking until its environmental and health effects are better understood, the United States continues to rely heavily on fossil fuel energy. In 2017, 37% of annual U.S. energy consumption was derived from petroleum, 29% from natural gas, 14% from coal, and 9% from nuclear sources, with only 11% supplied by renewable energy, such as wind and solar power.
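As a rough sense of scale for the per-well water figures cited above, the Python sketch below multiplies the quoted ranges by an assumed ten-well pad (the average pad size mentioned above); the choice of pad size and range endpoints is an illustrative assumption, not data for any particular site.

# Rough scale of water demand for hydraulic fracturing, using the per-well
# figures cited above (1.2-3.5 million US gallons initially, 3-8 million
# gallons over a well's lifetime) and an assumed 10-well pad.

GALLONS_PER_M3 = 264.17
wells_per_pad = 10                       # average pad size mentioned above

per_well_initial = (1.2e6, 3.5e6)        # US gallons, initial fracture
per_well_lifetime = (3.0e6, 8.0e6)       # US gallons, including refracturing

def show(label, low, high):
    print(f"{label}: {low * wells_per_pad / 1e6:.0f}-"
          f"{high * wells_per_pad / 1e6:.0f} million gallons "
          f"({low * wells_per_pad / GALLONS_PER_M3:,.0f}-"
          f"{high * wells_per_pad / GALLONS_PER_M3:,.0f} m^3) per pad")

show("initial fracturing ", *per_well_initial)
show("well lifetime total", *per_well_lifetime)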
Regulations
Countries using or considering use of hydraulic fracturing have implemented different regulations, including developing federal and regional legislation, and local zoning limitations. In 2011, after public pressure, France became the first nation to ban hydraulic fracturing, based on the precautionary principle as well as the principle of preventive and corrective action of environmental hazards. The ban was upheld by an October 2013 ruling of the Constitutional Council. Some other countries such as Scotland have placed a temporary moratorium on the practice due to public health concerns and strong public opposition. Countries like England and South Africa have lifted their bans, choosing to focus on regulation instead of outright prohibition. Germany has announced draft regulations that would allow using hydraulic fracturing for the exploitation of shale gas deposits with the exception of wetland areas. In China, regulation on shale gas still faces hurdles, as it has complex interrelations with other regulatory regimes, especially trade. Many states in Australia have either permanently or temporarily banned fracturing for hydrocarbons. In 2019, hydraulic fracturing was banned in the UK. The European Union has adopted a recommendation for minimum principles for using high-volume hydraulic fracturing. Its regulatory regime requires full disclosure of all additives. In the United States, the Ground Water Protection Council launched FracFocus.org, an online voluntary disclosure database for hydraulic fracturing fluids funded by oil and gas trade groups and the U.S. Department of Energy. Hydraulic fracturing is excluded from the Safe Drinking Water Act's underground injection control regulation, except when diesel fuel is used. The EPA assures surveillance of the issuance of drilling permits when diesel fuel is employed. In 2012, Vermont became the first state in the United States to ban hydraulic fracturing. On 17 December 2014, New York became the second state to issue a complete ban on any hydraulic fracturing due to potential risks to human health and the environment.
See also
Directional drilling
Environmental impact of electricity generation
Environmental effects of petroleum
Fracking by country
Fracking in the United States
Fracking in the United Kingdom
In situ leach
Nuclear power
Peak oil
Stranded asset
Shale oil extraction
References
This article incorporates public domain material from Hydraulic Fracturing for Oil and Gas: Impacts from the Hydraulic Fracturing Water Cycle on Drinking Water Resources in the United States (Final Report). United States Environmental Protection Agency. Retrieved 17 December 2016.
Further reading
External links
Hydraulic Fracturing Litigation Summary (22 April 2021)
clean air act (united states) | The Clean Air Act (CAA) is the United States' primary federal air quality law, intended to reduce and control air pollution nationwide. Initially enacted in 1963 and amended many times since, it is one of the United States' first and most influential modern environmental laws.
As with many other major U.S. federal environmental statutes, the Clean Air Act is administered by the U.S. Environmental Protection Agency (EPA), in coordination with state, local, and tribal governments.: 2–3 EPA develops extensive administrative regulations to carry out the law's mandates. Associated regulatory programs, which are often technical and complex, implement these regulations. Among the most important, the National Ambient Air Quality Standards program sets standards for concentrations of certain pollutants in outdoor air, and the National Emissions Standards for Hazardous Air Pollutants program sets standards for emissions of particular hazardous pollutants from specific sources. Other programs create requirements for vehicle fuels, industrial facilities, and other technologies and activities that impact air quality. Newer programs tackle specific problems, including acid rain, ozone layer protection, and climate change.
The CAA has been challenged in court many times, both by environmental groups seeking more stringent enforcement and by states and utilities seeking greater leeway in regulation.
Although its exact benefits depend on what is counted, the Clean Air Act has substantially reduced air pollution and improved US air quality—benefits which EPA credits with saving trillions of dollars and many thousands of lives each year.
Regulatory programs
In the United States, the "Clean Air Act" typically refers to the codified statute at 42 U.S.C. ch. 85. That statute is the product of multiple acts of Congress, one of which—the 1963 act—was actually titled the Clean Air Act, and another of which—the 1970 act—is most often referred to as such. In the U.S. Code, the statute itself is divided into subchapters, and the section numbers are not clearly related to the subchapters. However, in the bills that created the law, the major divisions are called "Titles", and the law's sections are numbered according to the title (e.g., Title II begins with Section 201). In practice, EPA, courts, and attorneys often use the latter numbering scheme.
Although many parts of the statute are quite detailed, others set out only the general outlines of the law's regulatory programs, and leave many key terms undefined. Responsible agencies, primarily EPA, have therefore developed administrative regulations to carry out Congress's instructions. EPA's proposed and final regulations are published in the Federal Register, often with lengthy background histories. The existing CAA regulations are codified at 40 C.F.R. Subchapter C, Parts 50–98. These Parts correspond more closely to the Clean Air Act's major regulatory programs than the statute's section numbers do.
Today, the following are major regulatory programs under the Clean Air Act.
National Ambient Air Quality Standards
The National Ambient Air Quality Standards (NAAQS) govern how much ground-level ozone (O3), carbon monoxide (CO), particulate matter (PM10, PM2.5), lead (Pb), sulfur dioxide (SO2), and nitrogen dioxide (NO2) are allowed in the outdoor air. The NAAQS set the acceptable levels of certain air pollutants in the ambient air in the United States. Prior to 1965, there was no national program for developing ambient air quality standards, and prior to 1970 the federal government did not have primary responsibility for developing them. The 1970 CAA amendments required EPA to determine which air pollutants posed the greatest threat to public health and welfare and promulgate NAAQS and air quality criteria for them. The health-based standards were called "primary" NAAQS, while standards set to protect public welfare other than health (e.g., agricultural values) were called "secondary" NAAQS. In 1971, EPA promulgated regulations for sulfur oxides, particulate matter, carbon monoxide, photochemical oxidants, hydrocarbons, and nitrogen dioxide (36 FR 22384). Initially, EPA did not list lead as a criteria pollutant, controlling it through mobile source authorities, but it was required to do so after successful litigation by Natural Resources Defense Council (NRDC) in 1976 (43 FR 46258). The 1977 CAA Amendments created a process for regular review of the NAAQS list, and created a permanent independent scientific review committee to provide technical input on the NAAQS to EPA. EPA added regulations for PM2.5 in 1997 (62 FR 38652), and updates the NAAQS from time to time based on emerging environmental and health science.
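A simplified way to picture how the primary standards are applied is a lookup of monitored concentrations against pollutant-specific limits. In the Python sketch below, both the limit values and the monitored values are illustrative placeholders rather than the actual NAAQS, which are pollutant- and averaging-period-specific and are revised by EPA from time to time.

# Illustrative attainment check against primary-standard-style limits.
# Both the limits and the monitored values below are placeholders for
# illustration only; the real NAAQS are pollutant- and averaging-period-
# specific and are revised by EPA from time to time.

illustrative_limits = {          # pollutant: (limit, units)
    "ozone (8-hr)": (0.070, "ppm"),
    "CO (8-hr)": (9.0, "ppm"),
    "PM2.5 (annual)": (12.0, "ug/m^3"),
}

monitored = {                    # hypothetical design values for one area
    "ozone (8-hr)": 0.074,
    "CO (8-hr)": 2.1,
    "PM2.5 (annual)": 10.3,
}

for pollutant, (limit, units) in illustrative_limits.items():
    value = monitored[pollutant]
    status = "nonattainment" if value > limit else "attainment"
    print(f"{pollutant:>15}: {value} {units} vs limit {limit} {units} -> {status}")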
National Emissions Standards for Hazardous Air Pollutants
The National Emissions Standards for Hazardous Air Pollutants (NESHAPs) govern how much of 187 toxic air pollutants are allowed to be emitted from industrial facilities and other sources. Under the CAA, hazardous air pollutants (HAPs, or air toxics) are air pollutants other than those for which NAAQS exist, which threaten human health and welfare. The NESHAPs are the standards used for controlling, reducing, and eliminating HAPs emissions from stationary sources such as industrial facilities. The 1970 CAA required EPA to develop a list of HAPs, and then develop national emissions standards for each of them. The original NESHAPs were health-based standards. The 1990 CAA Amendments (Pub. L. 101–549, Title III) codified EPA's list, and required creation of technology-based standards according to "maximum achievable control technology" (MACT). Over the years, EPA has issued dozens of NESHAP regulations, which have developed NESHAPs by pollutant, by industry source category, and by industrial process. There are also NESHAPs for mobile sources (transportation), although these are primarily handled under the mobile source authorities. The 1990 amendments (adding CAA § 112(d-f)) also created a process by which EPA was required to review and update its NESHAPs every eight years, and identify any risks remaining after application of MACT, and develop additional rules necessary to protect public health.
New Source Performance Standards
The New Source Performance Standards (NSPS) are rules for the equipment required to be installed in new and modified industrial facilities, and the rules for determining whether a facility is "new". The 1970 CAA required EPA to develop standards for newly constructed and modified stationary sources (industrial facilities) using the "best system of emission reduction which (taking into account the cost of achieving such reduction) the [EPA] determines has been adequately demonstrated." EPA issued its first NSPS regulation the next year, covering steam generators, incinerators, Portland cement plants, and nitric and sulfuric acid plants (36 FR 24876). Since then, EPA has issued dozens of NSPS regulations, primarily by source category. The requirements promote industrywide adoption of available pollution control technologies. However, because these standards apply only to new and modified sources, they promote extending the lifetimes of pre-existing facilities. In the 1977 CAA Amendments, Congress required EPA to conduct a "new source review" process (40 CFR 52, subpart I) to determine whether maintenance and other activities rise to the level of modification requiring application of NSPS.
Acid Rain Program
The Acid Rain Program (ARP) is an emissions trading program for power plants to control the pollutants that cause acid rain. The 1990 CAA Amendments created a new title to address the issue of acid rain, and particularly nitrogen oxides (NOx) and sulfur dioxide (SO2) emissions from electric power plants powered by fossil fuels, and other industrial sources. The Acid Rain Program was the first emissions trading program in the United States, setting a cap on total emissions that was reduced over time by way of traded emissions credits, rather than direct controls on emissions. The program evolved in two stages: the first stage required more than 100 electric generating facilities larger than 100 megawatts to meet a 3.5 million ton SO2 emission reduction by January 1995. The second stage gave facilities larger than 75 megawatts a January 2000 deadline. The program has achieved all of its statutory goals.
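The cap-and-trade mechanism can be pictured as simple allowance bookkeeping: each unit holds allowances (one per ton of SO2), and units that emit less than they hold can sell the surplus to units that emit more. The Python sketch below uses entirely hypothetical plants and numbers to show that accounting; it is a conceptual illustration, not the actual program's allocation or trading rules.

# Toy allowance bookkeeping in the spirit of a cap-and-trade program such as
# the Acid Rain Program: one allowance covers one ton of SO2. All plants and
# numbers are hypothetical; the real program's allocations and trading rules
# are far more detailed.

plants = {
    # name: (allowances held, actual SO2 emissions in tons)
    "Plant A": (50_000, 42_000),   # under its cap -> surplus to sell
    "Plant B": (30_000, 37_000),   # over its cap  -> must buy or reduce
}

total_cap = sum(alloc for alloc, _ in plants.values())
total_emitted = sum(emit for _, emit in plants.values())

for name, (alloc, emit) in plants.items():
    balance = alloc - emit
    side = "surplus (can sell)" if balance >= 0 else "deficit (must buy)"
    print(f"{name}: {balance:+,} allowances -> {side}")

print(f"system-wide: {total_emitted:,} tons emitted vs cap of {total_cap:,} tons")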
Ozone layer protection
The CAA ozone program is a technology transition program intended to phase out the use of chemicals that harm the ozone layer. Consistent with the US commitments in the Montreal Protocol, CAA Title VI, added by the 1990 CAA Amendments, mandated regulations regarding the use and production of chemicals that harm Earth's stratospheric ozone layer. Under Title VI, EPA runs programs to phase out ozone-destroying substances, track their import and export, determine exemptions for their continued use, and define practices for destroying them, maintaining and servicing equipment that uses them, identifying new alternatives to those still in use, and licensing technicians to use such chemicals.
Mobile source programs
Rules for pollutants emitted from internal combustion engines in vehicles. Since 1965, Congress has mandated increasingly stringent controls on vehicle engine technology and reductions in tailpipe emissions. Today, the law requires EPA to establish and regularly update regulations for pollutants that may threaten public health, from a wide variety of classes of motor vehicles, that incorporate technology to achieve the "greatest degree of emission reduction achievable", factoring in availability, cost, energy, and safety (42 U.S.C. § 7521).
On-road vehicles regulations
EPA sets standards for exhaust gases, evaporative emissions, air toxics, refueling vapor recovery, and vehicle inspection and maintenance for several classes of vehicles that travel on roadways. EPA's "light-duty vehicles" regulations cover passenger cars, minivans, passenger vans, pickup trucks, and SUVs. "Heavy-duty vehicles" regulations cover large trucks and buses. EPA first issued motorcycle emissions regulations in 1977 (42 FR 1122) and updated them in 2004 (69 FR 2397).
Vehicle testing program
The air pollution testing system for motor vehicles was originally developed in 1972 and used driving cycles designed to simulate driving during rush-hour in Los Angeles during that era. Until 1984, EPA reported the exact fuel economy figures calculated from the test. In 1984, EPA began adjusting city (aka Urban Dynamometer Driving Schedule or UDDS) results downward by 10% and highway (aka HighWay Fuel Economy Test or HWFET) results by 22% to compensate for changes in driving conditions since 1972, and to better correlate the EPA test results with real-world driving. In 1996, EPA proposed updating the Federal Testing Procedures to add a new higher-speed test (US06) and an air-conditioner-on test (SC03) to further improve the correlation of fuel economy and emission estimates with real-world reports. In December 2006 the updated testing methodology was finalized to be implemented in model year 2008 vehicles and set the precedent of a 12-year review cycle for the test procedures. In February 2005, EPA launched a program called "Your MPG" that allows drivers to add real-world fuel economy statistics into a database on EPA's fuel economy website and compare them with others and with the original EPA test results. EPA conducts fuel economy tests on very few vehicles. Two-thirds of the vehicles EPA itself tests are randomly selected, and the remaining third are tested for specific reasons.
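The 1984-era label adjustments described above can be reproduced with a short calculation: raw dynamometer results are scaled down by 10% (city) and 22% (highway), and a combined figure is then formed with the 55/45 city/highway harmonic weighting commonly used for combined ratings. In the Python sketch below the raw test numbers are hypothetical, and the weighting should be read as a common convention rather than a statement of the regulation.

# Apply the 1984-era label adjustments described above to raw dynamometer
# results: city (UDDS) scaled down by 10%, highway (HWFET) by 22%. The raw
# figures are hypothetical, and the 55/45 harmonic combination is the
# weighting commonly used for a combined city/highway figure.

raw_city_mpg = 30.0       # hypothetical raw UDDS result
raw_highway_mpg = 42.0    # hypothetical raw HWFET result

label_city = raw_city_mpg * (1 - 0.10)
label_highway = raw_highway_mpg * (1 - 0.22)

# Harmonic weighting: fuel use per mile is averaged, not mpg itself.
combined = 1.0 / (0.55 / label_city + 0.45 / label_highway)

print(f"label city    : {label_city:.1f} mpg")
print(f"label highway : {label_highway:.1f} mpg")
print(f"combined      : {combined:.1f} mpg")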
Although originally created as a reference point for fossil-fueled vehicles, driving cycles have been used for estimating how many miles an electric vehicle will get on a single charge.
Non-road vehicles regulations
The 1970 CAA amendments provided for regulation of aircraft emissions (42 U.S.C. § 7571), and EPA began regulating in 1973. In 2012, EPA finalized its newest restrictions on NOx emissions from gas turbine aircraft engines with rated thrusts above 26.7 kilonewtons (3 short tons-force), meaning primarily commercial jet aircraft engines, intended to match international standards. EPA has been investigating whether to regulate lead in fuels for small aircraft since 2010, but has not yet acted. The 1990 CAA Amendments (Pub. L. 101–549 § 222) added rules for a "nonroad" engine program (42 U.S.C. § 7547), which expanded EPA regulation to locomotives, heavy equipment and small equipment engines fueled by diesel (compression-ignition), and gas and other fuels (spark-ignition), and marine transport.
Voluntary programs
EPA has developed a variety of voluntary programs to incentivize and promote reduction in transportation-related air pollution, including elements of the Clean Diesel Campaign, Ports Initiative, SmartWay program, and others.
Fuel controls
The federal government has regulated the chemical composition of transportation fuels since 1967, with significant new authority added in 1970 to protect public health. One of EPA's earliest actions was the elimination of lead in U.S. gasoline beginning in 1971 (36 FR 1486, 37 FR 3882, 38 FR 33734), a project that has been described as "one of the great public health achievements of the 20th century." EPA continues to regulate the chemical composition of gasoline, avgas, and diesel fuel in the United States.
Stationary source operating permits
The 1990 amendments authorized a national operating permit program, sometimes called the "Title V Program", covering thousands of large industrial and commercial sources. It required large businesses to address pollutants released into the air, measure their quantity, and have a plan to control and minimize them as well as to periodically report. This consolidated requirements for a facility into a single document.: 19 In non-attainment areas, permits were required for sources that emit as little as 50, 25, or 10 tons per year of VOCs depending on the severity of the region's non-attainment status. Most permits are issued by state and local agencies. If the state does not adequately monitor requirements, the EPA may take control. The public may request to view the permits by contacting the EPA. The permit is limited to no more than five years and requires a renewal.
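The nonattainment-severity thresholds mentioned above can be expressed as a small lookup, as in the Python sketch below. The mapping of severity classes to the 50/25/10 ton-per-year figures follows the common description of ozone nonattainment categories and should be treated as an illustration rather than a statement of the regulation.

# Illustrative permit-applicability lookup based on the VOC thresholds cited
# above (50, 25, or 10 tons/year depending on non-attainment severity). The
# mapping of severity classes to thresholds follows the common description of
# ozone non-attainment categories and is illustrative, not a regulatory text.

voc_threshold_tpy = {
    "serious": 50,
    "severe": 25,
    "extreme": 10,
}

def needs_operating_permit(voc_emissions_tpy, severity):
    """Return True if a source's VOC emissions meet or exceed the threshold."""
    return voc_emissions_tpy >= voc_threshold_tpy[severity]

print(needs_operating_permit(30, "serious"))   # False: below 50 tpy
print(needs_operating_permit(30, "severe"))    # True: at or above 25 tpy
print(needs_operating_permit(12, "extreme"))   # True: at or above 10 tpy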
Monitoring and enforcement
In one of the most public aspects of the Clean Air Act, EPA is empowered to monitor compliance with the law's many requirements, seek penalties for violations, and compel regulated entities to come into compliance. Enforcement cases are usually settled, with penalties assessed well below maximum statutory limits. Recently, many of the largest Clean Air Act settlements have been reached with automakers accused of circumventing the Act's vehicle and fuel standards (e.g., the 2015 "Dieselgate" scandal).
Greenhouse gas regulation
Much of EPA's regulation of greenhouse gas (GHG) emissions occurs under the programs discussed above. EPA began regulating GHG emissions following the Supreme Court's ruling in Massachusetts v. EPA, the EPA's subsequent endangerment finding, and development of specific regulations for various sources.
The EPA's authority to regulate carbon dioxide emissions was questioned by the Supreme Court in West Virginia v. EPA but restored by Congress with the Inflation Reduction Act of 2022, which clarified that carbon dioxide is one of the pollutants covered by the Clean Air Act. Standards for mobile sources have been established pursuant to Section 202 of the CAA, and GHGs from stationary sources are controlled under the authority of Part C of Title I of the Act. The EPA's auto emission standards for greenhouse gas emissions issued in 2010 and 2012 are intended to cut emissions from targeted vehicles by half, double the fuel economy of passenger cars and light-duty trucks by 2025, and save over 4 billion barrels of oil and $1.7 trillion for consumers. The agency has also proposed a two-phase program to reduce greenhouse gas emissions for medium and heavy duty trucks and buses. In addition, EPA oversees the national greenhouse gas inventory reporting program. Following the Supreme Court decision in West Virginia v. EPA, which ruled that Congress did not grant EPA the authority to require "outside the fence" options for limiting carbon dioxide at power plants, Congress passed the Inflation Reduction Act of 2022, which, in amending the Clean Air Act, specifically defined carbon dioxide, hydrofluorocarbons, methane, nitrous oxide, perfluorocarbons, and sulfur hexafluoride as greenhouse gases to be regulated by the EPA, as well as giving the EPA the ability to regulate the inclusion of renewable sources, notably through a $27 billion green bank, among other methods.
Others
Other important but less foundational Clean Air Act regulatory programs tend to build on or cut across the above programs:
Risk Assessment. Although not a regulatory program per se, many EPA regulatory programs involve risk assessment and management. Over the years, EPA has undertaken to unify and organize its many risk assessment processes. The 1990 CAA Amendments created a Commission on Risk Assessment and Management tasked with making recommendations for a risk assessment framework, and many subsequent reports have built on this work.
Visibility and Regional Haze. EPA monitors visibility and air clarity (haze) at 156 protected parks and wilderness areas, and requires states to develop plans to improve visibility by reducing pollutants that contribute to haze.
Interstate pollution control. The Clean Air Act's "good neighbor" provision requires states to control emissions that will significantly contribute to NAAQS nonattainment or maintenance in a downwind state. EPA has struggled to enact regulations that implement this requirement for many years. It developed the "Clean Air Interstate Rule" between 2003 and 2005, but this was overturned by the courts in 2008. EPA then developed the Cross-State Air Pollution Rule between 2009 and 2011, and it continues to be litigated as EPA updates it.
Startup, Shutdown, & Malfunction. EPA promulgates rules for states to address excess emissions during periods of startup, shutdown, and malfunction, when facility emissions may temporarily be much higher than standard regulatory limits.
The Clean Air Act and states
State implementation plans
The 1963 act required development of State Implementation Plans (SIPs) as part of a cooperative federalist program for developing pollution control standards and programs. Rather than create a solely national program, the CAA imposes responsibilities on the U.S. states to create plans to implement the Act's requirements. EPA then reviews, amends, and approves those plans. EPA first promulgated SIP regulations in 1971 and 1972.
Non-attainment areas
The 1977 CAA Amendments added SIP requirements for areas that had not attained the applicable NAAQS ("nonattainment areas"). In these areas, states were required to adopt plans that made "reasonable further progress" toward attainment until all "reasonably available control measures" could be adopted. As progress on attainment was much slower than Congress originally instructed, major amendments to SIP requirements in nonattainment areas were part of the 1990 CAA Amendments.
Prevention of significant deterioration
The 1977 CAA Amendments modified the SIP requirements by adding "Prevention of Significant Deterioration" (PSD) requirements. These requirements protect areas, including particularly wilderness areas and national parks, that already met the NAAQS. The PSD provision requires SIPs to preserve good quality air in addition to cleaning up bad air. The new law also required New Source Review (investigations of proposed construction of new polluting facilities) to examine whether PSD requirements would be met.
Federalism
The Constitution contains no provisions listing environmental standards as an enumerated Federal power, and until 1970 these were essentially handled at the state and local level. However, legislators of the 1960s had been heavily influenced by New Deal-era ideologies of government, allowing considerable expansion of Federal authority, often in excess of what was strictly allowed in the Constitution. The Clean Air Act provided the EPA with enforcement authority and required states to develop State Implementation Plans for how they would meet new national ambient air quality standards by 1977. This cooperative federal model continues today. The law recognizes that states should lead in carrying out the Clean Air Act, because pollution control problems often require special understanding of local conditions such as geography, industrial activity, transportation and housing patterns. However, states are not allowed to have weaker controls than the national minimum criteria set by EPA. EPA must approve each SIP, and if a SIP is not acceptable, EPA can retain CAA enforcement in that state. For example, California was unable to meet the new standards set by the 1970 amendments, which led to a lawsuit and a federal state implementation plan for the state. The federal government also assists the states by providing scientific research, expert studies, engineering designs, and money to support clean air programs.
The law also prevents states from setting motor vehicle emission standards that are stricter than the federal standards, but carves out a special exemption for California due to its past issues with smog pollution in its metropolitan areas. In practice, when California's environmental agencies decide on new vehicle emission standards, they are submitted to the EPA for approval under this waiver, with the most recent approval in 2009. The California standard was adopted by twelve other states, and established the de facto standard that automobile manufacturers subsequently accepted, to avoid having to develop different emission systems in their vehicles for different states. However, in September 2019, President Donald Trump attempted to revoke this waiver, arguing that the stricter emissions standards had made cars too expensive and that removing them would make vehicles safer. EPA's Andrew Wheeler also stated that while the agency respects federalism, it could not allow one state to dictate standards for the entire nation. California's governor Gavin Newsom considered the move part of Trump's "political vendetta" against California and stated his intent to sue the federal government. Twenty-three states, along with the District of Columbia and the cities of New York City and Los Angeles, joined California in a federal lawsuit challenging the administration's decision. In March 2022 the Biden administration reversed the Trump-era rule, allowing California to again set stricter auto emissions standards.
History
Between the Second Industrial Revolution and the 1960s, the United States experienced increasingly severe air pollution. Following the 1948 Donora smog event, the public began to discuss air pollution as a major problem, states began to pass a series of laws to reduce air pollution, and Congress began discussing whether to take further action in response. At the time, the primary federal agencies interested in air pollution were the United States Bureau of Mines, which was interested in "smoke abatement" (reducing smoke from coal burning), and the United States Public Health Service, which handled industrial hygiene and was concerned with the causes of lung health problems.

After several years of proposals and hearings, Congress passed the first federal legislation to address air pollution in 1955. The Air Pollution Control Act of 1955 authorized a research and training program, sending $3 million per year to the U.S. Public Health Service for five years, but did not directly regulate pollution sources. The 1955 Act's research program was extended in 1959, 1960, and 1962 while Congress considered whether to regulate further.
Beginning in 1963, Congress began expanding federal air pollution control law to accelerate the elimination of air pollution throughout the country. The new law's programs were initially administered by the U.S. Secretary of Health, Education, and Welfare, and the Air Pollution Office of the U.S. Public Health Service, until they were transferred to the newly created EPA immediately before major amendments in 1970. EPA has administered the Clean Air Act ever since, and Congress added major regulatory programs in 1977 and 1990. Most recently, the U.S. Supreme Court's ruling in Massachusetts v. EPA resulted in an expansion of EPA's CAA regulatory activities to cover greenhouse gases.
Clean Air Act of 1963 and early amendments
The Clean Air Act of 1963 (Pub. L. 88–206) was the first federal legislation to permit the U.S. federal government to take direct action to control air pollution. It extended the 1955 research program, encouraged cooperative state, local, and federal action to reduce air pollution, appropriated $95 million over three years to support the development of state pollution control programs, and authorized the HEW Secretary to organize conferences and take direct action against interstate air pollution where state action was deemed to be insufficient.

The Motor Vehicle Air Pollution Control Act (Pub. L. 89–272) amended the 1963 Clean Air Act and set the first federal vehicle emissions standards, beginning with the 1968 models. These standards were reductions from 1963 emissions levels: a 72% reduction for hydrocarbons, a 56% reduction for carbon monoxide, and a 100% reduction for crankcase hydrocarbons. The law also added a new section to authorize abatement of international air pollution.
The Air Quality Act of 1967 (Pub. L. 90–148) authorized planning grants to state air pollution control agencies, permitted the creation of interstate air pollution control agencies, required HEW to define air quality regions and develop technical documentation that would allow states to set ambient air quality and pollution control technology standards, required states to submit implementation plans for improvement of air quality, and permitted HEW to take direct abatement action in air pollution emergencies. It also authorized expanded studies of air pollutant emission inventories, ambient monitoring techniques, and control techniques. This enabled the federal government to increase its activities to investigate enforcement of interstate air pollution transport and, for the first time, to perform far-reaching ambient monitoring studies and stationary source inspections. While only six states had air pollution programs in 1960, all 50 states had air pollution programs by 1970 due to the federal funding and legislation of the 1960s.
1970 and 1977 amendments
In the Clean Air Amendments of 1970 (Pub. L. 91–604), Congress greatly expanded the federal mandate by requiring comprehensive federal and state regulations for both industrial and mobile sources. The law established the National Ambient Air Quality Standards (NAAQS), New Source Performance Standards (NSPS), and National Emissions Standards for Hazardous Air Pollutants (NESHAPs), and significantly strengthened federal enforcement authority, all toward achieving aggressive air pollution reduction goals.
To implement the strict amendments, EPA Administrator William Ruckelshaus spent 60% of his time during his first term on the automobile industry, whose emissions were to be reduced 90% under the new law. Senators had been frustrated at the industry's failure to cut emissions under previous, weaker air laws.

Major amendments were added to the Clean Air Act in 1977 (1977 CAAA) (91 Stat. 685, Pub. L. 95–95). The 1977 Amendments primarily concerned provisions for the Prevention of Significant Deterioration (PSD) of air quality in areas attaining the NAAQS, and also contained requirements pertaining to sources in non-attainment areas, geographic areas that do not meet one or more of the federal air quality standards. Both sets of provisions established major permit review requirements to ensure attainment and maintenance of the NAAQS. The amendments also adopted an offset trading policy, originally applied to Los Angeles in 1974, that enables new sources to offset their emissions by purchasing extra reductions from existing sources.
1990 amendments
Another set of major amendments to the Clean Air Act occurred in 1990 (1990 CAAA) (104 Stat. 2468, Pub. L. 101–549). The 1990 CAAA substantially increased the authority and responsibility of the federal government. New regulatory programs were authorized for control of acid deposition (acid rain) and for the issuance of stationary source operating permits. The provisions aimed at reducing sulfur dioxide emissions included a cap-and-trade program, which gave power companies more flexibility in meeting the law's goals compared to earlier iterations of the Clean Air Act. The NESHAPs were incorporated into a greatly expanded program for controlling toxic air pollutants, moving considerably beyond the original criteria pollutants with a list of 189 hazardous air pollutants to be controlled within hundreds of source categories according to a specific schedule. The provisions for attainment and maintenance of NAAQS were substantially modified and expanded. Other revisions covered stratospheric ozone protection, increased enforcement authority, and expanded research programs. The amendments also established a national operating permit program for stationary sources, set new auto gasoline reformulation requirements, set Reid vapor pressure (RVP) standards to control evaporative emissions from gasoline, and mandated new gasoline formulations sold from May to September in many states. Reviewing his tenure as EPA Administrator under President George H. W. Bush, William K. Reilly characterized passage of the 1990 amendments to the Clean Air Act as his most notable accomplishment.

The 1990 amendments also included a requirement for the Department of Labor to issue, no later than 12 months from the date of publication of the amendments and in collaboration with the EPA, a "process safety standard". The text also highlighted the 14 principles on which this should be based. These were implemented in 1992 in OSHA's Process Safety Management regulation (Title 29 CFR Part 1910, Subpart H § 1910.119), as well as in EPA's 1996 Risk Management Program (RMP) rule (Title 40 CFR Part 68).
2022 amendments
The Inflation Reduction Act, the budget reconciliation bill signed by President Joe Biden in August 2022, amends the Clean Air Act to allow the EPA to administer $27 billion in grants to green banks nationwide, through a competitive funding mechanism to be called the Greenhouse Gas Reduction Fund. $14 billion will go to a select few nonprofit green banks for a broad variety of direct investments in decarbonization startups, $6 billion will go to investments in low-income and historically disadvantaged communities, and $7 billion will go to distributed solar power in similar communities with no financing alternatives. It is expected to generate high rates of return for the government on private sector investments. It also designates carbon dioxide and other greenhouse gases as substances to be regulated by the EPA, in reaction to the Supreme Court case West Virginia v. EPA, which limited the EPA's authority to institute a program such as the Obama-era Clean Power Plan. The IRA also allows the EPA more leeway to promote renewable energy.
Effects
According to a 2022 review study in the Journal of Economic Literature, there is overwhelming causal evidence that the CAA improved air quality. According to the most recent study by EPA, when compared to the baseline of the 1970 and 1977 regulatory programs, by 2020 the updates initiated by the 1990 Clean Air Act Amendments would be costing the United States about $60 billion per year, while benefiting the United States (in monetized health and lives saved) about $2 trillion per year. In 2020, a study prepared for the Natural Resources Defense Council estimated annual benefits at 370,000 avoided premature deaths, 189,000 fewer hospital admissions, and net economic benefits of up to $3.8 trillion (32 times the cost of the regulations). Other studies have reached similar conclusions.

Mobile sources including automobiles, trains, and boat engines have become 99% cleaner for pollutants like hydrocarbons, carbon monoxide, nitrogen oxides, and particle emissions since the 1970s. The allowable emissions of volatile organic chemicals, carbon monoxide, nitrogen oxides, and lead from individual cars have also been reduced by more than 90%, resulting in decreased national emissions of these pollutants despite a more than 400% increase in total miles driven yearly. Since the 1980s, ground-level ozone has been cut by a quarter, mercury emissions have been cut by 80%, and the switch from leaded to unleaded gasoline has reduced atmospheric lead pollution by 90%. A 2018 study found that the Clean Air Act contributed to the 60% decline in pollution emissions by the manufacturing industry between 1990 and 2008.
Legal challenges
Since its inception, the authority given to the EPA by Congress and the EPA's rulemaking within the Clean Air Act has been subject to numerous lawsuits. Some of the major suits where the Clean Air Act has been focal point of litigation include the following:
Train v. Natural Resources Defense Council, Inc., 421 U.S. 60 (1975)
Under the Clean Air Act, states were required to submit their implementation plans within nine months of the EPA's promulgation of the new standards. The EPA approved several state plans that allowed for variances in their emissions limitations, and the Natural Resources Defense Council challenged that approval. The Supreme Court held that the EPA's approval was valid, and that as long as the "ultimate effect of a State's choice of emission limitations is compliance with the national standards for ambient air," a state is "at liberty to adopt whatever mix of emission limitations it deems best suited to its particular situation."
Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984)
The Clean Air Act instructed the EPA to regulate emissions from sources of air pollution, but did not define what should be considered a source for the emission of air pollution, so the EPA had interpreted what a source was based on the legislation. The EPA's interpretation was challenged, but after review, the Supreme Court ruled in a 6–0 decision that the EPA was owed judicial deference in establishing its own interpretation of a law when the law is ambiguous and the interpretation is reasonable and consistent. This principle has come to be known as the Chevron deference, applying to any executive agency granted powers from Congress.
Whitman v. American Trucking Ass'ns, Inc., 531 U.S. 457 (2001)
Following the EPA's rulemaking setting NAAQS related to diesel truck emissions, the trucking industry challenged the EPA's rule in the lower courts, asserting that the EPA had failed to justify the new levels and had violated the nondelegation doctrine. The D.C. Circuit Court found in favor of the trucking industry, determining that the EPA's rule did not consider the costs of implementing emissions regulations and controls. The Supreme Court reversed the D.C. Circuit's ruling, affirming not only that the delegation of power from Congress to the EPA by the Clean Air Act was constitutional, but also that the law did not require the EPA to consider costs as part of its determination for air quality controls.
Massachusetts v. Environmental Protection Agency, 549 U.S. 497 (2007)
With pressure from states and environmental groups on the EPA to regulate carbon dioxide and other greenhouse gas emissions from motor vehicles, the EPA determined in 2003 that the language of the Clean Air Act did not allow it to regulate emissions from motor vehicles, and that it would not set such regulations even if it had the authority to do so. Multiple states and agencies sued the EPA for failing to act on what they considered to be harmful air pollutants. The Supreme Court ruled 5–4 that the Clean Air Act gives the EPA authority to regulate carbon dioxide and other greenhouse gases as air pollutants, and that refusing to regulate these emissions without a reasoned statutory justification would leave the EPA open to further litigation. While the decision has remained contentious, the Court's decision in Massachusetts v. EPA was considered landmark, as it opened up the courts for further environmental lawsuits to force entities to respond to climate change.
American Electric Power Co. v. Connecticut, 564 U.S. 410 (2011)
Several states and cities sued electric utility companies to force them to use a cap-and-trade system to reduce their emissions, under a claim that their emissions were a public nuisance. The Supreme Court ruled in an 8–0 decision that private companies cannot be sued by other parties under federal common law for such emissions, because the Clean Air Act delegates the regulation of these emissions to the EPA and thereby displaces federal common-law claims.
Utility Air Regulatory Group v. Environmental Protection Agency, 573 U.S. 302 (2014)
The EPA issued new rules in 2010 that expanded emissions regulations for both regulated air pollutants and greenhouse gases to light and heavy engines and smaller stationary sources. These expanded rules were challenged by several power companies and regulatory groups, as they greatly expanded what types of facilities would need to acquire environmental permits prior to construction. The Supreme Court generally upheld the EPA's powers under the Clean Air Act, though it vacated portions of the EPA's new rules affecting smaller sources.
Michigan v. EPA, 576 U.S. 743 (2015)
In 2012, the EPA issued new rules that identified additional pollutants, such as mercury, as hazardous materials to be regulated in power plant emissions. Because implementing mercury pollution controls on new and existing plants can be expensive, several states, companies, and other organizations sued the EPA, arguing that the analysis behind the new rules did not consider the costs involved. The Supreme Court ruled 5–4 that the EPA's failure to consider the costs of these pollution controls was inappropriate, and that cost must be a factor in any "appropriate and necessary" finding the EPA makes under the Clean Air Act.
West Virginia v. EPA, 597 U.S. ___ (2022)
In 2014, the EPA under the Obama administration proposed the Clean Power Plan (CPP), which aimed to combat climate change by reducing carbon dioxide emissions from coal-burning power plants. Using its authority under 42 U.S.C. § 7411(d), the agency sought to regulate emissions from existing power plants. After the final plan was announced in 2015, a number of states successfully petitioned federal courts for an emergency stay to block the implementation of the plan. In 2017, the CPP never having been implemented, the EPA under the Trump administration enacted the Affordable Clean Energy (ACE) rule, which repealed the CPP. A different set of states and environmental advocacy groups challenged the ACE rule in federal court, and on January 19, 2021, the D.C. Circuit Court of Appeals vacated the rule because it relied on a flawed interpretation of § 7411(d). A number of states petitioned the Supreme Court, arguing that the D.C. Circuit's interpretation of § 7411(d) was too broad, and four petitions were consolidated as West Virginia v. EPA. On June 30, 2022, the Supreme Court ruled in a 6–3 decision that the CPP was invalid because it fell under the major questions doctrine and thus required more specific Congressional approval than could reasonably be argued to be present in the statute.
Current challenges
Poor air quality remains an issue in the United States, and it has been noted that people of color are 1.5 times more likely than white people to live in areas burdened with air pollution. Research conducted during the COVID-19 pandemic has shown that people living in areas with air pollution are 8% more likely to experience long-term COVID effects, which can potentially be fatal. This creates a disparity between minority and majority communities in America; the air pollution these communities face is often a result of redlining, which placed minority communities in areas near polluting industries and high-traffic roadways. Solutions proposed by the Sierra Club and the Environmental Protection Agency (EPA) include funding clean energy transportation services, lowering acceptable emission levels, and enforcing the existing Clean Air Act more rigorously. While the Clean Air Act has been generally effective since it was enacted in 1970, efforts by the fossil fuel industry to override its regulations have led to worse air pollution in many communities.

An example of an area where the Clean Air Act has not been very effective is the San Joaquin Valley, which experiences poor air quality stemming from harmful agricultural practices, heavy traffic on major roadways, and the oil industry. Research shows that the leading air pollutant in the region is PM 2.5, or fine particulate matter, which causes health issues in pregnant women exposed to it such as more severe asthma, decreased FEV1, compromised immunity, and an increased risk of premature birth. Other effects of exposure to PM 2.5 include chronic bronchitis, reduced lung function in children, and heart- and lung-related hospitalizations that can lead to premature death, especially in individuals with preexisting health concerns. A marked decrease in PM 2.5 pollution occurred in 2004 because of a decline in agricultural and biomass burning practices; however, such burning remains common today and PM 2.5 levels have increased. Organizations such as the California Air Resources Board (CARB) have recommended that stronger regulations be implemented in the San Joaquin Valley to reduce toxic emissions into the atmosphere. There are now efforts from the EPA, CARB, and the San Joaquin Valley Air Pollution Control District to enforce regulations from the Clean Air Act and fund further efforts to live in a more sustainable fashion.
Future challenges
As of 2017, some US cities still do not meet all national ambient air quality standards. It is likely that tens of thousands of premature deaths are still being caused by fine-particle pollution and ground-level ozone pollution.

Climate change poses a challenge to the management of conventional air pollutants in the United States due to warmer, drier summer conditions that can lead to increased air stagnation episodes. Prolonged droughts that may contribute to wildfires would also result in regionally high levels of air particles.

Transboundary air pollution (both entering and exiting the United States) is not directly regulated by the Clean Air Act, requiring international negotiations and ongoing agreements with other nations, particularly Canada and Mexico.

Environmental justice continues to be an ongoing challenge for the Clean Air Act. By promoting pollution reduction, the Clean Air Act helps reduce heightened exposure to air pollution among minority and low-income communities. But African American populations are "consistently over represented" in areas with the poorest air quality. Dense populations of low-income and minority communities inhabit the most polluted areas across the United States, which is considered to exacerbate health problems among these populations. High levels of exposure to air pollution are linked to several health conditions, including asthma, cancer, premature death, and infant mortality, each of which disproportionately impacts minority and low-income communities. The pollution reduction achieved by the Clean Air Act is associated with a decline in each of these conditions and can promote environmental justice for communities that are disproportionately impacted by air pollution and diminished health status.
References
External links
As codified in 42 U.S.C. chapter 85 of the United States Code from the LII
As codified in 42 U.S.C. chapter 85 of the United States Code from the US House of Representatives
Clean Air Act (details) as amended in the GPO Statute Compilations collection
Summary of the Clean Air Act from the EPA
EPA Enforcement and Compliance History Online
Clean Air Act: A Summary of the Act and Its Major Requirements. Congressional Research Service report, 2022.
EPA Alumni Association Oral History Video "Early Implementation of the Clean Air Act of 1970 in California"
high-performance buildings | High-performance buildings are those which deliver a relatively higher level of energy efficiency performance or greenhouse gas reduction than what is required by building codes or other regulations. Architects, designers, and builders typically design and build high-performance buildings using a range of established strategies, techniques, tools, and materials to ensure that, upon completion, the building will consume a minimal amount of energy for heating, cooling, illumination, and ventilation during operation.
Occupant benefits
Those living or working in high-performance buildings enjoy a range of benefits when compared with traditional, less-efficient buildings. Documented benefits include:
Lower energy bills. Lower energy consumption reduces operating costs and helps shield owners or managers from future increases in energy prices.
Healthier living. High performance homes are designed for improved indoor air quality. They often include active ventilation, and use materials and finishes with lower amounts of volatile organic compounds (VOCs).
Greater thermal comfort. Occupants will feel more comfortable at equivalent indoor temperatures relative to traditional buildings, due to reduced drafts and temperature variations in the building.
Reduced pollution and CO2 emissions. High performance buildings consume dramatically less energy in operation, which can significantly lower greenhouse gas (GHG) emissions if the building burns natural gas for space and water heating.
Natural light. High performance buildings maximize effective use of daylight, which can provide a more pleasing indoor environment while reducing utility costs.
Increased resale value. Many homebuyers are looking for energy-efficient homes, which can typically be sold more quickly and for more money than conventional homes.
Reduced noise levels. The increased insulation levels and better windows found in a high performance building can reduce sound transmission from outside.
Higher resilience. High levels of insulation, combined with passive design, can help maintain comfortable indoor temperatures longer than conventional homes during power outages or summer heat events.

A 2012 study by the American Council for an Energy-Efficient Economy found that multifamily buildings present a tremendous opportunity for energy savings. Comprehensive, cost-effective upgrades in multifamily buildings could improve efficiency by 15–30%, the Council found, representing an annual sector-wide savings of almost US$3.4 billion.
Climate benefits
Globally, buildings constitute a leading consumer of energy and a significant source of greenhouse gas emissions. In 2010, buildings accounted for 32% of total global final energy use and 19% of energy-related GHG emissions, including emissions from the production of electricity used by buildings. In the United States in 2016, direct carbon emissions from homes and commercial businesses accounted for about 11 percent of the nation's total of 6,511 million metric tons of CO2 equivalent.

Governments with jurisdiction over building codes and standards that are interested in reducing the climate impact of buildings may seek to reduce these emissions by either incentivizing or requiring higher levels of energy efficiency performance in new homes and other buildings.
All-Electric Buildings
High-performance buildings use less energy than their conventional counterparts. For buildings that burn natural gas for space and water heating, improved energy efficiency can yield corresponding reductions in greenhouse gas emissions.
Those building high-performance buildings, or renovating an existing building for improved energy and climate performance, often seek to reduce greenhouse gas emissions by using a low-carbon energy system such as an electric heat pump instead of a natural gas furnace or hot water heater. In the United States, a growing movement is seeking to "electrify everything," including buildings. As of mid-2021, at least 45 U.S. cities had passed "all electric" ordinances that either mandate or incentivize all-electric new construction.
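As a rough, purely illustrative sketch of this trade-off (the gas combustion factor, grid carbon intensity, furnace efficiency, and heat pump coefficient of performance below are assumed round numbers, not measured data for any particular region), the heating emissions of a gas furnace and an electric heat pump can be compared per unit of delivered heat:

    # Rough sketch comparing heating emissions: gas furnace vs. electric heat pump.
    # All factors below are illustrative assumptions, not authoritative data.

    GAS_KG_CO2_PER_KWH_FUEL = 0.18   # assumed combustion factor for natural gas
    GRID_KG_CO2_PER_KWH_ELEC = 0.30  # assumed grid carbon intensity

    def heating_emissions_kg(heat_demand_kwh, furnace_efficiency=0.9, heat_pump_cop=3.0):
        """Return (furnace_kg, heat_pump_kg) of CO2 for the same delivered heat."""
        furnace_kg = heat_demand_kwh / furnace_efficiency * GAS_KG_CO2_PER_KWH_FUEL
        heat_pump_kg = heat_demand_kwh / heat_pump_cop * GRID_KG_CO2_PER_KWH_ELEC
        return furnace_kg, heat_pump_kg

    # Hypothetical home needing 10,000 kWh of heat per year.
    furnace, heat_pump = heating_emissions_kg(10_000)
    print(f"Gas furnace: {furnace:,.0f} kg CO2, heat pump: {heat_pump:,.0f} kg CO2")
    # -> Gas furnace: 2,000 kg CO2, heat pump: 1,000 kg CO2

Whether electrification lowers emissions in a given location depends largely on the carbon intensity of the local electricity grid, so results vary by region.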
== References == |
eu allowance | EU Allowances (EUA) are climate credits (or carbon credits) used in the European Union Emissions Trading Scheme (EU ETS). EU Allowances are issued by the EU Member States into Member State Registry accounts. By April 30 of each year, operators of installations covered by the EU ETS must surrender an EU Allowance for each ton of CO2 emitted in the previous year.
The emission allowance is defined in Article 3(a) of the EU ETS Directive as being "an allowance to emit one tonne of carbon dioxide equivalent during a specified period, which shall be valid only for the purposes of meeting the requirements of this Directive and shall be transferable in accordance with the provisions of this Directive".

The EU Allowances are connected to the EU's goal of achieving climate neutrality in the EU by 2050 and a 55% reduction in greenhouse gas emissions by 2030.
Cap and trade system
The EU ETS works on the 'cap and trade' principle. Companies receive or buy emission allowances within the cap and are allowed to trade them with one another. The total number of allowances is limited, which ensures that they have a value. If a company emits more in a year than its allowances cover, heavy fines can be imposed. The fine is 100 euros per excess tonne, but the company still needs to surrender EUAs for the uncovered emissions in the subsequent year, so the 100 EUR fine does not act as a ceiling price for EUAs. Companies that do not use their allowances can "bank" them to cover future needs or sell them to other companies.
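As a minimal sketch of the penalty arithmetic described above (the installation size, allowance shortfall, and EUA price are hypothetical numbers chosen for illustration), the cost of non-compliance for one year can be modeled as the 100-euro fine per excess tonne plus the cost of buying the allowances that must still be surrendered:

    # Sketch of EU ETS non-compliance arithmetic (hypothetical numbers).
    # Assumption: an operator emitted more tonnes of CO2 than it holds EUAs for.

    FINE_PER_TONNE_EUR = 100.0  # excess-emissions penalty per tonne

    def compliance_cost(emissions_t, allowances_held, eua_price_eur):
        """Return (allowances still owed, total cost) for one compliance year.

        The fine does not cancel the obligation: the operator must still
        surrender an EUA for every uncovered tonne in the subsequent year.
        """
        shortfall = max(emissions_t - allowances_held, 0)
        fine = shortfall * FINE_PER_TONNE_EUR
        makeup_purchase = shortfall * eua_price_eur
        return shortfall, fine + makeup_purchase

    # Hypothetical example: 120,000 t emitted, 100,000 EUAs held, EUA at 25 EUR.
    shortfall, cost = compliance_cost(120_000, 100_000, 25.0)
    print(f"Shortfall: {shortfall} t, total cost: {cost:,.0f} EUR")
    # -> Shortfall: 20000 t, total cost: 2,500,000 EUR

Because the shortfall must still be covered with EUAs, the effective cost per uncovered tonne is the fine plus the market price, which is why the fine does not cap the allowance price.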
Free allocation of Allowances
Free allocation of allowances decreases each year. Over the 2013 to 2020 trading period, 43% of allowances were available for free allocation; the manufacturing industry received 80% of its allowances for free at the beginning of that trading period, a share that decreased gradually to 30% in 2020. Power generators, on the other hand, in principle do not receive any free allowances but have to buy them (except in some member states such as Poland, Bulgaria, Hungary, and Lithuania).
Auctioning of allowances
Auctioning is the default method of allocating allowances that are not allocated for free within the EU emissions trading system (EU ETS). It is the most transparent allocation method, as it shows that polluters pay, and how much. The auctioning is governed by the EU ETS Auctioning Regulation, which ensures that it is conducted in an open, transparent, harmonized, and non-discriminatory manner. Currently, there are two auctioning platforms:
European Energy Exchange (EEX), in Leipzig, is the common auction platform for the large majority of countries participating in the EU ETS and also conducts emissions auctions for Poland during the transitional period. EEX also published the volumes that will be auctioned in 2020.
ICE Futures Europe (ICE), which acts as the United Kingdom's platform and is located in London. The first EUAA auction on ICE took place in September 2014.

The auctioning share increased from 2013 to 2020. In 2013, over 40% of allowances were auctioned, and it is estimated that 57% of allowances will be auctioned during 2013–2020. The volume of free allowances decreases faster than the cap, which causes more allowances to be auctioned. The EU ETS Directive foresees that the share of allowances to be auctioned will remain the same after 2020. EU leaders decided in October 2014 that free allocation shall not expire, but that the share of allowances being auctioned will not decline during the next decade.
Allocation of allowances in phases
Auctioned allowances in 2013-2020
Price volatility
The price of carbon is a result of supply and demand and can sometimes be volatile. Demand is linked to emissions in the EU countries and can vary depending on factors such as temperature (increased heating demand), economic activity, and the amount of renewable energy produced from wind and solar. New investments in lowering emissions are also a factor. In 2020, EUA prices may also be influenced by Brexit.
== References == |
governorship of arnold schwarzenegger | The governorship of Arnold Schwarzenegger began in 2003, when Arnold Schwarzenegger ran for Governor of California in a recall election. He was subsequently elected Governor when the previous governor Gray Davis was recalled and Schwarzenegger placed first among replacement candidates. Schwarzenegger served the remainder of Davis' incomplete term between 2003 and 2007. Schwarzenegger was then reelected to a second term in 2006, serving out this full term and leaving office in January 2011. Schwarzenegger was unable to run for a third term due to term limits imposed by Constitution of California.
At the start of his first term as governor, Schwarzenegger proposed deep cuts in the state budget and was met with opposition in the California State Legislature. When San Francisco started granting same-sex marriage licenses at the behest of mayor Gavin Newsom, the governor ordered state attorney general Bill Lockyer to intervene in the matter and vetoed legislation that would have legalized same-sex marriage. Because of their opposition to his budget cuts, Schwarzenegger controversially called his opponents in the legislature girlie men. At the 2004 Republican National Convention, Schwarzenegger gave a speech endorsing the reelection of George W. Bush as President of the United States. In his State of the State address in 2005, Schwarzenegger proposed a redistricting reform that would have retired judges drawing new districts for the state. The first executions of Schwarzenegger's term occurred in 2005 with Donald Beardslee in January and Stanley Williams in December, which drew opposition from opponents of capital punishment and his native country of Austria. In June, the governor called for a special election in an effort to pass several of his proposed reforms. However, the voters ultimately rejected all of Schwarzenegger's propositions. Schwarzenegger started off 2006 by apologizing for holding the special election, which had cost the state money, and proposed a centrist agenda moving forward. The governor opposed the federal government's effort to build fencing on the Mexico–United States border and likened it to the Berlin Wall. In 2006, Schwarzenegger made several efforts to address global warming by signing the Global Warming Solutions Act of 2006 and negotiating the creation of a carbon emissions trading market with British Prime Minister Tony Blair. By year's end, the governor called on the federal government to give a deadline for the withdrawal of U.S. troops from Iraq.
On November 7, 2006, Schwarzenegger defeated Democratic state treasurer Phil Angelides in the 2006 California gubernatorial election, winning a second term as governor. In his second term, Schwarzenegger pledged to be a centrist politician and cooperate with the Democrats to resolve statewide political issues. Only days into the term, the governor proposed universal health insurance in the state and called for new bonds for schools, prisons, and other infrastructure. In May 2007, Schwarzenegger met with two of his counterparts in Canada, Dalton McGuinty and Gordon Campbell, in order to address climate change and advocate for stem cell research. An oil spill occurred in November when the Cosco Busan struck the San Francisco–Oakland Bay Bridge. In 2008, Schwarzenegger proposed a balanced budget amendment to the state constitution.
Also in his second term, Schwarzenegger proposed an austere fiscal policy in response to the Great Recession. Continuing his efforts to address environmental issues, the governor signed a memorandum of understanding with Mexican President Felipe Calderón and signed legislation pertaining to global warming. However, by October, Schwarzenegger vetoed 35 percent of the bills that the California State Legislature passed, which was the highest the rate had ever been since the statistic was first tracked when Ronald Reagan was governor of the state. In the election, voters approved Proposition 11, which shifted redistricting powers away from the legislature and created the California Citizens Redistricting Commission. In the midst of the Great Recession in 2009, Schwarzenegger called upon the legislature to pass deep budget cuts and warned that the state was facing insolvency. At the same time, the governor approved of President Barack Obama's federal stimulus bill. Schwarzenegger appointed Laura Chick as inspector general to oversee California's share of the stimulus bill. In May, the governor voiced his openness to marijuana legalization and a special election resulted in all but one of his state propositions being rejected.
Background
Pre-gubernatorial politics
Prior to his 2003 run for governor, Schwarzenegger had had occasional involvement in politics.
In Austria, Schwarzenegger was officially a member of the youth weightlifting team of the Austrian People's Party (ÖVP).

In 1985, Schwarzenegger appeared in "Stop the Madness", an anti-drug music video sponsored by the Reagan administration. He first came to wide public notice as a Republican during the 1988 presidential election, accompanying then-Vice President George H. W. Bush at a campaign rally.
Campaign in the 2003 gubernatorial recall election
Schwarzenegger was elected in the recall election that unseated Democratic governor Gray Davis.
Transition into office
On November 17, 2003, Schwarzenegger was sworn in as the 38th Governor of California.
Electoral politics and national political activities
Schwarzenegger actively supported the reelection campaign of President George W. Bush in the 2004 United States presidential election. Schwarzenegger gave a speech at the 2004 Republican National Convention on August 31 at Madison Square Garden, closing his speech by remarking, "George W. Bush has worked hard to protect and preserve the American dream for all of us. And that's why I say, send him back to Washington for four more years." On October 29, 2004, Schwarzenegger appeared at a reelection campaign rally for President George W. Bush in Columbus, Ohio, saying, "I am here to pump you up to reelect President George W. Bush."

On November 2, 2006, Schwarzenegger called for a deadline to withdraw American troops from Iraq. On September 12, 2007, Schwarzenegger vetoed a bill that would have allowed Californians to vote in a nonbinding referendum on whether they favored an immediate withdrawal of U.S. troops from Iraq.

At the California Republican Party convention in Indian Wells on September 7, Schwarzenegger warned his fellow Republicans about how they were faring with the electorate, remarking that "in movie terms, we are dying at the box office. We are not filling the seats."

On January 31, 2008, Schwarzenegger endorsed U.S. Senator John McCain's campaign for the Republican nomination in the 2008 United States presidential election. Marking a household split, on February 3, 2008, Schwarzenegger's wife, California First Lady Maria Shriver, endorsed U.S. Senator Barack Obama's campaign for president in the Democratic primaries. After John McCain (by then the Republican nominee) called for an end to the federal ban on offshore drilling on June 16, 2008, Schwarzenegger and other governors promised on June 18 to block attempts to tap offshore petroleum reserves, citing concerns about the environment and tourism. In a taped interview on Meet the Press on June 29, Schwarzenegger defended McCain, calling him "the real deal on the environment".

As part of a bipartisan group of governors, on February 24, 2008, Schwarzenegger called on George W. Bush, the U.S. Congress, and the presidential candidates to back a major spending program to repair the nation's roads, bridges, rail lines, and water systems.

It was reported on April 12, 2009 that Schwarzenegger and Democratic Pennsylvania Governor Ed Rendell sent a private memo to Obama saying he needed to assert more political leadership instead of leaving it to Congress to draft a plan for improving the nation's aging highways, bridges, and ports.
Ballot measures
In 2004, Schwarzenegger urged Californians to vote against Proposition 70, which would allow the expansion of casinos in return for payments on par with state corporate taxes, saying, "The Indians are ripping us off." On October 18, 2004, he endorsed Propositions 62 and 71, the former of which would establish open primary elections and the latter of which would authorize the sale of $3 billion in bonds and the creation of a state institute that would award grants to stem cell researchers. At a rally in Los Angeles on October 28, 2004, Schwarzenegger joined three former California governors, including his predecessor Gray Davis, to voice his opposition to Proposition 66, which would have limited the state's three-strikes law.
2005 special election on ballot measures
Schwarzenegger announced on June 13, 2005 that a special election would occur on November 8 of that year for voters to decide on a package of government reforms he championed concerning how California spends state tax dollars and how Californians elect their politicians. In the special election, all four of Schwarzenegger's signature ballot proposals (Propositions 74, 75, 76, and 77) were rejected by the voters, as were four other initiatives. After learning that at least two of his initiatives had failed, Schwarzenegger told supporters, "Tomorrow, we begin anew. I feel the same tonight as that night two years ago...You know with all my heart, I want to do the right thing for the people of California." On January 5, 2006, Schwarzenegger gave a State of the State address in which he apologized to the voters of California for sponsoring the costly special election and proposed a series of policies that represented a dramatic return to the political center.
2009 special election on ballot measures
2006 reelection
In February 2006, Steve Schmidt and Matthew Dowd were respectively named the campaign manager and chief strategist for Schwarzenegger's reelection campaign. On April 14, Schwarzenegger's reelection campaign released his federal and state tax returns for 2002–2004 after State Controller Steve Westly and State Treasurer Phil Angelides, both Democratic contenders for the governorship, released theirs. On June 6, Schwarzenegger won nearly 90 percent of the vote in the Republican gubernatorial primary election without serious opposition.

On September 13, 2006, Angelides admitted to leaking a controversial tape of Schwarzenegger to the media. Katie Levinson, the communications director for the Schwarzenegger campaign, called the action "unethical at best, criminal at worst". On October 7, Schwarzenegger participated in a debate with Angelides, who had won the Democratic nomination. On October 12, Schwarzenegger appeared on The Tonight Show with Jay Leno; the appearance drew criticism because Angelides was not provided a similar media opportunity. While the Democratic Party gained six governorships in the 2006 elections, Schwarzenegger defeated Angelides in the general election on November 7.
Political positions
Schwarzenegger is a member of the Republican Party. On September 7, 2007, Schwarzenegger said, "I am proud to be a member of the party of Abraham Lincoln. I am proud to be a member of the party of Ronald Reagan."

When Schwarzenegger was inaugurated for his second term on January 5, 2007, he pledged to work as a centrist by creating an era of "post-partisanship" that he claimed would bring all Californians together to solve the state's problems.

The New Yorker's Connie Bruck called Schwarzenegger "supermoderate". Andrew Gumbel of The Independent wrote, "[Schwarzenegger is] a man of surprises. He mixes in the social circles of Hollywood's liberal elite...yet he has always regarded himself as a staunch Republican—among other things, an act of rebellion against the staunch social democratic values of his native Austria." Los Angeles Times writer Joe Mathews wrote that Schwarzenegger routinely sides with business and asserts quasi-libertarian views on individual freedom but has crossed borders and associated with groups whose experiences seem foreign to his own. Daniel Weintraub of the San Francisco Chronicle wrote, "[Schwarzenegger] is liberal on some issues, conservative on others and, sometimes, but not always, in the middle."
Ratings
In October 2003, On the Issues rated Schwarzenegger as a "moderate liberal populist". In April 2008, the organization reclassified Schwarzenegger as a "centrist". In a 2010 report published by Equality California, an LGBT rights organization, Schwarzenegger received a 57 percent score. On April 21, 2010, Citizens for Responsibility and Ethics in Washington named Schwarzenegger and ten other governors as the "worst governors", accusing him of self-enrichment, cozying up with special interests, conflicts of interest, cronyism, pressuring state officials, mismanagement, and vetoing hospital transparency bills.

In 2004, the California League of Conservation Voters, an environmental organization, released a scorecard giving Schwarzenegger a 58%. Schwarzenegger would receive the same score in 2005. In 2006, his score fell to 50%. Schwarzenegger's score improved to 63% in 2007. In 2008, Schwarzenegger received a 60% score. In 2009, Schwarzenegger's score from the CLCV fell to its lowest ever, 28%. However, Schwarzenegger's score would recover in 2010, improving to 56%. As Governor of California, Schwarzenegger's lifetime score from the CLCV is 53%.

In March 2005, the Cato Institute, an American libertarian think tank, issued a "fiscal policy report card" for 2004 in which it assigned an A grade to Schwarzenegger's performance as governor. Schwarzenegger was given a D grade in the 2006 report card from the Cato Institute. In the 2008 report card, Schwarzenegger was given a C grade. In 2010, Schwarzenegger received a D grade in the report card.
Use of veto power
As a Republican governor in a state with Democratic majorities in both chambers of its state legislature, Schwarzenegger made use of his veto power.
On September 27, 2008, Schwarzenegger signed and vetoed about a hundred bills each, facing a September 30 deadline at which unsigned bills automatically became law. On the next day, Schwarzenegger vetoed 131 bills, twice as many as he had signed. By October 2008, Schwarzenegger had declined to sign 415 of the 1,187 bills that had appeared on his desk that year, a rate of 35 percent. This was the highest rate since state officials began tracking that statistic when Ronald Reagan was governor.
Appointments and staffing
On April 30, 2005, Schwarzenegger appointed Alan Bersin, the superintendent of the San Diego Unified School District, to serve as the state education secretary. In March 2007, Schwarzenegger appointed David Long, Riverside County's superintendent of schools since 1999, to the position of state secretary of education.

On November 30, 2005, Schwarzenegger named Public Utilities Commissioner Susan Kennedy, a Democrat, as his new chief of staff, replacing Patricia Clarey. In a news conference, Schwarzenegger said, "[Kennedy is] a woman that is known as being a hardworking woman, dedicated, and is willing to work whatever it takes to get the job done."

On September 4, 2007, Schwarzenegger named former federal prosecutor Paul Seave, a Democrat, to be his new director of gang and youth violence policy.

On January 24, 2008, the State Senate rejected Schwarzenegger's nomination of Judith Case to the California Air Resources Board by a party-line vote of 20–15 after Democratic lawmakers questioned her commitment to fighting for cleaner air.

On March 20, 2008, Schwarzenegger removed Clint Eastwood and Bobby Shriver, his brother-in-law, from the state parks commission, where both had served since before Schwarzenegger took office.

On March 18, 2009, Schwarzenegger appointed former Assemblyman Fred Aguiar as secretary of the SCSA.

On April 3, 2009, Schwarzenegger appointed Laura Chick to the newly created office of inspector general to oversee the state's share of the $787 billion federal economic stimulus package.
Judicial appointments
On December 9, 2005, Schwarzenegger nominated Carol Corrigan, a moderate Republican, to the state Supreme Court to fill the vacancy created by the departure of Janice Rogers Brown.

Schwarzenegger made eighteen judicial appointments on August 20, 2007 that included a substantially greater mix of women and minorities, after having been sharply criticized earlier in the year for having previously named mostly white men to the bench.
Fiscal matters
Schwarzenegger's first action as governor was to return the state's vehicle registration fee to 0.65 percent of a car's value, after it had previously been raised to 2 percent on October 1, 2003.

It was announced on December 12, 2003 that Schwarzenegger and the California State Legislature had reached an agreement that put on the ballot a bond issue to finance as much as $15 billion in debt and a constitutional spending limit. On December 18, Schwarzenegger declared a fiscal crisis and said he would bypass the legislature to impose $150 million in spending cuts.

On January 6, 2004, Schwarzenegger gave his first State of the State address, in which he warned voters to expect deep budget cuts and urged them to support $15 billion in bonds. In the budget proposal that he presented on January 9, Schwarzenegger's plan was to cut spending by more than $4.6 billion, with the largest reduction, roughly $2.7 billion, coming from health and human services programs. Acknowledging that the reductions would be painful to many of the poorest Californians, Schwarzenegger said "irresponsible" spending by his predecessor forced his hand.

On July 17, 2004, Schwarzenegger called state legislators "girlie men" and called upon voters to "terminate" them at the polls in November if they didn't pass his $103 billion budget. Amid Democratic criticism of these remarks, Schwarzenegger's spokesperson said on July 19 that no apology would be forthcoming. On July 31, Schwarzenegger signed a $78.8 billion budget, which was a $32 billion reduction over five years.

Schwarzenegger signed agreements with five Native American tribes on June 21, 2004 that administration officials said would provide $275 million a year for the state's general fund—representing about 15 percent of the tribes' profits.

On May 9, 2007, Schwarzenegger's office announced that eleven California-based companies signed contracts worth $3 billion with Chinese companies in a move to expand trade between the U.S. state and China.

On May 26, 2005, Schwarzenegger travelled to San Jose, California, to fill a pothole dug by city crews just a few hours before, as part of an attempt to dramatize his efforts to increase funding for transportation projects.

Schwarzenegger announced on June 13, 2005 that a special election would be held on November 8 of that year for voters to decide on a package of government reforms he championed on state spending and elections. In the special election, all four of Schwarzenegger's signature ballot proposals were rejected by the voters, along with four other initiatives.

In his fourth annual State of the State address on January 9, 2007, Schwarzenegger called for $43.3 billion in new bond spending for schools, prisons, and other infrastructure.

Schwarzenegger gave his fifth State of the State address on January 8, 2008, in which he proposed a balanced budget amendment, a constitutional amendment prohibiting the state from spending more than it collects in taxes.

On January 11, 2008, Schwarzenegger proposed austerity measures, taking billions of dollars from public schools, shutting down four-dozen state parks, and releasing tens of thousands of prisoners. At the same time, the governor declared a fiscal emergency and called a special session of the state legislature to trim the current year's spending.

Schwarzenegger signed six bills on February 16, 2008 that aimed at reducing at least part of the state's $14.5 billion deficit stretching over two fiscal years.
On February 19, 2008, Schwarzenegger signed an executive order requiring state agencies to make additional spending cuts totaling $100 million as part of an effort to help solve the state's fiscal crisis.

On April 24, 2008, Schwarzenegger predicted that California would face a budget deficit of more than $10 billion in the upcoming fiscal year.

On September 23, 2008, Schwarzenegger signed the state's budget, ending an 85-day deadlock over how to close the state's $15.2 billion deficit.

On July 9, 2008, Schwarzenegger signed a bill that aimed to keep many homeowners from losing their properties to foreclosure. On July 31, 2008, Schwarzenegger ordered the pay of up to 200,000 state workers cut to the minimum wage and laid off more than 10,000 others, blaming a looming cash crisis. Schwarzenegger proposed a one-cent sales tax increase on August 4, framing it as a temporary sacrifice to be recouped by Californians in years to come.

On August 6, 2008, Schwarzenegger said that he wouldn't sign any bills until the legislature passed a budget. Schwarzenegger sued Controller John Chiang on August 11, aiming to force the unpaid furlough of 15,600 more state workers two days a month. Even though he had earlier promised not to sign any bills, Schwarzenegger signed a measure on August 26 for a statewide bullet train system that he strongly supported. On October 27, Schwarzenegger said he would call a special legislative session to address the state's budget a day after the November 4 elections.

On December 1, 2008, Schwarzenegger declared a fiscal emergency, calling for fast legislative action to alleviate the state's $11.2 billion shortfall in revenue: "Without immediate action our state is headed for a fiscal disaster and that is why with more than two dozen new legislators sworn in today—I am wasting no time in calling a fiscal emergency special session."

On December 18, 2008, Schwarzenegger promised to veto a budget bill that he said would cut spending too little, raise taxes and fees too much, and shortchange economic stimulus programs. Schwarzenegger called on the legislature on December 19 to convene a new special legislative session to address the state's fiscal crisis and ordered layoffs and mandatory unpaid time off for state workers as a money-saving measure.

On January 6, 2009, Schwarzenegger vetoed an $18 billion deficit-cutting package, with his spokesperson saying that it did not meet the governor's demands for making more cuts, streamlining government, and creating an economic stimulus. In a January 7 news conference, Schwarzenegger said, "I cannot go out and get Republican votes when I wouldn't vote for it." On January 15, Schwarzenegger gave an unusually terse State of the State address in which he warned that the legislature must agree on a budget solution before the state faced insolvency.

On January 28, 2009, Schwarzenegger threatened to dismiss state workers if a judge or employee unions blocked his plan to furlough thousands of workers two days a month beginning the next week. After a judge ruled on January 29 that the governor had the legal authority to order workers to take time off without pay, Schwarzenegger told statewide elected officials on January 30 to furlough state workers two days a month.

On February 6, more than 200,000 state employees had to take the day off without pay to help ease California's budget crisis. Schwarzenegger signed a budget bill on February 20 raising $12.8 billion in new taxes.
On February 20, Schwarzenegger called the federal stimulus plan a "terrific package" and said he was "more than happy" to take money from any governor that declined to accept aid from the stimulus.

On March 27, 2009, Schwarzenegger signed five bills that would allow California to receive more than $17.5 billion in federal economic stimulus aid.

On May 14, 2009, Schwarzenegger unveiled a budget proposal planning to close a huge budget deficit with deeper cuts to education and health programs and by borrowing billions more dollars.
Environmentalism
Schwarzenegger attended an energy conference on November 16, 2005, where he urged diplomats and business leaders to forge ties that would reduce the world's dependence on oil and increase energy efficiency.

Schwarzenegger signed the Sustainable Oceans Act on May 26, 2006, which made California the first U.S. state to adopt comprehensive controls on future fish farming in its coastal waters.

Schwarzenegger met with New York City mayor Michael Bloomberg in Sunnyvale, California, on September 21, 2006 to discuss California's sustainability initiatives. On September 27, Schwarzenegger signed into law the Global Warming Solutions Act of 2006 and said the effort kicked off "a bold new era of environmental protection".

On April 11, 2007, Schwarzenegger gave a speech at a conference in Washington, D.C., where he said, "For too long the environmental movement was powered by guilt, and that doesn't work. The movement can't nag or scold, but must be a positive force." Schwarzenegger also said the environmental movement must become "hip and sexy" if it is to succeed. On April 25, Schwarzenegger threatened to sue the United States Environmental Protection Agency if it failed to act soon on a state bid to crack down on greenhouse gas emissions from cars.

Schwarzenegger met Ontario premier Dalton McGuinty on May 30, 2007, when the two signed deals to fight climate change and boost stem cell research. The governor then met Canadian Prime Minister Stephen Harper, with whom he swapped hockey jerseys. On May 31, 2007, Schwarzenegger and British Columbia premier Gordon Campbell signed a memorandum of understanding on climate change in Vancouver, setting targets for greenhouse gas emissions below 1990 levels.

On July 31, 2006, Schwarzenegger and United Kingdom Prime Minister Tony Blair agreed to create a market for the trading of carbon emissions and to share economic and scientific research on climate change and non-polluting technology.

On September 24, 2007, Schwarzenegger addressed the United Nations. In his remarks, he said that he believed rich and poor nations needed to get over their disagreements about how to fight climate change and forge a new pact to replace the Kyoto Protocol.

On November 8, 2007, Schwarzenegger, with the backing of state Attorney General Jerry Brown, sued the Bush administration in pursuit of California's ability to impose its own automobile clean air standards. Schwarzenegger said that he was prepared to "sue again and sue again" until California received permission to impose its own tough standards on automakers to curb global warming. Schwarzenegger announced on December 20, 2007 plans to sue the federal government over its decision not to allow a California plan to reduce greenhouse gas emissions. On January 2, 2008, California sued the Environmental Protection Agency, challenging its recent decision to block California rules curbing greenhouse gas emissions from new cars and trucks.

On February 14, 2008, Schwarzenegger and Mexican President Felipe Calderón signed a memorandum of understanding to formalize a working relationship on environmental issues such as air-quality monitoring.

At a Yale University climate conference on April 18, 2008, Schwarzenegger signed a pledge with 17 other U.S. states to pressure Congress and the next president to quickly adopt aggressive limits on greenhouse gas emissions.

Before the midnight deadline on September 30, 2008, at which bills automatically became law without his signature, Schwarzenegger signed a bill aimed at helping the state fight global warming by better coordinating local planning efforts to curb suburban sprawl.

On November 14, 2008, Schwarzenegger signed an executive order directing state agencies to study the effects of global warming and recommend how the state needs to adapt to such changes in land use planning and building new infrastructure.

After the California Air Resources Board voted unanimously to adopt the nation's most comprehensive anti-global warming plan on December 11, 2008, Schwarzenegger said that California was providing a road map for the rest of the country.

With California having been refused a waiver from less-stringent national standards in 2007 under the Bush presidency, new president Barack Obama ordered his environmental officials on January 26, 2009 to immediately review California's regulation, a move that was praised by Schwarzenegger as "a great victory for California and for cleaning the air around the nation for generations to come".

Schwarzenegger traveled to Washington, D.C. on May 19, 2009 to celebrate a victory on clean air with Obama.
Automobile policies
Soon after his inauguration in November 2003, Schwarzenegger's first action as governor was to return the vehicle registration fee to 0.65 percent of a car's value, after it had been raised to 2 percent on October 1, 2003.

In 2004, Schwarzenegger initiated the California Hydrogen Highway plan to create infrastructure to support hydrogen fuel-powered transport.

On September 15, 2006, Schwarzenegger signed into law a bill that made California the fourth U.S. state to ban motorists from holding cell phones while driving. On September 13, 2007, Schwarzenegger signed a bill that banned cell phone use for drivers under the age of 18. On September 24, 2008, Schwarzenegger signed a bill that made it illegal to read or send text messages while driving in California.

Schwarzenegger sued the Bush administration in a dispute over whether California was allowed to impose its own clean air standards on automobiles.
LGBTQ matters
On June 29, 2006 Schwarzenegger attended a fundraiser for Log Cabin Republicans, where he said, "I can't promise you that we will always be [of] the same mind, but I can promise you that I will always have an open mind."
Same-sex marriage
On February 20, 2004, Schwarzenegger ordered California Attorney General Bill Lockyer to intervene immediately to stop San Francisco from granting marriage licenses to same-sex couples. On the March 3 episode of The Tonight Show with Jay Leno, Schwarzenegger said it would be "fine with [him]" if Californians changed the state's family code to allow for same-sex marriages. He also said he opposed a proposed constitutional amendment supported by George W. Bush that would nationally ban them.

On September 29, 2005, Schwarzenegger vetoed 52 bills, among them legislation to legalize same-sex marriage. On October 12, 2007, for a second time, Schwarzenegger vetoed a bill to legalize same-sex marriage. Schwarzenegger said, "I support current domestic partnership rights and will continue to vigorously defend and enforce these rights."

On May 15, 2008, the state Supreme Court, striking down a 1977 law and Proposition 22 in a 4–3 decision, ruled that same-sex couples had a constitutional right to marry. In a statement, Schwarzenegger said that he respected the ruling and did not support a constitutional amendment to overturn it. On April 11, 2008, Schwarzenegger had told a group of gay Republicans that an attempt to ban same-sex marriage by changing the state constitution was a "total waste of time" and promised to oppose such an initiative if it qualified for the state ballot. Schwarzenegger therefore did not support 2008 California Proposition 8, which was nevertheless passed by voters, again banning same-sex marriages in the state.

Proposition 8 was later struck down by court decisions, and Schwarzenegger did not appeal. At the end of his governorship, civil rights attorney Shannon Minter of the National Center for Lesbian Rights gave Schwarzenegger a B− grade on gay and lesbian issues, calling Schwarzenegger's decision not to appeal Perry v. Schwarzenegger, which struck down Proposition 8, "a really quite dramatic stand for a Republican governor to have taken."
Disaster management
After a magnitude 6.5 earthquake on December 21, 2003, Schwarzenegger visited Paso Robles on December 23 and declared a state of emergency in San Luis Obispo County.

On January 12, 2005, Schwarzenegger went to La Conchita, California, after a deadly landslide on January 10, and told residents, "In the past few days, we have seen the power of nature cause damage and despair, but we will match that power with our own resolve."

Schwarzenegger declared a state of emergency in ten counties on January 16, 2007 after freezing weather damaged California farmers' crops, causing up to $1 billion in damages. On April 30, 2007, Schwarzenegger declared a state of emergency after a highway collapse in Oakland, authorizing free transit on the Bay Area Rapid Transit rail system, ferries, and buses for one day.

With more than a dozen wildfires raging across southern California, on October 22, 2007 Schwarzenegger declared a state of emergency in seven counties and reassigned 800 soldiers in the National Guard from patrolling the border to help battle the wildfires, calling the situation "a tragic time for California". On October 23, Schwarzenegger said that he was "happy" with the number of firefighters working the blazes, but officials said that they were stretched thin and that a lack of resources was as much a burden as the temperatures and winds. In spite of their differences on policy, Schwarzenegger and George W. Bush travelled to southern California on October 25 to view the scarred landscape by helicopter, with Bush telling Californians that they wouldn't be forgotten in Washington, D.C. During a news conference on October 27, Schwarzenegger said that at least two fires were started intentionally and two more had suspicious origins, and issued a warning to arsonists: "We will hunt down the people that are responsible for that. If I were one of the people who started the fires, I would not sleep soundly right now, because we're right behind you."

On November 7, 2007, the container ship Cosco Busan struck a tower of the San Francisco–Oakland Bay Bridge, causing an oil spill. This led to Schwarzenegger declaring a state of emergency on November 9, saying, "There is tremendous damage on the wildlife and on the beaches. If mistakes were made, then we will bring them out." On November 13, Schwarzenegger issued an order suspending all fishing and crabbing for human consumption in areas affected by the spill until at least December 1. The ban on fishing and crabbing in the San Francisco area was lifted by Schwarzenegger on November 29 after studies showed no ill effects from the oil spill, but state officials urged seafood lovers to stay away from some mussels and oysters.

After the Pacific Fishery Management Council voted on April 10, 2008 to cancel the chinook fishing season in an effort to reverse the catastrophic disappearance of California's run of king salmon, Schwarzenegger declared a state of emergency and sent a letter to George W. Bush asking for his help in obtaining federal disaster assistance.

On May 27, 2008, Schwarzenegger and Nevada governor Jim Gibbons declared a state of emergency in the Lake Tahoe basin after taking the advice of a two-state commission that declared the region ripe for catastrophic fire.

On June 4, 2008, Schwarzenegger issued a drought declaration—the first of its kind since 1991—ordering the transfer of water from less dry areas to those that are dangerously dry. The governor also said he would ask the federal government for aid to farmers and press water districts, cities, and local water agencies to accelerate conservation. On June 12, 2008, Schwarzenegger declared a state of emergency in nine counties over the drought, ordering several state agencies to help drill wells, use the California Aqueduct to transport water to farmers, and expedite water transfers between agencies.

On September 29, 2008, Schwarzenegger signed several bills that aimed to speed response and improve cleanup efforts after a major oil spill.

Schwarzenegger declared a state of emergency on February 27, 2009 because of three years of below-average rain and snowfall in California, a step that urged urban water agencies to reduce water use by 20 percent.
Criminal justice
On August 16, 2004, Schwarzenegger stated that he was considering giving weightlifting equipment back to prisoners, who had been barred from using weights since 1997.

At a rally in Los Angeles on October 28, 2004, Schwarzenegger joined three former California governors, including his predecessor Gray Davis, to voice his opposition to Proposition 66, which would have amended the state's three-strikes law.

Schwarzenegger allowed the execution of Donald Beardslee to proceed on January 19, 2005, marking the first California state execution during his tenure as governor and the first to occur in three years.

On June 26, 2006, reversing a decade of California policy, Schwarzenegger called for the construction of at least two more prisons and the addition of thousands of beds in existing facilities in order to deal with what he called "dangerously overcrowded" prisons. Schwarzenegger proclaimed a state of emergency regarding prison crowding on October 4, 2006 and said, "Our prisons are now beyond maximum capacity, and we must act immediately and aggressively to resolve this issue."

On January 19, 2006, Santa Barbara judge Frank Ochoa overturned Schwarzenegger's decision to deny parole to inmate Frank Pintye, who was present when his friend beat a 69-year-old man with a tire iron and then set the man ablaze.

The California State Legislature approved the largest single prison construction program in U.S. history and agreed to send 8,000 convicts to other states on April 26, 2007. The bill, which cost between $7.8 billion and $8.3 billion and added 53,000 beds to California's prisons and county jails, was signed into law by Schwarzenegger on May 3, 2007.
Execution of Stanley Williams
On November 25, 2005, Schwarzenegger said he would consider granting clemency to convicted killer and Crips co-founder Stanley Williams. In a closed-door meeting, Schwarzenegger met with lawyers for Stanley Williams and prosecutors, with each side having thirty minutes to plead its case to the governor. Schwarzenegger's press secretary, Margita Thompson, told reporters that Schwarzenegger's decision on whether to grant clemency to Williams would come as late as December 12. Schwarzenegger denied Williams clemency on December 12, writing, "Stanley Williams insists he is innocent, and that he will not and should not apologize or otherwise atone for the murders of the four victims in this case. Without an apology and atonement for these senseless and brutal killings, there can be no redemption." After the U.S. Supreme Court refused to stay the execution, Williams was executed shortly after midnight at San Quentin State Prison on December 13.

In his birth nation of Austria, Schwarzenegger faced backlash over the execution on December 19 from left-wing councillors in Graz, who announced that they were seeking to strip him of his Austrian citizenship. Schwarzenegger sent a letter to Graz on December 19 demanding that his name be removed from a stadium that had borne it since 1997. He also wrote that he was revoking his permission for Graz to use his name in any advertising campaigns that promote the city. On December 26, Schwarzenegger's name was removed from the stadium.
Healthcare and public health
Schwarzenegger signed a bill on September 28 that banned mercury in vaccines for young children and pregnant women, making California the second U.S. state after Iowa to do so. On September 30, Schwarzenegger vetoed two bills—one that would have required the California Department of Health Services to set up a website to help consumers compare prices among Canadian pharmacies and buy medicines from them, and another that would have required California to monitor foreign suppliers of prescription drugs to make sure they met American standards for purity, handling and packaging.

On December 7, 2004, Schwarzenegger was giving a speech in Long Beach at an annual conference celebrating women's contributions to the state when he was interrupted by protesting nurses, whom he criticized as "special interests".

On March 6, 2005, Schwarzenegger declared his desire to ban all sales of junk food in California schools and instead fill school vending machines with fresh fruits, vegetables and milk. On September 15, Schwarzenegger signed bills that banned the sale of sodas in high schools and set fat, sugar, and calorie standards for all food, except cafeteria lunches, sold in public schools.

On September 29, 2005, Schwarzenegger vetoed 52 bills, among them legislation that would give residents access to cheaper prescriptions from Canada and create greater oversight of the state's $3 billion stem cell research program.

After George W. Bush vetoed expanded federal funding of embryonic stem cell research on July 19, 2006, Schwarzenegger authorized a $150 million loan to fund California's stem cell institute on July 20.

On January 8, 2007, Schwarzenegger proposed a system of universal health insurance for Californians. On July 12, 2007, Schwarzenegger met with Bay Area executives, asking them to support his health care reform plan, while deriding a Democratic alternative and single-payer healthcare.

On October 14, 2007, Schwarzenegger signed bills that banned phthalates in children's products. On December 11, 2007, Schwarzenegger allowed some financially struggling hospitals to keep operating until 2020 even though the state said they were most likely to crumple during a major seismic event.

Schwarzenegger signed a bill on July 25, 2008 that made California the first U.S. state to ban trans fats in restaurant food.
Education
In April 2005, Schwarzenegger appointed Alan Bersin to serve as the state secretary of education. In March 2007, Schwarzenegger appointed David Long to serve as state secretary of education.

While giving a commencement speech at Santa Monica College on June 14, 2005, Schwarzenegger faced boos, jeers, turned backs, and signs of protest against his policies on education funding.

In his 2008 State of the State address, Schwarzenegger stated that his education priority would be to transform 98 school districts that had posted rock-bottom test scores for at least five years. On February 27, 2008, Schwarzenegger and state schools chief Jack O'Connell announced a joint plan to help 96 troubled school districts improve academically.

Schwarzenegger filed a lawsuit against the United States Forest Service on February 28, 2008 for adopting a management plan that would allow road construction and oil drilling in California's largest national forests, saying, "We are forced to once again stand up for California's forests. Despite repeated attempts to ensure that the United States Forest Service honor its written assurances that California's roadless areas would be protected, they have failed to do so."
Immigration
California sits along the United States-Mexico border, making immigration matters particularly relevant in the state.
Schwarzenegger vetoed a bill on September 22, 2004 that would have given as many as two million illegal immigrants California driver's licenses, claiming that the measure failed to provide sufficient security provisions at a time of heightened terrorism concerns.

In a radio interview on April 28, 2005, Schwarzenegger praised the Minutemen campaign that used armed volunteers to stop illegal immigrants from crossing into the U.S., which drew condemnation from Democrats, immigrants' rights groups, the Mexican government, and some Republicans.

Schwarzenegger caused controversy on October 4, 2006 when he said that Mexican immigrants "try to stay Mexican" rather than assimilate in the United States.

On April 23, 2006, Schwarzenegger said that building a proposed 700-mile wall along the border with Mexico to deter illegal immigration would amount to "going back to the Stone Ages" and urged the federal government to instead use high-tech gear and more patrols to secure the nation's southern boundary. In leaked audio tapes from March, Schwarzenegger likened the proposed Mexico–United States border fence to the Berlin Wall: "We had the Berlin Wall; we had walls everywhere. But we always looked at the wall as kind of like the outside of the wall is the enemy. Are we looking at Mexico as the enemy? No, it's not. These are our trading partners."

On June 23, 2006, Schwarzenegger rejected a request from President George W. Bush to more than double the number of California National Guard troops that would be deployed to the border, fearing the commitment could leave the state vulnerable if an earthquake or wildfire erupted.

On November 9, 2006, Schwarzenegger met with Mexican President Vicente Fox and discussed immigration among other matters.
Firearms
On September 13, 2004, Schwarzenegger signed the .50 Caliber BMG Regulation Act, which banned the manufacturing, sale, distribution, and importation of .50 BMG rifles, making California the first U.S. state to do so. On October 14, 2007, Schwarzenegger signed legislation that made California the first U.S. state to require semiautomatic pistols sold in the state to leave a unique imprint on bullets that are fired.
State election reform
In his State of the State address on January 5, 2005, Schwarzenegger proposed turning over the drawing of the state's political map to a panel of retired judges. This was ultimately rejected by voters, who voted down Proposition 77 in the November 8, 2005 special election that Schwarzenegger had called.

After the passage of 2008 Proposition 11, Schwarzenegger declared victory on the issue of independent redistricting, saying, "This is why this is historic—the first time where really citizens independently of the Legislature...will draw the district lines in the future."

On January 15, 2008, Schwarzenegger endorsed 2008 Proposition 93 in a flip-flop on term limits.
Marijuana
GQ reported on October 29, 2007 that Schwarzenegger had told Piers Morgan in an interview that "[marijuana] is not a drug, it's a leaf." Schwarzenegger said on May 6, 2009 that he believed it was time to debate legalizing marijuana for recreational use in California.
Foreign relations
In 2004, Schwarzenegger made an official trip abroad, visiting Israel on May 2, 2004, where he met Prime Minister Ariel Sharon and attended the groundbreaking ceremony for the Simon Wiesenthal Centre's Museum of Tolerance in Jerusalem. On May 3, Schwarzenegger met King Abdullah II of Jordan in a hastily arranged visit following criticism from Arab Americans that his trip to the Middle East had excluded a meeting with Arabs. On November 10, 2004, Schwarzenegger traveled to Japan and met Prime Minister Junichiro Koizumi on November 12; Koizumi remarked that the governor was more popular in Japan than U.S. president George W. Bush.

In November 2005, Schwarzenegger traveled to China, speaking at an energy conference in Beijing on November 16 to encourage energy partnerships and decreased reliance on oil. On November 19, Schwarzenegger wrapped up his trip to China in Hong Kong, where he unveiled an anti-piracy ad he had filmed with Jackie Chan.

In July 2006, Schwarzenegger reached an agreement with the United Kingdom relating to partnerships on carbon emissions as well as green energy and related research.

Schwarzenegger met with Mexican President Vicente Fox on November 9, 2006 to discuss immigration and trade issues and to encourage further efforts on both sides to control greenhouse gases.

On June 26, 2007, Schwarzenegger visited London, where he met Tony Blair on Blair's final full day in office as Prime Minister and issued a plea for countries to join the fight against global warming. Via satellite, Schwarzenegger addressed the British Conservative Party on September 30, 2007, during which he called opposition leader David Cameron "a new, dynamic leader".

On October 30, 2007, Schwarzenegger met with Uruguayan President Tabaré Vázquez.

Schwarzenegger was in Baghdad on March 17, 2010, when he praised U.S. soldiers for helping Iraqi Prime Minister Nouri al-Maliki build and nurture Iraq's public institutions.
Other matters
Schwarzenegger declared April 24, 2005 a "Day of Remembrance of the Armenian genocide", to the chagrin of the Ankara Chamber of Commerce, an umbrella organization grouping some 300 Ankara-based unions and businesses.

On September 29, 2005, Schwarzenegger vetoed 52 bills, including legislation to raise the minimum wage. Schwarzenegger signed a bill on September 30 that tripled the damages celebrities could win from paparazzi if they were assaulted during a shoot and denied the photographers profits from any pictures taken in an altercation. On October 7, Schwarzenegger signed legislation to outlaw the sale to teenagers of electronic games featuring reckless mayhem and explicit sexuality.

On May 2, 2006, Schwarzenegger told NFL commissioner Paul Tagliabue and a committee of eleven owners that he wanted two teams to play in Los Angeles.

After obtaining a six-minute recording, the Los Angeles Times published an article on September 8, 2006, reporting that Schwarzenegger had casually said that "black blood" mixed with "Latino blood" equals "hot" when discussing Assemblywoman Bonnie Garcia's ethnicity with his chief of staff Susan Kennedy. Even though Garcia said she was not offended, Schwarzenegger apologized for the comment.

Schwarzenegger claimed on November 8, 2007 that he had assumed an unspecified behind-the-scenes role in talks to bring an end to the screenwriters' strike. In a news conference in Sacramento, Schwarzenegger said, "I'm talking to the parties that are involved because I think it's very important that we settle that as quickly as possible, because it has a tremendous economic impact on our state."

In Fresno, California, on June 6, 2008, Schwarzenegger met Honduran President Manuel Zelaya, who discussed job offers for Honduran workers. On June 12, 2008, Schwarzenegger and Chilean President Michelle Bachelet presided over the signing of a number of bilateral scientific, agricultural, and educational agreements.
Allegations of past groping
On December 9, 2003, Schwarzenegger declared that there was no investigation needed into the groping allegations that had been made against him. On the same day, stuntwoman Rhonda Miller sued Schwarzenegger for libel after his campaign emailed reporters a link to a criminal court website and suggested they search for the name Rhonda Miller. The website indicated that a Rhonda Miller had a criminal record for offenses which included prostitution, forgery, and drug dealing, but the stuntwoman's legal team said that the Rhonda Miller with the record was a different person. On August 25, 2006, Schwarzenegger settled a libel lawsuit with Anna Richardson, who claimed she was groped by him during a 2000 interview and later defamed by his aides during his 2003 campaign.
Assessments
In a Time article on November 1, 2010, Thad Kousser of the University of California, San Diego said, "[Schwarzenegger] is not divisive nor scandal plagued, but he's generally fallen short of changing the political culture of Sacramento and the policy course of the state." Nick Roman of KPCC wrote that "Schwarzenegger's legacy is varied and puzzling, inspiring, and infuriating—just like the state he governed."
Electoral history
Note that San Bernardino County did not report write-in votes for individual candidates.
See also
Opinion polling on the Arnold Schwarzenegger governorship
== References == |
planyc | PlaNYC was a strategic plan released by New York City Mayor Michael Bloomberg in 2007 to prepare the city for one million more residents, strengthen the economy, combat climate change, and enhance the quality of life for all New Yorkers. The plan brought together over 25 City agencies to work toward the vision of a greener, greater New York and significant progress was made towards the long-term goals over the following years.
PlaNYC specifically targeted ten areas of interest: Housing and Neighborhoods; Parks and Public Spaces; Brownfields; Waterways; Water Supply; Transportation; Energy; Air Quality; Solid Waste; and Climate Change.
Over 97% of the 127 initiatives in PlaNYC were launched within one year of its release, and almost two-thirds of its 2009 milestones were achieved or mostly achieved. The plan was updated in 2011 and expanded to 132 initiatives and more than 400 specific milestones for December 31, 2013.
Daniel L. Doctoroff, the deputy mayor for economic development and rebuilding, led the team of experts that developed the plan, which The New York Times called the Bloomberg administration's "most far-reaching"—"its fate could determine whether his administration will be remembered as truly transformative."

In April 2015, an updated strategic document outlining city policies for inclusive growth, sustainability, and resilience to climate change was released as One New York: The Plan for a Strong and Just City or OneNYC.
Components
The plan had three major components:
OpeNYC: Preparation for a sharp rise in New York City's population, expected to increase by more than one million over two decades.
MaintaiNYC: Repairing aging infrastructure, including city bridges, water mains, mass transit, building codes and power plants.
GreeNYC: Conserving New York City resources, with a goal of reducing New York City's carbon emissions by 30%.
Congestion pricing
One of the most controversial aspects of the plan was the mayor's call for congestion pricing, specifically a bid to levy a fee of $8.00 on all cars entering midtown Manhattan during peak hours on weekdays, with a few exemptions for through traffic. The proposal was canceled in 2008, despite support from environmental groups and the governor's office, because of strong opposition from residents in Brooklyn and Queens (on Long Island), who would have had to pay a toll to drive into and out of Manhattan.

A major criticism stemmed from the plan's assumption that more riders could use mass transit. New York City Transit, after doing an analysis of each subway route, revealed that many subway routes were already used to capacity, and that the tracks allowed no room to add more trains. Promoters of this mechanism argued that the system could generate much-needed funds for MTA Capital Construction projects such as the Second Avenue Subway, 7 Subway Extension, and East Side Access.
Climate change mitigation
In 2007, the city aimed to reduce greenhouse gas emissions by 30 percent of the 2005 levels by 2030. Emissions were reduced by 13 percent between 2007 and 2011. This was attributed to a 26 percent decrease in the carbon intensity of the city's electrical supply during this period as a result of more efficient power plants and increased use of renewable energy. Con Edison also stepped in to curb the threat of fugitive sulfur hexafluoride leakage in its electricity transmission and distribution system, which further lowered emissions by 3 percent.

Mitigation efforts included switching fuel sources to cleaner energy. A decrease in demand for energy consumption, new solid waste management strategies, and more sustainable transportation systems were projected to result in a 30 percent decrease in greenhouse gas emissions for the city.

In 2011, the Department of Environmental Protection (DEP) enforced its Climate Change Program Assessment and Action Plan by researching the potential effects of climate change on the city's water supply. Areas projected to be affected were determined by the DEP's climate change impact scenarios. Funded projects included the Croton Water Filtration Plant, which opened in 2015 to filter sediments entering the water supply after storms, and the renovation of the Delaware Aqueduct. The DEP took action on its own projects, such as improving the sewage system by developing a new stormwater drainage strategy focused on areas threatened by flooding and sewer backups and overflows. There was an overall emphasis on maximizing synergy and minimizing tradeoffs among energy, air, water, land, and climate policies.
Support
PlaNYC was supported by Campaign for New York's Future, a coalition of civic, business, environmental, labor, community and public health organizations.
Sustainable Energy Property Tracking System
According to a study by the mayor's office, the city's municipal buildings accounted for nearly 3.8 million metric tons of greenhouse gas emissions each year and utilized 6.5 percent of the city's energy. Energy consumption in NYC municipal buildings cost nearly $1 billion each year, and those buildings accounted for about 64 percent of the city government's greenhouse gas emissions. One of the main goals of Mayor Bloomberg's PlaNYC was to reduce greenhouse gas emissions by 30 percent by 2017.

In order to meet this goal, the government of New York City signed an agreement worth more than ten million dollars with TRIRIGA, an integrated workplace management system and environmental sustainability software provider that was later acquired by IBM, through which the city would deploy TRIRIGA's environmental and energy management software across more than 4,000 government buildings throughout the city.

New York City used performance data from the IBM TRIRIGA system to provide the city with the critical analysis required to implement carbon reduction strategies and to inform the project selection process for PlaNYC-funded retrofit projects.
Energy and water usage were measured and entered into the Sustainable Energy Property Tracking System (SEPTS) to help identify resource-intensive facilities and prioritize energy efficiency investment decisions.
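As a rough illustration of the kind of analysis such a tracking system supports – not the actual SEPTS or IBM TRIRIGA data model, whose field names and interfaces are not described here – utility readings can be normalized by floor area and ranked to flag the most resource-intensive facilities:

```python
# Illustrative sketch only: normalizing and ranking utility readings to flag
# resource-intensive buildings. All names and figures below are hypothetical.
from dataclasses import dataclass

@dataclass
class BuildingUsage:
    name: str
    floor_area_sqft: float
    annual_energy_kwh: float
    annual_water_gal: float

    @property
    def energy_intensity(self) -> float:
        """Annual energy use per square foot (kWh/sqft)."""
        return self.annual_energy_kwh / self.floor_area_sqft

def rank_by_energy_intensity(buildings):
    """Return buildings sorted from most to least energy-intensive."""
    return sorted(buildings, key=lambda b: b.energy_intensity, reverse=True)

if __name__ == "__main__":
    portfolio = [
        BuildingUsage("Office A", 120_000, 2_400_000, 1_500_000),
        BuildingUsage("Library B", 45_000, 1_350_000, 400_000),
        BuildingUsage("Depot C", 200_000, 2_000_000, 900_000),
    ]
    for b in rank_by_energy_intensity(portfolio):
        print(f"{b.name}: {b.energy_intensity:.1f} kWh/sqft")
```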
References
External links
PlaNYC: A Greater, Greener New York
Mayor Michael Bloomberg's homepage
Apple Wants to Take Bite Out of Big Apple, Claims City's Environmental Logo Infringes on Trademark, Natalie Zmuda, Advertising Age, April 3, 2008
The Process Behind PlaNYC: How the City of New York Developed Its Comprehensive Long-Term Sustainability Plan |
built environment | The term built environment refers to human-made conditions and is often used in architecture, landscape architecture, urban planning, public health, sociology, and anthropology, among others. These curated spaces provide the setting for human activity and were created to fulfill human desires and needs. The term can refer to a plethora of components including the traditionally associated buildings, cities, public infrastructure, transportation, open space, as well as more conceptual components like farmlands, dammed rivers, wildlife management, and even domesticated animals. The built environment is made up of physical features. However, when studied, the built environment often highlights the connection between physical space and social consequences. It impacts the environment and how society physically maneuvers and functions, as well as less tangible aspects of society such as socioeconomic inequity and health. Various aspects of the built environment contribute to scholarship on housing and segregation, physical activity, food access, climate change, and environmental racism.
Features
There are multiple different components that make up the built environment. Below are some prominent examples of what makes up the urban fabric:
Buildings
Buildings are used for a multitude of purposes: residential, commercial, community, institutional, and governmental. Building interiors are often designed to mediate external factors and provide space to conduct activities, whether that is to sleep, eat, work, etc. The structure of the building helps define the space around it, giving form to how individuals move through the space around the building.
Public infrastructure
Public infrastructure covers a variety of things like roads, highways, pedestrian circulation, public transportation, and parks.
Roads and highways are an important feature of the built environment that enable vehicles to access a wide range of urban and non-urban spaces. They are often compared to veins within a cardiovascular system in that they circulate people and materials throughout a city, similar to how veins distribute energy and materials to the cells. Pedestrian circulation is vital for the walkability of a city and general access on a human scale. The quality of sidewalks and walkways has an impact on safety and accessibility for those using these spaces. Public transportation is essential in urban areas, particularly in cities and areas that have a diverse population and income range.
Agriculture
Agricultural production accounts for roughly 52% of U.S. land use. Not only does population growth cause an expansion of cities, it also necessitates more agriculture to accommodate the demand for food for an expanding population.
History
The term built environment was coined in the 1980s and became widespread in the 1990s; it places the concept in direct contrast to the supposedly "unbuilt" environment. The term describes a wide range of fields that form an interdisciplinary concept, one that has been accepted as an idea since classical antiquity and potentially before. The study of anthropology has made it possible to examine how the built environment progressed into what it is today. When people are able to travel outside of urban centers and areas where the built environment is already prominent, they push the boundaries of that built environment into new areas. While there are other factors that influence the built environment, such as advancements in architecture or agriculture, transportation allowed for the spread and expansion of the built environment.
Pre-industrial Revolution
Agriculture, the cultivation of soil to grow crops and animals to provide food as well as products, was first developed about 12,000 years ago. This switch, also called the Neolithic Revolution, was the beginning of favoring permanent settlements and altering the land to grow crops and farm animals. This can be thought of as the start of the built environment, the first attempt to make permanent changes to the surrounding environment for human needs. The first cities appeared around 7500 BCE, dotted along where land was fertile and good for agricultural use. In these early communities, a priority was to ensure basic needs were being met. The built environment, while not as extensive as it is today, was beginning to be cultivated with the implementation of buildings, paths, farmland, and the domestication of animals and plants. Over the next several thousand years, these smaller cities and villages grew into larger ones where trade, culture, education, and economics were driving factors. As cities began to grow, they needed to accommodate more people, and they shifted from focusing on meeting survival needs to prioritizing comfort and desires – there are still many individuals today who do not have their basic needs met, and this idea of a shift is framed within the evolution of society. This shift caused the built aspect of these cities to grow and expand to meet the growing population's needs.
Industrial Revolution
The pinnacle of city growth was during the Industrial Revolution, due to the demand for jobs created by the rise in factories. Cities rapidly grew from the 1880s to the early 1900s within the United States. This demand led individuals to move from farms to cities, which resulted in the need to expand city infrastructure and created a boom in population size. This rapid growth in city populations led to issues of noise, sanitation, health problems, traffic jams, pollution, compact living quarters, etc. In response to these issues, mass transit – trolleys, cable cars, and subways – was built and prioritized in an effort to improve the quality of the built environment. An example of this during the Industrial Revolution was the City Beautiful movement, which emerged in the 1890s as a result of the disorder and unhealthy living conditions within industrial cities. The movement promoted improved circulation, civic centers, better sanitation, and public spaces. With these improvements, the goal was to improve the quality of life for those living in cities, as well as make them more profitable. The City Beautiful movement, though it declined in popularity over the years, provided a range of urban reforms, highlighting city planning, civic education, public transportation, and municipal housekeeping.
Post Industrial Revolution to present
Cars, as well as trains, became more accessible to the general public due to advancements in steel, chemical, and fuel production. In the 1920s, cars became more affordable for the general public due to Henry Ford's advances in assembly line production. With this new burst of personal transportation, new infrastructure was built to accommodate it. Freeways were first built in 1956 in an attempt to eliminate unsafe roads, traffic jams, and insufficient routes. The creation of freeways and interstate transportation systems opened up the possibility and ease of transportation outside a person's city. This allowed an ease of travel not previously found and changed the fabric of the built environment. New streets were built within cities to accommodate cars as they became increasingly popular, and railway lines were built to connect areas not previously connected, for both public transportation and goods transportation. With these changes, the scope of a city began to expand outside its borders. The widespread use of cars and public transportation allowed for the development of suburbs; the working individual was able to commute long distances to work every day. Suburbs blurred the line of city "borders": day-to-day life that may originally have been confined to a pedestrian radius now encompassed a wide range of distances due to the use of cars and public transportation. This increased accessibility allowed for the continued expansion of the built environment.
Currently, the built environment is typically used to describe the interdisciplinary field that encompasses the design, construction, management, and use of human-made physical influence as an interrelated whole. The concept also includes the relationship of these elements of the built environment with human activities over time—rather than a particular element in isolation or at a single moment in time, these aspects act together via the multiplier effect. The field today draws upon areas such as economics, law, public policy, sociology, anthropology, public health, management, geography, design, engineering, technology, and environmental sustainability to create a large umbrella that is the built environment.

There are some in modern academia who look at the built environment as all-encompassing, arguing that there is no natural environment left. This argument comes from the idea that the built environment not only refers to that which is built, arranged, or curated, but also to what is managed, controlled, or allowed to continue. What is referred to as "nature" today can be seen as only a commodity that is placed into an environment constructed to fulfill human will and desire. This commodity allows humans to enjoy the view and experience of nature without it inconveniencing their day-to-day life. It can be argued that the forests and wildlife parks that are held on a pedestal, and are seemingly natural, are in reality curated and allowed to exist for the enjoyment of the human experience. The planet has been irrevocably changed by human interaction: wildlife has been hunted, harvested, brought to the brink of extinction, and modified to fit human needs, and the list goes on. This argument juxtaposes the argument that the built environment is only what is built, and that the forests, oceans, wildlife, and other aspects of nature are their own entity.
Impact
The term built environment encompasses a broad range of categories, all of which have potential impacts. When looking at these potential impacts, the environment, as well as people, are heavily affected.
Health
The built environment can heavily impact the public's health. Historically, unsanitary conditions and overcrowding within cities and urban environments have led to infectious diseases and other health threats. Dating back to Georges-Eugene Haussmann's comprehensive plans for urban Paris in the 1850s, concern for lack of air-flow and sanitary living conditions has inspired many strong city planning efforts. During the 19th century in particular, the connection between the built environment and public health became more apparent as life expectancy decreased and diseases, as well as epidemics, increased. Today, the built environment can expose individuals to pollutants or toxins that cause chronic diseases like asthma, diabetes, and coronary vascular disease along with many others. There is evidence to suggest that chronic disease can be reduced through healthy behaviors like a proper active lifestyle, good nutrition, and reduced exposure to toxins and pollutants. Yet, the built environment is not always designed to facilitate those healthy behaviors. Many urban environments, in particular suburbs, are automobile reliant making it difficult or unreasonable to walk or bike places. This condition not only adds to pollution, but can also make it hard to maintain a proper active lifestyle. Public health research has expanded the list of concerns associated with the built environment to include healthy food access, community gardens, mental health, physical health, walkability, and cycling mobility. Designing areas of cities with good public health is linked to creating opportunities for physical activity, community involvement, and equal opportunity within the built environment. Urban forms that encourage physical activity and provide adequate public resources for involvement and upward mobility are proven to have far healthier populations than those that discourage such uses of the built environment.
Social
Housing and segregation
Features in the built environment present physical barriers which constitute the boundaries between neighborhoods. Roads and railways, for instance, play a large role in how people can feasibly navigate their environment. This can result in the isolation of certain communities from various resources and from each other. The placement of roads, highways, and sidewalks also determines what access people have to jobs and childcare close to home, especially in areas where most people do not own vehicles. Walkability directly influences community, so the way a neighborhood is built affects the outcomes and opportunities of the community that lives there. Even less physically imposing features, such as architectural design, can distinguish the boundaries between communities and decrease movement across neighborhood lines.

The segregation of communities is significant because the qualities of any given space directly impact the wellbeing of the people who live and work there. George Galster and Patrick Sharkey refer to this variation in geographic context as "spatial opportunity structure", and claim that the built environment influences socioeconomic outcomes and general welfare. For instance, the history of redlining and housing segregation means that there is less green space in many Black and Hispanic neighborhoods. Access to parks and green space has been proven to be good for mental health which puts these communities at a disadvantage. The historical segregation has contributed to environmental injustice, as these neighborhoods suffer from hotter summers since urban asphalt absorbs more heat than trees and grass. The effects of spatial segregation initiatives in the built environment, such as redlining in the 1930s and 1940s, are long lasting. The inability to feasibly move from forcibly economically depressed areas into more prosperous ones creates fiscal disadvantages that are passed down generationally. With proper public education access tied to the economic prosperity of a neighborhood, many formerly redlined areas continue to lack educational opportunities for residents and, thus, job and higher-income opportunities are limited.
Environmental
The built environment has a multitude of impacts on the planet, some of the most prominent effects are greenhouse gas emissions and Urban Heat Island Effect.
The built environment expands along with factors like population and consumption, which directly impact the output of greenhouse gases. As cities and urban areas grow, the need for transportation and structures grows as well. In 2006, transportation accounted for 28% of total greenhouse gas emissions in the U.S. A building's design, location, orientation, and construction process heavily influence greenhouse gas emissions. Commercial, industrial, and residential buildings account for roughly 43% of U.S. CO2 emissions in energy usage. In 2005, agricultural land use accounted for 10–12% of total human-caused greenhouse gas emissions worldwide.

Urban heat islands are pockets of higher-temperature areas, typically within cities, that affect the environment as well as quality of life. Urban heat islands are caused by the reduction of natural landscape in favor of urban materials like asphalt, concrete, brick, etc. This change from natural landscape to urban materials is the epitome of the built environment and its expansion.
See also
References
Further reading
Jackson, Richard J.; Dannenberg, Andrew L.; Frumkin, Howard (2013). "Health and the Built Environment: 10 Years After". American Journal of Public Health. 103 (9): 1542–1544. doi:10.2105/ajph.2013.301482. PMC 3780695. PMID 23865699.
Leyden, Kevin M (2003). "Social Capital and the Built Environment: The Importance of Walkable Neighborhoods" (PDF). American Journal of Public Health. 93 (9): 1546–1551. doi:10.2105/ajph.93.9.1546. PMC 1448008. PMID 12948978. Archived from the original (PDF) on 2017-10-18. Retrieved 2014-02-26.
Jeb Brugmann, Welcome to the urban revolution: how cities are changing the world, Bloomsbury Press, 2009
Jane Jacobs, The Death and Life of Great American Cities, Random House, New York, 1961
Andrew Knight & Les Ruddock, Advanced Research Methods in the Built Environment, Wiley-Blackwell 2008
Paul Chynoweth, The Built Environment Interdiscipline: A Theoretical Model for Decision Makers in Research and Teaching, Proceedings of the CIB Working Commission (W089) Building Education and Research Conference, Kowloon Shangri-La Hotel, Hong Kong, 10–13 April 2006.
Richard J. Jackson with Stacy Sinclair, Designing Healthy Communities, Jossey-Bass, San Francisco, 2012
Russell P. Lopez, The Built Environment and Public Health, Jossey-Bass, San Francisco, 2012
External links
Australian Sustainable Built Environment Council (ASBEC)
Faculty of Built Environment, UTM, Skudai, Johor, Malaysia
Designing Healthy Communities, link to nonprofit organization and public television documentary of same name
The Built Environment and Health: 11 Profiles of Neighborhood Transformation |
energy performance certificate (united kingdom) | Energy performance certificates (EPCs) are a rating scheme to summarise the energy efficiency of buildings. The building is given a rating between A (very efficient) and G (inefficient). The EPC will also include tips about the most cost-effective ways to improve the home's energy rating. Energy performance certificates are used in many countries.
Legislative history
EPCs are administered and regulated for separately in (a) England and Wales, (b) Scotland and (c) Northern Ireland.
EPCs were introduced in England and Wales on 1 August 2007 as part of Home Information Packs (HIPs) for domestic properties with four or more bedrooms. Over time this requirement was extended to smaller properties. When the requirement for HIPs was removed in May 2010, the requirement for EPCs continued. For rental properties, an EPC – which is valid for 10 years – became required for any new tenancy commencing on or after 1 October 2008.

The legislative basis for EPCs in the UK is European Union Directive 2010/31/EU as transposed into UK law by:
The Energy Performance of Buildings (England and Wales) Regulations 2012 (S.I. 2012/3318) (as amended), in relation to England and Wales,
The Energy Performance of Buildings (Scotland) Regulations 2008 (S.S.I 2008/309) (as amended), in relation to Scotland,
The Energy Performance of Buildings (Certificates and Inspections) Regulations (Northern Ireland) 2008 (S.I. 2008/170) (as amended), in relation to Northern Ireland.
Procedure
The energy assessment needed to produce an EPC is performed by a qualified and accredited energy assessor who visits the property and examines key items such as cavity wall, floor and loft insulation, the domestic boiler, hot water tank, radiators, heating controls, windows (for double glazing), and so on. They then input the observations into a software program which performs the calculation of energy efficiency. The program gives a single number for the rating of energy efficiency, and a recommended value of the potential for improvement. There are similar figures for environmental impact. A table of estimated annual energy bills (and the potential for improvement) is also presented, but without any reference to householder bills. The householder will have to pay for the survey, which costs around £75–£100 for a four-bedroom house. The exercise is entirely non-invasive, so the software will make assumptions on the insulation properties of various elements of the property based on age and construction type. The assessor has the ability to override these assumptions if visual or written evidence is provided to support the presence of insulation which may have been subsequently installed.
Domestic EPCs
The calculation of the energy rating on the EPC is based on the Standard Assessment Procedure (SAP). Existing dwellings are assessed using Reduced Data SAP (RdSAP), a simplified version of the SAP methodology that requires fewer data inputs. SAP and RdSAP are derived from the UK Building Research Establishment's Domestic Energy Model (BREDEM), which was originally developed in the 1980s and also underlies the NHER Rating. EPCs are produced by domestic energy assessors who are registered under an approved certification scheme.
Property details
The certificate contains the following property details:
property address
property type (for example detached house)
date of inspection
certificate date and serial number
total floor area

The total floor area is the area contained within the external walls of the property. The figure includes internal walls, stairwells and the like, but excludes garages, porches, areas less than 1.5 metres (4 ft 11 in) high, balconies and any similar area that is not an internal part of the dwelling.
The A to G scale
Energy performance certificates present the energy efficiency of dwellings on a scale of A to G. The most efficient homes – which should have the lowest fuel bills – are in band A. The certificate uses the same scale to define the impact a home has on the environment. Better-rated homes should have less impact through carbon dioxide (CO2) emissions. The average property in the UK is in band D.
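As an illustration of how a numerical SAP score maps onto the A to G bands, the sketch below uses the band boundaries commonly cited for domestic EPCs in Great Britain; the exact thresholds are an assumption here and should be checked against the current SAP documentation rather than taken from this example.

```python
# Minimal sketch: mapping a SAP score (1-100) to an EPC band.
# The boundaries below are the commonly cited ones and are an assumption,
# not an authoritative statement of the SAP specification.
def epc_band(sap_score: float) -> str:
    bands = [
        (92, "A"),  # 92-100
        (81, "B"),  # 81-91
        (69, "C"),  # 69-80
        (55, "D"),  # 55-68
        (39, "E"),  # 39-54
        (21, "F"),  # 21-38
    ]
    for lower, band in bands:
        if sap_score >= lower:
            return band
    return "G"      # 1-20

print(epc_band(60))  # -> "D", the average band for a UK property
```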
Domestic RHI yardstick
The EPC will become more significant from April 2014 when Domestic Renewable Heat Incentives (RHI) become available. The amount of the deemed expected annual heat use for a domestic property can be obtained from the EPC and this will determine the amount of Domestic RHI which is payable on installing renewable heat options like ground source heat pumps and solar thermal collectors.
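As a simplified sketch of how the deemed figure feeds into the payment, the annual Domestic RHI amount is broadly the deemed annual heat demand taken from the EPC multiplied by the applicable tariff; the tariff value below is a placeholder rather than an actual Ofgem rate, and eligibility rules, metering adjustments and payment scheduling are omitted.

```python
# Illustrative sketch only: the Domestic RHI payment is driven by the deemed
# annual heat demand taken from the EPC. The tariff below is a hypothetical
# placeholder, not an actual published rate.
def annual_rhi_payment(deemed_heat_kwh: float, tariff_pence_per_kwh: float) -> float:
    """Rough annual payment in pounds: deemed heat demand times tariff."""
    return deemed_heat_kwh * tariff_pence_per_kwh / 100.0

# Example: a deemed demand of 15,000 kWh at a hypothetical 7.3 p/kWh
print(f"£{annual_rhi_payment(15_000, 7.3):,.2f} per year")
```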
EPC recommendations
The certificate includes recommendations on ways to improve the home's energy efficiency to save money. The accuracy of the recommendations will depend on the inspection standards applied by the inspector, which may be variable. Inspectors, who may be Home Inspectors (HIs) or Domestic Energy Assessors (DEAs), are audited by their accreditation bodies in order to maintain standards. The recommendations appear general in tone, but are in fact bespoke to the property in question. The logic by which the RDSAP program makes its recommendations was developed as part of a project to create the RDSAP methodology, which took place during the early years of the 21st century. The EU directive requires the EPC recommendations to be cost effective in improving the energy efficiency of the home, but in addition to presenting the most cost effective options, more expensive options which are less cost effective are also presented. To distinguish them from the more cost effective measures, these are shown in a section described as 'further measures'. Because the EPC is designed to be produced at change of occupancy, it must be relevant to any occupier and it therefore must make no allowance for the particular preferences of the current occupier.
Exempt properties
Properties exempt from the Housing Act 2004 are:
Non-residential, such as offices, shops, warehouses.
Mixed use, a dwelling house which is part of a business (farm, shop, petrol station)
Unsafe properties, a property that poses a serious health and safety risk to occupants or visitors
Properties to be demolished, where the property is being marketed as due for demolition and all the relevant documents and planning permission exist.
Listed buildings (Recast of EPC requirements from 9 January 2013)*
Stand alone buildings of less than 50m2 (Recast of EPC requirements from 9 January 2013)
Buildings of religion or worship (Recast of EPC requirements from 9 January 2013)
Residential buildings with use of less than 4 months per year (Recast of EPC requirements from 9 January 2013)

The possible exemption of listed buildings has always been a contentious issue. As a devolved issue, no exemption of listed buildings exists under the Scottish Regulations. In England & Wales, listed buildings are only exempt "...in so far as compliance with certain minimum energy performance requirements would unacceptably alter their character or appearance." The only way to determine whether an EPC will have recommendations that would unacceptably alter the appearance or character of the listed dwelling is to lodge an EPC and find out.
Non-domestic energy performance certificates
In addition to the requirements in relation to dwellings, there is also a requirement, from 6 April 2008, for EPCs on the sale, rent or construction of buildings other than dwellings with a floor area greater than 50m2 that contain fixed services that condition the interior environment.

Properties that are exempt from requiring a domestic EPC will generally require a non-domestic energy performance certificate, which was also required by the Energy Performance of Buildings Directive. Non-dwellings are "responsible for almost 20 per cent of the UK’s energy consumption and carbon emissions."

All non-domestic EPCs must be carried out by, or under the direct supervision of, a trained non-domestic energy assessor registered with an approved accreditation body.
The Department for Levelling Up, Housing and Communities (DLUHC), formerly the Ministry of Housing, Communities and Local Government (MHCLG), has arranged for a publicly accessible central register.
There are three levels of building: Level 3, Level 4 and Level 5. The complexity of the building and the services it uses determine which level it falls under. The commercial energy assessor must be qualified to the level of the building to carry out the inspection.
From October 2008 all buildings – including factories, offices, retail premises and public sector buildings – must have an EPC whenever the building is sold, built or rented. Public buildings in England and Wales (but not Scotland) also require a display energy certificate showing actual energy use, and not just the theoretical energy rating. From January 2009 inspections for air conditioning systems will be introduced.
The A to G scale for non-domestic EPCs
The A to G scale is a linear scale based on two key points defined as follows:
The zero point on the scale is defined as the performance of the building that has zero net annual CO2 emissions associated with the use of the fixed building services as defined in the Building Regulations. This is equivalent to a Building Emissions Rate (BER) of zero.
The border between grade B and grade C is set at the Standard Emissions Rate (SER)† and given an Asset Rating of 50. Because the scale is linear, the boundary between grade D and grade E corresponds to a rating of 100.
†This is based on the actual building dimensions but with standard assumptions for fabric, glazing and building services.
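Because the scale is anchored at these two points (a rating of 0 at zero emissions and 50 at the SER), the Asset Rating is a simple linear function of the Building Emission Rate. The Python sketch below illustrates the relationship; the 25-point letter bands are an assumption inferred from the two defined points, not a statutory definition.

```python
def asset_rating(ber: float, ser: float) -> float:
    """Asset Rating on the non-domestic A-G scale.

    The scale is linear: 0 corresponds to zero net CO2 emissions
    (BER = 0) and 50 corresponds to a BER equal to the Standard
    Emissions Rate (SER).
    """
    if ser <= 0:
        raise ValueError("SER must be positive")
    return 50.0 * ber / ser


def band(rating: float) -> str:
    """Map a rating to a letter grade.

    Assumes 25-point bands (A <= 25, B <= 50, ...), which is consistent
    with the B/C boundary at 50 and the D/E boundary at 100 but is an
    illustrative assumption rather than the regulatory definition.
    """
    for letter, upper in zip("ABCDEF", (25, 50, 75, 100, 125, 150)):
        if rating <= upper:
            return letter
    return "G"


# Example: a building emitting 80% of its standard emissions rate
r = asset_rating(ber=40.0, ser=50.0)  # -> 40.0
print(r, band(r))                     # -> 40.0 B
```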
Display energy certificates
Display energy certificates (DECs) show the actual energy usage of a building, the Operational Rating, and help the public see the energy efficiency of a building. This is based on the energy consumption of the building as recorded by gas, electricity and other meters. The DEC should be clearly displayed at all times and clearly visible to the public. A DEC is always accompanied by an Advisory Report that lists cost effective measures to improve the energy rating of the building.
Display energy certificates are only required for buildings with a total useful floor area over 500m2 that are occupied by a public authority or an institution providing a public service to a large number of persons and therefore visited by those persons. The useful floor area threshold was reduced to 250m2 in July 2015.
Where the building has a total useful floor area of more than 1,000m2, the DEC is valid for 12 months. The accompanying advisory report is valid for seven years. Where the building has a total useful floor area of between 500m2 and 1,000m2, the DEC and advisory report are valid for 10 years.
However, to make it easier for public authorities with multiple buildings on one site to comply with the legislation, a site-based approach for the first year (to October 2009) is allowed where it is not possible to produce individual DECs. This means that only one DEC will need to be produced based on the total energy consumption of the buildings on the site. Public bodies most affected by this relaxation are NHS Trusts, universities and schools.
The requirement for display energy certificates came into effect from 1 October 2008. They were trialled in the UK under an EU-funded project also called "Display" and co-ordinated by Energie-Cités; participants included Durham County Council and the Borough of Milton Keynes.
The A to G scale for DECs
This is the operational rating for this building. The rating shows the energy performance of the building as it is being used by the occupants, when compared to the performance of other buildings of the same type. A building with performance equal to one typical of its type would therefore have an Operational Rating of 100. A building that resulted in zero CO2 emissions would have an Operational Rating of zero, and a building that resulted in twice the typical CO2 emissions would have an Operational Rating of 200.
This rating indicates whether the building is being operated above or below the average performance for a building of this type.
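A minimal sketch of the Operational Rating calculation described above; the benchmark figure used in the example is hypothetical, not a published value.

```python
def operational_rating(actual_co2: float, typical_co2: float) -> float:
    """Operational Rating for a DEC.

    0 means zero CO2 emissions, 100 means emissions typical of the
    building type, and 200 means twice the typical emissions.
    """
    if typical_co2 <= 0:
        raise ValueError("typical emissions benchmark must be positive")
    return 100.0 * actual_co2 / typical_co2


# Hypothetical example: 90 tonnes CO2/year against a benchmark of 120
print(operational_rating(90.0, 120.0))  # -> 75.0
```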
Criticism
As a rating system, EPCs are known to have reliability issues. Non-domestic EPCs show a regression-to-the-mean effect in which low-rated properties are likely to obtain higher ratings on renewal, while high-rated ones tend to get lower ratings. This directly casts doubt on EPC-based policies such as Minimum Energy Efficiency Standards.
EPCs have gained some political controversy, partly reflecting the housing market crisis in the United Kingdom (2008).
Many in the housing industry, such as the Royal Institution of Chartered Surveyors, have criticised the introduction of EPCs, on the grounds of poor quality. Whilst critical, RICS still provided courses on domestic energy assessment, as well as courseware manuals for the professions of domestic energy assessors.
A further objection is often made concerning the quality of inspection made to produce the certificate. It cannot be invasive, so the inspector cannot drill walls or ceilings to determine the state or even existence of any insulation. The energy assessor can either assume the worst ('as built' to Building Regulations for the dwelling's age) or rely on the householder to produce documentary evidence on what may have been installed. This can produce uncertainty about the validity of the output from the assessor's analysis.
Finally, EPCs pose particular problems for the owners of listed buildings, as improvements, such as double glazing, are often barred by the controls on changes to such structures, making it difficult to rectify low ratings.
See also
Energy efficiency in British housing
Energy policy of the United Kingdom
Energy policy of the European Union
Global warming
Energy White Paper
Domestic energy assessor
House energy rating
Home energy rating
References
External links
Buying or selling your home - Guide - Energy Performance Certificates on GOV.UK
Energy performance of buildings - Policy page on GOV.UK
Energy Performance Certificates guidance documents on GOV.UK
Factsheet on EPCs from the Royal Institution of Chartered Surveyors (RICS)
Historic England advice on EPCs
Retrieve the EPC certificate for a property by address
Commercial EPC Explained (Easy EPC)
Access requirements for an Energy Performance Certificate Survey (Ozone Group)
Domestic Energy Performance Certificate Explained (Ozone Group) |
biofuel in sweden | Biofuels are renewable fuels that are produced by living organisms (biomass). Biofuels can be solid, gaseous or liquid; liquid biofuels come in two main forms, ethanol and biodiesel, and often replace fossil fuels. Many countries now use biofuels as energy sources, including Sweden. Sweden has one of the highest usages of biofuel in all of Europe, at 32%, primarily due to the widespread commitment to E85, bioheating and bioelectricity.
Sweden's energy usage is divided into three sectors (housing and services, industry, and transport) and energy is used in three different ways: to produce heating, electricity and vehicle fuels. In 2014 Sweden used 555 TWh of energy, 130 TWh of which came from biofuels. Increased biofuel usage is the main reason why Sweden has managed to decrease greenhouse gas emissions by 25% between 1990 and 2014.
History
Biofuel usage in Sweden has been increasing since 1970, growing from around 43 TWh in 1970 to 127 TWh in 2009. This increase is usually linked to the heavy expansion of biomass-based district heating in the 1980s. The oil crises of 1973 and 1979 stimulated the transition from oil to other energy sources such as peat and biomass. The Swedish government created financial incentives for transitioning from fossil fuels to biofuels. Simultaneously, the wood-related industries were growing, using increasing amounts of wood as fuel and providing black liquor to the industry as a by-product. Finally, taxation of other energy sources has contributed to the increase of biofuel usage, as taxes on fossil fuels have increased since 1990. One example is the carbon dioxide tax, introduced in 1991 as a measure to reduce Sweden's environmental impact. The same year, a three-party agreement was signed to invest 950 million Swedish krona in bioheat facilities.
Bioenergy Sources
Wood Fuels
The forestry sector, which uses various forms of wood, makes up 90% of the biomass in Sweden. This includes parts of trees that cannot be used for timber or paper production. Recycled wood is considered a biofuel too. The other 10% of biomass comes from waste, industry by-products, biogas and farmable fuels.
Waste
Only combustible waste, such as cardboard, plastic and biological material, is counted as biofuel. It can be burned to produce energy, while organic matter can be composted to produce rich soil or used for biogas production. Most of the waste in Sweden comes from households, while a smaller part is provided by industry. The majority of this bioenergy source goes to combustion plants and is used for district heating. A smaller part, only sorted organic matter, is decomposed in anaerobic or aerobic conditions to produce biogas or compost. Biological waste is primarily the responsibility of the local municipality. On 1 January 2002, Sweden enforced a law making the sorting of combustible waste mandatory. It became illegal to deposit unsorted combustible waste together with materials that cannot be burned.
Industrial By-Products
Many wood-related industries create by-products. The forest industry uses its own residual products, such as chips and bark. The pulp industry burns its by-products, one of which is black liquor, to create steam used to bleach paper and to produce electricity.
Biogas
Biogas, like natural gas, mostly consists of methane. It is produced in anaerobic conditions when organic material is broken down by microorganisms. Food waste from households, restaurants and the food industry, waste from agriculture, and sewage material are used in the production. In 2015, Sweden produced over 1.9 TWh of biogas in 282 production plants. The majority of biogas is used as vehicle fuel. The upgraded biogas is pumped into existing gas networks in the areas of Bjuv, Falkenberg, Göteborg, Helsingborg, Laholm, Lidingö, Lund, Malmö, Stockholm and Trelleborg.
Farmable Fuels
Farmable fuels are intentionally grown plants, in most cases monocultures. Farmland is used to grow fast-growing crops, such as flax and hemp, and forests, mostly salix, which can be burned or used to produce biogas, ethanol, biodiesel or other types of biofuel.
Types of Biofuel
Three different types of energy carriers can be produced from biomass: solid fuels (wood, briquettes, pellets, charcoal etc.), liquid fuels (methanol, ethanol, synthetic gasoline, biodiesel) and gases (biogas, hydrogen, syngas). Solid fuels can be made with a high energy density, which is why Sweden produces biofuels mostly in solid form.
Uses of Biofuel
Bioheat
Producing heat is the most common use of biomass in Sweden; more than half of indoor heating is produced this way. Biomass is used both in large-scale district heating and in small-scale direct use in boilers; about 90% of Swedish houses are heated by district heating. This type of heating can be combined with electricity production to optimize energy use. Even though all types of biomass are used for bioheating, this sector is dominated by wood fuels.
Biopower
Biopower is the electricity produced from biomass. It is the fourth largest electricity source in Sweden, corresponding to 7% of electricity production. In 2016 there were 209 electricity plants and 20 more were under construction. On average, biopower production in Sweden is estimated at 3,900 hours of the total 8,760 hours per year, with a potential to increase to 8,000 hours, corresponding to 35 TWh.
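Reading the figures above as full-load hours (an interpretation, since the source does not define the term), annual generation is simply capacity multiplied by hours, which allows a quick consistency check. The numbers below are derived illustrations, not reported values.

```python
# If 8,000 full-load hours are to correspond to about 35 TWh per year,
# the implied installed biopower capacity would be roughly:
target_energy_twh = 35.0
full_load_hours = 8_000.0

implied_capacity_gw = target_energy_twh * 1_000 / full_load_hours  # TWh -> GWh
print(f"{implied_capacity_gw:.1f} GW")  # ~4.4 GW (illustrative)

# At the reported ~3,900 full-load hours, the same capacity would yield:
current_twh = implied_capacity_gw * 3_900.0 / 1_000
print(f"{current_twh:.1f} TWh")  # ~17 TWh (illustrative)
```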
Transportation
Sweden has committed to using more cars that run on biofuels, largely because the Swedish parliament has decided that Sweden should have a fossil-free vehicle fleet. However, it was reported in 2010 that only 4% of biofuels were being used within Sweden for transportation purposes. As people buy cars that run on ethanol, more E85 stations are being utilized, and by replacing petrol and diesel cars, less fossil fuel is used, which helps the environment. The goal is now to reduce emissions from transport by 70% by 2030 and then to switch completely to fossil-free traffic. According to Svebio.se, this can be accomplished with a combination of improved efficiency, electrification and fuel switching from fossil fuels to biofuels.
History of Transportation
Ethanol-powered ED95 buses were introduced in 1986 on a trial basis as the fuel for two buses in Örnsköldsvik, and by 1989 30 ethanol-operated buses were in service in Stockholm. SEKAB provided the fuel, called ED95, which consists of a blend of 95% ethanol and 5% ignition improver and is used in modified diesel engines where high compression ignites the fuel. Other countries now have this technology on trial under the auspices of the BioEthanol for Sustainable Transport (BEST) project, which is coordinated by the city of Stockholm. Flexible-fuel vehicles were introduced in Sweden as a demonstration test in 1994, when three Ford Taurus were imported to show that the technology existed. Because of the existing interest, a project was started in 1995 with 50 Ford Taurus E85 flexifuel cars in different parts of Sweden: Umeå, Örnsköldsvik, Härnösand, Stockholm, Karlstad, Linköping, and Växjö. Between 1997 and 1998 an additional 300 Taurus were imported, and the number of E85 fueling stations grew to 40. Then in 1998 the city of Stockholm placed an order for 2,000 FFVs from any car manufacturer willing to produce them. The objective was to jump-start the FFV industry in Sweden. The two domestic car makers Volvo Group and Saab AB refused to participate, arguing that no ethanol filling stations were in place. However, Ford Motor Company took the offer and began importing the flexifuel version of its Focus model, delivering the first cars in 2001 and selling more than 15,000 FFV Focus by 2005, then representing an 80% market share of the flexifuel market. In 2005 both Volvo and Saab introduced their flexifuel models to the Swedish market, and to the European market in the following years.
Flexible Vehicles
Sweden has achieved the largest E85 flexible-fuel vehicle fleet in Europe, with a sharp growth from 717 vehicles sold in 2001 to 243,136 by December 2014. Sweden also has the largest ethanol bus fleet in the world, with over 600 buses running on ED95, mainly in Stockholm. Dozens of municipalities have started producing biogas from sewage. At the end of 2009 there were 23,000 gas vehicles and 104 public filling stations. The recent and accelerated growth of the Swedish fleet of E85 flexifuel vehicles is the result of the National Climate Policy in Global Cooperation Bill passed in 2005, which not only ratified the Kyoto Protocol but also sought to meet the 2003 EU Biofuels Directive targets for the use of biofuels, and also led to the 2006 government commitment to eliminate oil imports by 2020, with the support of BIL Sweden, the national association for the automobile industry.
Current Situation
In 2004 the government passed a law requiring all larger Swedish fuel stations to provide an alternative fuel option. From 2009, smaller gas stations that sell more than 1,000 m3 of fuel per year have had to provide this as well. The lower cost of building a station for ethanol compared with a station for petroleum makes it very common to see gas stations that sell ethanol.
One fifth of cars in Stockholm can run on alternative fuels, mostly ethanol fuel. As of December 2007, carmakers that offer ethanol-powered vehicles in Sweden are SAAB, Volvo, VW, Koenigsegg, Skoda, SEAT, Citroen, Peugeot, Renault and Ford. By 2010, SL had one of the world's largest fleets of renewable-fuel buses: of the 2,000 buses operated, 400 were ethanol buses and 100 were biogas buses.
Sales of E85 fuel and E85 cars decreased sharply between 2014 and 2015 due to the lower price of fossil fuels. The Swedish government has decreased the biofuel tax.
Taxation and Policies
Taxation
In 1991 Sweden's energy tax system was modified, with the introduction of a carbon dioxide tax, a reduction in the general energy tax, a tax on sulphur emissions and various value-added taxes on electricity and fuels. The present tax structure comprises three elements: an energy tax, a carbon dioxide tax and a sulphur tax. The energy tax and carbon dioxide tax are not levied on biofuels. The Swedish government has a policy which aims at reducing the use of fossil fuels and promoting the use of renewable energy sources such as biofuels. This is done through taxation and administrative measures. The most important policy measure for biofuels in Sweden is that biofuels are exempt from energy taxes, environmental taxes and fees. The tax exemptions for biofuels have been extended until 2018. Besides these direct energy taxations, the production and use of biofuels is promoted in an indirect way by green taxes. These fiscal measures seem to have a very positive effect on the production and use of biofuels. There are no subsidies provided for the use of biofuels in Sweden.
Policies
Sweden's policies on biofuels are part of the more general policies on energy and the environment. The key actors in deciding policy are the Ministry of the Environment and the Ministry of Enterprise, Energy and Communication, as well as the Ministry of Education and Research. Policy support is provided by the Swedish Energy Agency and by Vinnova. Biofuels became prominent on the political agenda in Sweden after 2000. The Pump Act that came into force in 2005 also improved opportunities to fill up with E85. The law requires all filling stations that sell more than a certain amount of petrol and diesel to also supply a renewable fuel. In 2008 a government grant was provided to help meet the costs of infrastructure and other costs for biofuels other than ethanol.
Vehicle Fuel Taxation
In Sweden biofuels were exempted from both the CO2 and energy taxes until 2009, resulting in a 30% price reduction at the pump for E85 fuel over gasoline and 40% for biodiesel. Furthermore, other demand-side incentives for flexifuel vehicle owners include a SEK 10,000 (USD 1,300 as of May 2009) bonus to buyers of FFVs, exemption from the Stockholm congestion tax, up to 20% discount on auto insurance, free parking spaces in most of the largest cities, lower annual registration taxes, and a 20% tax reduction for flexifuel company cars. Also, as part of the program, the Swedish Government ruled that 25% of its vehicle purchases (excluding police, fire and ambulance vehicles) must be alternative fuel vehicles. By the first months of 2008, this package of incentives had resulted in sales of flexible-fuel cars representing 25% of new car sales. Since 2005, gasoline filling stations that sell more than 3 million liters of fuel a year are required to sell at least one type of biofuel, resulting in more than 1,200 gas stations selling E85 by August 2008. Despite the sharp growth of E85 flexifuel cars, by 2007 they represented just 2% of the 4-million-vehicle Swedish fleet. In addition, this law also mandated all new filling stations to offer alternative fuels, and stations with an annual volume of more than 1 million liters are required to have an alternative fuel pump by 31 December 2009. Therefore, the number of E85 pumps was expected to reach nearly 60% of Sweden's 4,000 filling stations by 2009.
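A minimal sketch encoding only the two volume thresholds cited above; the real Pump Act has further conditions and phase-in rules that are not modelled here, so this is illustrative only.

```python
def must_offer_renewable_fuel(annual_litres_sold: float, year: int) -> bool:
    """Simplified sketch of the Pump Act volume thresholds cited above.

    Stations selling more than 3 million litres a year have been covered
    since 2005; from the end of 2009 the threshold drops to 1 million
    litres. The actual regulation has further conditions (new stations,
    exemptions, phase-in dates) that are not modelled here.
    """
    threshold = 3_000_000 if year < 2010 else 1_000_000
    return annual_litres_sold > threshold


# Hypothetical examples
print(must_offer_renewable_fuel(2_500_000, 2008))  # False under the 3M threshold
print(must_offer_renewable_fuel(2_500_000, 2012))  # True under the 1M threshold
```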
Environmental Impacts
The unsustainable extraction of forest resources, such as wood fuel, may lead to forest degradation and permanent loss of biodiversity. Plantations can have both positive and negative impacts on biodiversity, depending on the change in land use. Further, even when traditional biomass is harvested sustainably, wood fuel use may not be carbon neutral due to incomplete combustion – the idealized fuel cycle in which all carbon is converted to carbon dioxide is unrealistic. Also, poorly conducted wood fuel harvesting can have significant effects on water quality and quantity, leading to increased soil erosion and run-off. However, while combustion of biogas, like natural gas, produces carbon dioxide (CO2), a greenhouse gas, the carbon in biogas comes from plant matter that fixed this carbon from atmospheric CO2. Thus, biogas production is carbon-neutral and does not add to greenhouse gas emissions. The assumption is that the replacement of fossil fuels with fuels generated from biomass would have significant and positive climate-change effects by generating lower levels of the greenhouse gases that contribute to global warming. Biofuels are only one component of a range of alternatives for mitigating greenhouse gas emissions. Greenhouse gas balances are not positive for all feedstocks. Investment should be directed towards crops that have the highest positive greenhouse gas balances with the lowest environmental and social costs.
Biofuel companies
Bioheat
Svensk Fjärrvärme is the main provider of district heating in Sweden.
Biopower
Electricity production in Sweden is highly localized, which increases supply reliability. To produce more environmentally friendly energy, Sweden and Norway share an electricity certificate system, which increases revenues from renewable electricity production.
Transportation
SEKAB is a major Nordic producer and importer of Bioethanol.
Chemrec develops black liquor gasification technology for second generation biofuels such as Biomethanol and BioDME. On January 26, 2011, the European Union's Directorate-General for Competition approved the Swedish Energy Agency's award of 500 million Swedish kronor (approx. €56M as at January 2011) toward the construction of a 3 billion Swedish kronor (approx. €335M) industrial scale experimental development biofuels plant at the Domsjö Fabriker biorefinery complex in Örnsköldsvik, Sweden, using Chemrec's black liquor gasification technology.
See also
Climate change in Sweden
Renewable energy in Sweden
Wind power in Sweden
Biofuel
District heating
Energy content of biofuel
Sustainable biofuel
List of renewable energy topics by country
Biofuels by region
Ethanol fuel by country
Flexible-fuel vehicle
BioEthanol for Sustainable Transport (BEST)
References
External links
United Nations Environment Programme, Towards Sustainable Production and Use of Resources: Assessing Biofuels, October 2009
World Bank, Biofuels: The Promise and the Risks. World Development Report 2008: Agriculture for Development |
sustainability at the university of british columbia | Sustainability at the University of British Columbia (UBC) is accomplished by integrating sustainability into the learning experience.
Climate change is now the most serious global environmental threat. Its potential impacts include global warming, sea level rise, and increased extreme weather events. Climate change is a direct consequence of elevated greenhouse gas concentrations in the atmosphere. Greenhouse gases are gases that trap heat in the atmosphere; examples include carbon dioxide, methane, and nitrous oxide. They are emitted from fossil fuel burning. Electricity production generates the largest share of greenhouse gas emissions, and other sources include transportation, industry, and agriculture.
Effects of greenhouse gases
These gases are said to make the planet warmer by "thickening the Earth’s blanket." This can cause the overall average annual temperature to increase. Moreover, global warming will decrease snow and glaciers, resulting in rising sea levels and increased coastal flooding. In addition, continued warming from the release of greenhouse gases into the atmosphere is expected to have substantial impacts on the economy, other environmental issues and human health. Warming is likely to worsen conditions for air quality and increase the risk of heat-related illnesses. In addition, the frequency and strength of extreme events such as floods and storms are likely to threaten safety and health. In turn, warming temperatures are likely to change water resources, which affects many areas, including energy production, human health, agriculture, and ecosystems.
Steam to hot water conversion action plan
In 2011, the University of British Columbia (UBC) launched one of the largest steam-to-hot-water conversion projects to replace UBC's old steam system. The new system will increase operational efficiency by reducing heat distribution losses. It will heat the campus while operating at a significantly lower average temperature of 80 °C rather than 190 °C, resulting in substantial energy and financial savings. The campus has used natural gas for heat since the 1960s. The new "neighborhood district energy system" will use high-efficiency natural gas boilers.
New system's purpose
The main purpose of the project is to reduce campus greenhouse gas emissions: the new hot water system will reduce UBC's district heating system energy use by 24 per cent and its greenhouse gas emissions by 22 per cent. The project is also intended to save money on operational and energy costs, saving an estimated 4 million a year, and to advance clean energy research and development opportunities. Moreover, it facilitates a long-term target of eliminating the use of fossil fuels on campus by 2050.
Progress
UBC's steam heating system pipes run underground. The hot water conversion will occur in nine different construction phases to minimize campus disruption; for example, phases two through seven will be completed from 2012 to 2015. The new system will be used to reduce natural gas consumption on campus.
See also
Efficient energy use
References
middle east | The Middle East (term originally coined in English [see § Terminology]) is a geopolitical region encompassing the Arabian Peninsula, the Levant, Turkey, Egypt, Iran, and Iraq. The term came into widespread usage as a replacement of the term Near East (as opposed to the Far East) beginning in the early 20th century. The term "Middle East" has led to some confusion over its changing definitions, and being seen as too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus, and additionally includes all of Egypt (not just the Sinai) and all of Turkey (not just the part barring East Thrace).
Most Middle Eastern countries (13 out of 18) are part of the Arab world. The most populous countries in the region are Egypt, Turkey, and Iran, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, with the geopolitical importance of the region being recognized for millennia. Several major religions have their origins in the Middle East, including Judaism, Christianity, and Islam. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Azeris, Copts, Jews, Assyrians, Iraqi Turkmen, Yazidis, and Greek Cypriots.
The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization (a label now applied to multiple regions of the world). Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum, with monarchs of the Arabian Peninsula in particular benefiting economically from petroleum exports. Because of the arid climate and heavy reliance on the fossil fuel industry, the Middle East is both a heavy contributor to climate change and a region expected to be severely negatively impacted by it.
Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan, and the "Greater Middle East", which additionally includes parts of East Africa, Mauritania, Afghanistan, Pakistan, and sometimes the South Caucasus and Central Asia.
Terminology
The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when American naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian Empires were vying for influence in Central Asia, a rivalry which would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East, and said that after Egypt's Suez Canal, it was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal.
The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf.
Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered around Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, and the Middle East then meant the area from Mesopotamia to Burma, namely the area between the Near East and the Far East. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States, with the Middle East Institute founded in Washington, D.C. in 1946, among other usage. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner.
While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the inclusion of an African country, Egypt, in the definition questions the usefulness of such terms.
Usage and criticism
The description Middle East has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the re-emerging countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history, where it describes an area identical to the term Middle East, which is not used by these disciplines (see Ancient Near East). The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. The term Middle East has also been criticised by journalist Louay Khraish and historian Hassan Hanafi for being a Eurocentric and colonialist term. The Associated Press Stylebook says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs:
Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred.
Translations
There are terms similar to Near East and Middle East in other European languages, but since it is a relative description, the meanings depend on the country and are different from the English terms generally. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning) and in Russian Ближний Восток or Blizhniy Vostok, Bulgarian Близкия Изток, Polish Bliski Wschód or Croatian Bliski istok (meaning Near East in all four Slavic languages) remains the only appropriate term for the region. However, some languages do have "Middle East" equivalents, such as the French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, and the Italian Medio Oriente. Perhaps because of the influence of the Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press, carrying the same meaning as the term "Middle East" in North American and Western European usage. The designation, Mashriq, also from the Arabic root for East, also denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, apart from Arabic, other languages of countries of the Middle East also use a translation of it. The Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), the Turkish is Orta Doğu and the Greek is Μέση Ανατολή (Mesi Anatoli).
Countries and territory
Countries and territory usually considered within the Middle East
Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory.
a. ^ Jerusalem is the proclaimed capital of Israel, which is disputed, and the actual location of the Knesset, Israeli Supreme Court, and other governmental institutions of Israel. Ramallah is the actual location of the government of Palestine, whereas the proclaimed capital of Palestine is East Jerusalem, which is disputed.
b. ^ Controlled by the Houthis due to the ongoing civil war. Seat of government moved to Aden.
Other definitions of the Middle East
Various concepts are often being paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. The Near East, Fertile Crescent, and Levant are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Due to it primarily being Arabic speaking, the Maghreb region of North Africa is sometimes included.
The countries of the South Caucasus – Armenia, Azerbaijan, and Georgia – are occasionally included in definitions of the Middle East. "Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century, to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included.
History
The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea. It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa.
Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup.
The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions.
From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million.
From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty.
The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the British Empire and their allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards.
In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries.
During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict particularly between Sunnis and Shiites.
Demographics
Ethnic groups
Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas. European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs.
Migration
"Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states."According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009 Arab countries received a total of US$35.1 billion in remittance in-flows and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best educated Somalis left for Middle Eastern countries as well as Europe and North America.
Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics.
A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks. Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians as well as many Mandeans have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979.
Religions
The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East, and they represent 40.5% of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions like the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects.
Languages
The six top languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Hebrew and Greek. Arabic and Hebrew represent the Afro-Asiatic language family. Persian, Kurdish and Greek belong to the Indo-European language family. Turkish belongs to the Turkic language family. About 20 minority languages are also spoken in the Middle East.
Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages. Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Other Semitic languages, such as Aramaic and its dialects, are spoken mainly by Assyrians and Mandaeans. There is also an Oasis Berber-speaking community in Egypt, where the language is also known as Siwa. It is a non-Semitic Afro-Asiatic language.
Persian is the second most spoken language. While it is primarily spoken in Iran and some border areas in neighbouring countries, the country is one of the region's largest and most populous. It belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semmani, Lurish, amongst many others.
The third-most widely spoken language, Turkish, is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas in neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran.
Hebrew is one of the two official languages of Israel, the other being Arabic. Hebrew is spoken and used by over 80% of Israel's population, the other 20% using Arabic.
Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second most spoken language there, after Turkish) and Egypt. During the antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East and until the Muslim expansion it was widely spoken there as well. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by the Turkish language as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior.
English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language, especially among the middle and upper classes, in countries such as Egypt, Jordan, Iran, Kurdistan, Iraq, Qatar, Bahrain, United Arab Emirates and Kuwait. It is also a main language in some Emirates of the United Arab Emirates. It is also spoken as native language by Jewish immigrants from Anglophone countries (UK, US, Australia) in Israel and understood widely as second language there.
French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria. Maltese, a Semitic language mainly spoken in Europe, is also used by the Franco-Maltese diaspora in Egypt. Also, due to widespread immigration of French Jews to Israel, it is the native language of approximately 200,000 Jews of Israel.
Armenian speakers are also to be found in the region. Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population, because of emigration in the late 1990s. Russian today is a popular unofficial language in use in Israel; news, radio and sign boards can be found in Russian around the country after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel who speak Hebrew and English as well.
The largest Romanian-speaking community in the Middle East is found in Israel, where as of 1995 Romanian was spoken by 5% of the population. Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants.
Economy
Middle Eastern economies range from being very poor (such as Gaza and Yemen) to extremely wealthy nations (such as Qatar and UAE). Overall, as of 2007, according to the CIA World Factbook, all nations in the Middle East are maintaining a positive rate of growth.
According to the International Monetary Fund, the three largest Middle Eastern economies in nominal GDP in 2023 were Saudi Arabia ($1.062 trillion), Turkey ($1.029 trillion), and Israel ($539 billion). Regarding nominal GDP per capita, the highest ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.573 trillion), Saudi Arabia ($2.301 trillion), and Iran ($1.692 trillion) had the largest economies in terms of GDP PPP. When it comes to GDP PPP per capita, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573). The economic structures of Middle Eastern nations differ in the sense that while some nations are heavily dependent on export of only oil and oil-related products (such as Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is also an important sector of the economies, especially in the case of UAE and Bahrain.
With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions of the Middle East. In recent years, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is notably high in the Middle East and North Africa region, particularly among young people aged 15–29, a demographic representing 30% of the region's total population. The total regional unemployment rate in 2005, according to the International Labour Organization, was 13.2%, and among youth is as high as 25%, up to 37% in Morocco and 73% in Syria.
Climate change
Gallery
See also
Notes
References
Further reading
External links
"Middle East – Articles by Region" Archived 9 February 2014 at the Wayback Machine – Council on Foreign Relations: "A Resource for Nonpartisan Research and Analysis"
"Middle East – Interactive Crisis Guide" Archived 30 November 2009 at the Wayback Machine – Council on Foreign Relations: "A Resource for Nonpartisan Research and Analysis"
Middle East Department University of Chicago Library
Middle East Business Intelligence since 1957: "The leading information source on business in the Middle East" – meed.com
Carboun – advocacy for sustainability and environmental conservation in the Middle East
Middle East at Curlie
Middle East News from Yahoo! News
Middle East Business, Financial & Industry News – ArabianBusiness.com |
aluminium recycling | Aluminium recycling is the process in which secondary aluminium is created from scrap or other forms of end-of-life or otherwise unusable aluminium. It involves re-melting the metal, which is cheaper and more energy-efficient than the production of aluminum from raw bauxite via electrolysis of aluminum oxide (Al2O3) during the Hall–Héroult and Bayer processes.
Recycling scrap aluminium requires only 5% of the energy used to make new aluminium from the raw ore. In 2022, the United States produced 3.86 metric tons of secondary aluminum for every metric ton of primary aluminum produced. Over the same time period, secondary aluminum accounted for 34% of the total new supply of aluminum including imports. Used beverage containers are the largest component of processed aluminum scrap, and most of it is manufactured back into aluminium cans.
Recycling process
Collection & Sorting
The first step in aluminium recycling is the collection and sorting of aluminium scrap from various sources. Scrap aluminium comes primarily from either manufacturing scrap or end-of-life aluminium products such as vehicles, building materials, and consumer products. Manufacturing scrap includes shreds, shavings, cuttings, and other leftover aluminium from manufacturing processes. Post-consumer scrap consists of obsolete or discarded aluminium products. Aluminium cans, in particular, are a major source of recyclable aluminium scrap. Once collected, aluminium scrap is sorted based on alloy type, grade, impurity levels, and other factors. Sorting may be done manually or using technologies like eddy current separators, air classifiers, and density separators. The scrap is sorted into categories like wrought alloy scrap, casting alloy scrap, used beverage cans, automobile scrap, and mixed scrap. Proper sorting is essential for producing high-quality recycled aluminium.
Pre-Treatment
After sorting, the scrap may undergo pre-treatment processes to prepare it for melting. These can include baling, shredding, crushing, granulating, decoating, and demagnetizing. Shredding and crushing reduce the particle size of the scrap and liberate it from other materials, while granulating produces fine particles ideal for melting. Thermal decoating removes coatings like paint and plastic from aluminium surfaces. Demagnetizing removes iron particles clinging to the aluminium scrap. Pre-treatment improves the density of the scrap charge and removes contaminants, resulting in faster melting, cleaner metal, reduced dross formation, and lower energy consumption.
Melting
Once pre-treated, the aluminium scrap undergoes melting and liquid metal treatment to produce refined aluminium alloy suitable for casting or reprocessing. Different furnace types are used based on the type of scrap, desired metal quality, and economics. Smaller scrap is typically processed in rotary or reverberatory gas-fired furnaces, while large individual pieces of scrap can be charged directly into reverb furnaces through side wells. Electric induction furnaces are also used. As the scrap melts, fluxes are added to bind and absorb impurities which are scraped off the top as dross. Chlorine gas may also be injected to remove impurities through flotation. The melt can then undergo refining processes like flux injection to further reduce hydrogen and impurities. Degassing removes dissolved hydrogen while chemical filtration removes solid impurities and inclusions. The final result is molten aluminium alloy ready for casting.
Casting
The molten recycled aluminium is cast into solid forms such as ingots, sows, or directly into sheets or extrusion billets. Direct-chill casting is commonly used to solidify the liquid aluminium into large cylindrical billets for extrusion or rolling. The direct chill method sprays water onto the hot metal as it exits the mold, quickly chilling it into a solid billet form. For ingots, book molds are often used, producing slabbed ingots suitable for remelting or rolling. Continuous casting directly shapes the aluminium into rolling slabs without an intermediate ingot casting step. Twin-belt or twin-roll strip casting produces alloy strips 6-7mm thick directly from the melt for subsequent rolling. The casting method depends on the subsequent processing and use of the recycled aluminium alloy.
History
Although aluminium in its pure form was produced as early as 1825, secondary aluminium production, or recycling, rose in volume with the introduction of industrially viable primary aluminium processes, namely the combination of the Bayer and Hall-Héroult processes. The Hall-Héroult process for aluminium production from alumina was invented in 1886 by Charles Hall and Paul Héroult. Carl Josef Bayer created a multi-step process to convert raw bauxite into alumina in 1888. As aluminium production rose with the use of these two processes, aluminium recycling grew too. In 1904, the first two aluminium recycling plants were built in the United States; one was built in Chicago, Illinois and the other in Cleveland, Ohio. Aluminium recycling increased most significantly in volume when metal resources were strained during WWI, as the U.S. government campaigned for civilians to donate old products such as aluminium pots, pans, boats, vehicles, and toys to be recycled for the construction of aluminium airframes.
Advantages
Aluminium is an infinitely recyclable material, and it takes up to 95 percent less energy to recycle it than to produce primary aluminium, which also limits emissions, including greenhouse gases. Today, about 75 percent of all aluminium produced in history, nearly a billion tons, is still in use.
The recycling of aluminium generally produces significant cost savings over the production of new aluminium, even when the cost of collection, separation and recycling are taken into account. Over the long term, even larger national savings are made when the reduction in the capital costs associated with landfills, mines, and international shipping of raw aluminium are considered.
Energy savings
Recycling aluminium uses about 5% of the energy required to create aluminium from bauxite; the amount of energy required to convert aluminium oxide into aluminium can be vividly seen when the process is reversed during the combustion of thermite or ammonium perchlorate composite propellant.
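As a rough illustration of what that 5% figure means in practice, the sketch below estimates the electricity avoided by recycling rather than smelting aluminium. The 14,000 kWh per tonne primary-smelting figure is an assumed round number for the example, not a value taken from this article.

```python
# Illustrative sketch only: the primary-smelting electricity figure is an
# assumed round number, not a value from this article; the 5% ratio is the
# one quoted above.

PRIMARY_KWH_PER_TONNE = 14_000   # assumed electricity to smelt 1 t of primary aluminium
RECYCLING_FRACTION = 0.05        # recycling uses roughly 5% of the primary energy

def energy_saved_kwh(tonnes_recycled: float) -> float:
    """Electricity avoided by recycling instead of primary smelting."""
    per_tonne_saving = PRIMARY_KWH_PER_TONNE * (1 - RECYCLING_FRACTION)
    return tonnes_recycled * per_tonne_saving

for tonnes in (1, 1_000, 1_000_000):
    print(f"{tonnes:>9,} t recycled -> about {energy_saved_kwh(tonnes):,.0f} kWh saved")
```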
Aluminium die extrusion is a way of producing reusable material from aluminium scrap without the large energy input of a melting process. In 2003, half of the products manufactured with aluminium were sourced from recycled aluminium material.
Environmental savings
The benefit with respect to emissions of carbon dioxide depends in part on the type of energy used. Electrolysis can be done using electricity from non-fossil-fuel sources, such as nuclear, geothermal, hydroelectric, or solar. Aluminium production is attracted to sources of cheap electricity. Canada, Brazil, Norway, and Venezuela get 61 to 99% of their electricity from hydropower and are major aluminium producers. However, the anodes widely used in the Hall–Héroult process are made of carbon and are consumed during aluminium production, generating large quantities of carbon dioxide regardless of the electricity source. Efforts are underway to eliminate the need for carbon anodes. The use of recycled aluminium also decreases the need for mining and refining bauxite.
The vast amount of aluminium used means that even small percentage losses are large expenses, so the flow of material is closely monitored and accounted for, for financial reasons. Efficient production and recycling benefit the environment as well.
Impact
Environmental
Because many countries continue to rely on coal-generated electricity for aluminium production, the aluminium industry contributes about 2% of global greenhouse gas emissions, around 1.1 billion tons of carbon dioxide. Many countries now seek to decarbonize aluminium production, both because aluminium is the second most used metal in the world and because doing so would substantially reduce total greenhouse gas emissions and help slow climate change.

As one of the most recyclable and most recycled materials in use today, aluminium can be recycled virtually indefinitely. Since recycled aluminium takes about 5% of the energy used to make new aluminium, around 75% of the aluminium ever manufactured continues to be in use today. According to the Aluminium Association, in industrial markets such as automotive and building, aluminium is recycled at rates of up to 90%.
Since 1991, greenhouse gas emissions from aluminium cans have dropped by about 40%, as has their energy demand. This can be attributed to a reduction in the carbon intensity of primary aluminium production, improvements in the efficiency of manufacturing operations, and lighter cans. While primary aluminium accounts for only 26.6% of the can, it makes up the major source of the can's carbon footprint. For example, as of 2020, 86% of China's aluminium production relies mostly on coal-generated electricity. Canada, on the other hand, sources roughly 90% of its primary aluminium production from hydroelectric power, which is considered among the most sustainable in the world.

Aluminium's applications are wide and numerous, from defense construction and electrical transmission to emission-reducing goods such as electric vehicles and solar panels. As such, countries have begun to decarbonize aluminium production to combat global climate change.
Economic
Aluminium recycling has several economic benefits when done properly. In fact, the Environmental Protection Agency considers recycling a "critical" part of the United States economy, contributing to tax revenue, wages, and job creation. By facilitating scrap handling and improving its efficiency, from "end of life" scrap to scrap repurposed throughout the production stage ("in-house" scrap), aluminium recycling helps in achieving the goals of a circular economy. This type of economy focuses on minimizing the extraction of natural resources, leading to a reduction of consumer and industrial waste. Examples of jurisdictions that have adopted the shift to a circular economy include the European Union, Finland, France, Slovenia, Italy, Germany, and the Netherlands.

A recent study conducted within the United States highlighted several ways in which aluminium recycling provides economic benefits, including:
Job creation: contributing more than 100,000 reprocessing jobs to the United States economy.
Economic activity: generating about $1.6 billion in material sales.
Wage increases: raising wages in the waste management and recycling industry from $2.1 billion to $5 billion.
Energy conservation: saving enough energy to power 1.5 million homes per year.
Waste management: keeping more than 1 million tons of waste out of landfills every year.

As countries take note of the various economic and environmental benefits of aluminium recycling, increased efforts are expected to improve the efficiency of this process.
Recycling rates
According to 2020 data from the International Aluminium Institute, the global recycling efficiency rate is 76%. Around 75% of the almost 1.5 billion tonnes of aluminium ever produced is still in productive use today.

Brazil recycles 98.2% of its aluminium can production, equivalent to 14.7 billion beverage cans per year, ranking first in the world, ahead of Japan's 82.5% recovery rate. Brazil has topped the aluminium can recycling charts eight years in a row.
Europe
Challenges
Aside from recycled aluminium beverage cans, the majority of recycled aluminium comes from a mixture of different alloys. Those alloys generally have high percentages of silicon (Si) and require additional refinement during the shredding, sorting, and refining process to reduce impurities. Due to the levels of impurities that remain after refinement, the applications of recycled aluminium alloys are limited to castings and extrusions. The aerospace industry often restricts impurity levels of Si and Fe in alloys to a maximum of 0.40%. Controlling these elements becomes increasingly difficult the more often the metal has been recycled, and meeting performance requirements then demands higher-cost operations.
Byproducts
White dross, a residue from primary aluminium production and secondary recycling operations, usually classified as waste, still contains useful quantities of aluminium which can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia) which spontaneously ignites on contact with air; contact with damp air results in the release of copious quantities of ammonia gas. Despite these difficulties, however, the waste has found use as a filler in asphalt and concrete.
See also
Environmental issues with mining
Ferrous metal recycling
Reverse vending machine
References
External links
Secondary Aluminum Smelters of the World - A list of companies who produce secondary aluminium (i.e., recycled or remelted from scrap metal) |
air source heat pump | An air source heat pump (ASHP) is a heat pump that can absorb heat from air outside a building and release it inside; it uses the same vapor-compression refrigeration process and much the same equipment as an air conditioner, but in the opposite direction. ASHPs are the most common type of heat pump and, usually being smaller, tend to be used to heat individual houses or flats rather than blocks, districts or industrial processes.
Air-to-air heat pumps provide hot or cold air directly to rooms, but do not usually provide hot water. Air-to-water heat pumps use radiators or underfloor heating to heat a whole house and are often also used to provide domestic hot water.
An ASHP can typically gain 4 kWh of thermal energy from 1 kWh of electric energy. ASHPs are optimized for flow temperatures between 30 and 40 °C (86–104 °F), suitable for buildings with heat emitters sized for low flow temperatures. With losses in efficiency, an ASHP can even provide full central heating with a flow temperature up to 80 °C (176 °F).

As of 2023 about 10% of home heating worldwide is from ASHPs. They are the main way to phase out gas boilers (also known as "furnaces") from houses and so avoid their greenhouse gas emissions.
Technology
Air at any natural temperature contains some heat. An air source heat pump transfers some of this from one place to another, for example between the outside and inside of a building.
An air-to-air system can be designed to transfer heat in either direction, to heat or cool the interior of the building in winter and summer respectively. Internal ducting may be used to distribute the air. An air-to-water system only pumps heat inwards, and can provide space heating and hot water. For simplicity, the description below focuses on use for interior heating.
The technology is similar to that of a refrigerator, freezer or air conditioning unit: the different effect is due to the location of the system components. Just as the pipes on the back of a refrigerator become warm as the interior cools, so an ASHP warms the inside of a building whilst cooling the outside air.
The main components of a split-system (called split as there are both inside and outside coils) air source heat pump are:
An outdoor evaporator heat exchanger coil, which extracts heat from ambient air
One or more indoor condenser heat exchanger coils. They transfer the heat into the indoor air, or into an indoor heating system such as water-filled radiators or underfloor circuits and a domestic hot water tank.

Less commonly, a packaged ASHP has everything outside, with hot (or cold) air sent inside through a duct. These are also called monobloc units and are useful for keeping flammable propane refrigerant outside the house.

An ASHP can provide three or four times as much heat as an electric resistance heater using the same amount of electricity, as the comparison sketched below illustrates. Burning gas or oil emits carbon dioxide and also NOx, which can be harmful to health. An air source heat pump itself emits no carbon dioxide, nitrogen oxides or any other gases. It uses a small amount of electricity to transfer a large amount of heat.
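A simple way to see the difference is to compare the electricity needed to meet the same heat demand with resistance heating and with a heat pump. In the sketch below the annual heat demand and the seasonal COP of 3.5 are illustrative assumptions, not figures from this article.

```python
# Minimal comparison sketch; the heat demand and COP values are assumptions.

ANNUAL_HEAT_DEMAND_KWH = 12_000   # assumed space-heating demand of one house
RESISTANCE_COP = 1.0              # electric resistance: 1 kWh heat per kWh electricity
HEAT_PUMP_COP = 3.5               # assumed seasonal average COP for an ASHP

def electricity_needed_kwh(heat_kwh: float, cop: float) -> float:
    """Electricity required to deliver a given amount of heat."""
    return heat_kwh / cop

resistance = electricity_needed_kwh(ANNUAL_HEAT_DEMAND_KWH, RESISTANCE_COP)
heat_pump = electricity_needed_kwh(ANNUAL_HEAT_DEMAND_KWH, HEAT_PUMP_COP)
print(f"Resistance heating:   {resistance:,.0f} kWh of electricity per year")
print(f"Air source heat pump: {heat_pump:,.0f} kWh of electricity per year")
print(f"Saving: {resistance - heat_pump:,.0f} kWh ({1 - heat_pump / resistance:.0%})")
```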
Unlike an air conditioning unit, most ASHPs are reversible and are able to either warm or cool buildings and in some cases also provide domestic hot water.
Heating and cooling are accomplished by pumping a refrigerant through the heat pump's indoor and outdoor coils. As in a refrigerator, a compressor, condenser, expansion valve and evaporator are used to change the state of the refrigerant between a colder liquid and a hotter gas.
When the liquid refrigerant at a low temperature and low pressure passes through the outdoor heat exchanger coils, ambient heat causes the liquid to boil (change to gas or vapor). Heat energy from the outside air has been absorbed and stored in the refrigerant as latent heat. The gas is then compressed using an electric pump; the compression increases the temperature of the gas.
Inside the building, the gas passes through a pressure valve into heat exchanger coils. There, the hot refrigerant gas condenses back to a liquid and transfers the stored latent heat to the indoor air, water heating or hot water system. The indoor air or heating water is pumped across the heat exchanger by an electric pump or fan.
The cool liquid refrigerant then re-enters the outdoor heat exchanger coils to begin a new cycle. Each cycle usually takes a few minutes.

Most heat pumps can also operate in a cooling mode where the cold refrigerant is moved through the indoor coils to cool the room air.
Usage
Air source heat pumps are used to provide interior space heating and cooling even in colder climates, and can be used efficiently for water heating in milder climates. A major advantage of some ASHPs is that the same system may be used for heating in winter and cooling in summer. Though the cost of installation is generally high, it is less than the cost of a ground source heat pump, because a ground source heat pump requires excavation to install its ground loop. The advantage of a ground source heat pump is that it has access to the thermal storage capacity of the ground which allows it to produce more heat for less electricity in cold conditions.
Home batteries can mitigate the risk of power cuts and like ASHPs are becoming more popular. Some ASHPs can be coupled to solar panels as primary energy source, with a conventional electric grid as backup source.
Thermal storage solutions incorporating resistance heating can be used in conjunction with ASHPs. Storage may be more cost-effective if time of use electricity rates are available. Heat is stored in high density ceramic bricks contained within a thermally-insulated enclosure; storage heaters are an example. ASHPs may also be paired with passive solar heating. Thermal mass (such as concrete or rocks) heated by passive solar heat can help stabilize indoor temperatures, absorbing heat during the day and releasing heat at night, when outdoor temperatures are colder and heat pump efficiency is lower.
Replacing gas heating in existing houses
As of 2023 ASHPs are bigger than gas boilers and need more space outside, so the process is more complex and can be more expensive than simply removing a gas boiler and installing an ASHP in its place. If running costs matter, choosing the right size is important, because an ASHP which is too large will be more expensive to run.

It is difficult to retrofit conventional heating systems that use radiators/radiant panels, hot water baseboard heaters, or even smaller diameter ducting, with ASHP-sourced heat. The lower heat pump output temperatures would mean radiators would have to be increased in size or a low temperature underfloor heating system installed instead.
Alternatively, a high temperature heat pump can be installed and existing heat emitters retained; however, as of 2023 these heat pumps are more expensive to buy and run, so they may only be suitable for buildings which are hard to alter or insulate, such as some large historic houses.
In cold climates
Operation of normal ASHPs is generally not recommended below −10 °C (14 °F). However, ASHPs designed specifically for very cold climates (in the US these are certified under Energy Star) can extract useful heat from ambient air as cold as −30 °C (−22 °F), although below about −25 °C electric resistance heating may be more efficient. This is made possible by the use of variable-speed compressors, powered by inverters. Although air source heat pumps are less efficient than well-installed ground source heat pumps in cold conditions, air source heat pumps have lower initial costs and may be the most economic or practical choice. A hybrid system, with both a heat pump and an alternative source of heat such as a fossil fuel boiler, may be suitable if it is impractical to properly insulate a large house. Alternatively, multiple heat pumps or a high temperature heat pump may be considered.

In some weather conditions condensation will form and then freeze onto the coils of the heat exchanger of the outdoor unit, reducing air flow through the coils. To clear this the unit operates a defrost cycle, switching to cooling mode for a few minutes and heating the coils until the ice melts. Air-to-water heat pumps use heat from the circulating water for this purpose, which results in a small and probably undetectable drop in water temperature; air-to-air systems either take heat from the air in the building or use an electrical heater. Some air-to-air systems simply stop the fans of both units and switch to cooling mode, so that the outdoor unit returns to being the condenser, heating up and defrosting.
Noise
An air source heat pump requires an outdoor unit containing moving mechanical components, including fans, which produce noise. Modern devices offer schedules for silent-mode operation with reduced fan speed. This reduces the maximum heating power but can be applied at mild outdoor temperatures without efficiency loss. Acoustic enclosures are another approach to reducing noise in a sensitive neighbourhood. In insulated buildings, operation can be paused at night without significant temperature loss; only at low temperatures does frost protection force operation after a few hours.
In the United States, the allowed nighttime noise level is 45 A-weighted decibels (dBA), and in the UK it is 42 dBA measured from the nearest neighbour. In Germany the limit in residential areas is 35 dBA, usually measured according to European Standard EN 12102.

Another feature of the external heat exchangers of air source heat pumps (ASHPs) is their need to stop the fan from time to time for a period of several minutes in order to get rid of frost that accumulates in the outdoor unit in the heating mode. After that, the heat pump starts to work again. This part of the work cycle results in two sudden changes in the noise made by the fan. The acoustic effect of such disruption is especially powerful in quiet environments where background nighttime noise may be as low as 0 to 10 dBA. This is addressed in legislation in France: according to the French concept of noise nuisance, "noise emergence" is the difference between ambient noise including the disturbing noise and ambient noise without it.

By contrast, a ground source heat pump has no need for an outdoor unit with moving mechanical components.
Efficiency ratings
The efficiency of air source heat pumps is measured by the coefficient of performance (COP). A COP of 4 means the heat pump produces 4 units of heat energy for every 1 unit of electricity it consumes. Within temperature ranges of −3 °C (27 °F) to 10 °C (50 °F), the COP for many machines is fairly stable.
In mild weather with an outside temperature of 10 °C (50 °F), the COP of efficient air source heat pumps ranges from 4 to 6. However, on a cold winter day, it takes more work to move the same amount of heat indoors than on a mild day. The heat pump's COP is limited by the Carnot cycle and approaches 1.0 as the outdoor-to-indoor temperature difference increases, which for most air source heat pumps happens as outdoor temperatures approach −18 °C (0 °F). Heat pumps built to use carbon dioxide as a refrigerant may have a COP of greater than 2 even down to −20 °C, pushing the break-even figure down to −30 °C (−22 °F). A ground source heat pump shows comparatively less change in COP as outdoor temperatures vary, because the ground from which it extracts heat has a more constant temperature than outdoor air.
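The Carnot limit referred to above can be written, for heating, as COP_max = T_hot / (T_hot − T_cold) with both temperatures in kelvin. The sketch below evaluates it for a few outdoor temperatures; the 35 °C flow temperature and the assumption that a real machine reaches about half of the ideal figure are illustrative, not manufacturer data.

```python
# Ideal (Carnot) heating COP: T_hot / (T_hot - T_cold), temperatures in kelvin.
# The flow temperature and the "fraction of Carnot" below are illustrative
# assumptions, not data for any particular heat pump.

def carnot_heating_cop(t_outdoor_c: float, t_flow_c: float) -> float:
    t_cold = t_outdoor_c + 273.15
    t_hot = t_flow_c + 273.15
    return t_hot / (t_hot - t_cold)

FLOW_TEMP_C = 35.0       # assumed heating-water flow temperature
CARNOT_FRACTION = 0.5    # assumed fraction of the ideal COP a real unit achieves

for outdoor_c in (10, 2, -7, -18):
    ideal = carnot_heating_cop(outdoor_c, FLOW_TEMP_C)
    print(f"{outdoor_c:>4} degC outdoors: ideal COP {ideal:5.1f}, "
          f"rough real-world COP {CARNOT_FRACTION * ideal:4.1f}")
```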
The design of a heat pump has a considerable impact on its efficiency. Many air source heat pumps are designed primarily as air conditioning units, mainly for use in summer temperatures. Designing a heat pump specifically for the purpose of heat exchange can attain greater COP and an extended life cycle. The principal changes are in the scale and type of compressor and evaporator.
Seasonally adjusted heating and cooling efficiencies are given by the heating seasonal performance factor (HSPF) and seasonal energy efficiency ratio (SEER) respectively. In the US the legal minimum efficiency is 14 or 15 SEER and 8.8 HSPF.
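HSPF and SEER are expressed in BTU of heat (or cooling) delivered per watt-hour of electricity, so dividing by 3.412 BTU per watt-hour gives an approximate dimensionless seasonal COP. The sketch below applies that conversion to the minimum figures quoted above.

```python
# HSPF and SEER are in BTU per watt-hour; 1 Wh = 3.412 BTU, so dividing by
# 3.412 gives an approximate dimensionless seasonal COP.

BTU_PER_WATT_HOUR = 3.412

def rating_to_cop(rating_btu_per_wh: float) -> float:
    return rating_btu_per_wh / BTU_PER_WATT_HOUR

print(f"HSPF 8.8 -> seasonal heating COP of about {rating_to_cop(8.8):.1f}")
print(f"SEER 14  -> seasonal cooling COP of about {rating_to_cop(14):.1f}")
print(f"SEER 15  -> seasonal cooling COP of about {rating_to_cop(15):.1f}")
```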
Refrigerant types
Impact on decarbonization and electricity supply
Heat pumps are key to decarbonizing home energy use by phasing out gas boilers.

While heat pumps with backup systems other than electrical resistance heating are often encouraged by electric utilities, air source heat pumps are a concern for winter-peaking utilities if electrical resistance heating is used as the supplemental or replacement heat source when the temperature drops below the point at which the heat pump can meet all of the home's heat requirement. Even if there is a non-electric backup system, the fact that the efficiency of ASHPs decreases with outside temperature is a concern to electric utilities. The drop in efficiency means their electrical load increases steeply as temperatures drop.
A study in Canada's Yukon Territory, where diesel generators are used for peaking capacity, noted that widespread adoption of air source heat pumps could lead to increased diesel consumption if the increased electrical demand due to ASHP use exceeds available hydroelectric capacity. Notwithstanding those concerns, the study did conclude that ASHPs are a cost-effective heating alternative for Yukon residents. As wind farms are increasingly used to supply electricity to the grid, the increased winter load matches well with the increased winter generation from wind turbines, and calmer days result in decreased heating load for most houses even if the air temperature is low.
Heat pumps could help stabilize grids through demand response. As heat pump penetration increases some countries, such as the UK, may need to encourage households to use thermal energy storage, such as very well insulated water tanks. In some countries, such as Australia, integration of this thermal storage with rooftop solar would also help.
Economics
Cost
As of 2023 buying and installing an ASHP in an existing house is expensive if there is no government subsidy, but the lifetime cost will likely be less than or similar to a gas boiler and air conditioner. This is generally also true if cooling is not required, as the ASHP will likely last longer if only heating. The lifetime cost of an air source heat pump will be affected by the price of electricity compared to gas (where available), and may take two to ten years to break even. The IEA recommends governments subsidize the purchase price of residential heat pumps, and some countries do so.
Market
In Norway, Australia and New Zealand most heating is from heat pumps. In 2022 heat pumps outsold fossil fuel based heating in the US and France. ASHPs can be helped to compete by increasing the price of fossil gas compared to that of electricity and using suitable flexible electricity pricing. In the US air-to-air is the most common type. As of 2023 over 80% of heat pumps are air source.
Maintenance and reliability
It is thought that ASHP need less maintenance than fossil fuelled heating, and some say that ASHPs are easier to maintain than ground source heat pumps due to the difficulty of finding and fixing underground leaks. Installing too small an ASHP could shorten its lifetime (but one which is too large will be less efficient). However others say that boilers require less maintenance than ASHPs. A Consumer Reports survey found that "on average, around half of heat pumps are likely to experience problems by the end of the eighth year of ownership".
References
energy in azerbaijan | Two-thirds of energy in Azerbaijan comes from fossil gas and almost a third from oil. Azerbaijan is a major producer of oil and gas, much of which is exported. Most electricity is generated by gas-fired power plants.

Corruption in Azerbaijan is alleged to be connected to the oil and gas industry, which is very important for the economy.

Within the country (that is, not counting the use of its exports), greenhouse gas emissions per person are around the world average. Azerbaijan aims to reduce its emissions by reducing gas leaks and flaring.
History
In 1846, Azerbaijan became the site of the world's first industrially drilled oil well. By 1899, Azerbaijan produced half of the volume of the world's oil.

The Araz hydroelectric power station, with a total capacity of 22 MW, was constructed in 1970, the Tartar hydroelectric power station (50 MW) in 1976 and the Shamkir hydroelectric power station (380 MW) in 1982.

At that time, along with the construction of power stations, electrical networks were systematically developed and the country's sustainable energy system was created. In those years the 330 kV "Ali Bayramli TPP – Aghdam – Ganja – Aghstafa" and "Ali Bayramli – Yashma – Derbent" lines, the 5th Mingachevir line, the 500 kV 1st and 2nd Absheron lines, the "Mukhranis – Vali" line and other power lines were put into operation, together with the 330/110/10 kV "Yashma", "Ganja", "Agstafa" and Imishli substations, the 500/330/220 kV Absheron substation, and the 220/110/10 kV "Hovsan", "Nizami", "Mushfig", "Sangachal", "Masalli", "Agsu" and "Babek" substations.

In December 1995, the European Bank for Reconstruction and Development granted Azerbaijan a $53 million loan for the construction of the Yenikend hydroelectric power station, which was built with a total capacity of 150 MW.

The reconstruction of the Mingachevir hydroelectric power station, the 330 kV Aghjabadi and 110 kV Barda substations, and the 330 kV Azerbaijan Thermal Power Station – "Agjabadi-Imishli" transmission lines was carried out with financing from the European Bank for Reconstruction and Development and the Islamic Development Bank.

In 2002, two gas-turbine units with a capacity of 53.5 MW each were commissioned at the Baku thermal power station with financing from the German bank Bayerische Landesbank Girozentrale, and a 400 MW steam-gas plant was commissioned at the "Shimal" power plant with a loan from the Japanese International Cooperation Bank.

On February 14, 2005, the head of state approved the State Program on "Development of the Fuel and Energy Complex (2005–2015) in the Republic of Azerbaijan".

The electricity demand of the country's economy has been met entirely by 12 thermal power stations (Azerbaijan TPP, Shirvan TPP, Shimal TPP, Baku TPP, Nakhchivan TPP and the Astara, Khachmaz, Sheki, Nakhchivan, Baku, Quba and Sangachal power stations) and 6 hydroelectric power stations (Mingechevir, Shamkir, Yenikend, Varvara, Araz and Vaykhir HPPs). Their total capacity was about 5,900 megawatts. Thermal power plants account for 90 percent of electricity production in Azerbaijan and hydroelectric power stations for 10 percent.

The Energy Regulatory Agency under the Ministry of Energy was established on the basis of the Department for State Energy and Gas Supervision of the Ministry of Energy of the Republic of Azerbaijan by a presidential decree dated December 22, 2017, and its charter was approved.

According to the World Energy Trilemma Index compiled by the World Energy Council for 2017, Azerbaijan ranked 31st (BBA) among 125 countries.

According to the Global Energy Architecture Performance Index report 2017, compiled by the World Economic Forum, Azerbaijan ranked 36th out of 127 countries with a score of 0.67.

According to the same organization's 2016 report, Azerbaijan ranked 32nd out of 126 countries with a score of 0.68: 0.68 for economic growth and development, 0.57 for environmental sustainability, and 0.79 for energy access and security.
On April 19, 2019, SOCAR president Rovnag Abdullayev and BP's regional president for Azerbaijan, Georgia, and Turkey, Garry Johns, signed a contract worth $6 billion. The final investment decision on the Azeri Central East (ACE) platform, which is planned to be built on the Azeri-Chirag-Gunashli (ACG) block, was adopted at the signing ceremony. Construction is scheduled to start in 2019, with completion scheduled for mid-2022.
Oil
Production: 931,990 bbl/d (148,175 m3/d) (2008)
Consumption: 160,000 bbl/d (25,000 m3/d) (2007)

From 1987 to 1993, production decreased from 13.8 million tons of oil and 12.5 billion cubic meters of gas to 10.3 million tons of oil and 6.8 billion cubic meters of gas. The annual rate of decline in production was 7.1% for oil and 13.5% for gas. Exploratory drilling decreased 17-fold, from 170,000 meters in 1970 to 10,000 meters in 1995.
"Shah deniz-2"
"Shah Deniz-2" energy strategic projects is the energy security and energy diversification project.Contract of the Shah Deniz gas field was signed in 1996, and the first pipeline connecting the Caspian Sea with the Georgian side of the Black Sea coast was built in 1999. The Baku-Tbilisi-Ceyhan main oil export pipeline connecting the Caspian Sea with the Mediterranean and international markets was built in 2006, and the Southern Gas Pipeline in 2007.
Transparency
A 2013 report by the UK-based NGO Global Witness found that companies working in Azerbaijan's oil industry lack transparency and accountability. It documented that millions of dollars of revenue disappear into the hands of obscurely owned private companies that cooperate with SOCAR.

The report concluded that the opacity of the deals struck by SOCAR "is systemic" and added,
“These findings should be of great concern to the international community as a whole. Oil and its derivative products are central to the Azerbaijani economy, making up 95% of exports in 2011. It is important for Europe that Azerbaijan keeps the oil and gas flowing and maintains a transparent and well-run energy industry. Yet this briefing shows that much of the oil business in Azerbaijan remains opaque, and corruption is still perceived to be at epidemic levels…"
Natural gas
On March 10, 2016, Natiq Aliyev, Azerbaijani energy minister, publicly said that Azerbaijan has enough gas reserves to fill the Southern Gas Corridor (SGC). The SGC is an energy project whose goal is to move 10 billion cubic meters of gas from Azerbaijan through Georgia and Turkey to Europe.
Electricity
production: 24.32 billion kWh (2017)
consumption: 17.09 billion kWh (2017)

Electrical power is the most widely used form of energy in Azerbaijan for both domestic and industrial purposes. Electricity production and distribution are handled by the state-owned Azerenerji JSC and Azerishig JSC. The whole country's electricity demand is met by the power stations operating under Azerenerji. Thirteen of those stations are thermal power stations with a combined installed capacity of 5,400 MW, and 17 are hydro-power stations with a combined installed capacity of 1,162.2 MW. Moreover, a number of small power stations have been set up by other companies in the country utilizing water, wind, solar, and domestic waste.

Total installed capacity in September 2019 was 6.6455 million kW (6,645.5 MW). Eight thermal plants supply 80% of capacity, including the Shimal-2 power station put into use in early September 2019. Twelve percent comes from two hydroelectric plants (Mingachevir HPP and Shamkir HPP), and the rest from other thermal, hydro and small hydro plants. The main power plants (both thermal) are near Shirvan (Janub TPP, 780 MW) and Mingechaur (Azerbaijan TPP, 2,400 MW).
Report of 2017
The installed capacity of the country's electric power system has reached 7,172.6 MW. Currently, the system's available capacity is 5,200 MW and the required peak power is around 3,750-3,900 MW. In 2017, electricity production amounted to 22,209.8 million kWh, including 20,445.4 million kWh at thermal power plants and 1,732.8 million kWh at hydroelectric power stations, a decrease of 2.0% compared with the corresponding period of 2016 (22,665.7 million kWh).
Totally 4,778.8 million cubic meters of natural gas and 311.5 thousand tons of mazut fuel oil were used for electricity generation during the year.
A 50 MVA 110/35 kV transformer, two 110 kV circuit breakers, and 35 kV electrical equipment were installed at the Hoca Hasan substation in Binagadi district. A 110 kV double-circuit transmission line between the 110 kV "Liman" and "White City" substations and three 35/0.4 kV transformer substations were also constructed.
In 2017, oil production amounted to around 38.7 million tons in the country. 28.9 million tons of extracted oil belonged to the Azeri-Chirag-Gunashli, 2.4 million tons to Shah Deniz (condensate), and 7.4 million tons to the State Oil Company of the Azerbaijan Republic.
In 2017, President Ilham Aliyev took part in the opening of the following substations:
"Sarıcali" Substation with 110/35/10 kV in Saatli district
"Yenikend" Substation with 110/35/6 kV in Samukh district
"New Ganja" Substation with 110/35/10 kV in Ganja town
"Neftchala" Substation at 110/35/6 kV in Neftchala district
"Garagashli" Substation with 110/35/10 kV in Salyan district
Shamkir Automated Management and Control Center of "Azerishig" OJSC.
Hydroelectric power plants
Mingechevir Hydro Power Plant – 402 MW
Sarsang Hydro Power Plant – 50 MW
Shamkir Hydro Power Plant – 405 MW
Yenikend Hydro Power Plant – 150 MW
Foreign investment competition with non-energy sectors
In January 2015, the president of Azerbaijan, Ilham Aliyev, announced that he would direct his government to create programs to bring investment dollars to industries other than oil, specifically citing the industrial and agricultural sectors as examples.

Speaking of Azerbaijan's economy, Aliyev said, "That's why it's much easier to attract investments to stable countries with socio-political stability and information growth". He said that the banking industry will become more important in helping develop the country's non-energy industries.
See also
Petroleum industry in Azerbaijan
Economy of Azerbaijan
Natural Resources of Azerbaijan
References
controlled burn | A controlled or prescribed (Rx) burn, which can include hazard reduction burning, backfire, swailing or a burn-off, is a fire set intentionally for purposes of forest management, fire suppression, farming, prairie restoration or greenhouse gas abatement. A controlled burn may also refer to the intentional burning of slash and fuels through burn piles. Fire is a natural part of both forest and grassland ecology and controlled fire can be a tool for foresters.
Hazard reduction or controlled burning is conducted during the cooler months to reduce fuel buildup and decrease the likelihood of serious hotter fires. Controlled burning stimulates the germination of some desirable forest trees, and reveals soil mineral layers which increases seedling vitality, thus renewing the forest. Some cones, such as those of lodgepole pine, sequoia and many chaparral shrubs are pyriscent, meaning heat from fire opens cones to disperse seeds.
In industrialized countries, controlled burning is usually overseen by fire control authorities for regulations and permits.
History
There are two basic causes of wildfires. One is natural, mainly through lightning, and the other is human activity. Controlled burns have a long history in wildland management. Pre-agricultural societies used fire to regulate both plant and animal life. Fire history studies have documented periodic wildland fires ignited by indigenous peoples in North America and Australia. Native Americans frequently used fire to manage natural environments in a way that benefited humans and wildlife, starting low-intensity fires that released nutrients for plants, reduced competition, and consumed excess flammable material that otherwise would eventually fuel high-intensity, catastrophic fires.

Fires, both naturally caused and prescribed, were once part of natural landscapes in many areas. In the US, these practices ended in the early 20th century, when federal fire policies were enacted with the goal of suppressing all fires. Since 1995, the US Forest Service has slowly incorporated burning practices into its forest management policies.

Fire suppression has changed the composition and ecology of North American habitats, including highly fire-dependent ecosystems such as oak savannas and canebrakes, which are now critically endangered habitats on the brink of extinction. In the Eastern United States, fire-sensitive trees such as the red maple are increasing in number, at the expense of fire-tolerant ones like oaks.
Forest use
Controlled burning reduces fuels, may improve wildlife habitat, controls competing vegetation, improves short-term forage for grazing, improves accessibility, helps control tree disease, and perpetuates fire-dependent species. To improve the application of prescribed burns for conservation goals, which may involve mimicking historical or natural fire regimes, scientists assess the impact of variation in fire attributes. Fire frequency is the most discussed fire attribute in the scientific literature, likely because it is considered the most critical aspect of a fire regime. Scientists less often report data concerning the effects of variation in other fire attributes (i.e., intensity, severity, patchiness, spatial scale, or phenology), even though these also likely affect conservation goals.

Furthermore, low-intensity fire treatments can be administered in places where mechanized treatments such as disc harrowing cannot. In some areas where grasses and herbaceous plants thrive, species variation and cover can increase drastically a few years after fuel treatments.

Many trees depend on fire as a successful way to clear out the competition and release their seeds. In particular, the giant sequoia depends on fire to reproduce: the cones of the tree open after a fire and release their seeds, the fire having cleared all competing vegetation.
Agricultural use
In addition to forest management, controlled burning is also used in agriculture. In the developing world, this is often referred to as slash and burn. In industrialized nations, it is seen as one component of shifting cultivation, as a part of field preparation for planting. Often called field burning, this technique is used to clear the land of any existing crop residue as well as to kill weeds and weed seeds. Field burning is less expensive than most other methods such as herbicides or tillage, but because it produces smoke and other fire-related pollutants, its use is not popular in agricultural areas bounded by residential housing.

Prescribed fires are broadly used in the context of woody plant encroachment, with the aim of improving the balance of woody plants and grasses in shrublands and grasslands.

In northern India, especially in Punjab, Haryana, and Uttar Pradesh, crop residue burning (CRB) is a major problem. CRB degrades environmental quality in these and neighboring states, including in the Indian capital, New Delhi.

In East Africa, bird densities increased in the months after controlled burning had occurred.
Controversies
The Oregon Department of Environmental Quality began requiring a permit for farmers to burn their fields in 1981, but the requirements became stricter in 1988 following a multi-car collision in which smoke from field burning near Albany, Oregon, obscured the vision of drivers on Interstate 5, leading to a 23-car collision in which 7 people died and 37 were injured. This resulted in more scrutiny of field burning and proposals to ban field burning in the state altogether.

In the European Union, burning crop stubble after harvest is used by farmers for plant health reasons, under several restrictions in cross-compliance regulations.

With controlled burns, there is also a risk that the fires get out of control. For example, the Calf Canyon/Hermits Peak Fire, the largest wildfire in the history of New Mexico, was started by two distinct controlled burns, both set by the US Forest Service, which escaped control and merged.
In the north of Great Britain, large areas of grouse moors are managed by burning in a practice known as muirburn. This kills trees and grasses, preventing natural succession, and generates the mosaic of ling (heather) of different ages which allows very large populations of red grouse to be reared for shooting. The peat-lands are some of the largest carbon sinks in the UK, providing an immensely important ecological service. The government has restricted burning in the area, but hunters have continued to set the moors ablaze, releasing a large amount of carbon into the atmosphere and destroying native habitat.
Political history
The conflict over controlled burn policy in the United States has roots in historical campaigns to combat wildfires and in the eventual acceptance of fire as a necessary ecological phenomenon. Following the colonization of North America, the US used fire suppression laws to eradicate the indigenous practice of prescribed fire. This was done against scientific evidence that supported prescribed burns as a natural process. At a loss to the local environment, colonies used fire suppression to benefit the logging industry. The notion of fire as a tool had somewhat evolved by the late 1970s, as the National Park Service authorized and administered controlled burns. Following the reintroduction of prescribed fire, the Yellowstone fires of 1988 occurred, which significantly politicized fire management. The ensuing media coverage was a spectacle that was vulnerable to misinformation. Reports drastically inflated the scale of the fires, which predisposed politicians in Wyoming, Idaho, and Montana to believe that all fires represented a loss of revenue from tourism. Paramount to the new action plans is the suppression of fires that threaten the loss of human life, with leniency toward areas of historic, scientific, or special ecological interest.

There is still a debate amongst policy makers about how to deal with wildfires. Senators Ron Wyden and Mike Crapo of Oregon and Idaho have been moving to reduce the shifting of capital from fire prevention to fire suppression following the harsh fires of 2017 in both states.

Tensions around fire prevention continue to rise due to the increasing prevalence of climate change. As drought conditions worsen, North America has been facing an abundance of destructive wildfires. Since 1988, many states have made progress toward controlled burns. In 2021, California increased the number of trained personnel available to perform controlled burns and created more accessibility for landowners.
Procedure
Depending on the context and goals of a prescribed fire, additional planning may be necessary. While the most common driver of fuel treatment is the prevention of loss of human life, certain parameters can also be changed to promote biodiversity and to rearrange stand ages appropriately.
For the burning of slash, waste materials left over from logging, there are several types of controlled burns. Broadcast burning is the burning of scattered slash over a wide area. Pile burning is gathering up the slash into piles before burning. These burning piles may be referred to as bonfires. High temperatures can harm the soil, damaging it physically, chemically or sterilizing it. Broadcast burns tend to have lower temperatures and will not harm the soil as much as pile burning, though steps can be taken to treat the soil after a burn. In lop and scatter burning, slash is left to compact over time, or is compacted with machinery. This produces a lower intensity fire, as long as the slash is not packed too tightly.

The risk of fatal fires can also be reduced proactively by reducing ground fuels before they can create a fuel ladder and begin an active crown fire. Predictions show thinned forests lead to mitigated fire intensity and flame length compared to untouched or fire-proofed areas.

Back burning is the term given to the process of lighting vegetation in such a way that it has to burn against the prevailing wind. This produces a slower moving and more controllable fire. While controlled burns utilize back burning during planned fire events to create a "black line", back burning or backfiring is also done to stop a wildfire that is already in progress. Firebreaks are also often used as an anchor point to start a line of fires along natural or man-made features such as a river, road or a bulldozed clearing.

To minimise the impact of smoke, burning should be restricted to daylight hours whenever possible.
Greenhouse gas abatement
Controlled burns on Australian savannas can result in a long-term cumulative reduction in greenhouse gas emissions. One working example is the West Arnhem Fire Management Agreement, started to bring "strategic fire management across 28,000 square kilometres (11,000 sq mi) of Western Arnhem Land" to partially offset greenhouse gas emissions from a liquefied natural gas plant in Darwin, Australia. Deliberately starting controlled burns early in the dry season results in a mosaic of burnt and unburnt country which reduces the area of stronger, late dry season fires; it is also known as "patch burning".
See also
Agroecology
Fire ecology
Fire-stick farming
Native American use of fire in ecosystems
Wildfire suppression
References
Further reading
Beese, W.J., Blackwell, B.A., Green, R.N. & Hawkes, B.C. (2006). "Prescribed burning impacts on some coastal British Columbia ecosystems." Information Report BC-X-403. Victoria B.C.: Natural Resources Canada, Canadian Forest Service, Pacific Forestry Centre. Retrieved from: http://hdl.handle.net/10613/2740
Casals P, Valor T, Besalú A, Molina-Terrén D. Understory fuel load and structure eight to nine years after prescribed burning in Mediterranean pine forests. DOI: 10.1016/j.foreco.2015.11.050
Valor T, González-Olabarria JR, Piqué M. Assessing the impact of prescribed burning on the growth of European pines. DOI: 10.1016/j.foreco.2015.02.002.
External links
U.S. National Park Service Prescribed Fire Policy
Savanna Oak Foundation article on controlled burns
The Nature Conservancy's Global Fire Initiative |
world resources institute | The World Resources Institute (WRI) is a global research non-profit organization established in 1982 with funding from the MacArthur Foundation under the leadership of James Gustave Speth. Subsequent presidents include Jonathan Lash (1993–2011), Andrew D. Steer (2012–2021) and the current president, Ani Dasgupta (2021–).

WRI studies sustainable practices for business, economics, finance and governance, with the purpose of better supporting human society in six areas: food, forests, water, energy, cities, and climate. The institute's flagship report series is the World Resources Report, each edition of which deals with a different topic. WRI encourages initiatives for monitoring, data analysis, and risk assessment, including global and open-source projects. WRI has maintained a 4 out of 4 stars rating from Charity Navigator since 1 October 2008.
Organization
The mission of the World Resources Institute (WRI) is to "move society to provide for the needs and aspirations of current and future generations". It seeks to promote a sustainable human society built on human health and well-being, environmental sustainability, and economic opportunity. WRI partners with local and national governments, private companies, publicly held corporations, and other non-profits, offering services that address global climate change, sustainable markets, ecosystem protection, and environmentally responsible governance.

The World Resources Institute maintains international offices in Brazil, China, Colombia, Ethiopia, India, Indonesia, Kenya, Mexico, the Netherlands, Turkey, the United Kingdom and the United States and is active in over 50 countries.
A report by the Center for International Policy's Foreign Influence Transparency Initiative of the top 50 think tanks on the University of Pennsylvania's Global Go-To Think Tanks rating index found that during the period 2014–2018 World Resources Institute received more funding from outside the United States than any other think tank, with a total of more than US$63 million, though this was described as "unsurprising" given the institute's presence in so many countries.
In 2014, Stephen M. Ross, an American real estate developer, gave the organization US$30 million to establish the WRI Ross Center for Sustainable Cities.
Initiatives
WRI's activities are focused on the areas of water (including oceans), forests, climate, energy, food and cities.
WRI is active in initiatives for monitoring, data analysis, and risk assessment.
WRI emphasizes the extent to which systems are linked, and the need to connect issues such as addressing food insecurity with strategies to address climate change, protect ecosystems, and provide economic security.

WRI worked with companies to develop a common standard, the Greenhouse Gas Protocol, for quantifying and managing GHG emissions.
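Inventory accounting of the kind the Greenhouse Gas Protocol standardizes ultimately reduces to multiplying activity data by emission factors and summing the results. The sketch below shows that pattern; the sources, quantities and factors are invented for the example and are not the protocol's published factors or tooling.

```python
# Emission-factor accounting pattern: emissions = activity data x factor,
# summed across sources. All figures below are invented for illustration.

ACTIVITIES = {
    # source: (amount, unit, kg CO2e per unit) -- hypothetical values
    "natural gas burned": (50_000, "kWh", 0.18),
    "grid electricity":   (120_000, "kWh", 0.40),
    "diesel for fleet":   (8_000, "litres", 2.70),
}

def total_emissions_tonnes(activities: dict) -> float:
    kg = sum(amount * factor for amount, _unit, factor in activities.values())
    return kg / 1000.0

for source, (amount, unit, factor) in ACTIVITIES.items():
    print(f"{source:>20}: {amount:>8,} {unit:<6} -> {amount * factor / 1000:6.1f} t CO2e")
print(f"{'total':>20}: {total_emissions_tonnes(ACTIVITIES):.1f} t CO2e")
```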
WRI tracks estimates of fossil fuel combustion and greenhouse gas emissions, published as biennial reports.
WRI's Science Based Targets initiative (SBTi) was established in 2015 to help companies to set emission reduction targets in line with climate science.
Tools such as the CAIT Climate Data Explorer have enabled journalists and others to examine greenhouse gas data by country and per capita emissions. As of May 2020 CAIT was integrated into a similar platform, Climate Watch.

In 1997 and 2000, WRI published the first comparative study of material flow accounting (MFA), using time series data to comprehensively assess all material inputs and outputs (excluding water) used by industrial economies.

In 2008, the World Resources Institute reported on water quality world-wide, identifying over 400 dead zones due to eutrophication, including areas in the Baltic Sea, the Chesapeake Bay in the United States, and Australia's Great Barrier Reef. Eutrophication results from the discharge of highly concentrated phosphorus in urban wastewater into lakes and rivers, and from agricultural nutrient pollution. WRI advocates for the use of local nature-based solutions (NBS), which tend to be cost-effective, to improve ecosystems, resist water-related climate impacts, and mitigate the effects of warming.
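The per-capita comparisons these tools enable are simply national totals divided by population. A minimal sketch, using invented placeholder figures rather than CAIT or Climate Watch data:

```python
# Per-capita emissions = national emissions / population.
# The figures below are invented placeholders, not CAIT/Climate Watch data.

countries = {
    # country: (emissions in Mt CO2e, population in millions) -- hypothetical
    "Country A": (5_000, 330),
    "Country B": (1_200, 1_400),
    "Country C": (45, 5),
}

# Mt CO2e per million people equals tonnes CO2e per person.
ranked = sorted(countries.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (mt_co2e, pop_millions) in ranked:
    print(f"{name}: {mt_co2e / pop_millions:.1f} t CO2e per person")
```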
WRI publishes the Aqueduct Water Risk Atlas, ranking countries in terms of risk of severe water crises.

WRI is active in studying the world's coral reefs, publishing reports in 1998 and 2011 that tracked damage due to coastal development, overfishing, climate change and rising ocean acidity. A 2022 report examines reefs at a 500 metres (1,600 ft) resolution and analyzes the protection that reefs provide to people, infrastructure and GDP.

Beginning in 2002, the World Resources Institute worked with the Cameroon Forest Initiative to combine disparate sources of data on land use into digital and paper maps that track changes to Cameroon's forests and improve their management. They integrated satellite imagery with information on agricultural terrain, boundaries, protected land, community-owned forests, and authorized land use by commercial logging operations and mining concessions.

In 2014, WRI built upon Matthew C. Hansen's work at the University of Maryland on forest change analysis. WRI partnered with Google Earth Engine to develop Global Forest Watch (GFW), an open-source web application that uses Landsat satellite imagery to map forest changes. Weekly GLAD deforestation alerts and daily fire alerts can be specific to an area of 30 by 30 metres (one Landsat pixel). Global Forest Watch is most frequently used by nongovernmental organizations (NGOs), academic researchers, government employees, and the private sector. It is also used by journalists and indigenous groups, many of whose lands are threatened.
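The Landsat imagery behind these alert products has a pixel size of roughly 30 by 30 metres, so converting a count of alert pixels into an area estimate is a single multiplication. The sketch below shows that conversion with an invented pixel count; it does not use the Global Forest Watch API or its data.

```python
# Forest-change area from a count of 30 m x 30 m Landsat pixels.
# The pixel count is invented for illustration; this is not the Global
# Forest Watch API or its data.

PIXEL_SIDE_M = 30.0
PIXEL_AREA_HA = (PIXEL_SIDE_M ** 2) / 10_000.0   # 900 m^2 = 0.09 ha

def area_hectares(pixel_count: int) -> float:
    return pixel_count * PIXEL_AREA_HA

alert_pixels = 1_250_000   # hypothetical number of deforestation-alert pixels
ha = area_hectares(alert_pixels)
print(f"{alert_pixels:,} alert pixels ~ {ha:,.0f} ha ({ha / 100:,.0f} km^2)")
```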
Applications of Global Forest Watch include rapid detection of and response to fires, detecting illegal logging, assuring supply chain transparency, and assessing endangered tiger habitats.

Working with the Sustainability Consortium, WRI works to identify and quantify major drivers of forest losses. For example, they have identified industrial-scale, internationally traded commodity crops such as beef, soybeans, palm oil, corn, and cotton as a dominant driver of forest loss in South America and Southeast Asia.

As of January 2021, WRI used Global Forest Watch to generate a forest carbon flux map that combined data about emissions and removals of forest-related greenhouse gases. Using a new method for integrating ground, airborne, and satellite data to measure carbon fluctuations in forests, they were able to map forests worldwide at a resolution of 30 metres (98 ft) yearly from 2001 to 2019. They were able to identify the contributions of different forest types, confirming that tropical forests both absorb more carbon than other types of forests and release more as a result of deforestation and degradation. By integrating emissions and removals, the map increases the transparency and accuracy of global carbon estimates and can support more effective forest management decisions.

In addition to mapping carbon emissions from forest loss, WRI is working with scientists at Purdue University, Science-i, and the Global Forest Biodiversity Initiative to develop methods for assessing carbon accumulation rates in forested ecosystems. Such rates are affected by three forest growth components which are difficult to measure: ingrowth, upgrowth and mortality. Being able to assess this more accurately would reduce uncertainty in estimating the impact of global forests as a carbon sink.

WRI has partnered with Google Earth Engine to develop Dynamic World, a near real-time (NRT) application that uses high-resolution satellite images to perform land use land cover (LULC) classification. Dynamic World identifies areas of land and water such as wetlands, forests, trees, crops and urban areas. Released in June 2022, its uses include monitoring ecosystem restoration, assessing protected areas, and detecting land changes due to deforestation and fires.

WRI's LandMark project provides maps and information indicating lands that are collectively held and used by Indigenous peoples and local communities. Data for the Amazon region has shown that rainforest managed by local and Indigenous communities stores carbon dioxide, while rainforest managed by government and private interests is a net source of greenhouse gases.

Other WRI initiatives include The Access Initiative, which ranks countries based on environmental democracy: the ability of citizens to engage in decision-making about natural resources, as measured by transparency, public participation laws, and access to justice.

In 2014, philanthropist Stephen M. Ross established the WRI Ross Center for Sustainable Cities through a major gift. The Center focuses on the development of sustainable cities and improvements in quality of life in developing countries around the world. WRI's flagship report for 2021 was Seven Transformations for More Equitable and Sustainable Cities. It followed Accelerating Building Efficiency: Eight Actions for Urban Leaders (2019).

The Platform for Accelerating the Circular Economy (PACE) is a public-private collaboration platform and project accelerator focused on building the circular economy.
PACE was launched during the 2018 World Economic Forum annual meeting.

The Renewable Energy Buyers Alliance (REBA) is an alliance of large clean energy buyers, energy providers, and service providers that is unlocking the marketplace for all non-residential energy buyers to lead a rapid transition to a cleaner, prosperous, zero-carbon renewable energy future. It has over 200 members, including Google, GM, Facebook, Walmart, Disney and other large companies, and reached 6 GW of capacity in 2018.

WRI's Champions 12.3 coalition promotes a "Target, Measure, Act" strategy with the goal of halving food loss and waste by 2030.
Criticism
A 1990 study by the World Resources Institute was criticized by Anil Agarwal, who had been on the council of the World Resources Institute from 1988 to 1990. Agarwal, who "was among the first to argue that concepts of social equity need to be integrated into international policies aimed at mitigating the harmful effects of human-induced climate change", accused WRI of allocating too much responsibility for global warming to developing countries, and under-acknowledging the impact of U.S. overconsumption on global warming. He called the WRI study an example of environmental colonialism and suggested that a fairer analysis would balance sources of emissions against terrestrial sinks for each nation. His critique sparked considerable debate about the appropriate methodologies for such analysis, and resulted in increased awareness of the issues involved.
References
See also
Open energy system databases § Power Explorer
Rafe Pomerance |
amtrak | The National Railroad Passenger Corporation, doing business as Amtrak (reporting marks AMTK, AMTZ), is the national passenger railroad company of the United States. It operates inter-city rail service in 46 of the 48 contiguous U.S. states and three Canadian provinces. Amtrak is a portmanteau of the words America and trak, the latter itself a sensational spelling of track.
Founded in 1971 as a quasi-public corporation to operate many U.S. passenger rail routes, Amtrak receives a combination of state and federal subsidies but is managed as a for-profit organization. The company's headquarters is located one block west of Union Station in Washington, D.C. Amtrak is headed by a Board of Directors whose members include the Secretary of Transportation and the CEO of Amtrak; the other eight members are nominated to serve terms of five years.
Amtrak's network includes over 500 stations along 21,400 miles (34,000 km) of track. It directly owns approximately 623 miles (1,003 km) of this track and operates an additional 132 miles (212 km) of track; the remaining mileage is over rail lines owned by other railroad companies. Some track sections allow trains to run as fast as 150 mph (240 km/h).
In fiscal year 2022, Amtrak served 22.9 million passengers and had $2.1 billion in revenue, with more than 17,100 employees as of fiscal year 2021. Nearly 87,000 passengers ride more than 300 Amtrak trains daily. Nearly two-thirds of passengers come from the 10 largest metropolitan areas; 83% of passengers travel on routes shorter than 400 miles (645 km).
History
Private passenger service
In 1916, 98% of all commercial intercity travelers in the United States moved by rail, and the remaining 2% moved by inland waterways. Nearly 42 million passengers used railways as primary transportation. Passenger trains were owned and operated by the same privately owned companies that operated freight trains. As the 20th century progressed, patronage declined in the face of competition from buses, air travel, and the car. New streamlined diesel-powered trains such as the Pioneer Zephyr were popular with the traveling public but could not reverse the trend. By 1940, railroads held 67 percent of commercial passenger-miles in the United States. In real terms, passenger-miles had fallen by 40% since 1916, from 42 billion to 25 billion.
Traffic surged during World War II, aided by troop movements and gasoline rationing. The railroads' market share surged to 74% in 1945, with a massive 94 billion passenger-miles. After the war, railroads rejuvenated their overworked and neglected passenger fleets with fast and luxurious streamliners. These new trains brought only temporary relief to the overall decline. Even as postwar travel exploded, passenger rail's share of the overall market fell to 46% by 1950, and then 32% by 1957. The railroads had lost money on passenger service since the Great Depression, but deficits reached $723 million in 1957. For many railroads, these losses threatened financial viability.
The causes of this decline were heavily debated. The National Highway System and airports, both funded by the government, competed directly with the railroads, which paid for their own infrastructure. American car culture was also on the rise in the post-World War II years. Progressive Era rate regulation limited the railroads' ability to turn a profit. Railroads also faced antiquated work rules and inflexible relationships with trade unions. To take one example, workers continued to receive a day's pay for 100-to-150-mile (160 to 240 km) workdays; streamliners covered that distance in two hours.
Matters approached a crisis in the 1960s. Passenger service route-miles fell from 107,000 miles (172,000 km) in 1958 to 49,000 miles (79,000 km) in 1970, the last full year of private operation. The diversion of most United States Post Office Department mail from passenger trains to trucks, airplanes, and freight trains in late 1967 deprived those trains of badly needed revenue. In direct response, the Atchison, Topeka and Santa Fe Railway filed to discontinue 33 of its remaining 39 trains, ending almost all passenger service on one of the largest railroads in the country. The equipment the railroads had ordered after World War II was now 20 years old, worn out, and in need of replacement.
Formation
As passenger service declined, various proposals were brought forward to rescue it. The 1961 Doyle Report proposed that the private railroads pool their services into a single body. Similar proposals were made in 1965 and 1968 but failed to attract support. The federal government passed the High Speed Ground Transportation Act of 1965 to fund pilot programs in the Northeast Corridor, but this did nothing to address passenger deficits. In late 1969, multiple proposals emerged in the United States Congress, including equipment subsidies, route subsidies, and, lastly, a "quasi-public corporation" to take over the operation of intercity passenger trains. Matters were brought to a head on June 21, 1970, when the Penn Central, the largest railroad in the Northeastern United States and teetering on bankruptcy, filed to discontinue 34 of its passenger trains.
In October 1970, Congress passed, and President Richard Nixon signed into law, the Rail Passenger Service Act. Proponents of the bill, led by the National Association of Railroad Passengers (NARP), sought government funding to ensure the continuation of passenger trains. They conceived the National Railroad Passenger Corporation (NRPC), a quasi-public corporation that would be managed as a for-profit organization, but which would receive taxpayer funding and assume operation of intercity passenger trains.
There were several key provisions:
Any railroad operating intercity passenger service could contract with the NRPC, thereby joining the national system.
The United States federal government, through the Secretary of Transportation, would own all of the NRPC's issued and outstanding preferred stock.
Participating railroads bought into the NRPC using a formula based on their recent intercity passenger losses. The purchase price could be satisfied either by cash or rolling stock; in exchange, the railroads received NRPC common stock.
Any participating railroad was freed of the obligation to operate intercity passenger service after May 1, 1971, except for those services chosen by the Department of Transportation (DOT) as part of a "basic system" of service and paid for by NRPC using its federal funds.
Railroads that chose not to join the NRPC system were required to continue operating their existing passenger service until 1975, at which time they could pursue the customary ICC approval process for any discontinuance or alteration to the service.
Of the 26 railroads still offering intercity passenger service in 1970, only six declined to join the NRPC.
The original working brand name for the NRPC was Railpax, but less than two weeks before operations began, the official marketing name was changed to Amtrak.
Nearly everyone involved expected the experiment to be short-lived. The Nixon administration and many Washington insiders viewed the NRPC as a politically expedient way for the President and Congress to give passenger trains a "last hurrah" as demanded by the public. They expected the NRPC to quietly disappear as public interest waned. After Fortune magazine exposed the mismanagement in 1974, Louis W. Menk, chairman of the Burlington Northern Railroad, remarked that the story was undermining the scheme to dismantle Amtrak. Proponents also hoped that government intervention would be brief and that Amtrak would soon be able to support itself. Neither view proved correct: popular support allowed Amtrak to continue operating longer than critics imagined, while its financial results made a return of passenger service to private railroad operation infeasible.
1970s: The Rainbow Era
Amtrak began operations on May 1, 1971. Amtrak received no rail tracks or rights-of-way at its inception. All Amtrak's routes were continuations of prior service, although Amtrak pruned about half the passenger rail network. Of the 366 train routes that operated previously, Amtrak continued only 184. Several major corridors became freight-only, including the ex-New York Central Railroad's Water Level Route from New York to Ohio and the Grand Trunk Western Railroad's Chicago to Detroit route. The reduced passenger train schedules created confusion among staff. At some stations, Amtrak service was available only late at night or early in the morning, prompting complaints from passengers. Disputes with freight railroads over track usage caused some services to be rerouted, temporarily cancelled, or replaced with buses. On the other hand, the creation of the Los Angeles–Seattle Coast Starlight from three formerly separate train routes was an immediate success, resulting in an increase to daily service by 1973.
Needing to operate only half the train routes that had run previously, Amtrak leased around 1,200 of the best passenger cars from the 3,000 that the private railroads owned. All were air-conditioned, and 90% were easy-to-maintain stainless steel. When Amtrak took over, passenger cars and locomotives initially retained the paint schemes and logos of their former owners, which resulted in Amtrak running trains with mismatched colors – the "Rainbow Era". In mid-1971, Amtrak began purchasing some of the equipment it had leased, including 286 EMD E and F unit diesel locomotives, 30 GG1 electric locomotives and 1,290 passenger cars. By 1975, the official Amtrak color scheme had been applied to most Amtrak equipment, and newly purchased locomotives and rolling stock began to appear.
Amtrak inherited problems with train stations (most notably deferred maintenance) and redundant facilities from the competing railroads that once served the same communities. Chicago is a prime example; on the day prior to Amtrak's inception, intercity passenger trains used five different Chicago terminals: LaSalle, Dearborn, North Western Station, Central, and Union. The trains at LaSalle remained there, as their operator, the Rock Island, could not afford to opt into Amtrak. Of all the trains serving Dearborn Station, Amtrak retained only a pair of Santa Fe trains, which relocated to Union Station beginning with the first Amtrak departures on May 1, 1971. Dearborn Station closed after the last pre-Amtrak trains on the Santa Fe arrived in Chicago on May 2. None of the intercity trains that had served North Western Station became part of the Amtrak system, and that terminal became commuter-only after May 1. The trains serving Central Station continued to use that station until an alternate routing was adopted in March 1972. In New York City, Amtrak had to maintain two stations (Penn and Grand Central) due to the lack of track connections to bring trains from upstate New York into Penn Station, a problem that was rectified once the Empire Connection was built in 1991. The Amtrak Standard Stations Program was launched in 1978 and proposed a standardized station design across the system, with the aim of reducing costs, speeding construction, and improving the corporate image. However, the cash-strapped railroad ultimately built relatively few of these standard stations.
Amtrak soon had the opportunity to acquire rights-of-way. Following the bankruptcy of several northeastern railroads in the early 1970s, including Penn Central, which owned and operated the Northeast Corridor (NEC), Congress passed the Railroad Revitalization and Regulatory Reform Act of 1976. A large part of the legislation was directed to the creation of Conrail, but the law also enabled the transfer of the portions of the NEC not already owned by state authorities to Amtrak. Amtrak acquired the majority of the NEC on April 1, 1976. (The portion in Massachusetts is owned by the Commonwealth and managed by Amtrak. The route from New Haven to New Rochelle is owned by New York's Metropolitan Transportation Authority and the Connecticut Department of Transportation as the New Haven Line.) This mainline became Amtrak's "jewel" asset and helped the railroad generate revenue. While the NEC's ridership and revenues were higher than those of any other segment of the system, the cost of operating and maintaining the corridor proved to be overwhelming. As a result, Amtrak's federal subsidy was increased dramatically. In subsequent years, other short route segments not needed for freight operations were transferred to Amtrak.
In its first decade, Amtrak fell far short of financial independence, as it still does today, but it did find modest success rebuilding its business. Outside factors discouraged competing transport, such as fuel shortages, which increased the cost of automobile and airline travel, and strikes, which disrupted airline operations. Investments in Amtrak's track, equipment, and information systems also made Amtrak more relevant to America's transportation needs. Amtrak's ridership increased from 16.6 million in 1972 to 21 million in 1981.
In February 1978, Amtrak moved its headquarters to 400 North Capitol Street NW in Washington, D.C.
1980s and 1990s: The Building Era
In 1982, former Secretary of the Navy and retired Southern Railway head William Graham Claytor Jr. came out of retirement to lead Amtrak. During his time at Southern, Claytor had been a vocal critic of Amtrak's prior managers, who all came from non-railroading backgrounds. Transportation Secretary Drew Lewis cited this criticism as a reason why the Democrat Claytor was acceptable to the Reagan White House. Despite frequent clashes with the Reagan administration over funding, Claytor enjoyed a good relationship with Lewis, with John H. Riley, the head of the Federal Railroad Administration (FRA), and with members of Congress. Limited funding led Claytor to use short-term debt to fund operations.
Building on mechanical developments in the 1970s, high-speed Washington–New York Metroliner Service was improved with new equipment and faster schedules. Travel time between New York and Washington, D.C. was reduced to under 3 hours thanks to system improvements and limited-stop service. This improvement was cited as a reason why Amtrak grew its share of intercity trips between the cities along the corridor. Elsewhere in the country, demand for passenger rail service resulted in the creation of five new state-supported routes in California, Illinois, Missouri, Oregon and Pennsylvania, for a total of 15 state-supported routes.
Amtrak added two trains in 1983: the California Zephyr between Oakland and Chicago via Denver, and a revived Auto Train, a unique service that carries both passengers and their vehicles. Amtrak advertised the Auto Train as a way to avoid traffic along I-95; it runs between Lorton, Virginia (near Washington, D.C.) and Sanford, Florida (near Orlando) on the Silver Star alignment.
In the 1980s and 1990s, stations in Baltimore, Chicago, and Washington, D.C. received major rehabilitation, and the Empire Connection tunnel opened in 1991, allowing Amtrak to consolidate all New York services at Penn Station. Despite the improvements, Amtrak's ridership stagnated at roughly 20 million passengers per year amid uncertain government aid from 1981 to about 2000.
In the early 1990s, Amtrak tested several different high-speed trains from Europe on the Northeast Corridor. An X 2000 train was leased from Sweden for test runs from October 1992 to January 1993, followed by revenue service between Washington, D.C. and New York City from February to May and August to September 1993. Siemens showed the ICE 1 train from Germany, organizing the ICE Train North America Tour, which began operating on the Northeast Corridor on July 3, 1993.
In 1993, Thomas Downs succeeded Claytor as Amtrak's fifth president. The stated goal remained "operational self-sufficiency". By this time, however, Amtrak had a large overhang of debt from years of underfunding, and in the mid-1990s it suffered through a serious cash crunch. Under Downs, Congress included a provision in the Taxpayer Relief Act of 1997 that resulted in Amtrak receiving a $2.3 billion tax refund, which resolved its cash crisis. However, Congress also instituted a "glide path" to financial self-sufficiency, excluding railroad retirement tax act payments.
George Warrington became the sixth president in 1998, with a mandate to make Amtrak financially self-sufficient. Under Warrington, the company tried to expand into express freight shipping, placing Amtrak in competition with the "host" freight railroads and the trucking industry.
On March 9, 1999, Amtrak unveiled its plan for the Acela Express, a high-speed train on the Northeast Corridor between Washington, D.C. and Boston. Several changes were made to the corridor to make it suitable for higher-speed electric trains. The Northend Electrification Project extended existing electrification from New Haven, Connecticut, to Boston to complete the overhead power supply along the 454-mile (731 km) route, and several grade crossings were improved or removed.
2000s: Growth in the 21st century
Ridership increased during the first decade of the 21st century after the implementation of capital improvements in the NEC and rises in automobile fuel costs. The inauguration of the high-speed Acela in late 2000 generated considerable publicity and led to major ridership gains. However, through the late 1990s and very early 21st century, Amtrak could not add sufficient express freight revenue or cut sufficient other expenditures to break even. By 2002, it was clear that Amtrak could not achieve self-sufficiency, but Congress continued to authorize funding and released Amtrak from the requirement.
In early 2002, David L. Gunn replaced Warrington as seventh president. In a departure from his predecessors' promises to make Amtrak self-sufficient in the short term, Gunn argued that no form of passenger transportation in the United States is self-sufficient as the economy is currently structured. Highways, airports, and air traffic control all require large government expenditures to build and operate, funded through the Highway Trust Fund and the Aviation Trust Fund, which are paid for by user fees and highway fuel and road taxes, as well as through general taxation from the General Fund. Gunn dropped most freight express business and worked to eliminate deferred maintenance.
A plan by the Bush administration "to privatize parts of the national passenger rail system and spin off other parts to partial state ownership" provoked disagreement within Amtrak's board of directors. Late in 2005, Gunn was fired. Gunn's replacement, Alexander Kummant (2006–08), was committed to operating a national rail network and, like Gunn, opposed the notion of putting the Northeast Corridor under separate ownership. He said that shedding the system's long-distance routes would amount to selling national assets that are on par with national parks, and that Amtrak's abandonment of these routes would be irreversible. In late 2006, Amtrak unsuccessfully sought annual congressional funding of $1 billion for ten years. In early 2007, Amtrak employed 20,000 people in 46 states and served 25 million passengers a year, its highest ridership since its founding in 1971. Politico noted a key problem: "the rail system chronically operates in the red. A pattern has emerged: Congress overrides cutbacks demanded by the White House and appropriates enough funds to keep Amtrak from plunging into insolvency. But, Amtrak advocates say, that is not enough to fix the system's woes."
Joseph H. Boardman replaced Kummant as president and CEO in late 2008.
In 2011, Amtrak announced the Gateway Program, a plan to improve and expand high-speed rail capacity between Penn Station in New York City and Newark, New Jersey, with new tunnels under the Hudson River and a double-tracked line; it was initially estimated to cost $13.5 billion (equal to $18 billion in 2022).
From May 2011 to May 2012, Amtrak celebrated its 40th anniversary with festivities across the country that started on National Train Day (May 7, 2011). A commemorative book entitled Amtrak: An American Story was published, a documentary was created, six locomotives were painted in Amtrak's four prior paint schemes, and an Exhibit Train toured the country, visiting 45 communities and welcoming more than 85,000 visitors.
After years of almost revolving-door CEOs at Amtrak, in December 2013 Boardman was named "Railroader of the Year" by Railway Age magazine, which noted that, with over five years in the job, he was the second-longest-serving head of Amtrak since it was formed more than 40 years earlier. On December 9, 2015, Boardman announced in a letter to employees that he would be leaving Amtrak in September 2016, having advised the Amtrak Board of Directors of his decision the previous week. On August 19, 2016, the Amtrak Board of Directors named former Norfolk Southern Railway President & CEO Charles "Wick" Moorman as Boardman's successor, effective September 1, 2016. During his term, Moorman took no salary and said that he saw his role as one of a "transitional CEO" who would reorganize Amtrak before turning it over to new leadership.
On November 17, 2016, the Gateway Program Development Corporation (GDC) was formed for the purpose of overseeing and effectuating the rail infrastructure improvements known as the Gateway Program. GDC is a partnership of the States of New York and New Jersey and Amtrak. The Gateway Program includes the Hudson Tunnel Project, to build a new tunnel under the Hudson River and rehabilitate the existing century-old tunnel, and the Portal North Bridge, to replace a century-old moveable bridge with a modern structure that is less prone to failure. Later projects of the Gateway Program, including the expansion of track and platforms at Penn Station New York, construction of the Bergen Loop and other improvements, will roughly double capacity for Amtrak and NJ Transit trains in the busiest, most complex section of the Northeast Corridor.
In June 2017, it was announced that former Delta and Northwest Airlines CEO Richard Anderson would become Amtrak's next President & CEO. Anderson began the job on July 12, assuming the title of President immediately and serving alongside Moorman as "co-CEOs" until the end of the year. On April 15, 2020, Atlas Air Chairman, President and CEO William Flynn was named Amtrak President and CEO. In addition to Atlas Air, Flynn had held senior roles at CSX Transportation, SeaLand Services and GeoLogistics Corp. Anderson remained with Amtrak as a senior advisor until December 2020.
As Amtrak approached profitability in 2020, the company undertook planning to expand and create new intermediate-distance corridors across the country, including several new services in Ohio, Tennessee, Colorado, and Minnesota, among other states.
During the COVID-19 pandemic, Amtrak continued operating as an essential service. It started requiring face coverings the week of May 17, 2020, and limited sales to 50% of capacity.
Most long-distance routes were reduced to three weekly round trips in October 2020.
In March 2021, following President Joe Biden's American Jobs Plan announcement, Amtrak CEO Bill Flynn outlined a proposal called Amtrak Connects US that would expand state-supported intercity corridors with an infusion of upfront capital assistance. This would expand service to cities including Las Vegas, Phoenix, Baton Rouge, Nashville, Chattanooga, Louisville, Columbus (Ohio), Wilmington (North Carolina), Cheyenne, Montgomery, Concord, and Scranton. Also in March 2021, Amtrak announced plans to return 12 of its long-distance routes to daily schedules later in the spring, and most of these routes were restored to daily service in late May 2021. However, a resurgence of the virus driven by the Omicron variant led Amtrak to modify or suspend many of these routes again from January to March 2022.
Operations
Routes
Amtrak is required by law to operate a national route system. Amtrak has a presence in 46 of the 48 contiguous states, as well as the District of Columbia (with only Thruway connecting services in Wyoming and no services in South Dakota). Amtrak services fall into three groups: short-haul service on the Northeast Corridor, state-supported short-haul service outside the Northeast Corridor, and medium- and long-haul service known within Amtrak as the National Network. Amtrak receives federal funding for the vast majority of its operations, including the central spine of the Northeast Corridor as well as its National Network routes. In addition to the federally funded routes, Amtrak partners with transportation agencies in 18 states to operate other short- and medium-haul routes outside of the Northeast Corridor, some of which connect to it or are extensions from it. In addition to its inter-city services, Amtrak also operates commuter services under contract for three public agencies: the MARC Penn Line in Maryland, Shore Line East in Connecticut, and Metrolink in Southern California.
Service on the Northeast Corridor (NEC), between Boston and Washington, D.C., as well as between Philadelphia and Harrisburg, is powered by overhead lines; for the rest of the system, diesel-fueled locomotives are used. Routes vary widely in the frequency of service, from three-days-a-week trains on the Sunset Limited to several trains per hour on the Northeast Corridor. For areas not served by trains, Amtrak Thruway routes provide guaranteed connections to trains via buses, vans, ferries and other modes.
The most popular and heavily used services are those running on the NEC, including the Acela and Northeast Regional. The NEC runs between Boston and Washington, D.C. via New York City and Philadelphia, and some services continue into Virginia. The NEC services accounted for 4.4 million of Amtrak's 12.2 million passengers in fiscal year 2021. Outside the NEC the most popular services are the short-haul corridors in California (the Pacific Surfliner, Capitol Corridor, and San Joaquin), which are supplemented by an extensive network of connecting buses. Together the California corridor trains accounted for a combined 2.35 million passengers in fiscal year 2021. Other popular routes include the Empire Service between New York City and Niagara Falls, via Albany and Buffalo, which carried 613,200 passengers in fiscal year 2021, and the Keystone Service between New York City and Harrisburg via Philadelphia, which carried 394,300 passengers that same year.
Four of the six busiest stations by boardings are on the NEC: New York Penn Station (first), Washington Union Station (second), Philadelphia 30th Street Station (third), and Boston South Station (fifth). The other two are Chicago Union Station (fourth) and Los Angeles Union Station (sixth).
On-time performance
On-time performance is calculated differently for airlines than for Amtrak. A plane is considered on time if it arrives within 15 minutes of the schedule. Amtrak uses a sliding scale: trips under 250 miles (400 km) are considered late if they arrive more than 10 minutes behind schedule, with the allowance rising to 30 minutes for trips over 551 miles (887 km) in length.
Outside the Northeast Corridor and stretches of track in Southern California and Michigan, most Amtrak trains run on tracks owned and operated by privately owned freight railroads. BNSF is the largest host to Amtrak routes, with 6.3 million train-miles. Freight rail operators are required under federal law to give dispatching preference to Amtrak trains. However, Amtrak has accused freight railroads of violating or skirting these regulations, resulting in passenger trains waiting for freight traffic to clear the track.
The railroads' dispatching practices were investigated in 2008, resulting in stricter laws about train priority. Subsequently, Amtrak's overall on-time performance went up from 74.7% in fiscal 2008 to 84.7% in 2009, with long-distance trains and others outside the NEC seeing the greatest benefit. The Missouri River Runner jumped from 11% to 95%, becoming one of Amtrak's best performers. The Texas Eagle went from 22.4% to 96.7%, and the California Zephyr, with a 5% on-time record in 2008, went up to 78.3%. However, this improved performance coincided with a general economic downturn, which resulted in the lowest freight-rail traffic volumes since at least 1988, meaning there was less freight traffic to impede passenger trains.
In 2018, Amtrak began issuing report cards grading each host railroad, much like a student report card, based on the railroad's impact on on-time performance. The first report card, issued in March 2018, included one A (given to Canadian Pacific) and two Fs (given to Canadian National and Norfolk Southern). Amtrak's 2020 host report card gives Canadian Pacific and Canadian National an A, BNSF and CSX a B, Union Pacific a C+, and Norfolk Southern a D−.
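The sliding scale can be illustrated with a short sketch. Only the two endpoints are given above (a 10-minute allowance for trips under 250 miles and a 30-minute allowance for trips over 551 miles); the intermediate tiers in this Python example are assumptions added purely for illustration, not figures from Amtrak.

```python
# Minimal sketch of a sliding-scale on-time rule, assuming intermediate tiers.
# Only the 10-minute (under 250 mi) and 30-minute (over 551 mi) allowances are
# stated in the text above; the 15/20/25-minute tiers are illustrative guesses.

def lateness_allowance_minutes(trip_miles: float) -> int:
    """Return the assumed delay allowance before a trip is counted as late."""
    if trip_miles < 250:
        return 10   # from the text
    elif trip_miles <= 350:
        return 15   # assumed tier
    elif trip_miles <= 450:
        return 20   # assumed tier
    elif trip_miles <= 550:
        return 25   # assumed tier
    return 30       # from the text (trips over 551 miles)

def is_on_time(trip_miles: float, delay_minutes: float) -> bool:
    return delay_minutes <= lateness_allowance_minutes(trip_miles)

print(is_on_time(100, 12))  # False: a short trip 12 minutes late exceeds 10
print(is_on_time(800, 25))  # True: a long trip may be up to 30 minutes late
```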
Ridership
Amtrak carried 15.8 million passengers in 1972, its first full year of operation. Ridership grew over the following decades, reaching a record 32 million passengers in fiscal year 2019, more than double the 1972 total. For the fiscal year ending on September 30, 2020, Amtrak reported 16.8 million passengers, with the decline resulting from the effects of the COVID-19 pandemic. Fiscal year 2021 saw ridership decrease further, with 12.2 million passengers reported. Fiscal year 2022 saw an increase to 22.9 million passengers, though this was still lower than pre-pandemic levels.
Guest Rewards
Amtrak's loyalty program, Guest Rewards, is similar to the frequent-flyer programs of many airlines. Guest Rewards members accumulate points by riding Amtrak and through other activities, and can redeem these points for free Amtrak tickets and other rewards.
Rail passes
Amtrak offers rail passes, which can be cheaper than air travel for long distances and allow side trips without extra charge.
Lines
Along the NEC and in several other areas, Amtrak owns 730 miles (1,170 km) of track, including 17 tunnels consisting of 29.7 miles (47.8 km) of track and 1,186 bridges (including the famous Hell Gate Bridge) consisting of 42.5 miles (68.4 km) of track. In several places, primarily in New England, Amtrak leases tracks, providing track maintenance and controlling train movements. Most often, these tracks are leased from state, regional, or local governments. The lines are further divided into services. Amtrak owns and operates the following lines:
Northeast Corridor: the Northeast Corridor between Washington, D.C., and Boston via Baltimore, Philadelphia, Newark, New York and Providence is largely owned by Amtrak (363 of 457 miles), working cooperatively with several state and regional commuter agencies. Between New Haven, Connecticut, and New Rochelle, New York, Northeast Corridor trains travel on the Metro-North Railroad's New Haven Line, which is owned and operated by the Connecticut Department of Transportation and the Metropolitan Transportation Authority.
Keystone Corridor: Amtrak owns the 104.2-mile line from Philadelphia to Harrisburg, Pennsylvania. As a result of an investment partnership with the Commonwealth of Pennsylvania, signal and track improvements were completed in October 2006 that allow all-electric service with a top speed of 110 miles per hour (180 km/h) to run along the corridor.
Empire Corridor: Amtrak owns the 11 miles (18 km) between New York Penn Station and Spuyten Duyvil, New York. In 2012, Amtrak leased the 94 miles (151 km) between Poughkeepsie, New York, and Schenectady, New York, from owner CSX. In addition, Amtrak owns the tracks across the Whirlpool Rapids Bridge and short approach sections near it.
Michigan Line: Amtrak acquired the 98-mile (158 km) Porter, Indiana, to Kalamazoo, Michigan, section of the former Michigan Central main line from Conrail in 1976.
New Haven–Springfield Line: Amtrak purchased the 62 miles (100 km) between New Haven and Springfield from Penn Central in 1976.
Post Road Branch: 12.42 miles (19.99 km), Castleton-on-Hudson to Rensselaer, New York
In addition to these lines, Amtrak owns station and yard tracks in Chicago, Los Angeles, New Orleans, New York City, Oakland (Kirkham Street Yard), Orlando, Portland, Oregon, Seattle, Philadelphia, and Washington, D.C. Amtrak leases station and yard tracks in Hialeah, near Miami, Florida, from the State of Florida.
Amtrak owns New York Penn Station, Philadelphia 30th Street Station, Baltimore Penn Station and Providence Station. It also owns Chicago Union Station, formerly through a wholly owned subsidiary, the Chicago Union Station Company, which was absorbed by Amtrak in 2017. Through the Washington Terminal Company, in which it owns a 99.7 percent interest, it owns the rail infrastructure around Washington Union Station. It holds a 99% interest in 30th Street Limited, a partnership responsible for redeveloping the area in and around 30th Street Station. Amtrak also owns Passenger Railroad Insurance.
Service lines
Amtrak organizes its business into six "service lines", which are treated like divisions at most companies. There are three operating service lines: Northeast Corridor, which operates Amtrak's high-speed Acela and Northeast Regional trains; State Supported, which provides service on corridor routes of less than 750 miles through cost-sharing agreements with state governments; and Long Distance, which operates routes over 750 miles and receives financial support from the federal government.
Additionally, there are three service lines involved in activities other than operating Amtrak trains: Ancillary, which includes operating commuter trains under contract, providing Amtrak Thruway connecting services, operating charter trains, and hauling private railcars; Real Estate & Commercial, which manages property owned by Amtrak, including leasing space to other businesses inside stations; and Infrastructure Access/Reimbursable, which charges other railroads for access to Amtrak-owned tracks and performs work that can be reimbursed by other railroads or state governments. Net revenue generated by these service lines is used to fund Amtrak's other operations.
Rolling stock
Amtrak owns 2,142 railway cars and 425 locomotives for revenue runs and service, collectively called rolling stock. Notable examples include the GE Genesis and Siemens Charger diesel locomotives, the Siemens ACS-64 electric locomotive, the Amfleet series of single-level passenger cars, and the Superliner series of double-decker passenger cars.
The railroad is currently working to replace its fleet, spending $2.4 billion on 28 Avelia Liberty trainsets for its flagship Acela service and $7.3 billion on 65 Airo trainsets for other Northeast Corridor services. Additionally, California, North Carolina, and a group of Midwestern states purchased Siemens Venture trainsets for use on routes operated by Amtrak in their states, which started entering service in 2022. In 2023, Amtrak announced that it had issued a request for proposals to replace hundreds of railcars used on long-distance routes.
On-board services
Classes of service
Amtrak offers four classes of service: First Class, Sleeper Service, Business Class, and Coach Class:
First Class: First Class service is only offered on the Acela. Seats are larger than those of Business Class and come in a variety of seating styles (single, facing singles with table, double, facing doubles with table and wheelchair accessible). First Class occupies a separate car from Business Class at the end of the train (to reduce the number of passengers walking in the aisles). A car attendant provides passengers with hot towel service, a complimentary meal and alcoholic beverages. First Class passengers have access to lounges located at most larger stations.
Sleeper Service: Private room accommodations on long-distance trains, including roomettes, bedrooms, bedroom suites, accessible bedrooms, and, on some trains, family bedrooms. Included in the price of a room are attendant service and on most routes, full hot meals. At night, attendants convert rooms into sleeping areas with fold-down beds and linens. Shower facilities with towels and bar soap are available. Complimentary juice, coffee and bottled water are included as well. Sleeper car passengers have access to all passenger facilities aboard the train. Sleeper Service passengers have access to lounges located at select stations.
Business Class: Business Class seating is offered on the Acela, Northeast Regional, many short-haul corridor trains and some long-distance trains. It is the standard class of service on the Acela. On all other trains where it is offered, Business Class is located in a dedicated car or section of the train. While the specific features vary by route, many include extra legroom and complimentary non-alcoholic drinks. Seats in business class recline and feature a fold-down tray table, footrest, individual reading light, and power outlet. Passengers have access to some lounges, but busier locations may exclude Business Class customers.
Coach Class: Coach Class is the standard class of service on all Amtrak trains except the Acela. Seats in coach recline and feature a fold-down tray table, footrest, individual reading light, and power outlet. Coach cars on long-distance trains are configured with fewer seats per car so that passengers have additional legroom and seats which are equipped with leg rests. Some corridor and short-distance trains have one coach car designated as a "quiet car" where loud conversation, phone calls, and sound played from electronic devices are not permitted.
Wi-Fi and electronic services
Amtrak first offered free Wi-Fi service to passengers aboard the Downeaster in 2008, the Acela and the Northeast Regional trains on the NEC in 2010, and the Amtrak Cascades in 2011. In February 2014, Amtrak rolled out Wi-Fi on corridor trains out of Chicago. When all the Midwest cars offer the AmtrakConnect service, about 85% of all Amtrak passengers nationwide will have Wi-Fi access. As of 2014, most Amtrak passengers have access to free Wi-Fi. The service has developed a reputation for being unreliable and slow due to its cellular network connection; on some routes it is usually unusable, either freezing on the login page or, if it manages to log in, failing to provide any internet bandwidth.
Wi-Fi is typically available on routes running east of the Mississippi River and along the U.S. coastlines. West–east routes such as the Sunset Limited, Southwest Chief and Texas Eagle notably lack Wi-Fi, whether through Amtrak or a private hotspot, as cell towers are uncommon along rail paths through desert and mountain wilderness.
Amtrak launched an e-ticketing system on the Downeaster in November 2011 and rolled it out nationwide on July 30, 2012. Amtrak officials said the system gives "more accurate knowledge in realtime of who is on the train which greatly improves the safety and security of passengers; en route reporting of onboard equipment problems to mechanical crews which may result in faster resolution of the issue; and more efficient financial reporting".
Baggage and cargo services
Amtrak allows carry-on baggage on all routes; services with baggage cars allow checked baggage at selected stations. With the passage of the Wicker Amendment in 2010, passengers are allowed to put lawfully owned, unloaded firearms in checked Amtrak baggage, reversing a decade-long ban on such carriage.
The Amtrak Express cargo service provides small-package and less-than-truckload shipping between most Amtrak stations that handle checked baggage (over 100 cities). Cargo travels alongside checked luggage in baggage cars. Service and hours vary by station, limited by available equipment and staffing. Nearly all stations with checked baggage service can handle small packages, while large stations with forklifts can handle palletized shipments. Amtrak Express also offers station-to-station shipment of human remains to many cities.
Amtrak is popular among bicycle touring enthusiasts due to the ease of riding with a bike. In contrast to airlines, which require riders to dismantle their bicycles and place them in specialized bags, most Amtrak trains have onboard bike racks in either the coaches or checked baggage car. Bicycle reservations are required on most routes and cost up to $20.
Labor issues
In the modern era, Amtrak faces a number of important labor issues. As of 2023, the average Amtrak employee's annual salary was $121,000. In the area of pension funding, because of limitations originally imposed by Congress, most Amtrak workers have traditionally been classified as "railroad employees", and contributions to the Railroad Retirement system have been made for those employees. However, because the size of the contributions is determined on an industry-wide basis rather than with reference to the employer for whom the employees work, some critics, such as the National Association of Railroad Passengers, maintain that Amtrak is subsidizing freight railroad pensions by as much as US$150 million per year.
In recent times, efforts at reforming passenger rail have addressed labor issues. In 1997, Congress released Amtrak from a prohibition on contracting for labor outside the corporation (and outside its unions), opening the door to privatization. Since that time, many of Amtrak's employees have been working without a contract. The most recent contract, signed in 1999, was mainly retroactive.
Because of the fragmentation of railroad unions, Amtrak had 14 separate unions to negotiate with, covering 24 separate contracts as of 2009. This makes it difficult to make substantial changes, in contrast to a situation where one union negotiates with one employer. Former Amtrak president Kummant followed a cooperative posture with Amtrak's trade unions, ruling out plans to privatize large parts of Amtrak's unionized workforce.
Environmental impacts
Amtrak's environmental impact
Per passenger mile, Amtrak is 30–40 percent more energy-efficient than commercial airlines and automobiles overall, though the exact figures for particular routes depend on load factor along with other variables. The electrified trains in the NEC are considerably more efficient than Amtrak's diesels and can feed energy captured from regenerative braking back to the electrical grid. Passenger rail is also very competitive with other modes in terms of safety per mile.
In 2005, Amtrak's carbon dioxide equivalent emissions were 0.411 lbs/mi (0.116 kg per km). For comparison, this is similar to a car carrying two people, about twice the UK rail average (where more of the system is electrified), about four times the average US motorcoach, and about eight times a Finnish electric intercity train or a fully loaded fifty-seat coach. It is, however, about two thirds of the raw CO2-equivalent emissions of a long-distance domestic flight.
Amtrak operates over thirty passenger train routes throughout the U.S. and Canada. According to a 2009 UK study, rail transport produces significantly lower greenhouse gas emissions per unit distance than both road transport and domestic air transport in the UK. Amtrak operates diesel, electric, and dual-mode (diesel and electric) locomotives; diesel-powered engines produce more greenhouse gas emissions during operation than electric trains.
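As a quick sanity check, the unit conversion quoted above can be reproduced with a few lines of arithmetic; the constants and variable names below are ordinary conversion factors, not figures from Amtrak.

```python
# Verify that 0.411 lb/mi corresponds to roughly 0.116 kg/km.
LB_TO_KG = 0.45359237   # kilograms per pound
MI_TO_KM = 1.609344     # kilometres per mile

emissions_lb_per_mi = 0.411
emissions_kg_per_km = emissions_lb_per_mi * LB_TO_KG / MI_TO_KM

print(round(emissions_kg_per_km, 3))  # 0.116, matching the figure in the text
```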
As for localized pollution directly from Amtrak operations, its diesel trains cause more regional air pollution, impacting the ecosystems around the sites of operation. More stops along train routes can also lead to higher greenhouse gas emissions. Amtrak rail facilities located in Delaware were cited as the state's largest source of polychlorinated biphenyl (PCB) contamination of the Delaware River; PCBs build up in the tissue of animals and are human carcinogens.
Environmental impact on Amtrak
Amtrak railways and surrounding infrastructure are susceptible to degradation by natural causes over time. Railways experience water damage from climate change-driven increases in rainfall in wet areas, and rail buckling caused by hotter and drier seasons in naturally dry areas.
In September 2021, the remnants of Hurricane Ida flooded the Amtrak Northeast Corridor running from Boston to Washington, D.C. and caused it to shut down for an entire day. In February 2023, heavy snowfall and debris on tracks caused major disruptions ranging from delays to cancellations.
Rising summertime temperatures are causing an increase in railway buckling. A study of railways in the southeast United Kingdom found that when summertime temperatures become extreme due to climate change, tracks buckle from the outward force of the expanding metal combined with the weight of train traffic. This leads to speed restrictions being imposed above certain temperature thresholds, slowing travel and reducing the number of trains that can run in a day. The study found that in 2004, 30,000 delay minutes were attributed to heat, at a total heat-related delay cost of over 1.7 million U.S. dollars.
Public funding
Amtrak receives annual appropriations from federal and state governments to supplement operating and capital programs.
Funding history
1970s to 1990s
Amtrak commenced operations in 1971 with $40 million in direct federal aid, $100 million in federally insured loans, and a somewhat larger private contribution. Officials expected that Amtrak would break even by 1974, but those expectations proved unrealistic, and annual direct federal aid rose, reaching $1.25 billion in 1981. During the Reagan administration, appropriations were halved, and by 1986 federal support fell to a decade low of $601 million, almost none of which was for capital programs. In the late 1980s and early 1990s, Congress continued the trend of reductions even while Amtrak's expenses held steady or rose. Amtrak was forced to borrow to meet short-term operating needs, and by 1995 it was on the brink of a cash crisis and unable to continue servicing its debts. In response, in 1997 Congress authorized $5.2 billion for Amtrak over the next five years – largely to complete the Acela capital project – on the condition that Amtrak achieve operating self-sufficiency by 2003 or face liquidation. While Amtrak made financial improvements during this period, it did not achieve self-sufficiency.
2000s
In 2004, a stalemate in federal support of Amtrak forced cutbacks in services and routes as well as the resumption of deferred maintenance. In fiscal 2004 and 2005, Congress appropriated about $1.2 billion for Amtrak, $300 million more than President George W. Bush had requested. However, the company's board requested $1.8 billion through fiscal 2006, the majority of which (about $1.3 billion) would be used to bring infrastructure, rolling stock, and motive power back to a state of good repair. In Congressional testimony, the DOT Inspector General confirmed that Amtrak would need at least $1.4 billion to $1.5 billion in fiscal 2006 and $2 billion in fiscal 2007 just to maintain the status quo. In 2006, Amtrak received just under $1.4 billion, with the condition that Amtrak would reduce (but not eliminate) food and sleeper service losses. Dining service was therefore simplified and now requires two fewer on-board service workers; only the Auto Train and Empire Builder continue regular made-on-board meal service. In 2010 the Senate approved a bill to provide $1.96 billion to Amtrak, but cut the approval for high-speed rail to a $1 billion appropriation.
State governments have partially filled the breach left by reductions in federal aid. Several states have entered into operating partnerships with Amtrak, notably California, Pennsylvania, Illinois, Michigan, Oregon, Missouri, Washington, North Carolina, Oklahoma, Texas, Wisconsin, Vermont, Maine, and New York, as well as the Canadian province of British Columbia, which provides some of the resources for the operation of the Cascades route.
With the dramatic rise in gasoline prices during 2007–08, Amtrak saw record ridership. Capping a steady five-year increase in ridership overall, regional lines saw 12% year-over-year growth in May 2008. In October 2007, the Senate passed S-294, Passenger Rail Improvement and Investment Act of 2007 (70–22) sponsored by Senators Frank Lautenberg and Trent Lott. Despite a veto threat by President Bush, a similar bill passed the House on June 11, 2008, with a veto-proof margin (311–104). The final bill, spurred on by the September 12 Metrolink collision in California and retitled Passenger Rail Investment and Improvement Act of 2008, was signed into law by President Bush on October 16, 2008. The bill appropriates $2.6 billion a year in Amtrak funding through 2013.
2010s
Amtrak points out that in 2010 its farebox recovery (the percentage of operating costs covered by revenues generated by passenger fares) was 79%, the highest reported for any U.S. passenger railroad. This increased to 94.9% in 2018.
Amtrak argued that it needed to increase capital program spending in 2013 to replace old train equipment, because the multi-year maintenance costs for those trains exceeded what it would cost to simply buy new equipment that would not need to be repaired for several years. However, despite an initial request for more than $2.1 billion in funding for the year, the company had to deal with a year-over-year cut in 2013 federal appropriations, dropping to under $1.4 billion for the first time in several years. Amtrak stated in 2010 that the backlog of needed repairs to the track it owns on the Northeast Corridor included over 200 bridges, most dating to the 19th century, tunnels under Baltimore dating to the American Civil War era, and functionally obsolete track switches, which would cost $5.2 billion to repair (more than triple Amtrak's total annual budget). Amtrak's budget is only allocated on a yearly basis, and Joseph Vranich has argued that this makes multi-year development programs and long-term fiscal planning difficult if not impossible.
In Fiscal Year 2011, the U.S. Congress granted Amtrak $563 million for operating and $922 million for capital programs.
Controversy
Government aid to Amtrak was controversial from the beginning. The formation of Amtrak in 1971 was criticized as a bailout serving corporate rail interests and union railroaders, not the traveling public. Critics have asserted that Amtrak has proven incapable of operating as a business and that it does not provide valuable transportation services meriting public support, calling it a "mobile money-burning machine". Many fiscal conservatives have argued that subsidies should be ended, national rail service terminated, and the NEC turned over to private interests: "To fund a Nostalgia Limited is not in the public interest." Critics also question Amtrak's energy efficiency, though the U.S. Department of Energy considers Amtrak among the most energy-efficient forms of transportation.
The Rail Passenger Service Act of 1970, which established Amtrak, specifically states that "The Corporation will not be an agency or establishment of the United States Government". Common stock was issued in 1971 to railroads that contributed capital and equipment; these shares convey almost no benefits, but their holders declined a 2002 buy-out offer by Amtrak. There are currently 109.4 million shares of preferred stock, at a par value of $100 per share, all held by the US government. As of February 2015, there were 9.4 million shares of common stock, with a par value of $10 per share, held by American Premier Underwriters (53%), BNSF (35%), Canadian Pacific (7%) and Canadian National (5%).
In January 2023, an Auto Train delay led passengers, who feared they were being held hostage, to call police, and resulted in Senate involvement.
Incidents
The following are major accidents and incidents that involved Amtrak trains:
After settling for $17 million over the 2017 Washington state train crash, the board adopted a new policy requiring arbitration in order to prevent further lawsuits.
Publication
In April 1974 Amtrak News was launched as Amtrak's bi-monthly in-house journal.
See also
Notes
Explanatory citations
Citations
References
Carper, Robert S. (1968). American Railroads in Transition; The Passing of the Steam Locomotives. A. S. Barnes. ISBN 978-0-498-06678-8.
Edmonson, Harold A. (2000). Journey to Amtrak: The year history rode the passenger train. Kalmbach Books. ISBN 978-0-89024-023-6.
Glischinski, Steve (1997). Santa Fe Railway. Osceola, Wisconsin: Motorbooks International. ISBN 978-0-7603-0380-1.
Government Accountability Office (October 2005). "Amtrak Management: Systemic Problems Require Actions to Improve Efficiency, Effectiveness, and Accountability" (PDF). Archived from the original (PDF) on November 25, 2005. Retrieved November 23, 2005.
Hosmer, Howard; et al. (1958). Railroad Passenger Train Deficit (Report). Interstate Commerce Commission. 31954.
Karr, Ronald Dale (2017). The Rail Lines of Southern New England (2nd ed.). Pepperell, Massachusetts: Branch Line Press. ISBN 978-0-942147-12-4. OCLC 1038017689. Archived from the original on October 24, 2021. Retrieved October 22, 2021.
McCommons, James (2009). Waiting on a Train: The Embattled Future of Passenger Rail Service. White River Junction, Vermont: Chelsea Green. ISBN 978-1-60358-064-9.
McKinney, Kevin (June 1991). "At the dawn of Amtrak". Trains: 34–41. OCLC 23730369.
Office of Inspector General for the Department of Transportation (July 10, 2012). "Analysis of the Causes of AMTRAK Train Delays" (PDF). United States Department of Transportation. OCLC 862979061.
Peterman, David Randall (September 28, 2017). Amtrak: Overview (PDF). Washington, D.C.: Congressional Research Service.
Sanders, Craig (2006). Amtrak in the Heartland. Bloomington, Indiana: Indiana University Press. ISBN 978-0-253-34705-3.
Saunders, Richard (2001) [1978]. Merging Lines: American Railroads 1900–1970 (Revised ed.). DeKalb, Illinois: Northern Illinois University Press. ISBN 978-0-87580-265-7.
Saunders, Richard (2003). Main Lines: Rebirth of the North American Railroads, 1970–2002. DeKalb, Illinois: Northern Illinois University Press. ISBN 0-87580-316-4.
Schafer, Mike; Welsh, Joe; Holland, Kevin J. (2001). The American Passenger Train. Saint Paul, MN: MBI. ISBN 0-7603-0896-9.
Schafer, Mike (June 1991). "Amtrak's Atlas: 1971–1991". Trains.
Solomon, Brian (2004). Amtrak. Saint Paul, Minnesota: MBI. ISBN 978-0-7603-1765-5.
Stover, John F. (1997). American Railroads (2nd ed.). Chicago: University of Chicago Press. ISBN 0-226-77657-3.
Thoms, William E. (1973). Reprieve for the Iron Horse: The AMTRAK Experiment–Its Predecessors and Prospects. Baton Rouge, LA: Claitor's Publishing Division. OCLC 1094744.
Vranich, Joseph (1997). Derailed: What Went Wrong and What to Do about America's Passenger Trains. New York: St. Martin's Press. ISBN 0-3121-7182-X.
Vranich, Joseph (2004). End of the Line: The Failure of Amtrak Reform and the Future of America's Passenger Trains. Washington, D.C.: AEI Press. ISBN 0-8447-4203-1.
Wilner, Frank N. (1994). The Amtrak Story. Omaha, NE: Simmons-Boardman. ISBN 0-9113-8216-X.
Zimmermann, Karl R. (1981). Amtrak at Milepost 10. PTJ Publishing. ISBN 0-937658-06-5.
Further reading
"Articles of Incorporation of the National Railroad Passenger Corporation" (PDF). Muckrock.com. April 17, 1971.
Baron, David P. (August 1990). "Distributive Politics and the Persistence of Amtrak". The Journal of Politics. 52 (3): 883–913. doi:10.2307/2131831. JSTOR 2131831. S2CID 153981819.
Fostik, John (2017). Amtrak Across America: An Illustrated History (1st ed.). Enthusiast Books. ISBN 978-1583883501.
Hanus, Chris; Shaske, John (2009). USA West by Train: The Complete Amtrak Travel Guide. Way of the Rail Publishing. ISBN 978-0-9730897-6-9.
Pitt, John (2008). USA by Rail. Bradt Travel Guides. ISBN 978-1-84162-255-2.
The Staff of Amtrak (2011). Amtrak: An American Story (40th Anniversary Book). Kalmbach Publishing Company, Books Division. ISBN 9780871164445.
Wilner, Frank N. (2013). Amtrak: Past, Present, Future. Simmons-Boardman Books. ISBN 978-0-911-382600.
External links
Official website
Amtrak - Historical Timeline
Amtrak - Great American Stations
Amtrak Connects US - official website outlining 15-year expansion plans
All Aboard Amtrak! 50 Years of America's Railroad - digital exhibit from Northwestern University's Transportation Library for Amtrak's 50th anniversary
The Museum of Railway Timetables (Amtrak timetables from 1971 to 2016) |
open burning of waste | The open burning of waste is a method of disposing of waste or garbage. It is used globally but is most common in low- and middle-income countries that lack adequate waste disposal infrastructure. Numerous governments and institutions have identified the open burning of waste as a major contributor to greenhouse gas emissions. It also poses health risks because of the mixture of air pollutants created when waste is burned in the open air.
At COP26, open waste burning was raised as a major contributor to climate change. It produces a wide range of atmospheric pollutants, including short-lived climate pollutants (SLCPs) such as black carbon (BC). BC emissions are a major source of fine particulate matter and have a climate change impact up to 5,000 times greater than that of CO2.
Background
The United Nations has raised concerns about the amount of black carbon and methane produced by open burning as a method of waste disposal. Many cities and regions suffer from air pollution and low air quality as a direct result of the open burning of waste.
According to the Canadian government, it is common for many toxic gases to be released into the atmosphere as a result of open burning of waste, including arsenic, mercury, lead, carbon monoxide and nitrogen oxides. Studies by researchers from London's King's College and Imperial College both showed that burning polystyrene and polyethylene terephthalate produces high amounts of soot; both materials are common in plastic water bottles.
The climate change conference COP26 held an official side event focused on raising awareness of the open burning of waste. In September 2022, an agreement on reducing open waste burning in Africa was reached at the 18th session of the African Ministerial Conference on the Environment (AMCEN), which hosted delegates from 54 African countries.
Sustainability & impact
Reducing open burning can drastically change the air pollutants in the local area, and therefore have a transformational impact on human health in that region. The Global Review on Safer End of Engineered Life suggested that the health of tens of millions of people worldwide was affected by the disposal practice, with up to one billion tonnes of waste burned globally each year.
At a United Nations summit in 2022, the delegation focused on job creation as one potential route to eradicating the practice of open burning of waste, particularly in Africa. Up to 80% of waste generated in African cities is recyclable, with an estimated value of $8 billion each year. Many institutions see this as an opportunity to create jobs while improving health and air quality on the continent.
References
monroney sticker | The Monroney sticker or window sticker is a label required in the United States to be displayed in all new automobiles, listing certain official information about the car. The window sticker was named after Almer Stillwell "Mike" Monroney, United States Senator from Oklahoma.
History
In 1955, Senator Mike Monroney chaired a subcommittee of the Senate Interstate and Foreign Commerce Committee that investigated complaints from car dealerships in the United States about abusive treatment by manufacturers. The subcommittee continued its work and investigated deceptive practices by car dealerships. Since no price was disclosed on each car, dealers could inflate the manufacturer's suggested retail price to give the impression that buyers received a larger discount allowance or a higher value for the used car they traded in. Some dealers also added hidden fees and nonessential costs, and consumers lacked price information, a listing of options, and destination charges as they shopped for new cars.
Monroney sponsored the Automobile Information Disclosure Act of 1958, which mandated the disclosure of information about the car, its equipment, and pricing for all new automobiles sold in the United States. The act does not apply to vehicles with a gross vehicle weight rating (GVWR) of more than 8,500 lb (3,856 kg).
Since the mid-1970s, the United States Environmental Protection Agency has provided fuel economy information on the label to help consumers choose more fuel-efficient vehicles.
New requirements for the Monroney label were issued starting with 2008 model year cars and light-duty trucks sold in the US. This was included in the 2007 Energy Independence and Security Act (EISA), which mandated the inclusion of additional information about fuel efficiency as well as ratings of each vehicle's greenhouse gas emissions and other air pollutants.
A more comprehensive fuel economy and environment label became mandatory beginning in model year 2013, though some carmakers voluntarily adopted it for 2012. The new window sticker includes specific labels for the alternative fuel and alternative propulsion vehicles available in the US market, such as plug-in hybrids, electric vehicles, flexible-fuel vehicles, hydrogen fuel cell vehicles, and natural gas vehicles.
The new label introduces the comparison of alternative fuel and advanced technology vehicles with conventional internal combustion engine vehicles using miles per gallon of gasoline equivalent (MPGe) as a metric. Other information provided for the first time includes greenhouse gas and smog emissions ratings, estimates of fuel cost over the next five years, and a QR Code that can be scanned by a smartphone to allow users access to additional online information.
Label contents
The Monroney sticker is required to be affixed to the side window or windshield by the manufacturer before a new vehicle is shipped to the dealer for sale in the United States, and it can only be removed by the consumer (Chapter 28, Sections 1231–1233, Title 15 of the United States Code). A fine of up to US$1,000 per vehicle for each offense is authorized if the sticker is missing, and other fees and penalties, including imprisonment, are authorized if the sticker is altered illegally.
The sticker must include the following information:
Make, model, trim, and serial number
The manufacturer's suggested retail price (MSRP)
Engine and transmission specifications
Standard equipment and warranty details
Optional equipment and pricing
Transportation charges for shipment to the dealer
The statute has been amended to include:
City and highway fuel economy ratings, as determined by the Environmental Protection Agency (EPA)
As of September 2007, crash test ratings as determined by the National Highway Traffic Safety Administration
Redesigned fuel economy label
As required by the 2007 Energy Independence and Security Act (EISA), with the introduction of advanced-technology vehicles in the U.S., new information was to be incorporated in the Monroney label of new cars and light-duty trucks sold in the country, such as ratings on fuel economy, greenhouse gas emissions, and other air pollutants. The U.S. Environmental Protection Agency and the National Highway Traffic Safety Administration (NHTSA) conducted a series of studies to determine the best way to redesign the label to provide consumers with simple energy and environmental comparisons across all vehicle types, including battery electric vehicles (BEV), plug-in hybrid electric vehicles (PHEV), and conventional internal combustion engine vehicles powered by gasoline and diesel, to help consumers choose more efficient and environmentally friendly vehicles.
As part of the research and redesign process, EPA conducted focus groups in which participants were presented with several options for expressing the electricity consumption of plug-in electric vehicles. The research showed that participants did not understand the concept of a kilowatt-hour as a measure of electric energy use, even though this is the unit used in their monthly electric bills. Instead, participants favored miles per gallon of gasoline equivalent (MPGe) as the metric to compare with the familiar miles per gallon used for gasoline vehicles. The research also concluded that the kWh per 100 miles metric was more confusing to focus group participants than miles per kWh. Based on these results, EPA decided to use the following fuel economy and fuel consumption metrics on the redesigned labels: MPG (city, highway, and combined); MPGe (city, highway, and combined); gallons per 100 miles; and kWh per 100 miles.
The proposed design and final content of two options for the new sticker label to be introduced on 2013 model year cars and trucks were put to a 60-day public consultation in 2010. Both included miles per gallon equivalent and kWh per 100 miles as the fuel economy metrics for plug-in cars, but in one option MPGe and annual electricity cost were the two most prominent metrics. One of the design options had a letter grading system from A to D, and the rating would have compared a given vehicle's fuel economy and air pollution to those of the entire fleet of new cars. The letter grade system was opposed by carmakers and rejected after the public consultation. In November 2010, EPA introduced MPGe as a comparison metric on its new fuel economy sticker for the Nissan Leaf and the Chevrolet Volt.
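As a rough illustration of how these metrics relate, the following Python sketch converts between them. It assumes the commonly cited EPA equivalence of roughly 33.7 kWh of electricity per US gallon of gasoline; the consumption figures and function names are hypothetical examples rather than values from any actual label.

```python
# Illustrative conversions between the fuel economy metrics discussed above.
# Assumes roughly 33.7 kWh of electricity holds the same energy as one US
# gallon of gasoline (the approximate EPA equivalence); examples are hypothetical.

KWH_PER_GALLON_EQUIVALENT = 33.7

def mpge_from_kwh_per_100mi(kwh_per_100mi: float) -> float:
    """Miles per gallon of gasoline equivalent, from kWh consumed per 100 miles."""
    gallons_equivalent_per_100mi = kwh_per_100mi / KWH_PER_GALLON_EQUIVALENT
    return 100.0 / gallons_equivalent_per_100mi

def gallons_per_100mi(mpg: float) -> float:
    """Fuel consumption per distance, the metric meant to counter the 'MPG illusion'."""
    return 100.0 / mpg

if __name__ == "__main__":
    print(f"30 kWh/100 mi  -> {mpge_from_kwh_per_100mi(30):.0f} MPGe")   # ~112 MPGe
    print(f"20 MPG vehicle -> {gallons_per_100mi(20):.1f} gal/100 mi")   # 5.0
    print(f"40 MPG vehicle -> {gallons_per_100mi(40):.1f} gal/100 mi")   # 2.5
    print(f"60 MPG vehicle -> {gallons_per_100mi(60):.2f} gal/100 mi")   # 1.67
```

The last three lines hint at the "MPG illusion" discussed below: going from 20 to 40 MPG saves 2.5 gallons per 100 miles, while going from 40 to 60 MPG saves less than one more gallon.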
2013 fuel economy and environment label
In May 2011, the National Highway Traffic Safety Administration (NHTSA) and EPA issued a joint final rule establishing new requirements for a fuel economy and environment label, mandatory for all new passenger cars and trucks starting with model year 2013, though carmakers could adopt it voluntarily for the 2012 model year. The ruling includes new labels for the alternative fuel and alternative propulsion vehicles available in the US market, such as plug-in hybrids, electric vehicles, flexible-fuel vehicles, hydrogen fuel cell vehicles, and natural gas vehicles. The common fuel economy metric adopted to allow the comparison of alternative fuel and advanced technology vehicles with conventional internal combustion engine vehicles is miles per gallon of gasoline equivalent (MPGe). A gallon of gasoline equivalent is the number of kilowatt-hours of electricity, cubic feet of compressed natural gas (CNG), or kilograms of hydrogen that contains the same energy as a gallon of gasoline.
The new labels include for the first time an estimate of how much fuel or electricity it takes to drive 100 miles (160 km), providing U.S. consumers with fuel consumption per distance traveled, the efficiency metric commonly used in many other countries. EPA's objective is to counter the traditional miles-per-gallon metric, which can be misleading when consumers compare fuel economy improvements, a problem known as the "MPG illusion".
Other information provided for the first time in the redesigned labels includes:
Greenhouse gas ratings of how a model compares to all others for tailpipe emissions of carbon dioxide. A footnote-like note clarifies that upstream emissions from electricity generation are not included.
Smog emissions ratings based on air pollutants such as nitrogen oxide and particulates.
New ways to compare energy use and cost between new-technology cars that use electricity and conventional cars that are gasoline-powered.
Estimates on how much typical consumers will save or spend on fuel over the next five years compared to the average new vehicle.
Information on the driving range while running in all-electric mode and charging time for plug-in hybrids and electric cars.
A QR Code that a smartphone can scan to allow users access to online information about how various models compare on fuel economy, the price of gasoline and electricity where the user lives, and other environmental and energy factors. This tool will also allow consumers to enter information about their typical commutes and driving behavior to get a more precise estimate of fuel costs and savings.
Typical labels for each fuel or advanced technology
See also
Fuel economy in automobiles
Miles per gallon gasoline equivalent
References
External links
Automobile Information Disclosure Act Information at the US Department of Justice
Fact Sheet: New Fuel Economy and Environment Labels for a New Generation of Vehicles
"15 U.S. Code Chapter 28 – Disclosure of Automobile Information". Legal Information Institute. Retrieved December 27, 2020 – via Cornell University (Law).
Peele, Robert (January 2, 2009). "The Senator Behind the Window Sticker". The New York Times. Retrieved December 27, 2020.
coal power in turkey | Coal in Turkey generates between a quarter and a third of the nation's electricity. There are 54 active coal-fired power stations with a total capacity of 21 gigawatts (GW).
Air pollution from coal-fired power stations is damaging public health,: 48 and it is estimated that a coal phase-out by 2030 instead of by the 2050s would save over 100,000 lives. Flue gas emission limits were improved in 2020, but data from mandatory reporting of emission levels is not made public. Turkey has not ratified the Gothenburg Protocol, which limits fine dust polluting other countries.
Turkey's coal is almost all low-calorie lignite, but government policy supports its continued use. In contrast, Germany is closing lignite-fired stations under 150 MW. Drought in Turkey is frequent, but thermal power stations use significant amounts of water.
Coal-fired power stations are the largest source of greenhouse gas emissions, at about a tonne per person each year, which is about the world average. Coal-fired stations emit over 1 kg of carbon dioxide for every kilowatt-hour generated, over twice that of gas power. Academics suggest that in order to reach Turkey's target of carbon neutrality by 2053, coal power should be phased out by the mid-2030s. In January 2023 the National Energy Plan was published: it forecast a capacity increase to 24.3 GW by 2035,: 23 including 1.7 GW more by 2030.: 15 The plan forecasts coal generation decreasing but capacity payments continuing for flexible and baseload power.: 25
Energy policy
Turkey's energy strategy includes increasing the share not just of renewable energy but also of other domestic energy resources, to support the country's development and to reduce dependence on energy imports. As of 2022 Turkey has not ratified the Gothenburg Protocol on emissions ceilings for sulphur dioxide and nitrogen oxides. In 2021 Turkey ratified the Paris Agreement to limit climate change, but as of October 2021 policy was still to increase the share of domestic coal in the energy mix, and planned increases in coal power were forecast to increase CO2 emissions.: 79, 87 Greenhouse gas emissions are pledged to peak by 2038 at the latest.
Generation
Coal-fired power stations generate approximately one third of the nation's electricity: in 2020 this was made up of 62 TWh from imported coal and 44 TWh from local coal (almost all lignite). As of 2023 there are 54 licensed coal-fired power stations, with a total capacity in December 2022 of 21.8111 gigawatts (GW). There is no unlicensed coal power.: 10 The average thermal efficiency of Turkey's coal-fired power stations is 36%; typical thermal efficiencies are 39%, 42% and 44% for subcritical, supercritical and ultra-supercritical power stations respectively. Generation fell in 2021 due to the high cost of imported coal (over $70/MWh).
Emba Hunutlu, which started up in 2022, was the last coal plant to be built. Shanghai Electric Power said it would be China's largest ever direct investment in Turkey; however, according to the World Wide Fund for Nature, it could not make a profit without subsidy. Afşin-Elbistan C and further new coal-fired power stations will probably not be constructed, due to public opposition, court cases, and the risk of them becoming stranded assets.
In 2022 the average age of a coal power station was 17 years,: 62 as much of the operational fleet was built in the 21st century. There was oversupply of generating capacity and a drop in demand in 2020, and a quarter of power stations were estimated to be cashflow negative. Solar generation fits better with consumption, as annual peak electricity demand is on summer afternoons, due to air conditioning.
Germany is closing lignite-fired stations under 150 MW. Neighbouring Greece is closing down all its lignite-fueled power stations.
Yunus Emre power station was completed in 2020,: 42 but had generated only 700 hours of power to the grid by 2022. As coal in the local area is unsuitable for its boilers, it became a stranded asset: it was bought by Yıldızlar Holding (Yıldızlar SSS Holding A.Ş., not to be confused with Yıldız Holding).: 30 In May 2023 Vice President Fuat Oktay said that unit 1 would be restarted in June, and by mid-August about 60 GWh had been sent to the grid.
With a few exceptions, stations smaller than 200 MW provide both electricity and heat, often to factories, whereas almost all those larger than 200 MW generate only electricity. Companies owning large amounts of coal power include Eren, Çelikler, Aydem, İÇDAŞ, Anadolu Birlik (via Konya Sugar) and Diler.: 31
Flexibility
Turkey plans to substantially increase the contribution of solar and wind power to its generation mix. Cost-effective system operation with a high proportion of these intermittent generation sources requires system flexibility, where other sources of generation can be ramped up or down promptly in response to changes in intermittent generation. However, conventional coal-fired generation may not have the flexibility required to accommodate a large proportion of solar and wind power. Retrofits to increase the ramp-up rate (so that full load can be reached within 1 hour) and to lower minimum generation to half of maximum output may be possible for about 9 GW (just under half) of installed capacity.
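As a rough sketch of what such flexibility constraints mean in operation, the following Python snippet checks how closely a thermal unit can follow a change in residual load (demand minus wind and solar output) within one hour, given a ramp limit and a minimum stable output. All plant figures are hypothetical illustrations, not data for any actual Turkish station.

```python
# Rough check of whether a thermal unit can follow residual load
# (demand minus intermittent generation) given its flexibility limits.
# All figures are hypothetical illustrations, not data for any real plant.

def feasible_output(prev_output_mw, target_mw, capacity_mw,
                    min_stable_fraction=0.5, ramp_mw_per_hour=None):
    """Return the closest feasible output after one hour and whether the target is reachable."""
    if ramp_mw_per_hour is None:
        # e.g. a retrofit allowing full load to be reached within one hour
        ramp_mw_per_hour = capacity_mw
    low = max(capacity_mw * min_stable_fraction, prev_output_mw - ramp_mw_per_hour)
    high = min(capacity_mw, prev_output_mw + ramp_mw_per_hour)
    output = min(max(target_mw, low), high)
    return output, abs(output - target_mw) < 1e-6

if __name__ == "__main__":
    # A hypothetical 600 MW unit at 550 MW asked to drop to 200 MW as solar output rises:
    out, reachable = feasible_output(550, 200, 600, min_stable_fraction=0.5, ramp_mw_per_hour=300)
    print(out, reachable)  # 300.0 False -> the unit cannot go below half of its maximum output
```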
Coal industry
Government policy supports continued generation from lignite (brown coal) because it is mined locally, whereas almost all hard coal (anthracite and bituminous coal) is imported. In 2020, 51 million tonnes (83%) of lignite and 22 million tonnes (55%) of hard coal were burnt in power stations.
In 2020 Anadolu Birlik Holding, Çelikler Holding, Ciner Holding, Diler Holding, Eren Holding, Aydem, IC İçtaş, Kolin and Odaş were substantially involved in electricity generation from coal.
Locally mined lignite
Power stations burning lignite tend to be near local coal mines, such as Elbistan, because Turkish lignite's calorific value is less than 12.5 MJ/kg (and Afşin-Elbistan lignite less than 5 MJ/kg, a quarter of that of typical thermal coal), and about 90% has a lower heating value under 3,000 kcal/kg, so it is not worth transporting. According to energy analyst Haluk Direskeneli, because of the low quality of Turkish lignite, large amounts of supplementary fuel oil are used in lignite-fired power stations.
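A back-of-the-envelope calculation shows why such low-calorie lignite is not worth transporting. The sketch below (Python) combines the calorific values quoted here with the roughly 36% average plant efficiency mentioned in the Generation section; the numbers are indicative only.

```python
# Back-of-envelope fuel demand per kWh of electricity for low-calorie lignite.
# Calorific values follow the figures quoted above; the 36% conversion
# efficiency is the fleet average mentioned earlier and is only indicative.

MJ_PER_KWH = 3.6

def lignite_kg_per_kwh(calorific_value_mj_per_kg, plant_efficiency=0.36):
    heat_needed_mj = MJ_PER_KWH / plant_efficiency      # thermal energy per kWh of electricity
    return heat_needed_mj / calorific_value_mj_per_kg

if __name__ == "__main__":
    print(f"12.5 MJ/kg lignite: {lignite_kg_per_kwh(12.5):.1f} kg per kWh")  # ~0.8 kg
    print(f" 5.0 MJ/kg lignite: {lignite_kg_per_kwh(5.0):.1f} kg per kWh")   # ~2.0 kg
```

On these assumptions, roughly two kilograms of the lowest-grade lignite are needed for each kilowatt-hour generated, which is why the plants sit next to the mines.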
Imported coal
To minimize transport costs, power stations burning imported coal are usually located on the coast; with clusters in Çanakkale and Zonguldak provinces and around Iskenderun Bay. Coal with up to 3% sulphur and minimum 5,400 kcal/kg can be imported, with capacity to burn about 25 million tons a year: in 2020 almost three quarters of imports were from Colombia. According to thinktank Ember, as of 2021, building new wind and solar power is cheaper than running existing coal power stations which depend on imported coal.
Air pollution
Air pollution is a significant environmental and public health problem in Turkey, and has been for decades. A 1996 court order to shut three polluting power stations was not enforced. Levels of air pollution have been recorded above the World Health Organization (WHO) guidelines in 51 out of 81 provinces. As for long-range air pollution, Turkey has not ratified the Gothenburg Protocol, which covers PM2.5 (fine particles), and its reporting under the Convention on Long-Range Transboundary Air Pollution has been criticized as incomplete.: 10
New flue gas emission limits were introduced in January 2020, resulting in five 20th-century power stations being shut down that month because they did not meet the new limits. They were all re-licensed after improvements in 2020, such as new flue gas filters, but the effectiveness of the improvements is questioned, as expenditure may not have been sufficient. There is not enough data regarding modern filters, because many government ambient air monitoring points are defective and do not measure fine particulate matter. Fine particulates (PM2.5) are the most dangerous pollutant but have no legal ambient limit.
The "Industry Related Air Pollution Control Regulation" says that flue-gas stacks must be at least 10 m from the ground and 3 m above the roof. Larger power stations must measure local pollutants vented into the atmosphere from the smokestack and report them to the Environment Ministry but, unlike in the EU, they are not required to publish the data. In 2022 academics called for better monitoring and stricter emission limits.
Coal contributes to air pollution in big cities. Air pollution from some large coal-fired power stations is publicly visible in Sentinel satellite data. The Organisation for Economic Co-operation and Development (OECD) says that old coal-fired power stations are emitting dangerous levels of fine particulates, so it recommends reducing particulate emissions by retrofitting or closing old coal-fired power plants. Although the Turkish government receives reports of measurements of air pollution from the smokestacks of individual coal-fired power stations, it does not publish them, unlike the EU. The OECD has also recommended that Turkey create and publish a pollutant release and transfer register.
Turkey's flue gas emission limits, set in milligrams per cubic metre (mg/Nm3), are laxer than the EU Industrial Emissions Directive and the SO2 limits for large coal-fired power plants in other countries, such as India at 100 mg/m3 and China at 35 mg/m3.
Greenhouse gas emissions
Coal-fired power stations emit over 1 kg of carbon dioxide for every kilowatt hour generated, over twice that of gas-fired power stations. Turkey's coal-fired power stations are the largest contributor to the country's greenhouse gas emissions. Production of public heat and electricity emitted 138 megatonnes of CO2 equivalent (CO2e) in 2019, mainly through coal burning.
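As a rough cross-check of these figures, the sketch below multiplies the 2020 coal generation reported in the Generation section (about 62 TWh from imported coal plus 44 TWh from lignite) by an emission intensity of roughly 1 kg CO2 per kWh. The arithmetic is illustrative only and ignores plant-to-plant variation.

```python
# Rough cross-check of the scale of emissions from coal power, using the
# 2020 generation figures quoted in the Generation section and an intensity
# of roughly 1 kg CO2 per kWh. Approximate, for illustration only.

generation_twh = 62 + 44              # imported-coal plus lignite generation, 2020
intensity_kg_per_kwh = 1.0            # "over 1 kg of carbon dioxide for every kilowatt hour"

kwh_generated = generation_twh * 1e9  # 1 TWh = 1e9 kWh
emissions_mt = kwh_generated * intensity_kg_per_kwh / 1e9   # kg -> million tonnes

print(f"~{emissions_mt:.0f} Mt CO2 from coal power")  # ~106 Mt, the bulk of the 138 Mt CO2e
                                                      # reported for public heat and electricity
```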
Turkey has approved the environmental impact assessment for Afşin-Elbistan C and, according to the assessment, over 5 kg of CO2 would be emitted per kWh generated. This would be less carbon efficient than any power station on the list of least carbon-efficient power stations. The forecast emissions of more than 60 million tonnes a year from this station would be over a tenth of the nation's total annual greenhouse gas emissions of about 520 million tonnes, and would make the power station the largest point source in the world, overtaking the current Secunda CTL.
Because lignite quality varies greatly, estimating the carbon dioxide emissions from a particular power station requires the net calorific value of the lignite it burnt, which must be reported to the government but is not published, unlike in some other countries. However, public information from space-based measurements of carbon dioxide by Climate TRACE was expected to reveal individual large power stations in 2022, with smaller ones covered by GOSAT-GW in 2023 and possibly by Sentinel-7 in 2025.
A 2020 study estimated that fitting carbon capture and storage to a power station burning Turkish lignite would increase the cost of its electricity by over 50%. In 2021 Turkey targeted net zero carbon emissions by 2053. After the Paris Agreement on limiting climate change was ratified in 2021, many environmental groups called for the government to set a target year for coal phase-out.
Coal combustion emitted over 150 Mt of CO2 in total in 2018, about a third of Turkey's greenhouse gas emissions. Emissions from individual power plants over 20 MW are measured. Life-cycle emissions of Turkish coal-fired power stations are over 1 kg CO2eq per kilowatt-hour.
As of 2019 coal mine methane remains an environmental challenge: removing it from working underground mines is a safety requirement, but if vented to the atmosphere it is a potent greenhouse gas.
Water consumption
Because Turkey's lignite-fired power stations have to be very close to their mines to avoid excessive lignite transport costs, they are mostly inland (see the map of active coal-fired power stations in Turkey). Coal power stations require a large quantity of water for the circulating-water plant and, where needed, for coal washing. In Turkey, fresh water is used because of the locations of the plants. Between 600 and 3,000 cubic metres of water are used per GWh generated, much more than for solar and wind power. This intensive use has led to shortages in nearby villages and farmland.
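The scale of this water demand can be illustrated by multiplying the 600–3,000 m3/GWh range above by the roughly 106 TWh of coal-fired generation reported for 2020, as in the sketch below (Python, approximate arithmetic only).

```python
# Indicative annual fresh-water demand of the coal fleet: the 600-3,000 m3/GWh
# range quoted above multiplied by roughly 106 TWh of coal-fired generation (2020).
# Purely illustrative arithmetic.

generation_gwh = 106_000        # about 106 TWh of coal-fired generation
low_m3_per_gwh, high_m3_per_gwh = 600, 3_000

low_total = generation_gwh * low_m3_per_gwh / 1e6    # million cubic metres per year
high_total = generation_gwh * high_m3_per_gwh / 1e6

print(f"roughly {low_total:.0f}-{high_total:.0f} million m3 of fresh water per year")
```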
Ash
The mineral residue that remains from burning coal is known as coal ash, and it contains toxic substances that may pose a health risk to workers in coal-fired power stations and to people living or working near Turkey's large coal ash dams. A 2021 report from İklim Değişikliği Politika ve Araştırma Derneği (Climate Change Policy and Research Association) said that the 2020 environmental law was being evaded by the repeated granting of less stringent one-year temporary operating licenses, and that the criteria for coal ash storage permits (inspections by universities) were unclear, so some power stations were not properly storing unhealthy coal ash. It said that some inspections may have been insufficient, and it summarized the findings of the inspection reports.
Taxes, subsidies and incentives
Around the year 2000, government incentives were offered to build cogeneration power stations (such as autoproducers in factories, not connected to the grid), and much small-scale cogeneration was built in industrial parks and sugar factories. About 20 of these small autoproducers were operating by 2021, but there is no publicly available list as they are not connected to the grid and no longer require licences. Because of its low calorific value, lignite-fired electricity costs more to generate than in other European countries (except for Greece).
The companies which built the most recent stations (Cengiz, Kolin, Limak and Kalyon) are mainly in the construction rather than the energy sector, and some say they took on lignite power at a loss to be politically favoured for other construction projects.: 160 In 2019 large lignite-burning stations were subsidized with capacity payments totalling over 1 billion lira (US$180 million, which was over half of total capacity payments), and in 2020 over 1.2 billion lira (US$210 million). In 2021 four power stations burning a mixture of lignite and imported coal also received capacity payments. This capacity mechanism has been criticised by some economists, who say it encourages strategic capacity withholding; a study of 2019 data showed that a 1% increase in the electricity price correlated with a 1-minute increase in the length of power station generation failures. There is also a market clearing price cap of 2,000 lira (about US$350 in 2021) per MWh. These economists say that auctions of firm capacity, as done in some other countries, with a financial penalty if not delivered, would be a better mechanism. As of November 2022, 23 coal-fired power stations were eligible for capacity mechanism payments.
Some electricity from these stations is purchased by the state-owned electricity company at a guaranteed price of US$50–55/MWh until the end of 2027.: 109 In the last quarter of 2021 the guaranteed purchase price was 458 lira (US$81) per MWh. Imported coal is taxed at US$70 per tonne minus the price of coal on the international market. The EU Carbon Border Adjustment Mechanism could push coal power behind gas in the merit order: in other words, it could become more expensive.
Capacity payments
Unlike payments for new solar and wind power in Turkey's electricity market, capacity payments were not decided by reverse auction but fixed by the government, and energy demand management is not eligible. The subsidy continued in 2020, when 13 coal-fired power stations received January payments. The Chamber of Engineers (tr:Makina Mühendisleri Odası) has called for the capacity mechanism to be scrapped.
Phase-out
In 2019, the OECD said that Turkey's coal-fired power plant development programme was creating a high carbon lock-in risk due to the large capital costs and long infrastructure lifetimes. It also stated that energy and climate policies that are not aligned in future may prevent some assets from providing an economic return during the transition to a low-carbon economy. The average Turkish coal-fired power station is predicted to have higher long-run operating costs than new renewables by 2023 and all renewables by 2030. The insurance industry is slowly withdrawing from fossil fuels.
In 2021 the World Bank said that a plan for a just transition away from coal is needed, and environmentalists say coal should be gone by 2030. The World Bank has proposed general objectives and estimated the cost, but has suggested that the government do far more detailed planning. According to a 2021 study by several NGOs, if coal power subsidies were completely abolished and a carbon price introduced at around US$40 (lower than the 2021 EU Allowance), no coal power stations would be profitable and all would close down before 2030. According to Carbon Tracker, in 2021 $1 billion of investment on the Istanbul Stock Exchange was at risk of stranding, including $300 million for EÜAŞ.: 12 Turkey has $3.2 billion in loans for its energy transition. Small modular reactors have been suggested as a replacement for coal power. A 2023 study suggests the early 2030s, and at the latest 2035, as a practical target for phase-out.
Some energy analysts say old plants should be shut down. Three coal-fired power plants in Muğla Province (Yatağan, Yeniköy and Kemerköy) are becoming outdated. However, if the plants and their associated lignite mines were shut down, about 5,000 workers would need funding for early retirement or retraining. There would also be health and environmental benefits, but these are difficult to quantify as very little data on the local pollution from the plants and mines is publicly available in Turkey. Away from Zonguldak, mining and the coal-fired power plant employ most working people in Soma district. According to Dr. Coşku Çelik, "coal investments in the countryside have been regarded as an employment opportunity by the rural population".
Notes
References
Sources
Atilgan, Burcin; Azapagic, Adisa (2016). "An integrated life cycle sustainability assessment of electricity generation in Turkey". Energy Policy. 93: 168–186. doi:10.1016/j.enpol.2016.02.055.
Turkish Greenhouse Gas Inventory report [TurkStat report]. Turkish Statistical Institute (Technical report). 13 April 2021.
Turkish Greenhouse Gas Inventory 1990 – 2019 common reporting format (CRF) tables [TurkStat tables] (TUR_2021_2019_13042021_230815.xlsx). Turkish Statistical Institute (Technical report). April 2021.
Çınar Engineering Consultancy (March 2020). Afşin C power station environmental impact report (Report) (in Turkish). Ministry of Environment and Urban Planning.
Aytaç, Orhan (May 2020). Ülkemizdeki Kömür Yakıtlı Santrallar Çevre Mevzuatıyla Uyumlu mu? [Are Turkey's coal-fired power stations in accordance with environmental laws?] (PDF) (Report) (in Turkish). TMMOB Makina Mühendisleri Odası. ISBN 978-605-01-1367-9.
IEA (March 2021). Turkey 2021 – Energy Policy Review (Technical report). International Energy Agency.
Efimova, Tatiana; Mazur, Eugene; Migotto, Mauro; Rambali, Mikaela; Samson, Rachel (February 2019). OECD Environmental Performance Reviews: Turkey 2019. OECD (Report). OECD Environmental Performance Reviews. doi:10.1787/9789264309753-en. ISBN 9789264309760.
European Commission (May 2019). "Turkey 2019 Report" (PDF).
Turkstat (April 2020). Turkish Greenhouse Gas Inventory report [TurkStat report] (Report).
External links
Coal in Turkey by Ekosfer
Map of coal power stations by Global Energy Monitor
Database of European coal stations including Turkey by Beyond Coal
Graph of thermal power station construction funding in Turkey
List from Openstreetmap
Coal countdown by Bloomberg
vedanta resources | Vedanta Resources Limited is a diversified mining company headquartered in London, United Kingdom. It is the largest mining and non-ferrous metals company in India and has mining operations in Australia and Zambia and oil and gas operations in three countries. Its main products are zinc, lead, silver, oil and gas, iron ore, steel, aluminium and power. It has also developed commercial power stations in India, in Odisha (2,400 MW) and Punjab (1,980 MW).
The company, with 20,000 employees, is primarily owned by the family of Anil Agarwal through Volcan Investments, a holding vehicle with a 61.7% stake in the business. Vedanta Limited (formerly Sesa Goa / Sterlite) is one of the many Indian subsidiaries of Vedanta Resources and operates iron ore mines in Goa.
Vedanta was listed on the London Stock Exchange and was a constituent of the FTSE 250 Index until chairman Anil Agarwal's offer to take the company private went unconditional in September 2018.
History
The company was founded in Bombay (now Mumbai) in 1976 by Anil Agarwal as a scrap-metal dealership. In 1979, he acquired the Shamsher Sterling Corporation (subsequently renamed Sterlite Industries), a manufacturer of power and control cables.
The company acquired a majority stake in Balco, the Indian state aluminium business, in 2001. It was first listed on the London Stock Exchange in 2003 when, as Vedanta Resources, it raised US$876 million through an initial public offering.
In 2006, Vedanta acquired Sterlite Gold, a gold mining business, and in 2007 Vedanta Resources bought a 51% stake in Sesa Goa, India's largest producer-exporter of iron ore, and the company became listed on the NYSE with a US$2 billion ADS issue.
In 2008, Vedanta bought certain assets of Asarco, a copper mining business, out of Chapter 11 for US$2.6 billion, and in 2010 it acquired Anglo American's portfolio of zinc assets in South Africa, Namibia and Ireland.
In 2011, Vedanta acquired a 58.5% controlling stake in Cairn India, India's largest private-sector oil and gas company, and in 2013 Sterlite Industries and Sesa Goa announced a merger. The merger took place in August 2013 and the consolidated group was then called Sesa Sterlite Ltd (now Vedanta Limited). In June 2018, Vedanta acquired a 90% stake in Electrosteel Steels, a steel producer.
In September 2018, the company announced that Anil Agarwal would be taking Vedanta Resources private on 1 October 2018.
Operations
Copper
Sterlite Industries (India): Sterlite has its registered office in Tuticorin, Tamil Nadu, India. Sterlite has been a public listed company in India since 1988; its equity shares are listed and traded on the NSE and the BSE, and are also listed and traded on the NYSE in the form of ADSs. Vedanta owns 53.9% of Sterlite and has management control of the company. Public protests began in Tuticorin over the plant's failure to comply with environmental clearance requirements. The Tamil Nadu Pollution Control Board (TNPCB) accused the factory of releasing noxious gas into the air, saying that sulphur dioxide levels had gone off the charts on the night of 23 March 2013, with a reading of 2,939.55 mg/cubic metre against the prescribed limit of 1,250 mg/cubic metre. More people were affected by cancer and other breathing disorders, but the Indian government did not take any action.
Konkola Copper Mines: Vedanta owns 79.4% of KCM's share capital and has management control of the company. KCM's other shareholder is ZCCM Investment Holdings plc. The government of Zambia has a controlling stake in ZCCM Investment Holdings plc.
Copper Mines of Tasmania: CMT is headquartered in Queenstown, Tasmania. Sterlite owns 100.0% of CMT and has management control of the company.
Zinc
Hindustan Zinc: HZL is headquartered in Udaipur in the state of Rajasthan. HZL's equity shares are listed and traded on the NSE and BSE. Sterlite owns 64.9% of the share capital in HZL and has management control. Sterlite has a call option to acquire the government of India's remaining ownership interest.
Iron ore
Vedanta's iron ore mining operations in India are operated under the umbrella of Vedanta Limited, a company headquartered in Panaji, India, with mining operations in Goa and Karnataka. Originally founded as Sesa Goa, a Portuguese company, it was purchased by Vedanta (then known as Sterlite Industries) in the 1990s. As of 30 June 2018, the company is owned 50% by the promoters (under the names of 12 members of the Agarwal family) and 50% by the public. This includes ownership by Westglobe Limited, Twinstar Holdings, Finsider International, mutual funds (ICICI Prudential), foreign portfolio investors (17%), LIC India (6%) and Citibank New York (4%).
Sterlite Energy: Sterlite Energy is headquartered in Mumbai. Sterlite owns 100.0% of Sterlite Energy and has management control of the company.
Philanthropy
In 1992, Anil Agarwal created the Vedanta Foundation as the vehicle through which the group companies would carry out their philanthropic programs and activities. In the financial year 2013–14, the Vedanta group companies and the Vedanta foundation invested US$49.0 million in building hospitals, schools and infrastructure, conserving the environment and funding community programs that improve health, education and livelihood of over 4.1 million people. The initiatives were undertaken in partnership with the government and non-governmental organizations (NGOs). Among his inspirations, Agarwal counts Andrew Carnegie and David Rockefeller who built public works with their fortunes, and Bill Gates. The activities funded by his philanthropy are focused on child welfare, women empowerment and education.
Anil Agarwal was ranked second in the Hurun India Philanthropy List 2014 for his personal donation of ₹1,796 crore (about US$36 million). He was ranked 25th in the Hurun India Rich List with a personal fortune of ₹12,316 crore.
In 2015, the Vedanta group, in partnership with the Ministry of Women and Child Development, inaugurated the first "Nand Ghar", or modern anganwadi, of the 4,000 it plans to set up. Agarwal has pledged to donate 75% of his family's wealth to charity, saying he was inspired by Bill Gates.
Criticism
Environmental damage
Vedanta has been criticised by human rights and activist groups, including Survival International, Amnesty International and Niyamgiri Surakshya Samiti, because of the company's operations in the Niyamgiri hills in Odisha, India, which are said to threaten the lives of the Dongria Kondh people who populate the region. The Niyamgiri hills are also claimed to be an important wildlife habitat in the Eastern Ghats of India, according to a report by the Wildlife Institute of India as well as independent reports and studies carried out by civil society groups. In January 2009, thousands of locals formed a human chain around the hill in protest at the plans to start bauxite mining in the area. The Union Environment Ministry in August 2010 rejected earlier clearances granted to a joint venture led by the Vedanta Group company Sterlite Industries for mining bauxite from the Niyamgiri hills.
Vedanta's alumina refinery in Lanjigarh was criticised by the Orissa State Pollution Control Board (the statutory environmental regulation body) for air pollution and water pollution in the area. According to Amnesty International, local people reported dust, allegedly from the plant, settling on clothes, crops and food. Vedanta officials claimed there was no dust pollution from the plant at all. An environmental inspection of the plant reported water pollution by the plant, including a small increase in the pH value of the river Vamshadhara below the refinery and a high level of SPM in the stack emissions.
In October 2009 it was reported that the British government had criticised Vedanta for its treatment of the Dongria Kondh tribe in Orissa, India. The company refused to co-operate with the British government and with an OECD investigation. It has rejected charges of environmental damage, saying the damage may be related to the increased use of fertiliser by farmers.
It was reported in August 2015 that villagers in Chingola, Zambia, can smell and taste toxic pollution and leaks from the largest copper mine in Africa, owned by KCM.
Vedanta Resources was ranked as "the worst of the 12 biggest diversified miners at reducing emissions and planning for climate change", according to the Digging Deep report (CDP).
Safety concerns
2007 Mining Deaths
Unsafe mining operations led to 18 deaths and 1,256 injuries involving the company's own employees and contractors in 2007.
Balco, Korba, Chhattisgarh
A chimney under construction by Gannon Dunkerley & Company at the Balco smelter in Korba, Chhattisgarh collapsed on 23 September 2009, killing at least 40 workers. Balco and GDCL management have been accused of negligence in the incident.
Gamsberg Mine Landslide
On 17 November 2020, a mining-related "geotechnical incident" caused a landslide at the Gamsberg Mine in South Africa and 10 miners became trapped. With mining halted, eight miners were rescued, one died, and one body was missing. On 18 January 2021, the company confirmed that mining operations had resumed.
Litigation
India
In respect of the bauxite mines at Lanjigarh, Orissa, public interest litigation was filed in 2004 by Indian non-government organisations, led by the People's Union for Civil Liberties, with the Supreme Court sub-committee regarding the potential environmental impact of the mines. The Ministry of Environment and Forests received reports from expert organisations and submitted its recommendations to the Supreme Court. The sub-committee found "blatant violations" of environmental regulations and grave concerns about the impact of the Niyamgiri mine on both the environment and the local tribal population. The committee recommended to the court that mining in such an ecologically sensitive area should not be permitted.
Human rights
In February 2010, the Church of England decided to disinvest from the company on ethical grounds. The Director of Survival International, Stephen Corry, said, "The Church's unprecedented and very welcome decision sends a strong signal to companies that trample on tribal peoples' rights: we will not bankroll your abuses. Anybody that has shares in Vedanta should sell them today if they care about human rights." Vedanta responded by expressing disappointment at the church's actions, saying that it is "fully committed to pursuing its investments in a responsible manner, respecting the environment and human rights".
The NGO Amnesty International has also criticised the company's record on human rights. It has said, "[I]t is clear that Vedanta Resources and its subsidiaries [...] have failed to respect the human rights of the people of Lanjigarh and the Niyamgiri Hills", adding, "The proposed bauxite mine [...] threatens the survival of a protected Indigenous community [...] However, these risks have been largely ignored and consultation with and disclosure of information to affected communities have been almost non-existent."
Several shareholders sold their shares because of human rights concerns, including the Joseph Rowntree Charitable Trust, the Marlborough Ethical Fund, Millfield House Foundation and PGGM. The Economic Times criticised the project in an editorial, stating that if the mine goes ahead it will "impoverish a defenceless populace, perhaps to extinction." In July 2010, the Chief Secretary of the Indian state of Orissa ordered a new investigation into the rights of the Dongria Kondh tribe affected by Vedanta Resources' bauxite mine, in what Survival International characterised as the "...third major blow to Vedanta in a month".
A four-member panel set up by the government of India in the Ministry of Environment and Forests investigated the bauxite mining proposal over Niyamgiri, near Lanjigarh, in the districts of Kalahandi and Rayagada in Orissa. The area has been the traditional habitat of two particularly vulnerable tribal groups, the Dongria Kondh and the Kutia Kondh. The committee submitted its report on 16 August 2010, saying, "The Vedanta Company has consistently violated the Forest Conservation Act [FCA], the Forest Rights Act [FRA], the Environment Protection Act [EPA] and the Orissa Forest Act in active collusion with the State officials. Allowing mining by depriving two primitive tribal groups of their rights over the proposed mining site to benefit a private company would shake the faith of the tribal people in the laws of the land." Based on the panel's report, the government of India served a show cause notice on the company asking why its Stage I environment clearance should not be cancelled.
In October 2017, London's Court of Appeal, in the case of Lungowe v Vedanta Resources plc, ruled that nearly 2,000 Zambians could sue Vedanta Resources plc as a parent company in English courts over alleged pollution of their village. In concluding the same litigation in 2019, the Supreme Court of the United Kingdom confirmed that Vedanta could be sued in England concerning business liability for human rights violations and environmental damage.
Legal violations
In July 2010, Sterlite Industries, a subsidiary of the Vedanta Group, received a tax notice for about ₹3.24 billion (US$41 million) and was charged by the excise department in India with violating several rules. Excise officials charged Sterlite Industries with misdeclaration because the company is alleged to have tried to ship out copper waste for the purpose of separating gold and silver when the waste also contained other precious metals such as platinum and palladium. Vedanta also owed the Income Tax Department ₹10,247 crore in retrospective tax as of January 2014.
References
External links
Official website
architectural engineering | Architectural engineering or architecture engineering, also known as building engineering, is a discipline that deals with the engineering and construction of buildings, covering areas such as structural, mechanical, electrical, lighting, environmental, climate control, telecommunications, and security systems.
It is related to both architecture and civil engineering, and is distinguished from architectural design, the art and science of designing buildings.
From the reduction of greenhouse gas emissions to the construction of resilient buildings, architectural engineers are at the forefront of addressing several major challenges of the 21st century. They apply the latest scientific knowledge and technologies to the design of buildings. Architectural engineering emerged as a relatively new licensed profession in the 20th century as a result of rapid technological developments. Architectural engineers are at the forefront of two major historical opportunities in which today's world is immersed: (1) rapidly advancing computer technology, and (2) the parallel revolution arising from the need to create a sustainable planet.
Related engineering and design fields
Structural Engineering
Structural engineering involves the analysis and design of the built environment (buildings, bridges, equipment supports, towers and walls). Those concentrating on buildings are sometimes informally referred to as "building engineers". Structural engineers require expertise in strength of materials, structural analysis, and the prediction of structural loads, such as the weight of the building, occupants and contents, and extreme events such as wind, rain, ice and earthquakes; the seismic design of structures is referred to as earthquake engineering. Architectural engineers sometimes incorporate structural engineering as one aspect of their designs; the structural discipline, when practiced as a specialty, works closely with architects and other engineering specialists.
Mechanical, electrical, and plumbing (MEP)
Mechanical and electrical engineers act as specialists when engaged in the building design fields. This work is known as mechanical, electrical, and plumbing (MEP) throughout the United States, or building services engineering in the United Kingdom, Canada, and Australia. Mechanical engineers often design and oversee the heating, ventilation and air conditioning (HVAC), plumbing, and rainwater systems. Plumbing designers often include design specifications for simple active fire protection systems, but for more complicated projects, fire protection engineers are often separately retained. Electrical engineers are responsible for the building's power distribution, telecommunication, fire alarm, signalization, lightning protection and control systems, as well as lighting systems.
The architectural engineer (PE) in the United States
In many jurisdictions of the United States, the architectural engineer is a licensed engineering professional, usually a graduate of an EAC/ABET-accredited architectural engineering university program that prepares students to perform whole-building design in competition with architect-engineer teams, or to practice in one of the structural, mechanical or electrical fields of building design with an appreciation of integrated architectural requirements. Although some states require a BS degree from an EAC/ABET-accredited engineering program with no exceptions, about two thirds of the states accept BS degrees from ETAC/ABET-accredited architectural engineering technology programs for licensure as engineering professionals. Architectural engineering technology graduates, with applied engineering skills, often pursue further learning with an MS degree in engineering and/or an NAAB-accredited Master of Architecture to become licensed as both an engineer and an architect. This path requires the individual to pass state licensing exams in both disciplines. States differ in how they treat experience gained working under a licensed engineer and/or registered architect prior to taking the examinations. This education model is more in line with the educational system in the United Kingdom, where an accredited MEng or MS degree in engineering is required by the Engineering Council for registration as a Chartered Engineer. The National Council of Architectural Registration Boards (NCARB) facilitates the licensure and credentialing of architects, but requirements for registration often vary between states. In the state of New Jersey, a registered architect is allowed to sit for the PE exam, and a professional engineer is allowed to take the design portions of the Architect Registration Examination (ARE) to become a registered architect.
Formal architectural engineering education, following the engineering model of earlier disciplines, developed in the late 19th century and became widespread in the United States by the mid-20th century. With the establishment of a specific "architectural engineering" NCEES Professional Engineering registration examination in the 1990s, first offered in April 2003, architectural engineering became recognized as a distinct engineering discipline in the United States. An up-to-date NCEES record allows engineers to apply for PE licenses in other states "by comity".
In most license-regulated jurisdictions, architectural engineers are not entitled to practice architecture unless they are also licensed as architects. Practice of structural engineering in high-risk locations, e.g., due to strong earthquakes, or on specific types of higher importance buildings such as hospitals, may require separate licensing as well. Regulations and customary practice vary widely by state or city.
The architect as architectural engineer
In some countries, the practice of architecture includes planning, designing and overseeing the building's construction, and architecture as a profession providing architectural services is referred to as "architectural engineering". In Japan, a "first-class architect" plays the dual role of architect and building engineer, although the services of a licensed "structural design first-class architect" (構造設計一級建築士) are required for buildings over a certain scale.
In some languages, such as Korean and Arabic, "architect" is literally translated as "architectural engineer". In some countries, an "architectural engineer" (such as the ingegnere edile in Italy) is entitled to practice architecture and is often referred to as an architect. These individuals are often also structural engineers. In other countries, such as Germany, Austria, Iran, and most of the Arab countries, architecture graduates receive an engineering degree (Dipl.-Ing. – Diplom-Ingenieur).
In Spain, an "architect" has a technical university education and legal powers to carry out building structure and facility projects.
In Brazil, architects and engineers used to share the same accreditation process (Conselho Federal de Engenheiros, Arquitetos e Agrônomos (CONFEA) – Federal Council of Engineering, Architecture and Agronomy). Now Brazilian architects and urbanists have their own accreditation process (CAU – Architecture and Urbanism Council). Besides traditional architectural design training, Brazilian architecture courses also offer complementary training in engineering disciplines such as structural, electrical, hydraulic and mechanical engineering. After graduation, architects focus on architectural planning, yet they can be responsible for the whole building in the case of small buildings (except for electrical wiring, where the architect's autonomy is limited to systems up to 30 kVA; larger systems must be designed by an electrical engineer), as applied to buildings, the urban environment, built cultural heritage, landscape planning, interiorscape planning and regional planning.
In Greece, licensed architectural engineers are graduates of architecture faculties that belong to a polytechnic university, obtaining an "Engineering Diploma". They graduate after five years of study and are fully entitled architects once they become members of the Technical Chamber of Greece (TEE – Τεχνικό Επιμελητήριο Ελλάδος). The Technical Chamber of Greece has more than 100,000 members encompassing all the engineering disciplines as well as architecture. A prerequisite for membership is to be licensed as a qualified engineer or architect and to be a graduate of an engineering or architecture school of a Greek university, or of an equivalent school abroad. The Technical Chamber of Greece is the authorized body to provide work licenses to engineers of all disciplines, as well as architects, who graduated in Greece or abroad; the license is awarded after examinations, which take place three to four times a year. The Engineering Diploma equals a master's degree in ECTS units (300) according to the Bologna Accords.
Education
The architectural, structural, mechanical and electrical engineering branches each have well-established educational requirements that are usually fulfilled by completion of a university program.
In Canada, a CEAB-accredited engineering degree is the minimum academic requirement for registration as a P.Eng (professional engineer) anywhere in Canada and the standard against which all other engineering academic qualifications are measured. A graduate of a non-CEAB-accredited program must demonstrate that his or her education is at least equivalent to that of a graduate of a CEAB-accredited program.
In the United States, the engineer's degree requires a year of study beyond a master's degree, or two years beyond a bachelor's degree, and often includes a requirement for a research thesis. In Vietnam, the engineer's degree is called Bằng kỹ sư, the first degree awarded after five years of study. The Ministry of Education of Vietnam has also issued separate regulations for the naming of degrees that do not follow international convention.
Architectural engineering as a single integrated field of study
Its multi-disciplinary engineering approach is what differentiates architectural engineering from architecture (the field of the architect): architectural engineering is an integrated, separate and single field of study when compared to other engineering disciplines.
Through training in and appreciation of architecture, the field seeks integration of building systems within its overall building design. Architectural engineering includes the design of building systems including heating, ventilation and air conditioning (HVAC), plumbing, fire protection, electrical, lighting, architectural acoustics, and structural systems. In some university programs, students are required to concentrate on one of the systems; in others, they can receive a generalist architectural or building engineering degree.
See also
Architectural drawing
Architectural technologist
Architectural technology
Building engineer
Building officials
Civil engineering
Construction engineering
Contour crafting
History of architectural engineering
International Building Code
Mechanical, electrical, and plumbing
Outline of architecture
Storm hardening
References
criticism of the kyoto protocol | Although it is a worldwide treaty, the Kyoto Protocol has received criticism.
Criticism of the Kyoto Protocol
Some also argue the protocol does not go far enough to curb greenhouse emissions and avoid dangerous climate change (Niue, the Cook Islands, and Nauru added notes to this effect when signing the protocol). Some environmental economists have been critical of the Kyoto Protocol. Many see the costs of the Kyoto Protocol as outweighing the benefits, some believing the standards which Kyoto sets to be too optimistic, others seeing a highly inequitable and inefficient agreement which would do little to curb greenhouse gas emissions. There are also economists who believe that an entirely different approach needs to be followed than the approach suggested by the Kyoto Protocol. In Russia, Andrey Illarionov, who was an economic policy advisor to the President of Russia, Vladimir Putin, expressed the opinion that, since human civilization is based on the consumption of hydrocarbons, the adoption of the Kyoto agreements could have a negative impact on the Russian economy. He regarded the Kyoto agreement as discriminatory and not universal, since the main sources of carbon dioxide emissions such as the US, China, India, Brazil, Mexico and Korea, as well as a number of developing countries, did not impose any restrictions on themselves. Illarionov also referred to a large number of works that cast doubt on the very idea of a "greenhouse" effect caused by the accumulation of carbon dioxide.
Base year as 1990 controversy
Further, there is controversy surrounding the use of 1990 as a base year, as well as not using per capita emissions as a basis. Countries had achieved different levels of energy efficiency by 1990. For example, the former Soviet Union and eastern European countries did little to tackle the problem, and their energy efficiency was at its worst level in 1990, the year just before their communist regimes fell. On the other hand, Japan, as a big importer of natural resources, had to improve its efficiency after the 1973 oil crisis, and its emissions level in 1990 was better than in most developed countries. However, such efforts were set aside, and the inactivity of the former Soviet Union was overlooked and could even generate substantial income through emissions trading. There is an argument that the use of per capita emissions as a basis in subsequent Kyoto-type treaties could reduce the sense of inequality among developed and developing countries alike, as it would reveal each country's level of activity and responsibility.
James Hansen's criticism
James E. Hansen, director of NASA's Goddard Institute for Space Studies and eminent climate scientist, claimed that the United Nations Climate Change Conference taking place at the Bella Center in Copenhagen, Denmark, between December 7–18, 2009 (which included the 15th Conference of the Parties (COP 15) to the United Nations Framework Convention on Climate Change and the 5th Meeting of the Parties (COP/MOP 5) to the Kyoto Protocol) was a 'farce', and planned to boycott it because it was seeking a counter-productive agreement to limit emissions through an inefficient and indulgent "cap and trade" system. "They are selling indulgences there," Hansen states. "The developed nations want to continue basically business as usual so they are expected to purchase indulgences to give some small amount of money to developing countries. They do that in the form of offsets and adaptation funds." Hansen prefers a progressive "carbon tax" rather than the Kyoto Protocol's "cap and trade" system; this tax would begin at the equivalent of about $1 per gallon of petrol, and its revenues would all be returned directly to members of the public as a dividend, so that those with smaller carbon footprints would come out ahead.
"So, for example, in the Kyoto Protocol, that was very ineffective. Even the countries that took on supposedly the strongest requirements, like Japan for example—if you look at its actual emissions, its actual fossil fuel use, you see that their CO2 emissions actually increased even though they were supposed to decrease. Because their coal use increased and they used offsets to meet their objective. Offsets don't help significantly. That's why the approach that Copenhagen is using to specify goals for emission reductions and then to allow offsets to accomplish much of that reduction is really a fake. And that has to be exposed. Otherwise, just like in the Kyoto Protocol, we'll realize 10 years later, oops, it really didn't do much."
Green organizations' criticism
Rising Tide North America claims:
"Emission limits do not include emissions by international aviation and shipping, but are in addition to the industrial gases, chlorofluorocarbons, or CFCs, which are dealt with under the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer. The benchmark 1990 emission levels were accepted by the Conference of the Parties of UNFCCC (decision 2/CP.3)"
Exemption of Developing Countries
There has been criticism (especially from the United States) over the exemption of developing countries, such as China and India, from having to reduce their greenhouse gas emissions under the Kyoto Protocol. The Bush Administration criticized the Kyoto Protocol on the basis that 80 percent of the world is exempt from emissions reduction standards, as well as on the potential for economic harm to the United States. A further argument is that developing countries, both at the time the treaty was created and since, have been large emitters of greenhouse gases. Greenhouse gases do not remain in the area in which they are emitted, but rather move throughout the atmosphere of Earth. Therefore, some say that even if the world's largest greenhouse gas emitter tackled the issue of climate change, there would be minimal impact on the atmosphere if other countries around the world did not work on reducing their emission levels as well. There is also criticism over the true long-run impact of the Kyoto Protocol on reducing greenhouse gas emissions, because it is questionable how much developed countries can offset their emissions while developing countries continue to emit these greenhouse gases.
Long-Term Impact
There is criticism that the Kyoto Protocol does not do enough to address the issue of climate change and pollution in the long run. One criticism is that climate change is a unique environmental issue, but the Kyoto Protocol followed the format of other international treaties (not necessarily useful for environmental issues) instead of promoting innovation in approaching the issue of global warming. Another criticism is that the Kyoto Protocol focuses too much on carbon emissions and does not address other pollutants, such as sulfur dioxide and nitrogen oxides, which do direct harm to human health and can be addressed using existing technology. Some also claim that the Kyoto Protocol does not promote long-term solutions to reduce greenhouse gas emissions, but rather short-term solutions in which countries try to meet emission reduction standards (either by lowering emissions or by finding ways to obtain trading credits). In the same way, there has been criticism that the Kyoto Protocol addresses greenhouse gas emissions rather than the concentration of atmospheric greenhouse gases, again favoring the short term over the long term.
Oregon Petition
The Global Warming Petition Project, also known as the Oregon Petition, is a petition urging the United States government to reject the global warming Kyoto Protocol of 1997 and similar policies. The petition's website states, "The current list of 31,487 petition signers includes 9,029 PhD; 7,157 MS; 2,586 MD and DVM; and 12,715 BS or equivalent academic degrees."
The text of the Global Warming Petition Project reads: We urge the United States government to reject the global warming agreement that was written in Kyoto, Japan in December, 1997...The proposed limits on greenhouse gases would harm the environment, hinder the advance of science and technology, and damage the health and welfare of mankind...There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth's atmosphere and disruption of the Earth's climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.
Criticism of Carbon Trade
There are a large number of critics of carbon trading as a control mechanism. Critics include environmental justice nongovernmental organizations, economists, labor organizations and those concerned about energy supply and excessive taxation. Some see carbon trading as a government takeover of the free market. They argue that trading pollution allowances should be avoided because it results in failures of accounting, dubious science and destructive impacts of projects upon local peoples and environments. Instead, they advocate making reductions at the source of pollution and energy policies that are justice-based and community-driven. Many argue that emissions trading schemes based upon cap and trade will necessarily reduce jobs and incomes. Most of the criticism has focused on the carbon market created through investment in the Kyoto Mechanisms. Criticism of cap-and-trade emissions trading has generally been more limited to the lack of credibility in the first phase of the EU ETS. Critics argue that emissions trading does little to solve pollution problems overall, since groups that do not pollute sell their conservation to the highest bidder. Overall reductions would need to come from a sufficient reduction of allowances available in the system.
Regulatory agencies run the risk of issuing too many emission credits, diluting the effectiveness of regulation and practically removing the cap. In this case, instead of a net reduction in carbon dioxide emissions, beneficiaries of emissions trading simply pollute more. The National Allocation Plans by member governments of the European Union Emission Trading Scheme were criticised for this when it became apparent that actual emissions would be less than the government-issued carbon allowances at the end of Phase I of the scheme. Certain emissions trading schemes have been criticised for the practice of grandfathering, where polluters are given free allowances by governments instead of being made to pay for them. Critics instead advocate auctioning the credits; the proceeds could be used for research and development of sustainable technology. Critics of carbon trading, such as Carbon Trade Watch, argue that it places disproportionate emphasis on individual lifestyles and carbon footprints, distracting attention from the wider, systemic changes and collective political action that need to be taken to tackle climate change. Groups such as the Corner House have argued that the market will choose the easiest means to save a given quantity of carbon in the short term, which may be different from the pathway required to obtain sustained and sizable reductions over a longer period, and so a market-led approach is likely to reinforce technological lock-in. For instance, small cuts may often be achieved cheaply through investment in making a technology more efficient, where larger cuts would require scrapping the technology and using a different one. They also argue that emissions trading is undermining alternative approaches to pollution control with which it does not combine well, so that its overall effect is actually to stall significant change toward less polluting technologies.
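A minimal sketch of the over-allocation problem described above, using invented firms and numbers: if the allowances issued exceed what participants would have emitted anyway, the cap never binds, no abatement is required, and the allowance price tends toward zero.

```python
# Minimal illustration of how over-allocating allowances makes a cap non-binding.
# Firms, emissions, and allowance figures are hypothetical.

projected_emissions = {"firm A": 90, "firm B": 120, "firm C": 60}   # Mt CO2
allowances_issued   = {"firm A": 110, "firm B": 130, "firm C": 70}  # Mt CO2

total_emissions = sum(projected_emissions.values())
total_cap = sum(allowances_issued.values())

print(f"Projected emissions: {total_emissions} Mt, allowances issued: {total_cap} Mt")
if total_cap >= total_emissions:
    print("Cap does not bind: no abatement is needed and the allowance price tends toward zero.")
else:
    shortfall = total_emissions - total_cap
    print(f"Cap binds: firms must abate or buy allowances covering {shortfall} Mt in total.")
```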
Under a cap-and-trade scheme the quantity of emissions is fixed but the allowance price is uncertain; the corresponding uncertainty under a tax is the level of emissions reductions achieved. The Financial Times published an article about cap-and-trade systems which argued that "Carbon markets create a muddle" and "...leave much room for unverifiable manipulation". A more recent criticism of emissions trading as implemented is that old-growth forests, which have slow carbon absorption rates, are being cleared and replaced with fast-growing vegetation, to the detriment of local communities. Recent proposals for alternative schemes to avoid the problems of cap-and-trade schemes include Cap and Share, which was being actively considered by the Irish Parliament in May 2008, and the Sky Trust schemes. These proposals argue that cap-and-trade or cap-and-tax schemes inherently impact the poor and those in rural areas, who have less choice in energy consumption options.
See also
Carbon emission trading
References
Notes
https://web.archive.org/web/20100727230956/http://www.ilr.cornell.edu/globallaborinstitute/projects/climate/retreat/upload/ClimateFederalSweeney.pdf list of various NGOs that oppose carbon trading and claim the Kyoto Protocol is not enough
Stop Climate Chaos coalition of NGOs that argues Kyoto is not enough
Carbon Trade Watch http://news.bbc.co.uk/2/hi/science/nature/6132826.stm BBC
Transnational Institute http://www.tni.org/archives/reports_ctw_sky full report
Transnational Institute http://www.tni.org/carbon-trade-fails full report published by Dag Hammarskjöld Foundation
The Corner House (organisation) http://www.thecornerhouse.org.uk/item.shtml?x=51982 report
http://www.washingtonexaminer.com/opinion/blogs/beltway-confidential/Scientists-urge-Merkel-to-change-global-warming-view--52513912.html Scientists urge Merkel to change global warming view
https://web.archive.org/web/20110701145855/http://epw.senate.gov/public/index.cfm?FuseAction=Files.View&FileStore_id=83947f5d-d84a-4a84-ad5d-6e2d71db52d9 Minority Report from US Senate
http://pubs.acs.org/cen/letters/87/8730letters.html Letters from chemists repudiating the editor-in-chief of their scientific magazine
External links
James Hansen
http://www.treehugger.com/files/2009/12/nasa-climate-change-scientist-to-boycott-copenhagen-climate-summit.php TreeHugger
https://www.theguardian.com/science/2009/mar/18/nasa-climate-change-james-hansen The Guardian
http://www.thestar.com/sciencetech/Environment/article/285582 Toronto Star
https://web.archive.org/web/20091207233524/http://www.ecofactory.com/news/top-nasa-climate-scientist-copenhagen-must-fail-120309
Hansen quotes on climate change in the British scientific journal Nature:
Heffernan, Olive (3 December 2009). "Crunch time for climate change". Nature Climate Change. 1 (912): 134. doi:10.1038/climate.2009.127.
Kloor, Keith (26 November 2009). "The eye of the storm". Nature Climate Change. 1 (912): 139–140. doi:10.1038/climate.2009.124.
Heffernan, Olive (5 May 2009). "Sufficient certainty". Nature Climate Change. 1 (905): 53. doi:10.1038/climate.2009.42.
Inman, Mason (30 April 2009). "A sensitive subject". Nature Climate Change. 1 (905): 59–61. doi:10.1038/climate.2009.41.
Ackerman, Frank (9 April 2009). "Stern advice for Copenhagen". Nature Climate Change. 1 (905): 62–63. doi:10.1038/climate.2009.34.
Kleiner, Kurt (19 February 2009). "Peak energy: promise or peril?". Nature Climate Change. 1 (903): 31–33. doi:10.1038/climate.2009.19.
Inman, Mason (15 January 2009). "Where warming hits hard". Nature Climate Change. 1 (902): 18–21. doi:10.1038/climate.2009.3.
Inman, Mason (20 November 2008). "Carbon is forever". Nature Climate Change. 1 (812): 156–158. doi:10.1038/climate.2008.122.
Oppenheimer, Michael (16 January 2008). "An outspoken scientist". Nature Climate Change. 1 (802): 20–21. doi:10.1038/climate.2008.3.
Haag, Amanda Leigh (September 2007). "The even darker side of brown clouds". Nature Climate Change. 1 (709): 52–53. doi:10.1038/climate.2007.41.
Leigh, Amanda (18 December 2008). "What we've learned in 2008". Nature Climate Change. 1 (901): 4–6. doi:10.1038/climate.2008.142.
A graphical representation of the protocol's failures & achievements |
decarbonization of shipping | The decarbonization of shipping is an ongoing goal to reduce greenhouse gas emissions from shipping to net-zero by or around 2050, which is the target set by the International Maritime Organization (IMO). The IMO has adopted an initial strategy toward this goal, which includes lowering or limiting the combustion of fossil fuels for power and propulsion in order to limit emissions of carbon dioxide (CO2).
Background
International trade of goods is primarily sea-based, followed by pipeline, air, and then rail/truck. Most sea vessels that transport goods use diesel or fuel oil, generating carbon dioxide. The maritime shipping industry transported almost 11 billion metric tonnes of cargo in 2022, which accounts for nearly 3% of global carbon dioxide emissions. These emissions and potential oil spills can pose chronic risks to coastal regions, marine life, and ultimately ocean health in terms of pH and ecological diversity. A decrease in pH would make the oceans more acidic, lower the availability of free carbonates (which are a component of shellfish and coral exoskeletons/scaffolds), and decrease the conversion of CO2 to carbonates. These are some of the environmental effects of shipping.
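The pH and carbonate effects mentioned above follow from the standard seawater carbonate equilibria (textbook chemistry, not anything specific to shipping):

```latex
\mathrm{CO_2(aq) + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^- \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}
```

Dissolving additional CO2 pushes these equilibria toward more hydrogen ions, lowering pH; the extra H+ also reacts with existing carbonate ions (H+ + CO3^2- → HCO3-), which is why less carbonate remains available to organisms that build calcium carbonate shells and skeletons.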
Issue
Since marine shipping moves nearly 80% of goods by tonnage, and shipping volumes are expected to double and may even triple by 2050, decarbonization strategies are critical to tackling global warming and protecting marine health. Many major shipping entities have pledged to cut carbon emissions with the goal of carbon neutrality by 2050. An industry forum called the "Getting to Zero Coalition" has set a goal of carbon neutrality by 2030, which cannot be met by any single approach.
In 2021 the Center for Strategic and International Studies stated that governments and shipping industry leaders, such as Maersk, Mediterranean Shipping Company, and France’s CMA CGM, "have shown interest in decarbonization projects." In 2021 the European Union (EU) signaled "strong policy support for maritime decarbonization through their ‘Fit For 55’ (FF55) proposal, a package of 14 legislative proposals." Groups that represent more than 90% of the global shipping industry have called for a globally applicable carbon tax on the shipping industry's emissions, in order to provide financial incentives for implementation of new technologies and to provide necessary funding for research and development. A 2021 article states that extensive research and development is needed, as well as retrofitting and operational changes. The rapidly changing industry response to decarbonization can be monitored via a weekly newsletter, several conferences, and a two-day online overview course. "Delay beyond 2023 would mean the future transition for international shipping is too rapid to be feasible," says Alice Larkin. "It has to be all hands on deck for international shipping now."
Proposed solutions
Various approaches have been proposed or implemented, such as the use of low-carbon feedstocks (methanol, ammonia) or hydrogen, electrification with energy storage, construction of ships with lighter materials of high tensile strength, and digital operations for enhanced transport efficiency and container ship packing. Some ships are partially automated with a skeleton crew to reduce the potential for human error, using telemetry based on onboard ship sensors, cloud computing, and machine learning or neural network-based decision-making. In larger shipping operations, a digital twin is created to simulate the trajectory based on real data from the actual ship, allowing operational managers to predict future scenarios and make decisions. These tools must be transparent yet safe, to avoid hijacking and interference with other ships or transport, while also being low-cost for most operators to deploy and maintain. Electric ships are useful for short trips. Sparky, an "all-electric 70 tonne bollard pull harbor tugboat", is "the first e-tug of its type in the world." Sparky was christened in Auckland in August 2022. The world's first hybrid tugboat, the Foss tug Carolyn Dorothy, began operation in 2009 in the Port of Los Angeles and the Port of Long Beach. The tour boat Kvitbjørn ("polar bear") operates in Svalbard, just a few hundred miles from the North Pole, piloting a newly developed Volvo Penta hybrid-electric propulsion system. In June 2022, the Danish electric ferry Ellen made a record 90 km voyage on a single charge. Net-zero fuels could be used, for example in ammonia- or hydrogen-powered ships. Green hydrogen and ammonia produced from zero-carbon electricity (solar or wind power) are "the most promising options ... to decarbonize shipping" in 2022, according to the World Bank. Biofuels can be net-zero fuels if "the production of fuel removes a quantity of carbon dioxide from the atmosphere that is equivalent to the amount of carbon dioxide emitted during combustion." On July 21, 2022, Carnival's AIDAprima "became the first larger scale cruise ship to be bunkered with a blend of marine biofuel ... made from 100% sustainable raw materials such as waste cooking oil, and marine gas oil (MGO)." As of April 2022, "ammonia, methanol and methane are viable deep sea shipping fuels, while compressed and liquid hydrogen are not", according to a World Economic Forum article. The world's first hydrogen-powered tugboat was launched in May 2022 at the Astilleros Armon shipyard in Navia, Spain, and was scheduled to enter service in the Port of Antwerp-Bruges in December 2022. Dual-fuel engines, fuel storage options, and retrofit readiness are important to ensure adaptability. Stena was the first shipowner in the world to retrofit a large vessel for methanol, converting its ro-pax Stena Germanica in 2015. Stena is partnering with methanol producer Proman and with MAN Energy Solutions to retrofit engines for dual-fuel operation on diesel and methanol. Wind power is a traditional choice for shipping. Wallenius Marine is "developing the Oceanbird, a cargo ship powered by wind that can carry 7,000 cars." K Line is installing Seawing wind propulsion systems on five of its bulk carriers. The kite parafoils, which fly about 300 meters above sea level, are estimated to reduce emissions by about 20%. Nuclear marine propulsion has been suggested as the only long-proven and scalable propulsion technology that produces practically zero greenhouse gas emissions.
Small modular reactors for shipping are being investigated in South Korea. The European Investment Bank invests in port infrastructure to improve sustainability and reduce global transport chain emissions, including efforts that mitigate pollution from moored ships, such as shoreside electricity and ship garbage receiving facilities. Between 2018 and 2022, the Bank invested €1.3 billion in ports.
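As a very rough illustration of the digital-twin idea mentioned above (a simulation kept in step with telemetry from the real vessel), the sketch below compares a simple cubic speed-to-fuel model against reported consumption so that drift due to hull fouling, weather, or sensor faults can be flagged. The model form, calibration constant, and telemetry values are all assumptions for illustration, not an actual shipping system.

```python
# Toy "digital twin" sketch: a calibrated fuel model is compared against
# telemetry from the real ship so operators can spot drift and evaluate
# speed choices. All numbers are invented.

def predicted_fuel_tonnes_per_day(speed_knots: float, k: float = 0.012) -> float:
    # Fuel burn of a displacement hull rises roughly with the cube of speed;
    # k is a ship-specific constant assumed to be fitted from past voyages.
    return k * speed_knots ** 3

telemetry = [  # (speed over ground in knots, reported fuel burn in tonnes/day)
    (12.0, 21.5),
    (14.0, 33.8),
    (16.0, 58.0),
]

for speed, reported in telemetry:
    expected = predicted_fuel_tonnes_per_day(speed)
    drift = reported - expected
    status = "investigate" if abs(drift) > 0.1 * expected else "ok"
    print(f"{speed:4.1f} kn: model {expected:5.1f} t/d, "
          f"reported {reported:5.1f} t/d, drift {drift:+5.1f} t/d -> {status}")
```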
References
External links
Maersk Mc-Kinney Møller Center for Zero Carbon Shipping |
phragmites australis | Phragmites australis, known as the common reed, is a species of flowering plant in the grass family Poaceae. It is a wetland grass that can grow up to 20 feet (6 metres) tall and has a cosmopolitan distribution worldwide.
Description
Phragmites australis commonly forms extensive stands (known as reed beds), which may be as much as 1 square kilometre (0.39 square miles) or more in extent. Where conditions are suitable it can also spread at 5 metres (16 feet) or more per year by horizontal runners, which put down roots at regular intervals. It can grow in damp ground, in standing water up to 1 m (3 ft 3 in) or so deep, or even as a floating mat. The erect stems grow to 2–4 m (6½–13 ft) tall, with the tallest plants growing in areas with hot summers and fertile growing conditions.
The leaves are 18–60 centimetres (7–23½ in) long and 1–6 cm (½–2¼ in) broad. The flowers are produced in late summer in a dense, dark purple panicle, about 15–40 cm (6–15½ in) long. Later the numerous long, narrow, sharp pointed spikelets appear greyer due to the growth of long, silky hairs. These eventually help disperse the minute seeds.
Taxonomy
Recent studies have characterized morphological distinctions between the introduced and native stands of Phragmites australis in North America. The Eurasian phenotype can be distinguished from the North American phenotype by its shorter ligules of up to 0.9 millimetres (1⁄32 in) as opposed to over 1 mm, shorter glumes of under 3.2 mm (1⁄8 in) against over 3.2 mm (although there is some overlap in this character), and in culm characteristics.
Phragmites australis subsp. americanus – the North American genotype has been described as a distinct species, Phragmites americanus
Phragmites australis subsp. australis – the Eurasian genotype
Phragmites australis subsp. berlandieri (E.Fourn.) Saltonst. & Hauber
Phragmites australis subsp. isiacus (Arcang.) ined.
Ecology
It is a helophyte (aquatic plant), especially common in alkaline habitats, and it also tolerates brackish water, and so is often found at the upper edges of estuaries and on other wetlands (such as grazing marsh) which are occasionally inundated by the sea. A study demonstrated that P. australis has similar greenhouse gas emissions to the native Spartina alterniflora. However, other studies have demonstrated that it is associated with larger methane emissions and greater carbon dioxide uptake than native New England salt marsh vegetation that occurs at higher marsh elevations. Common reed is suppressed where it is grazed regularly by livestock. Under these conditions it either grows as small shoots within the grassland sward, or it disappears altogether. In Europe, common reed is rarely invasive, except in damp grasslands where traditional grazing has been abandoned.
Invasive status
In North America, the status of Phragmites australis is a source of confusion and debate. It is commonly considered a non-native and often invasive species, introduced from Europe in the 1800s. However, there is evidence of the existence of Phragmites as a native plant in North America long before European colonization of the continent. The North American native subspecies, P. a. subsp. americanus (sometimes considered a separate species, P. americanus), is markedly less vigorous than European forms. The expansion of Phragmites in North America is due to the more vigorous, but similar-looking, European subsp. australis. Phragmites australis subsp. australis outcompetes native vegetation and lowers local plant biodiversity. It forms dense thickets of vegetation that are unsuitable habitat for native fauna. It displaces native plant species such as wild rice, cattails, and native orchids. Phragmites has a high above-ground biomass that blocks light to other plants, allowing areas to turn into Phragmites monoculture very quickly. Decomposing Phragmites increases the rate of marsh accretion more rapidly than would occur with native marsh vegetation. Phragmites australis subsp. australis is causing serious problems for many other North American hydrophyte wetland plants, including the native P. australis subsp. americanus. Gallic acid released by Phragmites is degraded by ultraviolet light to produce mesoxalic acid, effectively hitting susceptible plants and seedlings with two harmful toxins. Phragmites is so difficult to control that one of the most effective methods of eradicating the plant is to burn it over 2–3 seasons. The roots grow so deep and strong that one burn is not enough. Ongoing research suggests that goats could be effectively used to control the species.
Natural enemies
Since 2017, over 80% of the beds of Phragmites in the Pass a Loutre Wildlife Management Area have been damaged by the invasive roseau cane scale (Nipponaclerda biwakoensis), threatening wildlife habitat throughout the affected regions of the area. While typically considered a noxious weed, in Louisiana the reed beds are considered critical to the stability of the shorelines of wetland areas and waterways of the Mississippi Delta, and the die-off of reed beds is believed to accelerate coastal erosion.
Uses
The entire plant is edible raw or cooked. The young stems can be boiled, or later on be used to make flour. The underground stems can be used but are tough, as can the seeds, but they are hard to find. Stems can be made into eco-friendly drinking straws. Many parts of the plant can be eaten. The young shoots can be consumed raw or cooked. The hardened sap from damaged stems can be eaten fresh or toasted. The stems can be dried, ground, sifted, hydrated, and toasted like marshmallows. The seeds can be crushed, mixed with berries and water, and cooked to make a gruel. The roots can be prepared similarly to those of cattails. Common reed is the primary source of thatch for traditional thatched housing in Europe and beyond. The plant is extensively used in phytodepuration, or natural water treatment systems, since the root hairs are excellent at filtering impurities out of waste water. It also shows excellent potential as a source of biomass.
References
Further reading
Invasive Phragmites (Phragmites australis) Best Management Practices in Ontario (PDF). Archived from the original (PDF) on 6 February 2021. |
drax group | Drax Group PLC is a power generation business. The principal downstream enterprises are based in the UK and include Drax Power Limited, which runs the biomass fuelled Drax power station, near Selby in North Yorkshire. The Group also runs an international biomass supply chain business. The company is listed on the London Stock Exchange and is a constituent of the FTSE 250 Index.
In 2021, the company was taken out of the S&P Global Clean Energy Index, as it is no longer considered to be a "clean" energy company by the S&P.
History
In 1990, the electricity industry of England and Wales was privatised under the Electricity Act 1989. Three generating companies and 12 regional electricity companies were created. As a result of privatisation, Drax Power Station came under the ownership of National Power, one of the newly formed generating companies. Over the years that followed privatisation, the map of the industry changed dramatically. One significant change was the emergence of vertically integrated companies, combining generation, distribution and supply interests. In certain cases, it became necessary for generation assets to be divested, and so in 1999 Drax Power Station was acquired by the US-based AES Corporation for £1.87 billion (US$3 billion). A partial re-financing of Drax was completed in 2000, with £400 million of senior bonds being issued by AES Drax Holdings, and £267 million of subordinated debt issued by AES Drax Energy. Increased competition, over-capacity and new trading arrangements contributed to a significant drop in wholesale electricity prices, which hit an all-time low in 2002. Many companies experienced financial problems, and Drax Power Station's major customer went into administration, triggering financial difficulties for Drax. Following a series of standstill agreements with its creditors, the AES Corporation and Drax parted company in August 2003. During the restructuring, a number of bids were received from companies wishing to take a stake in the ownership of Drax, but creditors voted overwhelmingly to retain their interest in Drax. In December 2003, the restructuring was completed and Drax came under the ownership of a number of financial institutions. Almost exactly two years later, on 15 December 2005, Drax underwent a re-financing and shares in Drax Group plc were listed on the London Stock Exchange for the first time. In 2009, Drax Group acquired Haven Power – enabling it to sell electricity directly. In 2015, the Group acquired Billington Bioenergy, specialists in providing sustainable biomass pellets for domestic energy systems. In 2016, Drax Group acquired Opus Energy for £340 million, funded by a new acquisition debt facility of up to £375 million. In October 2017, Drax sold Billington Bioenergy for £2 million to an AIM-listed energy company called Aggregated Micro Power Holdings. On 16 October 2018, Drax Group announced that it had agreed to acquire Scottish Power's portfolio of pumped storage, hydro and gas-fired generation for £702 million in cash from Iberdrola, subject to shareholder approval. Drax confirmed that approval had been granted on 1 January 2019. The acquisition brought with it Cruachan pumped storage power station, Rye House power station, Damhead Creek power station, the Galloway hydro-electric power scheme, the Lanark Hydro Electric Scheme, Shoreham Power Station and Blackburn Mill power station. On 15 December 2020, Drax Group announced the sale of Rye House, Damhead Creek, Shoreham and Blackburn Mill to VPI Holdings for £193.3m. On 13 April 2021, Drax announced that it had completed the acquisition of Pinnacle Renewable Energy Inc.
Operations
Drax Group's key asset is Drax Power Station. Originally built, owned and operated by the Central Electricity Generating Board (CEGB), Drax Power Station was constructed and commissioned in two stages. Stage one (units 1, 2 and 3) was completed in 1974. Some 12 years later, in 1986, stage two (units 4, 5 and 6) was completed. Drax was the last coal-fired power station to be built in the UK, and was initially designed to use low-sulphur coal from the nearby Selby coalfield in six generating units. Each unit has a capacity of 660 MW when burning coal, giving a total capacity of just under 4 GW. This made Drax the largest power station in the UK. Related enterprises include Drax Biomass (which specialises in producing biomass pellets to be used to generate electricity and fuel domestic heating systems), Baton Rouge Transit (which is responsible for storing and loading the biomass at the Port of Baton Rouge), Haven Power (an electricity supplier) and Opus Energy (a supplier of gas and electricity to businesses across the United Kingdom).
Controversies
The company has attracted a series of protests in the past: (i) a climate camp on 31 August 2006, attended by over 600 people protesting against the high carbon emissions, at which 39 people were arrested after trying illegally to gain access to the plant; (ii) a train protest on 13 June 2008, attended by 30 climate change campaigners who halted an EWS coal train en route to the station; and (iii) a worker strike on 18 June 2009, when up to 200 contractors walked out or failed to show up in a wildcat strike. Also, in October 2011 a fire started by spontaneous combustion in a stockpile at the Port of Tyne biomass facility. Another fire occurred at the same facility in a conveyor transfer tower in October 2013. A "virtual protest" was held in April 2020 by Biofuelwatch, which claims that Drax is the UK's largest emitter of carbon dioxide and that the wood pellets Drax burns are leading to the destruction of forests in the southern United States. Protesters also claim that the company is burning more wood than any other power station in the world. The company had proposed to build a new 3.6 GW gas-fired power plant at Selby, which was expected to account for 75% of the UK's power sector emissions once operational. A protest took place outside the company's offices in London in July 2019 and further protests took place in Yorkshire in August 2020. Protesters claimed that the company was asking for substantial subsidies to operate the new plant "in addition to the £2.36 million a day it already receives for burning biomass." After ministers overruled objections from the planning authority and approved the plant, a legal challenge was brought against the decision but failed in the courts in January 2021. However, the company announced in February 2021 that the plans for the new plant had been abandoned. On 3 October 2022, BBC Panorama aired an episode showing how pellets burned in Drax power plants came from natural-growth forest in British Columbia, travelling 11,000 miles by ship.
References
External links
Official website |
skyscraper | A skyscraper is a tall, continuously habitable building having multiple floors. Modern sources currently define skyscrapers as being at least 100 meters (330 ft) or 150 meters (490 ft) in height, though there is no universally accepted definition, other than being very tall high-rise buildings. Historically, the term first referred to buildings with between 10 and 20 stories when these types of buildings began to be constructed in the 1880s. Skyscrapers may host offices, hotels, residential spaces, and retail spaces.
One common feature of skyscrapers is having a steel frame that supports curtain walls. This idea was advanced by Viollet-le-Duc in his Discourses on Architecture. These curtain walls either bear on the framework below or are suspended from the framework above, rather than resting on load-bearing walls of conventional construction. Some early skyscrapers have a steel frame that enables the construction of load-bearing walls taller than those made of reinforced concrete.
Modern skyscraper walls are not load-bearing, and most skyscrapers are characterized by large surface areas of windows made possible by steel frames and curtain walls. However, skyscrapers can have curtain walls that mimic conventional walls with a small surface area of windows. Modern skyscrapers often have a tubular structure, and are designed to act like a hollow cylinder to resist wind, seismic, and other lateral loads. To appear more slender, allow less wind exposure and transmit more daylight to the ground, many skyscrapers have a design with setbacks, which in some cases is also structurally required.
As of September 2023, fourteen cities in the world have more than 100 skyscrapers that are 150 m (492 ft) or taller: Hong Kong with 552 skyscrapers; Shenzhen, China with 373 skyscrapers; New York City, US with 314 skyscrapers; Dubai, UAE with 252 skyscrapers; Guangzhou, China with 188 skyscrapers; Shanghai, China with 183 skyscrapers; Tokyo, Japan with 168 skyscrapers; Kuala Lumpur, Malaysia with 156 skyscrapers; Wuhan, China with 149 skyscrapers; Chongqing, China, with 144 skyscrapers; Chicago, US, with 137 skyscrapers; Chengdu, China with 117 skyscrapers; Jakarta, Indonesia, with 112 skyscrapers; and Bangkok, Thailand, with 111 skyscrapers.
Definition
The term "skyscraper" was first applied to buildings of steel-framed construction of at least 10 stories in the late 19th century, a result of public amazement at the tall buildings being built in major American cities like New York City, Philadelphia, Boston, Chicago, Detroit, and St. Louis.The first steel-frame skyscraper was the Home Insurance Building, originally 10 stories with a height of 42 m or 138 ft, in Chicago in 1885; two additional stories were added. Some point to Philadelphia's 10-story Jayne Building (1849–50) as a proto-skyscraper, or to New York's seven-floor Equitable Life Building, built in 1870. Steel skeleton construction has allowed for today's supertall skyscrapers now being built worldwide. The nomination of one structure versus another being the first skyscraper, and why, depends on what factors are stressed.The structural definition of the word skyscraper was refined later by architectural historians, based on engineering developments of the 1880s that had enabled construction of tall multi-story buildings. This definition was based on the steel skeleton—as opposed to constructions of load-bearing masonry, which passed their practical limit in 1891 with Chicago's Monadnock Building.
What is the chief characteristic of the tall office building? It is lofty. It must be tall. The force and power of altitude must be in it, the glory and pride of exaltation must be in it. It must be every inch a proud and soaring thing, rising in sheer exaltation that from bottom to top it is a unit without a single dissenting line.
— Louis Sullivan's The Tall Office Building Artistically Considered (1896)
Some structural engineers define a high-rise as any vertical construction for which wind is a more significant load factor than earthquake or weight. Note that this criterion fits not only high-rises but some other tall structures, such as towers.
Different organizations from the United States and Europe define skyscrapers as buildings at least 150 m (490 ft) in height or taller, with "supertall" skyscrapers for buildings higher than 300 m (984 ft) and "megatall" skyscrapers for those taller than 600 m (1,969 ft). The tallest structure in ancient times was the 146 m (479 ft) Great Pyramid of Giza in ancient Egypt, built in the 26th century BC. It was not surpassed in height for thousands of years, until the 160 m (520 ft) Lincoln Cathedral exceeded it from 1311 to 1549, before its central spire collapsed. The latter in turn was not surpassed until the 555-foot (169 m) Washington Monument in 1884. However, being uninhabited, none of these structures actually comply with the modern definition of a skyscraper. High-rise apartments flourished in classical antiquity. Ancient Roman insulae in imperial cities reached 10 and more stories. Beginning with Augustus (r. 30 BC–14 AD), several emperors attempted to establish limits of 20–25 m for multi-story buildings, but were met with only limited success. Lower floors were typically occupied by shops or wealthy families, with the upper floors rented to the lower classes. Surviving Oxyrhynchus Papyri indicate that seven-story buildings existed in provincial towns, such as 3rd-century AD Hermopolis in Roman Egypt. The skylines of many important medieval cities had large numbers of high-rise urban towers, built by the wealthy for defense and status. The residential towers of 12th-century Bologna numbered between 80 and 100 at a time, the tallest of which is the 97.2 m (319 ft) high Asinelli Tower. A Florentine law of 1251 decreed that all urban buildings be immediately reduced to less than 26 m. Even medium-sized towns of the era are known to have had proliferations of towers, such as the 72 towers that ranged up to 51 m in height in San Gimignano. The medieval Egyptian city of Fustat housed many high-rise residential buildings, which Al-Muqaddasi in the 10th century described as resembling minarets. Nasir Khusraw in the early 11th century described some of them rising up to 14 stories, with roof gardens on the top floor complete with ox-drawn water wheels for irrigating them. Cairo in the 16th century had high-rise apartment buildings where the two lower floors were for commercial and storage purposes and the multiple stories above them were rented out to tenants. An early example of a city consisting entirely of high-rise housing is the 16th-century city of Shibam in Yemen. Shibam was made up of over 500 tower houses, each one rising 5 to 11 stories high, with each floor being an apartment occupied by a single family. The city was built in this way in order to protect it from Bedouin attacks. Shibam still has the tallest mudbrick buildings in the world, with many of them over 30 m (98 ft) high. An early modern example of high-rise housing was in 17th-century Edinburgh, Scotland, where a defensive city wall defined the boundaries of the city. Due to the restricted land area available for development, the houses increased in height instead. Buildings of 11 stories were common, and there are records of buildings as high as 14 stories. Many of the stone-built structures can still be seen today in the old town of Edinburgh. The oldest iron-framed building in the world, although only partially iron-framed, is The Flaxmill in Shrewsbury, England.
Built in 1797, it is seen as the "grandfather of skyscrapers", since its fireproof combination of cast iron columns and cast iron beams developed into the modern steel frame that made modern skyscrapers possible. In 2013 funding was confirmed to convert the derelict building into offices.
Early skyscrapers
In 1857, Elisha Otis introduced the safety elevator at the E. V. Haughwout Building in New York City, allowing convenient and safe transport to buildings' upper floors. Otis later introduced the first commercial passenger elevators to the Equitable Life Building in 1870, considered by some architectural historians to be the first skyscraper. Another crucial development was the use of a steel frame instead of stone or brick; otherwise the walls on the lower floors of a tall building would be too thick to be practical. An early development in this area was Oriel Chambers in Liverpool, England. It was only five floors high. The Royal Academy of Arts states, "critics at the time were horrified by its "large agglomerations of protruding plate glass bubbles". In fact, it was a precursor to Modernist architecture, being the first building in the world to feature a metal-framed glass curtain wall, a design element which creates light, airy interiors and has since been used the world over as a defining feature of skyscrapers". Further developments led to what many individuals and organizations consider the world's first skyscraper, the ten-story Home Insurance Building in Chicago, built in 1884–1885. While its original height of 42.1 m (138 ft) would not even qualify it as a skyscraper today, it was record-setting. The building of tall buildings in the 1880s gave the skyscraper its first architectural movement, broadly termed the Chicago School, which developed what has been called the Commercial Style. The architect, Major William Le Baron Jenney, created a load-bearing structural frame. In this building, a steel frame supported the entire weight of the walls, instead of load-bearing walls carrying the weight of the building. This development led to the "Chicago skeleton" form of construction. In addition to the steel frame, the Home Insurance Building also utilized fireproofing, elevators, and electrical wiring, key elements in most skyscrapers today. Burnham and Root's 45 m (148 ft) Rand McNally Building in Chicago, 1889, was the first all-steel framed skyscraper, while Louis Sullivan's 41 m (135 ft) Wainwright Building in St. Louis, Missouri, 1891, was the first steel-framed building with soaring vertical bands to emphasize the height of the building and is therefore considered by some to be the first early skyscraper. In 1889, the Mole Antonelliana in Italy reached 167 m (549 ft) in height.
Most early skyscrapers emerged in the land-strapped areas of New York City and Chicago toward the end of the 19th century. A land boom in Melbourne, Australia, between 1888 and 1891 spurred the creation of a significant number of early skyscrapers, though none of these were steel-reinforced and few remain today. Height limits and fire restrictions were later introduced. In the late 1800s, London builders found building heights limited due to issues with existing buildings. High-rise development in London is restricted at certain sites if it would obstruct protected views of St Paul's Cathedral and other historic buildings. This policy, 'St Paul's Heights', has officially been in operation since 1937. Concerns about aesthetics and fire safety had likewise hampered the development of skyscrapers across continental Europe for the first half of the 20th century. Some notable exceptions are the 43 m (141 ft) tall 1898 Witte Huis (White House) in Rotterdam; the 51.5 m (169 ft) tall PAST Building (1906–1908) in Warsaw; the Royal Liver Building in Liverpool, completed in 1911 and 90 m (300 ft) high; the 57 m (187 ft) tall 1924 Marx House in Düsseldorf, Germany; the 61 m (200 ft) Kungstornen (Kings' Towers) in Stockholm, Sweden, which were built 1924–25; the 89 m (292 ft) Edificio Telefónica in Madrid, Spain, built in 1929; the 87.5 m (287 ft) Boerentoren in Antwerp, Belgium, built in 1932; the 66 m (217 ft) Prudential Building in Warsaw, Poland, built in 1934; and the 108 m (354 ft) Torre Piacentini in Genoa, Italy, built in 1940.
After an early competition between New York City and Chicago for the world's tallest building, New York took the lead by 1895 with the completion of the 103 m (338 ft) tall American Surety Building, leaving New York with the title of the world's tallest building for many years.
Modern skyscrapers
Modern skyscrapers are built with steel or reinforced concrete frameworks and curtain walls of glass or polished stone. They use mechanical equipment such as water pumps and elevators. Since the 1960s, according to the CTBUH, the skyscraper has been reoriented away from being a symbol of North American corporate power toward communicating a city's or nation's place in the world.
Skyscraper construction entered a three-decades-long era of stagnation in 1930 due to the Great Depression and then World War II. Shortly after the war ended, Russia began construction on a series of skyscrapers in Moscow. Seven, dubbed the "Seven Sisters", were built between 1947 and 1953; and one, the Main building of Moscow State University, was the tallest building in Europe for nearly four decades (1953–1990). Other skyscrapers in the style of Socialist Classicism were erected in East Germany (Frankfurter Tor), Poland (PKiN), Ukraine (Hotel Moscow), Latvia (Academy of Sciences), and other Eastern Bloc countries. Western European countries also began to permit taller skyscrapers during the years immediately following World War II. Early examples include Edificio España (Spain) and Torre Breda (Italy).
From the 1930s onward, skyscrapers began to appear in various cities in East and Southeast Asia as well as in Latin America. Finally, they also began to be constructed in cities in Africa, the Middle East, South Asia, and Oceania from the late 1950s.
Skyscraper projects after World War II typically rejected the classical designs of the early skyscrapers, instead embracing the uniform international style; many older skyscrapers were redesigned to suit contemporary tastes or even demolished—such as New York's Singer Building, once the world's tallest skyscraper.
German-American architect Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century. He conceived the glass façade skyscraper and, along with Norwegian Fred Severud, designed the Seagram Building in 1958, a skyscraper that is often regarded as the pinnacle of modernist high-rise architecture.
Skyscraper construction surged throughout the 1960s. The impetus behind the upswing was a series of transformative innovations which made it possible for people to live and work in "cities in the sky".
In the early 1960s Bangladeshi-American structural engineer Fazlur Rahman Khan, considered the "father of tubular designs" for high-rises, discovered that the dominant rigid steel frame structure was not the only system suited to tall buildings, marking a new era of skyscraper construction in terms of multiple structural systems. His central innovation in skyscraper design and construction was the concept of the "tube" structural system, including the "framed tube", "trussed tube", and "bundled tube". His "tube concept", using all the exterior wall perimeter structure of a building to simulate a thin-walled tube, revolutionized tall building design. These systems allow greater economic efficiency, and also allow skyscrapers to take on various shapes, no longer needing to be rectangular and box-shaped. The first building to employ the tube structure was the DeWitt-Chestnut apartment building, considered to be a major development in modern architecture. These new designs opened an economic door for contractors, engineers, architects, and investors, providing vast amounts of real estate space on minimal plots of land. Over the next fifteen years, many towers were built by Fazlur Rahman Khan and the "Second Chicago School", including the hundred-story John Hancock Center and the massive 442 m (1,450 ft) Willis Tower. Other pioneers of this field include Hal Iyengar, William LeMessurier, and Minoru Yamasaki, the architect of the World Trade Center.
Many buildings designed in the 1970s lacked a particular style and recalled ornamentation from buildings designed before the 1950s. These design plans ignored the environment and loaded structures with decorative elements and extravagant finishes. This approach to design was opposed by Fazlur Khan, who considered the designs to be whimsical rather than rational. Moreover, he considered the work to be a waste of precious natural resources. Khan's work promoted structures integrated with architecture and using the least material, resulting in the smallest impact on the environment. The next era of skyscrapers will focus on the environment, including the performance of structures, types of material, construction practices, the absolute minimal use of materials and natural resources, the energy embodied within the structures, and, more importantly, a holistically integrated building systems approach.
Modern building practices regarding supertall structures have led to the study of "vanity height". Vanity height, according to the CTBUH, is the distance between a skyscraper's highest occupiable floor and its architectural top (excluding antennae, flagpoles or other functional extensions). Vanity height first appeared in New York City skyscrapers as early as the 1920s and 1930s, but supertall buildings have relied on such uninhabitable extensions for, on average, 30% of their height, raising potential definitional and sustainability issues. The current era of skyscrapers focuses on sustainability and the built and natural environments, including the performance of structures, types of materials, construction practices, absolute minimal use of materials and natural resources, energy within the structure, and a holistically integrated building systems approach. LEED is a current green building standard. Architecturally, with the movements of Postmodernism, New Urbanism and New Classical Architecture that have become established since the 1980s, a more classical approach has returned to global skyscraper design and remains popular today. Examples are the Wells Fargo Center, NBC Tower, Parkview Square, 30 Park Place, the Messeturm, the iconic Petronas Towers and Jin Mao Tower.
Other contemporary styles and movements in skyscraper design include organic, sustainable, neo-futurist, structuralist, high-tech, deconstructivist, blob, digital, streamline, novelty, critical regionalist, vernacular, Neo Art Deco and neohistorist, also known as revivalist.
3 September is the global commemorative day for skyscrapers, called "Skyscraper Day". New York City developers competed among themselves, with successively taller buildings claiming the title of "world's tallest" in the 1920s and early 1930s, culminating with the completion of the 318.9 m (1,046 ft) Chrysler Building in 1930 and the 443.2 m (1,454 ft) Empire State Building in 1931, the world's tallest building for forty years. The first completed 417 m (1,368 ft) tall World Trade Center tower became the world's tallest building in 1972. However, it was overtaken by the Sears Tower (now Willis Tower) in Chicago within two years. The 442 m (1,450 ft) tall Sears Tower stood as the world's tallest building for 24 years, from 1974 until 1998, until it was edged out by the 452 m (1,483 ft) Petronas Twin Towers in Kuala Lumpur, which held the title for six years.
Design and construction
The design and construction of skyscrapers involves creating safe, habitable spaces in very tall buildings. The buildings must support their weight, resist wind and earthquakes, and protect occupants from fire. Yet they must also be conveniently accessible, even on the upper floors, and provide utilities and a comfortable climate for the occupants. The problems posed in skyscraper design are considered among the most complex encountered given the balances required between economics, engineering, and construction management.
One common feature of skyscrapers is a steel framework from which curtain walls are suspended, rather than load-bearing walls of conventional construction. Most skyscrapers have a steel frame that enables them to be built taller than would be practical with load-bearing walls of reinforced concrete. Skyscrapers usually have a particularly small surface area of what are conventionally thought of as walls. Because the walls are not load-bearing, most skyscrapers are characterized by large surface areas of windows, made possible by the combination of steel frame and curtain wall. However, skyscrapers can also have curtain walls that mimic conventional walls and have a small surface area of windows.
The concept of a skyscraper is a product of the industrialized age, made possible by cheap fossil-fuel-derived energy and industrially refined raw materials such as steel and concrete. The construction of skyscrapers was enabled by steel frame construction, which began to surpass brick-and-mortar construction at the end of the 19th century and, together with reinforced concrete construction, finally displaced it in the 20th century as the price of steel decreased and labor costs increased.
The steel frames become inefficient and uneconomic for supertall buildings as usable floor space is reduced for progressively larger supporting columns. Since about 1960, tubular designs have been used for high rises. This reduces the usage of material (more efficient in economic terms – Willis Tower uses a third less steel than the Empire State Building) yet allows greater height. It allows fewer interior columns, and so creates more usable floor space. It further enables buildings to take on various shapes.
Elevators are characteristic of skyscrapers. In 1852 Elisha Otis introduced the safety elevator, allowing convenient and safe passenger movement to upper floors. Another crucial development was the use of a steel frame instead of stone or brick; otherwise the walls on the lower floors of a tall building would be too thick to be practical. Today the major manufacturers of elevators include Otis, ThyssenKrupp, Schindler, and KONE.
Advances in construction techniques have allowed skyscrapers to narrow in width, while increasing in height. Some of these new techniques include mass dampers to reduce vibrations and swaying, and gaps to allow air to pass through, reducing wind shear.
Basic design considerations
Good structural design is important in most building design, but particularly for skyscrapers since even a small chance of catastrophic failure is unacceptable given the tremendous damage such failure would cause. This presents a paradox to civil engineers: the only way to assure a lack of failure is to test for all modes of failure, in both the laboratory and the real world. But the only way to know of all modes of failure is to learn from previous failures. Thus, no engineer can be absolutely sure that a given structure will resist all loadings that could cause failure; instead, one can only have large enough margins of safety such that a failure is acceptably unlikely. When buildings do fail, engineers question whether the failure was due to some lack of foresight or due to some unknowable factor.
Loading and vibration
The load a skyscraper experiences is largely from the force of the building material itself. In most building designs, the weight of the structure is much larger than the weight of the material that it will support beyond its own weight. In technical terms, the dead load, the load of the structure, is larger than the live load, the weight of things in the structure (people, furniture, vehicles, etc.). As such, the amount of structural material required within the lower levels of a skyscraper will be much larger than the material required within higher levels. This is not always visually apparent. The Empire State Building's setbacks are actually a result of the building code at the time (1916 Zoning Resolution), and were not structurally required. On the other hand, John Hancock Center's shape is uniquely the result of how it supports loads. Vertical supports can come in several types, among which the most common for skyscrapers can be categorized as steel frames, concrete cores, tube within tube design, and shear walls.
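As a rough illustration of why lower levels need more structural material, the sketch below accumulates the dead load carried at each level of a uniform tower; the per-floor weight and floor count are invented purely for illustration and are not design figures.

```python
# Illustrative only: cumulative dead load carried at each level of a uniform tower.
# The per-floor weight (10 MN) and floor count (100) are assumed for illustration.
FLOORS = 100
FLOOR_DEAD_LOAD_MN = 10.0   # assumed dead load contributed by each floor, in meganewtons

def cumulative_load(level: int) -> float:
    """Dead load (MN) that the structure at a given level must carry,
    i.e. the weight of all floors above it."""
    floors_above = FLOORS - level
    return floors_above * FLOOR_DEAD_LOAD_MN

for level in (1, 25, 50, 75, 100):
    print(f"level {level:3d}: carries {cumulative_load(level):7.1f} MN")
# The load grows steadily toward the base, which is why column sizes
# (and the floor area they consume) increase on the lower levels.
```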
The wind loading on a skyscraper is also considerable. In fact, the lateral wind load imposed on supertall structures is generally the governing factor in the structural design. Wind pressure increases with height, so for very tall buildings, the loads associated with wind are larger than dead or live loads.
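The growth of wind load with height can be sketched with a simple power-law wind profile and the dynamic pressure relation q = ½ρv²; the exponent, reference speed, and reference height below are illustrative assumptions, not design values.

```python
# Illustrative sketch of how wind pressure grows with height.
# Assumptions (not design values): power-law profile v(z) = v_ref * (z / z_ref)**alpha
# with v_ref = 25 m/s at z_ref = 10 m and alpha = 0.25 (rough urban terrain).
RHO_AIR = 1.225      # kg/m^3, air density at sea level
V_REF = 25.0         # m/s, assumed reference wind speed
Z_REF = 10.0         # m, reference height
ALPHA = 0.25         # assumed power-law exponent for an urban boundary layer

def wind_speed(z_m: float) -> float:
    return V_REF * (z_m / Z_REF) ** ALPHA

def dynamic_pressure(z_m: float) -> float:
    v = wind_speed(z_m)
    return 0.5 * RHO_AIR * v * v   # Pa

for z in (10, 100, 300, 600):
    print(f"{z:4d} m: v = {wind_speed(z):5.1f} m/s, q = {dynamic_pressure(z)/1000:5.2f} kPa")
# With these assumptions the pressure grows as (z/z_ref)**(2*ALPHA), so it keeps rising
# over the full height of the tower, which is why lateral wind load governs the
# structural design of supertall buildings.
```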
Other vertical and horizontal loading factors come from varied, unpredictable sources, such as earthquakes.
Steel frame
By 1895, steel had replaced cast iron as skyscrapers' structural material. Its malleability allowed it to be formed into a variety of shapes, and it could be riveted, ensuring strong connections. The simplicity of a steel frame eliminated the inefficient central portion of a shear wall and consolidated support members in a much stronger fashion by allowing both horizontal and vertical supports throughout. Among steel's drawbacks is that as more material must be supported with increasing height, the distance between supporting members must decrease, which in turn increases the amount of material that must be supported. This becomes inefficient and uneconomic for buildings above 40 stories tall, as usable floor space is lost to the supporting columns and progressively more steel is required.
Tube structural systems
A new structural system of framed tubes was developed by Fazlur Rahman Khan in 1963. The framed tube structure is defined as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation". Closely spaced interconnected exterior columns form the tube. Horizontal loads (primarily wind) are supported by the structure as a whole. Framed tubes allow fewer interior columns, and so create more usable floor space, and about half the exterior surface is available for windows. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. Tube structures cut down costs, at the same time allowing buildings to reach greater heights. Concrete tube-frame construction was first used in the DeWitt-Chestnut Apartment Building, completed in Chicago in 1963, and soon after in the John Hancock Center and World Trade Center.
The tubular systems are fundamental to tall building design. Most buildings over 40 stories constructed since the 1960s use a tube design derived from Khan's structural engineering principles, examples including the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, and most other supertall skyscrapers built since then. The strong influence of tube structure design is also evident in the current tallest skyscraper, the Burj Khalifa.

Trussed tube and X-bracing:
Khan pioneered several other variations of the tube structure design. One of these was the concept of X-bracing, or the trussed tube, first employed for the John Hancock Center. This concept reduced the lateral load on the building by transferring the load into the exterior columns. This allows for a reduced need for interior columns thus creating more floor space. This concept can be seen in the John Hancock Center, designed in 1965 and completed in 1969. One of the most famous buildings of the structural expressionist style, the skyscraper's distinctive X-bracing exterior is actually a hint that the structure's skin is indeed part of its 'tubular system'. This idea is one of the architectural techniques the building used to climb to record heights (the tubular system is essentially the spine that helps the building stand upright during wind and earthquake loads). This X-bracing allows for both higher performance from tall structures and the ability to open up the inside floorplan (and usable floor space) if the architect desires.
The John Hancock Center was far more efficient than earlier steel-frame structures. Where the Empire State Building (1931) required about 206 kilograms of steel per square metre and 28 Liberty Street (1961) required 275, the John Hancock Center required only 145. The trussed tube concept was applied to many later skyscrapers, including the Onterie Center, Citigroup Center and Bank of China Tower.
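Using only the figures quoted above, a quick calculation shows the scale of the material savings; the snippet below is a minimal sketch of that arithmetic.

```python
# Steel intensity (kg of steel per square metre of floor), figures as quoted above.
steel_per_m2 = {
    "Empire State Building (1931)": 206,
    "28 Liberty Street (1961)": 275,
    "John Hancock Center (1969)": 145,
}
hancock = steel_per_m2["John Hancock Center (1969)"]
for name, kg in steel_per_m2.items():
    saving = (1 - hancock / kg) * 100
    print(f"{name}: {kg} kg/m^2, Hancock saves {saving:.0f}%")
# Relative to the Empire State Building the trussed tube cut steel use per square
# metre by roughly 30%, and by nearly half relative to 28 Liberty Street.
```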
Bundled tube:
An important variation on the tube frame is the bundled tube, which uses several interconnected tube frames. The Willis Tower in Chicago used this design, employing nine tubes of varying height to achieve its distinct appearance. The bundled tube structure meant that "buildings no longer need be boxlike in appearance: they could become sculpture."

Tube in tube:
The tube-in-tube system takes advantage of a core shear wall tube in addition to the exterior tube. The inner tube and outer tube work together to resist gravity and lateral loads and to provide additional rigidity to the structure, preventing significant deflections at the top. This design was first used in One Shell Plaza. Later buildings to use this structural system include the Petronas Towers.

Outrigger and belt truss:
The outrigger and belt truss system is a lateral load resisting system in which the tube structure is connected to the central core wall with very stiff outriggers and belt trusses at one or more levels. BHP House was the first building to use this structural system, followed by the First Wisconsin Center, since renamed U.S. Bank Center, in Milwaukee. The center rises 601 feet, with three belt trusses at the bottom, middle and top of the building. The exposed belt trusses serve aesthetic and structural purposes. Later buildings to use this system include the Shanghai World Financial Center.

Concrete tube structures:
The last major buildings engineered by Khan were the One Magnificent Mile and Onterie Center in Chicago, which employed his bundled tube and trussed tube system designs respectively. In contrast to his earlier buildings, which were mainly steel, his last two buildings were concrete. His earlier DeWitt-Chestnut Apartments building, built in 1963 in Chicago, was also a concrete building with a tube structure. Trump Tower in New York City is another example that adapted this system.

Shear wall frame interaction system:
Khan developed the shear wall frame interaction system for mid- to high-rise buildings. This structural system uses combinations of shear walls and frames designed to resist lateral forces. The first building to use it was the 35-story Brunswick Building, completed in 1965, which became the tallest reinforced concrete structure of its time. Its structural system consists of a concrete shear wall core surrounded by an outer concrete frame of columns and spandrels. Apartment buildings up to 70 stories high have successfully used this concept.
The elevator conundrum
The invention of the elevator was a precondition for the invention of skyscrapers, given that most people would not (or could not) climb more than a few flights of stairs at a time. The elevators in a skyscraper are not simply a necessary utility, like running water and electricity, but are in fact closely related to the design of the whole structure: a taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core, which contains the elevator shafts, becomes too big, it can reduce the profitability of the building. Architects must therefore balance the value gained by adding height against the value lost to the expanding service core.

Many tall buildings use elevators in a non-standard configuration to reduce their footprint. Buildings such as the former World Trade Center Towers and Chicago's John Hancock Center use sky lobbies, where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space, however, and add to the amount of time spent commuting between floors.
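The trade-off described above can be caricatured with a toy model in which the elevator core grows with the number of floors it must serve; every number in the sketch below (floor plate size, area per shaft, floors served per elevator) is an assumption made purely for illustration.

```python
# Toy model of the elevator conundrum: net rentable area vs. building height.
# All parameters are illustrative assumptions, not real design figures.
FLOOR_PLATE_M2 = 2000.0       # gross area of one floor
SHAFT_AREA_M2 = 30.0          # floor area consumed by one elevator shaft per floor
FLOORS_PER_ELEVATOR = 12.0    # floors that one elevator can serve acceptably

def net_rentable_area(floors: int) -> float:
    elevators = floors / FLOORS_PER_ELEVATOR          # grows with height
    core_per_floor = elevators * SHAFT_AREA_M2        # shafts pass through every floor
    usable_per_floor = max(FLOOR_PLATE_M2 - core_per_floor, 0.0)
    return floors * usable_per_floor

for n in (20, 40, 60, 80, 100):
    total = n * FLOOR_PLATE_M2
    net = net_rentable_area(n)
    print(f"{n:3d} floors: net {net:9.0f} m^2 of {total:9.0f} m^2 gross "
          f"({net / total:.0%} efficient)")
# In this toy model each extra floor adds area but also enlarges the core on
# every floor below it, so floor-area efficiency falls as the tower gets taller.
```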
Other buildings, such as the Petronas Towers, use double-deck elevators, allowing more people to fit in a single elevator and reaching two floors at every stop. It is possible to use even more than two levels on an elevator, although this has never been done. The main problem with double-deck elevators is that they cause everyone in the elevator to stop when only one person on one level needs to get off at a given floor.
Buildings with sky lobbies include the World Trade Center, Petronas Twin Towers, Willis Tower and Taipei 101. The 44th-floor sky lobby of the John Hancock Center also featured the first high-rise indoor swimming pool, which remains the highest in the United States.
Economic rationale
Skyscrapers are usually situated in city centres where the price of land is high. Constructing a skyscraper becomes justified if the price of land is so high that it makes economic sense to build upward so as to minimize the cost of land per unit of total floor area. Thus the construction of skyscrapers is dictated by economics and results in clusters of skyscrapers in certain parts of a large city, unless a building code restricts the height of buildings.
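A back-of-the-envelope calculation makes the point: spreading a fixed land cost over more floors lowers the land cost embedded in every square metre of floor space. The plot price and plot size below are invented purely for illustration.

```python
# Illustrative only: land cost per square metre of floor area vs. number of floors.
# The plot price and plot area are assumptions chosen for the example.
PLOT_PRICE = 50_000_000.0   # currency units, assumed cost of the land parcel
PLOT_AREA_M2 = 2_000.0      # assumed buildable footprint of the parcel

for floors in (2, 10, 30, 60):
    floor_area = floors * PLOT_AREA_M2
    land_cost_per_m2 = PLOT_PRICE / floor_area
    print(f"{floors:3d} floors: {land_cost_per_m2:8.0f} per m^2 of floor space")
# Doubling the number of floors halves the land cost carried by each square metre,
# which is why expensive central land pushes development upward.
```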
Skyscrapers are rarely seen in small cities and they are characteristic of large cities, because of the critical importance of high land prices for the construction of skyscrapers. Usually only office, commercial and hotel users can afford the rents in the city center and thus most tenants of skyscrapers are of these classes.
Today, skyscrapers are an increasingly common sight where land is expensive, as in the centres of big cities, because they provide such a high ratio of rentable floor space per unit area of land.
Another disadvantage of very tall skyscrapers is the loss of usable floor space, as many elevator shafts are needed to provide adequate vertical transport. This led to the introduction of express lifts and sky lobbies, where passengers transfer to slower local lifts for the upper floors.
Environmental impact
Constructing a single skyscraper requires large quantities of materials like steel, concrete, and glass, and these materials represent significant embodied energy. Skyscrapers are thus material and energy intensive buildings.
Skyscrapers have considerable mass, requiring a stronger foundation than a shorter, lighter building. During construction, building materials must be lifted to great heights, requiring more energy than would be necessary for lower buildings. Furthermore, a skyscraper consumes much electricity because potable and non-potable water have to be pumped to the highest occupied floors, skyscrapers are usually designed to be mechanically ventilated, elevators are generally used instead of stairs, and electric lights are needed in rooms far from the windows and in windowless spaces such as elevators, bathrooms and stairwells.
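The pumping point can be quantified with basic physics: lifting water against gravity costs at least m·g·h of energy before any pump losses. The delivery height and daily volume below are assumptions for illustration only.

```python
# Minimum energy to lift water to the top of a tall building (ignoring pump losses).
# Height and daily volume are illustrative assumptions.
G = 9.81                    # m/s^2
WATER_DENSITY = 1000.0      # kg/m^3
HEIGHT_M = 300.0            # assumed height of the highest occupied floor
DAILY_VOLUME_M3 = 100.0     # assumed daily water demand of the building

mass_kg = DAILY_VOLUME_M3 * WATER_DENSITY
energy_j = mass_kg * G * HEIGHT_M
energy_kwh = energy_j / 3.6e6
print(f"Lifting {DAILY_VOLUME_M3:.0f} m^3 of water by {HEIGHT_M:.0f} m takes at least "
      f"{energy_kwh:.1f} kWh per day, before pump and piping losses.")
```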
Skyscrapers can be artificially lit and the energy requirements can be covered by renewable energy or other electricity generation with low greenhouse gas emissions. Heating and cooling of skyscrapers can be efficient, because of centralized HVAC systems, heat radiation blocking windows and small surface area of the building. There is Leadership in Energy and Environmental Design (LEED) certification for skyscrapers. For example, the Empire State Building received a gold Leadership in Energy and Environmental Design rating in September 2011 and the Empire State Building is the tallest LEED certified building in the United States, proving that skyscrapers can be environmentally friendly. The 30 St Mary Axe in London, the United Kingdom is another example of an environmentally friendly skyscraper.
In the lower levels of a skyscraper a larger percentage of the building floor area must be devoted to the building structure and services than is required for lower buildings:
More structure – because it must be stronger to support more floors above
The elevator conundrum creates the need for more lift shafts—everyone comes in at the bottom and they all have to pass through the lower part of the building to get to the upper levels.
Building services – power and water enter the building from below and have to pass through the lower levels to get to the upper levels.

In low-rise structures, the support rooms (chillers, transformers, boilers, pumps and air handling units) can be put in basements or roof space—areas which have low rental value. There is, however, a limit to how far this plant can be located from the area it serves. The farther away it is, the larger the risers for ducts and pipes from this plant to the floors they serve, and the more floor area these risers take. In practice this means that in high-rise buildings this plant is located on 'plant levels' at intervals up the building.
Operational energy
The building sector accounts for approximately 50% of greenhouse gas emissions, with operational energy accounting for 80–90% of building-related energy use. Operational energy use is affected by the magnitude of conduction between the interior and exterior, convection from infiltrating air, and radiation through glazing. The extent to which these factors affect the operational energy varies with the microclimate of the skyscraper, as wind speeds increase and dry bulb temperature decreases with height. For example, when moving from 1.5 meters to 284 meters above ground, the dry bulb temperature decreased by 1.85 °C while the wind speed increased from 2.46 meters per second to 7.75 meters per second, which led to a 2.4% decrease in summer cooling load for the Freedom Tower in New York City. However, for the same building it was found that the annual energy use intensity was 9.26% higher, because the lack of shading at high altitudes increased the cooling loads for the remainder of the year, while the combination of temperature, wind, shading, and the effects of reflections led to a combined 13.13% increase in annual energy use intensity. In a study performed by Leung and Ray in 2013, it was found that the average energy use intensity of a structure with between 0 and 9 floors was approximately 80 kBtu/ft²/yr, while the energy use intensity of a structure with more than 50 floors was about 117 kBtu/ft²/yr. Refer to Figure 1 for the breakdown of how intermediate heights affect energy use intensity. The slight decrease in energy use intensity over 30–39 floors can be attributed to the pressure increase within the heating, cooling, and water distribution systems levelling out between 40 and 49 floors, allowing the energy savings due to the microclimate of higher floors to become apparent. There is a gap in the data; another study examining the same information for taller buildings is needed.
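The quoted temperature drop is consistent with the standard atmospheric lapse rate of roughly 6.5 °C per kilometre; the short check below simply applies that textbook rate to the height difference mentioned above.

```python
# Sanity check: temperature drop over the height difference quoted above,
# using the standard atmospheric lapse rate (~6.5 degC per km).
LAPSE_RATE_C_PER_M = 0.0065
LOW_M, HIGH_M = 1.5, 284.0

delta_t = (HIGH_M - LOW_M) * LAPSE_RATE_C_PER_M
print(f"Expected drop over {HIGH_M - LOW_M:.1f} m: {delta_t:.2f} degC")
# Prints about 1.84 degC, close to the 1.85 degC figure cited for the Freedom Tower.
```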
Elevators
A portion of the operational energy increase in tall buildings is related to the use of elevators, because the distance traveled and the speed at which they travel increase with the height of the building. Between 5% and 25% of the total energy consumed in a tall building is used by elevators. As the height of the building increases, elevator operation also becomes less efficient because of higher drag and friction losses.
Embodied energy
The embodied energy associated with the construction of skyscrapers varies based on the materials used. Embodied energy is quantified per unit of material. Skyscrapers inherently have higher embodied energy than low-rise buildings due to the increase in material used as more floors are built. Figures 2 and 3 compare the total embodied energy of different floor types and the unit embodied energy per floor type for buildings with between 20 and 70 stories. For all floor types except steel-concrete floors, it was found that after 60 stories there was a decrease in unit embodied energy, but when considering all floors there was exponential growth due to a double dependence on height: first, increased height increases the quantity of materials used; second, increased height requires larger elements to provide the structural capacity of the building. A careful choice of building materials can likely reduce the embodied energy without reducing the number of floors constructed within the bounds presented.
Embodied carbon
Similar to embodied energy, the embodied carbon of a building depends on the materials chosen for its construction. Figures 4 and 5 show the total embodied carbon for different structure types as the number of stories increases, and the embodied carbon per square meter of gross floor area for the same structure types. Both methods of measuring embodied carbon show that there is a point where the embodied carbon is lowest before increasing again with height. For total embodied carbon the minimum depends on the structure type, falling at either around 40 stories or approximately 60 stories. Per square meter of gross floor area, the lowest embodied carbon was found at either 40 stories or approximately 70 stories.
Air pollution
In urban areas, the configuration of buildings can lead to exacerbated wind patterns and an uneven dispersion of pollutants. When the height of buildings surrounding a source of air pollution is increased, the size and occurrence of both "dead-zones" (areas with almost no pollutants) and "hotspots" (areas with high concentrations of pollutants) increase. Figure 6 depicts the progression of Building F's height increasing from 0.0315 units in Case 1, to 0.2 units in Case 2, to 0.6 units in Case 3. This progression shows that as the height of Building F increases, the dispersion of pollutants decreases but the concentration within the building cluster increases. The variation of velocity fields can also be affected by the construction of new buildings, rather than solely by the increase in height shown in the figure. As urban centers continue to expand upward and outward, the present velocity fields will continue to trap polluted air close to the tall buildings within the city. Within major cities, a majority of air pollution is derived from transportation, whether by cars, trains, planes, or boats. As urban sprawl continues and pollutants continue to be emitted, the air pollutants will continue to be trapped within these urban centers. Different pollutants can be detrimental to human health in different ways. For example, particulate matter from vehicular exhaust and power generation can cause asthma, bronchitis, and cancer, while nitrogen dioxide from motor engine combustion processes can cause neurological dysfunction and asphyxiation.
LEED/green building rating
As with all other buildings, if special measures are taken to incorporate sustainable design methods early in the design process, it is possible to obtain a green building rating, such as a Leadership in Energy and Environmental Design (LEED) certification. An integrated design approach is crucial to ensuring that design decisions that positively impact the whole building are made at the beginning of the process. Because of the massive scale of skyscrapers, the design team must take all factors into account, including the building's impact on the surrounding community, its effect on the direction in which air and water move, and the impact of the construction process. There are several design methods that could be employed in the construction of a skyscraper to take advantage of the height of the building. The microclimates that exist as the height of the building increases can be exploited to increase natural ventilation, decrease the cooling load, and increase daylighting. Natural ventilation can be increased by utilizing the stack effect, in which warm air moves upward and increases the movement of air within the building. If utilizing the stack effect, buildings must take extra care to design for fire separation, as the stack effect can also exacerbate the severity of a fire. Skyscrapers are considered internally dominated buildings because of their size and because a majority are used as office buildings with high cooling loads. Due to the microclimate created at the upper floors, with increased wind speed and decreased dry bulb temperatures, the cooling load will naturally be reduced through infiltration across the thermal envelope. By taking advantage of the naturally cooler temperatures at higher altitudes, skyscrapers can reduce their cooling loads passively. On the other side of this argument is the lack of shading at higher altitudes by other buildings, so the solar heat gain will be larger for higher floors than for floors lower in the building. Special measures should be taken to shade upper floors from sunlight during the overheated period to ensure thermal comfort without increasing the cooling load.
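The stack effect mentioned above can be estimated with a standard approximation, ΔP ≈ ρ_out·g·h·(T_in − T_out)/T_in, with temperatures in kelvin. The building height and temperatures below are illustrative assumptions.

```python
# Rough stack-effect pressure across a building of height h (a standard approximation):
#   dP ~= rho_out * g * h * (T_in - T_out) / T_in, temperatures in kelvin.
# Height and temperatures below are illustrative assumptions.
G = 9.81
RHO_OUT = 1.25          # kg/m^3, cool outdoor air
HEIGHT_M = 200.0        # assumed stack height (e.g. an unobstructed shaft or atrium)
T_IN_K = 294.0          # ~21 degC indoors
T_OUT_K = 273.0         # ~0 degC outdoors

dp_pa = RHO_OUT * G * HEIGHT_M * (T_IN_K - T_OUT_K) / T_IN_K
print(f"Stack pressure over {HEIGHT_M:.0f} m: about {dp_pa:.0f} Pa")
# Roughly 175 Pa in this example: a useful driver for natural ventilation, but also
# the reason tall buildings need careful compartmentation for smoke and fire control.
```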
History of the tallest skyscrapers
At the beginning of the 20th century, New York City was a center for the Beaux-Arts architectural movement, attracting the talents of such great architects as Stanford White and Carrere and Hastings. As better construction and engineering technology became available as the century progressed, New York City and Chicago became the focal point of the competition for the tallest building in the world. Each city's striking skyline has been composed of numerous and varied skyscrapers, many of which are icons of 20th-century architecture:
The E. V. Haughwout Building in Manhattan was the first building to successfully install a passenger elevator, doing so on 23 March 1857.
The Equitable Life Building in Manhattan was the first office building to feature passenger elevators.
The Home Insurance Building in Chicago, which was built in 1884, was the first tall building with a steel skeleton.
The Singer Building, an expansion to an existing structure in Lower Manhattan was the world's tallest building when completed in 1908. Designed by Ernest Flagg, it was 612 feet (187 m) tall.
The Metropolitan Life Insurance Company Tower, across Madison Square Park from the Flatiron Building, was the world's tallest building when completed in 1909. It was designed by the architectural firm of Napoleon LeBrun & Sons and stood 700 feet (210 m) tall.
The Woolworth Building, a neo-Gothic "Cathedral of Commerce" overlooking New York City Hall, was designed by Cass Gilbert. At 792 feet (241 m), it became the world's tallest building upon its completion in 1913, an honor it retained until 1930.
40 Wall Street, a 71-story, 927-foot-tall (283 m) neo-Gothic tower designed by H. Craig Severance, was the world's tallest building for a month in May 1930.
The Chrysler Building in New York City took the lead in late May 1930 as the tallest building in the world, reaching 1,046 feet (319 m). Designed by William Van Alen, an Art Deco style masterpiece with an exterior crafted of brick, the Chrysler Building continues to be a favorite of New Yorkers to this day.
The Empire State Building, nine streets south of the Chrysler in Manhattan, topped out at 1,250 feet (381 m) and 102 stories in 1931. The first building to have more than 100 floors, it was designed by Shreve, Lamb and Harmon in the contemporary Art Deco style and takes its name from the nickname of New York State. The antenna mast added in 1951 brought pinnacle height to 1,472 feet (449 m), lowered in 1984 to 1,454 feet (443 m).
The World Trade Center officially surpassed the Empire State Building in 1970, was completed in 1973, and consisted of two tall towers and several smaller buildings. For a short time the World Trade Center's North Tower―completed in 1972―was the world's tallest building, until surpassed by Sears Tower in 1973. Upon completion, the towers stood for 28 years, until the September 11 attacks destroyed the buildings in 2001.
The Willis Tower (formerly Sears Tower) was completed in 1974. It was the first building to employ the "bundled tube" structural system, designed by Fazlur Khan. It was surpassed in height by the Petronas Towers in 1998, but remained the tallest in some categories until Burj Khalifa surpassed it in all categories in 2010. It is currently the third tallest building in the United States, after One World Trade Center (which was built following 9/11) and Central Park Tower in New York City.

Momentum in setting records passed from the United States to other nations with the opening of the Petronas Twin Towers in Kuala Lumpur, Malaysia, in 1998. The record for the world's tallest building has remained in Asia since the opening of Taipei 101 in Taipei, Taiwan, in 2004. A number of architectural records, including those of the world's tallest building and tallest free-standing structure, moved to the Middle East with the opening of the Burj Khalifa in Dubai, United Arab Emirates.
This geographical transition is accompanied by a change in approach to skyscraper design. For much of the 20th century large buildings took the form of simple geometrical shapes. This reflected the "international style" or modernist philosophy shaped by Bauhaus architects early in the century. The last of these, the Willis Tower in Chicago and the World Trade Center towers in New York, erected in the 1970s, reflect that philosophy. Tastes shifted in the decade which followed, and new skyscrapers began to exhibit postmodernist influences. This approach to design avails itself of historical elements, often adapted and re-interpreted, in creating technologically modern structures. The Petronas Twin Towers recall Asian pagoda architecture and Islamic geometric principles. Taipei 101 likewise reflects the pagoda tradition as it incorporates ancient motifs such as the ruyi symbol. The Burj Khalifa draws inspiration from traditional Islamic art. Architects in recent years have sought to create structures that would not appear equally at home if set in any part of the world, but that reflect the culture thriving in the spot where they stand.

The following list measures height of the roof, not the pinnacle. The more common gauge is the "highest architectural detail"; such a ranking would have included the Petronas Towers, built in 1996.
Gallery
Future developments
Proposals for buildings approaching or exceeding one kilometre in height have been put forward, including the Burj Mubarak Al Kabir in Kuwait and Azerbaijan Tower in Baku. Kilometer-plus structures present architectural challenges that may eventually place them in a new architectural category. The first building under construction and planned to be over one kilometre tall is the Jeddah Tower.
Wooden skyscrapers
Several wooden skyscrapers have been designed and built. A 14-story housing project in Bergen, Norway, known as 'Treet' or 'The Tree', became the world's tallest wooden apartment block when it was completed in late 2015. The Tree's record was eclipsed by Brock Commons, an 18-story wooden dormitory at the University of British Columbia in Canada, when it was completed in September 2016.

A 40-story residential building, 'Trätoppen', has been proposed by architect Anders Berensson to be built in Stockholm, Sweden. Trätoppen would be the tallest building in Stockholm, though there are no immediate plans to begin construction. The tallest currently planned wooden skyscraper is the 70-story W350 Project in Tokyo, to be built by the Japanese wood products company Sumitomo Forestry Co. to celebrate its 350th anniversary in 2041. An 80-story wooden skyscraper, the River Beech Tower, has been proposed by a team including architects Perkins + Will and the University of Cambridge. The River Beech Tower, on the banks of the Chicago River in Chicago, Illinois, would be 348 feet shorter than the W350 Project despite having 10 more stories.

Wooden skyscrapers are estimated to be around a quarter of the weight of an equivalent reinforced-concrete structure, as well as reducing the building's carbon footprint by 60–75%. Buildings have been designed using cross-laminated timber (CLT), which gives higher rigidity and strength to wooden structures. CLT panels are prefabricated and can therefore save on building time.
See also
CTBUH Skyscraper Award
Emporis Skyscraper Award
Groundscraper
List of cities with the most skyscrapers
List of tallest buildings
List of tallest buildings and structures
Plyscraper
Seascraper
Skyscraper design and construction
Skyscraper Index
Skyscraper Museum in NYC
Skyscrapers in film
Skyline
Vertical farming, "farmscrapers"
World's littlest skyscraper
Drag coefficient
Material fatigue
Downforce
Steel frame
References
Further reading
Adam, Robert. "How to Build Skyscrapers". City Journal. Archived from the original on 23 September 2015. Retrieved 4 April 2014.
Judith Dupré. Skyscrapers: A History of the World's Most Extraordinary Buildings-Revised and Updated. (2013). Hachette/Black Dog & Leventhal. 2013 ed.: ISBN 978-1-57912-942-2
Skyscrapers: Form and Function, by David Bennett, Simon & Schuster, 1995.
Landau, Sarah; Condit, Carl W. (1996). Rise of the New York Skyscraper, 1865–1913. New Haven, CT: Yale University Press. ISBN 978-0-300-07739-1. OCLC 32819286.
Willis, Carol, Form Follows Finance: Skyscrapers and Skylines in New York and Chicago. Princeton Architectural Press, 1995. 224 P. ISBN 1-56898-044-2
Van Leeuwen, Thomas A P, The Skyward Trend of Thought: The Metaphysics of the American Skyscraper, Cambridge: MIT Press, 1988.
External links
Skyscrapers at Curlie
Council on Tall Buildings and Urban Habitat
SkyscraperCity construction updates magazine
Skyscraper definition on Phorio Standards
Skyscraper Museum
SkyscraperPage Technical information and diagrams
Coal

Coal is a combustible black or brownish-black sedimentary rock, formed as rock strata called coal seams. Coal is mostly carbon with variable amounts of other elements, chiefly hydrogen, sulfur, oxygen, and nitrogen.
Coal is a type of fossil fuel, formed when dead plant matter decays into peat and is converted into coal by the heat and pressure of deep burial over millions of years. Vast deposits of coal originate in former wetlands called coal forests that covered much of the Earth's tropical land areas during the late Carboniferous (Pennsylvanian) and Permian times.Coal is used primarily as a fuel. While coal has been known and used for thousands of years, its usage was limited until the Industrial Revolution. With the invention of the steam engine, coal consumption increased. In 2020, coal supplied about a quarter of the world's primary energy and over a third of its electricity. Some iron and steel-making and other industrial processes burn coal.
The extraction and use of coal causes premature death and illness. The use of coal damages the environment, and it is the largest anthropogenic source of carbon dioxide contributing to climate change. Fourteen billion tonnes of carbon dioxide were emitted by burning coal in 2020, which is 40% of total fossil fuel emissions and over 25% of total global greenhouse gas emissions. As part of the worldwide energy transition, many countries have reduced or eliminated their use of coal power. The United Nations Secretary General asked governments to stop building new coal plants by 2020. Global coal use was 8.3 billion tonnes in 2022. Global coal demand is set to remain at record levels in 2023. To meet the Paris Agreement target of keeping global warming below 2 °C (3.6 °F), coal use needs to halve from 2020 to 2030, and "phasing down" coal was agreed upon in the Glasgow Climate Pact.
The largest consumer and importer of coal in 2020 was China, which accounts for almost half the world's annual coal production, followed by India with about a tenth. Indonesia and Australia export the most, followed by Russia.
Etymology
The word originally took the form col in Old English, from Proto-Germanic *kula(n), which in turn is hypothesized to come from the Proto-Indo-European root *g(e)u-lo- "live coal". Germanic cognates include the Old Frisian kole, Middle Dutch cole, Dutch kool, Old High German chol, German Kohle and Old Norse kol, and the Irish word gual is also a cognate via the Indo-European root.
Geology
Coal is composed of macerals, minerals and water. Fossils and amber may be found in coal.
Formation
The conversion of dead vegetation into coal is called coalification. At various times in the geologic past, the Earth had dense forests in low-lying wetland areas. In these wetlands, the process of coalification began when dead plant matter was protected from biodegradation and oxidation, usually by mud or acidic water, and was converted into peat. This trapped the carbon in immense peat bogs that were eventually deeply buried by sediments. Then, over millions of years, the heat and pressure of deep burial caused the loss of water, methane and carbon dioxide and increased the proportion of carbon. The grade of coal produced depended on the maximum pressure and temperature reached, with lignite (also called "brown coal") produced under relatively mild conditions, and sub-bituminous coal, bituminous coal, or anthracite coal (also called "hard coal" or "black coal") produced in turn with increasing temperature and pressure.

Of the factors involved in coalification, temperature is much more important than either pressure or time of burial. Sub-bituminous coal can form at temperatures as low as 35 to 80 °C (95 to 176 °F), while anthracite requires a temperature of at least 180 to 245 °C (356 to 473 °F).

Although coal is known from most geologic periods, 90% of all coal beds were deposited in the Carboniferous and Permian periods, which represent just 2% of the Earth's geologic history. Paradoxically, this was during the Late Paleozoic icehouse, a time of global glaciation. However, the drop in global sea level accompanying the glaciation exposed continental shelves that had previously been submerged, and to these were added wide river deltas produced by increased erosion due to the drop in base level. These widespread areas of wetlands provided ideal conditions for coal formation. The rapid formation of coal ended with the coal gap of the Permian–Triassic extinction event, where coal is rare.

Favorable geography alone does not explain the extensive Carboniferous coal beds. Other factors contributing to rapid coal deposition were high oxygen levels, above 30%, that promoted intense wildfires and the formation of charcoal that was all but indigestible by decomposing organisms; high carbon dioxide levels that promoted plant growth; and the nature of Carboniferous forests, which included lycophyte trees whose determinate growth meant that carbon was not tied up in the heartwood of living trees for long periods.

One theory suggested that about 360 million years ago, some plants evolved the ability to produce lignin, a complex polymer that made their cellulose stems much harder and more woody. The ability to produce lignin led to the evolution of the first trees. But bacteria and fungi did not immediately evolve the ability to decompose lignin, so the wood did not fully decay but became buried under sediment, eventually turning into coal. About 300 million years ago, mushrooms and other fungi developed this ability, ending the main coal-formation period of Earth's history. Although some authors pointed to evidence of lignin degradation during the Carboniferous and suggested that climatic and tectonic factors were a more plausible explanation, reconstruction of ancestral enzymes by phylogenetic analysis has corroborated the hypothesis that lignin-degrading enzymes appeared in fungi approximately 200 million years ago.

One likely tectonic factor was the Central Pangean Mountains, an enormous range running along the equator that reached its greatest elevation near this time.
Climate modeling suggests that the Central Pangean Mountains contributed to the deposition of vast quantities of coal in the late Carboniferous. The mountains created an area of year-round heavy precipitation, with none of the dry season typical of a monsoon climate. This is necessary for the preservation of peat in coal swamps.

Coal is known from Precambrian strata, which predate land plants. This coal is presumed to have originated from residues of algae.

Sometimes coal seams (also known as coal beds) are interbedded with other sediments in a cyclothem. Cyclothems are thought to have their origin in glacial cycles that produced fluctuations in sea level, which alternately exposed and then flooded large areas of continental shelf.
Chemistry of coalification
The woody tissue of plants is composed mainly of cellulose, hemicellulose, and lignin. Modern peat is mostly lignin, with a content of cellulose and hemicellulose ranging from 5% to 40%. Various other organic compounds, such as waxes and nitrogen- and sulfur-containing compounds, are also present. Lignin has a weight composition of about 54% carbon, 6% hydrogen, and 30% oxygen, while cellulose has a weight composition of about 44% carbon, 6% hydrogen, and 49% oxygen. Bituminous coal has a composition of about 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur, on a weight basis. This implies that chemical processes during coalification must remove most of the oxygen and much of the hydrogen, leaving carbon, a process called carbonization.

Carbonization proceeds primarily by dehydration, decarboxylation, and demethanation. Dehydration removes water molecules from the maturing coal via reactions such as
2 R–OH → R–O–R + H2O
2 R-CH2-O-CH2-R → R-CH=CH-R + H2O

Decarboxylation removes carbon dioxide from the maturing coal and proceeds by reactions such as
RCOOH → RH + CO2

while demethanation proceeds by reactions such as
2 R-CH3 → R-CH2-R + CH4
R-CH2-CH2-CH2-R → R-CH=CH-R + CH4

In each of these formulas, R represents the remainder of a cellulose or lignin molecule to which the reacting groups are attached.
Dehydration and decarboxylation take place early in coalification, while demethanation begins only after the coal has already reached bituminous rank. The effect of decarboxylation is to reduce the percentage of oxygen, while demethanation reduces the percentage of hydrogen. Dehydration does both, and (together with demethanation) reduces the saturation of the carbon backbone (increasing the number of double bonds between carbon).
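These compositional shifts are often summarized as atomic H/C and O/C ratios (the axes of a van Krevelen diagram). The short sketch below converts the weight compositions quoted earlier for cellulose, lignin, and a typical bituminous coal into those ratios; it is a rough illustration and ignores minor elements.

```python
# Convert the weight compositions quoted above into atomic H/C and O/C ratios.
# Rough illustration only; nitrogen, sulfur and ash are ignored.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

# Weight percent C, H, O as given in the text.
compositions = {
    "cellulose":       {"C": 44.0, "H": 6.0, "O": 49.0},
    "lignin":          {"C": 54.0, "H": 6.0, "O": 30.0},
    "bituminous coal": {"C": 84.4, "H": 5.4, "O": 6.7},
}

for name, wt in compositions.items():
    moles = {el: wt[el] / ATOMIC_MASS[el] for el in wt}
    print(f"{name:15s}: H/C = {moles['H'] / moles['C']:.2f}, "
          f"O/C = {moles['O'] / moles['C']:.2f}")
# Coalification drives both ratios down: oxygen is lost first (dehydration and
# decarboxylation), then hydrogen (demethanation), leaving a carbon-rich residue.
```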
As carbonization proceeds, aliphatic compounds (carbon compounds characterized by chains of carbon atoms) are replaced by aromatic compounds (carbon compounds characterized by rings of carbon atoms) and aromatic rings begin to fuse into polyaromatic compounds (linked rings of carbon atoms). The structure increasingly resembles graphene, the structural element of graphite.
Chemical changes are accompanied by physical changes, such as decrease in average pore size. The macerals (organic particles) of lignite are composed of huminite, which is earthy in appearance. As the coal matures to sub-bituminous coal, huminite begins to be replaced by vitreous (shiny) vitrinite. Maturation of bituminous coal is characterized by bitumenization, in which part of the coal is converted to bitumen, a hydrocarbon-rich gel. Maturation to anthracite is characterized by debitumenization (from demethanation) and the increasing tendency of the anthracite to break with a conchoidal fracture, similar to the way thick glass breaks.
Types
As geological processes apply pressure to dead biotic material over time, under suitable conditions, its metamorphic grade or rank increases successively into:
Peat, a precursor of coal
Lignite, or brown coal, the lowest rank of coal, most harmful to health when burned, used almost exclusively as fuel for electric power generation
Jet, a compact form of lignite, sometimes polished; used as an ornamental stone since the Upper Palaeolithic
Sub-bituminous coal, whose properties range between those of lignite and those of bituminous coal, is used primarily as fuel for steam-electric power generation.
Bituminous coal, a dense sedimentary rock, usually black, but sometimes dark brown, often with well-defined bands of bright and dull material. It is used primarily as fuel in steam-electric power generation and to make coke. Known as steam coal in the UK, and historically used to raise steam in steam locomotives and ships
Anthracite coal, the highest rank of coal, is a harder, glossy black coal used primarily for residential and commercial space heating.
Graphite is difficult to ignite and not commonly used as fuel; it is most used in pencils, or powdered for lubrication.
Cannel coal (sometimes called "candle coal") is a variety of fine-grained, high-rank coal with significant hydrogen content, which consists primarily of liptinite.

There are several international standards for coal. The classification of coal is generally based on the content of volatiles. However, the most important distinction is between thermal coal (also known as steam coal), which is burnt to generate electricity via steam, and metallurgical coal (also known as coking coal), which is burnt at high temperature to make steel.
Hilt's law is a geological observation that (within a small area) the deeper the coal is found, the higher its rank (or grade). It applies if the thermal gradient is entirely vertical; however, metamorphism may cause lateral changes of rank, irrespective of depth. For example, some of the coal seams of the Madrid, New Mexico coal field were partially converted to anthracite by contact metamorphism from an igneous sill while the remainder of the seams remained as bituminous coal.
History
The earliest recognized use is from the Shenyang area of China where by 4000 BC Neolithic inhabitants had begun carving ornaments from black lignite. Coal from the Fushun mine in northeastern China was used to smelt copper as early as 1000 BC. Marco Polo, the Italian who traveled to China in the 13th century, described coal as "black stones ... which burn like logs", and said coal was so plentiful, people could take three hot baths a week. In Europe, the earliest reference to the use of coal as fuel is from the geological treatise On Stones (Lap. 16) by the Greek scientist Theophrastus (c. 371–287 BC):
Among the materials that are dug because they are useful, those known as anthrakes [coals] are made of earth, and, once set on fire, they burn like charcoal [anthrakes]. They are found in Liguria ... and in Elis as one approaches Olympia by the mountain road; and they are used by those who work in metals.
Outcrop coal was used in Britain during the Bronze Age (3000–2000 BC), where it formed part of funeral pyres. In Roman Britain, with the exception of two modern fields, "the Romans were exploiting coals in all the major coalfields in England and Wales by the end of the second century AD". Evidence of trade in coal, dated to about AD 200, has been found at the Roman settlement at Heronbridge, near Chester; and in the Fenlands of East Anglia, where coal from the Midlands was transported via the Car Dyke for use in drying grain. Coal cinders have been found in the hearths of villas and Roman forts, particularly in Northumberland, dated to around AD 400. In the west of England, contemporary writers described the wonder of a permanent brazier of coal on the altar of Minerva at Aquae Sulis (modern day Bath), although in fact easily accessible surface coal from what became the Somerset coalfield was in common use in quite lowly dwellings locally. Evidence of coal's use for iron-working in the city during the Roman period has been found. In Eschweiler, Rhineland, deposits of bituminous coal were used by the Romans for the smelting of iron ore.
No evidence exists of coal being of great importance in Britain before about AD 1000, the High Middle Ages. Coal came to be referred to as "seacoal" in the 13th century; the wharf where the material arrived in London was known as Seacoal Lane, so identified in a charter of King Henry III granted in 1253. Initially, the name was given because much coal was found on the shore, having fallen from the exposed coal seams on cliffs above or washed out of underwater coal outcrops, but by the time of Henry VIII, it was understood to derive from the way it was carried to London by sea. In 1257–1259, coal from Newcastle upon Tyne was shipped to London for the smiths and lime-burners building Westminster Abbey. Seacoal Lane and Newcastle Lane, where coal was unloaded at wharves along the River Fleet, still exist.

These easily accessible sources had largely become exhausted (or could not meet the growing demand) by the 13th century, when underground extraction by shaft mining or adits was developed. The alternative name was "pitcoal", because it came from mines.
Cooking and home heating with coal (in addition to firewood or instead of it) has been done at various times and places throughout human history, especially in times and places where ground-surface coal was available and firewood was scarce, but a widespread reliance on coal for home hearths probably never existed until such a switch in fuels happened in London in the late sixteenth and early seventeenth centuries. Historian Ruth Goodman has traced the socioeconomic effects of that switch and its later spread throughout Britain, and suggested that its importance in shaping the industrial adoption of coal has been previously underappreciated (pp. xiv–xix).

The development of the Industrial Revolution led to the large-scale use of coal, as the steam engine took over from the water wheel. In 1700, five-sixths of the world's coal was mined in Britain. Britain would have run out of suitable sites for watermills by the 1830s if coal had not been available as a source of energy. In 1947 there were some 750,000 miners in Britain, but the last deep coal mine in the UK closed in 2015.

A grade between bituminous coal and anthracite was once known as "steam coal", as it was widely used as a fuel for steam locomotives. In this specialized use, it is sometimes known as "sea coal" in the United States. Small "steam coal", also called dry small steam nuts (DSSN), was used as a fuel for domestic water heating.
Coal played an important role in industry in the 19th and 20th centuries. The predecessor of the European Union, the European Coal and Steel Community, was based on the trading of this commodity.

Coal continues to arrive on beaches around the world from both natural erosion of exposed coal seams and windswept spills from cargo ships. Many homes in such areas gather this coal as a significant, and sometimes primary, source of home heating fuel.
Chemistry
Composition
The composition of coal is reported either as a proximate analysis (moisture, volatile matter, fixed carbon, and ash) or an ultimate analysis (ash, carbon, hydrogen, nitrogen, oxygen, and sulfur). The "volatile matter" does not exist by itself (except for some adsorbed methane) but designates the volatile compounds that are produced and driven off by heating the coal. A typical bituminous coal may have an ultimate analysis, on a dry, ash-free weight basis, of 84.4% carbon, 5.4% hydrogen, 6.7% oxygen, 1.7% nitrogen, and 1.8% sulfur.

The composition of the ash, given in terms of oxides, varies from coal to coal, as do the minor trace components.
Coking coal and use of coke to smelt iron
Coke is a solid carbonaceous residue derived from coking coal (a low-ash, low-sulfur bituminous coal, also known as metallurgical coal), which is used in manufacturing steel and other iron products. Coke is made from coking coal by baking in an oven without oxygen at temperatures as high as 1,000 °C, driving off the volatile constituents and fusing together the fixed carbon and residual ash. Metallurgical coke is used as a fuel and as a reducing agent in smelting iron ore in a blast furnace. The carbon monoxide produced by its combustion reduces hematite (an iron oxide) to iron.
Waste carbon dioxide is also produced (2 Fe2O3 + 3 C → 4 Fe + 3 CO2) together with pig iron, which is too rich in dissolved carbon and so must be treated further to make steel.
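From the stoichiometry of that reduction reaction one can estimate the minimum carbon consumed as a reducing agent per tonne of iron; the sketch below does only that bookkeeping and ignores the additional coke burned as fuel and the carbon dissolved in the pig iron.

```python
# Minimum reducing-agent carbon per tonne of iron from 2 Fe2O3 + 3 C -> 4 Fe + 3 CO2.
# Ignores coke burned for heat and carbon dissolved into the pig iron.
M_FE = 55.845   # g/mol
M_C = 12.011    # g/mol
M_O = 15.999    # g/mol

carbon_per_iron = (3 * M_C) / (4 * M_FE)                # kg C per kg Fe
co2_per_iron = (3 * (M_C + 2 * M_O)) / (4 * M_FE)       # kg CO2 per kg Fe

print(f"Carbon needed: {carbon_per_iron * 1000:.0f} kg per tonne of iron")
print(f"CO2 released:  {co2_per_iron * 1000:.0f} kg per tonne of iron")
# About 160 kg of carbon and about 590 kg of CO2 per tonne of iron from the
# reduction step alone; real blast furnaces use considerably more coke in total.
```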
Coking coal should be low in ash, sulfur, and phosphorus, so that these do not migrate to the metal.
The coke must be strong enough to resist the weight of overburden in the blast furnace, which is why coking coal is so important in making steel using the conventional route. Coke from coal is grey, hard, and porous and has a heating value of 29.6 MJ/kg. Some coke-making processes produce byproducts, including coal tar, ammonia, light oils, and coal gas.
Petroleum coke (petcoke) is the solid residue obtained in oil refining, which resembles coke but contains too many impurities to be useful in metallurgical applications.
Use in foundry components
Finely ground bituminous coal, known in this application as sea coal, is a constituent of foundry sand. While the molten metal is in the mould, the coal burns slowly, releasing reducing gases at pressure, and so preventing the metal from penetrating the pores of the sand. It is also contained in 'mould wash', a paste or liquid with the same function applied to the mould before casting. Sea coal can be mixed with the clay lining (the "bod") used for the bottom of a cupola furnace. When heated, the coal decomposes and the bod becomes slightly friable, easing the process of breaking open holes for tapping the molten metal.
Alternatives to coke
Scrap steel can be recycled in an electric arc furnace; and an alternative to making iron by smelting is direct reduced iron, where any carbonaceous fuel can be used to make sponge or pelletised iron. To lessen carbon dioxide emissions hydrogen can be used as the reducing agent and biomass or waste as the source of carbon. Historically, charcoal has been used as an alternative to coke in a blast furnace, with the resultant iron being known as charcoal iron.
Gasification
Coal gasification, as part of an integrated gasification combined cycle (IGCC) coal-fired power station, is used to produce syngas, a mixture of carbon monoxide (CO) and hydrogen (H2) gas, to fire gas turbines to produce electricity. Syngas can also be converted into transportation fuels, such as gasoline and diesel, through the Fischer–Tropsch process; alternatively, syngas can be converted into methanol, which can be blended into fuel directly or converted to gasoline via the methanol-to-gasoline process. Gasification combined with Fischer–Tropsch technology was used by the Sasol chemical company of South Africa to make chemicals and motor vehicle fuels from coal.

During gasification, the coal is mixed with oxygen and steam while also being heated and pressurized. During the reaction, oxygen and water molecules oxidize the coal into carbon monoxide (CO) while also releasing hydrogen gas (H2). This has also been done in underground coal mines (underground coal gasification), and historically to make town gas, which was piped to customers to burn for illumination, heating, and cooking.
3 C (as coal) + O2 + H2O → H2 + 3 CO

If the refiner wants to produce gasoline, the syngas is routed into a Fischer–Tropsch reaction. This is known as indirect coal liquefaction. If hydrogen is the desired end-product, however, the syngas is fed into the water gas shift reaction, where more hydrogen is liberated:
CO + H2O → CO2 + H2
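The two reactions above determine the H2:CO ratio of the product gas: the gasification step yields one H2 for every three CO, and each CO passed through the water gas shift trades one CO for one additional H2. The sketch below is a simple mole-balance illustration of that bookkeeping, assuming the idealized stoichiometry shown.

```python
# Mole balance for the idealized reactions shown above:
#   gasification: 3 C + O2 + H2O -> H2 + 3 CO     (1 H2 per 3 CO)
#   shift:        CO + H2O       -> CO2 + H2      (each shifted CO becomes one H2)
def syngas_after_shift(shift_fraction: float):
    """Return (H2, CO) moles per 3 mol of carbon gasified,
    with a given fraction of the CO sent through the water gas shift."""
    h2, co = 1.0, 3.0                 # from the gasification step
    shifted = co * shift_fraction
    return h2 + shifted, co - shifted

for frac in (0.0, 0.25, 0.5, 0.75):
    h2, co = syngas_after_shift(frac)
    print(f"shift {frac:4.0%}: H2 = {h2:.2f}, CO = {co:.2f}, H2:CO = {h2 / co:.2f}")
# Fischer-Tropsch synthesis typically wants an H2:CO ratio near 2, which with this
# idealized stoichiometry corresponds to shifting a little over half of the CO.
```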
Liquefaction
Coal can be converted directly into synthetic fuels equivalent to gasoline or diesel by hydrogenation or carbonization. Coal liquefaction emits more carbon dioxide than liquid fuel production from crude oil. Mixing in biomass and using CCS would emit slightly less than the oil process, but at a high cost. The state-owned China Energy Investment runs a coal liquefaction plant and plans to build two more.

"Coal liquefaction" may also refer to the cargo hazard when shipping coal.
Production of chemicals
Chemicals have been produced from coal since the 1950s. Coal can be used as a feedstock in the production of a wide range of chemical fertilizers and other chemical products. The main route to these products was coal gasification to produce syngas. Primary chemicals that are produced directly from the syngas include methanol, hydrogen and carbon monoxide, which are the chemical building blocks from which a whole spectrum of derivative chemicals are manufactured, including olefins, acetic acid, formaldehyde, ammonia, urea and others. The versatility of syngas as a precursor to primary chemicals and high-value derivative products provides the option of using coal to produce a wide range of commodities. In the 21st century, however, the use of coal bed methane is becoming more important.

Because the slate of chemical products that can be made via coal gasification can in general also be made from feedstocks derived from natural gas and petroleum, the chemical industry tends to use whatever feedstocks are most cost-effective. Therefore, interest in using coal has tended to increase when oil and natural gas prices were higher and during periods of high global economic growth that might have strained oil and gas production.
Coal-to-chemical processes require substantial quantities of water. Much coal-to-chemical production is in China, where coal-dependent provinces such as Shanxi are struggling to control the resulting pollution.
Electricity generation
Energy density
The energy density of coal is roughly 24 megajoules per kilogram (approximately 6.7 kilowatt-hours per kg). For a coal power plant with 40% efficiency, it takes an estimated 325 kg (717 lb) of coal to power a 100 W lightbulb for one year.

27.6% of world energy was supplied by coal in 2017, and Asia used almost three-quarters of it.
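That 325 kg figure follows directly from the numbers in the paragraph above; the short check below reproduces the arithmetic.

```python
# Reproduce the lightbulb estimate from the figures quoted above.
ENERGY_DENSITY_KWH_PER_KG = 6.7   # thermal energy in coal
PLANT_EFFICIENCY = 0.40           # fraction converted to electricity
BULB_W = 100.0
HOURS_PER_YEAR = 8760.0

electricity_kwh = BULB_W / 1000.0 * HOURS_PER_YEAR           # 876 kWh of electricity
electric_kwh_per_kg = ENERGY_DENSITY_KWH_PER_KG * PLANT_EFFICIENCY
coal_kg = electricity_kwh / electric_kwh_per_kg
print(f"Coal required: about {coal_kg:.0f} kg per year")     # ~327 kg, matching ~325 kg
```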
Precombustion treatment
Refined coal is the product of a coal-upgrading technology that removes moisture and certain pollutants from lower-rank coals such as sub-bituminous and lignite (brown) coals. It is one form of several precombustion treatments and processes for coal that alter coal's characteristics before it is burned. Thermal efficiency improvements are achievable by improved pre-drying (especially relevant with high-moisture fuel such as lignite or biomass). The goals of precombustion coal technologies are to increase efficiency and reduce emissions when the coal is burned. Precombustion technology can sometimes be used as a supplement to postcombustion technologies to control emissions from coal-fueled boilers.
Power plant combustion
Coal burnt as a solid fuel in coal power stations to generate electricity is called thermal coal. Coal is also used to produce very high temperatures through combustion. Early deaths due to air pollution have been estimated at 200 per GW-year; they may be higher around power plants where scrubbers are not used, and lower for plants far from cities. Efforts around the world to reduce the use of coal have led some regions to switch to natural gas and electricity from lower-carbon sources.
When coal is used for electricity generation, it is usually pulverized and then burned in a furnace with a boiler (see also pulverized coal-fired boiler). The furnace heat converts boiler water to steam, which is then used to spin turbines that turn generators and create electricity. The thermodynamic efficiency of this process varies between about 25% and 50% depending on the pre-combustion treatment, the turbine technology (e.g. supercritical steam generators) and the age of the plant.

A few integrated gasification combined cycle (IGCC) power plants have been built, which burn coal more efficiently. Instead of pulverizing the coal and burning it directly as fuel in the steam-generating boiler, the coal is gasified to create syngas, which is burned in a gas turbine to produce electricity, just as natural gas is burned in a turbine. Hot exhaust gases from the turbine are used to raise steam in a heat recovery steam generator, which powers a supplemental steam turbine. The overall plant efficiency when used to provide combined heat and power can reach as much as 94%. IGCC power plants emit less local pollution than conventional pulverized coal-fueled plants; however, the technology for carbon capture and storage (CCS) after gasification and before burning has so far proved too expensive to use with coal. Other ways to use coal are as a coal-water slurry fuel (CWS), which was developed in the Soviet Union, or in an MHD topping cycle. However, these are not widely used because they are unprofitable.
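The efficiency gain from a combined cycle can be illustrated with a simple cascade calculation, in which the steam cycle recovers part of the heat rejected by the gas turbine. The turbine efficiencies below are illustrative assumptions rather than figures from the text, so the result is only indicative.

# Illustrative combined-cycle efficiency. The efficiencies are assumed
# example values, not measured plant data.

gas_turbine_eff = 0.40   # fraction of fuel energy converted by the gas turbine
steam_cycle_eff = 0.35   # fraction of the rejected heat recovered by the steam turbine

combined_eff = gas_turbine_eff + (1 - gas_turbine_eff) * steam_cycle_eff
print(f"Combined-cycle electrical efficiency: {combined_eff:.0%}")  # ~61%

# If the remaining reject heat is also delivered as useful heating
# (combined heat and power), total fuel utilization can approach the
# ~90%+ figures quoted above for electrical-plus-thermal output.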
In 2017, 38% of the world's electricity came from coal, the same percentage as 30 years previously. In 2018, global installed capacity was 2 TW (of which 1 TW was in China), 30% of total electricity generation capacity. The most dependent major country is South Africa, with over 80% of its electricity generated by coal, but China alone generates more than half of the world's coal-generated electricity.

Maximum use of coal was reached in 2013. In 2018, coal-fired power stations averaged a capacity factor of 51%; that is, they operated for about half their available operating hours.
Coal industry
Mining
About 8000 Mt of coal are produced annually, about 90% of which is hard coal and 10% lignite. As of 2018 just over half is from underground mines. More accidents occur during underground mining than surface mining. Not all countries publish mining accident statistics so worldwide figures are uncertain, but it is thought that most deaths occur in coal mining accidents in China: in 2017 there were 375 coal mining related deaths in China. Most coal mined is thermal coal (also called steam coal as it is used to make steam to generate electricity) but metallurgical coal (also called "metcoal" or "coking coal" as it is used to make coke to make iron) accounts for 10% to 15% of global coal use.
As a traded commodity
China mines almost half the world's coal, followed by India with about a tenth. Australia accounts for about a third of world coal exports, followed by Indonesia and Russia, while the largest importers are Japan and India.
The price of metallurgical coal is volatile and much higher than the price of thermal coal because metallurgical coal must be lower in sulfur and requires more cleaning. Coal futures contracts provide coal producers and the electric power industry an important tool for hedging and risk management.
In some countries new onshore wind or solar generation already costs less than coal power from existing plants.
However, for China this is forecast for the early 2020s, and for Southeast Asia not until the late 2020s. In India, building new plants is uneconomic and, despite being subsidized, existing plants are losing market share to renewables.
Market trends
Of the countries which produce coal, China mines by far the most, almost half the world's coal, followed by India with less than 10%. China is also by far the largest consumer, so market trends depend on Chinese energy policy. Although the effort to reduce pollution means that the global long-term trend is to burn less coal, the short- and medium-term trends may differ, in part due to Chinese financing of new coal-fired power plants in other countries.
Major producers
Countries with annual production higher than 300 million tonnes are shown.
Major consumers
Countries with annual consumption higher than 500 million tonnes are shown. Shares are based on data expressed in tonnes oil equivalent.
Major exporters
Exporters are at risk of a reduction in import demand from India and China.
Major importers
Damage to human health
The use of coal as fuel causes ill health and deaths. Mining and processing of coal cause air and water pollution. Coal-fired power plants emit nitrogen oxides, sulfur dioxide, particulate pollution and heavy metals, which adversely affect human health. Extracting coal bed methane is important for avoiding mining accidents.
The deadly London smog was caused primarily by the heavy use of coal. Globally, coal is estimated to cause 800,000 premature deaths every year, mostly in India and China.

Burning coal is a major emitter of sulfur dioxide, which creates PM2.5 particulates, the most dangerous form of air pollution. Coal smokestack emissions cause asthma, strokes, reduced intelligence, artery blockages, heart attacks, congestive heart failure, cardiac arrhythmias, mercury poisoning, arterial occlusion, and lung cancer. Annual health costs in Europe from the use of coal to generate electricity are estimated at up to €43 billion.

In China, improvements to air quality and human health would increase with more stringent climate policies, mainly because the country's energy is so heavily reliant on coal, and there would be a net economic benefit. A 2017 study in the Economic Journal found that for Britain during the period 1851–1860, "a one standard deviation increase in coal use raised infant mortality by 6–8% and that industrial coal use explains roughly one-third of the urban mortality penalty observed during this period."

Breathing in coal dust causes coalworker's pneumoconiosis, or "black lung", so called because the coal dust literally turns the lungs black from their usual pink color. In the US alone, it is estimated that 1,500 former employees of the coal industry die every year from the effects of breathing in coal mine dust.

Use of coal generates hundreds of millions of tons of ash and other waste products every year. These include fly ash, bottom ash, and flue-gas desulfurization sludge, which contain mercury, uranium, thorium, arsenic, and other heavy metals, along with non-metals such as selenium. Around 10% of coal is ash; coal ash is hazardous and toxic to human beings and some other living things. Coal ash contains the radioactive elements uranium and thorium. Coal ash and other solid combustion byproducts are stored locally and escape in various ways that expose those living near coal plants to radiation and environmental toxic substances.
Damage to the environment
Coal mining, coal combustion wastes and flue gas cause major environmental damage.

Water systems are affected by coal mining. For example, mining affects groundwater and water table levels and acidity. Spills of fly ash, such as the Kingston Fossil Plant coal fly ash slurry spill, can also contaminate land and waterways and destroy homes. Power stations that burn coal also consume large quantities of water, which can affect the flows of rivers and has consequential impacts on other land uses. In areas of water scarcity, such as the Thar Desert in Pakistan, coal mining and coal power plants would use significant quantities of water.

One of the earliest known impacts of coal on the water cycle was acid rain. In 2014 approximately 100 Tg S of sulfur dioxide (SO2) was released, over half of which was from burning coal. After release, the sulfur dioxide is oxidized to H2SO4, which scatters solar radiation, so its increase in the atmosphere exerts a cooling effect on climate that masks some of the warming caused by increased greenhouse gases. However, the sulfur is precipitated out of the atmosphere as acid rain in a matter of weeks, whereas carbon dioxide remains in the atmosphere for hundreds of years. Release of SO2 also contributes to the widespread acidification of ecosystems.

Disused coal mines can also cause issues. Subsidence can occur above tunnels, damaging infrastructure or cropland. Coal mining can also cause long-lasting fires, and it has been estimated that thousands of coal seam fires are burning at any given time. For example, Brennender Berg has been burning since 1668 and is still burning in the 21st century.

The production of coke from coal produces ammonia, coal tar, and gaseous compounds as byproducts, which can pollute the environment if discharged to land, air or waterways. The Whyalla steelworks is one example of a coke-producing facility where liquid ammonia was discharged to the marine environment.
Emission intensity
Emission intensity is the greenhouse gas emitted over the life of a generator per unit of electricity generated. The emission intensity of coal power stations is high, at around 1000 g of CO2eq for each kWh generated, while natural gas plants have a medium emission intensity of around 500 g CO2eq per kWh. The emission intensity of coal varies with coal type and generator technology, and exceeds 1200 g per kWh in some countries.
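The roughly 1000 g CO2 per kWh figure for coal can be reproduced from typical fuel properties. The heating value, carbon content and plant efficiency below are assumed representative values rather than data from the text, and real plants vary; upstream (mining and transport) emissions are also ignored.

# Rough direct emission intensity of a coal plant from typical fuel properties.
# Assumptions: 24 MJ/kg heating value, 70% carbon by mass, 40% plant efficiency.

heating_value_mj_per_kg = 24
carbon_fraction = 0.70
plant_efficiency = 0.40

electricity_kwh_per_kg = heating_value_mj_per_kg * plant_efficiency / 3.6  # ~2.67 kWh per kg coal
co2_kg_per_kg_coal = carbon_fraction * 44 / 12                             # ~2.57 kg CO2 per kg coal

intensity_g_per_kwh = co2_kg_per_kg_coal / electricity_kwh_per_kg * 1000
print(f"Emission intensity: {intensity_g_per_kwh:.0f} g CO2/kWh")          # roughly 960

Lower-rank coals, older plants and lower efficiencies push the result above 1000 g per kWh, consistent with the range quoted above.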
Underground fires
Thousands of coal fires are burning around the world. Those burning underground can be difficult to locate and many cannot be extinguished. Fires can cause the ground above to subside, their combustion gases are dangerous to life, and breaking out to the surface can initiate surface wildfires. Coal seams can be set on fire by spontaneous combustion or contact with a mine fire or surface fire, and lightning strikes are an important source of ignition. The coal continues to burn slowly back into the seam until oxygen (air) can no longer reach the flame front. A grass fire in a coal area can set dozens of coal seams on fire. Coal fires in China burn an estimated 120 million tons of coal a year, emitting 360 million metric tons of CO2, amounting to 2–3% of the annual worldwide production of CO2 from fossil fuels. In Centralia, Pennsylvania (a borough located in the Coal Region of the U.S.), an exposed vein of anthracite ignited in 1962 due to a trash fire in the borough landfill, located in an abandoned anthracite strip mine pit. Attempts to extinguish the fire were unsuccessful, and it continues to burn underground to this day. The Australian Burning Mountain was originally believed to be a volcano, but its smoke and ash come from a coal fire that has been burning for some 6,000 years.

At Kuh i Malik in the Yagnob Valley, Tajikistan, coal deposits have been burning for thousands of years, creating vast underground labyrinths full of unique minerals, some of them very beautiful.
The reddish siltstone rock that caps many ridges and buttes in the Powder River Basin in Wyoming and in western North Dakota is called porcelanite, which resembles the coal-burning waste "clinker" or volcanic "scoria". Clinker is rock that has been fused by the natural burning of coal. In the Powder River Basin approximately 27 to 54 billion tons of coal have burned within the past three million years. Wild coal fires in the area were reported by the Lewis and Clark Expedition as well as by explorers and settlers in the area.
Climate change
The largest and most long-term effect of coal use is the release of carbon dioxide, a greenhouse gas that causes climate change. Coal-fired power plants were the single largest contributor to the growth in global CO2 emissions in 2018, accounting for 40% of total fossil fuel emissions and more than a quarter of total emissions. Coal mining can also emit methane, another greenhouse gas.

In 2016, world gross carbon dioxide emissions from coal usage were 14.5 gigatonnes. For every megawatt-hour generated, coal-fired electric power generation emits around a tonne of carbon dioxide, double the approximately 500 kg of carbon dioxide released by a natural gas-fired electric plant. In 2013, the head of the UN climate agency advised that most of the world's coal reserves should be left in the ground to avoid catastrophic global warming. To keep global warming below 1.5 °C or 2 °C, hundreds, or possibly thousands, of coal-fired power plants will need to be retired early.
Pollution mitigation
Standards
Local pollution standards include GB13223-2011 (China), national emission standards in India, the Industrial Emissions Directive (EU) and the Clean Air Act (United States).
Satellite monitoring
Satellite monitoring is now used to cross-check national data; for example, Sentinel-5 Precursor has shown that Chinese control of SO2 has been only partially successful. It has also revealed that low use of technology such as selective catalytic reduction (SCR) has resulted in high NO2 emissions in South Africa and India.
Combined cycle power plants
A few integrated gasification combined cycle (IGCC) coal-fired power plants have been built. Although they burn coal more efficiently and therefore emit less pollution, the technology has generally not proved economically viable for coal, except possibly in Japan, although this is disputed.
Carbon capture and storage
Carbon capture and storage is still being intensively researched and is considered economically viable for some uses other than coal. It has been tested at the Petra Nova and Boundary Dam coal-fired power plants and found to be technically feasible but not economically viable for use with coal, due to reductions in the cost of solar PV technology.
Economics
In 2018 US$80 billion was invested in coal supply but almost all for sustaining production levels rather than opening new mines.
In the long term coal and oil could cost the world trillions of dollars per year. Coal alone may cost Australia billions, whereas costs to some smaller companies or cities could be on the scale of millions of dollars. The economies most damaged by coal (via climate change) may be India and the US, as they are the countries with the highest social cost of carbon. Bank loans to finance coal are a risk to the Indian economy.

China is the largest producer of coal in the world. It is the world's largest energy consumer, and coal in China supplies 60% of its primary energy. However, two fifths of China's coal power stations are estimated to be loss-making.

Air pollution from coal storage and handling costs the US almost 200 dollars for every extra ton stored, due to PM2.5. Coal pollution costs Europe €43 billion each year. Measures to cut air pollution benefit individuals financially and the economies of countries such as China.
Subsidies
Subsidies for coal in 2021 have been estimated at US$19 billion, not including electricity subsidies, and are expected to rise in 2022. As of 2019 G20 countries provide at least US$63.9 billion of government support per year for the production of coal, including coal-fired power: many subsidies are impossible to quantify but they include US$27.6 billion in domestic and international public finance, US$15.4 billion in fiscal support, and US$20.9 billion in state-owned enterprise (SOE) investments per year. In the EU state aid to new coal-fired plants is banned from 2020, and to existing coal-fired plants from 2025. As of 2018, government funding for new coal power plants was supplied by Exim Bank of China, the Japan Bank for International Cooperation and Indian public sector banks. Coal in Kazakhstan was the main recipient of coal consumption subsidies totalling US$2 billion in 2017. Coal in Turkey benefited from substantial subsidies in 2021.
Stranded assets
Some coal-fired power stations could become stranded assets, for example China Energy Investment, the world's largest power company, risks losing half its capital. However, state-owned electricity utilities such as Eskom in South Africa, Perusahaan Listrik Negara in Indonesia, Sarawak Energy in Malaysia, Taipower in Taiwan, EGAT in Thailand, Vietnam Electricity and EÜAŞ in Turkey are building or planning new plants. As of 2021 this may be helping to cause a carbon bubble which could cause financial instability if it bursts.
Politics
Countries building or financing new coal-fired power stations, such as China, India, Indonesia, Vietnam, Turkey and Bangladesh, face mounting international criticism for obstructing the aims of the Paris Agreement. In 2019, the Pacific Island nations (in particular Vanuatu and Fiji) criticized Australia for failing to cut its emissions at a faster rate than they were, citing concerns about coastal inundation and erosion. In May 2021, the G7 members agreed to end new direct government support for international coal power generation.
Opposition to coal pollution was one of the main reasons the modern environmental movement started in the 19th century.
Transition away from coal
In order to meet global climate goals and provide power to those who do not currently have it, coal power must be reduced from nearly 10,000 TWh to less than 2,000 TWh by 2040. Phasing out coal has short-term health and environmental benefits which exceed the costs, but some countries still favor coal, and there is much disagreement about how quickly it should be phased out. However, many countries, including members of the Powering Past Coal Alliance, have already transitioned away from coal or are doing so; the largest transition announced so far is in Germany, which is due to shut down its last coal-fired power station between 2035 and 2038. Some countries use the idea of a "just transition", for example by using some of the benefits of transition to provide early pensions for coal miners. However, low-lying Pacific Islands are concerned that the transition is not fast enough and that they will be inundated by sea level rise, so they have called for OECD countries to completely phase out coal by 2030 and other countries by 2040. In 2020, although China built some plants, globally more coal power was retired than built; the UN Secretary General has also said that OECD countries should stop generating electricity from coal by 2030 and the rest of the world by 2040. Phasing down coal was agreed at COP26 in the Glasgow Climate Pact. Vietnam is among the few coal-dependent developing countries that have pledged to phase out unabated coal power by the 2040s or as early as possible thereafter.
Peak coal
Switch to cleaner fuels and lower carbon electricity generation
Coal-fired generation emits about twice as much carbon dioxide per megawatt hour generated (around a tonne) as electricity generated by burning natural gas (around 500 kg of greenhouse gas per megawatt hour). In addition to generating electricity, natural gas is also popular in some countries for heating and as an automotive fuel.
The use of coal in the United Kingdom declined as a result of the development of North Sea oil and the subsequent dash for gas during the 1990s. In Canada some coal power plants, such as the Hearn Generating Station, switched from coal to natural gas. In 2017, coal power in the US provided 30% of the electricity, down from approximately 49% in 2008, due to plentiful supplies of low cost natural gas obtained by hydraulic fracturing of tight shale formations.
Coal regions in transition
Some coal-mining regions are highly dependent on coal.
Employment
Some coal miners are concerned their jobs may be lost in the transition. A just transition from coal is supported by the European Bank for Reconstruction and Development.
Bioremediation
The white rot fungus Trametes versicolor can grow on and metabolize naturally occurring coal. The bacterium Diplococcus has been found to degrade coal, raising its temperature.
Cultural usage
Coal is the official state mineral of Kentucky, and the official state rock of Utah and West Virginia. These US states have a historic link to coal mining.
Some cultures hold that children who misbehave will receive only a lump of coal from Santa Claus for Christmas in their stockings instead of presents.
It is also customary and considered lucky in Scotland and the North of England to give coal as a gift on New Year's Day. This occurs as part of first-footing and represents warmth for the year to come.
See also
Notes
References
Sources
Gençsü, Ipek (June 2019). "G20 coal subsidies" (PDF). Overseas Development Institute. Archived from the original (PDF) on 31 August 2020. Retrieved 26 June 2019.
Further reading
Freese, Barbara (2003). Coal: A Human History. Penguin Books. ISBN 978-0-7382-0400-0. OCLC 51449422.
Thurber, Mark (2019). Coal. Polity Press. ISBN 978-1509514014.
Paxman, Jeremy (2022). Black Gold : The History of How Coal Made Britain. William Collins. ISBN 9780008128364.
External links
Coal Transitions
World Coal Association
Coal – International Energy Agency
Coal Online – International Energy Agency (archived 19 January 2008)
CoalExit
European Association for Coal and Lignite
Coal news and industry magazine
Global Coal Plant Tracker
Centre for Research on Energy and Clean Air
"Coal" . Encyclopædia Britannica. Vol. 6 (11th ed.). 1911. pp. 574–93.
"Coal" . New International Encyclopedia. 1905.
"Coal" . Collier's New Encyclopedia. 1921. |